The Tank Battler: The Wrap Up

Results are… varied. And by varied, I mean, well… let me explain the project further.

First off, it was overscoped. Not anyone’s fault honestly; it was a really cool idea: a variety of players, each with a team of tanks to drive around and fight each other. But it took second priority to capstone (the student who programmed it was on one of the capstone teams, so I’m sure he had trouble finding the time to do both), and it didn’t have a designer looking over it.

We’ve been told this project will take a hiatus for a year, and may just get scrapped altogether, or get moved to Unity, which would fix a lot of the technical issues and make it easier on us developers.

That said, results came back varied because of a number of the above issues, mainly with people’s DLLs hitting breakpoints or otherwise not working, and the fact that with just a few people actually able to play, it was difficult to determine an exact winner (on top of a scoring system few of us paid attention to at all).

I didn’t go overboard on mine. It does well for what I made it do, which was mainly to pathfind around. It moves around the map to random coordinates, mainly with the intention of getting away from the others. The idea being that if I could stall the match out and back myself into a corner, I might be able to take them down before they took me down.

I kind of wish I had gone more into it, but between the system problems (the unit circle was backwards and it was so so awful to work with, but that is apparently just part of SDL (which in itself baffles me)) and capstone, my drive for this final project was lower than it normally is.

Looking back at the semester, I’m still super happy with how minesweeper turned out, and I think my Gin Rummy did better than what it “scored,” so overall, on the two more structured projects, I did fairly well.

I think the class can only get better, and because our professor is really good at listening to feedback (both for this class and, since he’s also the head of the game programming division, hopefully for the curriculum as a whole down the line), I think it turned out alright.

Our entire class has just kind of been a step behind in its own right. We learned Flash, which got phased out for Unity the literal semester after us, and we had an entire class on XNA, which I doubt I’ll ever touch again, but in the end we’ve still put out some of the best work this college has seen to date. This should probably go in my postmortem for capstone, so I’ll put it there as well.

I guess to wrap up, all in all, it went well. Little bumpy at the end, but a good semester. And when it comes down to it, I don’t actually mind being a guinea pig, as long as the future will get better for it.

The Tank Battler, Combat: Prep

As stated in the last post, our Artificial Opponents class is wrapping up with an in-house tank battler.

We already completed the movement part of that (kind of). Honestly, Capstone’s been eating away at most of our time (“our” being the collective class, including myself), so the quality of my movement AI, and others’, was significantly weaker than it would’ve normally been.

Luckily (or unluckily, depending if our Capstone game makes it through), the final presentation is next week, so we’ll have more time to work on Artificial Opponents as Capstone moves to the backburner for the end of the semester.

As for the actual tank game, and plans – well, the first step is to fix the movement and actually get it working properly. That’ll be critical.

For the combat section, I figure I’ll be casting map-wide lines to check for friendly/enemy tanks in front of me. Fire if enemy, don’t fire otherwise.
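As a rough sketch of that line-cast idea, here’s a hedged, self-contained mock. Everything here is a stand-in I made up for illustration (SimTank, shouldFire, the hit radius, the map size), not the actual game’s API:

```cpp
#include <cmath>
#include <vector>

// Hypothetical sketch of the "map-wide line" check: step along my facing
// direction and see whether the first tank the ray passes near is an enemy.
// The tank struct, fields, and hit radius are all made up for illustration.
struct SimTank { float x, y; bool friendly; };

bool shouldFire(float myX, float myY, float facingRadians,
                const std::vector<SimTank>& others,
                float mapSize = 100.0f, float hitRadius = 1.0f)
{
    float dx = std::cos(facingRadians), dy = std::sin(facingRadians);
    for (float t = 0.0f; t < mapSize; t += 0.5f)   // march the ray forward
    {
        float px = myX + dx * t, py = myY + dy * t;
        for (const SimTank& tank : others)
        {
            float ddx = tank.x - px, ddy = tank.y - py;
            if (ddx * ddx + ddy * ddy <= hitRadius * hitRadius)
                return !tank.friendly;   // first tank on the line decides
        }
    }
    return false;   // nothing in front of me, hold fire
}
```

The key property is that the first tank along the ray decides, so a friendly tank blocking my line of fire correctly stops the shot.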

I was imagining I’d have all 4 tanks line up in a diamond-square formation and rotate as a massive supertank, covering all four 90-degree directions; that way the time to rotate to face any target is automatically cut to a quarter. It’s a big demand, and I don’t know if I’ll be able to pull it off, especially if two tanks at different angles demand my attention and I need to break mine out of the supertank formation, destroy their enemies, then reform and continue moving.

I haven’t yet decided if this strategy will be best for attack or defense, but that will depend on how the scoring system is determined. If it’s best to go out, destroy tanks, pick up cash and spend it for better parts, I’ll go for that loop. If sitting in a corner and killing anyone who gets near using a spider-like supertank works best, I’ll go for that.

Only time will tell really, but for now, it’s all coming up Capstone, gotta get back to that. Until next post.

The Tank Battler, Movement: Prep

The final project for this semester’s Artificial Opponents class is a custom-made tank battler, wherein you control 4 tanks all at once and attempt to take out 3 other teams of tanks on the map (for a total of 16 roaming tanks).

For the first part, we have been tasked with getting the tanks to move. This ends up being more complicated than it sounds because the tanks don’t move on a standard “I want to go from position A to position B,” where you just follow the path. Instead, the tanks move using two treads, which each have a power level that you alter.

Turning the tank requires uneven amounts of power in the treads; either way, it will take experimental time to nail down how much power is required in each tread to get the tank to move as I want.

The plan as stands is to use A* for pathfinding. It’s a popular, strong, and simple pathfinding algorithm to use, and because the tank battler as a whole also has things like shooting and buying equipment, I don’t want to get bogged down in just the movement system. For this first part, movement is all that is necessary.
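For reference, the flavor of A* I have in mind looks roughly like this on a plain grid (Manhattan heuristic, 4-way movement). The grid layout and blocked-cell format here are placeholders, not the tank game’s actual map format:

```cpp
#include <algorithm>
#include <climits>
#include <cstdlib>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Minimal grid A* sketch: cells are indices into a w*h grid, blocked[i] != 0
// means impassable, and the returned path runs from start to goal inclusive
// (assumes the goal is reachable).
std::vector<int> aStar(const std::vector<int>& blocked, int w, int h, int start, int goal)
{
    auto heur = [&](int i) {   // Manhattan distance to the goal
        return std::abs(i % w - goal % w) + std::abs(i / w - goal / w);
    };
    std::vector<int> g(w * h, INT_MAX), from(w * h, -1);
    // min-heap of (f-score, cell index); lazy deletion of stale entries
    std::priority_queue<std::pair<int, int>, std::vector<std::pair<int, int>>,
                        std::greater<>> open;
    g[start] = 0;
    open.push({ heur(start), start });
    while (!open.empty())
    {
        int cur = open.top().second; open.pop();
        if (cur == goal) break;
        int x = cur % w, y = cur / w;
        int dirs[4][2] = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        for (auto& d : dirs)
        {
            int nx = x + d[0], ny = y + d[1];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            int n = ny * w + nx;
            if (blocked[n]) continue;
            if (g[cur] + 1 < g[n])   // relax the edge (uniform cost of 1)
            {
                g[n] = g[cur] + 1;
                from[n] = cur;
                open.push({ g[n] + heur(n), n });
            }
        }
    }
    std::vector<int> path;   // walk the parent chain back from the goal
    for (int i = goal; i != -1; i = from[i]) path.push_back(i);
    std::reverse(path.begin(), path.end());
    return path;
}
```

On a 3×3 open grid from corner 0 to corner 8, this produces a 5-cell path (4 moves), which is the Manhattan optimum.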

As for determining treads, I figure I’ll create a function that takes in the degrees that the tank wants to turn, and from there factors that into two workable tread power levels for a period of time. To turn 90 degrees to the right, I’d want to power the left tread forward at full and the right tread backward at half. Or so I think.
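A rough sketch of that function, with entirely hypothetical numbers (the 90-degree normalization and the half-power value are exactly the kind of thing that will need the experimental tuning mentioned above):

```cpp
#include <algorithm>
#include <utility>

// Hypothetical sketch: map a desired turn (degrees, positive = right) to a
// (leftTread, rightTread) power pair in [-1, 1]. The scale factors are
// guesses that would need experimental tuning against the real tank physics.
std::pair<float, float> treadPowersForTurn(float degrees)
{
    // Normalize the turn demand into a steering value in [-1, 1].
    float steer = std::clamp(degrees / 90.0f, -1.0f, 1.0f);
    if (steer > 0.0f)        // turn right: left tread forward, right tread back
        return { 1.0f, -0.5f * steer };
    if (steer < 0.0f)        // turn left: mirror image
        return { 0.5f * steer, 1.0f };
    return { 1.0f, 1.0f };   // no turn: both treads full forward
}
```

So a 90-degree right turn gives full forward on the left and half reverse on the right, matching the guess above.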

As of now I don’t see a reason not to make turning as fast as possible. If I can manage to spin my tanks 180 degrees in a quarter of a second, I would imagine that gives me a tactical advantage over doing it over 2 seconds. But if I manage to do that, others will too, so it might determine how I create the AI later, when shooting and inventory come into play.

Anyhow, next up is to actually create it. See you in the report post.

Playing Gin Rummy: The Wrap Up

Like the Minesweeper post, this won’t be nearly as fun as the “This was my process” post. Instead, it’s just a postmortem on how I thought it all went.

This time I didn’t do as well as I’d hoped. It’s hard to rank the AIs, because we did a double elimination bracket system, so I didn’t get to play against everyone, but there were a few consistent factors.

My AI was roughly at the higher end of the middle of the pack. I could beat most AIs, but a few people had AIs that were consistently better. I actually lost to the same person in both rounds of my losses, in very close matches that could have gone either way; the round where I got knocked out could have gone to either of us right up to the end, it was that close.

There were some AIs that played less conservatively with their cards and would remove high cards from their hand very fast. In cases where they would get Gin, my AI might be sitting on a few dozen points. Needless to say, those AIs generally beat mine.

Compared to AIs that had a similar strategy to mine, it did generally well. A few took an approach much like my own, doing things like removing pairs of Kings and keeping low-cost single cards.

Alan once again was the definitive winner. His AI never lost a single match against anyone else (for clarification, a match was 40 games). I asked what he did, and he used a similar base to my own, but improved it with weights on discarding certain cards. As he fine-tuned the weights, the AI got better.

I do wish I had added that myself, but time was unfortunately sparse, the capstone project consuming the majority of my time.

Either way, the next project is a custom made Tank Game, which is thoroughly complex (to code, it’s fairly simple to actually play), so that will be an interesting challenge. Because it’s custom made, right now it’s got a lot of bugs. Certainly going to be interesting with all the fixes coming in throughout. Until then.

Playing Gin Rummy: The Good Part


First off, since the last post, a new condition for the game was added: passing. Passing occurs right after the hands are dealt but before the first player makes their move, wherein both players trade a single card to each other.

Now then, much like with Minesweeper, all the AI logic used here is entirely my own, which feels more natural here given that Minesweeper makes heavy use of algorithms, whereas Gin is more of a weighted system for determining which card you want to pick up or discard.

I won’t include a results section, as there’s no real test I can run purely against my own AI. Every time I want to play, I have to play against another classmate’s AI (or the really awful default AI), and my success can only be measured in comparison to theirs.

That said, my initial testing was fairly positive (and I tested against every single AI available to me). I did very well against most people, and measured specifically against two individuals: Alan, who took first place in Minesweeper, and James, who placed first in the initial dry run (which, to be fair, was very preliminary; many AIs were not ready at the time). Alan’s AI is all over the place, and I’m not sure why. From one series of games to the next it would go from doing very well to doing very poorly, with no set pattern. James was more consistent, but in my favor; I was able to beat him 4 out of 5 times.

The Process

For this post, partly because of length, but mainly because posting code feels rather pointless (it’s our professor’s custom-built engine, so while readable, copy-pasting would be useless), I’m just going to cover the step-by-step process I take through playing Gin.

Disclaimer: I will include parts that are not in use (but that might’ve been, had I improved the AI further). This AI is not fully optimized: it does not take probability into account, nor does it find pairs among two separated cards (the 5 of spades and 7 of spades are not considered special by the AI unless the 6 of spades appears).

Notable Variables, Structs, & Functions


  • UINT myPlayerNum, keeps track of the player number (wow who could’ve guessed)
  • bool justReset, boolean for determining when the game was just reset, gets turned on once and then off in the first draw phase after an update
  • vector<Card> hasBeenSeen, a master list of all cards seen (Does update, but is not used; could be for probability)
  • vector<SetOrRun> lockedSORs, a list of all sets or runs, all cards in this list are not allowed to leave the hand
  • vector<Pair> heldPairs, a list of all pairs (both of potential runs and sets), cards in this list are only discarded if there are no “single” cards
  • vector<PCard> handPriority, a list of all cards in hand, each is given a priority number of 0, 1, or 2 (high to low priority)


  • Pair, keeps track of 2 cards, whether they’re the start of a set or run, and overloads the == operator to compare pairs
  • SetOrRun, keeps track of 3+ cards, whether they’re part of a set or run, and overloads the == operator to compare SORs
  • PCard, a struct that holds a card and a priority number


  • bool completesOrAddsToSetOrRun(Card c), determines whether a given card completes or adds to a pair or a SOR, true if it does
  • void cullDiscardFromPairs(Card c), removes all pairs that contain the discarded card from heldPairs
  • Card getCardToDiscard(), browses the hand and finds the highest priority card (a 2 if possible) with the highest value
  • void setHandPriority(Hand h), adds all cards in hand to handPriority with an associated priority number, clears the previous handPriority list when called
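To give a feel for how handPriority and getCardToDiscard interact, here’s a hedged stand-in version. SketchCard and SketchPCard are illustrative types, not the engine’s, and I’m using raw rank as a stand-in for the card’s point value:

```cpp
#include <vector>

// Sketch of the priority scheme described above: priority 0 = locked set/run,
// 1 = held pair, 2 = free single. Discard the card at the loosest (largest)
// priority available, breaking ties toward the highest rank.
struct SketchCard { int rank; int suit; };
struct SketchPCard { SketchCard card; int priority; };

SketchCard getCardToDiscard(const std::vector<SketchPCard>& handPriority)
{
    const SketchPCard* best = nullptr;
    for (const SketchPCard& pc : handPriority)
    {
        if (best == nullptr
            || pc.priority > best->priority
            || (pc.priority == best->priority && pc.card.rank > best->card.rank))
            best = &pc;
    }
    return best->card;   // assumes a non-empty hand
}
```

The real version also has to respect deadwood point values (face cards capping at 10), but the shape of the search is the same: find a priority-2 card if possible, and among those throw away the most expensive one.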

Step by Step

  1. Upon resetting the game
    1. Clear all lists (hasBeenSeen, lockedSORs, heldPairs, handPriority)
    2. Add the discard to hasBeenSeen
    3. Set justReset to true
  2. Determine passing step
    1. updateOnReset, which:
      1. Sets the player number
      2. Adds your hand to hasBeenSeen
      3. Finds any sets or runs in hand and adds them to lockedSORs
      4. Finds any pairs and adds them to heldPairs (includes pairs that are a part of SORs)
      5. Removes any pairs that are in SORs (so as to not trick your own system)
      6. Set hand priority
    2. Get the card to discard
    3. Cull the discard card from pairs
    4. Return the chosen card
  3. When it comes to your turn, determine whether you’ll draw from the deck or discard
    1. Update, which does everything that updateOnReset does except for setting the player number. However, it also checks for duplicates on each list, and does not add any cards to hasBeenSeen, pairs to heldPairs, or SORs to lockedSORs if they already exist within the list. It also ensures that if pickedUpInitialDiscard is true, we do not access the top of the discard (because there is none)
    2. If completesOrAddsToSetOrRun is true, take the discard. Otherwise take the deck
    3. Flip a few reset variables (will only happen once)
      1. If justReset is true and you chose the discard, flip pickedUpInitialDiscard to true. This was a fix for this particular system: it avoids the issue in the next update wherein you’d check the top card of the discard while it’s empty (because you just picked it up), which would throw an error
      2. If justReset is false and pickedUpInitialDiscard is true, flip pickedUpInitialDiscard back to false
      3. If justReset is true, flip it to false
    4. Draw your card
  4. Now you must discard
    1. Update to redo calculations for your newly drawn card
    2. Get the card to discard
    3. Remove that card from your hand and score your hand. If you are under the knock threshold (our default value is 10), take the knock (set it to true)
    4. Cull the discard from pairs
    5. Discard your card
  5. Repeat steps 3 and 4 until the game ends
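As a rough illustration of the completesOrAddsToSetOrRun check that drives the draw decision in step 3, here’s a stand-in sketch. GCard and GPair are made-up types, and this only handles the held-pair case, not cards that extend existing SORs:

```cpp
#include <algorithm>
#include <vector>

// Hedged sketch: a drawn card is worth taking if it turns a held pair into a
// set (same rank) or extends a run pair (same suit, adjacent rank).
struct GCard { int rank; int suit; };
struct GPair { GCard a, b; bool isRunStart; };   // isRunStart: consecutive same-suit pair

bool completesOrAddsToSetOrRun(const GCard& c, const std::vector<GPair>& heldPairs)
{
    for (const GPair& p : heldPairs)
    {
        if (!p.isRunStart && c.rank == p.a.rank)   // third card of a set
            return true;
        if (p.isRunStart && c.suit == p.a.suit
            && (c.rank == std::min(p.a.rank, p.b.rank) - 1
             || c.rank == std::max(p.a.rank, p.b.rank) + 1))   // extends the run
            return true;
    }
    return false;
}
```

So with a held 5♠/6♠ run pair, a 4♠ or 7♠ on the discard gets taken; an 8♠ or a 7 of another suit does not.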

It’s a lot more compact than I expected to write, but there’s a lot of backend functionality that doesn’t warrant a spot on the list.

A lot goes on to actually run the base update function, which is where most of the information cataloging is done: functions that determine sets and runs from your hand, find pairs, check for duplicates among lists, check the best form of scoring (whether it comes from taking the 5 of hearts, 5 of diamonds, and 5 of clubs, or the 5 of clubs, 6 of clubs, and 7 of clubs as your set/run), and break apart a list of cards returned as SORs into the actual SOR struct.

The decision-making functions, which get the card to discard or check whether a card completes a set or run, end up using those lists of information to decide whether they want a given card or not.

Final Thoughts

I predicted it last time, and I’ll be bold and predict it again: I think my AI will do well. I’m quietly hoping that it will place first this time, but just as Alan dark horsed his way in last time, there’s always the possibility that someone will have a strong AI that poses a challenge.

I also don’t have a unique algorithm that’s leading the charge, just a series of logical choices that I as a player would make (albeit as a player I might use more nuance in some decisions, but god damn if nuance isn’t the trickiest thing to program), so I feel less confident that I will do well this time (but still fairly confident).

I guess we’ll just have to see how it all shakes down tomorrow or Friday, whenever the actual competition is. Till the next post, thanks for reading.

Playing Gin Rummy: Prep

Welcome to the second project in our Artificial Opponents (Advanced AI) class, where we are tasked with creating an AI that will play and ideally beat our opponent.

The System

As always, I am going to assume that you have a basic familiarity with the game. As far as I am aware, the current ruleset used in this version of Gin Rummy is as follows:

  • Standard 52 Card Deck
  • 2 players (My AI versus another classmate’s AI, tournament bracket style)
  • 10 card hands
  • A preset amount of rounds (flexible)
  • Sets are 3 or 4 of the same rank
  • Runs are 3 or more of the same suit in rank order
  • A single card can be in either a set or a run but not both
  • Points
    • Cards in sets or runs are 0 points, all other cards are deadwood
    • Aces are always low (rank of 1)
    • Face cards are worth 10 points
    • All other cards are worth their rank (2-10)
  • Play
    • Each turn the active player draws from either the deck or the discard pile, the top card of the discard is the only applicable draw, and the only card displayed
    • After a draw the active player must discard a card from their hand
    • When discarding the active player may knock if they have deadwood totaling 10 or less
    • When discarding, the active player may declare Gin and collect a 25 point bonus. There is no Big Gin in this version of the game
  • Ending the round
    • The active player knocks, the active player has gin, or the active player attempts to draw from an empty deck
    • A game ending in draw scores no points for anyone but the amount of rounds decrements by 1
  • End of round scoring
    • Gin grants 25 bonus points in addition to the points below
    • If the active player knocks and has less deadwood than the other player, they collect points equal to the losing player’s deadwood minus their own deadwood
    • If the active player knocks but has a higher deadwood count than the opponent, the opponent collects 25 bonus points for undercutting and scores points as if they were the one who had knocked
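As a quick illustration, the deadwood point values above boil down to a one-liner (my own sketch, not the framework’s code):

```cpp
// Deadwood values per the ruleset above: aces are always low (rank 1),
// face cards (J/Q/K, ranks 11-13) are worth 10, everything else its rank.
int deadwoodValue(int rank)   // rank 1 (ace) through 13 (king)
{
    return rank > 10 ? 10 : rank;
}
```

So a hand with an unmatched ace, 7, and king counts 1 + 7 + 10 = 18 deadwood.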

My Approach

While I certainly haven’t played Gin as much as I have minesweeper (though thankfully my dad taught me a large number of card games when I was younger), I still believe the best way to write AI is as if it were taking a human-like decision-making approach. However, if I learned anything from the last project (and my experience with card games), probability and counting cards are going to be vital to performing the best. This isn’t Vegas after all. So to rewrite my thesis: the best way to write AI is to create it as if it were using human-like decision making while also having perfect memory and instantaneous processing power (strangely enough, traits that a computer has).

That said, with Gin I’ll need to do more research, testing, and a lot of iteration, but my initial thoughts on the decision-making process run something like this:

  1. Save all known cards to a list, for later probability calculations.
  2. Determine any sets or runs and “lock” those cards, preventing the AI from discarding them.
  3. Find pairs of two or runs of 2, with lower values being a priority over high ones, and put those in a “hold” category.
  4. Draw the discard if it completes a set or run.
  5. Otherwise draw from the deck. (There will be some fiddling for pulling from the discard without the above condition being true).
  6. Discard the highest card that doesn’t qualify within the “hold” category.

Theories and Analysis

One thing I found while playing is that it’s dangerous to hold onto a large number of pairs, especially if they’re face cards, as sometimes you may never complete any of them and get knocked on with a lot of points sitting as deadwood in your hand.

Another thing I noticed is that it might be worth it to do the exact opposite and grab a lot of face cards, as many people see them as high value and discard them, allowing you to pick them up.

Which, on that note, reminds me that I need to keep track of what cards my opponent is picking up. There’s good odds that if the opponent has just picked up the seven you put down, you shouldn’t drop another seven. The odds could also be that they’re using it in a run, but that would be a test of comparing it to the known cards and determining the probability of that being the case.
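A minimal sketch of that tracking idea (TCard and riskyDiscard are hypothetical names; real play would also weigh the run possibility and the probabilities mentioned above):

```cpp
#include <algorithm>
#include <vector>

// Sketch: remember every card the opponent takes from the discard pile, and
// treat a candidate discard as risky if it shares a rank with one of them
// (it would likely feed a set they are building).
struct TCard { int rank; int suit; };

bool riskyDiscard(const TCard& candidate, const std::vector<TCard>& opponentPickups)
{
    return std::any_of(opponentPickups.begin(), opponentPickups.end(),
        [&](const TCard& c) { return c.rank == candidate.rank; });
}
```

So if the opponent grabbed my discarded seven, any other seven I hold gets flagged as risky to throw.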

Either way, this is far more complicated than minesweeper, in terms of the actions that you can choose. On the bright side, a single move won’t make you lose to the same degree that a wrong click in minesweeper (usually on a 50/50 or random guess) would.

That, of course, doesn’t stop the random chance that occurs in a card game, but it’s just what happens with a deck of cards.

More to come in about 3 weeks, a long hiatus for this section of the blog, but I’ll be back then with spicy solutions.

Solving Minesweeper: The Wrap Up

There’s nothing here quite as exciting or fun as the last post, but this is the postmortem for how I thought everything went.

Spoiler Alert: It went pretty well.

First off, ups and downs, strengths and weaknesses. Overall, it does alright on the hard puzzles. It’s not on the tier of the really advanced algorithms that pull 33% hard-game winrates, but it managed an average of about 5%, which, given the 2 weeks I had, I’m decently satisfied with.

It wasn’t as strong on the medium and easy puzzles as it could’ve been. It wasn’t bad, but I only averaged about 75-80% on the easy, and 50% on the medium. A fair number of those losses were just a corner click of a 1 with no additional information that led me into a random-guess loss, but my algorithm also lacked two particular features that, had I had more time, I would’ve liked to implement: endgame solutions and probability calculations.

Of course had I implemented those areas my winrate on hard would’ve increased significantly as well, but a better guessing system (probability) would’ve helped all across the board.

An end game algorithm would’ve been selective, and achieved a much smaller margin of increase in winrates, but every little bit is important.

The last thing to cover is that I improved the algorithm slightly since the last post. I’m going to go back and update that post and my modification to reflect the changes I made, but it was basically a small bump to opening up more squares using recognizable patterns (the entire gist of my adjacency algorithm was capitalizing on common (and sometimes rarer) repeating patterns found in minesweeper).

Oh also! The actual results. Overall my algorithm placed second in the class. Alan, who I hadn’t really noticed before (he didn’t showcase his algorithm the Wednesday a few of us ran an initial test), really surprised me. He successfully implemented a probability system that consistently topped my adjacency algorithm. While both of our algorithms sat solidly in the top 2, his was definitely a slight step above mine, and he’s a large reason I would really look to implement a probability system if I ever did this in the future.

Overall, not a long post, just a quick update to wrap it all up.

Solving Minesweeper: The Good Part


So after quite some time, I have crafted a minesweeper solving AI, and it was… an exercise of imagination. Of sorts.

First thing to say: while I lightly peeked at one or two solvers that were pointed out to me by classmates, I disregarded anything useful in them – not because they were bad solvers – but because I just preferred to figure it out through my own methods in this particular situation.

That is to say, everything that I completed in regards to this solving algorithm was of my own mind, which was my intent in my last post: Create an AI that acts as similar to a human as possible (with the new clause of performing well – this becomes important when guessing becomes a factor).

And as you’ll see, I think I came up with a pretty impactful algorithm that increases the solver’s winrate by a notable amount.


First and foremost, to anyone coming to this post interested in how well the algorithm has done, I have run a multitude of what I am calling the “300 Test,” that is, 100 attempts at an official size large minesweeper board (30×16, 99 mines), 100 attempts at a medium board (16×16, 40 mines), and 100 attempts at a small board (9×9, 10 mines).

My results for the 300 Tests (read as hard board winrate, medium board winrate, small board winrate, and then time) are as follows:

Test 1: 4/100 50/100 73/100 (16.9m)
Test 2: 5/100 47/100 75/100 (No Time) – Forgot to record time
Test 3: 2/100 49/100 80/100 (16.2m)
Test 4: 1/100 33/100 52/100 (10.5m)
Test 5: 2/100 52/100 75/100 (16.2m)
Test 6: 4/100 41/100 55/100 (13.5m)
Test 7: 5/100 29/100 59/100 (13.9m)
Test 8: 1/100 34/100 63/100 (13.1m)
Test 9: 5/100 49/100 68/100 (24.3m)
Test 10: 4/100 52/100 74/100 (23.0m)
Test 11: 4/100 52/100 74/100 (16.2m)
Test 12: 9/100 48/100 70/100 (17.3m)

It is of note that these scores are not all from the same version of the algorithm. In fact, almost every test ran on a different iteration, which accounts for the disparity in times and results. Around test 4 I implemented a bad guessing algorithm, which resulted in a far lower time and score as it guessed bombs very frequently. At test 9, I added a condition to remove squares from the revealed list, which seemed to considerably slow down the algorithm (despite its purpose being to speed it up considerably). At test 11, that same condition was still in the algorithm (and I removed another bit), so I’m thinking it might not be the cause.

The algorithm, as of this post, is unfinished in optimization. I am unsure if I ever will finish it, given all of the other work I have and will become busy with, and I will probably “finish” this and move on, unfortunately.

The Code

Beginning with the function list:

//Add any adjacent hidden cells around the index that are not in the permanent flagList to the temporary flagList
void addAdjacentHiddenCellsToTempFlagList(GameView theView, UINT index);

//Add all hidden cells around the index to the flagList (no duplicates)
void addAllHiddenCellsToFlagList(GameView theView, UINT index);

//Add all revealed cells to the revealed list (no duplicates)
void addAllRevealedCellsToRevealedList(GameView theView);

//Returns true if at least two adjacent cells are within the temp flag list (wow totally not the function name)
bool atLeastTwoAdjacentCellIsWithinTempFlagList(GameView theView, UINT index);

//Get an index of a cell adjacent to the index that is guaranteed to have no mine (scans both flag lists)
UINT getCleanCell(GameView theView, UINT index);

//The number of cells adjacent to the index that are flagged
UINT getNumFlaggedCells(GameView theView, UINT index);

//The number of cells adjacent to the index that are hidden (not revealed). If bool is true, then ignore any cells on either flag list.
UINT getNumHiddenCells(GameView theView, UINT index, bool checkFlagLists);

//Returns true if the selected index is not on the permanent flag list
bool isNotOnFlagList(GameView theView, UINT index);

//Remove any revealed cells from the revealed list if all adjacent cells are revealed or flagged
void removeAllFinishedRevealedCells(GameView theView);

And important variables:

std::vector&lt;UINT&gt; flagList, revealedList, tempFlagList;

I opted to not use a class, as I felt global variables would be fine for only needing 3 vectors.

The Algorithm

Now for the spicy part. Let’s go over in a step by step process how the entire algorithm works, then let’s take a look at actual code.

Step by Step

  1. Clear the temporary flag list. Important for clicking clean cells.
  2. Add any and all revealed cells to the revealed list, do not allow for duplicates.
  3. Remove any “finished” revealed cells (see comment above removeAllFinishedRevealedCells function). (This step is purely for optimization, I am unsure if it accomplishes that task. Fiddle as you wish).
  4. If the upper left cell is hidden, reveal it. This is the first move every game. (Personal choice of where to start).
  5. Main flag and reveal loop:
    1. Scan through every cell in the revealed list
    2. If the cell’s number (I will refer to this as cell(n) or iA(n) in the future, iA being indexA, iB is indexB, etc) is greater than 0 and equal to the amount of hidden adjacent cells, add every hidden cell around it to the flag list.
    3. If the cell(n) is greater than 0 and less than the amount of hidden adjacent cells, but the number of adjacent flagged cells is equal to the cell(n), reveal any hidden non-flagged cells around it.
  6. Now the real fun begins, the bread and butter of the algorithm as a whole. I crafted an adjacency algorithm that uses a temporary flagging system to identify safe squares that can be clicked. If you look at the two pictures in my last minesweeper post, you can see the idea in practice.
    1. Same as last time, scan through every cell (iA) in the revealed list.
    2. Clear the temporary flag list, removing leftovers from the last iteration.
    3. If iA(n) + 1 is equal to the number of adjacent hidden cells, add all adjacent hidden cells to the temporary flag list. (You can only determine safe squares if you only have 1 square that is in contention).
    4. Scan through every cell adjacent to iA. These scanned cells will be called iB.
    5. Five checks here, you can only progress if:
      1. iB is revealed.
      2. iB(n) does not equal 0.
      3. iB has at least TWO adjacent cells in the temporary flag list. (This step is very important and the area that took me forever to nail down. There are a number of fringe cases that occur if iB only has one adjacent cell in the temp flag list, and it fails if you require iB to touch every cell in the temp flag list).
      4. One of the two is true
        1. Both of the following are true
          1. iB(n) – iB(number of adjacent flagged cells) (ONLY on the permanent flag list, not temporary) equals the number of hidden cells minus the number of permanent adjacent flagged cells AND minus the number of temporarily flagged cells.
          2. The number of hidden cells minus the number of permanent adjacent flagged cells AND minus the number of temporarily flagged cells is equal to 1. (If it is 2, the choices of a clean cell are ambiguous, and you can easily hit a bomb, but with only 1 cell, you can be certain it is a safe square).
        2. Both of the following are true
          1. iB(n) is less than the number of hidden cells minus the number of permanent adjacent flagged cells AND minus the number of temporarily flagged cells.
          2. iB(n) is equal to 1.
    6. Clear the only safe square in the area.
  7. In the event that the main loop fails to get any hits and the adjacency algorithm falls flat, you are left to random guessing. I did attempt to do an estimated guessing algorithm, but I clearly didn’t put enough time into it (most of my time was spent on the previous algorithm), because it hit more bombs than a standard random cell choice.
    1. While (true)
    2. Get a random index from all available cells.
    3. If it is hidden and not on the permanent flag list, reveal it.


It is worth noting that I used my professor’s framework and in no way wrote the full minesweeper game (though I did do that in Flash back in my sophomore year, which was actually very fun), so there will be references to classes and events I will cover real quickly.

GameView is a wrapper class around the GameState, giving access to the information of the board without allowing the user to cheat by just asking for the mines’ locations. Every time theView is referenced, it is an instance of GameView.

CellClickTransactions are the events sent back to the main program, telling it to click and reveal a given index.

All other references should be to the functions, whose code I won’t post so this post doesn’t stretch for miles, but general descriptions of how they work are in the comments above each one’s name.



if (!theView.getCell(0).isRevealed())
  pHandler->postTransaction(new CellClickTransaction(0));

//Scan through revealed list to clear and flag
for (UINT index : revealedList)
{
  UINT hiddenCellNum = getNumHiddenCells(theView, index, false);

  if (theView.getNumAdjacentMines(index) != 0)
  {
    //If the cell's number equals the amount of hidden cells around it, flag em all
    if (theView.getNumAdjacentMines(index) == hiddenCellNum)
      addAllHiddenCellsToFlagList(theView, index);

    //If the cell's number is less than the amount of hidden cells around it, but the count is equal to the flagged amount, start revealing
    if (theView.getNumAdjacentMines(index) < hiddenCellNum && theView.getNumAdjacentMines(index) == getNumFlaggedCells(theView, index))
      for (UINT cleanIndex : theView.getAdjacentCellIndices(index))
        if (!theView.getCell(cleanIndex).isRevealed() && isNotOnFlagList(theView, cleanIndex))
          pHandler->postTransaction(new CellClickTransaction(cleanIndex));
  }
}

//Funtime adjacency algorithm
for (UINT indexA : revealedList)
{
  if (theView.getNumAdjacentMines(indexA) + 1 == getNumHiddenCells(theView, indexA, false))
  {
    addAdjacentHiddenCellsToTempFlagList(theView, indexA);
    for (UINT indexB : theView.getAdjacentCellIndices(indexA))
    {
      UINT numHiddenCellsTrue = getNumHiddenCells(theView, indexB, true);
      if (theView.getCell(indexB).isRevealed() && theView.getNumAdjacentMines(indexB) != 0 && atLeastTwoAdjacentCellIsWithinTempFlagList(theView, indexB) &&
          ((theView.getNumAdjacentMines(indexB) - getNumFlaggedCells(theView, indexB) == numHiddenCellsTrue && numHiddenCellsTrue == 1) ||
           (theView.getNumAdjacentMines(indexB) < numHiddenCellsTrue && theView.getNumAdjacentMines(indexB) == 1)))
      {
        UINT cleanIndex = getCleanCell(theView, indexB);
        pHandler->postTransaction(new CellClickTransaction(cleanIndex));
      }
    }
  }
}

while (true)
{
  UINT randomIndex = rand() % theView.getNumCells();
  if (!theView.getCell(randomIndex).isRevealed() && isNotOnFlagList(theView, randomIndex))
  {
    pHandler->postTransaction(new CellClickTransaction(randomIndex));
    break;
  }
}

And there you have it, the full minesweeper solving AI.

Final Thoughts

Overall I think it works fairly well. Given that the guessing algorithm isn’t refined, a ~4-5% win rate on hard is pretty alright.

I also have to give an estimate of how well I think my AI will do compared to everyone else, and honestly I’m feeling good. I’m sure others will have thought the same thing I did, but apparently there’s a single set algorithm from a Harvard paper floating around in use in our class, and from what I understood, it certainly doesn’t do adjacency. Can’t predict much more than that, but I’m confident.

As usual, more to come next week, after it goes up against the rest of my class’s AIs.

Solving Minesweeper: Pt 1

Background for anyone reading this fresh:

Artificial Opponents is effectively Advanced Game AI, where we don’t create actual AI; we just write an algorithm that solves whatever problem we’re looking at but never actually learns from its experience.

The class has three games that we’re to create optimal solving algorithms for, the first being minesweeper. This blog post is to try and detail my thoughts going into the project, and how I aim to solve it.

The human brain is a powerful thing, able to perform a lot of quick calculations without our conscious knowledge. It’s how professional esports players are capable of pulling off split-second moves and strategies before they themselves can fully comprehend what they’ve done.

Using that idea as a basis for my algorithm – that human brains subconsciously attempt to make the best decisions, especially after enough trial and error that we develop a problem-solving algorithm of our own, also called ‘using your gut’ – I first looked at myself, having played a fair bit of minesweeper: how exactly do I solve a board, and what moves do I take to get through that process successfully?

Also of note: there are times when random chance is unavoidable, but before the moment comes where you have to click a square blindly, a series of calculations can narrow things down as far as possible without tripping a mine.

I have determined the standard rotation of moves that can be made, before any real calculations need to be done, to be as follows:

  1. Click a square to begin, center or corner, author’s pick (I have chosen the upper left corner at (0, 0) for now, but it may be more optimal to choose a center square).
  2. Add any revealed cells to a revealed cells list (for optimization: scanning just the cells you can see will be far quicker in the long run than scanning cells you have no information about).
  3. Add any cells that are obviously mines to the flag list. Ex. The cell you are currently viewing is marked as 1, and of the 8 surrounding cells, 7 are revealed. The 8th must be the mine, and can safely be flagged.
  4. Using the flag list, reveal all squares that are 100% assuredly not mines, adding those revealed cells to the revealed cell list.
  5. Return to step 2 and loop this process until there are no more guaranteed moves.
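The steps above can be sketched as a single deterministic pass over a flat grid. This is a minimal standalone sketch under assumed data structures; `Grid`, `forEachNeighbor`, and `deterministicPass` are hypothetical stand-ins, not the course framework:

```cpp
#include <cassert>
#include <vector>

// Hypothetical flat W x H board: -1 marks a mine, otherwise a cell
// holds its adjacent-mine count.
struct Grid {
    int w, h;
    std::vector<int> value;      // -1 = mine, else adjacent-mine count
    std::vector<bool> revealed;
    std::vector<bool> flagged;
};

// Visit the up-to-8 neighbours of index i.
template <typename F>
void forEachNeighbor(const Grid& g, int i, F f)
{
    int x = i % g.w, y = i / g.w;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            if (dx == 0 && dy == 0) continue;
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < g.w && ny >= 0 && ny < g.h)
                f(ny * g.w + nx);
        }
}

// One pass of steps 2-4: flag forced mines, reveal forced-safe cells.
// Returns true if any progress was made (step 5 loops while this holds).
bool deterministicPass(Grid& g)
{
    bool progress = false;
    for (int i = 0; i < g.w * g.h; ++i) {
        if (!g.revealed[i] || g.value[i] <= 0) continue;  // only numbered, revealed cells

        int hidden = 0, flaggedCount = 0;
        forEachNeighbor(g, i, [&](int n) {
            if (!g.revealed[n]) { ++hidden; if (g.flagged[n]) ++flaggedCount; }
        });

        // Step 3: every hidden neighbour must be a mine, so flag them all.
        if (g.value[i] == hidden) {
            forEachNeighbor(g, i, [&](int n) {
                if (!g.revealed[n] && !g.flagged[n]) { g.flagged[n] = true; progress = true; }
            });
        }
        // Step 4: all mines already flagged, so remaining hidden neighbours are safe.
        else if (g.value[i] == flaggedCount) {
            forEachNeighbor(g, i, [&](int n) {
                if (!g.revealed[n] && !g.flagged[n]) { g.revealed[n] = true; progress = true; }
            });
        }
    }
    return progress;
}
```

Running `deterministicPass` in a loop until it returns false is exactly the "loop until there are no more guaranteed moves" of step 5.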

Typically the size of the map alone has some impact on the success of that loop: the smaller the map, the fewer conflicts there will be, and this process alone may determine the solution with 100% accuracy. But the mine count grows faster than the area (higher difficulties have a higher ratio of bombs to board space), so on larger maps this loop will eventually stall, and you must begin calculating which squares are safe.
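That rising ratio of bombs to area is easy to check against the classic Microsoft difficulty presets (a quick standalone check; `mineDensity` is my own helper, not part of the course framework):

```cpp
#include <cassert>

// Mine density as a percentage of the board, for the classic presets:
//   Beginner      9 x  9, 10 mines -> ~12.3%
//   Intermediate 16 x 16, 40 mines -> ~15.6%
//   Expert       30 x 16, 99 mines -> ~20.6%
double mineDensity(int width, int height, int mines)
{
    return 100.0 * mines / (width * height);
}
```

The density climbs by several points per difficulty tier, which is why the guaranteed-move loop that clears a Beginner board outright stalls so much earlier on Expert.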

Allow me to show you an example:


In the above example, we’re looking at the red squares in particular, and what we can determine from them, despite no obvious answers.

So let’s take a moment and get some answers. The 2 within the red area guarantees that there are 2 bombs among the orange area’s 3 squares, leaving each of those squares only a 33% chance of being safe. More importantly, any two of those three squares must contain at least one bomb between them, so there is a 100% chance that the two leftmost squares hold a bomb.

In this particular case, we don’t care which of them it is. All we need to know is that the 1 within the red area says exactly 1 bomb exists within the 3 squares below it (the blue square plus orange’s two leftmost squares), and since a bomb must already sit within those two orange squares, blue must be a safe square.

And voila:


That was but a small example, but with proper calculations layered on top of the 100% deterministic loop, much more can be solved than with the loop alone.

Last thing to cover: there is absolutely the potential of running into a situation where you have to guess. In those situations, you can guess randomly or guess intelligently, using much of the same logic as just demonstrated. Narrowing down the odds of a square being safe increases the rate at which you succeed.
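One way to guess intelligently, sketched as my own illustration rather than anything settled on in this post: when the unsolved frontier is small, brute-force every mine placement consistent with the visible numbers and count, per hidden cell, the fraction of valid placements in which it is a mine. The constraints in the test loosely model the worked example above (hypothetical indices: cells 0-2 are the ‘orange’ squares, cell 3 is ‘blue’):

```cpp
#include <cassert>
#include <vector>

// A constraint: exactly `mines` of the hidden cells in `cells` are mines
// (one constraint per revealed number on the frontier).
struct Constraint { std::vector<int> cells; int mines; };

// For n hidden frontier cells (n small enough to enumerate 2^n subsets),
// return each cell's probability of being a mine across all placements
// that satisfy every constraint.
std::vector<double> mineProbabilities(int n, const std::vector<Constraint>& cs)
{
    std::vector<int> mineCount(n, 0);
    int valid = 0;
    for (unsigned mask = 0; mask < (1u << n); ++mask) {
        bool ok = true;
        for (const Constraint& c : cs) {
            int m = 0;
            for (int cell : c.cells)
                if (mask & (1u << cell)) ++m;
            if (m != c.mines) { ok = false; break; }
        }
        if (!ok) continue;
        ++valid;
        for (int i = 0; i < n; ++i)
            if (mask & (1u << i)) ++mineCount[i];
    }
    std::vector<double> p(n, 0.0);
    for (int i = 0; i < n; ++i)
        if (valid > 0) p[i] = (double)mineCount[i] / valid;
    return p;
}
```

A cell at 0% is provably safe and a cell at 100% is provably a mine; when neither exists, clicking the lowest-probability cell is the intelligent guess.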

More to come. Hopefully I can manage to make it work.