When The Masses Ask…

The big focus this week was player feedback: we got a lot of QA responses asking for more information about the world and the player’s physical relation to it, especially when travelling through paintings.

We improved that feedback in two big ways. The first was a render texture mesh that we added to the gateways in the real world (this one shows the forest landscape):

[Image: RenderTexture.png]

The second was a camera transition in two parts: moving into/out of a painting, and moving between paintings:

[GIF: InOut]

[GIF: Between]

The other bit of player feedback we attempted to expand on was the narrative. Besides fleshing it out the week prior, we implemented a very basic NPC (it can be seen as the red block in the gifs above – very creative, I know) that says a bit of dialogue to you when you walk past. Nothing fancy for now, but it paves the way for easy implementation later.

Finally, the two bits of functionality we threw in this week were a new frame type, the latched painting, which is a frame that can be dived into but not moved from its location, and interactive objects within paintings.

This is our latched painting:

[Image: Latched.png]

An interactive object, first off, replaces the collider on the side of the painting it sits on. For example, our snow painting has an igloo on the left side, which counts as our interactive object and disables the left collider’s travel option (you can still collide, you just won’t travel between paintings); instead, you only travel upon hitting E (which later will be accompanied by an animation).
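
To make that a little more concrete, here’s a minimal sketch of the idea (the names and structures are illustrative, not our actual blueprint logic):

struct PaintingSide
{
  bool hasInteractiveObject = false; //e.g. the igloo on the snow painting's left side
  int neighborHookId = -1;           //which hook/painting this side leads to, -1 if none
};

//Decide whether bumping a side, or pressing E on it, should move the player to the neighboring painting
bool shouldTravel(const PaintingSide& side, bool bumpedCollider, bool pressedInteract)
{
  if (side.neighborHookId < 0)
    return false;            //nothing on that side to travel to
  if (side.hasInteractiveObject)
    return pressedInteract;  //interactive sides (like the igloo) only travel on E
  return bumpedCollider;     //normal sides travel on collision
}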

[GIF: Interact]

Next week’s focus is on improving gameplay, mainly in the painting world, and on adding new hook and frame types for puzzle variety, so tune in then.


Infrastructure Is Key

We’ve taken our Painting game, which still does not have an official name, and are working toward fleshing it out considerably, giving it pizzazz where we can, and working toward that final vertical slice for the end of the semester.

My main area of focus, this being the first week of working solely on this one game, was improving the infrastructure of the areas I had set up more hastily, and cleaning up our repository so it doesn’t take four centuries to download.

Repository cleanup went well. I’m somewhat particular about the organization of the folders I use and access, so I like to have things easy to find and not scattered all over the place. It helps me at the very least, and I like to imagine it helps others too.

Matt (Designer) added a third, more complex puzzle, and a background piano track that he composed himself. It’s a little simple, but it’s only the start, I expect great things.

We got two bits of player feedback in. First, when you are holding a painting and are near enough to place it on a hook, the hooks glow. Considering our hook model is somewhat visually diminished, I found it pretty helpful when I was testing the levels. It does help that Tucker (Artist) put a nice background to accentuate the “You can place a painting here!”

[Image: HookGlow.png]

The second bit is that the frames give you a direct visual indication of where they’re connected. Easier to show than tell, so…

[GIF: EdgeGlow]

And that updates on the fly as you go.

The final bits worth noting are a variety of bugfixes and infrastructure improvements. Three highlights:

  • Each painting type now has far easier designations of entrances and exits on all four sides.
  • Paintings would occasionally stay highlighted when they shouldn’t, and only one object could be highlighted at a time; both of those are now fixed.
  • The distance at which you can pick up/put down a painting is differentiated from the distance at which you can dive into a painting. Good for some puzzles. Matt has a puzzle with a gate that enforces that limit. In the future we’ll have different painting and hook types so we don’t need direct physical barriers.

That’s this week. We’ve got a lot planned for next week, so check in then.

The Chopping Block

The pain of working with full 3D physics is over. Snails have defeated me.

But honestly I’m not sad at all. While it was a challenge, it taught me a lot about Unreal as an engine, both in blueprints and with their C++ code base. And I even went to a professor who was super helpful in terms of teaching me about quaternions and how they work (much thanks Dan).

On a positive note, we have decided to move forward with the Painting game (name still pending), our Gothic-themed puzzler!

To show off some highlights, here is our starting room:

[Image: FirstRoom.png]

And when you near the paintings, they glow to signify that you can carry them!

[GIF: ezgif-3-2b5a4f125a]

So far, our first two painting types, forest and snow:

[Images: ForestPainting.png, SnowPainting.png]

The little copper-ish capsule (placeholder) is your character, who moves around the paintings and bumps into walls to switch between paintings.

And finally I’ll show off the (technically second) puzzle, where you have to grab the two paintings on the walls and move them to the hooks (the stone squares on the wall right now). That allows you to dive into the bottom painting, maneuver your way into the upper painting (using the tree and cloud in the foreground of the forest painting), and then go right to appear in the painting on the balcony, which you can then pop out of, and you’ll have solved it!

[GIFs: ezgif-3-f6a8a24b04, ezgif-3-ea5943310b]

Overall, I’m super happy with the progress made; it’s a fairly solid initial prototype. We’ll be looking to challenge this week, so ideally that will succeed and put us into deep dive, where we can take this game and push forward with it even further (we’ll end up doing that anyway, but professors’ blessings are always nice).

Final bit, if you want to try out the prototype, it can be found here!

More to come in another seven days!

Playing Gin Rummy: Prep

Welcome to the second project in our Artificial Opponents (Advanced AI) class, where we are tasked with creating an AI that will play and ideally beat our opponent.

The System

As always, I am going to assume that you have a basic familiarity with the game. As far as I am aware, the current ruleset used in this version of Gin Rummy is as follows:

  • Standard 52 Card Deck
  • 2 players (My AI versus another classmate’s AI, tournament bracket style)
  • 10 card hands
  • A preset number of rounds (flexible)
  • Sets are 3 or 4 of the same rank
  • Runs are 3 or more of the same suit in rank order
  • A single card can be in either a set or a run but not both
  • Points
    • Cards in sets or runs are 0 points, all other cards are deadwood
    • Aces are always low (rank of 1)
    • Face cards are worth 10 points
    • All other cards are worth their rank (2-10)
  • Play
    • Each turn the active player draws from either the deck or the discard pile; the top card of the discard pile is the only applicable draw, and the only card displayed
    • After a draw the active player must discard a card from their hand
    • When discarding the active player may knock if they have deadwood totaling 10 or less
    • When discarding, the active player may declare Gin (no deadwood at all) and collect a 25 point bonus. There is no Big Gin in this version of the game
  • Ending the round
    • The active player knocks, the active player has gin, or the active player attempts to draw from an empty deck
    • A round ending in a draw scores no points for anyone, but the number of remaining rounds still decrements by 1
  • End of round scoring
    • Gin grants 25 bonus points in addition to the points below
    • If the active player knocks and has less deadwood than the other player, they collect points equal to the losing player’s deadwood minus their own deadwood
    • If the active player knocks but has a higher deadwood count than the opponent, the opponent collects 25 bonus points for undercutting and scores points as if they were the one who had knocked (these scoring cases are sketched in code after this list)
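
Since I’ll have to get that scoring exactly right in code, here’s a minimal sketch of how I read the end-of-round rules (my own interpretation, not the class framework’s actual code):

struct RoundResult { int knockerPoints = 0; int opponentPoints = 0; };

//deadwoodKnocker and deadwoodOpponent are each hand's summed deadwood values
RoundResult scoreRound(int deadwoodKnocker, int deadwoodOpponent, bool knockerHasGin)
{
  RoundResult result;
  if (knockerHasGin)
  {
    //Gin: 25 bonus points plus the opponent's deadwood
    result.knockerPoints = 25 + deadwoodOpponent;
  }
  else if (deadwoodKnocker < deadwoodOpponent)
  {
    //Normal knock: the difference in deadwood
    result.knockerPoints = deadwoodOpponent - deadwoodKnocker;
  }
  else
  {
    //Undercut: the opponent scores as if they had knocked, plus a 25 point bonus
    //(I'm assuming an equal deadwood count also counts as an undercut)
    result.opponentPoints = 25 + (deadwoodKnocker - deadwoodOpponent);
  }
  return result;
}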

My Approach

While I certainly haven’t played Gin as much as I have minesweeper (though thankfully my dad taught me a large number of card games when I was younger), I still believe the best way to write AI is as if it were taking a human-like decision making approach. However, if I learned anything from the last project (and my experience with card games), probability and counting cards are going to be vital to performing well. This isn’t Vegas after all. So to restate the thesis: the best way to write AI is to create it as if it were using human-like decision making while also having perfect memory and instantaneous processing power (strangely enough, traits that a computer has).

That said, with Gin I’ll need to do more research, testing, and a lot of iteration, but my initial thoughts on the decision making process look something like this (a rough sketch in code follows the list):

  1. Save all known cards to a list, for later probability calculations.
  2. Determine any sets or runs and “lock” those cards, preventing the AI from discarding them.
  3. Find pairs or runs of 2, with lower values being a priority over high ones, and put those in a “hold” category.
  4. Draw the discard if it completes a set or run.
  5. Otherwise draw from the deck. (There will be some finagling for pulling from the discard without the above condition being true.)
  6. Discard the highest card that doesn’t qualify within the “hold” category.
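
Here’s that rough sketch of the discard decision in code, under the assumption that melded cards are “locked” and near-melds are “held” (all names are illustrative placeholders, nothing from the actual framework):

#include <vector>

struct Card { int rank = 0; int suit = 0; }; //rank 1-13, ace always low

//Face cards are worth 10, everything else its rank
int cardValue(const Card& c) { return c.rank > 10 ? 10 : c.rank; }

//Step 6: discard the highest-value card that is neither locked (in a set/run) nor held (pair/partial run)
int chooseDiscard(const std::vector<Card>& hand,
                  const std::vector<bool>& locked,
                  const std::vector<bool>& held)
{
  int best = -1;
  for (size_t i = 0; i < hand.size(); ++i)
  {
    if (locked[i] || held[i])
      continue;
    if (best < 0 || cardValue(hand[i]) > cardValue(hand[best]))
      best = static_cast<int>(i);
  }

  //If everything is locked or held we still have to discard something;
  //fall back to the highest unlocked card (a judgment call, not settled yet)
  if (best < 0)
  {
    for (size_t i = 0; i < hand.size(); ++i)
    {
      if (!locked[i] && (best < 0 || cardValue(hand[i]) > cardValue(hand[best])))
        best = static_cast<int>(i);
    }
  }
  return best; //index into hand, -1 only if the entire hand is locked
}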

Theories and Analysis

One thing I found while playing is that it’s dangerous to hold onto a large number of pairs, especially if they’re face cards, as sometimes you may never complete any of them and get knocked on with a lot of points sitting as deadwood in your hand.

Another thing I noticed is that it might be worth it to do the exact opposite and grab a lot of face cards, as many people see them as high value and discard them, allowing you to pick them up.

On that note, I also need to keep track of which cards my opponent is picking up. There are good odds that if the opponent has just picked up the seven you put down, you shouldn’t drop another seven. It could also be that they’re using it in a run, but that would be a test of comparing it to the known cards and determining the probability of that being the case.
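
A simple sketch of what that tracking might look like, reusing the Card struct from the sketch above (again, purely illustrative):

#include <cstdlib>
#include <vector>

std::vector<Card> opponentPickups; //every card the opponent has taken from the discard pile

//True if discarding this card looks risky given what the opponent has been collecting
bool isRiskyDiscard(const Card& candidate)
{
  for (const Card& taken : opponentPickups)
  {
    if (taken.rank == candidate.rank)
      return true; //could complete a set
    if (taken.suit == candidate.suit && std::abs(taken.rank - candidate.rank) <= 2)
      return true; //could extend a run
  }
  return false;
}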

Either way, this is far more complicated than minesweeper in terms of the actions you can choose. On the bright side, a single move won’t make you lose to the same degree that a wrong click in minesweeper (usually on a 50/50 or random guess) would.

That, of course, doesn’t stop the random chance that occurs in a card game, but it’s just what happens with a deck of cards.

More to come in about 3 weeks, a long hiatus for this section of the blog, but I’ll be back then with spicy solutions.

Iterations Galore

Just a quick update for this week’s progress.

I worked on our puzzle painting game and the snail racer. The painting game had far less to add; I dropped in a node system that the paintings hook up to and talk to each other through, so instead of the paintings being directly linked to each other, the static hooks on the walls are the jumping points, and the paintings are just the means by which you move from point to point.

The snail game got much more attention as I attempted to write my own physics engine. That went alright, the big challenge being that the snail has to move up walls as well, so trying to get it to stick to walls was a problem (but a solved one).

The snail game initially also had some trouble with moving up slopes (this was before I even attempted vertical walls), but I fixed it first by adding the up vector, and then by actually rotating the object. In hindsight it sounds rather simple and plain, and maybe it is; I did forget that I could get the angle between the impact normal and my own vector, because I thought I only had the impact normal to work with (and forgot I could use my own location’s vector).
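
For what it’s worth, the rotation part can be expressed pretty compactly in Unreal C++; here’s a minimal sketch of aligning an actor’s up vector to a surface normal (Hit is an illustrative FHitResult from a trace, and our actual version lived in blueprints):

//Assumes Hit is the FHitResult from the trace against the slope or wall
FVector SurfaceNormal = Hit.ImpactNormal;
FVector CurrentUp = GetActorUpVector();

//Quaternion that rotates the current up vector onto the surface normal
FQuat AlignQuat = FQuat::FindBetweenNormals(CurrentUp, SurfaceNormal);

//Apply it on top of the existing rotation so forward/right are preserved as much as possible
SetActorRotation(AlignQuat * GetActorQuat());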

Overall, I don’t know exactly where we’ll go. Earlier I was leaning toward the snail racer, but these last few hours have been tough. I’m beginning to think that all these minor issues will keep cropping up and the movement will never feel fluid. There aren’t any good guides for the specific problems I want to solve, and while I’ve figured out a lot on my own, with some guidance here and there from the thoughts of others, I’m not sure I can keep it up.

Either way, we’ll decide our final game on Thursday, and then work on polishing that for the Monday after this Monday. It’s at least notable that I haven’t had any real issues with the painting game (although I’m sure they’ll appear), which may just make it far more appealing at this rate.

More to follow next week as we determine which game we shall move forward with!

Solving Minesweeper: The Wrap Up

There’s nothing here quite as exciting or fun as the last post; this is the postmortem for how I thought everything went.

Spoiler Alert: It went pretty well.

First off, the ups and downs, strengths and weaknesses. Overall, it does alright on the hard puzzles. It’s not on par with the really advanced algorithms that pull 33% hard-game winrates, but it managed an average of about 5%, which, given the 2 weeks I had, I’m decently satisfied with.

It wasn’t as strong on the medium and easy puzzles as it could’ve been. It wasn’t bad, but I only averaged about 75-80% on easy and 50% on medium. A fair number of those losses were just a corner click on a 1 with no additional information, which led me into a random-guess loss, but my algorithm lacked two particular areas that, had I had more time, I would’ve liked to implement: end-game solutions and probability calculations.

Of course, had I implemented those areas my winrate on hard would’ve increased significantly as well, but a better guessing system (probability) would’ve helped across the board.

An end game algorithm would’ve been selective, and achieved a much smaller margin of increase in winrates, but every little bit is important.

The last thing to cover is that I improved the algorithm slightly since the last post. I’m going to go back and update that post to reflect the changes I made, but it was basically a small bump from opening up more squares using recognizable patterns (the entire gist of my adjacency algorithm was capitalizing on common, and sometimes rarer, repeating patterns found in minesweeper).

Oh also! The actual results. Overall my algorithm placed second in the class. Alan, whom I hadn’t really noticed before and who didn’t showcase his algorithm the Wednesday a few of us ran an initial test, really surprised me. He successfully implemented a probability system that consistently topped my own adjacency algorithm. While both of our algorithms sat solidly in the top 2, his was definitely a slight step above mine, and it’s a large reason I would really look to implement a probability system if I ever did this again.

Overall, not a long post, just a quick update to wrap it all up.

The Snail Prototype

What a disaster.

I’m personally just really disappointed with what I was able to get done this week. Not because of time constraints or anything else, but because of how difficult it is to work with Unreal’s vehicle movement system.

For some clarity, this week our team made two prototypes, one of which I worked on, which was a snail racer, and one that our designer worked on, which was a ladder-climbing party brawler. My snail racer project went less than optimally.

At the end of almost 10 hours of work solely on movement, we made a car disguised as a snail whose movement is… well, it’s not bad, but it doesn’t feel good. I didn’t expect it to be perfect, but given that my entire time was effectively dedicated to trying to set up the snail to move, and then actually moving it, I feel like loads of time was wasted.

I think that mostly stems from the fact that in an optimal scenario, even if I don’t quite get things the way I want, I at least come away with an idea of how to move forward – the feeling that I learned something from my time.

With this prototype, I genuinely don’t feel like my grasp on Unreal’s physics and vehicle system was vastly improved. Sure, I learned a bit more about how it all works, and tweaking with variables has visible effects that I can tune, but I don’t feel like I obtained a grander understanding of how in the hell it all works, and that’s what really bothers me right now.

I like to understand the systems I’m working with. If, for example, my friend Tyler were to have created a physics engine, and I was confused on how it worked, my first choice would be to look in the documentation. If I can’t find what I need from that, or I have a lack of comprehension on a topic, I would absolutely go right to Tyler and ask how it works.

But I can’t do that with what I need. Despite Unreal and Unity both having forums, I’m on a time clock. If I have a problem, I’m either solving it, devising a pseudo-solution that solves it for the situation I need, or moving past it and calling it a learning experience. There are numerous questions on there that go unanswered for long stretches of time, and maybe my thinking is flawed here, but taking the time to write out and explain my problem in full could very well be a larger waste of my time than solving it another way or just moving on.

We were supposed to challenge this week (tomorrow), which means pitching our three game ideas and prototypes so that we can move on to the next stage. While I don’t mind another week of working on all 3 games, I feel as if I’ve let my group down, as if this failure is my fault, despite Matt (the designer) and Tucker (one of the artists) putting a large amount of their own time into the snail prototype.

I’m not entirely confident moving forward with the snail game, but I want to be. In our vision of the game’s completion, it can be loads of fun, and if we can get past the initial hurdle that is physics movement, then we’ll only have all of networking and multiplayer to hate ourselves with. But actually, this game could be really exciting to play and develop. I’m reserving final judgments for next Sunday, after we’ve given the games another week of tweaks.

More to come then.

Solving Minesweeper: The Good Part

Intro

So after quite some time, I have crafted a minesweeper solving AI, and it was… an exercise of imagination. Of sorts.

First thing to say: while I lightly peeked at one or two solvers that were pointed out to me by classmates, I disregarded anything useful in them – not because they were bad solvers, but because I just preferred to figure it out through my own methods in this particular situation.

That is to say, everything that I completed in regards to this solving algorithm was of my own mind, which was my intent in my last post: Create an AI that acts as similar to a human as possible (with the new clause of performing well – this becomes important when guessing becomes a factor).

And as you’ll see, I think I came up with a pretty impactful algorithm that increases the solver’s winrate by a notable amount.

Results

First and foremost, to anyone coming to this post interested in how well the algorithm has done, I have run a multitude of what I am calling the “300 Test,” that is, 100 attempts at an official size large minesweeper board (30×16, 99 mines), 100 attempts at a medium board (16×16, 40 mines), and 100 attempts at a small board (9×9, 10 mines).

My results for the 300 Tests (read as hard board winrate, medium board winrate, small board winrate, and then time) are as follows:

Test 1: 4/100 50/100 73/100 (16.9m)
Test 2: 5/100 47/100 75/100 (No Time) – Forgot to record time
Test 3: 2/100 49/100 80/100 (16.2m)
Test 4: 1/100 33/100 52/100 (10.5m)
Test 5: 2/100 52/100 75/100 (16.2m)
Test 6: 4/100 41/100 55/100 (13.5m)
Test 7: 5/100 29/100 59/100 (13.9m)
Test 8: 1/100 34/100 63/100 (13.1m)
Test 9: 5/100 49/100 68/100 (24.3m)
Test 10: 4/100 52/100 74/100 (23.0m)
Test 11: 4/100 52/100 74/100 (16.2m)
Test 12: 9/100 48/100 70/100 (17.3m)

It is of note that these scores are not all from the same version of the algorithm. In fact, almost every one was tested on a different iteration, which accounts for the disparity in times and results. Around test 4 I implemented a bad guessing algorithm, which resulted in a far lower time and score as it guessed bombs very frequently. At test 9, I added a condition to remove squares from the revealed list, which seemed to considerably slow down the algorithm (despite its purpose being to speed it up considerably). At test 11, that same condition was still in the algorithm (and I removed another bit), so I’m thinking it might not be the cause.

The algorithm, as of this post, is unfinished in optimization. I am unsure if I ever will finish it, given all of the other work I have and will become busy with, and I will probably “finish” this and move on, unfortunately.

The Code

Beginning with the function list:

//Add any adjacent hidden cells around the index that are not in the permanent flagList to the temporary flagList
void addAdjacentHiddenCellsToTempFlagList(GameView theView, UINT index);

//Add all hidden cells around the index to the flagList (no duplicates)
void addAllHiddenCellsToFlagList(GameView theView, UINT index);

//Add all revealed cells to the revealed list (no duplicates)
void addAllRevealedCellsToRevealedList(GameView theView);

//Returns true if at least two adjacent cells are within the temp flag list (wow totally not the function name)
bool atLeastTwoAdjacentCellIsWithinTempFlagList(GameView theView, UINT index);

//Get the index of a cell adjacent to the index that is guaranteed to have no mine (scans both flag lists)
UINT getCleanCell(GameView theView, UINT index);

//The number of cells adjacent to the index that are flagged
UINT getNumFlaggedCells(GameView theView, UINT index);

//The number of cells adjacent to the index that are hidden (not revealed). If bool is true, then ignore any cells on either flag list.
UINT getNumHiddenCells(GameView theView, UINT index, bool checkFlagLists);

//Returns true if the selected index is not on the permanent flag list
bool isNotOnFlagList(GameView theView, UINT index);

//Remove any revealed cells from the revealed list if all adjacent cells are revealed or flagged
void removeAllFinishedRevealedCells(GameView theView);

And important variables:

std::vector<UINT> flagList, revealedList, tempFlagList;

I opted to not use a class, as I felt global variables would be fine for only needing 3 vectors.

The Algorithm

Now for the spicy part. Let’s go over in a step by step process how the entire algorithm works, then let’s take a look at actual code.

Step by Step

  1. Clear the temporary flag list. Important for clicking clean cells.
  2. Add any and all revealed cells to the revealed list, do not allow for duplicates.
  3. Remove any “finished” revealed cells (see comment above removeAllFinishedRevealedCells function). (This step is purely for optimization, I am unsure if it accomplishes that task. Fiddle as you wish).
  4. If the upper left cell is hidden, reveal it. This is the first move every game. (Personal choice of where to start).
  5. Main flag and reveal loop:
    1. Scan through every cell in the revealed list
    2. If the cell’s number (I will refer to this as cell(n) or iA(n) in the future, iA being indexA, iB is indexB, etc) is greater than 0 and equal to the amount of hidden adjacent cells, add every hidden cell around it to the flag list.
    3. If the cell(n) is greater than 0 and less than the amount of hidden adjacent cells, but the number of adjacent flagged cells is equal to the cell(n), reveal any hidden non-flagged cells around it.
  6. Now the real fun begins, the bread and butter of the algorithm as a whole. I crafted an adjacency algorithm that uses a temporary flagging system to identify safe squares that can be clicked. If you look at two pictures in my last minesweeper post, you can see the idea in practice.
    1. Same as last time, scan through every cell (iA) in the revealed list.
    2. Clear the temporary flag list, removing leftovers from the last iteration.
    3. If iA(n) + 1 is equal to the number of adjacent hidden cells, add all adjacent hidden cells to the temporary flag list. (You can only determine safe squares if you only have 1 square that is in contention).
    4. Scan through every cell adjacent to iA. These scanned cells will be called iB.
    5. Five checks here, you can only progress if:
      1. iB is revealed.
      2. iB(n) does not equal 0.
      3. iB has at least TWO adjacent cells in the temporary flag list. (This step is very important and the area that took me forever to nail down. There are a number of fringe cases that occur if iB only has one adjacent cell in the temp flag list, and it fails if you require iB to touch every cell in the temp flag list).
      4. One of the two is true
        1. Both of the following are true
          1. iB(n) – iB(number of adjacent flagged cells) (ONLY on the permanent flag list, not temporary) equals the number of hidden cells minus the number of permanent adjacent flagged cells AND minus the number of temporarily flagged cells.
          2. The number of hidden cells minus the number of permanent adjacent flagged cells AND minus the number of temporarily flagged cells is equal to 1. (If it is 2, the choices of a clean cell are ambiguous, and you can easily hit a bomb, but with only 1 cell, you can be certain it is a safe square).
        2. Both of the following are true
          1. iB(n) is less than the number of hidden cells minus the number of permanent adjacent flagged cells AND minus the number of temporarily flagged cells.
          2. iB(n) is equal to 1.
    6. Clear the only safe square in the area.
  7. In the event that the main loop fails to get any hits and the adjacency algorithm falls flat, you are left to random guessing. I did attempt to do an estimated guessing algorithm, but I clearly didn’t put enough time into it (most of my time was spent on the previous algorithm), because it hit more bombs than a standard random cell choice.
    1. While (true)
    2. Get a random index from all available cells.
    3. If it is hidden and not on the permanent flag list, reveal it.

Code

It is worth noting that I used my professor’s framework and in no way wrote the full minesweeper game (though I did do that in flash back in my sophomore year, which was actually very fun), so there will be references to classes and events I will cover real quickly.

GameView is a wrapper class around the GameState, giving access to the information of the board without allowing the user to cheat by just asking for the mines’ locations. Every time theView is referenced, it is an instance of GameView.

CellClickTransactions are the events sent back to the main program, telling it to click and reveal a given index.

All other references should be to the functions, whose code I won’t post so this post doesn’t stretch for miles, but the general descriptions of how they work are in the comments above each one’s name.

tempFlagList.clear();

addAllRevealedCellsToRevealedList(theView);
removeAllFinishedRevealedCells(theView);

if (!theView.getCell(0).isRevealed())
{
  pHandler->postTransaction(new CellClickTransaction(0));
  return;
}

//Scan through the revealed list to clear and flag
for (UINT index : revealedList)
{
  UINT hiddenCellNum = getNumHiddenCells(theView, index, false);

  if (theView.getNumAdjacentMines(index) != 0)
  {
    //If the cell's number equals the amount of hidden cells around it, flag em all
    if (theView.getNumAdjacentMines(index) == hiddenCellNum)
    {
      addAllHiddenCellsToFlagList(theView, index);
    }

    //If the cell's number is less than the amount of hidden cells around it, but the count is equal to the flagged amount, start revealing
    if (theView.getNumAdjacentMines(index) < hiddenCellNum && getNumFlaggedCells(theView, index) == theView.getNumAdjacentMines(index))
    {
      UINT cleanIndex = getCleanCell(theView, index);
      pHandler->postTransaction(new CellClickTransaction(cleanIndex));
      return;
    }
  }
}

//Funtime adjacency algorithm
for (UINT indexA : revealedList)
{
  tempFlagList.clear();

  if (theView.getNumAdjacentMines(indexA) + 1 == getNumHiddenCells(theView, indexA, false))
  {
    addAdjacentHiddenCellsToTempFlagList(theView, indexA);
    for (UINT indexB : theView.getAdjacentCellIndices(indexA))
    {
      UINT numHiddenCellsTrue = getNumHiddenCells(theView, indexB, true);
      if (theView.getCell(indexB).isRevealed() && theView.getNumAdjacentMines(indexB) != 0 && atLeastTwoAdjacentCellIsWithinTempFlagList(theView, indexB) &&
      ((theView.getNumAdjacentMines(indexB) - getNumFlaggedCells(theView, indexB) == numHiddenCellsTrue && numHiddenCellsTrue == 1) ||
       (theView.getNumAdjacentMines(indexB) < numHiddenCellsTrue && theView.getNumAdjacentMines(indexB) == 1)))
      {
        UINT cleanIndex = getCleanCell(theView, indexB);
        pHandler->postTransaction(new CellClickTransaction(cleanIndex));
        return;
      }
    }
  }
}

while (true)
{
  UINT randomIndex = rand() % theView.getNumCells();
  if (!theView.getCell(randomIndex).isRevealed() && isNotOnFlagList(theView, randomIndex))
  {
    pHandler->postTransaction(new CellClickTransaction(randomIndex));
    return;
  }
}

And there you have it, the full minesweeper solving AI.

Final Thoughts

Overall I think it works fairly well; given that the guessing algorithm isn’t refined, a ~4-5% winrate on hard is pretty alright.

I also have to give an estimate of how well I think my AI will do compared to everyone else, and honestly I’m feeling good. I’m sure others will have thought the same thing I did, but apparently there’s a single set algorithm from a Harvard paper floating around in use in our class, and from what I understood, it certainly doesn’t do adjacency. Can’t predict much more than that, but I’m confident.

As usual, more to come next week after it goes into contention with my classmates’ AIs.

The Painting Prototype

Let me tell you, I am actually excited for this game, or at least working with Unreal’s blueprints.

At first, I was worried when I looked at Unreal’s C++ side, and while I could take the time to figure it out, the blueprints system is packaged so nicely. Plus, even though you can use it without coding knowledge, having that knowledge makes the entire system make much more sense than trying to learn it from scratch.

But enough of that; more important is the prototype. First off, Painting Game (as we have so affectionately labeled it for now) is a first person puzzle game wherein you, as a character, are trapped in a Gothic/Victorian mansion, and aside from the mansion’s strange use of locked doors and gates (which prevent easy access throughout the mansion), there is an abundance of paintings scattered around.

You also have the convenient ability to dive into paintings and, from a 2.5-dimensional sidescrolling perspective, navigate through them to travel to surrounding paintings, giving you a way to get around those pesky gates and locked doors. Eventually, you’ll even be able to travel through walls using this painting network.

So I had to develop the ability to a) move paintings around, b) dive into paintings (which was the bulk of the work), and c) travel between paintings.

Moving paintings was the beginning, and after I had finished, I had this to give to my team:

[GIF: x62mmlt6co]

Which was pretty basic, but only the start (to be fair everything in a prototype is basic, but you gotta have fun with it).

I messed with line casting and impact normals to get the paintings in the proper positions they were supposed to be in.

The big part came when I wanted to dive into paintings. I realized I had to create a mini-me and give it a different control scheme, which wasn’t actually that bad. The part that caused me trouble for a while (incoming trigonometry) was getting the mini-me to move left/right in the proper direction based on the rotation and normal of the painting. I won’t go super deep into the math, but there were some conditionals and a lot of taking the rotation around Z (the up/down axis) and converting it into useful sin/cos unit-circle values, so that “moving right” was always the player’s right, despite right not being the same as it was previously.
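
The gist of that conversion looks roughly like this in Unreal C++ (a sketch with made-up names like Painting, MiniMe, and HorizontalAxisValue; the real version was blueprint nodes plus a few extra conditionals):

//Turn the painting's yaw into a world-space "right" direction on the unit circle
float YawRadians = FMath::DegreesToRadians(Painting->GetActorRotation().Yaw);
FVector PaintingRight(-FMath::Sin(YawRadians), FMath::Cos(YawRadians), 0.f);

//Equivalent shortcut: FVector PaintingRight = Painting->GetActorRightVector();

//Horizontal input now always moves the mini-me along the painting's right axis
MiniMe->AddMovementInput(PaintingRight, HorizontalAxisValue);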

Then I had to let players travel between paintings, and for that I decided on a simple node system (which turned out to work nicely with an advancement we chose later on), wherein each painting knows what paintings are around it, and hitting a given panel will take you to the painting on that side (e.g. hitting the right panel of a painting will take you to the painting to its right, fairly straightforward).

This works with a hook system we’ll implement later, where painting hooks will always be static. While paintings may move, the hooks will not, so the hooks will always know who their neighbors are, and we’ll be able to access two paintings’ information by asking their hooks to contact each other.
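
A minimal sketch of that hook/node relationship (illustrative C++, not our actual blueprint graph):

#include <vector>

//Hooks are static and know their neighbors; a painting only knows which hook it hangs on
struct Hook
{
  int id = -1;
  int leftNeighbor = -1, rightNeighbor = -1, upNeighbor = -1, downNeighbor = -1;
  int paintingId = -1; //-1 means the hook is currently empty
};

//Given the hook the player's current painting hangs on, find the painting to travel to on the right
int resolveRightTravel(const std::vector<Hook>& hooks, int currentHookId)
{
  int neighborId = hooks[currentHookId].rightNeighbor;
  if (neighborId < 0)
    return -1;                       //no hook to the right
  return hooks[neighborId].paintingId; //-1 if that hook has no painting to dive into
}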

To wrap up, here’s a slightly more comprehensive version of what I was able to get done by the end of the week (and it’s not too bad if I say so myself):

[GIF: babj1quacg]

More to come next week!

Solving Minesweeper: Pt 1

Background for anyone reading this fresh:

Artificial Opponents is effectively Advanced Game AI, where we don’t create actual AI; we just create an algorithm that solves whatever problem we’re looking at but never actually learns from its experiences.

The class has three games that we’re to create optimal solving algorithms for, the first being minesweeper. This blog post is to try and detail my thoughts going into the project, and how I aim to solve it.

The human brain is a powerful thing, able to perform a lot of quick calculations without our conscious knowing. It’s how professional players in esports are capable of pulling off split second moves and strategies before they themselves can fully comprehend what they’ve done.

Using that idea as the basis for my algorithms – that human brains subconsciously attempt to make the best decisions, especially after multiple attempts of trial and error where we develop a problem-solving algorithm of our own (also called ‘using your gut’) – I first looked at myself, having played a fair bit of minesweeper: how exactly do I solve it, and what particular moves do I take to do so successfully?

Also of note: at times there are situations of random chance, but before the time comes when you have to randomly click on a square to reveal it, you can use a series of calculations to narrow things down as far as possible without tripping a mine.

The standard rotation of moves that can be made until real calculations need to be done is, as I’ve determined, as follows:

  1. Click a square to begin, center or corner, author’s pick (I have chosen the upper left corner at (0, 0) for now, but it may be more optimal to choose a center square).
  2. Add any revealed cells to a revealed cells list (for optimization; scanning just what you can see will be far quicker in the long run than scanning cells you have no information about).
  3. Add any cells that are obviously mines to the flag list. Ex. The cell you are currently viewing is marked as 1, and of the 8 surrounding cells, 7 are revealed. The 8th must be the mine, and can safely be flagged.
  4. Using the flag list, reveal all squares that are 100% assuredly not mines, adding those revealed cells to the revealed cell list.
  5. Return to step 2 and loop this process until there are no more guaranteed moves.

Typically the size of the map has some impact on the success of that loop alone. The smaller a map, the less conflict there will be, and you could use this process to determine with 100% accuracy the solution. As the size of the map increases at a rate that is not parallel to the increase in mine count (higher difficulties have a higher ratio of bombs to area), this loop will eventually hang, and you must begin calculations to determine what squares are safe.

Allow me to show you an example:

[Image: MinesweeperSit1-1]

In the above example, we’re looking at the red squares in particular, and what we can determine from them, despite no obvious answers.

So let’s take a moment and get some answers. The 2 square within the red guarantees that we have 2 bombs within the orange area, each square having a 33% chance of holding a bomb. So among the two leftmost squares, there is a 100% chance of a bomb, as one of those squares must contain one.

In this particular case, we don’t care which one it is; all we need to know is that the 1 square within the red knows that 1 bomb exists within the 3 squares below it, and because we know that a bomb must exist within the 2 rightmost of those squares (orange’s left 2 squares rather than the blue one), blue must be a safe square.

And voila:

[Image: MinesweeperSit1-2]

That was but a small example, but ideally, with proper calculations on top of the 100% deterministic loop above, much more can be solved than with the loop alone.

Last thing to cover: there is absolutely the potential of running into a situation where you have to guess. In those situations, you can guess randomly, or guess intelligently using much of the same logic as just demonstrated. Narrowing down the odds of a square being safe will increase the rate at which you succeed.
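
As a rough illustration of “guessing intelligently,” here’s one naive way to estimate a hidden cell’s risk (purely a sketch, nothing implemented yet; flaggedCount and hiddenUnflaggedCount are hypothetical helpers I’d have to write on top of the GameView framework):

//Naive risk estimate: each revealed neighbor thinks the hidden cell is a mine with probability
//(mines it still needs) / (its hidden, unflagged neighbors); take the worst case across neighbors
float estimateMineProbability(GameView theView, UINT hiddenIndex)
{
  float worst = 0.0f;
  for (UINT neighbor : theView.getAdjacentCellIndices(hiddenIndex))
  {
    if (!theView.getCell(neighbor).isRevealed())
      continue;

    UINT needed = theView.getNumAdjacentMines(neighbor) - flaggedCount(theView, neighbor);
    UINT hidden = hiddenUnflaggedCount(theView, neighbor);
    if (hidden > 0)
    {
      float p = static_cast<float>(needed) / hidden;
      if (p > worst)
        worst = p;
    }
  }
  return worst; //when forced to guess, pick the hidden cell with the lowest value
}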

More to come; hopefully I can manage to make it work.