Ceasar's Mind


Posts Tagged ‘artificial intelligence’

Command 0.02


The initial AI failed because nodes never considered the strength of their moves beyond attacking the nodes they were immediately connected to. This version attempts to fix that by running a depth-first search from each node; a sketch of the search follows the description below.

The AI still has a way to go though. It makes no effort to keep track of how many units it has sent to a node, often sending more than it should. On the same note, it makes no effort to keep track of how many units the player has sent to a node, or for that matter, could send. Hopefully I can fix that soon.

Basically, it works like this: first it selects a blue node to analyze, then it finds all of its neighbors (called children). If any of the children are not blue, or don’t connect to anything except the parent, then it scores that child by figuring out how much time it would take to capture. If a child is blue and has children of its own, however, then it analyzes that node in turn, all the while keeping track of the ancestors so that the current generation doesn’t mistake an ancestor for one of its children, which prevents infinite loops in the search.
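In rough Java, the search looks something like this. The Node class here is a made-up stand-in, not the game’s real one, and the time-to-capture estimate is only a placeholder:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;

    class Node {
        int owner;                                // 0 = blue (the AI)
        int strength;                             // units stationed here
        List<Node> neighbors = new ArrayList<>(); // connected nodes
    }

    public class NodeScorer {

        // Scores a node by searching outward from it. Non-blue children
        // (and dead ends) get scored by a rough time-to-capture; blue
        // children with children of their own are searched recursively.
        static double score(Node node, Set<Node> ancestors) {
            ancestors.add(node); // remember the path we came down
            double best = 0;
            for (Node child : node.neighbors) {
                if (ancestors.contains(child)) {
                    continue; // never mistake an ancestor for a child
                }
                if (child.owner != 0 || child.neighbors.size() <= 1) {
                    // Stronger nodes take longer to capture, so they
                    // score lower.
                    best = Math.max(best, 1.0 / (1 + child.strength));
                } else {
                    best = Math.max(best, score(child, ancestors));
                }
            }
            ancestors.remove(node); // other branches may pass through here
            return best;
        }
    }

Calling score(node, new HashSet<>()) on each blue node then ranks the candidate targets.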

However, while testing the AI I ran into a few problems. Perhaps the worst was that I had given my variables strange names, which made it very difficult to track what was going on. When my code performed badly, it was extremely hard to debug because I not only had to figure out what each variable was doing, but also which variable was keeping track of what. So the point here: label things clearly. Shorthand is usually fine in keeping with your style, but for complicated code, save yourself the trouble and use descriptive names.

Secondly, during the search the AI would sometimes overwrite the values of nodes, so I had to introduce separate variables just to keep track of the scores of the nodes at each level of the search.

Written by Ceasar Bautista

2010/07/16 at 19:37

Progress


So it’s been kind of quiet here lately, and that’s because I’ve been studying up on Java so that I could build a proper game in order to develop a proper AI. The game (currently titled “Command”) is based on Little Stars for Little Wars, almost exactly at the moment, although I have a few minor plans for the future. Anyhow, here is a quick demo:

As you can see, the AI is still in need of work (I just put that together hours ago). But I’m excited about the prospects!

Currently, the AI works by making a list of all the planets it is connected to and then scoring them. The score is calculated by dividing the gains by the costs. The gain in most cases is the additional production, except of course if the planet is already owned by you. The costs include the distance, the planet’s current strength, and its projected strength when the army arrives. As you can tell from the video, the search depth is still rather shallow, which is why the AI behaves so oddly.
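In rough Java, the scoring looks something like this. The Planet fields and the parameters are stand-ins for illustration, not the game’s real ones:

    class Planet {
        int owner;          // 0 = the AI
        double production;  // units produced per tick
        double strength;    // units currently defending
        double growthRate;  // how quickly the defenders accumulate
    }

    public class PlanetScorer {

        // Gains divided by costs, from the AI's point of view.
        static double score(Planet target, double distance, double fleetSpeed) {
            // Gain: the production we would add by capturing the planet
            // (taken as zero here if we already own it, as a simplification).
            double gain = (target.owner == 0) ? 0 : target.production;

            // Costs: the distance, the current strength, and the projected
            // strength of the planet when the army arrives.
            double travelTime = distance / fleetSpeed;
            double projected = target.strength + target.growthRate * travelTime;

            return gain / (distance + target.strength + projected);
        }
    }

The AI then sends its army at whichever connected planet scores highest.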

Also, I should note that this AI will be used purely to understand macro-level decision making. I suppose I will have to play around with it in the future to make it simulate real conditions a bit better, but at the moment it ought to give a fairly decent idea of how one should manage armies strategically.

Written by Ceasar Bautista

2010/07/13 at 23:41

An AI for the RTS


The last two articles, on chess and artificial intelligence, were written for two reasons: first, to show that the successful development of an AI reveals how a game is to be played optimally, or almost optimally; and second, to show that the development of an AI often reveals how our minds work.

Combine that with the fact that, at least as far as I’ve noticed, I usually have no idea why I make the decisions I do while playing strategy games, and I’m led to want to try to make an AI for the RTS. Typically, when making decisions in RTS games I go by what feels right rather than analyzing my options. And while I suppose I could give reasons for my actions, I don’t think I could give reasons for my reasons.

This problem was especially apparent while I was trying to design an AI for Naval Commander, a simplified RTS game I developed: I noticed that I couldn’t understand my own decisions. And you’ll notice that my discussion of the RTS focuses on very narrow patterns in a very complicated system.

As a designer, this is a bit of a problem for me. It all but ensures that my games will come out badly if I try to be innovative with the mechanics, and it does a good job of throwing off balance too.

The solution, therefore, is to design an AI. The better my understanding of the mind, mathematics, and game theory, the better I come to understand how the mechanics interact. From there, I can look at what makes the AI tick and reverse engineer its workings to figure out how people think, which can supplement both play and design. However, I’ve thought about RTS AI before, and it’s not at all as simple as the AI for chess. (To be continued…)

Written by Ceasar Bautista

2010/07/05 at 00:55

Alpha Beta Pruning


The most significant improvement that can be made to the AI is the implementation of alpha-beta pruning. Alpha-beta pruning abandons bad lines the same way a human player does, the moment they are refuted, and it allows both us and computers to save drastic amounts of time when figuring out the best move.

How it Works

Suppose I analyze a move and find that my opponent’s best response leads to a position that is equal. Then I start analyzing another move, and at some point I see that my opponent has a response that leaves me worse than equal. I can immediately stop looking at that second move, since I know I could always just play my original move and end up with a better game. Analyzing the rest of my opponent’s alternatives would be a waste, because I knew I wouldn’t be playing the move as soon as I found that first response that was bad for me. Alpha-beta works exactly the same way.

Let me put that in concrete terms. If, for example, I analyze a Knight move and see that at best my opponent can do no better than equalize, and then I start analyzing an alternative, say, my Queen capturing a guarded enemy Pawn, I don’t have to analyze all of my opponent’s responses to the capture once I see that he can simply take my Queen. (Remember, I could always just play my Knight move.) At three ply it gets a little trickier. For example, if I make a pawn move, my opponent makes a pawn move, and I capture a Knight, my opponent probably won’t make that pawn move if a different move would save the Knight (or maybe even capture one of mine). Of course, I can always reject my initial pawn move too, so the cutoffs run in both directions.

Back to the pruning idea though: alpha and beta are both numbers. Alpha is the best score I know I can guarantee myself so far (let’s say I am playing black); it starts out as the value of my best move found so far. While I examine an alternative move of mine, I go through white’s responses and score them. The moment I find a white response that leaves me worse off than alpha, I stop looking at that alternative: white would just play that response, and I can already do better with my original move. If, on the other hand, every one of white’s responses leaves me at least as well off as alpha, the alternative becomes my new best move, and the worst of those responses becomes the new alpha. In this way, I find my worst-case best move.

Beta, on the other hand, keeps the same tab for white: I stop analyzing one of white’s moves as soon as I find a response of mine that leaves white worse off than beta.

The code I use is written in Java; a stripped-down sketch of the idea follows. (My actual code has some ugly stuff to make it work in the context I’m using it.) There are also excellent explanations of the algorithm online, some with interactive applications that step through exactly how the pruning works, in case I haven’t explained it fully.
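Here Board and Move are made-up stand-ins (generateMoves, make, undo, evaluate, and a guess score used only for move ordering), and the search is written in negamax form, a compact but equivalent way of expressing the bookkeeping above:

    import java.util.Comparator;
    import java.util.List;

    interface Move {
        int guess(); // cheap heuristic score, used only to order the search
    }

    interface Board {
        List<Move> generateMoves();
        void make(Move m);
        void undo(Move m);
        int evaluate(); // static score: piece count, mobility, and so on
    }

    public class AlphaBeta {

        // Score of the position from the side to move's point of view.
        // alpha is the best score we are already guaranteed; beta is the
        // best score the opponent is already guaranteed elsewhere.
        static int search(Board board, int depth, int alpha, int beta) {
            if (depth == 0) {
                return board.evaluate();
            }
            List<Move> moves = board.generateMoves();
            // Trying the likely-strongest moves first makes cutoffs come
            // sooner (the sorting discussed below).
            moves.sort(Comparator.comparingInt(Move::guess).reversed());
            for (Move move : moves) {
                board.make(move);
                // The opponent's alpha is our -beta, and vice versa.
                int score = -search(board, depth - 1, -beta, -alpha);
                board.undo(move);
                if (score >= beta) {
                    return beta; // opponent already has a better line: prune
                }
                if (score > alpha) {
                    alpha = score; // new best guaranteed score for us
                }
            }
            return alpha;
        }
    }

The root call is search(board, depth, Integer.MIN_VALUE + 1, Integer.MAX_VALUE); the +1 keeps the negation from overflowing.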

Alpha-beta pruning reduces the amount of searching drastically. However, to get the full effect, one last improvement has to be made. Since the order in which moves are searched affects how soon a search gets cut off, it’s best to sort the moves at each level from strongest to weakest. This does mean scoring every position along the way (minus those at the last ply to be searched), but I found that it is still far more efficient. The one level not worth sorting is the final level, at depth 0, since avoiding the need to score everything there is exactly where the alpha-beta pruning pays off.

Anyhow, I hope this all makes sense.

Written by Ceasar Bautista

2010/06/28 at 21:31

Chess & Artificial Intelligence


So I’ve been developing artificial intelligence for a computer chess game recently, and I thought the subject was interesting as it relates to design and decision making. For now I’ll just explain the basics and then go on with my thoughts in future posts.

The Basics

The actual AI works rather simply. For a depth of one ply, or a turn for a single player, the AI moves a piece, scores the position based on mobility and piece count, moves the piece back, and repeats with all the possible moves in a given position. It then sorts all of the scores, takes the best score, and makes what it has determined to be the best move.

A deeper search is slightly more intense. For two ply, the AI makes a move, makes a reply, scores the position, undoes the reply, and then searches the rest of the replies until it finds the best one. From there, the AI assigns that best score to the initial move. (It’s worth noting that since it would be the opponent’s turn at that point, the move that is best for the opponent is the one that is worst for the AI.) The AI repeats the whole process with each of its alternatives, that is, finding the opponent’s best response to each, and then determines its own best move. The basic idea is the same for any number of ply.
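To make that concrete, here is a minimal sketch of the fixed-depth search just described. The Board and Move types are made-up stand-ins, not my real classes:

    import java.util.List;

    interface Move {}

    interface Board {
        List<Move> legalMoves();
        void make(Move m);
        void undo(Move m);
        int score(); // mobility plus piece count, from the AI's point of view
    }

    public class Minimax {

        // Best score reachable in `depth` ply, assuming both sides play
        // the move that is best for them.
        static int best(Board board, int depth, boolean aiToMove) {
            if (depth == 0) {
                return board.score(); // static evaluation of the position
            }
            int bestScore = aiToMove ? Integer.MIN_VALUE : Integer.MAX_VALUE;
            for (Move move : board.legalMoves()) {
                board.make(move);
                int score = best(board, depth - 1, !aiToMove);
                board.undo(move);
                // The opponent's best move is the worst one for the AI.
                bestScore = aiToMove ? Math.max(bestScore, score)
                                     : Math.min(bestScore, score);
            }
            return bestScore;
        }
    }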

Now theoretically, you could solve chess by just setting the depth to infinity and playing all games through to their end. However, in any given position there are on average about 40 moves available. Every extension of the depth therefore increases the number of positions searched 40-fold, for a total of about 40^n. At five ply that is already 40^5, roughly a hundred million positions; in other words, the number of positions to be searched gets very big, very fast. (Too big for computers to compute anywhere near reasonably, a statement I’ve come to understand through experience.) In fact, the time spent searching is often so unreasonable that the program must either be executed on impressive hardware or be set to play at a very dumb level; otherwise, it’ll end up taking hours or even days per move.

Obviously though, that’s not exactly the case: quite a few chess programs exist that play at a reasonable pace and with superb strength. The solution, at least in terms of making something that can beat the grandmasters, is to not look at all the moves, or rather, to look only at the moves worth looking at. (The process is known as “pruning.”) Basically, the idea is that certain moves would be a waste of time to search, so we can ignore them. And fewer positions to score means the AI works faster, allowing it to search deeper. More on that in the next post though.

Written by Ceasar Bautista

2010/06/28 at 19:03