Ceasar's Mind


Posts Tagged ‘programming’

Potential Fields


So while surfing for a solution to my own AI problems, I found an article on potential fields. I had used these before in trying to develop an AI for Naval Commander, and achieved limited success, but I couldn’t figure it out well enough and so I ditched the whole thing.

Anyway, a potential field is kind of like a magnetic field. Basically, the AI places charges around the map: positive charges near high-value targets, and negative ones around dangerous areas and impassable terrain. Allied units use the field by testing a few points around them, figuring out where the most potential is, and moving toward that location. By strategically placing the charges, the AI can guide armies in a very dynamic and simple way.

So here our potential field is represented by the lightness of the square, with light squares being more attractive. The rocks, being impassable, and the white enemies, being dangerous, emit negative potential, coloring the nearby squares dark, while the goal area emits positive potential, lighting the map up. Together, these fields provide a way for the green unit to get to its destination without any kind of pathfinding algorithm.
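To make that concrete, here is a rough Java sketch of the sampling step. It's purely illustrative; the grid, class, and method names are my own invention, not code from the article or from any of my games:

```java
/** Illustrative sketch only: pick the most attractive square among a unit's neighbors,
 *  assuming all the charges have already been summed into a grid of potentials. */
public class FieldFollower {
    public static int[] bestNeighbor(double[][] potential, int unitX, int unitY) {
        int bestX = unitX, bestY = unitY;
        double best = potential[unitY][unitX];
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int x = unitX + dx, y = unitY + dy;
                if (y < 0 || y >= potential.length || x < 0 || x >= potential[y].length) continue;
                if (potential[y][x] > best) {   // lighter square = higher potential = more attractive
                    best = potential[y][x];
                    bestX = x;
                    bestY = y;
                }
            }
        }
        return new int[] { bestX, bestY };      // the square to step onto this tick
    }
}
```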

The article goes on to explain a few useful tricks, such as placing positive charges on top of enemy units to attract allied units to them, and then placing a weaker negative charge on top of them in order to get our units to attack from a certain range. Anyway, I think there is a lot of potential with this idea. I highly recommend you check the article out and you can count on me investigating the idea in the future.
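For what it's worth, here's how I picture that attack-from-range trick working, as a sketch with numbers I made up. I'm reading "weaker" as the negative charge being shorter-ranged and smaller overall, even though its falloff is steeper; with that reading, the summed potential peaks at a standoff distance instead of right on top of the enemy:

```java
/** Sketch of an enemy's combined charge: a broad positive charge plus a short-ranged
 *  negative one, so the summed potential peaks at a standoff distance. Numbers invented. */
public class EnemyCharge {
    static final double ATTRACT_RADIUS = 10.0, ATTRACT_STRENGTH = 5.0;
    static final double REPEL_RADIUS = 3.0, REPEL_STRENGTH = 8.0;   // short-ranged, steeper falloff

    public static double potentialAt(double distance) {
        double p = 0.0;
        if (distance < ATTRACT_RADIUS) p += ATTRACT_STRENGTH * (ATTRACT_RADIUS - distance);
        if (distance < REPEL_RADIUS)   p -= REPEL_STRENGTH * (REPEL_RADIUS - distance);
        return p;   // maximal around distance == REPEL_RADIUS, i.e. just outside weapon range
    }
}
```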

(A thunderstorm won’t let me embed the link. Check it out here for now: http://aigamedev.com/open/tutorials/potential-fields/)

Written by Ceasar Bautista

2010/07/19 at 20:22

Command 0.04


This update is very minor, but it's the start of something bigger, which I'm having trouble with. Basically, I've made it so that the AI tries to send the least number of units necessary to capture planets. Unfortunately, it only works with terminal nodes:

The new AI works by allowing each planet to store the number of armies that it needs to conduct the AI's plan. It sends that value to its parent node, and that goes all the way up, so that all the nodes know how many armies they need to conduct the plan.

The image on the right is a little hard to read, but basically, node A tells B that it needs 2 armies, then B tells E that it needs 10: 2 to capture A and 8 to capture itself. The process repeats upward.
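In code, the bubbling-up looks something like this (a reconstruction with invented names, not the actual Command source):

```java
import java.util.ArrayList;
import java.util.List;

/** Reconstruction of the idea: each node reports upward how many armies it needs in total. */
public class PlanNode {
    int armiesToCaptureSelf;                       // e.g. 8 for B, 2 for A
    List<PlanNode> children = new ArrayList<>();

    /** Armies this node must receive to capture itself and everything below it. */
    public int armiesNeeded() {
        int total = armiesToCaptureSelf;
        for (PlanNode child : children) {
            total += child.armiesNeeded();         // B reports 2 (for A) + 8 (itself) = 10 to E
        }
        return total;
    }
}
```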

This works for terminal nodes, so why doesn't it work for B? The problem, I'm almost certain, is that I'm using a depth-first search, and A is never notified that B is receiving enough forces to capture A. Now that would be very easy to fix, except that if I did that, then E would never send to B, since it would consider A and B both captured.

Anyway, I need some help.

Written by Ceasar Bautista

2010/07/19 at 17:41

Command 0.03


Building off of 0.02, I made a few changes to the algorithm. In the old algorithm, the search would stop whenever it hit an enemy node. However, that would prevent it from seeing anything beyond the front line, which could potentially be a problem. If, for example, our base connects to a 10+1 node and a 25+1 node, both the same distance away, our base should probably attack the 10+1 node. However, if behind the 25+1 node is a 0+100 node, then we may wish to rethink our initial decision.

Additionally, I rewrote parts of the code here and there to make things a little more organized for myself. With that, I've created an AI for Red, to show off in the videos anyhow, but for the player I've implemented a new system for attacking. Where previously a player's click would send all immediately available units to the targeted node, the new system not only does that but continues to stream units afterward. The stream can be disabled by clicking the streaming node twice.
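Roughly, the streaming behavior amounts to something like this. It's a sketch with invented names, and I'm glossing over exactly which node the disabling click lands on:

```java
/** Sketch of click-to-stream: the first click fires everything at the target and starts a
 *  stream; a later click toggles it off. Names here are illustrative only. */
class StreamingNode {
    StreamingNode streamTarget;   // null when not streaming
    int availableUnits;

    void onPlayerClick(StreamingNode target) {
        send(target, availableUnits);                               // old behavior: one-shot attack
        streamTarget = (streamTarget == target) ? null : target;    // a repeat click clears the stream
    }

    void onTick() {
        if (streamTarget != null && availableUnits > 0) {
            send(streamTarget, availableUnits);                     // keep forwarding newly produced units
        }
    }

    void send(StreamingNode target, int count) {
        availableUnits -= count;                                    // dispatch 'count' units toward 'target'
    }
}
```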

But anyway, here is a video of the new AI in action:

Written by Ceasar Bautista

2010/07/17 at 00:40

Command 0.02


The initial AI failed because nodes failed to consider the strength of their moves beyond attacking the nodes that they were immediately connected to. This version attempts to fix that, using a depth first search on each node:

The AI still has a way to go though. It makes no effort to keep track of how many units it has sent to a node, often sending more than it should. On the same note, it makes no effort to keep track of how many units the player has sent to a node, or for that matter, could send. Hopefully I can fix that soon.

Basically, it works like this: First it selects a blue node to analyze, then it finds all of its neighbors (called children). If any of the children are not blue, or don't connect to anything except the parent, then it scores that child by figuring out how much time it would take to capture it. If a child is blue and has children of its own, however, then it analyzes that node, all the while keeping track of the ancestors so that the current generation doesn't mistake an ancestor for one of its children, preventing infinite loops in the search.
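Here is the shape of that search as I'd write it down today. This is my reconstruction with invented types and a guessed scoring direction, not the actual source:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Reconstruction: walk out from a blue node depth-first, carrying the set of ancestors
 *  along so the current node never mistakes one of them for a child. */
class NodeAnalyzer {
    interface Node {
        boolean isBlue();
        List<Node> neighbors();
        double timeToCapture();   // how long this node would take to capture
    }

    double analyze(Node node, Set<Node> ancestors) {
        double best = Double.NEGATIVE_INFINITY;
        ancestors.add(node);                                       // remember the path we came down
        for (Node child : node.neighbors()) {
            if (ancestors.contains(child)) continue;               // skip ancestors: no infinite loops
            boolean terminal = !child.isBlue() || child.neighbors().size() == 1;
            double score = terminal
                    ? 1.0 / child.timeToCapture()                  // faster captures score higher (my guess)
                    : analyze(child, ancestors);                   // blue with children: recurse deeper
            best = Math.max(best, score);
        }
        ancestors.remove(node);
        return best;
    }

    double analyze(Node root) {
        return analyze(root, new HashSet<>());
    }
}
```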

However, while testing the AI I encountered a few problems. Perhaps the worst was that I had given all my variables weird names, which made it very difficult to track what was going on. When my code performed badly, it was extremely hard to debug, because I not only had to figure out what each variable was doing, but also which variable was keeping track of what. So the point here: make sure you label stuff nicely. Typically it's okay to use shorthand in keeping with your style, but for complicated stuff, save yourself the trouble and label things nicely.

Secondly, while going through the search, the AI would sometimes overwrite the values of nodes, so I had to make special variables just to keep track of the scores of the nodes at each level of the search.

Written by Ceasar Bautista

2010/07/16 at 19:37

Progress


So it’s been kind of quiet here lately, and that’s because I’ve been studying up on Java so that I could develop a proper game in order to develop a proper AI. The game here (currently titled “Command”) is based off of Little Stars for Little Wars, almost exactly at the moment, although I have a few minor plans for the future. Anyhow, here is a quick demo:

As you can see, the AI is still in need of work (I just put that together hours ago). But I’m excited for the prospects!

Currently, the AI works by making a list of all the planets it is connected to and then scoring them. The score is calculated by dividing the gains by the costs. The gain in most cases is the additional production, except of course if the planet is already owned by you. The costs include the distance, the current strength, and the projected strength of the planet when the army arrives. The depth, as you can tell, is still rather shallow, which is why the AI functions so oddly in the video.
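As a sketch, the scoring comes down to something like the following. The Planet type, fleet speed, and names are all invented for illustration, and the post above doesn't spell out how the already-owned case is scored, so this only covers the capture case:

```java
/** Sketch of the gains-over-costs score; the Planet type and numbers here are illustrative only. */
class Scorer {
    static final double FLEET_SPEED = 1.0;

    static class Planet {
        double x, y, strength, production;
    }

    double score(Planet source, Planet target) {
        double gain = target.production;                            // extra production we'd pick up
        double travelTime = Math.hypot(target.x - source.x, target.y - source.y) / FLEET_SPEED;
        double projected = target.strength + target.production * travelTime;  // defenders on arrival
        double cost = travelTime + target.strength + projected;     // distance + current + projected strength
        return gain / cost;
    }
}
```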

Also, I should note that this AI will be used purely to understand macro-level decision making. I suppose I will have to play around with it in the future to make it simulate real conditions a bit better, but at the moment it ought to give a fairly decent idea of how one should manage armies strategically.

Written by Ceasar Bautista

2010/07/13 at 23:41

An AI for the RTS


The last two articles on chess and artificial intelligence were written for two reasons. First, to show that the successful development of an AI reveals how a game is to be played optimally, or almost optimally. Second, to show that the development of an AI often reveals how our minds work.

Combine that with the fact that, as far as I've noticed, I have no idea why I make the decisions I do while playing strategy games, and you can see why I want to try to make an AI for the RTS. Typically, when making decisions in RTS games, I go by what feels right rather than analyzing my options. And while I suppose I could give reasons for my actions, I don't think I could give reasons for my reasons.

This problem was especially apparent while trying to design an AI for Naval Commander, a simplified RTS game I developed: I noticed that I couldn't understand my own decisions. And you'll notice that my discussion of the RTS focuses on very narrow patterns in a very complicated system.

As a designer, this is a bit of a problem for me. It basically ensures that my games will come out badly if I try to be innovative with the mechanics, and it does a good job of throwing off the balance too.

The solution, therefore, is to design an AI. The better my understanding of the mind, mathematics, and game theory, the better I come to understand the interaction of the mechanics. From there, I can look at what makes the AI tick and reverse-engineer its workings to figure out how people think, and use that to supplement play and design. However, I've thought about RTS AI before, and it's not at all as simple as the AI for chess. (To be continued…)

Written by Ceasar Bautista

2010/07/05 at 00:55

Alpha Beta Pruning


The most significant improvement that can be made to the AI is the implementation of alpha-beta pruning. Alpha-beta pruning basically discards bad lines of play the same way the human mind does, and allows both us and computers to save drastic amounts of time when figuring out the best move.

How it Works

If I analyze a move and my opponent's best response leads to a position that is equal, and then I start analyzing another move and see at some point that my opponent has a response that is worse than equal for me, I immediately stop looking at that second move, since I know I could always just play my original move and end up with a better game. Analyzing the rest of my opponent's alternatives would be a waste, since I knew I wouldn't be playing that move the moment I found the first response that was bad for me. Alpha-beta works exactly the same way.

Let me put that in concrete terms: if, for example, I analyze a Knight move and see that at best my opponent can't do better than equalize, and then I start analyzing an alternative, say, my Queen capturing a guarded enemy Pawn, I don't have to analyze all of my opponent's responses to that capture once I see that he can simply take my Queen. (Remember, I could always just play my Knight move.) For three ply, though, it gets a little trickier. For example, if I make a pawn move, my opponent makes a pawn move, and I capture a Knight, my opponent probably won't make that pawn move if a different move would save the Knight (or maybe even capture one of mine). Of course, I can always reject my initial pawn move too, so it can get a little tricky.

Back to the pruning idea, though: alpha and beta both represent numbers. Alpha I assign to my best move (and let's say I am playing black). So every time it's white's turn, I go through all the responses, score them, and assign the best score to alpha. Then I start checking white's next alternative and all of my responses. If I find one that scores higher than alpha, I stop looking at that move; white won't play it. If, however, I find no response better than alpha, if they're all worse for example, then I find the best of them and set that as the new alpha. In this way, I find my worst best move.

Beta, on the other hand, keeps the tab for white. I do the same thing, except I look for scores worse than beta.

Here is basically the code I use, all written in Java. (My actual code has some ugly stuff to make it work in the context I'm using it in.) And here is an excellent link that also explains the idea, but more importantly, it has an interactive application on the bottom that shows how the algorithm works, in case I haven't explained it fully.
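For reference, a bare-bones version of the algorithm looks roughly like this. This is a generic sketch with made-up Board and Move stand-ins, not the game code itself:

```java
import java.util.List;

/** Generic alpha-beta sketch; Board and Move are illustrative stand-ins, not the game's types. */
class AlphaBeta {
    interface Move {}
    interface Board {
        boolean isGameOver();
        int evaluate();              // static score from the maximizer's point of view
        List<Move> legalMoves();
        void make(Move m);
        void undo(Move m);
    }

    int alphaBeta(Board board, int depth, int alpha, int beta, boolean maximizing) {
        if (depth == 0 || board.isGameOver()) {
            return board.evaluate();
        }
        if (maximizing) {
            for (Move move : board.legalMoves()) {
                board.make(move);
                alpha = Math.max(alpha, alphaBeta(board, depth - 1, alpha, beta, false));
                board.undo(move);
                if (alpha >= beta) break;            // opponent already has a better line elsewhere: prune
            }
            return alpha;
        } else {
            for (Move move : board.legalMoves()) {
                board.make(move);
                beta = Math.min(beta, alphaBeta(board, depth - 1, alpha, beta, true));
                board.undo(move);
                if (beta <= alpha) break;            // we already have a better line elsewhere: prune
            }
            return beta;
        }
    }
}
```

The first call just opens the window as wide as possible, e.g. `alphaBeta(board, maxDepth, Integer.MIN_VALUE, Integer.MAX_VALUE, true)`.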

Alpha-beta pruning reduces the amount of searching drastically. However, to get the full effect, one last improvement has to be made. Since the order in which moves are searched affects when a search will be terminated, it's best to sort the moves (I believe at each level) from strongest to weakest. This does involve scoring every possible position (except those at the last ply to be searched), but I found that it is still far more efficient. The only level not worth sorting is the final one, at depth 0, since that's where the alpha-beta pruning kicks in (reducing the need to score everything).
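In terms of the sketch above, the ordering is just a sort of the move list before the loop, using the static evaluation of each resulting position (again, names invented, and it assumes the Board/Move stand-ins plus java.util.* imports):

```java
// Score each move shallowly, then search the strongest-looking ones first so cutoffs come sooner.
List<Move> moves = new ArrayList<>(board.legalMoves());
Map<Move, Integer> quickScores = new HashMap<>();
for (Move move : moves) {
    board.make(move);
    quickScores.put(move, board.evaluate());       // shallow, static score of the resulting position
    board.undo(move);
}
moves.sort(Comparator.comparing(quickScores::get).reversed());   // strongest first for the maximizer
```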

Anyhow, I hope this all makes sense.

Written by Ceasar Bautista

2010/06/28 at 21:31