Ceasar's Mind



Command 0.03


Building on 0.02, I made a few changes to the algorithm. In the old algorithm, the search stopped whenever it hit an enemy node. However, that prevented it from seeing anything beyond the front line, which could be a problem. If, for example, our base connects to a 10+1 node and a 25+1 node, both the same distance away, our base should probably attack the 10+1 node. However, if behind the 25+1 node is a 0+100 node, then we may wish to rethink our initial decision.

Additionally, I rewrote parts of the code here and there to keep things a little more organized for myself. With that, I’ve created an AI for Red, to show off in the videos anyhow, but for the player I’ve implemented a new system for attacking. Previously, a player’s click would send all immediately available units to the targeted node; the new system not only does that, but continues to stream units afterward. The stream can be disabled by clicking the streaming node twice.
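As a minimal sketch, here is how the streaming toggle might be wired up in Java (the language Command is written in). The class and method names are made up for illustration, not the game’s actual code, and it reads a second click on the same target as the signal to stop streaming.

```java
import java.util.*;

// Hypothetical sketch of the streaming attack system described above.
class Streamer {
    // Maps a source node to the node it is currently streaming units toward.
    private final Map<String, String> streamTarget = new HashMap<>();

    // A click sends available units and starts a stream; clicking the same
    // target again cancels the stream.
    void click(String node, String target) {
        if (target.equals(streamTarget.get(node))) {
            streamTarget.remove(node);      // second click: stop streaming
        } else {
            streamTarget.put(node, target); // send units and keep streaming
        }
    }

    String targetOf(String node) {
        return streamTarget.get(node);
    }
}
```

A per-tick update loop would then consult `targetOf` to decide where each node’s newly produced units should go.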

But anyway, here is a video of the new AI in action:

Written by Ceasar Bautista

2010/07/17 at 00:40

Command 0.02


The initial AI failed because nodes never considered the strength of any moves beyond attacking the nodes they were immediately connected to. This version attempts to fix that, using a depth-first search from each node.

The AI still has a way to go though. It makes no effort to keep track of how many units it has sent to a node, often sending more than it should. On the same note, it makes no effort to keep track of how many units the player has sent to a node, or for that matter, could send. Hopefully I can fix that soon.

Basically, it works like this: first it selects a blue node to analyze, then it finds all of its neighbors (called children). If any of the children are not blue, or don’t connect to anything except the parent, then it scores them by figuring out how much time it would take to capture them. If a child is blue and has children of its own, however, then it analyzes that node in turn, all the while keeping track of the ancestors so that the current generation doesn’t mistake an ancestor for one of its children, preventing infinite loops in the search.
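The steps above can be sketched roughly as follows. This is a made-up Java illustration, not the actual code: the Node fields, the capture-time score (production divided by strength plus one), and the ancestor set are all assumptions.

```java
import java.util.*;

// Hypothetical sketch of the node-scoring search described above.
class Node {
    int strength;        // units currently on the node (the "10" in "10+1")
    int production;      // units produced per tick (the "+1" in "10+1")
    boolean blue;        // true if we already own it
    List<Node> children = new ArrayList<>(); // connected nodes

    // Score a capturable node: higher production and lower strength score better,
    // as a stand-in for "how much time it would take to capture".
    static double score(Node node) {
        return node.production / (double) (node.strength + 1);
    }

    // Depth-first search that carries the set of ancestors, so the search
    // never mistakes an ancestor for a child (no infinite loops).
    static double bestScore(Node parent, Set<Node> ancestors) {
        ancestors.add(parent);
        double best = 0;
        for (Node child : parent.children) {
            if (!child.blue) {
                best = Math.max(best, score(child));                // score it
            } else if (!ancestors.contains(child)) {
                best = Math.max(best, bestScore(child, ancestors)); // recurse
            }
        }
        ancestors.remove(parent);
        return best;
    }
}
```

On a graph matching the 0.03 example (a 10+1 node next door, a 0+100 node hiding behind a friendly node), this search finds the high-production node because it recurses through friendly territory.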

However, while testing the AI I encountered a few problems. Perhaps the worst was that I had given my variables weird names, which made it very difficult to track what was going on. When my code performed badly, it was extremely hard to debug because I not only had to figure out what each variable was doing, but also which variable was keeping track of what. So the point here: make sure you label stuff nicely. Typically it’s okay to use shorthand in keeping with your style, but for complicated stuff save yourself the trouble and name things clearly.

Secondly, while going through the search, the AI would sometimes overwrite the values of nodes, so I had to make special variables just to keep track of the scores of the nodes at each level of the search.

Written by Ceasar Bautista

2010/07/16 at 19:37



So it’s been kind of quiet here lately, and that’s because I’ve been studying up on Java so that I could develop a proper game in order to develop a proper AI. The game here (currently titled “Command”) is based on Little Stars for Little Wars, almost exactly at the moment, although I have a few minor plans for the future. Anyhow, here is a quick demo:

As you can see, the AI is still in need of work (I just put that together hours ago). But I’m excited for the prospects!

Currently, the AI works by making a list of all the planets it is connected to and then scoring them. The score is calculated by dividing the gains by the costs. The gains in most cases are the additional production, except of course if the planet is already owned by you. The costs include the distance, the current strength, and the projected strength of the planet when the army arrives. As you can tell, the search depth is still rather shallow, which is why the AI functions so oddly in the video.
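As a rough sketch, the scoring rule above might look like this in Java. The parameter names and the exact way the costs are combined are assumptions for illustration, not the game’s actual formula.

```java
// Hypothetical sketch: score = gains / costs, where gains is the extra
// production from capturing, and costs combine travel distance with the
// strength the planet will have grown to by the time the army arrives.
class PlanetScore {
    static double score(int production, int strength, double growthPerTick,
                        double distance, double unitSpeed, boolean alreadyOwned) {
        double gains = alreadyOwned ? 0 : production;       // no gain if already ours
        double travelTime = distance / unitSpeed;
        double projected = strength + growthPerTick * travelTime; // strength on arrival
        double costs = distance + projected;
        return gains / costs;
    }
}
```

With equal distances, this prefers the weaker of two enemy planets, matching the 10+1 versus 25+1 intuition from the later posts.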

Also, I should note that this AI will be used mainly to understand macro-level decision making. I suppose I will have to play around with it in the future to make it simulate real conditions a bit better, but at the moment it ought to give a fairly decent idea of how one should manage armies strategically.

Written by Ceasar Bautista

2010/07/13 at 23:41

An AI for the RTS


The last two articles on chess and artificial intelligence were written for two reasons: first, to show that the successful development of an AI reveals how a game is to be played optimally, or almost optimally; second, to show that the development of an AI often reveals how our minds work.

Combine that with the fact that, as far as I can tell, I usually have no idea why I make the decisions I do while playing strategy games, and you can see why I want to try to make an AI for the RTS. Typically, when making decisions in RTS games I go by what feels right rather than analyzing my options. And while I suppose I could give reasons for my actions, I don’t think I could give reasons for my reasons.

This problem was especially apparent while trying to design an AI for Naval Commander, a simplified RTS game I developed: I noticed that I couldn’t understand my own decisions. And you’ll notice that my discussion of the RTS focuses on very narrow patterns in a very complicated system.

As a designer, this is a bit of a problem for me. It all but ensures that my games will come out badly if I try to be innovative with the mechanics, and it does a good job of throwing off balance too.

The solution, therefore, is to design an AI. The better my understanding of the mind, mathematics, and game theory, the better I come to understand the interaction of the mechanics. From there, I can look at what makes the AI tick and reverse engineer its workings to figure out how people think, supplementing both play and design. However, I’ve thought about RTS AI before, and it’s not at all as simple as the AI for chess. (To be continued…)

Written by Ceasar Bautista

2010/07/05 at 00:55

Chess & Artificial Intelligence


So I’ve been developing artificial intelligence for a computer chess game recently, and I thought the subject was interesting as it related to design and decision making. For now I’ll just explain the basics and then go on with my thoughts in future posts.

The Basics

The actual AI works rather simply. For a depth of one ply, or a turn for a single player, the AI moves a piece, scores the position based on mobility and piece count, moves the piece back, and repeats with all the possible moves in a given position. It then sorts all of the scores, takes the best score, and makes what it has determined to be the best move.

A deeper search is slightly more intense. For two ply, the AI makes a move, makes another move, scores the position, undoes the last move, and then searches the rest of the alternatives until it finds the best. From there, it assigns that best score to the initial move. (It’s worth noting that since it would be the opponent’s turn at that point, the opponent’s best move is the worst one for the AI.) The AI repeats the whole process with each of its alternatives, that is, finding the opponent’s best response to each, and then determines its own best move. The basic idea is the same for a depth of any number of ply.
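The procedure above can be sketched as a generic negamax, where negating the opponent’s best score captures the note that their best move is the AI’s worst. The `Game` interface and the little add-to-a-total toy game are made up for illustration; this is not the actual chess program.

```java
import java.util.List;

// Abstract game interface standing in for the chess position.
interface Game {
    List<Integer> moves();   // legal moves in the current position
    void make(int move);     // play a move
    void unmake(int move);   // take it back ("moves the piece back")
    int evaluate();          // score from the side to move's point of view
}

class Search {
    // Negamax: the opponent's best reply, negated, is our score for a move.
    static int negamax(Game g, int depth) {
        if (depth == 0 || g.moves().isEmpty()) return g.evaluate();
        int best = Integer.MIN_VALUE + 1;
        for (int m : g.moves()) {
            g.make(m);
            int score = -negamax(g, depth - 1);
            g.unmake(m);
            if (score > best) best = score;
        }
        return best;
    }
}

// Toy game standing in for chess: players alternately add 1 or 2 to a running
// total, and the side to move prefers the total small.
class AddGame implements Game {
    int total = 0;
    public List<Integer> moves() { return List.of(1, 2); }
    public void make(int m) { total += m; }
    public void unmake(int m) { total -= m; }
    public int evaluate() { return -total; }
}
```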

Now theoretically, you could solve chess by just setting the depth to infinity and playing all games through to their end. However, in any given position there are on average about 40 moves available. Every extension of the depth therefore increases the number of positions searched 40-fold, for a total of about 40^n. In other words, the number of positions to be searched gets very big, very fast. (Too big for computers to compute anywhere near reasonably, a statement I’ve come to understand through experience.) In fact, the amount of time spent searching is often so unreasonable that the program must either be executed on impressive hardware or be set to play at a very dumb level; otherwise, it’ll end up taking hours or even days per move.

Obviously though, that’s not exactly the case: quite a few chess programs exist that play at a reasonable pace and superb strength. The solution, at least in terms of making something that can beat the grandmasters, is to not look at all the moves. Or rather, to look only at the moves worth looking at. (The process is known as “pruning”.) Basically, the idea is that certain moves would be a waste of time to search, so we can ignore them. And fewer positions to score means that the AI will work faster, allowing it to search deeper. More on that in the next post though.
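As a small preview of that idea, the best-known pruning method is alpha-beta: once one reply refutes a move, the remaining replies to that move are skipped without being scored. The sketch below is generic and made up for illustration (the `Position` interface and toy game are stand-ins, not the actual program’s code).

```java
import java.util.List;

// Abstract position, analogous to the Game interface in a plain search.
interface Position {
    List<Integer> moves();
    void make(int move);
    void unmake(int move);
    int evaluate(); // from the side to move's point of view
}

class AlphaBeta {
    // alpha = best score we can already guarantee; beta = best the opponent allows.
    static int search(Position p, int depth, int alpha, int beta) {
        if (depth == 0 || p.moves().isEmpty()) return p.evaluate();
        for (int m : p.moves()) {
            p.make(m);
            int score = -search(p, depth - 1, -beta, -alpha);
            p.unmake(m);
            if (score >= beta) return beta;  // refutation found: prune the rest
            if (score > alpha) alpha = score;
        }
        return alpha;
    }
}

// Toy game: players alternately add 1 or 2 to a total; the side to move
// prefers the total small.
class TotalGame implements Position {
    int total = 0;
    public List<Integer> moves() { return List.of(1, 2); }
    public void make(int m) { total += m; }
    public void unmake(int m) { total -= m; }
    public int evaluate() { return -total; }
}
```

Pruning never changes the answer, it only skips work, so this returns the same value a full search would.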

Written by Ceasar Bautista

2010/06/28 at 19:03

Prioritizing Targets


Not sure where this belongs exactly yet, but I figured it was worth mentioning. With regard to all of these Hit Points and Damage numbers, there is a way to prioritize which targets you take out besides what’s most convenient, and it’s simple really: determine which unit is dealing the most damage per hit point, and kill him first.

Take, for example, Halo 3. Suppose you encounter the two enemies above simultaneously. The one on the left is called a Hunter, armed with a large Assault Cannon, and on the right is a Grunt, armed with a Fuel Rod Cannon. Both weapons are extremely dangerous. However, the Hunter is very difficult to kill, possessing thick armor and a large metallic shield on his left arm which no projectile can penetrate, while the Grunt is very easy to kill, possessing no armor and very few hit points.

The decision before you is this: target the Hunter first and attempt to dodge both weapons while searching for a weak spot in its armor, or shoot the Grunt, kill him quickly, and then fight the Hunter alone. Not much of a choice unless you like a challenge.

Mathematics tells us to take exactly the same course of action in RTS battles. Put simply, to figure out which unit to strike first, determine the DPS of each enemy unit, divide it by that unit’s hit points, and then strike the unit with the highest result. Usually though, this is intuitively obvious.
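A tiny Java sketch of the rule (the unit stats in the example are invented for illustration):

```java
// Rank enemies by damage-per-second divided by hit points; shoot the highest.
class TargetPriority {
    static int pickTarget(double[] dps, double[] hp) {
        int best = 0;
        for (int i = 1; i < dps.length; i++) {
            // Higher dps-per-hit-point means more incoming damage removed
            // per point of damage we deal.
            if (dps[i] / hp[i] > dps[best] / hp[best]) best = i;
        }
        return best;
    }
}
```

With made-up stats for a Grunt (50 DPS, 100 hp) and a Hunter (60 DPS, 1000 hp), the Grunt’s ratio of 0.5 beats the Hunter’s 0.06, so the Grunt dies first, exactly as above.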

Factor in Capital

In strategy games though, there is also capital. That is, while in most cases a damaged unit can be regenerated or repaired, sometimes even for no cost besides time, a dead unit stays dead, and is a permanent loss of resources. With this in mind, the priority can shift to striking the unit with the highest cost to hit point ratio.

Ultimately, it comes down to whether or not you can ensure a kill. If you can’t, then it’s best to prioritize by damage-to-health ratios in order to preserve the hit points of your force. Otherwise, it’s significantly better to take out what you can and then regenerate your units before fighting again on far more favorable ground.

Why Cheap Units Are Still Useful

The idea of worrying about capital losses usually means trying to invest minimally in weak units, since they die very easily. I’ve often wondered if I should make a new formula to adjust for this problem, making expensive units less and less cost effective. I think, though, that cheap units are underestimated. The one power they have over their more expensive counterparts is that they can be in multiple places at once. Utilized correctly, this feature can overwork an enemy defense and allow you to penetrate where a more expensive force couldn’t. (The trick is to set it up so that the cheap units provide an extra kick for your other, more expensive units, making each one more powerful individually, and then strike the enemy. He’ll be unable to divert his resources as necessary since they are all centralized in expensive units.) However, this is based on my experiences with Naval Commander, which was accidentally balanced “incorrectly”, since the attack mechanism would choose a random target in range instead of focusing on the weak units, effectively making the strength of units scale linearly. I’ll have to play with the idea a little more in the future.

Written by Ceasar Bautista

2010/06/21 at 23:56

Flanking – Part 1


With the math behind pricing in mind, it’s obvious that forces grow in strength much faster than linearly with each unit added to the party. Not only is our force stronger, but it takes less damage as well. However, party strength is limited by space: if a unit can’t get the enemy in range, he can’t help the party.


The basic idea here is rather simple. All of the math explained before assumes that all units in a force can attack. But that isn’t necessarily the case. What battles are really about is creating situations where you have more units attacking than your opponent does.

The simplest and perhaps most common of these scenarios is the flank of a line formation. In the image to the right, the black army is positioned to flank the white army. All three of the black units can attack the bottom white unit, but only the bottom white unit can strike back. The other two white units don’t have any targets in range.

If white does not react quickly, black will fight each white unit individually and crush all three of them, losing only one unit in the process.
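That claim is easy to check with a toy simulation: three black units fire together while only one white unit at a time can return fire. The stats (90 hit points, 1 damage per tick) and the tick-based model are made-up assumptions, not anything from an actual game.

```java
import java.util.Arrays;

// Toy model of the flank above: black engages white units one at a time,
// and only the engaged white unit can shoot back.
class FlankSim {
    // Returns how many black units survive after all white units fall.
    static int simulate(int blackCount, int whiteCount, int hp, int dmg) {
        int[] black = new int[blackCount];
        Arrays.fill(black, hp);
        int blackAlive = blackCount;
        for (int w = 0; w < whiteCount; w++) {
            int whiteHp = hp;
            while (whiteHp > 0) {
                whiteHp -= blackAlive * dmg;   // every living black unit fires
                // the single engaged white unit returns fire at one black unit
                for (int i = 0; i < black.length; i++) {
                    if (black[i] > 0) {
                        black[i] -= dmg;
                        if (black[i] <= 0) blackAlive--;
                        break;
                    }
                }
            }
        }
        return blackAlive;
    }
}
```

With equal units, each white unit dies in 30 ticks while dealing only 30 damage back, so over the whole fight black absorbs exactly one unit’s worth of damage: three kills for one loss.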

(Line formations are extremely effective at flanking because of their ability to wrap around vertices, but their vertices are also extremely vulnerable.)

The Impact of Formation

Besides the actual size of units, formation can affect the effectiveness of an enemy flank. Consider the two images presented here.

In the image above, four black units are surrounded by an army of white. Due to their shape, they reduce up to 16 attacks down to 8.

Now consider this second image. In it, an army of nine black units fends off a larger army of white. Due to their shape, however, they reduce up to 36 attacks down to 12.

Notice that the number of vertices stays the same despite the increasing number of units. Side length increases instead, decreasing the percentage of your units being flanked. Consequently, bigger shapes are better. Furthermore, if the terrain is favorable, it can sometimes be possible to construct a straight line from one impassable point to another, eliminating vertices altogether!

This knowledge is particularly useful when using units that are expensive since, at least in my experience, they are balanced assuming that the enemy strikes in full force. By rearranging your army’s formation or exploiting impassable terrain, you can force enemy units to strike individually and increase the effectiveness of your units exponentially!

The Impact of Range

Besides collision size, the other factor that affects a player’s ability to flank is range. Basically, more range means more units can sit along the circumference of a circle at attack range from the target, increasing the effectiveness of a flank. Furthermore, if a group of units possesses a range greater than their targets’, not only will they be able to position themselves along a wider circumference, but they can also position themselves in a series of lines, drastically increasing the effectiveness of their attacks.
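Back-of-the-envelope, the number of attackers scales with the circumference of the attack circle. A quick sketch, with unit width and range as made-up parameters:

```java
// Units of a given width standing shoulder to shoulder along the circle at
// attack range from the target: capacity grows with circumference 2*pi*r.
class RangeRing {
    static int maxAttackers(double range, double unitWidth) {
        return (int) Math.floor(2 * Math.PI * range / unitWidth);
    }
}
```

So units of width 1 with range 2 fit about 12 attackers around a target, while range 10 fits about 62: a fivefold range increase buys roughly five times the firepower, before even counting extra rows of longer-ranged units.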

In the picture above, the white army has a small range, and the circumference along which they can align is also fairly tiny.

Compare the above picture to the previous one. In this, the white army has significantly increased attack range, permitting many more units to join in the attack.

Because of this property of range, it can easily disrupt the balance of games. Given enough time, a patient player might construct a large army of long-ranged units that knock out enemy units before receiving any return fire, instantly spelling defeat for his opponent. (I’ve seen this happen in many Tower War games, including my own Expansion. But this is not at all an uncommon phenomenon.)


So basically, at this point, it should be obvious that besides collision size, formation and range can impact the effectiveness of a flank.

Written by Ceasar Bautista

2010/06/17 at 21:33

