Ceasar's Mind



Indifference Curves


An indifference curve is a line showing combinations of goods among which a consumer is indifferent. Since consumers always prefer more to less, a curve shifted up and to the right is preferable to one shifted down and to the left, but any point on a particular curve is as preferable as any other point on that curve. (So, in this picture here, X is just as good as A, which is just as good as B.)

The marginal rate of substitution (MRS) is the rate at which a person will give up good y (measured on the y-axis) for good x (measured on the x-axis) while remaining indifferent. The magnitude of the slope of an indifference curve measures the marginal rate of substitution. That is, if the indifference curve is steep, the marginal rate of substitution is high: the person is willing to give up a large amount of y to obtain very little of x. If the curve is flat, the marginal rate of substitution is low: the person will give up very little of y even to obtain large quantities of x. Generally there is a diminishing marginal rate of substitution: the more x a person already has (and the less y), the less y they are willing to give up for an additional unit of x.
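To make this concrete, here's a quick sketch in Python. The utility function U = x * y is my own assumed example (the post doesn't specify one); any convex curve behaves the same way.

```python
# Sketch: MRS along one indifference curve of the assumed utility U = x * y.
# On the curve x * y = k we have y = k / x, so the slope is dy/dx = -k / x**2,
# and the MRS (its magnitude) works out to y / x.

def mrs(x, y):
    """Units of y the consumer will trade away for one more unit of x."""
    return y / x

# Moving down a single curve (more x, less y), the MRS diminishes:
k = 100.0
for x in (2.0, 5.0, 10.0):
    y = k / x
    print(x, y, mrs(x, y))  # MRS falls: 25.0, then 4.0, then 1.0
```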

For ordinary goods, we see curves that look like the one above. But sometimes the value of a good is influenced by another. For example, perfect substitutes produce straight diagonal lines with slope -1: a pen from Walmart is exactly as preferable as the same pen from anywhere else. Perfect complements, on the other hand, have L-shaped curves: a left shoe is worth nothing without the right, and two left shoes are worth nothing without two right shoes.

Predicting Consumer Choice

The best affordable choice is 1) on the budget line and 2) on the highest attainable indifference curve. At this point the marginal rate of substitution equals the relative price. Changes in the price or utility of a good, or in a person's income, change the best affordable point.
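The "MRS equals relative price" condition can be worked through numerically. This sketch again assumes the utility U = x * y (my example, not from the post): setting MRS = y/x equal to px/py and substituting into the budget line gives a closed-form bundle.

```python
# Sketch: best affordable bundle for the assumed utility U = x * y
# under the budget px*x + py*y = m.
# MRS = y / x; setting y / x = px / py and substituting into the
# budget line yields x = m / (2*px) and y = m / (2*py).

def best_bundle(px, py, m):
    x = m / (2 * px)
    y = m / (2 * py)
    return x, y

x, y = best_bundle(px=2.0, py=1.0, m=40.0)
# Check both conditions: on the budget line, and MRS equals px/py.
assert 2.0 * x + 1.0 * y == 40.0
assert y / x == 2.0
```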

Written by Ceasar Bautista

2010/11/14 at 17:28

Posted in Uncategorized




So a recent post on the TDG forums got me thinking about the value of upgrades and research. It's really rather simple, but I think it's worth saying anyhow, since these mechanics are so common and often a bit bizarre to amateurs.


Research enables the construction of new units, and technology can often be researched in a variety of ways. New units can often be unlocked simply by building or upgrading structures, while upgrades and new abilities can often be unlocked by researching the technology directly at a building.

However, the difficult part is determining whether it's really worth it to buy many of these upgrades or research the technology. Sometimes the choice is simple: if, for example, a player already owns a hundred soldiers and researching an attack upgrade is cheap, it's a no-brainer. The player will get his money's worth as soon as the research is completed.

If, however, a player is spending resources simply to unlock the ability to construct a new unit (or to use a new ability), the price is a little more difficult to judge. If we assume the new unit is more cost-effective than existing units, the player can ask whether he will reasonably build enough of them to get his money's worth. If, for example, the current tier of units gets a 100:10 strength-to-resource ratio, the new tier gets a 200:10 ratio, and the research initially costs 200 resources, then the research forgoes 2,000 strength worth of old units, and each new unit delivers only 100 more strength than the same 10 resources would have bought before. To make those lost resources back, the player would have to purchase 20 of the new-tier units. Beyond that is a profit in strength.
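Working that break-even arithmetic through as code (figures from the example; the function name is just mine):

```python
# Break-even sketch. Old tier: 100 strength per 10 resources; new tier:
# 200 strength per 10 resources; unlocking the new tier costs 200 resources.

def break_even_units(old_strength, new_strength, unit_cost, research_cost):
    """New-tier units needed before the research pays for itself in strength."""
    # Strength forgone by spending research_cost on research instead of old units:
    forgone = research_cost / unit_cost * old_strength
    # Extra strength each new unit delivers over the same spend on old units:
    extra_per_unit = new_strength - old_strength
    return forgone / extra_per_unit

print(break_even_units(100, 200, 10, 200))  # 20.0 units
```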

In my opinion, though, having such strict tiers, with each new tier nullifying the old ones, is often bad design. (Sometimes, as in the case of Tower Defense War games, it is necessary.) The alternative approach, which appears to be most common in RTS games, is to make units cost-effective only in certain scenarios: say, good at defeating groups of units via splash damage, or effective against biological units via poison, so that no unit is truly superior to any other except in particular situations. It then becomes the player's job to estimate how often he will be able to use the unlocked units effectively, and, if the benefits outweigh the costs, to purchase the technology.

To give an example, consider that you are playing StarCraft. You are the Terran (an army of gritty future humans) playing against the Zerg (an alien army that has evolved to kill). The game begins, and you start building Marines, the basic component of early Terran armies. Very quickly, the option to purchase Medics arrives; they are extremely useful units when the opponent is unable to kill your Marines quickly. Knowing that the Zerg only has access to Zerglings and Hydralisks, both of which are relatively fragile and unable to overwhelm the healing of Medics, it's a logical decision to buy them. The benefits greatly outweigh the costs, and you will make up the money spent on the research very quickly. You will not, however, proceed to buy Firebats. Despite their splash attack, which is effective against clustered enemies, their short range makes them particularly weak against Hydralisks, which your enemy will have many of. Your Firebats would always be fighting in situations where they are not cost-effective, and they would never make up the money spent on research.

For designers, this means the price of research is effectively arbitrary. Higher costs, however, mean the research will be purchased much less often, since players will only see it as useful if they can all but guarantee that the unlocked units will be deployed in cost-effective circumstances.

Research is interesting because players shape their armies based on the conditions they are fighting in. Depending on the map, it may or may not be cost-effective to purchase certain units; ground units, for example, lose a lot of their effectiveness on island maps. The race of the opponent also has to be factored in: Science Vessels are pretty much a must-have against Zerg, with their ability to Irradiate enemy units and detect Lurkers. Perhaps most interestingly, the units you have already unlocked have to be considered, since certain units work extremely well together, such as Marines and Medics.

So basically, research is the depth aspect of RTS games. It takes experience to figure out what's worth researching and what's not, and these decisions are often the most important ones strategically; they fuel the interesting interplay behind the RTS.

Rethinking research in the context of group combat, it occurred to me that beyond the points covered in my original post on research, the real value of research comes from the ability to construct more specialized units, which in turn make for stronger heterogeneous groups. From this perspective, I would claim that in any well-made RTS the lowest-tier units are largely unspecialized, with stats suited to 1v1 combat, and that as research permits the creation of more units, the new units become more and more specialized. We see this in StarCraft: the Terrans have the well-rounded Marine, the Zerg have the well-rounded Hydralisk, and the Protoss typically open with a well-rounded combination of Zealots and Dragoons.


Written by Ceasar Bautista

2010/08/16 at 01:11

Posted in Uncategorized


Potential Fields


So while surfing for a solution to my own AI problems, I found an article on potential fields. I had used these before while trying to develop an AI for Naval Commander and achieved limited success, but I couldn't figure them out well enough, so I ditched the whole thing.

Anyway, a potential field is kind of like a magnetic field. Basically, the AI places charges around the map: positive charges near high-value targets, and negative ones around dangerous areas and impassable terrain. Allied units use the field by testing a few points around themselves, figuring out where the potential is highest, and moving toward that location. By strategically placing the charges, the AI can guide armies in a very dynamic and simple way.

So here our potential field is represented by the lightness of each square, with lighter squares being more attractive. The rocks, being impassable, and the white enemies, being dangerous, emit negative potential, coloring the nearby squares dark, while the goal area emits positive potential, lighting the map up. Together, these fields provide a way for the green unit to reach its destination without any kind of pathfinding algorithm.
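Here's a minimal toy version of the idea (my own sketch, not the article's code). A goal emits a positive charge, an obstacle emits a negative one, and the unit simply steps to whichever neighboring cell has the highest summed potential.

```python
# Toy potential field on a grid. Charges contribute potential that
# falls off with distance; a unit greedily moves to the neighboring
# cell where the total potential is highest.

def potential(cell, charges):
    total = 0.0
    for (cx, cy), strength in charges:
        d = abs(cell[0] - cx) + abs(cell[1] - cy)  # Manhattan distance
        total += strength / (1 + d)
    return total

def step(unit, charges):
    x, y = unit
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return max(neighbors, key=lambda c: potential(c, charges))

# A goal at (5, 0) attracts; a rock at (2, 1) repels.
charges = [((5, 0), 100.0), ((2, 1), -5.0)]
unit = (0, 0)
for _ in range(4):
    unit = step(unit, charges)
print(unit)  # (4, 0): the unit has marched toward the goal
```

No pathfinding is run anywhere; movement falls out of the field itself, which is the appeal of the technique.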

The article goes on to explain a few useful tricks, such as placing positive charges on top of enemy units to attract allied units to them, and then placing a weaker negative charge on top of them in order to get our units to attack from a certain range. Anyway, I think there is a lot of potential with this idea. I highly recommend you check the article out and you can count on me investigating the idea in the future.

(A thunderstorm won’t let me embed the link. Check it out here for now: http://aigamedev.com/open/tutorials/potential-fields/)

Written by Ceasar Bautista

2010/07/19 at 20:22

Command 0.04


This update is very minor, but it's the start of something bigger, which I'm having trouble with. Basically, I've made it so that the AI tries to send the fewest units necessary to capture planets. Unfortunately, it only works with terminal nodes:

The new AI works by having each planet store the number of armies it needs to carry out the AI's plan. It sends that value to its parent node, and the values propagate all the way up, so that every node knows how much it needs to carry out the AI's plan.

The image at the right is a little hard to read, but basically, node A tells B that it needs 2 armies, then B tells E that it needs 10: 2 to capture A and 8 to capture itself. The process repeats upward.
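That bottom-up pass can be sketched like this (a reconstruction, not the actual Command source; node E's own cost is a made-up value, since I only gave figures for A and B):

```python
# Each node reports to its parent the armies needed to capture itself
# plus everything in its subtree.

def required(node, cost, children):
    """Total armies the subtree rooted at `node` needs."""
    return cost[node] + sum(required(c, cost, children)
                            for c in children.get(node, []))

# The example above: A needs 2, B needs 8 for itself, so B reports
# 2 + 8 = 10 to E. (E's own cost of 5 is an assumed placeholder.)
cost = {"A": 2, "B": 8, "E": 5}
children = {"E": ["B"], "B": ["A"]}
print(required("B", cost, children))  # 10
```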

This works for terminal nodes, so why doesn't it work for B? The problem, I'm almost certain, is that I'm using a depth-first search, and A is never notified that B is receiving enough forces to capture A. That would be very easy to fix, except that if I did, E would never send to B, since it would consider both A and B captured.

Anyway, I need some help.

Written by Ceasar Bautista

2010/07/19 at 17:41

Command 0.03


Building off of 0.02, I made a few changes to the algorithm. In the old algorithm, the search would stop whenever it hit an enemy node. However, that prevented it from seeing anything beyond the front line, which could be a problem. If, for example, our base connects to a 10+1 node and a 25+1 node, both the same distance away, our base should probably attack the 10+1 node. But if behind the 25+1 node there is a 0+100 node, we may wish to rethink our initial decision.
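A toy version of those numbers shows why the deeper look matters (the scoring rule here, production divided by strength, is just an illustrative stand-in for the real one):

```python
# "S+P" notation from the post: a node with strength S and production P.
nodes = {
    "weak":   {"strength": 10, "production": 1},
    "strong": {"strength": 25, "production": 1},
    "rich":   {"strength": 0,  "production": 100},
}
behind = {"weak": [], "strong": ["rich"]}  # what sits past the front line

def frontline_score(n):
    # Old algorithm: stop at the enemy node itself.
    return nodes[n]["production"] / nodes[n]["strength"]

def lookahead_score(n):
    # New algorithm: also count what lies behind the node.
    gain = nodes[n]["production"] + sum(nodes[b]["production"] for b in behind[n])
    cost = nodes[n]["strength"] + sum(nodes[b]["strength"] for b in behind[n])
    return gain / cost

# Stopping at the front line prefers the 10+1 node...
assert frontline_score("weak") > frontline_score("strong")
# ...but seeing the 0+100 node behind the 25+1 node flips the decision.
assert lookahead_score("strong") > lookahead_score("weak")
```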

Additionally, I rewrote parts of the code here and there to make things a little more organized for myself. With that, I've created an AI for Red, to show off in the videos anyhow, and for the player I've implemented a new system for attacking. Where previously a player's click would send all immediately available units to the targeted node, the new system not only does that but continues to stream units afterward. The stream can be disabled by clicking the streaming node twice.

But anyway, here is a video of the new AI in action:

Written by Ceasar Bautista

2010/07/17 at 00:40

Command 0.02


The initial AI failed because nodes failed to consider the strength of their moves beyond attacking the nodes they were immediately connected to. This version attempts to fix that, using a depth-first search from each node:

The AI still has a way to go though. It makes no effort to keep track of how many units it has sent to a node, often sending more than it should. On the same note, it makes no effort to keep track of how many units the player has sent to a node, or for that matter, could send. Hopefully I can fix that soon.

Basically, it works like this: first the AI selects a blue node to analyze, then it finds all of its neighbors (called children). If any of the children are not blue, or don't connect to anything except the parent, it scores the node by figuring out how much time it would take to capture. If the node is blue and has children of its own, it analyzes that node instead, all the while keeping track of the ancestors so that the current generation doesn't mistake an ancestor for one of its children, which prevents infinite loops in the search.
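A rough sketch of that search (a reconstruction of the description above, not the actual code; names and the tiny example map are mine):

```python
# Depth-first walk through our own (blue) territory, carrying the set of
# ancestors so a link back to an ancestor is never mistaken for a child.

def reachable_enemies(node, graph, owner, ancestors=None):
    """Enemy nodes reachable from `node` through blue territory."""
    if ancestors is None:
        ancestors = set()
    ancestors = ancestors | {node}
    enemies = set()
    for child in graph[node]:
        if child in ancestors:
            continue  # an ancestor, not a real child: no infinite loops
        if owner[child] == "blue":
            enemies |= reachable_enemies(child, graph, owner, ancestors)
        else:
            enemies.add(child)  # front-line target found; stop here
    return enemies

# Two blue nodes in a chain, each touching one red node.
graph = {"A": ["B", "X"], "B": ["A", "Y"], "X": ["A"], "Y": ["B"]}
owner = {"A": "blue", "B": "blue", "X": "red", "Y": "red"}
print(sorted(reachable_enemies("A", graph, owner)))  # ['X', 'Y']
```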

However, while testing the AI I encountered a few problems. Perhaps the worst was that I had given all my variables weird names, which made it very difficult to track what was going on. When my code performed badly, it was extremely hard to debug, because I not only had to figure out what each variable was doing, but also which variable was keeping track of what. So the point here: make sure you label things nicely. Typically it's okay to use shorthand in keeping with your style, but for complicated stuff, save yourself the trouble and use clear names.

Secondly, while going through the search, the AI would sometimes overwrite the values of nodes, so I had to create special variables just to keep track of the scores of the nodes at each level of the search.

Written by Ceasar Bautista

2010/07/16 at 19:37



So it's been kind of quiet here lately, and that's because I've been studying up on Java so that I could develop a proper game in order to develop a proper AI. The game here (currently titled "Command") is based on Little Stars for Little Wars, almost exactly at the moment, although I have a few minor plans for the future. Anyhow, here is a quick demo:

As you can see, the AI is still in need of work (I just put that together hours ago). But I’m excited for the prospects!

Currently, the AI works by making a list of all the planets it is connected to and then scoring them. The score is calculated by dividing the gains by the costs. The gains are in most cases the additional production, except of course if the planet is already owned by you. The costs include the distance, the current strength, and the projected strength of the planet when the army arrives. As you can tell, the search is still rather shallow, which is why the AI behaves so oddly in the video.
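Here's a rough sketch of that scoring rule. The names, the linear growth model, and the way distance enters the cost are all my assumptions, not the actual Command code:

```python
# Score a candidate planet as gains / costs, where the cost includes
# the distance and the projected strength of the garrison on arrival.

def score(planet, distance, speed=1.0):
    gain = planet["production"]
    travel_time = distance / speed
    # Projected strength when the army arrives: the current garrison
    # plus whatever the planet produces during the flight (assumed linear).
    projected = planet["strength"] + planet["production"] * travel_time
    cost = distance + projected
    return gain / cost

near_weak = {"strength": 5, "production": 3}
far_strong = {"strength": 30, "production": 3}
# Equal production, so the weaker garrison scores higher:
print(score(near_weak, distance=2.0))
print(score(far_strong, distance=2.0))
```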

Also, I should note that this AI will be used mainly to understand macro-level decision making. I suppose I will have to play around with it in the future to make it simulate real conditions a bit better, but at the moment it ought to give a fairly decent idea of how one should manage armies strategically.

Written by Ceasar Bautista

2010/07/13 at 23:41