Ceasar's Mind

Follow me: @Ceasar_Bautista

Posts Tagged ‘games’

Games as Economies

with 2 comments

In a previous post, I wrote about how it’s impossible to price an object in a game according to a systematic formula, barring games of limited complexity or objects that cover the same span (that is, objects that are just different multiples of the same vector). Instead, I claimed at the time that pricing was arbitrary: designers could set the prices to whatever they liked, and the game would always be fair so long as each player had equal opportunity. Thus, the designer’s job is really just to play with the prices until they produce the interplay he is looking for.

In another post, I wrote about research in RTS games, and why spending resources for the option to train new units can pay off. While upgrades obviously boost the strength of an army, research unlocks new units for a price, and that price is only worth paying if one can expect to use the unlocked units in such a way that the utility of their use exceeds the initial cost of research. Thus, the price of research is also arbitrary.

Having since studied microeconomics, I’d like to revisit these topics.

An indifference curve is a line that shows combinations of goods among which a consumer is indifferent.

Economists call the phenomenon behind these tradeoffs the marginal rate of substitution, or MRS for short. Formally, the MRS is defined as “the rate at which a person will give up good y for good x while remaining indifferent”. In other words, it’s the price at which you are willing to buy something using something else (i.e., how much you are willing to shell out for a can of soda). What’s interesting about the MRS is that it changes: the more a person has of good x, the more of good x one is willing to trade for good y. Said another way, because billionaires have so much money, they don’t mind paying $5.00 for a hot dog.
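
To make the diminishing MRS concrete, here’s a toy calculation in Python. The Cobb-Douglas utility u(x, y) = x·y is my own choice for the example, not anything tied to a particular game; for that utility the MRS works out to y/x:

```python
def mrs(x, y):
    """Rate at which a consumer gives up y for one more unit of x while
    staying indifferent; for u(x, y) = x * y this is MU_x / MU_y = y / x."""
    return y / x

# Walk along the indifference curve x * y = 100: as x piles up,
# the amount of y we'd surrender for one more x keeps shrinking.
for x in [1, 2, 5, 10]:
    y = 100 / x
    print(f"x={x:2}, y={y:5.1f}, MRS={mrs(x, y):6.2f}")
```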

This is a far more intuitive way of looking at things than trying to predict prices from the attributes of the game objects. In short, all one needs to understand now is that players will buy an object when its utility exceeds its cost.

While the main point I wanted to convey has been made, I want to just put down some related ideas that don’t exactly deserve a post of their own but that I think are worth sharing.

-If you are familiar with Dominion, you may know MRS as “the Silver test”. (If you are not familiar with Dominion, all you need to know is that players regularly face the choice of buying cards with special effects or treasures, such as a Silver, which increase income.) That is, when making a non-trivial purchasing decision, one always has to consider whether the object at hand is in fact better than a Silver. What I find most interesting about the Silver test is how many players completely fail to pick up this rule, instead being regularly misled by the incorrect assumption that “things that cost more must be better”. Certainly changed a few paradigms of mine after noticing what was going on.

-Knowing when to buy what is the backbone of many games. In Dominion, playing cards is fairly trivial- but the decisions of which cards to buy each turn are often rather complex, and in the majority of cases, determine who wins. Likewise, in StarCraft, micro-ing units is fairly simple- but, again, the decisions of which units to buy and which tech to research are far more complex, and far more important than any tactical feat. In short, your economics textbook may be more valuable than The Art of War.

Written by Ceasar Bautista

2011/08/27 at 19:39

Analyzing the RTS

with 4 comments

I recently read an article that claimed that we really need to stop studying Facebook as a whole. That is, if you look at what is said about Facebook, there are a bunch of people who say that life is better because we’re all more connected now and there are a bunch of people who say that life sucks more now because people value face-to-face interaction less. The point is though, that these are both partly valid, but it’s stupid to say it’s Facebook’s fault. Facebook is a combination of tons of mechanics, a huge complicated system, and the only way to make sense of it is to atomize its components. (This isn’t really a new thought- dynamic programming for example pretty much uses the same idea- but the guy was just ranting to wake people up again.)

Real Time Strategy games are the same way, which often makes it hell to talk about them with people, because there are so many factors that we may as well be predicting how a drop of dye will spread if we were to drop it in a glass of water. When I made Naval Commander, besides wanting to produce a simple version of the RTS for simplicity’s sake, my intention was partly to put to use a few of the formulas and thoughts that my friend Vinh and I had conjectured. However, even that was clearly too high-level to make any sense of.

Having recently played a bit of GemCraft Labyrinth, it occurred to me that it wouldn’t be very hard to actually distill things further. (GemCraft uses an almost exclusively soft-counter system, and in my opinion that makes it very interesting.) All one would really need to do is break everything down into its components, understand what each component does, and then rebuild everything utilizing the components. As far as I can tell, there are only three unique things to notice (undoubtedly more, but it’s 3:51 AM.)

  • High damage – Good against heavily armored targets. Bad against low hit point targets.
  • Low damage – Bad against heavily armored targets. Good against low hit point targets.
  • Large range – Good against slow moving targets. Loses value when overkilling.
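
To see why these counters fall out, here’s a small sketch assuming armor simply subtracts a flat amount from each hit (my assumption; GemCraft’s actual formulas may differ). Two attackers with identical raw damage-per-turn fare very differently against armored and low hit point targets:

```python
import math

def time_to_kill(damage, attack_period, hp, armor):
    """Time to kill one target, assuming armor subtracts a flat amount
    from every hit (an assumed model, not any specific game's formula)."""
    effective = max(damage - armor, 1)       # each hit always does something
    hits = math.ceil(hp / effective)
    return hits * attack_period

heavy = dict(damage=100, attack_period=2)    # high damage, slow: 50 dmg/turn
light = dict(damage=10, attack_period=0.2)   # low damage, fast: 50 dmg/turn

tank = dict(hp=200, armor=8)                 # heavily armored
swarmer = dict(hp=10, armor=0)               # low hit points

print(time_to_kill(**heavy, **tank))     # 6 turns: armor barely matters
print(time_to_kill(**light, **tank))     # 20 turns: armor eats almost every hit
print(time_to_kill(**heavy, **swarmer))  # 2 turns, wasting 90 damage (overkill)
print(time_to_kill(**light, **swarmer))  # 0.2 turns: no waste
```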

So we can see some cool things with just the small list. Since an offense consists of some amount of damage, some attack speed, and some range, we can create 2×2×2 = 8 interesting units with these pieces. For example, one might construct an anti-armor unit whose job is to put heavy damage on armored targets. Such a unit would exclusively need high attack damage. Likely, he would have a slow attack speed to keep his cost down, and he would be extremely susceptible to units with low hit points, on whom he would expend too much damage.
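
That combinatorial claim is easy to sketch: treat each of the three offensive stats as simply “high” or “low” and enumerate (the stat names and values here are just illustrative):

```python
from itertools import product

AXES = ["damage", "attack_speed", "range"]

# Every unit archetype from three binary design choices: 2 * 2 * 2 = 8.
archetypes = [dict(zip(AXES, combo))
              for combo in product(["high", "low"], repeat=len(AXES))]

print(len(archetypes))  # 8
for unit in archetypes:
    print(unit)
```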

Armed with these basic components (or at least the realization of the need to break things down into their smallest parts) we could construct an entire RTS game which would provide interesting choices to the players, without any kind of need for tactical unit abilities, requiring instead only tactical movement.

Written by Ceasar Bautista

2011/04/23 at 03:53

Posted in Uncategorized


The Princess and Monster game

with one comment

The current assignment in my programming class (CIS 121) is to build two AIs for a variation on the classic Pursuit-Evasion game, the Princess and the Monster. I think it’s an extremely interesting problem and will present some thoughts on the game here, and would greatly appreciate any feedback or response to further guide me given how little I know about the subject.

For the uninitiated, the Princess and Monster game goes like this: Take a connected graph of any size. Assign one node to be the location of the Monster and another to be the location of the Princess. Each turn, the Princess and Monster may move to any nodes adjacent to their respective locations. If the Monster and Princess move onto the same node, the Monster wins. Else, if the Princess can avoid the Monster for a certain number of turns, the Princess wins.
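
Here’s a minimal sketch of the pure variant in Python, on a 10×10 grid with both players random-walking. The turn limit, the starting placement, and checking for capture at the top of each turn are my simplifications:

```python
import random

def neighbors(node, n=10):
    """Nodes adjacent to `node` on an n x n grid graph."""
    x, y = node
    return [(x + dx, y + dy)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < n and 0 <= y + dy < n]

def play(turn_limit=1000, rng=random):
    """One game: both players random-walk; the Monster wins by occupying
    the Princess's node, the Princess wins by surviving turn_limit turns."""
    monster = (rng.randrange(10), rng.randrange(10))
    princess = (rng.randrange(10), rng.randrange(10))
    for turn in range(turn_limit):
        if monster == princess:
            return turn                      # Monster wins
        monster = rng.choice(neighbors(monster))
        princess = rng.choice(neighbors(princess))
    return None                              # Princess wins

games = [play(rng=random.Random(seed)) for seed in range(100)]
print(sum(g is not None for g in games), "monster wins out of 100")
```

Incidentally, with simultaneous moves on a grid, parity matters: if the two players start on opposite colors of the checkerboard, they can never land on the same node, which is one reason variants add capture ranges.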

The game in practice varies a lot. For example, in some variants, the game is played on a continuous and bounded plane rather than a graph. Other interesting variations include revealing the starting location of each player to all of the players at the start of the game, enabling the Monster to capture the Princess if the Monster can simply get within a certain range, allowing the two players to “hear” each other if they are within a certain range, and changing the permitted speeds of each of the players.

In particular, I’ll examine the case where the Monster wins if it moves onto a node adjacent to the Princess’ location, and where both players are notified of each other’s location if they are within three edges of each other. Furthermore, for simplicity, I’ll assume the game is played on a 10×10 grid, rather than a random undirected graph.

So some quick observations.

The Really Simple Case

If we simplify the situation so that the game is the pure game, with no knowledge of starting positions, no ability to hear, and no capture range, we can observe a few things. First off, so long as the monster’s strategy is non-deterministic, there is no way the princess can guarantee she will evade capture. However, should the monster and princess both choose a random-walk strategy, then given a 100 node graph, the probability that they both occupy any particular node is 1/100 * 1/100. So from now on, let’s just assume the princess adopts such a strategy.

Now we can make a far more interesting observation. With the princess’s strategy in mind, we note that every time we explore a node and move on, the probability that the princess is at the node we just explored is equal to the probability that the princess is at one of the adjacent nodes multiplied by the chance that the princess would move there. Furthermore, because the princess cannot reach any adjacent nodes from the location that we were just at, she is in fact less likely to be nearby. (Which is kind of weird, but it makes sense if you imagine putting a dummy princess on each node and, each turn, splitting the dummies into multiple dummies, sending some out and keeping some around, so that, without the monster, each node’s probability would remain 1%.)

Given all that, the intuitive solution is to continuously move to the most probable area nearby. The problem with this approach is that it very easily leads to deterministic searches that cover only a small part of the graph. (See the video below.)

This is not to say the approach is incorrect. It’s just that we aren’t looking far enough ahead.

The code used in the video above works like this. First we go through every node and score it equal to 1, representing its probability of 1%. Then we place the monster down, remove the 1% from where he’s standing, and distribute it to all of the other nodes evenly. Lastly, we simulate what would happen if the princess moved according to an arbitrary Markov process, which in this case is simply a 20% chance to move to each neighbor. (For those unfamiliar, as I initially was, this preserves the property that every node stays at 1% in the absence of the monster, since the princess will leave a corner only 2*20% of the time, and will enter a corner 20% of the time from each of its two adjacent nodes.) Specifically, this is implemented with two hashmaps. One keeps track of the current probability, and the other is a copy of the first. Then we go through each node and tell it to distribute 20% of its current probability to each of its neighbors in the copied hashmap (which is needed so as to not corrupt the flow). Finally we replace the hashmap we are using with the copy. (Note here that 20% is arbitrary. Ideally, we would use some machine learning techniques to figure out what exact matrix the princess is using. Note also that if the chance of the princess moving is 0, then the best strategy becomes to trace a Hamiltonian path that visits each node exactly once.)
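
Here’s how that update looks in Python (dicts standing in for the hashmaps; the 20% move chance is the arbitrary figure from above):

```python
def diffuse(prob, adjacency, move_chance=0.2):
    """One step of the princess model: every node sends move_chance of its
    probability to each neighbor and keeps the rest. A second map collects
    the results so mid-pass updates don't corrupt the flow."""
    nxt = {node: 0.0 for node in prob}
    for node, p in prob.items():
        out_each = move_chance * p
        for nb in adjacency[node]:
            nxt[nb] += out_each
        nxt[node] += p - out_each * len(adjacency[node])
    return nxt

# 10x10 grid graph.
nodes = [(x, y) for x in range(10) for y in range(10)]
adjacency = {(x, y): [(x + dx, y + dy)
                      for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                      if 0 <= x + dx < 10 and 0 <= y + dy < 10]
             for (x, y) in nodes}

prob = {node: 0.01 for node in nodes}          # 1% everywhere
for _ in range(50):
    prob = diffuse(prob, adjacency)
print(min(prob.values()), max(prob.values()))  # still 1% everywhere
```

Zeroing out the monster’s node before each step and then reading off the highest-probability nodes gives exactly the search heuristic described above.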

So basically, as mentioned on the Wikipedia page for Princess and Monster games, it becomes evident that simply standing still most of the time and only occasionally finding a new spot to chill is a pretty viable strategy for the monster. This kind of creates a drain, with the monster being the gutter, simplifying the search from one that is completely blind to one that is only half blind. Unfortunately, I’m not at all sure on the specifics of how long to wait, but it seems to make some intuitive sense. (Note here that the more the princess moves, the better this strategy of waiting is.)

Add in Capture Range

And it’s no different, obviously. The graph basically becomes relatively smaller. There are sometimes a few problems handling corners nicely, but largely it’s inconsequential.

Add in both players knowing the start location of the other or add in hearing range

These are both extremely interesting changes and deserve their own posts, so that’s what I’ll give them. (But probably not until next Tuesday, when we turn in our AIs. Check back then!)

Written by Ceasar Bautista

2011/04/19 at 20:09

Some Segmentation Ideas

leave a comment »

Continuing with my earlier post on Robocode, I’d like to describe some thoughts on segmentation strategies.

So the basic idea is, we want to minimize our bins at all times. In effect, this means calculating the Maximum Escape Angle, calculating the arc it sweeps along the circle centered on our tank with radius equal to the distance to the target (that is, the orbital path our opponent could take), and then taking the ceiling of one half of the ratio of that arc to the target’s width (one half because a single shot will hit our opponent just as well if it hits the leftmost part of our opponent as the rightmost side).

Interestingly, we can adjust the Maximum Escape Angle by adjusting the speed of our bullet. Given that, we can actually reduce or expand the number of bins as we desire- a useful ability when trying to maximize hit probability (since things are discrete, a change from N to N-1 bins can be fairly significant, assuming the probability to hit is a uniform distribution at both ranges).
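
Concretely, under standard Robocode physics (bullets travel at 20 - 3*power, robots top out at speed 8 and are about 36 pixels across), the bin count sketched above comes out like this; treat it as an illustrative sketch of the scheme, not drop-in targeting code:

```python
import math

MAX_SPEED = 8.0     # Robocode's maximum robot velocity
BOT_WIDTH = 36.0    # robots are roughly 36 px across

def bullet_speed(power):
    return 20.0 - 3.0 * power           # standard Robocode bullet velocity

def bin_count(distance, power):
    """Bins needed to cover the escape arc: Maximum Escape Angle swept both
    ways at the given distance, divided by target width, then halved (a shot
    grazing either edge of the bot still hits) and rounded up."""
    mea = math.asin(MAX_SPEED / bullet_speed(power))
    arc = 2.0 * mea * distance          # escape arc, both directions
    return math.ceil(arc / BOT_WIDTH / 2.0)

# Hotter bullets fly slower, widening the escape arc and adding bins:
for power in (0.5, 1.5, 3.0):
    print(f"power={power}: {bin_count(400, power)} bins at distance 400")
```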

Furthermore, we can supplement our segmentation data by adjusting the range at which we fight, spiraling toward and away from our opponent as necessary, in order to keep our targets at optimal distances.


Written by Ceasar Bautista

2011/03/22 at 03:05

Educational Gates or: How I Learned to Hate School and Love Games

with 2 comments

If you’ve yet to watch Salman Khan’s talk at TED, head over now and give it a watch- you won’t regret it.

Sal says a lot of insightful stuff here, so I’ll be revisiting his talk several times, but at the moment I want to comment on a small point that Sal makes.

Sal explains that one reason education is failing is that it passes students who don’t fully understand what they are being taught. Not just on a grade-to-grade basis, but lecture-to-lecture. A student might not fully understand a subject, get labeled a B student after the test, and then be expected to understand the next lecture, which builds on what the student ought to know but doesn’t. Sal makes the analogy: it would be like a father trying to teach his son to ride a bike, evaluating him after a week, seeing his son is having trouble maintaining his balance on left turns and noticing trouble with managing the brakes, and then handing his son a unicycle and expecting his son to manage. It’s obviously faulty, it doesn’t work, and there’s a good reason this approach doesn’t take place outside of schools.

If you don't know how to jump by now, you're fucked.

Personally, I’ve experienced this the hard way. As a game designer during high school, I once collaborated with a friend to produce a tactical puzzle game (called “Pinnacle”) where a team of five players had to coordinate their military units in order to defeat an AI opponent by utilizing a particular tactic. At one point, we made the decision to, rather than failing the players if they couldn’t figure out a level and making them try again, instead just push them through to the next level (the idea being, we wanted to make it more arcade-y and let players taste the entire game, rather than getting stuck and quitting). In theory, this could possibly work. If each level didn’t require an understanding of previous levels, this would be totally okay. That not being the case though, it (and the players) failed miserably, with players progressing to harder and harder levels despite having never learned the basics (which were often difficult to convey in one try). We quickly realized our mistake and reverted it, and learned firsthand why in Super Mario Bros and other professional games, you can’t just skip ahead, nor does the designer push you forward- if you don’t understand the skills required to pass the current level, your experience with the next level is going to suck.

I’m not sure if the concept has a name, but if I had to call it something, I’d call it an educational gate. You can’t pass until you have the skills that will be expected of you on the other side of the gate. Most notably, these gates show up in the form of boss battles (although frankly, almost every instance, from the first Goomba in Mario to random pits in Prince of Persia, is technically a gate). Rather than trying to challenge the player, boss battles are typically designed to stop the player from progressing until he has achieved a certain level of mastery with a particular skill. (If you’ve ever tried your hand at one of the Legend of Zelda games, you know exactly what I mean.) In fact, I recall reading an article on Gamasutra that detailed a designer’s experiences with designing boss battles that did not test a player’s skills, and his explanation of how they sucked.

Frankly, I love this comparison because the contrast between exams and boss battles is ridiculous, despite them being analogous. I mean, okay, they both test us, but really, how much cooler would tests be if, instead of just testing abstract concepts, all of the questions were connected to a central theme that made us feel like we were really accomplishing something?

And furthermore, what if each lecture was a test in itself, that also made us feel like we were accomplishing something, while preparing us to take the exam? That’s how games work. Consider the scene below from Valve’s critically acclaimed Portal.

This scene made me cry.

In this particular scene in the game, the player must sacrifice his friend, the Companion Cube, in order to progress by dropping it in an incinerator. A relatively simple task, but it forces the player to understand how incinerators work.

Another incinerator, but this time, it's used to avenge the Companion Cube and destroy Glados, Portal's boss.

Later, an understanding of incinerators is required to defeat Glados, Portal’s boss. This is only the tip of the iceberg though- the entirety of Portal, Super Mario Bros, Zelda, Metroid, and many other classics were designed using this pattern. In reality, games are hardly games at all- they’re more like extremely engaging classrooms. (Spoiler: Learning is actually fun.)

Really, schools have such a long way to go, having made virtually no progress in pedagogy despite game designers having illuminated the way since the 70s. Anyway, now you understand why I’m such a critic of education. It’s just too hard not to be when you see it consistently done wrong.

Written by Ceasar Bautista

2011/03/19 at 20:43

Randomness and Strategy Games

leave a comment »

At first, randomness and strategy games would seem like unlikely partners. Strategy games emphasize the role of information in making decisions, while randomness is chaotic, and undermines planning. Despite their differences however, when used properly they can combine nicely.

First off, let me begin with some terminology. When I refer to tactics, I refer to a series of maneuvers designed to accomplish a specific task. In other words, tactics are the little tricks that you can calculate in your head. When I refer to strategic moves, I refer to moves that are made in the absence of calculation, guided instead by heuristics (in particular, experience).

With those definitions in mind, it is clear that when randomness is introduced into the low-levels (like, at an individual unit level) of a game, it elevates many tactics to the level of strategy. This can be good or bad, depending on your intentions with the game.

Randomness should not, however, come into the high levels of a game. I often debate the idea of macro-strategies, and how, if done improperly, the result can very easily be a rock-paper-scissors game disguised as a strategy game. This is bad, just in general.

Randomness should also be predictable. Not in the sense that it isn’t random, but in the sense that it shouldn’t be a 1% chance to create some uber-unfair effect. Again, depending on your intentions, the proc rates will vary, but typically I would claim that higher values are worse than lower values. The reason being, high proc rates will cause players to depend on them working, and when they don’t, it’ll cause a lot of frustration, while with low values, a player will find it more like it was just good luck.

For those unfamiliar with the idea, check out prospect theory. To make things brief though, marginal gains and marginal losses both diminish. Put another way, let me present to you a classic problem. An impending disease will strike a city, and you are put in charge of choosing a plan that will protect the lives of its inhabitants. In Case 1, you are presented two options: Plan A will save 100 people. Plan B has a 50% chance to save 200 people, and a 50% chance to save none. Record your choice. In Case 2, you are again presented two options: This time, Plan A will result in the death of 100 people, and Plan B has a 50% chance to result in the death of 200 people, and a 50% chance to result in the death of none. Record your choice. Now, if you are like most people, you chose Plan A in Case 1 and Plan B in Case 2, despite the two cases being equivalent. Here’s why: In Case 1, the value of the first 100 lives saved is to our brains worth more than the value of the next 100 lives saved. Thus, a 50% chance to double the number of lives saved doesn’t cut it. In Case 2, the cost of the first 100 lives lost weighs more than that of the next 100 lives lost. In this case, where we want to minimize loss, taking the risk seems like a good idea.

What does this mean for designers? Well, for one, it means that if people are going to use something, they will prefer a 100% proc with a small effect rather than anything less than a 100% proc with a bigger effect. If their opponent is going to use something, they prefer the inverse. And finally, it means that testers will likely report on randomness in ways that pretty much mean nothing.

Debatably, it may make sense to not even make randomness random at all. I believe it was Civilization Revolution that did away with randomness altogether, and instead opted for an approach that guaranteed that a 1/x proc would proc once every x times, citing that players typically felt that the game was cheating them when abilities proc-ed repeatedly in their opponent’s favor. I personally think that if this idea can be reasonably implemented, it’s a better approach, as people tend to remember negative experiences a lot more strongly than positive ones. (This is scientifically verified, not just opinion. This is why, when we are unsure, we keep our original answers on tests rather than changing them, despite evidence that says to do the opposite. That is, we remember when we changed an answer and got something wrong far more than when we changed an answer and got it right.)
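
One simple way to implement a guaranteed 1-in-x proc is a shuffled bag; this is my sketch of the general idea, not Civilization Revolution’s actual code:

```python
import random

class FairProc:
    """A 1-in-x proc that is guaranteed to fire exactly once every x uses,
    at a random position within each window of x (a shuffled-bag sketch)."""

    def __init__(self, x, rng=random):
        self.x, self.rng, self.bag = x, rng, []

    def roll(self):
        if not self.bag:                        # refill a fresh window
            self.bag = [True] + [False] * (self.x - 1)
            self.rng.shuffle(self.bag)
        return self.bag.pop()

proc = FairProc(4)
rolls = [proc.roll() for _ in range(20)]
print(sum(rolls))  # exactly 5 procs in 20 rolls
```

The worst-case drought is bounded (2x-2 rolls), so the ability still feels random without ever cheating a player out of an entire window.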

Randomness also removes a player’s sense of agency, which is vital in games. I can’t remember where I read it, but players prefer an opponent being buffed to themselves being negatively buffed (and, I presume, the opposite for their enemies). Randomness has the same effect, and should be used sparingly.

Anyway, while you should factor all of those thoughts in, the real aim of this post was how all this relates to strategy games. As said before, we want our randomness to not be so random, in order to develop sounder strategies. I would rather have a 50% proc which forces me to plan for what happens in either case, as opposed to a >50% proc which I will count on working, only to be totally screwed when it fails, or, perhaps better yet, a <50% proc, where I will count on the proc failing and be happy when it turns out to go well.

In order to maintain the sense of agency, which is of utmost importance in strategy games, randomness ought to have minimal effect, which means the range of its possible outcomes should be small. That is, if a unit is to deal on average 50 damage, it’s better to have it deal 40-60 with equal weights as opposed to 0-100 with equal weights. Furthermore, careful effort should be made to ensure that luck never (directly, or rather, noticeably) affects the outcome of a game.
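
The two spreads are easy to compare directly: same average damage, wildly different volatility (the numbers below assume uniform integer rolls):

```python
import statistics

def spread_stats(low, high):
    """Mean and standard deviation of a uniform integer damage roll."""
    rolls = range(low, high + 1)
    return statistics.mean(rolls), statistics.pstdev(rolls)

for low, high in [(40, 60), (0, 100)]:
    mean, sd = spread_stats(low, high)
    print(f"{low}-{high}: mean={mean}, sd={sd:.1f}")
```

Both ranges average 50, but the 0-100 roll is nearly five times as volatile, which is exactly the noticeable, game-swinging luck we want to avoid.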

Finally, let me refer back to this post on the TDG forums that I made a while ago, detailing randomness and its effects on intensity.

Written by Ceasar Bautista

2011/01/28 at 03:45

The Nature of Imbalance

with one comment

Before I can explain the combat effectiveness of a heterogeneous group, I need to make sure the reader understands a few things first.

First, recall that in homogeneous groups, every additional unit increases the strength of the group exponentially.

Recall also that a rational player will target enemy units according to their Offense to Defense ratio, the highest of which will be targeted first.

With that in mind, I can explain.

Let’s consider the simple case of a group of two units, one unit with 1 Offense (O) and 1 Defense (D), and the other with sqrt(3) O and sqrt(3) D. Since both units in the group have the same Offense to Defense ratio, the order in which our enemy targets our units is unimportant, and therefore we can calculate the strength of our group against the Universal Unit to get a K-value for our group. That said, it comes out to (1 + sqrt(3))*1 + sqrt(3)*sqrt(3) = 4 + sqrt(3) ~= 5.73K.

Notice though, that the unit with sqrt(3) Offense and sqrt(3) Defense has a K-value of 3, which means it is worth exactly twice as much as the other unit. That is to say, the value of our group is $1 + $2 = $3.

However, notice that $3 worth of 1O, 1D units produces an effective strength of 1+2+3=6K. This suggests that my previous suggestion of how to price units is not adequate.

Things break down even more when the Offense to Defense ratios are different. If a player rationally targets the unit with the highest Offense to Defense ratio, he will make heterogeneous groups even less effective. Consider for example another group worth $3, this time composed of a unit with 1 Offense and 1 Defense and a unit with 1 Offense and 3 Defense. In this case, the 1 Offense, 1 Defense unit is more threatening, and so with that unit targeted first, the group’s strength comes out to a measly (1+1)*1+1*3=5K. Should the player have done otherwise, he would have improved the group’s strength to an impressive (1+1)*3+1*1=7K.
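
The arithmetic above generalizes to any kill order: while unit i is still standing, every unit from i onward is firing, so each unit contributes (remaining Offense) * (its Defense). A quick sketch to check the numbers:

```python
def group_strength(units):
    """K-value of a group given the order its units die in; units is a list
    of (Offense, Defense) pairs, first-targeted first."""
    total = 0.0
    for i, (_, defense) in enumerate(units):
        remaining_offense = sum(o for o, _ in units[i:])
        total += remaining_offense * defense
    return total

r3 = 3 ** 0.5
print(group_strength([(1, 1), (r3, r3)]))        # 4 + sqrt(3) ~= 5.73
print(group_strength([(1, 1), (1, 1), (1, 1)]))  # 1 + 2 + 3 = 6
print(group_strength([(1, 1), (1, 3)]))          # enemy targets 1O/1D first: 5
print(group_strength([(1, 3), (1, 1)]))          # the order we'd prefer: 7
```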

This has some interesting implications. What we are basically saying here is that the effective strength of a two unit group is given by,

(O1 + O2)D1 + O2D2 = D1O1 + D1O2 + D2O2, or (O1 + O2)D2 + O1D1 = D1O1 + D2O1 + D2O2

What is important here is the middle term, the D1O2 (or D2O1). Since our opponent gets to choose which unit he will target first, he will always pick the order that makes the middle term smallest. Thus, if Unit 1 and Unit 2 are the same price, we try to make D1O2 = D2O1, or equivalently D1/O1 = D2/O2. Therefore, homogeneous groups are typically the most effective.

However, it is sometimes possible to force our enemy to attack our units in the way that we want him to. In this case, we want very heterogeneous units. In fact, if we could have it, we would make our units as different as possible- give one infinite Offense and infinitesimal Defense and the other infinite Defense and infinitesimal Offense, and we get a middle term that comes out to infinity^2!

As we can see, prices cannot possibly be determined by looking at stats alone. In fact, prices for units are completely arbitrary, fine-tuned to make the game play as the designer intended. Likewise, the strength of a unit cannot be measured in isolation, since units rarely fight alone or in purely homogeneous groups. So long as prices keep the decisions interesting, they are completely left to the designer’s discretion.

Written by Ceasar Bautista

2011/01/02 at 01:45