## Posts Tagged ‘**game design**’

## Games as Economies

In a previous post, I wrote about how it’s impossible to price an object in a game according to a systematic formula, barring games of limited complexity or objects that span the same line (that is, objects that are just different multiples of the same vector). Instead, I claimed that prices are arbitrary: designers can set them to whatever they like, and the game will still be fair so long as each player has equal opportunity. The designer’s job, then, is really just to play with the prices until they produce the interplay he is looking for.

In another post, I wrote about research in RTS games, and why spending resources for the option to train new units can pay off. While upgrades obviously boost the strength of an army, research unlocks new units for a price, and that price is only worth paying if one can expect to use the unlocked units such that the utility of their use exceeds the initial cost of the research. Thus, the price of research is also arbitrary.

Having since studied microeconomics, I’d like to revisit these topics.

—

Economists call the phenomenon I just described the marginal rate of substitution, or MRS for short. Formally, the MRS is defined as “the rate at which a person will give up good y for good x while remaining indifferent”. In other words, it’s the price at which you are willing to buy something using something else (i.e., how much you are willing to shell out for a can of soda). What’s interesting about the MRS is that it changes: the more a person has of good x, the more of good x one is willing to trade for good y. Said another way, because billionaires have so much money, they don’t mind paying $5.00 for a hot dog.

This is a far more intuitive way of looking at things than trying to predict prices from the attributes of the game objects. In short, all one needs to understand now is that players will buy an object when its utility exceeds its cost.
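To make the diminishing MRS concrete, here is a small sketch using a toy utility function u(x, y) = x*y (my own illustration, not from the original post); the MRS is just the ratio of marginal utilities, computed numerically:

```python
def mrs(x, y, eps=1e-6):
    """Marginal rate of substitution under the toy utility u(x, y) = x * y:
    how many units of y a player will give up for one more unit of x."""
    u = lambda x, y: x * y
    du_dx = (u(x + eps, y) - u(x, y)) / eps  # marginal utility of x
    du_dy = (u(x, y + eps) - u(x, y)) / eps  # marginal utility of y
    return du_dx / du_dy

# The more x you already hold, the less y you'll pay for one more x:
print(mrs(1, 16))   # ~16
print(mrs(4, 16))   # ~4
print(mrs(16, 16))  # ~1
```

This is exactly the billionaire and the hot dog: when one good is abundant, its owner trades it away cheaply.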

—

While the main point I wanted to convey has been made, I want to just put down some related ideas that don’t exactly deserve a post of their own but that I think are worth sharing.

-If you are familiar with Dominion, you may know MRS as “the Silver test”. (If you are not familiar with Dominion, all you need to know is that players regularly face the choice of buying cards with special effects or treasures, such as a Silver, which increase income.) That is, when making a non-trivial purchasing decision, one always has to consider whether the object at hand is in fact better than a Silver. What I find most interesting about the Silver test is how many players completely fail to pick up on this rule, instead being regularly misled by the incorrect assumption that “things that cost more must be better”. It certainly changed a few paradigms of mine once I noticed what was going on.

-Knowing when to buy what is the backbone of many games. In Dominion, playing cards is fairly trivial, but the decisions of which cards to buy each turn are often rather complex and, in the majority of cases, determine who wins. Likewise, in StarCraft, micro-ing units is fairly simple, but the decisions of which units to buy and which tech to research are far more complex, and far more important than any tactical feat. In short, your economics textbook may be more valuable than The Art of War.

## Analyzing the RTS

I recently read an article that claimed we really need to stop studying Facebook as a whole. If you look at what is said about Facebook, a bunch of people say that life is better because we’re all more connected now, and a bunch of people say that life is worse because people value face-to-face interaction less. Both views are partly valid, but it’s silly to pin either on “Facebook” as a whole. Facebook is a combination of tons of mechanics, a huge complicated system, and the only way to make sense of it is to atomize its components. (This isn’t really a new thought; dynamic programming, for example, uses pretty much the same idea. The author was just ranting to wake people up again.)

Real Time Strategy games are the same way, which often makes it hell to talk about them with people; there are so many factors that we may as well be predicting how a drop of dye will spread if we were to drop it in a glass of water. When I made Naval Commander, besides wanting to produce a simple version of the RTS for simplicity’s sake, my intention was partly to put to use a few of the formulas and thoughts that my friend Vinh and I had conjectured. However, even that was clearly too high-level to make any sense of.

Having recently played a bit of GemCraft Labyrinth, it occurred to me that it wouldn’t be very hard to distill things further. (GemCraft uses an almost exclusively soft-counter system, which in my opinion makes it very interesting.) All one would really need to do is break everything down into its components, understand what each component does, and then rebuild everything from those components. As far as I can tell, there are only three unique things to notice (undoubtedly more, but it’s 3:51 AM):

- High damage – Good against heavily armored targets. Bad against low hit point targets (overkill).
- Low damage – Good against low hit point targets. Bad against heavily armored targets.
- Large range – Good against slow moving targets. Loses value when overkilling.

So we can see some cool things with just this small list. Since an offense consists of some amount of damage, some attack speed, and some range, we can mix high and low values of these pieces to create a whole grid of interesting units. For example, one might construct an anti-armor unit whose job is to put heavy damage on armored targets. Such a unit would need high attack damage above all. Likely, he would have a slow attack speed to keep his cost down, and he would be extremely susceptible to units with low hit points, on whom he would waste most of his damage.

Armed with these basic components (or at least the realization of the need to break things down into their smallest parts), we could construct an entire RTS game that provides interesting choices to the players, without any need for tactical unit abilities, requiring instead only tactical movement.
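As a sketch of how these components interact, here is a toy damage model; the flat-armor rule and all the numbers are my own assumptions for illustration, not GemCraft’s actual formulas:

```python
import math

def time_to_kill(damage, rate, hp, armor, count=1):
    """Seconds for one attacker to destroy `count` identical targets,
    assuming flat armor is subtracted from every hit and leftover
    damage on a dying target (overkill) is wasted."""
    per_hit = max(damage - armor, 0)
    if per_hit == 0:
        return math.inf  # this attacker can't hurt the target at all
    hits_per_target = math.ceil(hp / per_hit)
    return hits_per_target * count / rate

# Two attackers with identical raw DPS (30): one slow and heavy, one fast and light.
heavy = dict(damage=30, rate=1)  # anti-armor profile
light = dict(damage=5, rate=6)   # anti-swarm profile

print(time_to_kill(**heavy, hp=100, armor=4))          # 4.0  (armor barely matters)
print(time_to_kill(**light, hp=100, armor=4))          # ~16.7 (armor eats most of each hit)
print(time_to_kill(**heavy, hp=6, armor=0, count=10))  # 10.0 (most damage lost to overkill)
print(time_to_kill(**light, hp=6, armor=0, count=10))  # ~3.3
```

The soft counters fall straight out of the components: high damage shrugs off armor but wastes itself on swarms, and low damage does the reverse.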

## Educational Gates or: How I Learned to Hate School and Love Games

If you’ve yet to watch Salman Khan’s talk at TED, head over now and give it a watch; you won’t regret it.

Sal says a lot of insightful stuff here, so I’ll be revisiting his talk several times, but at the moment I want to comment on a small point that Sal makes.

Sal explains that one reason education is failing is that it passes students who don’t fully understand what they are being taught, not just on a grade-to-grade basis, but lecture-to-lecture. A student might not fully understand a subject, get labeled a B student after the test, and then be expected to understand the next lecture, which builds on what the student ought to know but doesn’t. Sal makes the analogy that it would be like a father trying to teach his son to ride a bike: evaluating him after a week, seeing that his son has trouble maintaining his balance on left turns and managing the brakes, and then handing his son a unicycle and expecting him to manage. It’s obviously faulty, it doesn’t work, and there’s a good reason this approach doesn’t take place outside of schools.

Personally, I’ve experienced this the hard way. As a game designer during high school, I once collaborated with a friend to produce a tactical puzzle game (called “Pinnacle”) in which a team of five players had to coordinate their military units to defeat an AI opponent by utilizing a particular tactic. At one point, we decided that, rather than failing players who couldn’t figure out a level and making them try again, we would just push them through to the next level (the idea being that we wanted to make it more arcade-y and let players taste the entire game, rather than getting stuck and quitting). In theory, this could work: if each level didn’t require an understanding of previous levels, it would be totally okay. That not being the case, it (and the players) failed miserably, with players progressing to harder and harder levels despite having never learned the basics (which were often difficult to convey in one try). We quickly realized our mistake and reverted it, and learned firsthand why, in Super Mario Bros and other professional games, you can’t just skip ahead, nor does the designer push you forward: if you don’t have the skills required to pass the current level, your experience with the next level is going to suck.

I’m not sure if the concept has a name, but if I had to call it something, I’d call it an educational gate. You can’t pass until you have the skills that will be expected of you on the other side of the gate. Most notably, these gates show up in the form of boss battles (although frankly, almost every instance, from the first Goomba in Mario to random pits in Prince of Persia, is technically a gate). Rather than trying to challenge the player, boss battles are typically designed to stop the player from progressing until he has achieved a certain level of mastery with a particular skill. (If you’ve ever tried your hand at one of the Legend of Zelda games, you know exactly what I mean.) In fact, I recall reading an article on Gamasutra that detailed a designer’s experiences with designing boss battles that did **not** test a player’s skills, and his explanation of how they sucked.

Frankly, I love this article because the contrast between exams and boss battles is ridiculous, despite them being analogous. I mean, okay, they both test us, but really, how much cooler would tests be if, instead of just testing abstract concepts, all of the questions were connected to a central theme that made us feel like we were really accomplishing something?

And furthermore, what if each lecture were a test in itself that also made us feel like we were accomplishing something while preparing us for the exam? That’s how games work. Consider the Companion Cube scene from Valve’s critically acclaimed *Portal*.

In this particular scene in the game, the player must sacrifice his friend, the Companion Cube, in order to progress by dropping it in an incinerator. A relatively simple task, but it forces the player to understand how incinerators work.

Later, an understanding of incinerators is required to defeat GLaDOS, Portal’s boss. This is only the tip of the iceberg, though: the entirety of Portal, Super Mario Bros, Zelda, Metroid, and many other classics were designed using this pattern. In reality, games are hardly games at all; they’re more like extremely engaging classrooms. (**Spoiler**: Learning is actually *fun*.)

Really, schools have such a long way to go, having made virtually no progress in pedagogy despite game designers having illuminated the way since the 70s. Anyway, now you understand why I’m such a critic of education. It’s just too hard not to be when you see it consistently done wrong.

## Randomness and Strategy Games

At first, randomness and strategy games would seem like unlikely partners. Strategy games emphasize the role of information in making decisions, while randomness is chaotic and undermines planning. Despite their differences, however, when used properly they can combine nicely.

First off, let me begin with some terminology. When I refer to **tactics**, I mean a series of maneuvers designed to accomplish a specific task; in other words, tactics are the little tricks that you can calculate in your head. When I refer to **strategic moves**, I mean moves that are made in the absence of calculation, guided instead by heuristics (in particular, experience).

With those definitions in mind, it is clear that when randomness is introduced into the low levels of a game (say, at the individual unit level), it elevates many tactics to the level of strategy. This can be good or bad, depending on your intentions for the game.

Randomness should not, however, enter the high levels of a game. I often debate the idea of macro-strategies, and how, done improperly, they can easily produce a rock-paper-scissors game disguised as a strategy game. This is bad in general.

Randomness should also be predictable: not in the sense that it isn’t random, but in the sense that it shouldn’t be a 1% chance to create some uber-unfair effect. Again, the proc rates will vary depending on your intentions, but typically I would claim that higher rates are worse than lower ones. The reason is that high proc rates cause players to depend on the proc working, and when it doesn’t, it causes a lot of frustration, while with a low rate, a player treats a success as simply good luck.

For those unfamiliar with the idea, check out prospect theory. To make things brief: both marginal gains and marginal losses diminish. Put another way, let me present a classic problem. An impending disease will strike a city, and you are put in charge of choosing a plan to protect the lives of its inhabitants. In Case 1, you are presented two options: Plan A will save 100 people; Plan B has a 50% chance to save 200 people and a 50% chance to save none. Record your choice. In Case 2, you are again presented two options: this time, Plan A will result in the death of 100 people, and Plan B has a 50% chance to result in the death of 200 people and a 50% chance to result in the death of none. Record your choice. If you are like most people, you chose Plan A in Case 1 and Plan B in Case 2, despite every option having the same expected value. Here’s why: in Case 1, the first 100 lives saved are worth more to our brains than the next 100 lives saved, so a 50% chance to double the number of lives saved doesn’t cut it. In Case 2, the first 100 lives lost likewise hurt more than the next 100 lives lost, so when minimizing loss, taking the risk seems like a good idea.
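The asymmetry can be sketched with the Kahneman–Tversky value function, using their published parameter estimates (probability weighting is omitted here for simplicity, so this is an illustrative sketch, not the full theory):

```python
def v(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains, convex and
    steeper (loss aversion) for losses, measured from a reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

# Case 1 (gains frame): the sure 100 beats the 50/50 gamble on 200.
print(v(100) > 0.5 * v(200))    # True
# Case 2 (losses frame): the 50/50 gamble on -200 beats the sure -100.
print(0.5 * v(-200) > v(-100))  # True
```

The same person flips from risk-averse to risk-seeking purely because the frame moves from gains to losses.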

What does this mean for designers? For one, it means that if players are going to use something themselves, they will prefer a 100% proc with a small effect over anything less than a 100% proc with a bigger effect; if their opponent is going to use it, they prefer the inverse. And finally, it means that testers will likely report on randomness in ways that mean very little.

Debatably, it may make sense to not make randomness truly random at all. I believe it was Civilization Revolution that did away with true randomness altogether, opting instead for an approach that guaranteed a 1/x proc would occur once every x times, citing that players typically felt the game was cheating them when abilities proc-ed repeatedly in their opponent’s favor. I personally think that if this idea can be reasonably implemented, it’s a better approach, as people tend to remember negative experiences far more strongly than positive ones. (This is scientifically verified, not just opinion. It’s why, when unsure, we keep our original answers on tests rather than changing them, despite evidence suggesting we should do the opposite: we remember the times we changed an answer and got it wrong more than the times we changed an answer and got it right.)
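One simple way to implement this kind of “fair” randomness is a shuffle bag. This is a sketch of the general technique, not necessarily what Civilization Revolution actually shipped:

```python
import random

class ShuffleBag:
    """A '1 in x' proc that is guaranteed to fire exactly once in every
    block of x draws; only the position within each block is random,
    so unlucky streaks are strictly bounded."""

    def __init__(self, x, rng=None):
        self.x = x
        self.rng = rng or random.Random()
        self.bag = []

    def proc(self):
        if not self.bag:  # refill and reshuffle every x draws
            self.bag = [True] + [False] * (self.x - 1)
            self.rng.shuffle(self.bag)
        return self.bag.pop()

bag = ShuffleBag(5)
draws = [bag.proc() for _ in range(100)]
print(sum(draws))  # 20 -- exactly 1-in-5 over any 100 draws
```

Over the long run the rate is exactly 1/x, but no player will ever see the proc fail (or fire) many more times in a row than the advertised odds suggest.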

Randomness also reduces a player’s sense of agency, which is vital in games. I can’t remember where I read it, but players prefer that an opponent be buffed rather than that they themselves be debuffed (and, I presume, the opposite for their enemies). Randomness has the same effect, and should be used sparingly.

Anyway, while you should factor in all of those thoughts, the real aim of this post is how randomness relates to strategy games. As said before, we want our randomness to not be so random that it prevents sound planning. I would rather have a 50% proc, which forces me to plan for both outcomes, than a much higher proc rate that I will count on working, only to be totally screwed when it fails. Perhaps better yet is a low proc rate, where I will plan around the proc failing and simply be happy when it fires.

In order to maintain a sense of agency, which is of utmost importance in strategy games, randomness ought to have minimal effect, which means the range of its possible outcomes should be small. That is, if a unit is to deal 50 damage on average, it’s better to have it deal 40-60 with equal weights than 0-100 with equal weights. Furthermore, careful effort should be made to ensure that luck never (directly, or rather, noticeably) decides the outcome of a game.

Finally, let me refer back to a post I made on the TDG forums a while ago, detailing randomness and its effects on intensity.

## The Nature of Imbalance

Before I can explain the combat effectiveness of a heterogeneous group, I need to make sure the reader understands a few things first.

First, recall that in homogeneous groups, every additional unit increases the strength of the group faster than linearly (the group’s strength follows the triangular numbers).

Recall also that a rational player will target enemy units according to their Offense to Defense ratio, the highest of which will be targeted first.

With that in mind, I can explain.

Let’s consider the simple case of a group of two units: one with 1 Offense (O) and 1 Defense (D), and the other with sqrt(3) O and sqrt(3) D. Since both units have the same Offense to Defense ratio, the order in which our enemy targets them is unimportant, so we can calculate the strength of our group against the Universal Unit to get a K-value for the group. Doing so, it comes out to (1 + sqrt(3))*1 + sqrt(3)*sqrt(3) = 4 + sqrt(3) ~= 5.73K.

Notice, though, that the unit with sqrt(3) Offense and sqrt(3) Defense has a K-value of 3 on its own, which means it is worth exactly twice as much as the other unit. That is to say, the value of our group is $1 + $2 = $3.

However, notice that $3 worth of 1 O, 1 D units produces an effective strength of 1+2+3 = 6K. This suggests that my previous suggestion of how to price units is not adequate.

Things break down even more when the Offense to Defense ratios differ. If a player rationally targets the unit with the highest Offense to Defense ratio, he will make heterogeneous groups even less effective. Consider, for example, another group worth $3, this time composed of a unit with 1 Offense and 1 Defense and a unit with 1 Offense and 3 Defense. In this case, the 1 Offense, 1 Defense unit is more threatening, and with that unit targeted first, the group’s strength comes out to a measly (1+1)*1 + 1*3 = 5K. Had the attacker done otherwise, he would have improved the group’s strength to an impressive (1+1)*3 + 1*1 = 7K.

This has some interesting implications. What we are basically saying is that the effective strength of a two-unit group is given by,

(O1+O2)D1+O2D2 = D1O1 + D1O2 + D2O2 or (O1+O2)D2+O1D1 = D1O1 + D2O1 + D2O2

What is important here is the middle term, the D1O2. Since our opponent gets to choose which unit to target first, he will always pick the order that makes the middle term smaller: D1O2 or D2O1. If Unit 1 and Unit 2 are the same price, we therefore want the two possibilities to be equal, D1O2 = D2O1, or equivalently D1/O1 = D2/O2, so that nothing is lost to his choice. This is why homogeneous groups are typically the most effective.

However, it is sometimes possible to force our enemy to attack our units in the order that *we* want *him* to. In that case, we want very heterogeneous units. In fact, if we could have it, we would make our units as different as possible: give one unit infinite Defense and infinitesimal Offense, make it the first target, give the other infinite Offense and infinitesimal Defense, and the middle term comes out to infinity squared!
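These two-unit calculations generalize to groups of any size, and a small helper makes them easy to check. This is my own sketch of the post’s model: each unit survives for Defense seconds against the Universal Unit’s 1 damage per second, and every still-living unit contributes its Offense the whole time:

```python
def group_strength(units):
    """K-value of a group of (offense, defense) units destroyed in list
    order: while each unit survives (its defense in seconds against 1 dps),
    the summed offense of all remaining units is dealt."""
    total = 0
    remaining_offense = sum(o for o, d in units)
    for offense, defense in units:
        total += remaining_offense * defense
        remaining_offense -= offense
    return total

print(group_strength([(1, 1), (1, 3)]))  # 5 -- rational enemy kills (1,1) first
print(group_strength([(1, 3), (1, 1)]))  # 7 -- enemy forced onto the tank first
print(group_strength([(1, 1)] * 3))      # 6 -- $3 of baseline units
```

The gap between 5 and 7 is exactly the middle term changing hands, and the homogeneous $3 group sits safely in between.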

As we can see, prices cannot possibly be determined by looking at stats alone. In fact, unit prices are ultimately arbitrary, fine-tuned to make the game play as the designer intended. Likewise, the strength of a unit cannot be measured in isolation, since units rarely fight alone or in purely homogeneous groups. So long as the prices keep the decisions interesting, they are left entirely to the designer’s discretion.

## Prioritizing Targets

Not sure where this belongs exactly yet, but I figured it was worth mentioning. With all of these Hit Points and Damage figures, there is a better way to prioritize targets than mere convenience, and it’s simple, really: determine which unit is dealing the most damage per hit point, and kill him first.

Take Halo 3, for example. Suppose you encounter two enemies simultaneously: a Hunter, armed with a large Assault Cannon, and a Grunt, armed with a Fuel Rod Cannon. Both weapons are extremely dangerous. However, the Hunter is very difficult to kill, possessing thick armor and a large metallic shield on his left arm which no projectile can penetrate, while the Grunt is very easy to kill, possessing no armor and very few hit points.

The decision before you is this: target the Hunter first and attempt to dodge both weapons while searching for a weak spot in its armor, or shoot the Grunt, kill him quickly, and then fight the Hunter alone. Not much of a choice unless you like a challenge.

Mathematics tells us to take exactly the same course of action in RTS battles. Put simply, to figure out which unit to strike first, determine the DPS of each enemy unit, divide it by that unit’s hit points, and strike the unit with the highest result. Usually, though, this is intuitively obvious.
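As a sketch, here is that priority rule in code; the Halo stats are invented for illustration, not taken from the game:

```python
def target_priority(enemies):
    """Order enemies by DPS per hit point, highest first: the biggest
    threat relative to the effort needed to remove it."""
    return sorted(enemies, key=lambda e: e["dps"] / e["hp"], reverse=True)

enemies = [
    {"name": "Hunter", "dps": 40, "hp": 500},  # dangerous but durable
    {"name": "Grunt", "dps": 35, "hp": 50},    # nearly as dangerous, fragile
]
print([e["name"] for e in target_priority(enemies)])  # ['Grunt', 'Hunter']
```

Even though the Hunter deals slightly more damage, the Grunt’s threat-per-hit-point is nearly ten times higher, so he dies first.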

**Factor in Capital**

In strategy games, though, there is also capital. While in most cases a damaged unit can be regenerated or repaired, sometimes at no cost besides time, a dead unit stays dead and is a permanent loss of resources. With this in mind, the priority can shift to striking the unit with the highest cost to hit point ratio.

Ultimately, it comes down to whether or not you can ensure a kill. If you can’t, it’s best to prioritize by damage to health ratios in order to preserve the hit points of your force. Otherwise, it’s significantly better to take out what you can, regenerate your units, and then fight again on ground tilted in your favor.

**Why Cheap Units Are Still Useful**

Worrying about capital losses usually means investing minimally in weak units, since they die very easily. I’ve often wondered if I should make a new formula to adjust for this problem, making expensive units less and less cost effective. I think, though, that cheap units are underestimated. The one power they have over their more expensive counterparts is that they can be in multiple places at once. Utilized correctly, this can overwork an enemy defense and let you penetrate where a more expensive force couldn’t. (The trick is to set it up so that the cheap units provide an extra kick for your other, more expensive units, making each one more powerful individually, and then strike the enemy. He’ll be unable to divert his resources as necessary, since they are all centralized in expensive units.) However, this is based on my experience with Naval Commander, which was accidentally balanced “incorrectly”: the attack mechanism chose a random target in range instead of focusing down weak units, effectively making the strength of units scale linearly. I’ll have to play with the idea a little more in the future.

## Balancing Forces

Let me start this post with an email from my friend Vinh, as he explains the core concepts of force strength particularly well.

This formula relates n amount of Unit 1 and n amount of Unit 2. I call it the “N vs N” formula, where N obviously stands for the amount of units.

Basically I started with this formula:

H1*D1 = H2*D2

But what would happen if there were 2 of Unit 2? Let’s call these two units “Force 2.”

Intuition would tell me to multiply the attributes of Unit 1 by 2, in order to balance the two forces. However, if we just plug in numbers we can see that this doesn’t work out. Force 2 will still have an edge.

We can demonstrate this with a scenario.

H1*D1 = H2*D2

H1 = 100, D1 = 1
H2 = 50, D2 = 2

Since there are 2 of Unit 2, we multiply H1 by 2 to balance it:

H1 = 200, D1 = 1

One unit of Force 2 will die in 50 seconds, during which time Force 2 will deal a total of 200 damage and kill Force 1 completely! The two forces do not kill each other at the same time, and therefore they are not balanced.

Obviously there is a trick involved here. We have to derive a formula mathematically. Intuition fails us here.

For both forces to be balanced, they must kill each other at the same time. Given a set amount of time (T), both forces must deal the same amount of damage.

Now, if we set T equal to the time the unit is alive, which is his HP divided by the enemy’s damage, and arbitrarily balance against an enemy whose damage is equal to 1, we get a unit’s net damage.

D1 * T = D1 * (H1/1) = D1 * H1 = Net Damage

We can conclude that in order to be balanced, both forces must deal the same amount of Net Damage.

Force 2 has 2 Units, and therefore the total time the whole Force is alive is modeled differently.

One unit in Force 2 will eventually die; while he is alive, the whole of Force 2 deals 2 times his damage (since there are 2 units).

So, 2*(D2*H2) = Net Damage so far. However, this is not the whole story, since one unit is still alive afterward. The damage the surviving unit deals on his own also contributes to the Net Damage, so we must add it to the formula.

2*(D2*H2) + (D2*H2) = Net Damage.

Now we set the Net Damage of Force 1 and Force 2 equal to get

H1*D1 = 2*(D2*H2) + (D2*H2).

I’ll shorten this letter by letting you clarify this with your own scenario.

Using mathematical induction, I can show this also holds if Force 2 is comprised of 3 units (and Force 1 still has 1 unit):

H1*D1 = 3*(H2*D2) + 2*(H2*D2) + (H2*D2) (adding the terms together)

H1*D1 = 6*(H2*D2)

Now what if it was a 2 vs 3 situation?

Using the pattern that I found, this would be my guess

2*(H1*D1) + (H1*D1) = 3*(H2*D2) + 2*(H2*D2) + (H2*D2)

3*(H1*D1) = 6*(H2*D2)

Once again, I’ll leave you to make a scenario to clarify this formula. It should work out.

Take note that the amount of units in both forces is a relative term. It isn’t really the “amount” of units; it’s actually the ratio between their costs.

In a 1 v 1 scenario, both units cost the same amount. In 1 v 2, Unit 2 costs half as much as Unit 1. In 1 v 3, Unit 2 costs a third as much as Unit 1, and so on.

So in fact, a 2 v 2, or a 3 v 3 scenario is modeled by the same formula as a 1 v 1 scenario, since their ratios will reduce to 1 : 1.

-Vinh
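Vinh’s claims are easy to check with a small event-based duel in which each force focuses fire on one enemy unit at a time (my own sketch of his model, with units written as [hp, dps] pairs):

```python
def duel(force1, force2):
    """Simulate two forces of [hp, dps] units, each focusing fire on the
    front unit of the enemy list. Returns the survivors of each force."""
    f1 = [list(u) for u in force1]
    f2 = [list(u) for u in force2]
    while f1 and f2:
        dps1 = sum(d for _, d in f1)
        dps2 = sum(d for _, d in f2)
        # advance time to the next death on either side
        t = min(f2[0][0] / dps1, f1[0][0] / dps2)
        f1[0][0] -= dps2 * t
        f2[0][0] -= dps1 * t
        f1 = [u for u in f1 if u[0] > 1e-9]
        f2 = [u for u in f2 if u[0] > 1e-9]
    return f1, f2

# Vinh's unbalanced scenario: doubling H1 is not enough against two units.
print(duel([[200, 1]], [[50, 2], [50, 2]]))  # Force 1 wiped out, Force 2 survives
# The formula's fix: H1*D1 = 3*(H2*D2), i.e. 150*1 = 3*(50*1).
print(duel([[150, 1]], [[50, 1], [50, 1]]))  # ([], []) -- mutual destruction
```

With the formula satisfied, both forces hit zero at exactly the same moment, which is the balance condition Vinh derives.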

As we can see, the strength of a force grows faster than linearly with each additional unit (assuming units take up no space and don’t overkill their targets). We can conclude, then, that a unit that costs twice as much as another unit should be more than twice as powerful. (The particular function for determining the strength of a unit given its price is given by the triangular numbers. Equally, the triangular numbers model the strength of a homogeneous force as it increases in size.)

If we multiply a force’s strength by the raw strength of any unit in the force, we can find the true strength of the force. I arbitrarily chose to call this value K. K is a useful concept because it allows us to determine prices: for baseline units, simply solve the triangular-number relation K = x*(x+1)/2 for x, where x represents the number of units in the force. Since the number of units is directly proportional to the price of the force, simply divide another arbitrary constant (the force’s budget) by x to obtain per-unit prices.
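Inverting that relation is a one-liner via the quadratic formula (a sketch of the procedure described above; the final budget division is left out):

```python
import math

def units_worth(k):
    """How many baseline (1 O, 1 D) units a strength of K is equivalent
    to, by inverting the triangular-number relation K = x*(x+1)/2."""
    return (math.sqrt(1 + 8 * k) - 1) / 2

print(units_worth(1))  # 1.0 -- a single baseline unit
print(units_worth(3))  # 2.0 -- a K of 3 is worth two baseline units
print(units_worth(6))  # 3.0 -- a K of 6 is worth three
```

This reproduces the earlier pricing: a unit with a K-value of 3 is worth exactly twice a baseline unit.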

From actually putting this formula to use, I’ve found a few problems. First, it’s very difficult to be precise; not much of a problem, but a problem nonetheless. Second, the formula prices units strictly with respect to combat. Since many attributes have utility along other dimensions, movement speed for example, the pricing is not exactly accurate. Finally, as mentioned before, collision size is not accounted for either. Theoretically, collision size should be directly proportional to cost, with more expensive units being bigger, but as that’s not always the case, it can really break the numbers sometimes.

Nevertheless, the formula should be a decent start for any designer.