Ceasar's Mind

Follow me: @Ceasar_Bautista

The Quadratic Sieve

Just got done with a terrible day of magic numbers. To spare everyone the trouble, here’s a fairly clear version of a quadratic sieve for generating prime numbers that ought to be self-explanatory and easily modified, as opposed to many other algorithms out there.

def primes(n):
    '''Get all primes up to n.'''
    n = int(n)
    if n < 2:
        return []
    sieve = list(range(n))  # sieve[i] == i while i might still be prime, else 0
    sieve[1] = 0
    root = n ** 0.5
    index = 2
    while index <= root:
        if sieve[index]:
            # index is prime; knock out its multiples, starting at its square
            i = index ** 2
            while i < n:
                sieve[i] = 0
                i += index
        index += 1
    return [x for x in sieve if x]
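As a quick sanity check, here is the sieve again in self-contained form, along with its output for a small n:

```python
def primes(n):
    '''Get all primes up to n.'''
    n = int(n)
    if n < 2:
        return []
    sieve = list(range(n))  # sieve[i] == i while i might still be prime, else 0
    sieve[1] = 0
    root = n ** 0.5
    index = 2
    while index <= root:
        if sieve[index]:
            # index is prime; knock out its multiples, starting at its square
            i = index ** 2
            while i < n:
                sieve[i] = 0
                i += index
        index += 1
    return [x for x in sieve if x]

print(primes(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```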

Written by Ceasar Bautista

2011/07/10 at 02:49

Posted in Uncategorized

Decoding State

In a previous article covering the concept of using decision trees for Robocode I posed the question:

Finally, how do we best encode the information regarding the state of the game itself? For example, does it make sense to store the distance between the two tanks even though the same information could be determined via enough splits? (I say yes, if just to speed things up) If so, what other attributes would be relevant?

I’m still not certain of the answer, but I’ve got some additional insight to further clarify the problem.

So we know that given a table of the state of the game encoded in any form (i.e. the x and y positions of both bots, hit points, and the size of the battlefield) we could, given enough data points, build out a tree that very well approximates the correct firing angles for every situation. This makes encoding something like “distance to target” technically redundant. However, the distance to the target seems to be a very relevant attribute to capture, something clearly not easily captured via the x and y positions of the two bots. I’m not sure how it hit me, but that uncapturable essence of distance is unique strictly because it describes the relationship between the two variables.

That said, we now run into a problem: there are an infinite number of relationships between any two variables. Distance, obviously, is the relationship defined by (x - a) ** 2 + (y - b) ** 2 = c. But countless other relationships exist, for example the curves defined by polynomials (i.e. a * x ** 2 + b * x + c = y) or the curves defined by exponential relations (i.e. y = a * n ** x + b). These relationships quickly become impossible to handle exhaustively, and they were the main reason I decided to investigate other options once I realized multiple regressions suffer the same fault. (Remember that what I’ve just described is only the relationship between two dimensions. With each added dimension, even the simple relationships grow at a rate of O(n!).)

Fortunately, it may very well be the case that it’s not TOO important that we capture all of them. Take for example the concept of curve fitting. While for any n points a polynomial of degree (n - 1) is guaranteed to pass through all of them, such polynomials are often so ridiculous in their fit that they are not useful for analyzing data. Thus, many statisticians will deliberately use only simple models to analyze data. It’s more or less the same for us: if we capture only the relationships we expect to be useful, we’ll probably be good enough for most cases (especially given the size of the tanks and the relatively small size of the battlefield).
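In code, “capturing a useful relationship” can be as simple as precomputing it as an extra column before the learner ever sees the table. A minimal sketch, with hypothetical column names: x, y for our bot’s position and a, b for the target’s, mirroring the (x - a) ** 2 + (y - b) ** 2 = c relationship above:

```python
from math import hypot

def add_distance(row):
    """Add a derived 'distance' column computed from the raw positions.
    The column names here are hypothetical, not from any real bot."""
    row = dict(row)  # don't mutate the caller's row
    row['distance'] = hypot(row['x'] - row['a'], row['y'] - row['b'])
    return row

state = {'x': 3.0, 'y': 4.0, 'a': 0.0, 'b': 0.0}
print(add_distance(state)['distance'])  # 5.0
```

The tree could eventually approximate this via many x/y splits, but handing it the derived column lets a single split capture the relationship directly.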

All told, while I’m still interested in figuring out definitively the right way to handle decoding the data, I think that for now, by capturing just a few relationships that describe circles, straight lines, and hyperbolas, a very reasonable tank could be constructed.

Written by Ceasar Bautista

2011/07/08 at 02:28

Robocode, Decision Trees (Dynamic Segmentation), and Information Science

So some time ago, I discovered that when I was talking about what is known to the Robocode community as “Dynamic Segmentation”, what I was actually talking about is known to the machine learning community as a decision tree. To those who are unfamiliar, a decision tree is a tool used to classify information by identifying relevant clusters. That’s still a little vague, so I’ll explain in the context of trying to compress an image.

Let’s imagine that we have an image of black and white pixels that we’re trying to compress. To start, we first translate everything into a bit matrix. Now things get interesting. To compress the image optimally, we need to recursively divide the image along the x or y axis until each bit is in a box of identical numbers. There are several ways to do this, the most obvious being to perform a depth-first search of all possible combinations of splits that satisfy the end condition until the tree with the fewest splits is found. That’s a terrible solution though, and fortunately information science offers some help.

Rather than doing a recursive depth-first search, a better solution is to calculate the entropy of the original image, calculate the reduction in entropy for every possible split, and then select the one that reduces entropy the most. (For those unfamiliar, entropy is more or less the misclassification rate; not exactly, but the two are closely related.) Doing so repeatedly gives us not necessarily the best solution, but a greedy one that is good enough.
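To make the split-selection step concrete, here’s a small sketch on a single row of black-and-white pixels: compute the entropy of the whole row, then pick the cut that most reduces the weighted entropy of the two halves. (The helper names are mine, not from any library.)

```python
from math import log2

def entropy(bits):
    """Shannon entropy (in bits) of a 0/1 sequence."""
    if not bits:
        return 0.0
    p = sum(bits) / len(bits)
    if p in (0.0, 1.0):
        return 0.0  # a pure region carries no uncertainty
    return -p * log2(p) - (1 - p) * log2(1 - p)

def best_split(bits):
    """Cut index that most reduces the size-weighted entropy of
    bits[:i] and bits[i:], together with the gain it achieves."""
    base = entropy(bits)
    best_i, best_gain = None, 0.0
    for i in range(1, len(bits)):
        left, right = bits[:i], bits[i:]
        weighted = (len(left) * entropy(left)
                    + len(right) * entropy(right)) / len(bits)
        if base - weighted > best_gain:
            best_gain, best_i = base - weighted, i
    return best_i, best_gain

row = [0, 0, 0, 0, 1, 1, 1, 1]
print(best_split(row))  # (4, 1.0): cutting between the runs removes all entropy
```

For a two-dimensional image, the same gain computation is simply run over every candidate row and column cut, recursively.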

The resulting tree structure used to compress the image is called a decision tree, the reason being that we now have a series of rules that can tell us what color a pixel ought to be at any x-y coordinate, not so different from a min-heap really. The code for a more general tree is below.

class DecisionTree():
    def __init__(self, table, target_column, fitness_func):
        self.table = table
        self.target_column = target_column
        self.fitness_func = fitness_func
        self.tree = None

    def build(self):
        '''Build the decision tree.'''
        target_values = self.table.select(self.target_column)

        if not self.table:
            return None
        elif len(set(target_values)) == 1:
            # All rows agree on the target, so this node is a leaf.
            return target_values[0]

        splitting_column = self.choose_column()
        if splitting_column is None:
            # No split is worth making; fall back to the most common value.
            return sample_analysis.mode(target_values) #Could be average or median for continuous data

        self.tree = {'splitting_column': splitting_column}
        #Could be a problem on big data
        splits = {}
        for row in self.table.get_rows():
            splits.setdefault(row[splitting_column], []).append(row)
        for split in splits:
            subtree = DecisionTree(Table(splits[split]), self.target_column, self.fitness_func)
            subtree.build()
            self.tree[split] = subtree
        return self.tree

    def choose_column(self, significance=0.0):
        '''Get the column with the highest information gain.'''
        best_gain = 0.0
        best_column = None

        for column in self.table.columns:
            gain = self.fitness_func(self.table, column, self.target_column)
            if gain > best_gain:
                best_gain = gain
                best_column = column

        if best_gain > significance: #Split only when the gain is significant
            return best_column
        return None

tldr: A decision tree algorithm takes a table of data, a target column, and a fitness function (used to figure out how to split the data) and constructs an n-dimensional tree which can be used to explain the data.

Now, back to Robocode. The decision tree is an incredibly formidable tool. Not only can it be used to compress and explain data, but it can be, and is, used for forecasting. Since the algorithm ultimately identifies clusters, we can reasonably expect future data to simply fit into our rules. So simply load it up with a table of wave data, identify the correct firing angle as the column to split on, select a reasonable fitness function (these get complex), and let the code figure out the clusters. Then just before firing, apply the rules you’ve discovered to find the relevant cluster, and fire!

All said, there are a few interesting technical problems that arise that I want to raise, partly because I don’t yet know the answers, and partly for you, the reader, to consider before getting to coding.

  • First off, what fitness functions do we use to categorize the data at each split? It gets tricky when you start mixing in categorical data with numerical data. Even barring that, fitness functions often can be biased.
  • A tree is a static data structure. Therefore, if it can be determined that the tree is not performing well, the entire tree needs to be recalculated (in opposition to what I had believed earlier about simply splitting leaves). While this isn’t so much a problem for small amounts of data, with huge amounts it can be. Are there ways to make the tree more flexible so that it never has to be fully recalculated?
  • Done incorrectly, a tree can overclassify the data, for example making each row id its own cluster. Even if that can be fixed (and it can), where is the correct spot to draw the line on classifying the data, taking into account that the more splits we make, the less likely it is that we split correctly? (That is to say, at the most extreme, you can be 100% sure you correctly classified the data as ‘data’ if you make no splits.) My intuition says some genetic algorithms could figure this out, but it’s hard to say for sure.
  • Finally, how do we best encode the information regarding the state of the game itself? For example, does it make sense to store the distance between the two tanks even though the same information could be determined via enough splits? (I say yes, if just to speed things up) If so, what other attributes would be relevant?

In any case, there is a LOT of research on this topic as I’ve recently discovered. (Claude Shannon is the man!) Let me know of any thoughts / answers and good luck!

Written by Ceasar Bautista

2011/06/27 at 23:49

Analyzing the RTS

I recently read an article that claimed that we really need to stop studying Facebook as a whole. That is, if you look at what is said about Facebook, there are a bunch of people who say that life is better because we’re all more connected now and there are a bunch of people who say that life sucks more now because people value face-to-face interaction less. The point is though, that these are both partly valid, but it’s stupid to say it’s Facebook’s fault. Facebook is a combination of tons of mechanics, a huge complicated system, and the only way to make sense of it is to atomize its components. (This isn’t really a new thought- dynamic programming for example pretty much uses the same idea- but the guy was just ranting to wake people up again.)

Real Time Strategy games are the same way, which often makes it hell to talk about them with people, because there are so many factors that we may as well be predicting how a drop of dye will spread if we were to drop it in a glass of water. When I made Naval Commander, besides wanting to produce a simple version of the RTS for simplicity’s sake, my intention was partly to put to use a few of the formulas and thoughts that my friend Vinh and I had conjectured. However, even that was clearly too high-level to make any sense of.

Having recently played a bit of GemCraft Labyrinth, it occurred to me that it wouldn’t be very hard to actually distill things further. (GemCraft basically uses an almost exclusively soft-counter system, and in my opinion that makes it very interesting.) All one would really need to do is break everything down into its components, understand what each component does, and then rebuild everything utilizing the components. As far as I can tell, there are only three unique things to notice (undoubtedly more, but it’s 3:51 AM):

  • High damage – Good against heavily armored targets. Bad against low hit point targets.
  • Low damage – Bad against heavily armored targets. Good against low hit point targets, since little damage is wasted on overkill.
  • Large range – Good against slow moving targets. Loses value when overkilling.

So we can see some cool things with just this small list. Since an offense consists of some amount of damage, some attack speed, and some range, we can create 2 * 2 * 2 = 8 interesting units with these pieces. For example, one might construct an anti-armor unit whose job is to put heavy damage on armored targets. Such a unit would exclusively need a high attack damage. Likely, he would have a slow attack speed to keep his cost down, and he would be extremely susceptible to units with low hit points, on whom he would waste too much damage.
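The enumeration above is easy to sketch: treat each of the three offensive stats as either low or high, and the design space falls out mechanically. (The two-level simplification and the stat names are mine, for illustration.)

```python
from itertools import product

# Each offensive stat atomized to two levels gives 2 * 2 * 2 = 8 archetypes.
stats = ('damage', 'attack speed', 'range')
archetypes = [dict(zip(stats, levels))
              for levels in product(('low', 'high'), repeat=len(stats))]

for arch in archetypes:
    print(arch)  # e.g. the anti-armor unit: high damage, low speed, low range
```

Adding more atomized components (armor, hit points, movement speed) multiplies the space the same way, which is exactly why breaking things into smallest parts pays off.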

Armed with these basic components (or at least the realization of the need to break things down into their smallest parts) we could construct an entire RTS game which would provide interesting choices to the players, without any kind of need for tactical unit abilities, requiring instead only tactical movement.

Written by Ceasar Bautista

2011/04/23 at 03:53

The Princess and Monster game

The current assignment in my programming class (CIS 121) is to build two AIs for a variation on the classic Pursuit-Evasion game, the Princess and the Monster. I think it’s an extremely interesting problem, so I’ll present some thoughts on the game here, and would greatly appreciate any feedback or responses to further guide me, given how little I know about the subject.

For the uninitiated, the Princess and Monster game goes like this: take a connected graph of any size. Assign one node to be the location of the Monster and another to be the location of the Princess. Each turn, the Princess and the Monster may move to any node adjacent to their respective locations. If the Monster and the Princess move onto the same node, the Monster wins. Otherwise, if the Princess can avoid the Monster for a certain number of turns, the Princess wins.
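The rules above are easy to sketch as a simulation. This is a toy version under my own assumptions, not the assignment’s API: moves are simultaneous, and either player may stay put (the rules say “may move”).

```python
import random

def random_walk_game(neighbors, max_turns=100, seed=0):
    """Both players random-walk (or stay put) each turn on the graph;
    the monster wins if they ever occupy the same node."""
    rng = random.Random(seed)
    nodes = list(neighbors)
    monster, princess = rng.sample(nodes, 2)  # distinct starting nodes
    for turn in range(max_turns):
        monster = rng.choice(neighbors[monster] + [monster])
        princess = rng.choice(neighbors[princess] + [princess])
        if monster == princess:
            return turn   # monster wins on this turn
    return None           # princess survives the full game

# A 10x10 grid graph, matching the variant examined below.
grid = {(x, y): [(x + dx, y + dy)
                 for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                 if 0 <= x + dx < 10 and 0 <= y + dy < 10]
        for x in range(10) for y in range(10)}
result = random_walk_game(grid)
```

Running this over many seeds gives a cheap baseline against which smarter monster strategies can be compared.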

The game in practice varies a lot. For example, in some variants the game is played on a continuous and bounded plane rather than a graph. Other interesting variations include revealing the starting location of each player to all of the players at the start of the game, enabling the Monster to capture the Princess if the Monster can simply get within a certain range, allowing the two players to “hear” each other if they are within a certain range, and changing the permitted speeds of each of the players.

In particular, I’ll examine the case where the Monster wins if it moves onto a node adjacent to the Princess’ location, and where both players are notified of each other’s location if they are within three edges of each other. Furthermore, for simplicity, I’ll assume the game is played on a 10×10 grid, rather than a random undirected graph.

So some quick observations.

The Really Simple Case

If we simplify the situation so that the game is the pure game, with no knowledge of starting positions, no ability to hear, and no capture range, we can observe a few things. First off, so long as the monster’s strategy is non-deterministic, there is no way that the princess can guarantee she will evade capture. However, should the monster and princess each choose a random-walk strategy, then on a 100-node graph the probability that they both occupy any particular node on a given turn is 1/100 * 1/100 (so, summed over all nodes, roughly a 1/100 chance per turn that they coincide somewhere). So from now on, let’s just assume the princess adopts such a strategy.

Now we can make a far more interesting observation. With the princess’s strategy in mind, we note that every time we explore a node and move on, the probability that the princess is at the node we just explored is equal to the probability that the princess is at one of the adjacent nodes multiplied by the chance that the princess would move there. Furthermore, because the princess cannot reach any of the adjacent nodes from the location we were just at, she is in fact less likely to be nearby. (Which is kind of weird, but it makes sense if you imagine putting a dummy princess on each node and, each turn, splitting the dummies into multiple dummies, sending some out and keeping some around, so that, without the monster, each node’s probability would remain 1.)

Given all that, the intuitive solution is to continuously move to the most probable area nearby. The problem with this approach is that it very easily leads to deterministic searches that cover only a small part of the graph. (See the video below.)

This is not to say the approach is incorrect. It’s just that we aren’t looking far enough ahead.

The code used in the video above works like this. First we go through every node and score it equal to 1, representing its probability of 1%. Then we place the monster down, remove the 1% from where he’s standing, and distribute it to all of the other nodes evenly. Lastly, we simulate what would happen if the princess moved according to an arbitrary Markov process, which in this case is simply a 20% chance to move to any given neighbor. (For those unfamiliar, as I initially was, this preserves the property that every node stays at 1% in the absence of the monster, since the princess will leave a corner only 2 * 20% of the time when she is on one, and she will enter a corner only 20% of the time from either of its adjacent nodes.)

Specifically, this is implemented with two hashmaps. One keeps track of the current probabilities, and the other is a copy of the first. We go through each node and tell it to distribute 20% of its current probability to each of its neighbors in the copied hashmap (which is needed so as to not corrupt the flow). Finally we replace the hashmap we are using with the copy.

(Note here that 20% is arbitrary. Ideally, we would use some machine learning techniques to figure out what exact transition matrix the princess is using. Note also that if the chance of the princess moving is 0, then the best strategy becomes to follow a path that visits each node exactly once, i.e. a Hamiltonian path.)
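The two-hashmap update can be sketched directly with plain dicts on a 10x10 grid; p_move = 0.2 matches the 20% in the text, and this is my reconstruction of the idea rather than the code from the video:

```python
def diffuse(prob, neighbors, p_move=0.2):
    """One Markov step: every node sends p_move of its mass to each
    neighbor and keeps the rest, written into a fresh map so the
    in-progress update never corrupts the current one."""
    nxt = {node: 0.0 for node in prob}
    for node, mass in prob.items():
        degree = len(neighbors[node])
        nxt[node] += mass * (1 - p_move * degree)  # mass that stays put
        for nb in neighbors[node]:
            nxt[nb] += mass * p_move               # mass that leaves
    return nxt

# 10x10 grid graph
def grid_neighbors(w, h):
    return {(x, y): [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if 0 <= x + dx < w and 0 <= y + dy < h]
            for x in range(w) for y in range(h)}

nbrs = grid_neighbors(10, 10)
prob = {node: 1.0 for node in nbrs}  # one "percent" per node
prob = diffuse(prob, nbrs)           # uniform stays uniform, as argued above
```

Zeroing out the monster’s node and redistributing that mass before each step, then repeating, reproduces the behavior in the video: without the monster, the uniform distribution is a fixed point of this update.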

So basically, as mentioned on the Wikipedia page for Princess and Monster games, it becomes evident that simply standing still most of the time and only occasionally finding a new spot to chill is a pretty viable strategy for the monster. This kind of creates a drain, with the monster being the gutter, simplifying the search from one that is completely blind to one that is only half blind. Unfortunately, I’m not at all sure on the specifics of how long to wait, but it seems to make some intuitive sense. (Note here that the more the princess moves, the better this strategy of waiting is.)

Add in Capture Range

And it’s no different, obviously. The graph basically becomes relatively smaller. There are sometimes a few problems handling corners nicely, but largely it’s inconsequential.

Add in both players knowing the start location of the other or add in hearing range

These are both extremely interesting changes and deserve their own posts, so I’ll give them just that. (But probably not until next Tuesday when we turn in our AIs. Check back then!)

Written by Ceasar Bautista

2011/04/19 at 20:09

Dynamic Segmentation

In order to properly parse the input data gathered by a bot in Robocode, it’s insufficient to statically declare a number of bins beforehand and simply insert the data into the structure.

The main problem with this approach is that the data gets subdivided way too many times to be useful. For example, if we wanted to increase accuracy, we would want to eliminate any confounding information, and therefore segment on every possible independent variable and more; the reason being that if we assume bots are imperfect, then our goal is to actually find flaws. This means we want to segment on a large number of variables, for example distance to target, time since start of battle, bullet power, target velocity, target acceleration, target angular acceleration, target heading, target x-distance from center, target y-distance from center, target hit points, shooter velocity, shooter acceleration, shooter angular acceleration, shooter heading, shooter x-distance from center, shooter y-distance from center, shooter hit points, and possibly more. That’s a lot, and if we segment on all of it, it’s going to take a very long time to acquire meaningful data that we can act on.

Another problem is that when we divide information, it loses value near the divisions. This can be remedied to an extent through some information blurring, but generally it’s not a trivial problem to solve. Static divisions are particularly worrisome, since there is no good way to know in advance where to divide the data.

Fortunately, there is a solution: dynamic segmentation. The idea is, rather than statically declaring the segmentation divisions, to simply declare along which axes the data can be segmented, and then build a tree structure which splits the data when it makes sense to. This is not as simple as it seems, but to illustrate the general idea: if our enemy is moving on a clockwise spiral relative to our position, moving randomly towards or away from us for sizable amounts of time, then while our first few shots will all be clumped together in one node, eventually the bot should realize that by splitting the data into ‘target moving closer’ and ‘target moving away’ its shots will become more accurate. This is very cool, because the bot will generally have some pretty good guesses most of the time, and only improve with more data. Furthermore, it reduces the need to worry about any kind of covariance, since the bot will automatically detect it, being able to split anywhere; for example, rather than tracking distance to corners, the bot will eventually learn where the corners are (provided the enemy acts unusually in them) by first segmenting on target x-distance from center and then segmenting on target y-distance from center.

The problem now is to determine when it is appropriate to split. This is essentially the problem of determining whether or not a die is loaded, and we can figure it out the same way. Immediately, it’s apparent that small amounts of data won’t suffice. Additionally, it’s fairly clear that variance has something to do with it, since if we’re consistently hitting our target it would be rather foolish to split the data. The question is how much variance we allow.

To that, unfortunately, I’m not exactly sure. I think a chi-squared test is the solution, although from my research it seems it can get pretty involved. (Even determining whether or not a coin is biased is fairly complicated.) For now though, I just want to throw out the idea of utilizing a tree structure.
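For what it’s worth, the raw Pearson statistic at the heart of that chi-squared test is only a few lines; the involved part is choosing degrees of freedom and significance thresholds. The roll counts here are made up for illustration:

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic for observed vs expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

rolls = [18, 22, 21, 19, 20, 20]        # counts of faces 1-6 over 120 rolls
expected = [sum(rolls) / 6] * 6         # a fair die: 20 of each face
stat = chi_square_stat(rolls, expected)
# With 5 degrees of freedom, the 5% critical value is about 11.07, so a
# statistic this small (0.5) gives no evidence that the die is loaded.
print(stat)
```

The same statistic, applied to hit counts within a node versus what one homogeneous distribution would predict, is one way to decide whether a split is justified.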

Written by Ceasar Bautista

2011/04/07 at 17:27

Jellybean Jars Done Right

It’s fairly close to Easter, so I figure a few of these will be popping up in the future, and I thought I’d just write a post about how to game the system properly and guess the correct number with a good deal of precision.

So back in senior year of high school, we had a week long contest known as Spirit Week where the school was divided into two teams to compete in a series of events for no reason other than bragging rights. As you can imagine, most of the stuff was rather silly and I didn’t bother. However, one event was more suited to my tastes: The moderators filled up two decent sized jars with Jolly Ranchers and gave each of us one guess. The person who submitted the closest number to the real count would win some points for their team as well as the two jars of candy.

My friend Vinh and I decided to take up the challenge for some kicks. In order to do this, I took a bunch of 2×4 Lego bricks from my house, a small jar, and then bought some Jolly Ranchers from a nearby convenience store. (I thought the Lego bricks were a good approximation, but it turned out later I didn’t need them.) I then proceeded to fill up the jar with all the Jolly Ranchers I had, measured the height, and calculated the jar volume to Jolly Rancher ratio (measuring Jolly Rancher volume is impractical because of the weirdness of the shape and the chaotic way that they stack, so this is the only reasonable way). It came out to be that 66% of the jar was empty space! (And for those interested, a jar of 2×4 lego bricks is 50% empty space. In other words, if you don’t know what to do with all of your bricks and hate how much space they take up, you might compress them by pressing them together rather than just throwing them all into a box for a 50% reduction!)

Now typically they don’t let you measure the jar, and in that case your guess is as good as mine. (I would suppose taking a picture of the jar next to a meterstick would be useful in that case.) But in this case they let people measure. Unfortunately, one of the jars was kind of round, so a simple height x width x length was insufficient, but fortunately my friend Vinh knew how to do curve fitting on our TI-83s, so I simply recorded a few widths at various heights and we took a few integrals to get the volume. Multiplying the volume by the space-to-Jolly-Rancher ratio gave us an approximation, and it being a team event, Vinh and I printed out a bunch of numbers for our friends to guess, which gave us plenty of room to be off.
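The integration step can be sketched numerically rather than on a TI-83: treat the round jar as a solid of revolution and integrate pi * r(h)^2 over the height with the trapezoid rule. The measurements below are made up for illustration, as is the per-candy volume:

```python
from math import pi

def jar_volume(heights, widths):
    """Approximate the volume of a jar of revolution from widths
    measured at several heights (disc method, trapezoid rule)."""
    areas = [pi * (w / 2) ** 2 for w in widths]  # cross-section at each height
    vol = 0.0
    for i in range(len(heights) - 1):
        dh = heights[i + 1] - heights[i]
        vol += dh * (areas[i] + areas[i + 1]) / 2
    return vol

# hypothetical measurements, in centimeters
heights = [0, 5, 10, 15, 20]
widths = [10, 12, 12, 11, 8]
volume = jar_volume(heights, widths)

candy_volume = 1.0                        # assumed cm^3 per Jolly Rancher; calibrate your own
candies = volume * (1 - 0.66) / candy_volume  # only 34% of the jar is candy
```

The empty-space ratio (the 66% measured with the small test jar) does the real work; the integral just replaces height x width x length for non-box jars.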

Subsequently, one of my friends, Max, won, with a guess within three of our original approximation. Max kept one of the jars, I got the other (I offered to split it with Vinh but he didn’t care; me either really, and it’s now preserved as a trophy in my basement), and the team scored a bunch of points. (We still lost though, unfortunately.)

And that’s how you do it the nerd way.

Written by Ceasar Bautista

2011/04/06 at 16:14
