Ceasar's Mind

Follow me: @Ceasar_Bautista

Posts Tagged ‘programming’

Recursive Python Considered Harmful

with 11 comments

It somewhat recently occurred to me that recursion, despite its cleanliness, improves readability at a significant cost in memory. By piling frames onto the call stack, at best we get a function that uses O(n) memory, and quite possibly one that is even worse. Given that a recursive function can generally be rewritten as a while loop, often with only O(1) memory, recursion seems a poor choice for most practical purposes.

I brought this up at work, and Alexey explained that while this is true, many compilers (particularly for functional languages) are intelligent enough to optimize the code for you if you use tail recursion. Being unaware of the concept, I looked up how to convert a regular recursive function into a tail-recursive one, and discovered I was wasting my time, since Guido has decided not to implement tail-call optimization in Python.
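
To make the distinction concrete, here is a toy example (mine, not from the original discussion): the plainly recursive version grows the stack, the tail-recursive version would be optimized into a loop by a Scheme or ML compiler but not by CPython, and only the while loop actually runs in constant space here.

def fact_recursive(n):
    # One stack frame per call: O(n) memory in CPython.
    return 1 if n <= 1 else n * fact_recursive(n - 1)

def fact_tail(n, acc=1):
    # Tail call: nothing happens after the recursive call returns,
    # but CPython still keeps every frame around.
    return acc if n <= 1 else fact_tail(n - 1, acc * n)

def fact_loop(n):
    # The while-loop version: O(1) memory.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc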

This raises the question: is recursion ever useful in Python, or should we go back to our code and start writing those while loops?

As I see it, there is only one reason to use recursion in Python and that is when you plan to memoize your function (which makes the function take up O(n) memory anyway). (Additionally, it may be helpful to write functions out as recursive functions in order to assist in proving things, but I think, again, the function would almost certainly need to be memoized or transformed.)
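
For example, a minimal sketch of the memoized case (a hand-rolled cache; functools offers similar tools):

def memoize(f):
    cache = {}
    def wrapper(n):
        # The cache is O(n) memory either way, so the recursive form costs little extra.
        if n not in cache:
            cache[n] = f(n)
        return cache[n]
    return wrapper

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)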

In all other cases, I believe while loops are the way to go. Besides the obvious problems with memory, there are a few other points worth mentioning.

For one, Python caps the recursion depth at 1,000 by default. While for some functions this may be okay, if you have any interest in scaling, this limit is far too small.
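
You can inspect and even raise the cap, though raising it only postpones the problem, since every frame still costs memory:

import sys

print(sys.getrecursionlimit())   # 1000 by default
sys.setrecursionlimit(5000)      # allowed, but deep recursion still eats memory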

Additionally, function call overhead in Python is significant. (Source: the official Python wiki.)

Anyway, that’s my two cents.


Written by Ceasar Bautista

2011/10/23 at 00:51

Posted in Uncategorized


Python Tips

leave a comment »

The “with” statement

This is just incredibly beautiful code.

Check out the source for more details on Python’s “with” statement if you’re interested.
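
For anyone who hasn't used it, the pattern looks roughly like this (the file name is just for illustration, not from the original snippet):

# The usual boilerplate the "with" statement replaces:
f = open('data.txt')
try:
    contents = f.read()
finally:
    f.close()

# With a context manager, cleanup happens automatically, even on exceptions:
with open('data.txt') as f:
    contents = f.read()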

Get current directory

Too useful not to know.  Source is here.
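
The calls I'd reach for are below (my own illustration, not the linked snippet):

import os

print(os.getcwd())  # absolute path of the process's current working directory

# The directory containing the running script (only meaningful inside a script):
print(os.path.dirname(os.path.abspath(__file__)))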

max and min accept a key function

Instead of writing out this:

def best(candidates, scoref):
    '''Use scoref to find the candidate with the highest score.'''
    highest_score = float('-inf')
    best = None
    for candidate in candidates:
        candidate_score = scoref(candidate)
        if candidate_score > highest_score:
            highest_score = candidate_score
            best = candidate
    return best

You can just write:

max(iterable, key=scoref)

Don’t forget the “key” keyword, or Python will treat the function as just another value to compare against the iterable. This is pretty cool because it makes things like finding the key with the largest value in a dictionary trivial.

>>> a = {'a': 1, 'b':2, 'c':3}
>>> a
{'a': 1, 'c': 3, 'b': 2}
>>> max(a, key=lambda x: a[x])
'c'

Written by Ceasar Bautista

2011/08/02 at 22:26

Posted in Uncategorized


Prime Generator

with 2 comments

Inspired by the last post, just thought this was too cool not to share. The function generates primes!

def prime_gen():
    '''Generate primes.'''
    offset = 2                   # value represented by sieve[0]
    index = offset               # next value to inspect
    sieve = list(range(offset, offset ** 2))
    found = []                   # primes yielded so far
    while 1:
        # Extend the sieve window up to (index + 1) ** 2.
        start = index ** 2
        end = (index + 1) ** 2
        sieve.extend(range(start, end))

        num = sieve[index - offset]
        if num:                  # not crossed off, so it's prime
            found.append(num)
            sieve = sieve[index - offset:]   # discard the part we're done with
            offset = num
            yield num

        # Cross off multiples of every known prime within the extended window.
        i = 0
        while i < len(found):
            curr = found[i]
            j = start // curr * curr         # largest multiple of curr not exceeding start
            while j < len(sieve) + offset:
                sieve[j - offset] = 0
                j += curr
            i += 1
        index += 1
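
A quick usage check (the slice is just for demonstration):

from itertools import islice

print(list(islice(prime_gen(), 10)))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]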

Written by Ceasar Bautista

2011/07/10 at 03:57

Posted in Uncategorized


The Quadratic Sieve

with 3 comments

Just got done with a terrible day of magic numbers. To spare everyone the trouble, here's a fairly clear version of a quadratic sieve for generating prime numbers that ought to be self-explanatory and easily modified, as opposed to many other versions out there.

def primes(n):
    '''Get all primes up to n.'''
    n = int(n)
    if n < 2:
        return []
    sieve = list(range(n))   # sieve[i] holds i while i is still a candidate
    sieve[1] = 0             # 1 is not prime
    root = n ** 0.5
    index = 0
    while index <= root:
        if sieve[index]:
            # Cross off multiples of index, starting at index ** 2;
            # smaller multiples were already crossed off by smaller primes.
            i = index ** 2
            while i < n:
                sieve[i] = 0
                i += index
        index += 1
    return [x for x in sieve if x]
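
For example:

>>> primes(30)
[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]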

Written by Ceasar Bautista

2011/07/10 at 02:49

Posted in Uncategorized


Dynamic Segmentation

leave a comment »

In order to make proper use of the data gathered by a bot in Robocode, it's not enough to statically declare a number of bins beforehand and simply insert the data into that structure.

The main problem with this approach is that the data gets subdivided far too many times to be useful. If we want to increase accuracy, we want to eliminate any confounding information, and therefore segment on every possible independent variable and more; after all, if we assume bots are imperfect, our goal is to actually find their flaws. This means we want to segment on a large number of variables: distance to target, time since the start of the battle, bullet power, target velocity, target acceleration, target angular acceleration, target heading, target x-distance from center, target y-distance from center, target hit points, shooter velocity, shooter acceleration, shooter angular acceleration, shooter heading, shooter x-distance from center, shooter y-distance from center, shooter hit points, and possibly more. That's a lot, and if we segment on all of it, it will take a very long time to acquire enough data in any one segment to act on.

Another problem is that when the data is divided, the information loses value near the divisions. This can be remedied to an extent by blurring data across neighboring bins, but in general it's not a trivial problem to solve. Static divisions are especially worrisome, since there is no good way to know in advance where to put the boundaries.

Fortunately, there is a solution: dynamic segmentation. The idea is, rather than statically declaring the segmentation divisions, to declare only the axes along which the data may be segmented, and then build a tree structure that splits the data when it makes sense to. This is not as simple as it sounds, but to illustrate the general idea: if our enemy moves in a clockwise spiral relative to our position, drifting toward or away from us for sizable stretches of time, then while our first few shots will all be clumped together in one node, the bot should eventually realize that splitting the data into 'target moving closer' and 'target moving away' makes its shots more accurate. This is very cool, because the bot will generally have some reasonable guesses most of the time and will only improve with more data. Furthermore, it reduces the need to worry about covariance, since the bot, being able to split anywhere, will detect it automatically. For example, rather than tracking distance to corners, the bot will eventually learn where the corners are (provided the enemy acts unusually in them) by first segmenting on target x-distance from center and then on target y-distance from center.

The problem now is to determine when it is appropriate to split. This is much like the problem of determining whether a die is loaded, and we can reason about it the same way. Immediately, it's apparent that small amounts of data won't suffice. Additionally, it's fairly clear that variance has something to do with it, since if we're consistently hitting our target it would be rather foolish to split the data. The question is how much variance we allow.

To that, unfortunately, I'm not exactly sure. I think a chi-squared test is the answer, although from my research it seems it can get pretty involved. (Even determining whether or not a coin is biased is fairly complicated.) For now, though, I just want to throw out the idea of utilizing a tree structure.
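
To make the tree idea concrete, here is a rough Python sketch (my own names, and a placeholder variance check standing in for a proper chi-squared test; a real Robocode bot would be Java and would pick the split axis more cleverly):

class SegmentNode(object):
    """One node of the segmentation tree: holds observations until a split pays off."""

    def __init__(self, axes):
        self.axes = axes        # e.g. ['distance', 'target_velocity', ...]
        self.samples = []       # list of (features_dict, firing_angle) pairs
        self.split = None       # (axis, threshold, low_child, high_child) once split

    def add(self, features, angle):
        if self.split:
            axis, threshold, low, high = self.split
            (low if features[axis] < threshold else high).add(features, angle)
            return
        self.samples.append((features, angle))
        if self._worth_splitting():
            self._do_split()

    def _worth_splitting(self):
        # Placeholder criterion: enough data and the observed angles are spread out.
        # A chi-squared (or similar) test would replace this in a real bot.
        if len(self.samples) < 30:
            return False
        angles = [a for _, a in self.samples]
        mean = sum(angles) / len(angles)
        variance = sum((a - mean) ** 2 for a in angles) / len(angles)
        return variance > 0.1   # arbitrary threshold

    def _do_split(self):
        axis = self.axes[0]     # a real bot would pick the most informative axis
        values = [f[axis] for f, _ in self.samples]
        if min(values) == max(values):
            return              # nothing to separate on along this axis
        threshold = (min(values) + max(values)) / 2.0
        low, high = SegmentNode(self.axes), SegmentNode(self.axes)
        old, self.samples, self.split = self.samples, [], (axis, threshold, low, high)
        for features, angle in old:
            self.add(features, angle)   # re-route the old observations into the children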

Written by Ceasar Bautista

2011/04/07 at 17:27

Some Segmentation Ideas

leave a comment »

Continuing with my earlier post on Robocode, I'd like to describe some thoughts on segmentation strategies.

So the basic idea is that we want to minimize our bins at all times. In effect, this means calculating the Maximum Escape Angle, calculating the arc the target covers on the circle centered on our tank whose radius is the distance to the target (that is, the orbital path our opponent could take), and then taking the ceiling of one half of the ratio between the escape arc and that target arc (one half because a shot that clips the leftmost edge of our opponent hits just as well as one that clips the rightmost edge).

Interestingly, we can adjust the Maximum Escape Angle by adjusting the speed of our bullet. Given that, we can actually shrink or expand the number of bins as we desire, a useful ability when trying to maximize the probability of a hit (since the bins are discrete, a change from N to N-1 bins can be fairly significant, assuming the hit probability is roughly uniform across the bins at both ranges).
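
A rough sketch of the arithmetic as I read it (the Robocode constants here, a 36-unit-wide bot, a top speed of 8, and a bullet speed of 20 - 3 * power, are from memory, so treat them as assumptions):

import math

ROBOT_WIDTH = 36.0        # width of a Robocode bot's bounding box (assumed)
MAX_ENEMY_SPEED = 8.0     # maximum robot velocity (assumed)

def bullet_speed(power):
    return 20.0 - 3.0 * power

def max_escape_angle(power):
    # How far, in radians, the target can possibly move off our aim
    # while the bullet is in flight.
    return math.asin(MAX_ENEMY_SPEED / bullet_speed(power))

def bin_count(distance, power):
    # Arc length the whole escape window spans at this distance, versus the
    # arc the target's body covers; halve the ratio because clipping either
    # edge of the bot counts the same as hitting its center.
    escape_arc = 2.0 * max_escape_angle(power) * distance
    target_arc = ROBOT_WIDTH
    return max(1, int(math.ceil(escape_arc / target_arc / 2.0)))

With these numbers, a lower-power (and therefore faster) bullet shrinks the escape angle and with it the number of bins.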

Furthermore, we can supplement our segmentation data by adjusting the range at which we fight, spiraling toward and away from our opponent as necessary, in order to keep our target at an optimal distance.


Written by Ceasar Bautista

2011/03/22 at 03:05

Wave Surfing Explained

leave a comment »

I’ve recently reignited my interest in Robocode but this time armed with everything I’ve learned since September.

For the uninitiated, Robocode is an educational programming game, originally written with Java in mind, where players code up algorithms to guide virtual tanks in destroying each other. It’s a unique challenge, combining fields like programming, statistics, linear algebra, and game theory all into one.

Originally intended for students, the game was quickly picked up by experienced programmers and is now rather competitive. Many players use sophisticated algorithms in order to dodge enemy bullets and make sure their own find their mark.

Anyway, let me get on to explaining wave surfing, but first I need to explain segmentation.

Segmentation

Since in Robocode neither bot can see any of the bullets, developers have to find ways to track their enemy through alternative means. Segmentation is a targeting technique that involves finding which firing angles are most likely to hit the target by dividing the continuous range of firing angles into discrete bins, and finding which bins are most popular. To implement segmentation, one needs to track each bullet with “waves”.

A wave is basically analogous to a ripple in a pond after tossing a stone into the water. We track the bullets until the “wave” hits our opponent. At that point, we can determine what the correct firing angle was at the time of firing, and subsequently increment the value in the bin which contains the correct firing angle. We then proceed to fire at the center of the most popular bin whenever we get the chance.

[Figure: example bin histogram. If our bins looked like this, we would fire into the 15% bin.]
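
In code, the bookkeeping might look roughly like this (a pure-Python sketch with my own names and a fixed 31-bin layout, not a real Robocode bot):

import math

class Wave(object):
    """Tracks one of our bullets so we can learn the correct firing angle later."""
    def __init__(self, origin, target, fire_time, bullet_speed):
        self.origin = origin              # (x, y) we fired from
        self.base_angle = math.atan2(target[1] - origin[1], target[0] - origin[0])
        self.fire_time = fire_time
        self.bullet_speed = bullet_speed

    def has_broken(self, target, time):
        # The "ripple" has reached the target once it has traveled that far.
        dx, dy = target[0] - self.origin[0], target[1] - self.origin[1]
        return (time - self.fire_time) * self.bullet_speed >= math.hypot(dx, dy)

    def offset_angle(self, target):
        # Angle the target actually ended up at, relative to where we aimed.
        angle = math.atan2(target[1] - self.origin[1], target[0] - self.origin[0])
        return angle - self.base_angle

BIN_COUNT = 31
bins = [0] * BIN_COUNT

def record(wave, target, max_escape_angle):
    # Increment the bin containing the firing angle that would have hit.
    offset = wave.offset_angle(target)
    index = int((offset / max_escape_angle + 1.0) / 2.0 * (BIN_COUNT - 1))
    bins[max(0, min(BIN_COUNT - 1, index))] += 1

def best_offset(max_escape_angle):
    # Fire at the center of the most popular bin.
    index = max(range(BIN_COUNT), key=lambda i: bins[i])
    return (2.0 * index / (BIN_COUNT - 1) - 1.0) * max_escape_angle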

A primitive approach to segmentation might segment by firing angle alone. More sophisticated approaches first segment by distance and then by angle, and even more sophisticated approaches segment on still more dimensions. These refinements come at the cost of slowing down the rate of learning, but the accuracy gained is typically well worth the price.

Optimal segmentation (I think, anyway) ought to reduce the number of bins to the minimum needed to cover every possible position of the opponent. For example, an opponent at point-blank range would only require one bin, since all firing angles within the escape angle would be guaranteed to hit the target. As the distance increases, however, more and more bins become necessary (roughly in proportion to the distance, for a fixed bullet speed). By reducing the number of bins in this fashion, we increase learning speed and reduce memory costs.

Wave Surfing

Wave Surfing is an evasive technique first developed and implemented by ABC in mid-2004. To put it succinctly, wave surfing is anti-segmentation. Every time the enemy fires a shot, we create a wave, check whether or not the bullet hit us when the wave reaches our bot, and increment our own bins accordingly. In this way, we know which firing angle our opponent believes to be the most popular, and we make sure to avoid it, presenting a near-uniform hit distribution at all ranges.

Why this is optimal

Refer back to my earlier post on the game-theoretic solution to Rock-Paper-Scissors. Basically, to do anything else is to tip our opponent off to a flaw in our strategy. If you know, for example, that your opponent will play Rock 2/3 of the time and Paper 1/3 of the time, your optimal strategy becomes to play Paper every time.

Implementation Issues

A literal implementation of the above is still susceptible to intelligent algorithms. For example, at the start of a game of Rock-Paper-Scissors, if your opponent plays Scissors first, it would be illogical to conclude from that single observation that he will keep playing Scissors. This illustrates the problem of determining when the opponent really is playing sub-optimally, with an unequal probability distribution, rather than merely appearing to. Thankfully, statistics provides an answer: test whether the observed variation is statistically significant. With a small amount of data nothing can be concluded, and with more data more confidence can be placed on our inferences. A simple implementation of confidence intervals ought to be sufficient.
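
One way such a check might look (a crude normal-approximation confidence interval; the sample-size cutoff and z-value are arbitrary choices of mine):

import math

def bin_is_biased(hits, total, n_bins, z=1.96):
    """Does this bin's observed rate differ from uniform play by more than chance?"""
    if total < 30:
        return False                               # too little data: conclude nothing
    expected = 1.0 / n_bins                        # what uniform (optimal) play would give
    p = hits / float(total)
    margin = z * math.sqrt(p * (1.0 - p) / total)  # half-width of a ~95% interval
    return abs(p - expected) > margin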

Written by Ceasar Bautista

2011/03/20 at 01:31