Ceasar's Mind


Alpha Beta Pruning


The most significant improvement that can be made to the AI is the implementation of alpha-beta pruning. Alpha-beta pruning evaluates positions much the way a human mind does, and it lets both us and computers save drastic amounts of time when figuring out the best move.

How it Works

Suppose I analyze a move and find that my opponent’s best response leads to an equal position. Then I start analyzing another move and see at some point that my opponent has a response that leaves me worse than equal. I immediately stop looking at that second move, since I know I could always just play my original move and end up with a better game. Analyzing the rest of my opponent’s alternatives would be a waste, because I already knew I wouldn’t be playing the move the moment I found that first response that was bad for me. Alpha-beta works exactly the same way.

Let me put that in concrete terms. Say I analyze a Knight move and see that, at best, my opponent can do no better than equalize. Then I start analyzing an alternative, say, my Queen capturing a guarded enemy Pawn. I don’t have to analyze all of my opponent’s responses to that capture once I see that he can simply take my Queen. (Remember, I could always just play my Knight move.) At three ply, though, it gets trickier. For example, if I make a pawn move, my opponent makes a pawn move, and I capture a Knight, my opponent probably won’t make that pawn move if a different move would save the Knight (or maybe even capture one of mine). Of course, I can always reject my initial pawn move as well, which is what makes the bookkeeping tricky.

Back to the pruning idea: alpha and beta are both numbers. Alpha I assign to my best score so far (and let’s say I am playing black). Every time it’s white’s turn, I go through all of my responses, score them, and assign the best score to alpha. Then I start checking white’s next alternative and all of my responses to it. If I find one that scores higher than alpha, I stop looking at that white move, since white won’t play it. If, however, I find no response better than alpha, if they’re all worse for example, then I take the best of them and set that as the new alpha. In this way, I find the move whose worst-case outcome is best.

Beta, on the other hand, keeps the tab for white: I do the same thing from white’s side, except I cut a line off as soon as I find a score worse than beta.

Here is, in essence, the code I use, written in Java. (My actual code has some ugly stuff to make it work in the context I’m using it.) There is also an excellent link that explains the idea and, more importantly, has an interactive application at the bottom showing how the algorithm works, in case I haven’t explained it fully.
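The post’s original code block and link haven’t survived here, so the following is only a minimal sketch of minimax with alpha-beta cutoffs over an abstract game tree. The `Node` class and its leaf scores are illustrative stand-ins, not the author’s actual chess representation:

```java
import java.util.Arrays;
import java.util.List;

class AlphaBeta {
    // Illustrative game-tree node: a leaf carries a score,
    // an interior node carries children.
    static class Node {
        int score;               // used only at leaves
        List<Node> children;     // empty list means leaf

        Node(int score) { this.score = score; this.children = List.of(); }
        Node(Node... kids) { this.children = Arrays.asList(kids); }
    }

    // Returns the minimax value of node, skipping branches that
    // cannot change the result.
    static int search(Node node, int alpha, int beta, boolean maximizing) {
        if (node.children.isEmpty()) return node.score;
        if (maximizing) {
            int best = Integer.MIN_VALUE;
            for (Node child : node.children) {
                best = Math.max(best, search(child, alpha, beta, false));
                alpha = Math.max(alpha, best);
                if (beta <= alpha) break;   // opponent won't allow this line
            }
            return best;
        } else {
            int best = Integer.MAX_VALUE;
            for (Node child : node.children) {
                best = Math.min(best, search(child, alpha, beta, true));
                beta = Math.min(beta, best);
                if (beta <= alpha) break;   // we already have a better move elsewhere
            }
            return best;
        }
    }
}
```

A search is started with the widest possible window, e.g. `search(root, Integer.MIN_VALUE, Integer.MAX_VALUE, true)`; the window then narrows as alpha and beta are tightened down the tree.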

Alpha-beta pruning reduces the amount of searching drastically. To get the full effect, though, one last improvement has to be made. Since the order in which moves are searched affects how soon a search is cut off, it’s best to sort the moves (I believe at each level) from strongest to weakest. This does involve scoring every position along the way (minus those at the last ply to be searched), but I found that it is still far more efficient. The only level not worth sorting is the final level, at depth 0, since that’s where the alpha-beta pruning kicks in (reducing the need to score everything).
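The sorting step described above can be sketched as follows. This is an assumption about the shape of the idea, not the post’s actual engine code: the `Move` class and its cheap `staticScore` heuristic are hypothetical stand-ins for whatever fast evaluation the engine uses before recursing.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class MoveOrdering {
    // Hypothetical move with a cheap static estimate of its strength,
    // e.g. material gained by the move.
    static class Move {
        final String name;
        final int staticScore;
        Move(String name, int staticScore) {
            this.name = name;
            this.staticScore = staticScore;
        }
    }

    // Sort candidate moves strongest-first so the best line tends to be
    // searched early, making alpha-beta cutoffs happen sooner.
    static List<Move> ordered(List<Move> moves) {
        List<Move> sorted = new ArrayList<>(moves);
        sorted.sort(Comparator.comparingInt((Move m) -> m.staticScore).reversed());
        return sorted;
    }
}
```

The payoff is indirect: sorting costs one cheap evaluation per move, but searching a strong move first tightens alpha immediately, so many of its weaker siblings get cut off before their subtrees are ever explored.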

Anyhow, I hope this all makes sense.

Written by Ceasar Bautista

2010/06/28 at 21:31

2 Responses


  1. […] to score means that the AI will work faster, allowing it to search deeper. More on that in Part 2 though. […]

  2. I find this to be a bit of a confusing explanation and a little hard to follow. Graphs are your friend!

    There’s also a lot of background knowledge that’s needed to understand why alpha-beta pruning makes sense, such as depth-first search [and the development to iterative deepening depth-first search], and the general concept of the minimax algorithm, which is what alpha-beta optimizes. There are also heuristics that can be used if you can’t [in a reasonable timeframe] precisely calculate the score. I think you kind of mix heuristics into the topic with the Knight move vs. Queen move example, but that’s not necessarily an example of alpha-beta pruning. Of course this is getting pretty technical, but it’s a technical topic anyway.

    It of course won’t translate perfectly, but I’m interested to see how you’ll take this idea and apply it to your RTS musings!


    2010/07/04 at 15:51

