Evaluation
One of the trickiest things to do with the computer opponent is to evaluate the current game state. Given any arrangement of pieces, you need to be able to determine which of the possible plays is the best. You need a heuristic to do this: an estimate of how good a position is, and therefore of which play is worth making. The factors that I used to evaluate the state of the game were (see the sketch after this list for how they might be combined):
* Number of pieces
* Number of playable squares (usually called "mobility")
* Number of empty squares next to the player's piece ("the frontier", or "potential mobility")
* Various corner and edge patterns (edges and corners are usually better moves)
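As a rough sketch, these factors can be combined into a single score with a weighted sum. The weights, the names, and the `Score` signature below are my illustration, not the exact shape of my evaluation function:

```csharp
// A minimal sketch of a weighted evaluation. The weights, names, and the idea
// of a plain linear combination are illustrative only; the raw counts are
// sketched in the sections below.
static class Evaluation
{
    public static double Score(int pieceDiff, int mobilityDiff,
                               int frontierDiff, int patternScore)
    {
        // Illustrative weights; real values need tuning against test games.
        const double pieceWeight    = 1.0;
        const double mobilityWeight = 8.0;
        const double frontierWeight = -4.0;  // a large frontier is usually a liability
        const double patternWeight  = 20.0;

        return pieceWeight    * pieceDiff
             + mobilityWeight * mobilityDiff
             + frontierWeight * frontierDiff
             + patternWeight  * patternScore;
    }
}
```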
Pieces and mobility
The most naive heuristic is to count the number of pieces each side has. Ultimately, this is the only measure that matters: piece count determines the winner. However, until the end game, it has little to do with who is going to win.
A less naive approach is to count how many plays can be made. A state where 7 plays are possible is almost certainly better than one where you have to skip your turn because you can't make a play. I had already calculated where a player could place a piece on their turn as part of the user interface, so I got the mobility measure for nothing.
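Here is a minimal sketch of a mobility count, assuming an 8x8 int-array board where 0 is empty, +1 is the player being evaluated and -1 is the opponent; the representation and names are illustrative rather than my actual code:

```csharp
// Counts the legal plays for one side on an 8x8 board
// (0 = empty, +1 = player, -1 = opponent). Illustrative sketch only.
static class Mobility
{
    static readonly (int dr, int dc)[] Directions =
    {
        (-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)
    };

    public static int CountMoves(int[,] board, int player)
    {
        int moves = 0;
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
                if (board[r, c] == 0 && IsLegal(board, player, r, c))
                    moves++;
        return moves;
    }

    // A play is legal if it flanks at least one straight line of opponent pieces.
    static bool IsLegal(int[,] board, int player, int row, int col)
    {
        foreach (var (dr, dc) in Directions)
        {
            int r = row + dr, c = col + dc, opponentSeen = 0;
            while (r >= 0 && r < 8 && c >= 0 && c < 8 && board[r, c] == -player)
            {
                opponentSeen++;
                r += dr;
                c += dc;
            }
            if (opponentSeen > 0 && r >= 0 && r < 8 && c >= 0 && c < 8 && board[r, c] == player)
                return true;
        }
        return false;
    }
}
```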
Finding the empty squares next to a player's pieces was also relatively simple to implement. It was little more than a variation on the mobility algorithm.
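A sketch of the frontier count under the same illustrative board convention; it simply looks for empty squares touching one of the player's pieces:

```csharp
// Counts the empty squares adjacent to at least one of the player's pieces.
// Same illustrative convention: 0 = empty, +1 = player, -1 = opponent.
static class Frontier
{
    static readonly (int dr, int dc)[] Directions =
    {
        (-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)
    };

    public static int CountFrontier(int[,] board, int player)
    {
        int frontier = 0;
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
            {
                if (board[r, c] != 0) continue;
                foreach (var (dr, dc) in Directions)
                {
                    int nr = r + dr, nc = c + dc;
                    if (nr >= 0 && nr < 8 && nc >= 0 && nc < 8 && board[nr, nc] == player)
                    {
                        frontier++;
                        break;  // count each empty square at most once
                    }
                }
            }
        return frontier;
    }
}
```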
Patterns
[Image showing where the X-squares and C-squares are on an Othello board.]
The patterns that I check for are:
- Corners (a good play for the current player)
- X-Squares (bad play)
- C-Squares (bad)
- Corners and X-Squares (good)
- Corners and C-Squares (good)
- Edges (good)
Furthermore, I check for lots of combinations of these patterns. That is, a game state not only gets points for a play on a corner, it gets even more points for holding multiple corners. I didn't check whether this approach improved play, but I suspect it does.
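Here is a rough sketch of the corner, X-square and C-square part of the scoring, using the same illustrative board convention. The bonuses and penalties are made-up numbers rather than my tuned values, but the idea is the same: X- and C-squares only count against you while their corner is still empty.

```csharp
// Scores corners, X-squares and C-squares for one side.
// Illustrative values; 0 = empty, +1 = player, -1 = opponent.
static class Patterns
{
    // Each corner with its diagonal X-square and its two adjacent C-squares,
    // as indices into the board (row * 8 + column).
    static readonly (int corner, int x, int c1, int c2)[] CornerGroups =
    {
        (0,  9,  1,  8),    // a1 with b2, b1, a2
        (7,  14, 6,  15),   // h1 with g2, g1, h2
        (56, 49, 57, 48),   // a8 with b7, b8, a7
        (63, 54, 62, 55),   // h8 with g7, g8, h7
    };

    public static int Score(int[,] board, int player)
    {
        int score = 0;
        foreach (var (corner, x, c1, c2) in CornerGroups)
        {
            int owner = At(board, corner);
            if (owner == player) score += 10;
            else if (owner == -player) score -= 10;

            // X- and C-squares are only dangerous while the corner is empty.
            if (owner == 0)
            {
                if (At(board, x) == player) score -= 5;
                if (At(board, c1) == player) score -= 2;
                if (At(board, c2) == player) score -= 2;
            }
        }
        return score;
    }

    static int At(int[,] board, int square) => board[square / 8, square % 8];
}
```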
Piece stability and parity
There were a couple of major components of a good Othello evaluation function that I didn't include. These were piece stability and parity. Piece stability, i.e., finding the pieces that cannot be flipped, is one of the trickier things to determine. There is a good description of how to do it here. I couldn't think of a really efficient way to implement stability, so I left it out.
Parity, i.e., determining who plays last, was relatively simple to implement in its basic form. By default, white will always play last, and therefore has an advantage. For black to play last, someone has to miss a turn. The basic approach to parity didn't really seem to impact the performance of the computer player, so I left it out of the evaluation. A sophisticated form of parity - one where isolated parts of the board are evaluated for parity (an isolated section is one that is surrounded by pieces and edges) - seemed too tricky to implement, so I never tried.
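For reference, the basic form boils down to a check on the number of empty squares: with no passed turns, the side to move plays last exactly when that count is odd. The sketch below illustrates that simple idea and is not the code I actually wrote:

```csharp
// Basic parity: with no passes, moves alternate, so the side to move also
// makes the final move exactly when the empty-square count is odd.
// Illustrative sketch using the same 8x8 int-array convention.
static class Parity
{
    public static bool SideToMovePlaysLast(int[,] board)
    {
        int empty = 0;
        for (int r = 0; r < 8; r++)
            for (int c = 0; c < 8; c++)
                if (board[r, c] == 0) empty++;

        return empty % 2 == 1;
    }
}
```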
Depth-First Search
I took my negamax, alpha-beta pruning negamax, and negascout search methods from my noughts and crosses source code and adapted them to work for Othello. That was fairly easy to do, although my original code was a bit rubbish.
Initially, I thought I'd use negascout for Othello, as it is the best of the three. However, for it to work effectively (i.e., better than an alpha-beta pruning negamax), it needs a reasonably good pre-ordering of the plays from best to worst, usually obtained by doing shallow searches of the game tree; negascout generally runs a mini-search within the normal negamax search. That was a more involved task than I had expected. Once I had implemented the patterns approach in the evaluation function, I realised that my computer player was already pretty good, so I decided not to pursue a negascout algorithm for Othello.
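For completeness, here is a minimal sketch of the alpha-beta pruning negamax over an abstract game state. The `IGameState` interface is an illustrative stand-in rather than my actual API; in Othello, a forced pass would just appear as the single "next state" with the other side to move.

```csharp
using System;
using System.Collections.Generic;

// Illustrative stand-in for a game position; not my actual API.
interface IGameState
{
    bool IsTerminal { get; }
    double Evaluate();                     // score from the side to move's point of view
    IEnumerable<IGameState> NextStates();  // positions after each legal play (or a pass)
}

static class Search
{
    public static double Negamax(IGameState state, int depth, double alpha, double beta)
    {
        if (depth == 0 || state.IsTerminal)
            return state.Evaluate();

        double best = double.NegativeInfinity;
        foreach (var child in state.NextStates())
        {
            // The child's score is from the opponent's point of view, so negate it.
            double score = -Negamax(child, depth - 1, -beta, -alpha);
            best = Math.Max(best, score);
            alpha = Math.Max(alpha, score);
            if (alpha >= beta)
                break;  // prune: the opponent will never allow this line
        }
        return best;
    }
}
```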
Opening book
With all the work that I did to display the history of games to the human player (the percentage of recorded games in which each play was made, and the percentage of those games that black went on to win), I had serendipitously written the code the computer player needed for an opening book. The computer opponent uses this information in much the same way a human would.
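A sketch of how those statistics might drive the opening-book choice; the `BookEntry` shape and the selection rule are illustrative, since I haven't shown the actual data structures here:

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative shape for one history entry: a play, how often it was made,
// and how often black went on to win from it.
record BookEntry(string Move, double PercentOfGames, double PercentBlackWins);

static class OpeningBook
{
    // Prefer the play with the best historical win rate for the side to move,
    // ignoring plays that hardly ever appear (1% is an arbitrary cut-off).
    public static string ChooseMove(IEnumerable<BookEntry> entries, bool blackToMove)
    {
        return entries
            .Where(e => e.PercentOfGames >= 1.0)
            .OrderByDescending(e => blackToMove ? e.PercentBlackWins
                                                : 100.0 - e.PercentBlackWins)
            .Select(e => e.Move)
            .FirstOrDefault();
    }
}
```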
End game search
One of the things that I didn't do for the computer player was put any work into an end-game search. This sort of search is much deeper and tries to search all the way to the end of the game. Once the computer has that information, it knows whether it has won and exactly which moves to make to ensure victory. Until the end-game search takes over, all other plays are calculated guesses. All I did as the computer player approached the end game was increase the search depth.
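As a sketch, "increase the search depth" amounts to something like the following, where the thresholds are illustrative; a true end-game solver would instead search all the way to the final move once few enough squares remain.

```csharp
// Picks a fixed search depth based on how full the board is.
// The thresholds are illustrative only.
static class SearchDepth
{
    public static int Choose(int emptySquares)
    {
        if (emptySquares <= 14) return 10;  // dig deeper as the end approaches
        if (emptySquares <= 24) return 8;
        return 6;
    }
}
```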
Conclusion
I completely underestimated how difficult it would be to create a competent computer player. I now have a much greater respect for the people who managed to create computer players vastly superior to mine on machines vastly inferior to today's technology. It's true that .NET is not really up to the challenge (unmanaged code like C and assembler would be much more suitable), but you'd think a modern processor running .NET could get close to a Pentium running C. From my experience, it didn't. In truth, though, it was the algorithms that weren't good enough. I would have many more things to do to be able to compete with other computer Othello players: stability detection, parity, better potential mobility, negascout, Multi-ProbCut, end-game solving, much deeper searching, training and machine learning, and so on. There is an immensity of improvements that I didn't even touch on. I'm not unhappy with what I achieved; if anything, I'm very impressed that other people have done so much better.