## GANN-EWN / Part 5 / Applying the Neural Network trained by the Genetic Algorithm to the game EWN

If you haven’t read the previous parts, you should start at part 1.

Now that we have a Neural Network architecture and a Genetic Algorithm, we can apply them to the game “Einstein Würfelt Nicht”.

# The Parameters of the Problem

There are several parameters that need to be addressed first:

• how to feed information to the NN and how to get a result,
• how big the NN should be: how many layers, how many neurons per layer, how many connections between each layer,
• how to tune the parameters for the GA (population size, mutation rate, etc.).

There are precedents for each of these three points (NNs as well as GAs have been studied extensively), but there is no ready-made answer for this particular combination of problems.

# Feeding Information to the NN

The answer to the first question seems trivial, but it is actually not. The first answer that comes to mind is “just feed it the board and the die and get the stone that has to be played and the move to be played”.

This is of course a valid answer, but there are many more:

• don’t feed the whole board, but rather the positions of the stones along with the die,
• feed the possible moves instead of the die,
• or you could use the network as a “value network”, in other words let the network evaluate how favorable a certain position is for the current player. In that case, the algorithm has to simulate every possible move and apply the network on every resulting board.

There are many other ways of feeding information, including feeding redundant information. For instance, you could feed the number of stones left for each player in addition to the board: that is obviously useful information, and the network could use it directly rather than having to compute it again.
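
As an illustration of the first option, a board-plus-die encoding could look like the following minimal sketch. The `BoardEncoder` class and the sign convention are hypothetical, not the code actually used in this project:

```java
// Hypothetical sketch: flatten the 5x5 board and the die into 26 inputs.
// Assumed convention for this example: 0 = empty square, +n = current
// player's piece n, -n = opponent's piece n; the last input is the die.
public class BoardEncoder {
    public static double[] encode(int[][] board, int die) {
        double[] input = new double[26];
        for (int row = 0; row < 5; row++)
            for (int col = 0; col < 5; col++)
                input[row * 5 + col] = board[row][col];
        input[25] = die;
        return input;
    }
}
```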

# Getting the NN’s Result

The Neural Network can produce very different kinds of results, depending also on which information it was given as input. Here are a few ways the network can be used:

• the number of the tile to be played along with the move to be played,
• among all the possible moves, get the index of the chosen move,
• return one output number for each tile and each move, and play the tile that has the highest number along with the move that has the highest number,
• if used as a value network, just return one value: the fitness of the current position.

Again, there are many ways of using a neural network to play. We could even use two networks: one to choose the tile to play, and then a second one to choose the move. Whatever we choose, we have to keep in mind two main points here:

• the result must never be an invalid move, which is not always trivial to guarantee,
• make sure that results cannot be incoherent. For instance, a valid possibility would be to have two integer outputs: one for the tile to play on the board, one for the move to play, each passed through the mathematical “mod” operation to force the results into range. But then there might be a discrepancy between the chosen tile and the chosen move: maybe the move made perfect sense with another tile, but not with the one that was eventually selected.
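
The third option above (one output score per move, play the best-scoring one) sidesteps both problems at once if the argmax is taken over legal moves only. A minimal sketch, with hypothetical class and method names:

```java
// Hypothetical sketch: the network emits one score per possible move, and we
// pick the best-scoring move among the legal ones, so the result can never
// be an invalid move and there is no tile/move discrepancy to reconcile.
public class MoveSelector {
    public static int pickMove(double[] scores, boolean[] legal) {
        int best = -1;
        for (int i = 0; i < scores.length; i++)
            if (legal[i] && (best == -1 || scores[i] > scores[best]))
                best = i;
        return best; // -1 only if no move is legal, which cannot happen in EWN
    }
}
```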

# The Size of the Neural Network

I tend to reason in terms of the complexity of the game to address this problem. Just ask yourself: “if I had to code a perfect player for this game, how many rules and cases would I have to take into account?”

The answer also depends on what you feed the network. If you feed it a lot of redundant information (for instance, feed it the board and the number of remaining tiles for each player), then the network will have to extract less metadata from the board.

In the case of the game “Einstein Würfelt Nicht”, I chose mostly not to give any redundant information to the network. Given the size of the board, I believed that a simple network of just a few layers and a couple of hundred neurons would probably do the trick.

Then comes the number of connections between layers. In order to extract as much information as possible from the board, I believed that a fully connected first layer was needed. I chose not to enforce it, but I gave it a sufficient number of connections for this to happen. So I started off with a first layer of 20 neurons and 500 connections to the input (the board, a 25-byte array, plus an additional byte for the die). I have also tried other variants.
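
Such a sparse, randomly connected first layer could be sketched as follows. The even split (25 random connections for each of the 20 neurons, 500 in total) and the tanh activation are illustrative assumptions, not the actual network code:

```java
import java.util.Random;

// Hypothetical sketch of a sparse first layer: each neuron draws a fixed
// number of random connections to the 26 inputs (board + die).
public class SparseLayer {
    public final int[][] sources;   // sources[n][k] = input index feeding neuron n
    public final double[][] weights;

    public SparseLayer(int neurons, int connectionsPerNeuron, int inputs, long seed) {
        Random rng = new Random(seed);
        sources = new int[neurons][connectionsPerNeuron];
        weights = new double[neurons][connectionsPerNeuron];
        for (int n = 0; n < neurons; n++)
            for (int k = 0; k < connectionsPerNeuron; k++) {
                sources[n][k] = rng.nextInt(inputs);          // random input connection
                weights[n][k] = rng.nextDouble() * 2 - 1;     // weight in [-1, 1)
            }
    }

    public double[] forward(double[] input) {
        double[] out = new double[sources.length];
        for (int n = 0; n < sources.length; n++) {
            double sum = 0;
            for (int k = 0; k < sources[n].length; k++)
                sum += weights[n][k] * input[sources[n][k]];
            out[n] = Math.tanh(sum); // illustrative activation choice
        }
        return out;
    }
}
```

With 20 neurons and 25 connections each, most of the 26 inputs are reached several times, which is what gives a near-fully-connected layer a chance to emerge without enforcing it.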

# The Parameters of the Genetic Algorithm

## Population size, mutation and cross-over rates

I started off with a population of 100 individuals and ran some tests with 200. Within that population, I chose to keep a fair share of the best individuals, 10 to 30%, without re-checking their scores. All the others are discarded and replaced with either new individuals or top individuals that have been mutated and crossed over.

As for the mutation rate, I made it randomly chosen between 1 and 10–20 per 1000. That is to say, to create a mutated individual, 1 to 10–20 random mutations are applied for every 1000 bytes of its DNA. Note that with a network of 10,000 elements, that is just a few mutations in the whole network. A mutation can be a change in an operation, a moved connection, or a change in parameters such as weight and offset.

As for the crossover rate, I made it from 0.01% to 1%. As we will see later, it wasn’t that successful in the first versions.
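
The mutation scheme described above can be sketched by treating the genome as a flat byte array. The `Mutator` class is a hypothetical illustration; the real DNA also encodes operations and connections, not just raw bytes:

```java
import java.util.Random;

public class Mutator {
    // Apply roughly `ratePerThousand` random byte mutations per 1000 bytes
    // of DNA, as described above (the rate itself is drawn per individual).
    public static byte[] mutate(byte[] dna, int ratePerThousand, Random rng) {
        byte[] child = dna.clone();
        int mutations = Math.max(1, dna.length * ratePerThousand / 1000);
        for (int i = 0; i < mutations; i++)
            child[rng.nextInt(child.length)] = (byte) rng.nextInt(256);
        return child;
    }
}
```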

## Evaluating players

Another important parameter is the accuracy of the evaluation of every individual. In the case of a game, it can be measured by having the individual play many games against other players, which may be other individuals of the population and/or a fixed hard-coded player. The more games are played, the more accurate the rating of a player. This becomes more and more critical as the player improves: at the beginning, the player just needs to acquire very basic skills and moves, and it fails often anyway, so it is easy to tell the difference between a good player and a bad one. As it improves, it becomes more and more difficult to find situations in which players can gain an advantage or make a slight mistake.

In the case of EWN, as it is a highly probabilistic game, the number of matches required for an accurate rating grows very quickly. Note that there is even a large number of starting positions: 6! × 6!, which is roughly 500 thousand permutations. Symmetries remove some of them, but a large number of starting positions remains despite the very simple placing rules. So even if a player plays 100,000 games, that still does not cover the wide variety of openings. What if your player handles those 100,000 openings well but is totally lame at playing the rest? Not to mention the number of possible positions after a few turns.

A good indicator of whether we have played enough games to rate players correctly is the “stability” of the players’ ranking as we continue playing games. As the ranking stabilizes (for instance, the top player remains at the top for quite a long time), we are getting better and better accuracy.
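
One way to turn that “stability” idea into a concrete test is to track who is ranked first after each evaluation round. This is a hypothetical sketch, not the actual criterion used in the project:

```java
public class RankingStability {
    // Consider ratings "stable" once the identity of the top player has not
    // changed for `window` consecutive evaluation rounds.
    public static boolean isStable(int[] topPlayerPerRound, int window) {
        int n = topPlayerPerRound.length;
        if (n < window) return false;
        for (int i = n - window; i < n; i++)
            if (topPlayerPerRound[i] != topPlayerPerRound[n - 1]) return false;
        return true;
    }
}
```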

## Individuals selection and breeding

As I developed this and started testing it, I realized that the evolution was going very slowly: new individuals were bad in general, with only a few of them reaching the “elite” of the population. That’s because of the randomness of the alterations. We will see later how I tackled that problem.

As I was going forward in this and observing how slow the process was on a CPU, I also started planning to switch the whole evaluation process to the GPU.

## GANN-EWN / Part 2 / Building a Hand-Crafted Player For EWN

Welcome to this second part of the GANN-EWN (Genetic Algorithms for Neural Networks applied to the game “Einstein Würfelt Nicht”) series! If you haven’t read it yet, it’s probably time to read the first part.

The game is quite simple: at the beginning of the game, each player has 6 pieces labeled from 1 to 6, and each player throws a die at every turn. Depending on the result of the throw, the player must move one of his pieces, sometimes choosing between two of them. The goal is to reach the opposite side of the board (diagonally) or to capture all of the opponent’s pieces.

The dice rules are simple as well. If the die corresponds exactly to one of the player’s pieces, he has to play that piece in any forward direction (the three squares: up, up-left, or left), as in this position where the die shows “5”:

If no piece with the same number is on the board, then one of the two pieces closest in number to the die can be played. For instance, if the die shows “2”, then the “1” or the “3” can be played here:

Again, if the die is “3” then on this board the “2” or the “5” can be played:

Note that if there was no “2” on this board, then only the “5” could be played since there is no lower number than “3” on the board.
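
This piece-selection rule can be captured in a few lines with an ordered set. This is a sketch; `EwnDice` and the piece representation are illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class EwnDice {
    // Given the numbers of a player's pieces still on the board and a die
    // roll, return which pieces that player may move under the EWN rules.
    public static List<Integer> movable(TreeSet<Integer> pieces, int die) {
        List<Integer> result = new ArrayList<>();
        if (pieces.contains(die)) {
            result.add(die);                      // exact match: that piece must be played
        } else {
            Integer lower = pieces.floor(die);    // closest piece numbered below the die
            Integer higher = pieces.ceiling(die); // closest piece numbered above the die
            if (lower != null) result.add(lower);
            if (higher != null) result.add(higher);
        }
        return result;
    }
}
```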

The first thing to do was to code a simple board implementing the rules of the game, which is quite straightforward. I chose Java, which is probably the programming language I’m most comfortable with, but also for another reason, which I will develop later.

To have something to compare my neural networks to, and also to test my implementation of the EWN board and game logic, my first step was to implement a simple (I may even call it “naive”) player, based on my very limited experience of the game. Remember that I had just discovered EWN. After having a look at some games from the best players, I realized that one common strategy is to “eat up” some of your own pieces early in the game, to give momentum to your remaining pieces. Although this is a “classic” strategy, it does have one drawback: your opponent may be able to take all your remaining pieces, and you lose. Nevertheless, I decided to implement this rule, along with a few others. Here is the list of rules that are hard-coded in this first player:

• if one move is a winning move (capture the last opponent’s piece or reach the opposite side of the board), then play it,
• if an opponent piece is close to winning (basically at a certain distance from the goal), always take it,
• always capture your own pieces if they are numbered within a certain range (typically from 2 to 5) and you still have more than a certain number of pieces (you don’t want to take your own pieces if you have too few of them; that could be suicide in some situations),
• always move the most advanced piece forward (rushing towards the goal),
• preferably move forward-up rather than simply forward or up.

And that’s it!
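
These rules form a simple priority list: try each rule in order and play the first legal move it matches. A generic sketch of that structure, with the game-specific predicates (winning move, capture, etc.) left as stand-ins:

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of the rule priority above: rules are tried in order,
// and the first legal move matched by the highest-priority rule is played.
public class SimplePlayer {
    public static <M> M choose(List<M> legalMoves, List<Predicate<M>> rulesInPriorityOrder) {
        for (Predicate<M> rule : rulesInPriorityOrder)
            for (M move : legalMoves)
                if (rule.test(move))
                    return move;
        return legalMoves.get(0); // no rule matched: fall back to the first legal move
    }
}
```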

So yes, this is a very simple player, but one funny fact is that this player beats me consistently! That’s how weak I am at EWN as a player.

But as those rules can take parameters (how close to the goal must an opponent’s piece be before we capture it? etc.), I ran a simple computer simulation over a range of possible parameters (including totally disabling a rule). Those simulations showed me that, by far, the best parameters are:

• capture opponent pieces that are at most at a distance of 2 from the goal, although this rule turns out not to be very important,
• take your own pieces from 2 to 5 until you have only 2 pieces left,
• always move forward-up.

Of course, every single rule taken on its own may have some exceptions, but my “simple player” doesn’t go that far.

I also created a “random” player which picks moves at random, again to have something to compare to that doesn’t vary much in its crappiness. 🙂 And indeed my simple player was definitely better than the random one, at least.

I published this first player on the littlegolem site, with a different user than my regular user.

This is when I discovered that the site was actually crawling with bots playing EWN! Nice, the challenge was getting interesting! That’s when I met a German researcher who had also written a bot to play EWN on littlegolem. He even based part of his PhD on this project, so needless to say my own simple player was quite ridiculous compared to his, and took a very bad beating!

This is when I realized that EWN is actually a statistics-driven game, since you make choices depending on the probabilities of the next dice throws. You have a “2” ahead and want to give it momentum to reach the opposite side of the board faster? Capture your own “1”, “3” and possibly “4” and “5” pieces. The enemy has one piece slightly ahead, but that piece has a 1/6 chance of being picked by the die? Just ignore it. But you will capture that piece if it has a 1/2 chance of being played. As the Wikipedia page points out, one of the first approaches is to calculate probability tables for each piece to reach the goal. All the dynamics of the game are based on probabilities. Well, almost. So that makes it a very good candidate for a Monte-Carlo type of approach.

As a reminder, the general principle behind the Monte-Carlo approach is: “at a given turn, for each possible move, play as many random games as possible, and pick the move that leads to victory most often.”

Brutal. But very effective and the easiest “AI” to program. I guess. At least on a CPU.
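
The flat MC principle can be sketched in a few lines. Here the game-specific playout is abstracted behind a predicate that plays one random game after a given move and reports whether the root player won; that interface is a hypothetical simplification of the real player:

```java
import java.util.function.IntPredicate;

public class MonteCarlo {
    // Flat Monte-Carlo: score each legal move (indexed 0..moveCount-1) by
    // running `playouts` random games and keep the move that wins most often.
    public static int bestMove(int moveCount, int playouts, IntPredicate playoutWins) {
        int best = 0, bestWins = -1;
        for (int move = 0; move < moveCount; move++) {
            int wins = 0;
            for (int p = 0; p < playouts; p++)
                if (playoutWins.test(move)) wins++;   // one random game after `move`
            if (wins > bestWins) { bestWins = wins; best = move; }
        }
        return best;
    }
}
```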

So I implemented a simple Monte-Carlo algorithm (which I will abbreviate as “MC” from now on) and launched it against my simple player. Sure enough, the MC player won, even though it was only playing 1000 games with random moves to decide which move to pick. And I’m not talking about “a little better” here: the MC player won almost 70 games out of 100.

Then I thought: random moves are not really representative of a normal game, so the results for the MC player are somewhat skewed. What if I used my simple player instead of a random player to play the games?

No surprise: using the simple player inside the MC player improved the results against the simple player by quite a bit. But against the random MC player, the difference was not that big: only a slight improvement of 2.5 to 3%, which is not bad but not as much as I would have expected. Note that running this simulation on a CPU costs a lot of cycles: 1000 games of roughly 15 to 20 moves each, and at each turn, 1000 playout games for every possible move (there are 1 to 6 possible moves). That’s a lot of moves to run! I have since accelerated this to run on a GPU, but that will come later.

And still, the MC player lost to the German PhD’s carefully hand-crafted player. Well, if a simple MC approach had been enough, that’s what he would have picked in the first place!

This is it for this part. Next, we’ll start building a Neural Network with its associated Genetic Algorithm.

## GANN-EWN / Part 1 / GANNs (Genetic Algorithms applied to Neural Networks)

Combining Genetic Algorithms and Neural Networks is an idea that has been on my mind for the last 20 years. Unfortunately, life got in the way, and I never had a real chance to put it into practice. Neural networks are very often used for “classifying” jobs, and they are very good at it: they have gone far beyond what we humans could program ourselves. However, as I have ideas for developing very interesting games, I also had in mind to have an AI play those games, an AI that would actually learn how to play them rather than my programming it myself. This is exactly what the people at DeepMind have been doing in recent years. From Atari games to Go and Chess (and Shogi), they have amazed us all.

As I’m going back to my original interest in AI, I have taken upon myself to build an AI from scratch that would do exactly what I had in mind 20 years ago: learn how to play games.

In the meantime, things in the AI field have evolved. Graphics cards have opened new horizons for training bigger and faster Neural Networks. I had some graphics cards that I used at some point to mine cryptocurrencies for fun. Those were AMD cards, and they got hot very easily (the 7970 went up to 100 degrees Celsius; I suspect the fitting of the radiators was poorly done), so I set up a watercooling system for the whole miner. Although it was a very interesting experience, it came to an end rather quickly, since ASICs kicked in and rendered graphics cards useless.

At the time, I started a project for handwritten character recognition (more on this project later…), and I used one of those cards to apply some graphics transformations (the Hough transform and such). But those cards also consume a lot of electricity. So, sorry AMD, but I switched to Nvidia (I don’t own any stock in either of those companies… or any other company, by the way, but I do believe that AMD is seriously losing ground here), which also has great support from tools for building Neural Networks, like TensorFlow.

So I wanted to start with a simple game and a simple goal:

• create a neural network framework from scratch, with NNs as generic as possible, including non-linear functions, spikes, max, and many other operations; the best ones will be selected by the genetic algorithm,
• a genetic algorithm to make those NNs evolve,
• all this should be able to run both on CPUs and GPUs (at least the most power-consuming parts); I originally planned to learn some CUDA, although I had prior OpenCL experience.

But everything had to start somewhere. And I didn’t want to tackle Chess or Go as my first GANN project. Anyway, the DeepMind guys have done that already!

So I had to aim for something simple to start with, and at the same time a game that I didn’t know well, so that I wouldn’t be tempted to direct the algorithm and build a “Stockfish for game XXX”. On the same note, another strong criterion was that it shouldn’t be a game already mastered by mainstream software, like Chess, Checkers or Reversi/Othello. At the website littlegolem, on which I play board games, there are some games that I don’t know well (yet) and also many games for which mainstream software is not really available. One of them is called “Einstein Würfelt Nicht” (which I will call EWN from now on): a dice game played on a 5×5 board. When I started this project, I had never played the game, but the rules seemed quite simple and I decided to have a go at it. This was of course the first attempt, the first prototype, a starting point, certainly not the end point.

There are also two variants of the game on the site, which would also be nice to tackle. My initial goal was to defeat human players and become a top player on the site. Note that this particular site is very flexible with its players and doesn’t ban anyone. If you don’t like playing against cheaters or robots, then that site is definitely not for you. But personally I learn a lot by playing against any kind of opponent, whether I win or lose.

I will develop that story in the next articles, and I will also post some of the code soon on my gitlab account. However, I will not post the full code, so don’t expect to get a ready-made bot that would play EWN for you. But the generic code will be there, and it will be adaptable to other games as well.

The next part is here.