Monitoring “all-cause” mortality with open source

Introduction

Transparency is essential to building a society based on trust. Since I always question the information I come across, whatever its source, I have been asking myself a lot of questions about mortality since the beginning of the Covid epidemic. I have also posted detailed (and less detailed) write-ups of what I was finding about Covid, along with other information, here and elsewhere.

One of the main problems an ordinary citizen runs into very quickly is the difficulty of evaluating national data, which are by definition aggregates, sometimes even aggregates of aggregates… while everyone comes up with their own interpretation. So I needed to start again from raw data that is hard to falsify or manipulate.

Which data?

Besides, the count of “Covid deaths” is always biased because, as I explain in this other post, people most often die from a combination of factors, not just from Covid (or from any other single disease).

That leaves us ordinary citizens with “all-cause” mortality, which reflects an unbiased reality: such-and-such a person died in such-and-such a place on such-and-such a date. And INSEE provides exactly this detailed data. The data always lag behind by one and a half to two months. That still makes it possible to look back at what happened in order to anticipate what is coming next. Above all, it makes it possible to judge how reliable various sources of information were during past events, and thus to build up a confidence index for each source.

So I wrote a program to analyze these data and compute mortality curves from the raw records. I also set up a website to display the results, which you can consult (the site is hosted at my home, so it may not always be available, and it may show an erroneous view while I am updating the data, once a month).

A short user manual

A few remarks about the site:

  • the data come straight from the raw files, after filtering (there are duplicates in the INSEE files),
  • the graph covers a whole year, so the leftmost data correspond to the beginning of the year and the rightmost data to the end of the year,
  • the available display types are:
    • absolute: the number of deaths on each day,
    • weighted: the number of deaths per 1 million inhabitants, which makes it possible to compare one year with another since the population changes from year to year,
    • smoothed: the 10-day moving average of the number of deaths per 1 million inhabitants, which gives a “smoothed” version of the curves (a small sketch of these computations follows this list),
  • you can filter by department (or by foreign country) to observe mortality in a given area and thus pinpoint local events,
  • you can also filter by age bracket to see the impact of events depending on age,
  • you can filter by a range of years and/or months, in order to compare nearby years without the “noise” of the other years,
  • when a single year is selected (filtering from year x to the same year x), the display switches to mortality by age bracket, for instance to analyze one particular event.
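For those who want to check the arithmetic behind the “weighted” and “smoothed” displays, here is a small Java sketch. It is only an illustration of the computations, not the actual code of the site, and it uses a simple trailing 10-day window:

```java
// Illustrative sketch of the "weighted" and "smoothed" displays described above.
public class MortalityCurves {

	// "Weighted" display: deaths per 1 million inhabitants for each day.
	static double[] perMillion(int[] dailyDeaths, double population) {
		double[] weighted = new double[dailyDeaths.length];
		for (int day = 0; day < dailyDeaths.length; day++)
			weighted[day] = dailyDeaths[day] * 1_000_000.0 / population;
		return weighted;
	}

	// "Smoothed" display: moving average of the weighted series over a window of days.
	static double[] movingAverage(double[] weighted, int window) {
		double[] smoothed = new double[weighted.length];
		for (int day = 0; day < weighted.length; day++) {
			int from = Math.max(0, day - window + 1);
			double sum = 0;
			for (int i = from; i <= day; i++)
				sum += weighted[i];
			smoothed[day] = sum / (day - from + 1);
		}
		return smoothed;
	}

	public static void main(String[] args) {
		int[] deaths = {1500, 1600, 1550, 1700, 1800};        // illustrative daily counts
		double[] weighted = perMillion(deaths, 67_000_000);   // roughly the French population
		double[] smoothed = movingAverage(weighted, 10);
		System.out.println(java.util.Arrays.toString(smoothed));
	}
}
```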

Transparency

For the sake of transparency, and to let collective intelligence do its work, I am publishing the source code of my programs as open source. Anyone can thus review my code, and even install it at home, modify it, improve it, and form their own opinion in complete independence.

The Java program that fills the database: https://gitlab.com/jytou/country-death-observe-java

The website that turns the data into curves: https://gitlab.com/jytou/country-death-observe-php

GANN-EWN / Part 2 / Building a Hand-Crafted Player For EWN

Welcome to this second part of the GANN-EWN (Genetic Algorithms for Neural Networks applied to the game “Einstein Würfelt Nicht”) series! If you haven’t read it yet, it’s probably time to read the first part.

The game is quite simple: at the beginning of the game each player has 6 pieces labeled from 1 to 6, and each player throws a die at every turn. Depending on the result of the throw, the player moves one of his pieces (sometimes with a choice between two of them, as explained below). The goal is to reach the corner of the board diagonally opposite your starting corner, or to capture all of the opponent’s pieces.

The dice rules are simple as well. If the die matches exactly one of the player’s remaining pieces, he has to play that piece, in any forward direction (one of the three squares: up, up-left, or left), as here where the die shows “5”:

If no piece with that exact number is left on the board, then one of the (at most) two pieces closest to the die value can be played. For instance, if the die shows “2” here, then either the “1” or the “3” can be played:

Similarly, if the die shows “3”, then on this board either the “2” or the “5” can be played:

Note that if there were no “2” on this board, then only the “5” could be played, since there is no number lower than “3” left on the board.
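To make the selection rule concrete, here is a minimal Java sketch (illustrative only, not my actual board code) that returns the piece numbers a player may move for a given die value:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

// Given the die value and the numbers of a player's remaining pieces,
// return the piece numbers that may be moved this turn.
public class PlayablePieces {

	public static List<Integer> playable(int die, TreeSet<Integer> remaining) {
		List<Integer> result = new ArrayList<>();
		if (remaining.contains(die)) {
			// Exact match: that piece must be played.
			result.add(die);
			return result;
		}
		// Otherwise the closest lower and closest higher pieces are playable.
		Integer lower = remaining.floor(die);
		Integer higher = remaining.ceiling(die);
		if (lower != null)
			result.add(lower);
		if (higher != null)
			result.add(higher);
		return result;
	}

	public static void main(String[] args) {
		TreeSet<Integer> remaining = new TreeSet<>(List.of(2, 5, 6));
		System.out.println(playable(3, remaining)); // [2, 5]
		System.out.println(playable(5, remaining)); // [5]
	}
}
```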

The first thing to do was to code a simple board implementing the rules of the game, which is quite straightforward. I chose Java, which is probably the programming language I’m most comfortable with, but also for another reason, which I will develop later.

To have something to compare my neural networks to, and also to test my implementation of the EWN board and game logic, my first step was to write a simple (and admittedly quite “naive”) player, based on my very limited experience of the game. Remember that I had just discovered EWN, so after having a look at some games from the best players, I realized that one common strategy is to “eat up” some of your own pieces early in the game, to give momentum to your remaining pieces. Although this is a “classic” strategy, it does have one drawback: your opponent may be able to capture all your remaining pieces, and then you lose. Nevertheless, I decided to implement this rule, along with a few others. Here is the list of rules that are hard-coded in this first player (a code sketch follows the list):

  • if one move is a winning move (capturing the last opponent’s piece or reaching the opposite corner of the board), then play it,
  • if an opponent’s piece is close to winning (basically within a certain distance of its goal), always capture it,
  • always capture your own pieces if their number is within a certain range (typically from 2 to 5), provided you still have more than a certain number of pieces left (you don’t want to capture your own pieces when you have too few of them, that could be suicidal in some situations),
  • always move the most advanced piece forward (rushing toward the goal),
  • preferably move forward-up (the diagonal) rather than simply forward or up.

And that’s it!
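Here is a compressed Java sketch of that rule cascade. The Board and Move interfaces are hypothetical stand-ins for my real classes (the method names are purely illustrative); the point is only to show the order in which the rules are tried:

```java
import java.util.Comparator;
import java.util.List;

// Illustrative sketch of the hand-crafted rule cascade listed above.
public class SimplePlayer {

	interface Move {
		boolean isDiagonal();
	}

	interface Board {
		boolean isWinning(Move m);                                // this move wins the game
		boolean capturesOpponentWithin(Move m, int d);            // captures an opponent piece at distance <= d from its goal
		boolean capturesOwnPieceBetween(Move m, int lo, int hi);  // captures one of our own pieces numbered lo..hi
		int ownPieceCount();
		int advancement(Move m);                                  // how far along the moved piece already is
	}

	private final int captureDistance;  // rule 2 parameter
	private final int minOwnPiecesLeft; // rule 3 parameter

	public SimplePlayer(int captureDistance, int minOwnPiecesLeft) {
		this.captureDistance = captureDistance;
		this.minOwnPiecesLeft = minOwnPiecesLeft;
	}

	public Move choose(Board board, List<Move> legalMoves) {
		for (Move m : legalMoves)                              // 1. winning move
			if (board.isWinning(m))
				return m;
		for (Move m : legalMoves)                              // 2. capture a threatening opponent piece
			if (board.capturesOpponentWithin(m, captureDistance))
				return m;
		if (board.ownPieceCount() > minOwnPiecesLeft)          // 3. thin out our own middle pieces
			for (Move m : legalMoves)
				if (board.capturesOwnPieceBetween(m, 2, 5))
					return m;
		return legalMoves.stream()                             // 4 & 5. rush the most advanced piece, diagonal first
				.max(Comparator.<Move>comparingInt(board::advancement)
						.thenComparingInt(m -> m.isDiagonal() ? 1 : 0))
				.orElseThrow();
	}
}
```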

So yes, this is a very simple player, but one funny fact is that this player beats me consistently! That’s how weak I am at EWN as a player.

But since those rules can be parameterized (how close to its goal does an opponent’s piece have to be for us to capture it? etc.), I ran a simple computer simulation over a range of possible parameter values (including completely disabling a rule). And those simulations showed me that, by far, the best parameters are:

  • capture opponent pieces that are at a distance of at most 2 from their goal, although this rule actually turns out not to be very important,
  • capture your own pieces numbered 2 to 5 until you have only 2 pieces left,
  • always prefer the forward-up (diagonal) move,
  • do advance the most advanced piece.

Of course, every single rule taken on its own may have some exceptions, but my “simple player” doesn’t go that far.
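The parameter sweep itself is nothing fancy. Here is a sketch of the idea, assuming a hypothetical playGame() helper that runs one full game between two players and reports who won (the placeholder body below would of course be replaced by the real game loop):

```java
// Sketch of the parameter sweep: for each parameter combination, play many
// games against a fixed baseline and keep the combination with the best win rate.
public class ParameterSweep {

	// Hypothetical helper: run one full game of EWN, return true if 'candidate' wins.
	static boolean playGame(SimplePlayer candidate, SimplePlayer baseline) {
		return Math.random() < 0.5; // placeholder only
	}

	public static void main(String[] args) {
		SimplePlayer baseline = new SimplePlayer(2, 2);
		double bestRate = -1;
		int bestDistance = -1, bestMinPieces = -1;
		for (int captureDistance = 0; captureDistance <= 4; captureDistance++) {    // 0 effectively disables rule 2
			for (int minPieces = 1; minPieces <= 4; minPieces++) {
				SimplePlayer candidate = new SimplePlayer(captureDistance, minPieces);
				int wins = 0, games = 1000;
				for (int g = 0; g < games; g++)
					if (playGame(candidate, baseline))
						wins++;
				double rate = wins / (double) games;
				if (rate > bestRate) {
					bestRate = rate;
					bestDistance = captureDistance;
					bestMinPieces = minPieces;
				}
			}
		}
		System.out.printf("best: distance=%d, minPieces=%d, win rate=%.1f%%%n",
				bestDistance, bestMinPieces, bestRate * 100);
	}
}
```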

I also created a “random” player which picks moves at random, again to have something to compare to that doesn’t vary much in its crappiness. 🙂 And indeed my simple player was definitely better than the random one, at least.

I published this first player on the littlegolem site, with a different user than my regular user.

This is when I discovered that the site was actually crawling with bots playing EWN! Nice, the challenge was getting interesting! That’s also when I met a German researcher who had written a bot to play EWN on littlegolem. He even did part of his PhD on this project, so needless to say, my own simple player looked quite ridiculous compared to his, and took a very bad beating!

This is when I realized that EWN is really a statistics-driven game, since you make choices depending on the probabilities of the next die throws. You have a “2” ahead and want to give it momentum to reach the opposite corner of the board faster? Capture your own “1”, “3” and possibly “4” and “5” pieces. The enemy has one piece slightly ahead, but that piece only has a 1/6 chance of being picked by the die? Just ignore it. But you will capture that piece if it has a 1/2 chance of being played. As the Wikipedia page points out, one of the first approaches is to compute probability tables for each piece to reach the goal. All the dynamics of the game are based on probabilities. Well, almost. So that makes it a very good candidate for a Monte-Carlo type of approach.

As a reminder, the general principle behind the Monte-Carlo approach is: “at a given turn, for each possible move, play as many random games as possible from the resulting position, and pick the move that leads to victory most often.”

Brutal. But very effective and the easiest “AI” to program. I guess. At least on a CPU.
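Here is what such a player can look like in Java, as a minimal sketch. The GameState and Move interfaces are again hypothetical stand-ins for my real board class, and the rollout policy is pluggable, which is exactly what allows replacing random moves with the simple player a bit further down:

```java
import java.util.List;
import java.util.Random;

// Minimal Monte-Carlo move chooser, following the principle quoted above.
public class MonteCarloPlayer {

	interface Move {}

	interface GameState {
		List<Move> legalMoves();   // legal moves for the current die roll
		GameState play(Move m);    // apply a move and roll the next die
		boolean isOver();
		int winner();              // 0 or 1 once the game is over
	}

	interface RolloutPolicy {
		Move pick(GameState state);
	}

	private final int rolloutsPerMove;
	private final RolloutPolicy policy;

	public MonteCarloPlayer(int rolloutsPerMove, RolloutPolicy policy) {
		this.rolloutsPerMove = rolloutsPerMove;
		this.policy = policy;
	}

	public Move choose(GameState state, int me) {
		Move best = null;
		int bestWins = -1;
		for (Move candidate : state.legalMoves()) {
			int wins = 0;
			for (int i = 0; i < rolloutsPerMove; i++) {
				// Play one full game from the position reached after 'candidate'.
				GameState s = state.play(candidate);
				while (!s.isOver())
					s = s.play(policy.pick(s));
				if (s.winner() == me)
					wins++;
			}
			if (wins > bestWins) {   // keep the move that wins most often
				bestWins = wins;
				best = candidate;
			}
		}
		return best;
	}

	// Purely random rollouts, as in the first experiment; replacing this
	// policy with the simple hand-crafted player gives the second variant.
	static RolloutPolicy randomPolicy(Random rnd) {
		return state -> {
			List<Move> moves = state.legalMoves();
			return moves.get(rnd.nextInt(moves.size()));
		};
	}
}
```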

So I implemented a simple Monte-Carlo algorithm (which I will abbreviate as “MC” from now on) and launched it against my simple player. And sure enough, the MC player won, even though it was just playing 1000 random games per candidate move to decide which one to pick. And I’m not talking about “a little better” here: the MC player won almost 70 games out of 100.

Then I thought: picking random moves is not really representative of a normal game, so the results for the MC player are somewhat skewed. What if I used my simple player instead of a random player to play out the games?

No surprise, using the simple player inside the MC player improved the results against the simple player by quite a bit. But against the random MC player, the difference was not that big: it only improved things slightly, by 2.5 to 3%, which is not bad but not as much as I would have expected. Note that running this simulation on a CPU costs a lot of cycles: each rollout is a full game of roughly 15 to 20 moves, and at each turn of the real game the MC player plays 1000 such rollouts for every possible move (there are 1 to 6 possible moves). That’s a lot of moves to run! I have since accelerated that to run on a GPU, but that will come later.

And still, the MC player lost to the German researcher’s carefully hand-crafted player. Well, if a simple MC approach were enough, that’s what he would have picked in the first place!

This is it for this part. Next, we’ll start building a Neural Network with its associated Genetic Algorithm.

Part 3 is here.