# ELO Rating System
In adversarial games, the cumulative environment reward may not be a meaningful metric by which to track learning progress.
This is because the cumulative reward is entirely dependent on the skill of the opponent.
An agent at a particular skill level will get more or less reward against a worse or better agent, respectively.
Instead, it's better to use the ELO rating system, a method for calculating the relative skill level between two players in a zero-sum game.
If the training performs correctly, this value should steadily increase.
A zero-sum game is a game where each player's gain or loss of utility is exactly balanced by the gain or loss of the utility of the opponent.
Simply put, we face a zero-sum game when one agent receives a reward of +1.0 and its opponent receives a reward of -1.0.
For instance, Tennis is a zero-sum game: if you win the point you get +1.0 and your opponent gets -1.0 reward.
Each player has an initial ELO score, defined by the `initial_elo` trainer configuration hyperparameter.
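As a sketch, `initial_elo` is set in the `self_play` section of the trainer configuration YAML; the behavior name and the other hyperparameter values below are illustrative placeholders, not recommendations:

```yaml
behaviors:
  Tennis:                  # hypothetical behavior name
    trainer_type: ppo
    self_play:
      save_steps: 50000
      swap_steps: 2000
      initial_elo: 1200.0  # starting ELO score for every player
```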
The difference in rating between the two players serves as the predictor of the outcomes of a match.
For instance, if player A has an ELO score of 2100 and player B has an ELO score of 1800, the chance that player A wins is 85% against 15% for player B.
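This prediction can be sketched with the standard ELO expected-score formula; `expected_score` is a hypothetical helper for illustration, not part of the ML-Agents API:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that player A beats player B under the ELO model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# A 300-point rating gap gives player A roughly an 85% chance of winning.
print(round(expected_score(2100, 1800), 2))
```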
At the end of the game, we update each player's actual ELO score based on the outcome, using a linear adjustment proportional to the amount by which the player over-performed or under-performed. The winning player takes points from the losing one:
We update each player's rating using the standard ELO formulas. The expected score of player A is `Ea = 1 / (1 + 10^((Rb - Ra) / 400))`, and after the game the new rating is `R'a = Ra + K * (Sa - Ea)`, where `Sa` is the actual score (1 for a win, 0 for a loss, 0.5 for a draw) and `K` is a constant that bounds how much a single game can change a rating (K = 16 in the example below).

For example, if both players start with `initial_elo = 1200.0`, the expected scores are Ea = 0.5 and Eb = 0.5. This means each player has a 50% chance of winning the point.
If A wins, the new ratings would be:
Ra = 1200 + 16 * (1 - 0.5) → 1208
Rb = 1200 + 16 * (0 - 0.5) → 1192
Player A now has an ELO score of 1208 and Player B an ELO score of 1192. Therefore, Player A is now rated slightly higher than Player B.
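The full update above can be sketched as a small function; `update_elo` is a hypothetical helper shown for illustration, not the ML-Agents implementation:

```python
def update_elo(rating_a: float, rating_b: float, score_a: float, k: float = 16.0):
    """Return (new_rating_a, new_rating_b) after one game.

    score_a is the actual score of player A: 1.0 for a win,
    0.0 for a loss, 0.5 for a draw. k bounds the adjustment size.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    new_a = rating_a + k * (score_a - expected_a)
    # Player B's score and expectation are the complements of A's,
    # so B loses exactly the points A gains (zero-sum adjustment).
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

# Both players start at 1200 and A wins the point.
print(update_elo(1200.0, 1200.0, 1.0))  # (1208.0, 1192.0)
```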