Friday, 7 September 2012

Rating by Expected Score

It is clear from my previous article Rating by Maximum Likelihood that measuring the strength of a chess player's opposition by the average rating of his opponents introduces significant inaccuracy into a rating calculation.  An alternative approach to maximum likelihood is to calculate the rating that makes the expected score against the opposition, as given by the Elo formula, equal to the actual score.

The USCF version of the Elo rating system estimates the score sA (as a fraction between 0 and 1) of player A with rating RA against an opponent with a rating RB using the formula:

sA = 1 / (1 + 10^[(RB-RA)/400])
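As a quick sketch (my own illustration, not code from this article; the class name and sample ratings are made up), the formula translates directly into Java:

```java
public class EloScore {
    // Expected score of a player rated ra against an opponent rated rb,
    // using the Elo logistic formula with a 400-point scale.
    static double expectedScore(double ra, double rb) {
        return 1.0 / (1.0 + Math.pow(10.0, (rb - ra) / 400.0));
    }

    public static void main(String[] args) {
        System.out.println(expectedScore(1800, 1800)); // equal ratings: 0.5
        System.out.println(expectedScore(2200, 1800)); // +400 points: ~0.909
    }
}
```

Equal ratings give an expected score of one half, and a 400-point advantage gives an expected score of 10/11 ≈ 0.909.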


Let R be the rating of a player, and Ri, for i = 1 to N, be the ratings of his opponents.  The Elo formula gives the expected score against these opponents as:

(1/N) * Σi 1/(1 + 10^[(Ri-R)/400])

Let s be the actual score (0 < s < 1), xi = 10^(Ri/400), and x = 10^(R/400).

When the expected score is equal to the actual score:

(1/N) *  Σi x / (x + xi) = s


Define the function:

f(x) = (1/N) *  Σi x / (x + xi) – s

Differentiating with respect to x gives:

f'(x) = (1/N)  Σi xi / (x + xi)^2
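These two functions translate directly into code. A minimal sketch (class name and test values are mine), assuming the caller has already converted each opponent rating Ri into xi = 10^(Ri/400):

```java
public class ScoreFunction {
    // f(x) = (1/N) * sum_i x/(x + xi) - s, where xi = 10^(Ri/400).
    static double f(double x, double[] xi, double s) {
        double sum = 0.0;
        for (double v : xi) sum += x / (x + v);
        return sum / xi.length - s;
    }

    // f'(x) = (1/N) * sum_i xi/(x + xi)^2, which is always positive.
    static double fPrime(double x, double[] xi) {
        double sum = 0.0;
        for (double v : xi) sum += v / ((x + v) * (x + v));
        return sum / xi.length;
    }
}
```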

The graph of f(x) for Michael de la Maza's last 19 games (again!) is shown below:

f(0) = -s, and as x → infinity, f(x) → 1–s.  It is also clear that the first derivative f'(x) is always positive and monotonically decreasing.  Newton's Method will therefore always converge from below the solution, but will not converge from starting points above the point where the tangent to the curve passes through the origin.

A single Newton step improves an approximation a to a better approximation a' by solving the linearised equation:

f(a) + h*f'(a) = 0

h = -f(a) / f'(a)

a' = a + h

I applied Newton's Method to MDLM's last 19 games, with the same initial approximation as in the previous article.  The results are shown below:

  Rating         a             f(a)        f'(a)         h           a+h
2149.349349  236250.8463  -0.030375754  5.99774E-07  50645.35470  286896.201
2183.089919  286896.2010  -0.004116624  4.48843E-07  9171.630091  296067.8311
2188.556489  296067.8311  -9.66883E-05  4.28023E-07  225.8949878  296293.7261
2188.688982  296293.7261  -5.57165E-08  4.27530E-07  0.130321747  296293.8564
2188.689059  296293.8564  -1.75415E-14  4.27530E-07  4.10299E-08  296293.8564

The final result is precisely the same as that obtained in the previous article, Rating by Maximum Likelihood, which is not entirely unexpected, despite the very different methods of calculation.
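The iteration above can be sketched in Java. This is a hypothetical reconstruction, not the article's code: the opponent ratings in the test are invented (the actual 19 ratings are not listed here), and the stopping rule h/a < 1/400 gives roughly one Elo point of accuracy because x = 10^(R/400):

```java
public class NewtonRating {
    // f(x) = (1/N) * sum_i x/(x + xi) - s
    static double f(double x, double[] xi, double s) {
        double sum = 0.0;
        for (double v : xi) sum += x / (x + v);
        return sum / xi.length - s;
    }

    // f'(x) = (1/N) * sum_i xi/(x + xi)^2
    static double fPrime(double x, double[] xi) {
        double sum = 0.0;
        for (double v : xi) sum += v / ((x + v) * (x + v));
        return sum / xi.length;
    }

    // Newton's method starting from a = 0, which always converges here
    // because f' is positive and monotonically decreasing.
    static double solve(double[] opponentRatings, double s) {
        double[] xi = new double[opponentRatings.length];
        for (int i = 0; i < xi.length; i++) {
            xi[i] = Math.pow(10.0, opponentRatings[i] / 400.0);
        }
        double a = 0.0;
        double h;
        do {
            h = -f(a, xi, s) / fPrime(a, xi);
            a += h;
        } while (h / a >= 1.0 / 400.0); // roughly one Elo point of accuracy
        return 400.0 * Math.log10(a);   // convert x back to a rating
    }
}
```

For a player scoring 75% against opponents all rated 2000, the solution is 2000 + 400*log10(3) ≈ 2190.8, since x/(x + X) = 3/4 exactly when x = 3X.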

[I provide Java code for a refinement of this algorithm that always converges from any starting point, and performs one more iteration than is necessary to achieve approximately one Elo point accuracy, in my later article Multi-User Monte Carlo.]

We can easily prove that the maximum likelihood and expected score methods are equivalent.  Changing the notation slightly, the previous result is:

Σi∈W {1/ (1 + 10^[(R-Ri)/400])} - Σi∈L {1/ (1 + 10^[(Ri-R)/400])} = 0

where W is the set of indices for the wins, and L is the set of indices for the losses.

In terms of x and xi, this becomes:

Σi∈W {xi / (xi + x)} = Σi∈L {x / (x + xi)}

Adding Σi∈W {x / (x + xi)} to both sides:

Σi∈W {xi / (xi + x)} + Σi∈W {x / (x + xi)} = Σi∈L {x / (x + xi)} + Σi∈W {x / (x + xi)}

Σi∈W {1} = Σi∈W∪L {x / (x + xi)}

The left-hand side is simply the number of wins, which is Ns, so this reduces to the expected score equation.

An important issue is that of approximating the expected score against N opponents with different ratings:

(1/N) * Σi x / (x + xi)

with the expected score against a single opponent:

x / (x + X)

When these two curves cross at the point (x, s), the constant parameter X (corresponding to the single opponent's rating) is given by:

X = x * (1-s)/s
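In rating terms, taking logarithms of X = x*(1-s)/s shows the single opponent's rating is R + 400*log10((1-s)/s). A one-line sketch (class name mine):

```java
public class EquivalentOpponent {
    // Rating of the single opponent whose expected-score curve crosses
    // the N-opponent curve at the point (x, s): X = x*(1-s)/s, which in
    // rating terms is R + 400*log10((1-s)/s).
    static double equivalentRating(double r, double s) {
        return r + 400.0 * Math.log10((1.0 - s) / s);
    }
}
```

At s = 0.5 the equivalent opponent's rating equals the player's rating.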

The graph below shows the two curves for the MDLM data with a crossing point at s = 0.5 (the approximate curve is in green):

(N.B. For clarity, the graph shows the rating equivalent of x.)  For this data, the approximation leaves much to be desired at higher ratings.

This approximation can be used as the basis of an iteration for solving f(x) = 0.  The idea is to use the Elo formula for a single opponent to find out how many rating points stronger the player would need to be to bring his calculated score up to the target score, and to add these points to his estimated rating.  For the MDLM data, this iteration converges linearly, reducing the error by a factor of just over 4 at each step.
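The iteration described can be sketched as follows (a hypothetical implementation; the starting rating, step count, and sample data are my own):

```java
public class ScoreIteration {
    // Expected score of a player rated r against a set of opponents.
    static double expected(double r, double[] opponents) {
        double sum = 0.0;
        for (double o : opponents) {
            sum += 1.0 / (1.0 + Math.pow(10.0, (o - r) / 400.0));
        }
        return sum / opponents.length;
    }

    // At each step, treat the field as a single opponent and add the
    // rating difference needed to raise the expected score p to the
    // target score s: R' = R + 400*log10((s/(1-s)) * ((1-p)/p)).
    static double solve(double[] opponents, double s, int steps) {
        double r = 1000.0; // arbitrary starting rating (assumption)
        for (int i = 0; i < steps; i++) {
            double p = expected(r, opponents);
            r += 400.0 * Math.log10((s / (1.0 - s)) * ((1.0 - p) / p));
        }
        return r;
    }
}
```

When the opponents all have the same rating the single-opponent approximation is exact and the iteration lands on the answer in one step; with spread-out ratings it converges linearly.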


  1. Sorry, I need to read your former posts more carefully:

    f(x) = (1/N) * Σi x / (x + xi) – s

    Differentiating with respect to x gives:

    f'(x) = (1/N) Σi xi / (x + xi)^2

    f(x) == 0 ?
    The x's seem to be different: the x in s looks like x_real or x_final, and the x in f – s is x_iteration ?

  2. f(x) is defined for any positive real number x. s is a constant real number such that 0 < s < 1. f(x) = 0 is an equation with one solution. I have not given this solution a name, but we could call it x_solution, so that f(x_solution) = 0.

    a is an initial approximation to x_solution. Newton's method always converges from a = 0 here. a' is a better approximation to x_solution. If we set a to a', the next value of a' is an even better approximation. Carrying on in this way, the limiting value of a and a' is x_solution.


  3. A reasonable algorithm here is:

    a := 0
    repeat
      h := -f(a)/f'(a)
      a := a + h
    until h/a < 1/400

    This algorithm does one iteration more than is needed to give 1 Elo point accuracy.

    1. I copy-pasted the wrong sequence:

      s = (1/N) * Σi x / (x + xi)


      f(x) = (1/N) * Σi x / (x + xi) – s

      so f(x)= (1/N) * Σi x / (x + xi) – (1/N) * Σi x / (x + xi) ==0

      But I think I understand:
      the x in s is x_to_find.

  4. I have amended the text slightly to make it less confusing. s is just a constant here, not the function s(x) = (1/N) * Σi x / (x + xi). Given s, we just have an equation for x.

    I also noticed that I had two signs round the wrong way in the accompanying article, and have fixed that and corrected my explanation.