Thursday 1 November 2012

Another Year of the Rabbit

The Empirical Rabbit is two years old this month, and it is time to summarise the highlights of the year.  Unlike last year, this is not the Chinese year of the rabbit; it is the year of the dragon.

Nonetheless, every year is a year of the rabbit on this site!

My tactics training experiments continued with The Blue Coakley Experiment.  In this experiment, I found that my distribution of correct solution times was a close fit to an exponential distribution, see An Important Discovery.  I subsequently found that my distribution of (correct and incorrect) solution times closely fitted another exponential distribution, see A Three Parameter Model.  I got the same result in all my other tactics training experiments.  I still do not fully understand why.  I had some further interesting tactics training results to report in Basic Tactics Revision Results and Susan Polgar + Ivashchenko Revision.
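
By way of illustration, here is a minimal sketch of such a fit in Python.  The solution times below are made up for illustration, not data from my experiments; for an exponential distribution, the maximum-likelihood estimate of the rate is simply the reciprocal of the sample mean.

    import numpy as np
    from scipy import stats

    # Made-up correct-solution times in seconds (illustrative only).
    times = np.array([4.2, 7.9, 3.1, 12.5, 6.0, 2.8, 9.4, 5.5, 15.1, 4.8])

    # Maximum-likelihood fit: the exponential rate is 1 / sample mean.
    rate = 1.0 / times.mean()

    # Rough goodness-of-fit check: Kolmogorov-Smirnov test against the
    # fitted exponential.  (Fitting and testing on the same data biases
    # the p-value upwards, so treat this as indicative only.)
    d_stat, p_value = stats.kstest(times, "expon", args=(0, times.mean()))
    print(f"rate = {rate:.3f} per second, KS statistic = {d_stat:.3f}, p = {p_value:.3f}")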

I became interested in the hypothesis that the rating of a human chess player increases by K rating points for each doubling of his thinking time, as a computer chess program's does.  On this assumption, I derived a simple relationship between a player's thinking time and his score in Rating, Time and Score.
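
In symbols, the hypothesis is that R(t) = R0 + K log2(t/t0); feeding the resulting rating difference into the standard Elo expected-score formula then links a time ratio to a score.  A minimal sketch (the value K = 100 is purely illustrative):

    import math

    def expected_score(rating_diff):
        """Standard Elo expected score for a player rated rating_diff
        points above the opponent."""
        return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

    def score_for_time_ratio(time_ratio, k=100.0):
        """Expected score for a player who thinks time_ratio times longer
        than an otherwise equal opponent, assuming each doubling of
        thinking time is worth k rating points."""
        return expected_score(k * math.log2(time_ratio))

    # Thinking twice as long with K = 100 is worth 100 rating points,
    # i.e. roughly a 64% expected score.
    print(score_for_time_ratio(2.0))  # about 0.64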

I looked at Michael de la Maza's results on the USCF website, and found that his final rating performance was higher than his official rating suggests, see Michael de la Maza Statistics.  I compared MDLM's results with those of the youngest player ever to reach a USCF rating of 2200, see Samuel Sevian Statistics.  These results made me very suspicious, see Michael de la Maza - the Verdict?.  I later noticed that a more sophisticated rating calculation gave MDLM an even higher final rating performance, see Rating by Maximum Likelihood.  I also discovered that this well-researched calculation is equivalent to a simpler and perhaps more intuitive one, see Rating by Expected Score.
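
The equivalence can be made concrete: under the logistic Elo model, the maximum-likelihood performance rating is exactly the rating at which the summed expected scores equal the actual score, so it can be found by simple bisection.  A sketch with invented opponent ratings (not MDLM's actual results):

    def expected_score(rating, opponent):
        return 1.0 / (1.0 + 10.0 ** ((opponent - rating) / 400.0))

    def performance_rating(opponents, total_score):
        """Find the rating at which the summed expected score against
        these opponents equals the actual total score.  Under the
        logistic Elo model this is the maximum-likelihood estimate.
        Assumes 0 < total_score < len(opponents)."""
        lo, hi = min(opponents) - 800.0, max(opponents) + 800.0
        for _ in range(100):
            mid = (lo + hi) / 2.0
            if sum(expected_score(mid, o) for o in opponents) < total_score:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2.0

    # Invented example: a score of 3.5/4 against four opponents.
    print(round(performance_rating([1900, 1950, 2000, 2050], 3.5)))  # about 2320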

I looked at the rating methods used by the popular chess problem servers, and found that none of them had a sound statistical basis, see Rethinking Chess Problem Server Ratings.  I suggested some improvements, and subsequently developed these ideas in Problem Server Ratings Revisited.  I eventually settled on a very simple and statistically sound rating method, see A Simpler Server Rating Method.  I further validated the mathematical basis of this method in Rating, Time and Score Revisited.  I also carried out some Monte Carlo simulations, see Simulating Solution Times and Scores, Multi-User Monte Carlo, and Varying K.  I presented my conclusions in More Rating, Time and Score, and Finding K.
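
For a flavour of what such a simulation looks like, here is a minimal single-user sketch with invented parameters (an 1800-rated solver, problems rated uniformly between 1600 and 2000, and exponential solution times with a 30-second mean); it is not the simulation code from those articles:

    import random

    def expected_score(rating, problem):
        return 1.0 / (1.0 + 10.0 ** ((problem - rating) / 400.0))

    def simulate(n_attempts=10_000, user_rating=1800.0, mean_time=30.0, seed=1):
        """Each attempt: draw a problem rating, decide success with the
        Elo probability, and draw a solution time from an exponential."""
        rng = random.Random(seed)
        successes, total_time = 0, 0.0
        for _ in range(n_attempts):
            problem = rng.uniform(1600.0, 2000.0)
            if rng.random() < expected_score(user_rating, problem):
                successes += 1
            total_time += rng.expovariate(1.0 / mean_time)
        return successes / n_attempts, total_time / n_attempts

    # By symmetry the mean score comes out near 0.5, mean time near 30 s.
    print(simulate())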

A full list of this year's articles, including my reviews of other training material, can be found in Contents.

2 comments:

  1. Great blog! Nice summation of your contributions. This post offers a fine entry point for those of us who wish to go back through your archive following themes. Now, it's time to stop typing and instead review your suggestions for chess ratings on tactics training sites, such as Chess Tempo.

    Keep up the good work.

  2. Thank you very much for the encouragement.
