## Tuesday, 1 March 2011

### The Bain Experiment

The purpose of this experiment was to test Dan Heisman’s basic tactics training ideas, using the problems from John Bain’s Chess Tactics for Students.  In this experiment, I not only greatly improved at solving the problems that I was practicing, but also at solving problems that I had never seen before.  Bain's book has 14 chapters:

| Chapter | Problems | Motif |
|---------|----------|-------|
| 1 | 2-31 | Pins |
| 2 | 33-62 | Back Rank |
| 3 | 64-93 | Knight Forks |
| 4 | 95-124 | Double Attack |
| 5 | 126-155 | Discovered Checks |
| 6 | 157-186 | Double Checks |
| 7 | 188-217 | Discovered Attacks |
| 8 | 219-248 | Skewers |
| 9 | 250-279 | Double Threats |
| 10 | 281-310 | Promoting Pawns |
| 11 | 312-341 | Removing the Guard |
| 12 | 343-372 | Perpetual Check |
| 13 | 374-403 | Zugzwang |
| 14 | 405-434 | Identifying Tactics |

(The first diagram in each chapter is an unnumbered illustration that duplicates the chapter’s first problem, which is why problems 1, 32, 63 and so on do not appear in the table.)  The book is formatted as a workbook.  Beneath each diagram, the book says whose move it is, and gives a hint for finding the solution.  There are no solutions as such, but the hints are so detailed that a solution is hardly necessary.  The reverse side of each diagram is set aside for writing the solution, so I was able to cut out the diagrams along with their hints, doing only minor damage to some of the hints on the other side.  I folded back the hints so that they were not visible, and marked each diagram with W+, W=, W#, B+, B=, or B#, according to the result required, taking care not to look at the diagrams. (This caused some problems.  There are a few cases where the book says e.g. “win a rook”, and it is possible to win material in another way.  I gave myself the benefit of the doubt in these cases.)

The book says that the problems within each chapter are in order of difficulty.  I constructed six batches A-F of problems from Chapters 1 to 13.  Batch A consisted of problems 2, 8, 14, 20, 26..., plus problems 33, 39, 45, 51, 57..., and so on.  Batch B consisted of problems 3, 9, 15, 21, 27..., plus problems 34, 40, 46, 52, 58..., and so on. The remaining batches were constructed in the same way.  This method ensured that, as nearly as possible, each batch had the same number of problems with the same level of difficulty from each chapter.  If the book’s ordering by level of difficulty were perfect, there would be a very slight increase in difficulty from batch to batch.  Any remaining variation in difficulty between the batches can reasonably be ascribed to random factors.

I set Chapter 14 aside, because sorting the problems by motif would have given me prior knowledge, and I had a suspicion (which seems to be right) that they are duplicates of problems from previous chapters.  I thoroughly shuffled each batch - which was almost entirely ineffective - so I scrambled the order by sorting them into two piles and then into three.  I discarded problem 26, because it worked only if the opposing side blundered; and problem 95, because it was a complete dud.
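In code, the batch construction looks something like this (a Python sketch; the chapter ranges are taken from the table above, while the function and variable names are illustrative).  Every sixth problem of each chapter is dealt into the same batch, so each batch gets the same spread of difficulty from each chapter:

```python
# First and last problem numbers of Chapters 1-13, from the table above.
CHAPTERS = [(2, 31), (33, 62), (64, 93), (95, 124), (126, 155), (157, 186),
            (188, 217), (219, 248), (250, 279), (281, 310), (312, 341),
            (343, 372), (374, 403)]

def make_batches(chapters, n_batches=6):
    """Deal every n_batches-th problem of each chapter into the same batch."""
    batches = [[] for _ in range(n_batches)]
    for first, last in chapters:
        for offset, problem in enumerate(range(first, last + 1)):
            batches[offset % n_batches].append(problem)
    return batches

batches = make_batches(CHAPTERS)
print(batches[0][:5])   # Batch A begins 2, 8, 14, 20, 26, ...
print(batches[1][:10])  # Batch B begins 3, 9, 15, 21, 27, then 34, 40, ...
```

With thirty problems per chapter and six batches, each batch gets exactly five problems from each chapter (sixty-five in all, before discarding the two defective problems).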

The early part of my schedule was:

Day 1: A+B, A+B
Day 2: A+B, C+D
Day 4: A+B, C+D
Day 6: C+D
Day 8: A+B, C+D
Day 9: E+F

The first four repetition intervals for batches A+B were ½ day, 1 day, 2 days and 4 days, and the first three repetition intervals for batches C+D were 2 days, 2 days and 2 days.  I measured the time taken to solve each problem with a stopwatch, rounding the times to the nearest tenth of a second.  In the diagrams below, I counted any incorrect solution as taking more than 30 seconds, whatever the actual time spent.  The cumulative distributions of solution times for the first four passes through batches A+B were:

I was clearly faster at the outset than Heisman’s typical student, and improved more rapidly.  However, a large proportion of Bain’s problems were either simple examples from Reinfeld, or simplified versions of more complicated ones, so I had an unfair advantage here!  The histogram of solution times for the first five passes through A+B was:

Note the very rapid initial improvement from Pass 1 to Pass 2, which was carried out on the same day.  I got 83% in under 5 seconds on the fifth pass.  (N.B.  For the diagrams in this section, 0-5 stands for 0-4.9 seconds, and similarly for the other “buckets”.)  The histogram of solution times for the first four passes through C+D was:

Note the slow initial progress with the two-day repetition intervals.  Most of my difficulty was with D rather than C, so I did a quick untimed pass of D, before my first pass of E+F, to reduce my times for C+D closer to those of A+B.  (I had hoped that the experiment would show an improvement on my first pass from A to B, from C to D, and from E to F. An improvement was observed from A to B, but I did worse on D than C and on F than E, so I do not believe that we can draw any conclusions here.)

Both the ½-day, 1-day, 2-day, 4-day repetition intervals and the 2-day, 2-day, 2-day repetition intervals worked well here.  The intervals of 1 day, 2 days and 4 days used in the Reinfeld Experiment should give much the same results as 2 days, 2 days, 2 days (see the earlier article on that experiment).  It is not clear whether the additional repetition at ½ day would still have a significant benefit after many more repetitions at progressively doubling intervals.

Not surprisingly, I improved at the problems that I was practicing - but what about problems that I had never seen before?  Here is a histogram of solution times for my first passes through A+B, C+D and E+F:

I was astonished!  On my first pass through E+F, I got over 85% in 15 seconds, and very nearly 50% in 5 seconds.  What was going on?  One theory is that I simply got faster as a result of practice.  Another is that I learned new patterns by repeatedly solving A+B that helped me with C+D, and that repeatedly solving C+D taught me still more patterns, that helped me further with E+F.  What does the data have to say?

The simplest hypothesis is that the solution times were all reduced by a common factor.  I found that dividing all the solution times for my first pass through A+B by 1.3 gave the closest match to the solution times for my first pass through C+D.  Similarly, dividing all the solution times for my first pass through A+B by 2.6 gave the closest match to those on my first pass through E+F.  (I used the method of least squares here.)  The histogram below compares the counts in each “bucket” for my real passes through C+D and E+F with those simulated by dividing the solution times of A+B:

The fit is about as good as it could be, given the statistical variability.  It is remarkable that my first pass of E+F was 2.6 times faster than my first pass of A+B.
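The common-factor fit can be reconstructed along these lines.  This is an illustrative sketch with made-up times, not my actual data, and a simple grid search over candidate factors stands in for the least-squares fit:

```python
def buckets(times, cap=30.0, width=5.0):
    """Count solution times into 5-second buckets, with a final '>cap' bucket."""
    counts = [0] * (int(cap / width) + 1)
    for t in times:
        counts[min(int(t // width), len(counts) - 1)] += 1
    return counts

def best_factor(base_times, target_times, candidates):
    """Divide every base time by a candidate factor, re-bucket, and pick the
    factor whose bucket counts best match the target (least squares)."""
    target = buckets(target_times)
    def sq_error(k):
        scaled = buckets([t / k for t in base_times])
        return sum((a - b) ** 2 for a, b in zip(scaled, target))
    return min(candidates, key=sq_error)

# Hypothetical times: the "target" pass is exactly twice as fast.
base = [4.0, 8.0, 12.0, 20.0, 28.0, 33.0]
target = [t / 2 for t in base]
k = best_factor(base, target, [x / 10 for x in range(10, 41)])
print(k)  # close to 2, though coarse buckets blur the exact value
```

With only six times and 5-second buckets, several neighbouring factors can fit equally well; with a few hundred problems per batch, as in the experiment, the fit is much sharper.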

What about the pattern matching theory?  Imagine that x% of the problems in A+B are duplicated in C+D, and that any internal duplication within C+D is at the same level as that in A+B.  On the first pass of C+D, I will have solved x% of the problems three times already, and the remainder will be new to me.  I can approximate my performance on the x% by using the histogram for the third pass of A+B.  My performance on the remaining problems within C+D can be approximated by the histogram for my first pass through A+B.  I used the method of least squares to find the value of x% which made this approximation as close as possible to the histogram for my first pass through C+D.  The best fit was with x% = 25%.  I also approximated my first pass through E+F using the histograms for my first and fifth passes through A+B - the best fit was obtained with x% = 48% - which is almost exactly twice the value for C+D, as it should be if duplicates are equally distributed throughout the batches.  The histogram below compares my real passes through C+D and E+F with the approximated ones:

Again, the fit is good.  Bain has many problems that are the same as another problem within the book, but with one less move at the beginning.  If I tackle the harder problem first, the easier one should show up as a duplicate, but if I tackle the easier one first, the harder one may not show up as a duplicate.
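The duplication model can be sketched in the same way.  The bucket counts below are invented for illustration, and the "observed" histogram is constructed as an exact 25% blend, so the fit recovers x = 0.25; with real data, the residual error would be non-zero:

```python
def best_mix(first_pass, later_pass, observed):
    """All arguments are bucket counts with the same total.  Model the observed
    histogram as x * later_pass + (1 - x) * first_pass, and return the x in
    [0, 1] minimising the squared error of the blend."""
    def sq_error(x):
        blend = [x * s + (1 - x) * f for f, s in zip(first_pass, later_pass)]
        return sum((b - o) ** 2 for b, o in zip(blend, observed))
    candidates = [i / 100 for i in range(101)]
    return min(candidates, key=sq_error)

# Hypothetical bucket counts (0-5 s, 5-10 s, ... as in the text).
first_ab = [20, 25, 30, 15, 10]   # first pass of A+B: mostly slower
third_ab = [70, 20, 7, 2, 1]      # third pass of A+B: mostly fast
observed_cd = [x * 0.25 + y * 0.75 for x, y in zip(third_ab, first_ab)]
print(best_mix(first_ab, third_ab, observed_cd))  # → 0.25
```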

What conclusions can we draw?  The data is consistent with my matching patterns that I already knew 2.6 times faster.  Since Bain’s problems were either very simple, or examples from Reinfeld that I already knew, it is possible that I already knew all the patterns, and had just become faster at finding them.  However, I am sure that I am not 2.6 times faster at solving all tactics problems at this level of complexity.  I believe that most of my improvement was pattern specific.  This interpretation is supported by the fact that my improvement in going from my first pass of A+B to my first pass of E+F was almost exactly twice my improvement in going from my first pass of A+B to my first pass of C+D.  (N.B. If the experiment had been carried out on a set of problems that had the same statistical profile as simple tactics in real games, my improvement would be real, whether or not it resulted from pattern duplication.)  It could be objected that, despite my best efforts, E+F might be easier than C+D, which might in turn be easier than A+B.  It is not possible to completely eliminate possibilities like this from a single-player experiment. Please feel free to repeat the experiment with the batches in the reverse order!

For an update, see my later article: Basic Tactics Revision.

1. From your description, it looks like you're doing this on paper; how are you finding that? For the timing, are you using a stopwatch?

2. A stopwatch is not too bad, but I have now written a little Java program to do the timing and record the result, which is much better.

1. Would you mind sharing the Java program? posting it some place on your site perhaps? You might be able to collect more data that way too as people use it.

2. I will publish the source code next month.

3. My questions now are:
- how specific are the learned patterns? Maybe they are all "very" related, so the benefit at OTB might be "too small"
- how many patterns (tactical patterns) are there? Heisman is talking about 2,000, but this depends on how sharply or unsharply you look at differences
- how quickly are these patterns forgotten? If there are 2,000 (20,000?) patterns to learn, and it takes 6 months (6 years) to learn them, but you forget them in 3 months (3 years)...

Would be interesting to know whether you are now better at CTS, ATS, TT, CT or OTB.

4. There are four questions here:

(1). How specific are the patterns that need to be learned?

(2). How many patterns do we need to learn?

(3). How quickly are these patterns forgotten?

(4). Have I improved at practical chess?

If we sample more patterns in our learning set, we will find more patterns that are more alike, so the answer to question (1) depends on the answer to question (2).

The number of patterns to be learned depends on where we draw the line on how simple and common the patterns have to be. The more complicated examples are likely to be less common, and can usually be broken down into simpler patterns. It is worthwhile to learn to solve the simple and common patterns almost on sight, but not the complicated and uncommon ones. The 2,000 number is just a guess.

You will forget most of the patterns that you learn in a few weeks unless you practice using them. You will remember nearly all of the patterns for the rest of your life, if you practice using them at intervals which roughly double with each repetition.

I have not played chess for 15 years, but it is very difficult to compare OTB performances unless you play a very large number of games against reliably rated opponents. If you are doing that, your improvement might be a result of playing chess rather than the training.

5. I felt I got better after doing this. I don't have proof but I felt like I was recognizing more tactics in my games.