[Day 171] Lying with statistics: why Allan Lichtman’s predictions aren’t that good

One of my favorite sayings is the one popularized by Mark Twain and frequently (probably wrongly) attributed to the late British Prime Minister Benjamin Disraeli: “There are three kinds of lies: lies, damned lies, and statistics.” I get slightly annoyed when reputable newspapers use statistics to manipulate readers.

This afternoon, I saw this headline in the Washington Post:

[Screenshot: the Washington Post headline]

My first thought was: “Wow, 30 years. That’s a serious track record of success.” But then I thought, wait a second: presidential elections happen only once every 4 years, so in a span of 30 years he can make predictions for at most 8 of them. Because of the totally messed up nature of American democracy, presidential elections boil down to choosing between 2 candidates from 2 major political parties, so if you pick at random you have a 50% chance of getting any given election right. The odds of getting 8 presidential elections right by guessing are (1/2)^8, or about 0.4%. That means that if people chose winners at random, about 4 out of every 1,000 of them would get the results of 8 consecutive elections right.
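
A quick sanity check of that arithmetic (a tiny Python snippet, nothing Lichtman-specific):

```python
# Chance of guessing 8 consecutive two-way elections correctly at random
p_guess = 0.5 ** 8
print(p_guess)                 # 0.00390625, i.e. about 0.4%
print(round(1000 * p_guess))   # about 4 people out of every 1,000
```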

If we assume that you must be very well educated to get the results of 8 elections right (say, you must have a doctorate to do so), then all of those lucky guessers fall inside the roughly 1.77% of people in the US who hold a doctorate, and the chance of getting it right given that you have a doctorate is 0.4/1.77, or about 22.6%. In other words, roughly 1 out of every 4.5 people with a PhD could predict the results of 8 consecutive elections correctly just by guessing.
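
That step is just a conditional probability under the (deliberately silly) assumption that every lucky guesser holds a doctorate; a quick check, using the 1.77% figure from above:

```python
# P(gets all 8 right | holds a doctorate), assuming every lucky guesser
# falls inside the 1.77% of Americans who hold a doctorate
p_right = 0.5 ** 8          # exact value ~0.39% (rounded to 0.4% in the post)
p_doctorate = 0.0177
p_right_given_doc = p_right / p_doctorate
print(round(p_right_given_doc, 3))      # 0.221, roughly 22% (22.6% if you use the rounded 0.4%)
print(round(1 / p_right_given_doc, 1))  # about 1 in 4.5 doctorate holders
```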

I looked into Professor Lichtman’s method of prediction. He worked out a system of 13 true/false statements. In his words, “an answer of true on these true/false questions always favors the reelection of the party in power.” When five or fewer keys are false, the incumbent party wins. When six or more are false, the incumbent party loses the presidency. His 13 key statements are listed below (a minimal sketch of the decision rule follows the list):

  1. (Party Mandate) After the midterm elections, the incumbent party holds more seats in the U.S. House of Representatives than it did after the previous midterm elections.
  2. (Contest) There is no serious contest for the incumbent party nomination.
  3. (Incumbency) The incumbent-party candidate is the sitting president.
  4. (Third-party) There is no significant third party or independent campaign.
  5. (Short-term economy) The economy is not in recession during the election campaign.
  6. (Long-term economy) Real per-capita economic growth during the term equals or exceeds mean growth during the previous two terms.
  7. (Policy change) The incumbent administration effects major changes in national policy.
  8. (Social unrest) There is no sustained social unrest during the term.
  9. (Scandal) The incumbent administration is untainted by major scandal.
  10. (Foreign/military failure) The incumbent administration suffers no major failure in foreign or military affairs.
  11. (Foreign/military success) The incumbent administration achieves a major success in foreign or military affairs.
  12. (Incumbent charisma) The incumbent party candidate is charismatic or a national hero.
  13. (Challenger charisma) The challenging party candidate is not charismatic or a national hero.
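
To make the verdict rule concrete, here is a minimal sketch of how a prediction would be computed from the keys (the rule is Lichtman’s as described above; the function name and the example values are mine):

```python
# Lichtman's verdict rule: the incumbent party is predicted to hold the
# White House when five or fewer of the 13 keys are false.
def incumbent_party_wins(keys):
    """keys is a list of 13 booleans; True means the key favors the incumbent party."""
    assert len(keys) == 13
    num_false = sum(1 for key in keys if not key)
    return num_false <= 5

# Hypothetical example: 6 false keys, so the incumbent party is predicted to lose
print(incumbent_party_wins([True] * 7 + [False] * 6))   # False
```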

There are several reasons why I’m skeptical of this system.

Reason 1: It favors the challenging party instead of the incumbent party.

If we assume that the statements are independent of each other and that each has a 50% probability of being false, then the number of false statements is a random variable that follows a binomial distribution: X ~ Bin(13, 0.5)

[Figure: the probability mass function of X ~ Bin(13, 0.5)]

The incumbent party wins when X <= 5, and loses otherwise. We have:

[Figure: outcome counts for X <= 5 and X >= 6]

The probability that the incumbent party wins is P(X <= 5) = 2380/(2380 + 5812) = 2380/8192, or about 29%.

The probability that the challenging party wins is P(X >= 6) = 5812/(2380 + 5812) = 5812/8192, or about 71%.
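
Under the same independence-and-fair-coin assumptions, those two numbers are just binomial tail sums; a quick check:

```python
# Counts of key patterns for X ~ Bin(13, 0.5): out of 2^13 = 8192 equally
# likely patterns, how many have at most 5 false keys (incumbent wins)?
from math import comb

total = 2 ** 13
incumbent = sum(comb(13, k) for k in range(6))   # k = 0..5 false keys
challenger = total - incumbent                   # k = 6..13 false keys

print(incumbent, challenger)          # 2380 5812
print(round(incumbent / total, 2))    # 0.29
print(round(challenger / total, 2))   # 0.71
```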

The challenging party would have 2.5 times the incumbent party’s chance of winning, which doesn’t make intuitive sense at all. Historically, in the 39 presidential elections from 1860 to 2012, the incumbent party won 23 times, or 59% of the time.

Reason 2: His test data is too small and possibly biased

Since there is no public record of whether Lichtman actually made his predictions before each election, we will have to take his word at face value. He said:

“I’ve since used them prospectively to predict, often well ahead of time, the results of all eight elections from 1984 to 2012.”

The best-case scenario for Lichtman’s model is that he created the system once, some time before the 1984 election, and once every four years runs it on one new data point and reports the result, without further tuning the model. In this case, the test data is truly “unseen,” but it also amounts to only 8 data points. As I pointed out above, 1 out of every 4.5 people with a PhD could match that record just by guessing.

The more likely scenario is that Lichtman created the system sometime before 1984 but, after each prediction, tuned the parameters: changing the statements or changing the threshold on the number of false keys. In machine learning terms, he is testing his model on its own training data. This gives the model high training accuracy (in this case, 8 out of 8), but it also makes the model terrible at generalizing (in this case, at predicting future elections).
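
To see why scoring a model on its own training data is so misleading with only 8 data points, here is a toy sketch (my illustration, not Lichtman’s actual procedure): a “model” that simply memorizes 8 elections made of pure noise still scores 8 out of 8 on them, while doing no better than a coin flip on elections it has never seen.

```python
# Toy overfitting demo: memorize 8 random "elections" (13 random keys,
# random winner), then compare training accuracy with unseen accuracy.
import random

random.seed(0)

def random_election():
    keys = tuple(random.randint(0, 1) for _ in range(13))
    winner = random.randint(0, 1)
    return keys, winner

train = [random_election() for _ in range(8)]

# The most extreme form of "tuning until everything fits": a lookup table
# of the training elections, with a random guess for anything else.
lookup = dict(train)
def predict(keys):
    return lookup.get(keys, random.randint(0, 1))

train_acc = sum(predict(k) == w for k, w in train) / len(train)
print("training accuracy:", train_acc)    # 1.0, a perfect "8 out of 8"

test = [random_election() for _ in range(10_000)]
test_acc = sum(predict(k) == w for k, w in test) / len(test)
print("accuracy on unseen elections:", round(test_acc, 2))   # ~0.5
```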

Long story short, meh.

 
