The (Maybe) Big Lottery

Cooperation versus defection

The Luring Lottery

Several decades ago, the magazine Scientific American ran an experiment it called The Luring Lottery. The magazine would set aside up to one million dollars as prize money for a randomly chosen winner.

In order to participate, readers would have to mail in a postcard with the number of entries they wished to submit. The winner would be selected randomly from among these entries. A reader who submitted a postcard with a 2 would be twice as likely to win as one who submitted a 1.

But there was a catch…

The amount given to the winner would be calculated with this formula:

$1,000,000 ÷ (the total number of entries)

So, say one hundred people sent in postcards, each bearing the number 1. Each would have a 1% chance of winning $10,000 (a million dollars divided by 100 entries). Yet if all of those people tried to increase their chances by submitting the number 2 instead, no one's odds would improve (everyone would still stand a 1% chance), but the winner would now get only $5,000 (a million dollars divided by 200 entries).

And if one of them really tried to game it by submitting the number 1,000,000, then the winner would get less than $1 (since the one million dollars was now being divided by just over one million entries).
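To make the arithmetic concrete, here is a minimal Python sketch of the payout formula. The function name and scenarios are my own illustration, not anything from the original experiment:

```python
PRIZE_POOL = 1_000_000  # dollars set aside by the magazine

def payout_and_odds(entries):
    """Winner's payout and each reader's chance of winning,
    given the number of entries on each postcard."""
    total = sum(entries)
    payout = PRIZE_POOL / total
    odds = [e / total for e in entries]
    return payout, odds

# 100 cooperators, one entry each: $10,000 prize, 1% odds apiece.
payout, odds = payout_and_odds([1] * 100)
print(payout, odds[0])        # 10000.0 0.01

# The same 100 readers all submit 2: odds unchanged, prize halved.
payout, odds = payout_and_odds([2] * 100)
print(payout, odds[0])        # 5000.0 0.01

# 99 cooperators and one defector submitting 1,000,000 entries.
payout, _ = payout_and_odds([1] * 99 + [1_000_000])
print(f"{payout:.4f}")        # 0.9999 -- less than a dollar
```

Note how the last scenario works: the defector's odds approach 100%, but a single defection collapses the payout to pocket change.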

I’m not making this up. This lottery actually took place.

Defection vs. Cooperation

The Luring Lottery is a fantastic example of game theory at play. One choice is cooperation. If everyone cooperated by limiting their entries to relatively low numbers, each would forgo the chance to boost their own odds but sustain the overall amount paid out. Pure cooperation guarantees that someone wins something meaningful, even if that someone probably isn't you.

But humans aren’t cooperative. They’re greedy. Their other choice is defection. The Scrooge who submitted 1,000,000 entries in my example above is a defector. If everyone defects, no one greatly increases their chances of winning relative to anyone else. But the prize money shrinks to zero.

More concerning still: even if everyone else cooperates, a single defector does significant damage to the prize on their own.

What do you suppose happened when Scientific American ran this test? Shoot me an email and let me know what you would have done in this situation. What actually occurred is described at the end of this article.

Get Out The Vote

We face these types of cooperation versus defection decisions today. A classic example is voting in elections (a timely conversation in the United States). Sure, the stakes are high. But how much does my vote actually matter? If I sit home on election day (or fail to mail in my ballot ahead of time), it won’t change the outcome. I may as well defect.

Yes, but by that logic, everyone else should defect too. And in this case, by defecting, not only are you removing yourself from the equation, but you’re actually strengthening the contribution of your political opponents to the outcome.

That’s a double whammy. Failing to vote both hurts your preferred candidate by removing your contribution to the numerator (their share of the vote), and it also hurts your preferred candidate by removing your contribution to the denominator (the total number of votes cast), thus strengthening the contribution of everyone else (including those voting for the other candidate).

This isn’t quite the same as the Luring Lottery. Here, defecting means removing your entries rather than raising them astronomically. So let’s find a better analogy.

Machines Learning From Machines

A more apt parallel comes to mind, touching (of course) on artificial intelligence.

AI is trained on data taken from the internet. (If you missed my article on “How AI Works” for people with no technical background, check it out here.) And as people write their content more and more using AI, that content is published back on the internet. (This article, for one, was not written using AI.)

Now we’ve reached an interesting recursive process. The internet trains AI models. AI models output content. That content is published on the internet. It’s used to train AI models. And on and on we go.

This recursive feedback loop is commonly called "model collapse"; you may also see the underlying problem described as "data contamination" or "dataset pollution."

What does that have to do with The Luring Lottery? Everything.

So long as everyone “cooperates,” using AI to help their creative processes and efficiency without polluting the training pool, all is good. The dataset remains pure (i.e. human-made). The output remains high quality.

But again, people are greedy. And they will defect. We’re already seeing oceans of AI-generated content polluting social media, news, and even academic papers.

Over time, the proportion of the training pool made up of genuine human content shrinks. The models begin learning from their own output.
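The dilution can be sketched with a toy model. The assumptions here are entirely mine (a fixed stock of human-written text, plus AI-generated text growing the pool by a constant fraction each generation), so treat the numbers as illustrative, not measured:

```python
def human_share(generations, growth=0.2):
    """Fraction of the training pool that is human-written after
    `generations` rounds of AI content being published back.
    Assumes a fixed human stock and 20% pool growth per round."""
    human = pool = 1.0
    for _ in range(generations):
        pool += growth * pool  # AI output added this generation
    return human / pool

for g in (0, 5, 10, 20):
    print(f"after {g:2d} generations, human share = {human_share(g):.1%}")
```

Even with modest growth per round, the human share decays geometrically toward zero.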

You may think this insignificant, but it’s tremendously important for the future of not only technology but humankind’s general reliance on it. Large Language Models (ChatGPT, Gemini, etc.) are useful in that they mimic the behavior of human beings. As human beings defect in the way described above, these machines will transition, instead, to mimicking the behavior of machines.

Picture a photocopier. Draw a picture, make a copy, and it will look pretty good. Take that copy and make a copy of it, and the quality will decrease a bit. Do that again, and again, and again, and eventually, you’re looking at garbage.

This is how the game of telephone works. Imperfect transmission compounds over time. The output gets worse.
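The compounding itself can be made concrete with an even simpler toy model, again under my own assumption that each generation of copying retains a fixed fraction of the previous generation's fidelity:

```python
def fidelity_after(generations, retention=0.9):
    """Fidelity of the nth-generation copy, with the original at 1.0.
    Assumes each copy retains a fixed fraction of the one before it."""
    return retention ** generations

for gen in (1, 5, 10, 50):
    print(f"copy {gen:2d}: fidelity {fidelity_after(gen):.3f}")
```

Even at 90% retention per copy, by generation 50 less than 1% of the original signal survives: the photocopier's garbage stage.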

The irony rings as loudly here as it does in the Luring Lottery. The more reliant we become on the technology, the worse it will get. The less reliant we become, the better it will remain.

A classic lose-lose. One that Joseph Heller (the author of Catch-22) would certainly have appreciated (and been saddened by) were he alive today.

"That's some catch, that Catch-22," he observed.
"It's the best there is," Doc Daneeka agreed.

Joseph Heller

What About The Lottery?

Back to the 1980s, when Scientific American ran that Luring Lottery experiment…

A few readers submitted numbers that were absurdly and unrealistically large (numbers like googol, which is 1 with 100 zeroes after it, or googolplex, which is 1 with googol zeroes after it). Not only did it become impossible to determine a winner, but even if a winner had been declared, their winnings would have been microscopically small.

What relief the editors of Scientific American must have felt that humans were greedy after all. Otherwise, they would have had to mail off a sizable check.