Proof of Humanity

How to be a good student in the age of AI

Cheater!

Check out this article, published earlier this week:

Now this one, published the very next day by a sophomore at Riverdale:

Same story, different coast. And there are many, many more examples of this.

In most high schools, in most cities across most countries of the world, nearly every student is now faced with a decision when tasked with a writing assignment:

  1. Use AI to write it and risk being caught

  2. Don’t use AI to write it and risk someone thinking you did

In another article of mine, I wrote about how much we need to rethink homework assignments in the age of AI. But that will only get us so far.

We (the collective “we”, meaning society) have inadvertently created one of the most perverse incentive structures in the history of education. It's scary to think that the brightest, hardest-working students may be the ones people will most often suspect of cheating. Either do what you’re supposed to do, write it yourself, and do it poorly relative to everyone else. Or do what you’re not supposed to do, cheat, just do it smartly, and take comfort in the fact that almost everyone else is doing it too.

I’m reminded of something that happened to me in 10th grade English. A couple of decades have since passed, but I can remember it clearly: We were due to have a vocabulary test for which I only half-studied. On the day of the exam, my teacher was out, and we had a substitute who was notorious for letting kids do whatever they wanted.

She handed out the test papers. And it took only a couple of minutes for two students in the class to start whispering answers to one another. When the substitute called them out, uncaringly, they pressed on. So she let it happen. She let it happen!

Of course, once the precedent was set, other students got involved. And before I knew it, almost everyone in the class was openly discussing the answers to the questions.

Except for me. I tried to be the good student and do the right thing. I was one of the only students—if not the actual only student—who didn’t listen to the others and who avoided cheating. Perhaps I wasn’t being “good” at all. Perhaps I simply assumed they would all get in trouble, while I would not. It’s hard to know after all this time.

And here’s what happened. The next day, our teacher returned to the class with the test papers in hand. And she spoke at length about how pleasantly surprised she was that (nearly) everyone did so well on the exam. What an improvement since the last test! My classmates were thrilled.

I got a C.

If that isn’t an analogy for being a student in the age of AI, I don’t know what is.

The Blame Game

So what is to be done?

One thing we should not do is blame students. Nor should we blame teachers. Nor should we blame administrators. Everyone is well-intentioned. Everyone is acting based on systemic incentives. Students just want to graduate with good grades, get into good colleges, and land good jobs. Teachers just want to care for their students and are scared of this new and strange world in which none of the old rules apply. Administrators are debating ways to enact helpful policies, except that each policy is a double-edged sword that might offer more downsides than upsides (consider, for instance, the policy enacted by half of all American school districts to blanket-ban AI in the classroom; talk about myopia).

Why blame anyone at all? Let’s instead focus on how we’re sending an entire generation of students the signal that hard work isn’t worth the effort. We’re all to blame for that one.

Nor is the problem limited to high schools, or to formal education for that matter. Consider the writer who, tomorrow, will be accused of having AI write their best-selling book. How to prove the counterfactual? A public ledger of Google Docs revision history, I suppose?

We're entering the era of:

  1. Humans train AI

  2. AI learns to mimic humans

  3. Humans start acting differently so they're not seen as AI

  4. Repeat

Proof of Humanity

Remember the concept in cryptocurrency called Proof of Work? It’s the mechanism by which a decentralized network reaches consensus that certain transactions indeed took place (because crypto has no overseeing central authority): participants must expend real computational effort to add a block of transactions, and anyone can cheaply verify that the effort was spent. You can think of Proof of Work as the way people can become convinced of the authenticity of transactions.
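For the curious, the core idea fits in a few lines. This is a minimal sketch, not a real blockchain; the hash function choice and difficulty level are illustrative. The "work" is a brute-force search for a nonce, and the verification is a single cheap hash check:

```python
import hashlib

def mine(data: str, difficulty: int = 4) -> int:
    """Search for a nonce such that sha256(data + nonce) begins with
    `difficulty` leading zero hex digits. This search is the "work"."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(data: str, nonce: int, difficulty: int = 4) -> bool:
    """Anyone can check the work with one hash -- no central authority."""
    digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = mine("block of transactions")
print(verify("block of transactions", nonce))  # True
```

The asymmetry is the point: finding the nonce takes (on average) tens of thousands of hash attempts at this toy difficulty, while checking it takes one. Proof that costly effort was expended is what the network trusts in place of an authority.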

With this in mind, I tried to imagine the future of education, of work, of art, of every other field that AI will impact. Will we know anymore what’s human and what’s not? Will it matter? What incentives will linger for those who hold onto doing things the old fashioned way? Will they be rewarded or punished for doing so?

If the currency of the last twenty years was information, the currency of the next twenty will be authenticity. Perhaps it’s an anthropocentric perspective (but humans are, after all, notorious for believing that they are the center of the universe): It will matter more than ever what was made by a human versus what was not.

We’ve gone through this transition before. Back in the day, goods were made by artisans; today they’re made by machines. Back in the day, work was bespoke; today it’s done in bulk. But nevertheless, you’d pay more for an original painting than a print or a counterfeit. You’d pay more for a car built by human hands than by robots. You’d pay more for… You get the idea.

We already pay a premium for authenticity. In an AI-dominated world, this distinction will become more important, not less. Commoditization of everything doesn’t mean that the origin of things won’t matter. On the contrary: we will only care more.

In the age of AI, the thing that will really matter is Proof of Humanity.

And we owe it to future generations to figure out how to reward that kid who doesn’t cheat. Because otherwise, that kid soon won’t exist.