When Social Media = AI...
1-9-90 in the Age of AI
The 1-9-90 Rule
Remember social media? That thing that began with the ability to “poke” your friends and eventually became a global pandemic (albeit a different pandemic than the one I discussed in my last post).
In the early days of web 2.0, when content went from pure broadcast to interactive, someone (no one knows who) identified an interesting pattern emerging on online platforms. I call it the 1-9-90 Rule, although I’ve also heard it called the 1% Rule (too ambiguous), the 90-9-1 Rule (doesn’t roll off the tongue) or even the 89-10-1 Rule (why?).
The Rule states the following: Given any online community, 1% of users will create new content. 9% of users will engage with that content. And the remaining 90% will passively consume the content, or “lurk”.

For instance, on Facebook: 1% of users post original content, 9% of users engage with that content (like, comment, share), and the rest just consume.
Of course, those numbers are estimates. The main takeaway is that an order of magnitude more users interact than create, and an order of magnitude more again lurk rather than interact.
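The arithmetic behind the rule is simple enough to sketch. Here's a toy illustration, assuming the rule-of-thumb 1/9/90 percentages from above (the function name and the million-user community are my own invented example, not data from any real platform):

```python
# A toy illustration of the 1-9-90 Rule. The percentages are the
# rule-of-thumb estimates from the post, not measured values.
def split_1_9_90(total_users: int) -> dict:
    """Partition a user base into creators, engagers, and lurkers."""
    creators = total_users * 1 // 100
    engagers = total_users * 9 // 100
    lurkers = total_users - creators - engagers
    return {"creators": creators, "engagers": engagers, "lurkers": lurkers}

counts = split_1_9_90(1_000_000)
print(counts)  # {'creators': 10000, 'engagers': 90000, 'lurkers': 900000}
```

In a community of a million users, that's only ten thousand people actually making things, with each tier roughly an order of magnitude larger than the one above it.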
This pattern is wildly predictive of online behavior. When we started my previous company Anchor, whose earliest versions were social platforms, we found the same behavior among our users almost immediately: About 1% of them came to Anchor to share original content. About 9% replied to that content. And about 90% never created anything.
The reason behind this is intuitive: It is much easier to consume than to engage. And it is much easier to engage than to create.
Inversion
Something incredible has happened in the past few years. The whole underlying mechanism has flipped.

With the advent of AI, it is now arguably easier to create than it is to consume. Here’s why: Consumption takes time. Consumption takes mental energy. Engagement, on the other hand, can be programmed. Creation can be prompted. There exist services today that, given a detailed description of your preferred tone, subject matter, and so on, will post to social media on your behalf.
It’s like an episode of Black Mirror, where the content being generated is optimized for eyeballs and clicks. We’ve only seen the tip of the iceberg, and yet already, research suggests that more than half of the written content on the internet is either AI-generated or translated by AI.
That’s not necessarily a bad thing. My current company, Oboe, is built on the foundational idea that easier creation means content can become more readily and affordably accessible. In the realm of learning, despite the obvious disruptive impact on existing educational systems, I fundamentally believe this to be a revolutionary positive change for the world.
And yet, with the proliferation of easy, cheap content, there are obvious dangers:
Inaccuracies and misinformation will flourish.
For every actor attempting to use this new superpower for good, there will be many attempting to use it for bad.
The training of AI models on the internet’s ever-rising percentage of AI-generated content will ultimately lead to the phenomenon known as model collapse.
Plus, going back to social media, the 1-9-90 pyramid will be inverted: far more (lower-quality) content will be created than can ever be consumed…
Stuck in the Middle
…but I think that’s a bit of a red herring.
The headlines and the controversy will be about the 1%. What happens, we all seem to be asking, when the content that’s created online isn’t created by human beings, but by artificial intelligence?
This is the wrong question to ask. Most of the content we humans create for social platforms is already formulaic. It’s statistically optimized to lure you in. Social content creation is much more of a science than an art, and that has been the case since the day, circa the late ’00s, when folks realized they could monetize your attention very cost-effectively.
So how, other than being more cheaply created, will AI-generated social content be that different from what we already have?
We’re focusing on the wrong part of the pyramid. The top won’t be different. What will be different is the middle, the 9%. What we should be asking is: What happens when most of the signal that platforms use to determine which content to show algorithmically is fake?
Currently, it’s the 9% (the ones who comment, like, retweet, etc.) who determine what content gets propagated across these networks. It’s the 9% that determine the virality of the 1% and therefore control the attention of the 90%.
(The exception to this is TikTok, which has been able to unlock, through its unique format, more “signal” from the behavior of the 90% than any other platform in history.)
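To make the stakes concrete, here is a deliberately naive sketch of engagement-weighted ranking. The weights, post data, and function are hypothetical inventions of mine, not any platform's actual algorithm; the point is only that when the 9%'s signals decide what surfaces, synthetic engagement poisons the ranking directly:

```python
# Naive engagement-weighted feed ranking. The weights and post data are
# hypothetical; real ranking systems are far more complex.
def rank_posts(posts: list[dict]) -> list[dict]:
    """Sort posts by a simple weighted engagement score, highest first."""
    def score(post: dict) -> float:
        return (post["likes"] * 1.0
                + post["comments"] * 3.0
                + post["shares"] * 5.0)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "human_post", "likes": 120, "comments": 10, "shares": 4},
    {"id": "bot_boosted", "likes": 50, "comments": 40, "shares": 30},
]
# Bot-driven comments and shares easily outrank organic likes:
print([p["id"] for p in rank_posts(posts)])  # ['bot_boosted', 'human_post']
```

In this toy model, a modest swarm of bots commenting and sharing beats genuine human interest without the 90% ever noticing why their feed looks the way it does.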
Here’s the real dystopian outlook on social platforms in the age of AI: Robots creating the world’s content, and even more robots interacting with that content to drive what they want to the top.
I myself cannot see how traditional social media survives this tectonic shift, or how human beings will continue to find value in consuming this content. It will require a big change in either how these platforms work or how their consumers behave. And I believe it’s pretty obvious which of those two groups is less likely to ever change.