How AI Doesn't Work 🤖❌

Tackling the thought-provoking implications of the LLM revolution

The Flip Side

Two weeks ago, I published an article entitled How AI Works, in which I explained the fundamentals of today’s large language models (LLMs) without any complicated technical or mathematical language (opting instead to use analogies to food and meal planning).

Since then, I’ve been thinking about the many implications of those fundamentals. Namely, the inverse question: now that we know what AI can do, what can it not do?

Again, no technical background needed.

What AI (Maybe) Teaches Us About the Human Brain

All modern artificial intelligence (AI) is made up of components modeled on the human brain. You may have heard of the term neural network, a fancy label for a fairly simple idea. The network is made up of “neurons” that look like this:

And these are modeled after the neurons that are firing in your brain as you read this article. They look like this:

And all these artificial neurons really do is take a bunch of numbers, multiply them by other numbers called weights, and add them together.

All of the shocking complexity of today’s artificial intelligence stems from that basic idea. If you string many (many, many, many) artificial neurons together and “train” them, these simple additions and multiplications begin to identify patterns in the data as if by magic. (“Training” here is essentially a modified version of the food similarity exercise I outlined in my previous article; instead of pushing food together or apart, you just nudge numbers up or down as needed).
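If you’re curious what that looks like in practice, here’s a tiny sketch in Python (with made-up numbers, and nothing you need to follow for the rest of this article): a single artificial “neuron” that multiplies its inputs by weights, adds them up, and then has its weights nudged up or down until its output lands near a target.

```python
# A minimal, illustrative "neuron": a weighted sum of its inputs,
# plus the nudge-the-weights-up-or-down idea behind training.
# All numbers here are invented for illustration.

def neuron(inputs, weights):
    # The entire neuron: multiply each input by a weight and add it all up.
    return sum(x * w for x, w in zip(inputs, weights))

inputs = [0.5, 1.0, -0.25]   # some incoming numbers
weights = [0.1, 0.4, 0.9]    # the numbers that get "trained"
target = 1.0                 # the output we'd like the neuron to produce

for _ in range(100):
    error = target - neuron(inputs, weights)
    # "Training": nudge each weight a little, in whichever direction
    # shrinks the gap between the output and the target.
    weights = [w + 0.1 * error * x for w, x in zip(weights, inputs)]

print(round(neuron(inputs, weights), 3))  # ends up at (roughly) 1.0
```

Real neural networks chain millions or billions of these little units together (with a few extra mathematical flourishes), but the core operation is no fancier than this.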

The original idea of modeling AI systems after the human brain was metaphorical at best. Yet as we trained these models on more data and with more neurons, the output became miraculously human-like. And that raises the question: What if the relationship between the two isn’t simply metaphorical? What if the human brain really is just a very powerful neural network?

Are We Just Machines?

I had a conversation with a friend recently who summarized this perspective. “It seems the more data and parameters we throw at AI, the more accurate it becomes. At a certain point, it becomes irrelevant how it was trained. The output feels more and more human, and that’s all that matters.” They went on to add, “And isn’t that all people are too? Just output? Whatever you perceive as human thoughts, free will, creativity, etc. How can you tell that it isn’t just the output of a really good neural network?”

Phrased differently, the closer our AI systems are able to mimic the output that a human might generate (in terms of text, images, etc.), the more we’re left asking what actually differentiates those machines from humans. One might argue, “Machines don’t understand what they’re saying. They’re simply doing some math and identifying patterns based on their input data.” But others, like my friend, may retort, “Sure, but how do you know your brain isn’t doing the same thing?”


The Limitations

And while these arguments are thought-provoking and have yielded many an interesting debate, the truth is that today’s AI is an infant in the grand scheme of the AI innovation yet to come.

As it turns out, today’s AI has severe limitations. And questions like “Will AI systems take over the world?” or “Will AI research prove that humans are merely complex neural networks?” must be answered in the negative if we’re asking them in the context of today’s AI.

Let’s look at a few of those limitations:

Limitation 1: The Need for Voluminous Data

Infants learn to understand language fluently and, in time, to speak it fluently as well. They do this with limited exposure to the world, with the ability to process only a finite number of spoken words, and with no ability to read.

Large Language Models (LLMs), by contrast, require a tremendous amount of data to reach the levels of quality output we see today. Take ChatGPT, for instance, which is effectively trained on the entirety of the crawlable internet (and, arguably, on some written content it shouldn’t have accessed at all).

In the race between the human brain and AI for effectiveness and efficiency, there’s a clear winner. With a fraction of a fraction of the data LLMs require, the human brain is able to understand language and reasoning (among myriad other things), not to mention develop its own consciousness (more on that below).

Limitation 2: Text is Not Meaning

One thought-provoking idea posed by philosopher John Searle is called the Chinese Room. Imagine a computer that, when asked a question in Chinese, is able to look up a convincing answer that makes it seem as though it can perfectly understand and speak Chinese. That’s essentially what today’s LLMs can do.

Now imagine that, instead of the input being sent to a computer, it’s sent to a person in a room who doesn’t speak a word of Chinese. They have a book that allows them to take a question, look it up by its characters, and respond with a valid answer in the expected output language.

From the perspective of the person posing the questions, the answers are valid and fluent. However, the person in the room would not be able to understand any of the conversation; they’d simply be an intermediary. If you asked them if they understood Chinese, they would say no. By the same logic, one can’t argue that a computer ever really “understands” its output. Just because it is creating what looks like human output doesn’t mean it can think, reason, and understand as a human would.
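If it helps to see the room in action, here’s a toy sketch of the setup (the questions, answers, and rule book below are invented placeholders): a simple lookup table produces fluent-looking replies while nothing inside the “room” understands a word of them.

```python
# A toy version of Searle's Chinese Room: the "rule book" is just a lookup
# table, and the "person" only matches symbols and copies out a reply.
# The entries below are invented placeholders, not real data.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢！",           # "How are you?" -> "I'm fine, thanks!"
    "今天天气怎么样？": "今天天气很好。",     # "How's the weather?" -> "Lovely today."
}

def person_in_the_room(question: str) -> str:
    # No understanding happens here: just symbol matching and copying.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(person_in_the_room("你好吗？"))  # fluent-looking output, zero comprehension
```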

Text is not meaning, and it’s important to differentiate between the two. I wrote an article several months ago called AI Kryptonite in which I showed a fascinating example of this at play. Because of an AI’s inability to step out of the literal words it is saying, getting it to say the phrase “<|endoftext|>” results in it behaving in extremely strange ways.
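For the curious, here’s a small sketch of why that phrase is such strange territory for a model. It uses OpenAI’s open-source tiktoken tokenizer with its GPT-2 encoding (the exact token IDs in the comments are specific to that encoding): GPT-style models reserve “<|endoftext|>” as a single special marker that separates documents, not as ordinary text they’re meant to say.

```python
# Why "<|endoftext|>" is no ordinary phrase: GPT-style tokenizers treat it
# as a reserved boundary marker, not as text. Shown here with tiktoken's
# GPT-2 encoding; IDs will differ for other encodings.
import tiktoken

enc = tiktoken.get_encoding("gpt2")

# Read as ordinary text, the phrase gets chopped into several small pieces...
as_plain_text = enc.encode("<|endoftext|>", disallowed_special=())
print(as_plain_text)   # several IDs, e.g. [27, 91, 437, 1659, 5239, 91, 29]

# ...but treated as the special token it actually is, it collapses to one ID.
as_special = enc.encode("<|endoftext|>", allowed_special={"<|endoftext|>"})
print(as_special)      # [50256], GPT-2's end-of-text marker
```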

Limitation 3: Thinking Only Inside the Box

AI may seem to “think outside the box,” as a human might, yet there are very well-defined boundaries to what it knows: the limits of what it’s trained on. As mentioned above, LLMs like ChatGPT are essentially trained on the entire internet. And that means that all of the patterns it discovers to mimic human behavior are strictly limited to what human beings have already done.

The walls of the box, then, are the extent of what human beings have created before. LLMs can find interesting crossovers between those inputs (say, a poem that has never been written, or an image of a cat flying in outer space). But even those imaginative outputs are limited to the intersections of human output to date. AI can never mirror the human ingenuity that lets us create or understand something in an entirely novel way.

Limitation 4: The “Verbal Fossil Trail”

This one is inspired by a great article in Time by Professor Andy Clark. “At best, text-predictive AIs get a kind of verbal fossil trail of the effects of our actions upon the world,” he writes. “[The] AIs have no practical abilities to intervene on the world—so no way to test, evaluate, and improve their own world-model, the one making the predictions.”

Using the analogy from my previous article explaining how AI works, a computer may be trained to very effectively understand which foods are best paired with which others. But it will never be able to taste those foods or interact with them in any meaningful way. It will only ever “know” things in an indirect way, detached from the reality in which those dishes exist.

LLMs have the same limitation. Since their input is purely human-created text, images, and so on, they have no means by which they can ever interact with the world that text is meant to reflect. Their “thinking” is merely a very effective means of finding patterns in the footprints left by humans in the past.

Limitation 5: No Sense of Self

Ask an AI if it has consciousness, and it may give you a very convincing explanation for why it does. It’ll sound human and reason like a human. Remember the Google engineer who became convinced a Google AI bot was sentient?

But recall that AIs are trained to mimic patterns found in human-generated text. So if an AI makes a convincing sentience argument, it’s because human beings have made those arguments in the corpus of data the AI was fed.

Let’s compare that to a child, whose consciousness develops without any training data to speak of.

An AI can pretend to be a self because it is mimicking the behaviors of human beings. A child develops a sense of self without ever hearing or processing philosophical debate about an “I”. Human consciousness is innate; it isn’t learned via pattern recognition. (And that’s pretty much all we can say about it…)

And So…

None of this is intended to persuade or dissuade anyone in the grand philosophical debates AI raises. But it does demonstrate one thing: in the search for “Artificial General Intelligence,” that is, a machine that can truly achieve human-level reasoning or consciousness, today’s technology is nowhere near where it needs to be.

If you enjoyed this article, please consider sharing it! And if you aren’t yet subscribed to this free weekly newsletter, you can subscribe by clicking here.