Artificial stupidity

As we observe the meteoric rise of LLMs (large language models) and GPTs (generative pre-trained transformers), I’m feeling two distinct emotions: annoyance and depression.

I’m annoyed because even the best of these models (GPT-4 being the current version at the time of writing) have serious, fundamental flaws, and yet every company is absolutely scrambling to stuff this technology into every possible product. Microsoft has integrated it into Bing, with predictable hilarity, and is now building it into its Office suite; Snapchat is building an AI bot that becomes your “friend” the moment you create an account, with horrifying consequences already being observed; and so on. All of these decisions are frightfully reckless, driven by nothing but the latest Silicon Valley hype cycle.

Let’s get one thing out of the way: anyone who claims that these language models are actually “intelligent”, or even “sentient”, probably has a financial incentive to say that. The hype around this technology is so strong that it’s hijacking the imagination of serious academics and researchers. The hype is even stronger than it was for crypto!

These models have reignited and intensified conversations about AGI (artificial general intelligence) and how close we really are to building an intelligence that surpasses human cognition on every measure. These debates are certainly worth having, but I’m skeptical that LLMs bring us any closer to understanding anything at all about intelligence or consciousness.

The one thing these LLMs have genuinely demonstrated is very simple: human language is not very complex, and it’s possible to take literally every word ever written by a human being, feed it into a language model, and have that model synthesize plausible text in response to a prompt. It really is that simple.

Yes, it is impressive that you can have a believable “conversation” with these language models, but that’s because most conversations have already been had, and 99% of our day-to-day communication is boilerplate that a language model can reproduce on demand. Neat, huh?

I can foresee a counterargument being raised here: virtually no one does long division anymore, or really any kind of arithmetic with more than two digits, because we invented pocket calculators to do the arithmetic for us, which gives us freedom to do higher-order reasoning. What’s wrong with creating more powerful technologies to offload even more menial reasoning tasks, so that we are free to think on grander scales?

The problem here is that pocket calculators are exact by nature, and always produce a consistent and correct result. If a calculator malfunctions, the fault becomes obvious very quickly, and the device is easy to repair or replace. LLMs, on the other hand, are inexact by nature, and produce content that cannot be relied upon. It will not be clear when and how an LLM will malfunction, or even what it means for an LLM to malfunction, or what effect a malfunction will have on its output.

You might go on to say that the kind of aversion to new technology that I’m expressing dates back to Plato and his Phaedrus dialogue, in which Socrates recalls a tale about the Egyptian king Thamus being distrustful of the invention of writing:

“For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”

A fair point, and future developments in LLMs might prove me to be as short-sighted as King Thamus was. I’m not denying that LLMs could have plenty of excellent and positive uses; I’m simply pointing out how recklessly we seem to be deploying this technology, without understanding its potential impact.

Human language in its written and spoken forms, as unsophisticated as it might be, is integral to our mechanisms of sensemaking. And it seems to me that sensemaking, of all things, is not something to be offloaded from our own minds. We already have a problem of “misinformation” on the web, but LLMs carry the potential to amplify this problem by orders of magnitude, because the very same misinformation is part of the data on which they were trained.

The very act of “writing”, i.e. distilling and transforming abstract thoughts into words, is a skill that we mustn’t let fall by the wayside. If we delegate this skill to a language model, and allow the skill to atrophy, what exactly will replace it? What higher-order communication technique, even more powerful than the written word, awaits us?

The best-case outcome of the current LLM craze is that it’s a hype cycle that will end with a few judicious uses of this technology in specific circumstances. And the worst case is a general dumbing-down of humanity: a world in which humans no longer bother to say anything original, because it’s all been said, and in which a language model consumes the totality of our culture and regurgitates it back to us. Enjoy!