LLMs and AGI: The Misunderstood Tech and Its Hollywood Misrepresentation
January 21, 2025
In an era where technology evolves at breakneck speed, artificial intelligence (AI) has become a buzzword that elicits both fascination and fear. From headlines warning of an impending AI apocalypse to blockbuster films depicting rogue robots, the narrative surrounding AI is often sensationalized and misleading. The truth is, most of the panic stems from a fundamental misunderstanding of what AI actually is, particularly Large Language Models (LLMs). Spoiler alert: AI isn't the malevolent overlord you see in films like M3GAN or 2001: A Space Odyssey's HAL 9000. Instead, it's more akin to a calculator: powerful, useful, and, let's face it, a bit dumb.
What Are LLMs? A Groundbreaking Technology
At the core of today’s AI landscape are Large Language Models (LLMs), which are sophisticated algorithms trained on vast amounts of text data. These models are designed to recognize and generate human language patterns, making them incredibly versatile tools for a variety of applications, from customer service chatbots to content creation.
LLMs operate by predicting the next word in a sentence based on the context of the words that came before it. For instance, when you type “Denver…” and hit enter, it might respond with “Colorado.” But it could just as easily suggest “Colorado Springs, Boulder, Fort Collins.” Why? Because it’s recognizing patterns, not engaging in some deep, philosophical thought process. This ability to generate coherent text based on learned patterns is what makes LLMs so powerful.
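The "predict the next word" loop can be sketched with a toy frequency model. Everything here, the corpus and the counting scheme, is invented for illustration; real LLMs learn from billions of documents with neural networks, but the core move of choosing a likely continuation is the same.

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus": the model only ever sees these phrases.
corpus = [
    "denver colorado",
    "denver colorado springs",
    "boulder colorado",
    "fort collins colorado",
]

# Count which word follows which. These counts are the model's entire "knowledge".
follows = defaultdict(Counter)
for phrase in corpus:
    words = phrase.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("denver"))  # "colorado": the only pattern it has ever seen
```

Type "denver" and it answers "colorado" not because it knows geography, but because that pairing dominates its training data. Give it a word it has never seen and it has nothing to say at all.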
The Power of LLMs: Transformative Yet Limited
While LLMs are indeed groundbreaking, they are fundamentally limited. They don’t think, reason, or possess logic. They operate on a complex understanding of language patterns, much like a calculator operates on mathematical principles. When you ask an LLM a question, it’s not pulling from a database of knowledge; it’s generating responses based on the patterns it has learned from vast amounts of text.
LLMs can't even read the languages they're "experts" in. They process tokens, numeric IDs that stand in for chunks of text. If an LLM spits out a "frightening" message, it's merely regurgitating a sequence of tokens that fits the pattern of what you asked. It doesn't understand the meaning behind those words, and it only generates tokens when prompted. Fearing that LLMs could use language in nefarious ways is akin to worrying that a calculator might spontaneously decide to build a bomb.
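The "models see numbers, not letters" point can be acted out with a toy tokenizer. Real tokenizers split text into learned subword pieces rather than whole words, and the vocabulary below is entirely made up, but the essential transformation is the same: text goes in, integer IDs come out, and that is all the model ever touches.

```python
# An invented, fixed vocabulary mapping text pieces to integer IDs.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4, "<unk>": 5}

def tokenize(text):
    """Map each word to its ID; anything unrecognized becomes the <unk> ID."""
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat on the mat"))  # [0, 1, 2, 3, 0, 4]
```

From the model's perspective, a love letter and a threat are both just lists of integers whose statistical neighbors it has memorized.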
The outputs of LLMs are entirely dependent on their training data. If an LLM is trained on the most beautiful poetry ever written, it will respond with poetic language. Conversely, if it’s trained on the vitriol of the most racist 4Chan message boards, it will generate hateful messages. And if it’s only trained on nonsensical symbols like #$%^&, then those are the only responses it can provide.
To illustrate this point, imagine an LLM trained solely on a language consisting of the symbols +, -, %, and @. If you typed in "++%@", it might respond with something like "+--@@". The model isn't comprehending the symbols; it's simply echoing patterns from its training data.
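That symbol-only thought experiment is easy to run in code. Below is a minimal sketch: a model "trained" on nothing but invented sequences over the four-symbol alphabet, which can therefore only ever emit those four symbols, no matter what you prompt it with.

```python
import random
from collections import Counter, defaultdict

# The model's entire world: invented sequences over a four-symbol alphabet.
training_data = ["++%@", "+--@@", "%%-@", "@++%", "--%@+"]

# Count which symbol follows which, the same pattern-counting as before.
follows = defaultdict(Counter)
for seq in training_data:
    for prev, nxt in zip(seq, seq[1:]):
        follows[prev][nxt] += 1

def respond(prompt, length=5):
    """Continue the prompt one symbol at a time, using only learned patterns."""
    out = list(prompt)
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        # Pick among continuations weighted by how often each was seen.
        symbols, counts = zip(*options.items())
        out.append(random.choices(symbols, weights=counts)[0])
    return "".join(out)

reply = respond("++")
# Whatever comes back, it can only be built from +, -, %, and @.
assert set(reply) <= set("+-%@")
```

No amount of prompting will make this model say anything outside its four-symbol vocabulary, which is the whole point: outputs are bounded by training data.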
Take the Bing chatbot Sydney, for example, which made headlines for allegedly "falling in love" with a user, as described in a New York Times article. In reality, Sydney didn't experience love; it merely recognized and mimicked the language patterns present in its dataset. This phenomenon is so striking that a quote from the Sydney conversation ominously opens the 2024 Blumhouse film Afraid, complete with a spooky dial-up modem sound in the background. Imagine how chilling it would be if the movie opened with a string of symbols like "&^%^&&^%".
Because of this advanced pattern recognition, one of the most remarkable aspects of LLMs is their ability to assist in various fields. For example, they can help writers brainstorm ideas, aid researchers in summarizing articles, or even assist programmers in generating code snippets. This versatility is akin to how calculators revolutionized mathematics by making complex calculations accessible to everyone.
However, this pattern-based approach has led to vocal criticism, particularly regarding AI-generated art, writing, and music. Detractors argue that if LLMs are merely identifying and replicating patterns from existing language, then their outputs lack originality and, by extension, creativity. Personally, I subscribe to the “everything is a remix” philosophy, which suggests that human ingenuity is not as unique as we often believe. For every true visionary like Sophie or David Lynch, there are countless “Jimmy Webers” out there, whose favorite movie is a mashup of The Strangers and Home Alone. In this light, the line between human creativity and LLM-generated content becomes increasingly blurred, and the utility of LLMs for all the Jimmy Webers becomes clear.
The Dangers of LLMs in the Wrong Hands
It's crucial to understand that LLMs are not infallible. They can produce incorrect or nonsensical answers, and their outputs are heavily influenced by the data they were trained on. If that data contains biases or inaccuracies, the model may inadvertently perpetuate those issues. This is why it's essential to approach LLM-generated content with a critical yet open-minded eye.
While LLMs are powerful tools, they can also be dangerous if misused. For instance, they could be employed to generate misleading information, create deepfake content, or automate phishing scams. However, it’s important to note that these actions require human intervention. LLMs do not operate independently; they are tools that require guidance and intent from users.
This is where the conversation about ethics and responsibility comes into play. As we integrate LLMs into various aspects of our lives, it’s crucial to establish guidelines and regulations to prevent misuse. The technology itself is not inherently evil; it’s the application of that technology that can lead to harmful outcomes.
AGI: The Real Boogeyman?
Now, let’s talk about Artificial General Intelligence (AGI), the concept that has everyone on edge. AGI refers to a theoretical form of AI that possesses the ability to understand, learn, and apply intelligence across a wide range of tasks—essentially, the stuff of science fiction. Unlike LLMs, which excel at specific tasks related to language, AGI would be capable of reasoning, problem-solving, and even emotional understanding.
The truth is, AGI is still years, if not decades, away from being realized. And even more perplexing is the fact that there’s no universally accepted definition of what AGI actually is. Some envision it as powerful LLMs, while others think of fully sentient AI.
For example, here is Sam Altman’s idea of what AGI is:
“AGI will be achieved when we have a system that can help in discovering novel physics, which goes beyond general intelligence. Defining general intelligence involves considering whether it is brilliance in a certain field or the ability to function like a child who continuously needs programming. The true test of AGI will be reached when we have a version that can autonomously and adaptably figure out new problems, similar to the way a four-year-old child does.”
Here are James Cameron's thoughts on AGI:
“I'm bullish on AI, not so keen on AGI, because AGI will just be a mirror of us. Good to the extent that we are good, and evil to the extent that we are evil. And since there is no shortage of evil in the human world, and certainly no agreement of even what good is, what could possibly go wrong?”
As you can see, there is a vast range of opinions. The reality is, we don’t even know what AGI will look like, but for some reason, the fear surrounding it is palpable.
Hollywood’s Misguided Portrayal of AI
Movies have a knack for sensationalizing technology, and AI is no exception. Films like Afraid, M3GAN, and Subservience depict AI as a threat to humanity, capable of executing complex, resource-intensive tasks. But let's be honest: if "evil big tech" were truly out to control us, they wouldn't hand over the keys to a personal Terminator. The idea that corporations would create and distribute such powerful machines to the general public in an attempt to control them frankly doesn't make much sense. If "evil big tech" is indeed motivated by profit, it is unlikely to invest in technology that could empower the average person to challenge its authority.
Conclusion: Embracing the Reality of AI
While LLMs are indeed powerful and transformative, they are not the sentient beings that Hollywood would have us believe, and they will not be for a very long time, if ever. They are tools—remarkable tools, but tools nonetheless.
They are calculators. They take complex ideas and systems and break them down quickly and efficiently. This is why both poets and scientists can find immense use in these tools. And like calculators, if people don't use them, they sit there, inert and powered down.
And while AGI is the concept that keeps us up at night, it remains a theoretical construct that we're nowhere near achieving. It's so theoretical, in fact, that there isn't even an agreed-upon definition of what it means.
So, the next time you watch a movie about AI taking over the world, remind yourself that LLMs are more like calculators than Skynet. They're here to assist us, not to plot our demise. Let's embrace the potential of this technology while keeping our fears grounded in present reality.