The Perils of Overusing AI Metaphors: A Call for Clarity
In our quest to distinguish the new from the old and the familiar from the unfamiliar, we risk either oversimplifying reality or introducing misleading concepts, rendering our inquiries ineffective.
Metaphors and historical analogies are inherently flawed. They necessitate a balance between accuracy and simplicity, often motivated by the desire to advance an argument. Excessive reliance on such figures of speech can degrade the quality of discussions, impacting both our comprehension and that of others.
This communicative ailment currently afflicts discussions surrounding AI.
*This article is an excerpt from* The Algorithmic Bridge, *a newsletter aimed at connecting AI, algorithms, and individuals. It seeks to illuminate AI's influence on your life and equip you with the tools to navigate the future more effectively.*
Successful Metaphors as Indicators of Bias
AI metaphors often fall prey to a particular issue: they can become meaningless through overuse.
When I first encountered the term “stochastic parrot,” coined by linguist Emily M. Bender, it resonated with me. It succinctly captured a significant flaw of language models: their tendency to stitch together plausible-sounding text probabilistically, without grounding in meaning or intent. The phrase gained traction and became a go-to reference for critiquing language models.
However, after two years of ubiquitous use, the term has been diluted to the point of losing its original significance; it has transformed from a meaningful critique into a partisan symbol, reflecting the author’s biases rather than highlighting the inherent limitations of language models.
The same phenomenon occurs on the opposing side of the AI discourse. For example, the notion that ChatGPT resembles the human brain because both function as “prediction machines,” as noted by neuroscientist Erik Hoel, has become overused. While it contains some truth, that truth has been lost amid the ongoing narrative battles.
This is not a selective observation. Numerous examples abound, such as “ChatGPT is a blurry JPEG of the web” or “Stable Diffusion is automated plagiarism.” These metaphors often serve as rhetorical weapons aimed at discrediting opposing viewpoints rather than genuinely evaluating their validity.
I’ve witnessed individuals resorting to metaphors as if they were definitive rebuttals in social media debates. While this may be acceptable in casual online exchanges, it becomes problematic when influential figures like Sam Altman, OpenAI's CEO, dismissively reduce the “stochastic parrot” analogy to a catchphrase, as Altman did when he tweeted, “i am a stochastic parrot, and so r u.”
Altman’s unfalsifiable assertion suggests that there is nothing more to discuss, potentially misleading those who view him as an authority. Emily M. Bender, unsurprisingly, contests his assertion: “You are not a parrot and a chatbot is not a human.”
Statements like Altman’s do little to advance understanding. They obscure the conversation by tapping into emotional responses rather than engaging with the underlying arguments. The term “stochastic parrot” (or Bender’s ideas) isn’t to blame; the issue stems from its success and our troubling tendency to oversimplify AI discussions into dogmatic disputes through the misuse of these often well-intentioned but partially flawed analogies.
Historical Events as Distorted Reflections of Today
It appears that those involved in AI discussions are just as fond of history as they are of metaphors.
In a recent interview with Forbes, Alex Konrad and Kenrick Cai posed a question to Altman: “Do you see any parallels between the current AI market and the rise of cloud computing, search engines, or other technologies?” He responded:
“Look, I think there are always parallels. And then there are always things that are a little bit idiosyncratic. And the mistake that most people make is to talk way too much about the similarities, and not about the very subtle nuances that make them different.”
This insight is spot on. While I may disagree with his comments on the “parrotism” of humans, I concur that drawing parallels between AI and earlier emerging technologies often serves to bolster narratives rather than present objective truths. For instance, Roon and Noah Smith, in their eloquent essay “Autocomplete for Everything,” argue that because disruptive technologies have historically generated more jobs than they eliminated, we can expect similar outcomes from AI.
Moreover, historical events that bear some resemblance to present circumstances can be manipulated rhetorically to support any argument, whether one views AI as a boon or a bane for society.
Common comparisons have included likening Luddites to those who feel threatened by text-to-image models like Stable Diffusion, equating AI to fire or electricity for their transformative potential, or drawing parallels between the generative AI hype and that of web3/crypto. I, too, have fallen into this trap; just last week, I compared the importance of learning prompt engineering to mastering English as a non-native speaker in my youth.
A recent example involved a writer who dismissed the significance of Clarkesworld’s temporary closure due to a surge in AI-generated submissions by likening it to the rise of online magazines three decades ago: “Is this really a crisis of creativity? Or an opportunity?” she asked.
Heather Cox Richardson, a history professor and the most successful Substack author to date, asserts that “history doesn’t repeat itself, but it sure rhymes.” I share her view: there is much to glean from the past. However, it’s essential to approach these comparisons with integrity; none of them are perfect (some, like the aforementioned example, are notably poor), so we should represent them as such.
Is the advent of AI writing tools like ChatGPT truly comparable to the rise of digital newspapers? Are those advocating for more regulation of generative AI companies akin to Luddites? A more nuanced analysis, devoid of grand metaphors, would certainly be beneficial.
As Altman rightly points out, we risk focusing too heavily on similarities, leading to an inaccurate projection of historical events onto a present characterized by fundamentally different circumstances—this distorts reality to fit our subjective views, in turn skewing how others perceive our arguments.
Avoid Falling for Simplistic Arguments
I have illustrated this issue using analogies, metaphors, and historical references from both sides of the AI debate (those who regard modern AI as merely a sophisticated statistical tool versus those who predict AGI is imminent).
While I believe some metaphors are more problematic than others (for instance, comparing ChatGPT to the human brain is quite a stretch), I do not advocate for any specific stance here (including my own, which aligns more closely with those who accept the “stochastic parrots” view). Instead, I aim to highlight a broader issue that affects us all.
This trend—longstanding but amplified by ChatGPT—could render the entire discourse surrounding AI ineffective and meaningless. The more we discuss AI through diluted concepts and questionable comparisons, the more we alienate those unfamiliar with the original intentions—namely, the majority of people.
This practice is widespread. It coincides with the increasing popularity of AI and the need to communicate with the public in more accessible language, but we can strive for improvement. We shouldn’t reduce discussions to mere “dunking” on one another.
My proposal—though I am not naive enough to believe it will eradicate this pervasive tendency but may mitigate it somewhat—is to avoid treating metaphors and comparisons as standalone arguments. It’s crucial to contextualize them to enhance their utility. We should recognize their limitations before substituting the subject of debate with an imperfect analogy.
While it may not always be practical to do so, making an effort to refresh the meaning of those compelling analogies that time and repeated use have dulled could prove valuable.
*Subscribe to* The Algorithmic Bridge. *Connecting algorithms and individuals through insights about the AI that impacts your life.*

*You can also support my work on Medium directly and gain unlimited access by becoming a member using my referral link* here*! :)*