Reevaluating Authoritarian Algorithms: Myths of Tomorrow
In a 2020 dinner conversation with a New Yorker interviewer, Yuval Noah Harari remarked that during the Middle Ages, “only what kings and queens did was important, and even then, not everything they did” (Parker, 2020). His observation underscores a common viewpoint: significant historical transformations often appear to hinge on a few towering figures who disrupt established norms.
Yet, history encompasses more than just these prominent individuals; it is shaped by the collective actions of the broader society. The likes of Einstein, Hitler, Shakespeare, Stalin, and Harari are products of the intricate web of human relationships and interactions.
You hold just as much significance as any of Harari’s monarchs; your role in the grand narrative of humanity is equally valid. The essence of history morphs based on one’s viewpoint—your unique lens. Harari's writings often venture into a populist narrative that can feel detached from the realities of life, akin to sensational television rather than serious literature.
This is not to dismiss all of Harari’s insights as incorrect. Many of his conclusions, scientific concepts, and societal issues carry elements of truth. The challenge lies in discerning where his imaginative storytelling concludes and reality begins. In a fictional narrative, such ambiguity may be acceptable, but in work touted as pivotal to humanity's understanding, this conflation is problematic.
Harari seems aware of this distinction. He once stated, “If we think about art as kind of playing on the human emotional keyboard, then I think A.I. will very soon revolutionize art completely” (Parker, 2020). His work embodies the emotional manipulation he critiques, invoking fears of a dystopian future dominated by artificial intelligence, which diverts attention from more pressing inquiries.
While he addresses valid concerns, such as the use of technology for surveillance and societal control, his framing often casts these issues as unavoidable outcomes of humanity's flaws.
He suggests a curious techno-optimism: “When the biotech revolution merges with the infotech revolution, it will produce Big Data algorithms that can monitor and understand my feelings much better than I can, and then authority will probably shift from humans to computers” (Harari, 2018, Listen to the Algorithm, Para. 8). This assertion is misleading. Companies do misuse algorithms to make dubious predictions about us, but it is that misuse, not any genuine machine understanding, that constitutes the crux of the issue (Narayanan, 2022).
Moreover, Harari posits that the rationality of machines reflects a reality where “some people are far more knowledgeable and rational than others, certainly when it comes to specific economic and political questions” (Harari, 2018, Big Data is Watching You, Para. 5). This perspective aligns him with the neoliberal elite, a curious alliance given his frequent critiques of their technology-centric lifestyles.
As noted by Darshana Narayanan in a 2022 Current Affairs article:
> “Harari’s motives remain mysterious; but his descriptions of biology (and predictions about the future) are guided by an ideology prevalent among Silicon Valley technologists like Larry Page, Bill Gates, Elon Musk, and others. They may have differing opinions on whether the algorithms will save or destroy us. But they believe, all the same, in the transcendent power of digital computation.” (Narayanan, 2022)
Harari himself remarks that once “AI makes better decisions than us about careers and perhaps even relationships, our concept of humanity and of life will have to change” (Harari, 2018, The Drama of Decision-Making, Paras. 15–16). He presents this idea not as a positive change but as an inevitability.
The concern is that Harari's portrayal of the future leans toward a pessimistic dystopia reminiscent of the world of Cyberpunk 2077, yet it promotes the same fundamental premise as optimists like Steven Pinker: the normalization of surveillance capitalism.
This point has been raised by others, including Narayanan, and remains significant: treating these emerging technologies as predetermined outcomes is perilous. While history can seem inevitable when viewed superficially, subscribing to a deterministic worldview is misguided.
History's trajectory was not solely dictated by plagues, agricultural advancements, or gunpowder; it evolved through the communication and collaborative reasoning of diverse groups. This evokes the Greek philosophical notion that human history thrives on extensive dialogues about moral conduct.
Examining the ancient Greeks may illuminate why Harari's work captivates the neoliberal elite. In 21 Lessons for the 21st Century, Harari suggests that “there is nothing wrong with blind obedience, of course, as long as the robots happen to serve benign masters” (Harari, 2018, Digital Dictatorships, Para. 2). Though he is ostensibly discussing future robotic subservience, the remark carries broader implications.
He further claims that the “liberal belief in the feelings and free choices of individuals is neither natural nor very ancient,” noting that for most of history authority was derived from divine laws, and that the source of authority shifted only in recent centuries (2018, Listen to the Algorithm, Para. 1).
This simplification of freedom and free will parallels Plato's notion of the philosopher-king, an unaccountable ruler by design, and his portrayal of democracies as unruly crews steering the ship of state off course (Republic, 488a–489d). Plato's perspective may well be biased, shaped by his tumultuous times and personal losses. Nevertheless, Harari's vision of an AI-dominated future evokes Plato's ideal state: he intertwines humanity's salvation with a world of benevolent technology overseen by wise technologists steering civilization in the right direction.
When he highlights human failings, such as the risk posed by “robots not being inherently dangerous, but rather the natural stupidity and cruelty of their human masters” (Harari, 2018, Digital Dictatorships, Para. 4), he frames the solution as a “benign government [where] powerful surveillance algorithms can be the best thing that ever happened to humankind” (Harari, 2018, Digital Dictatorships, Para. 8).
Harari's childhood may provide further insight. He once engaged in war games of his own creation and described a time when he identified as a “stereotypical right-wing nationalist” (Parker, 2020). Although he has since distanced himself from nationalist ideologies, his worldview remains influenced by a specific class perspective.
A revealing incident from 2017 illustrates this disconnect.
> “A Palestinian laborer posted a picture of himself at work with a bulldozer, captioned ‘Good morning!’ An algorithm misinterpreted the Arabic transliteration, leading to his arrest on suspicion of terrorism.” (Harari, 2018, Digital Dictatorships, Para. 12)
While Harari attempts to address the Israeli government's treatment of Palestinians, his language reflects an insulated privilege, still tethered to vestiges of his nationalist past. For the Palestinian worker involved, the experience was likely far from comic, a fact that marks a significant oversight in Harari's analysis.
This incident uncovers a deeper flaw in Harari’s focus on technology. He notes the algorithm's failure but neglects to question its presence. What role does Facebook play in the harassment of a worker simply taking a selfie? Such inquiries are absent from Harari's discussions. Despite his exploration of various technological pitfalls, he avoids addressing the systemic issues meaningfully.
Shoshana Zuboff’s 2019 work, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, provides an essential critique of Harari's approach. “We worry about companies amassing our personal data,” she writes, “yet we fail to question why our experiences are reduced to behavioral data in the first place” (Rendition: From Experience to Data, Para. 1).
Harari depicts a stark vision of a technologist's utopia, presenting it as fact rather than recognizing it as exactly the kind of shared fable he critiques in Sapiens. He endorses what Zuboff terms “datafication,” suggesting that our emotional limitations pale in comparison to machine logic.
Are the algorithms crafted by surveillance capitalists dangerous? Certainly. But Harari's populist approach can also be harmful, as illustrated by the experience of Ukrainian photographer Hannah Hrabarska after reading Sapiens. She reported feeling “more compassionate” yet less engaged with politics, leading her to believe that individual actions matter little in the grand scheme (Parker, 2020).
What value does compassion hold if it detaches one from the necessary engagement with the world? If one perceives themselves as insignificant compared to an AI run by Facebook, it leads to a self-fulfilling prophecy, where only the elite technologists, akin to Plato's ideal, guide humanity. The artificial intelligence they advocate merely serves to commodify lived experiences as behavioral data (Zuboff, 2019, Rendition: From Experience to Data, Para. 17).
Greetings! I’m Odin Halvorson, an independent scholar, film enthusiast, fiction writer, and technology aficionado.
References

Harari, Y. N. (2018). 21 Lessons for the 21st Century. Random House.

Narayanan, D. (2022, July 6). The Dangerous Populist Science of Yuval Noah Harari. Current Affairs, March/April 2022. https://www.currentaffairs.org/2022/07/the-dangerous-populist-science-of-yuval-noah-harari

Parker, I. (2020, February 10). Yuval Noah Harari’s History of Everyone, Ever. The New Yorker. https://www.newyorker.com/magazine/2020/02/17/yuval-noah-harari-gives-the-really-big-picture

Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.