Professor Matt Jones of Gresham College opens his lecture series by confronting a modern prophecy: the belief that "the end is nigh" due to the rapid ascent of Artificial Intelligence, asking whether this impending future is a reality or a fantasy, a utopia or a dystopia. For generations, humans have considered themselves the "apex predator" and "top dog," but the current anxiety stems from the possibility that we will be subjugated, assimilated into the machine, or become domesticated as AI’s "pet". Initial polling of the audience revealed many were "nervous, worried, scared" about this future, though a significant portion remained "excited [and] optimistic".
The challenge to human supremacy is already etched into history, despite humanity's proud gallery of greats: from Sir Edmund Hillary and Tenzing Norgay summiting Everest in 1953 to the moon landings. Jones suggests that AI's own pantheon might begin with Lee Sedol, the Go grandmaster who felt "ashamed" and "powerless" after being beaten by AlphaGo, a result that Jones, as a technologist, found deeply puzzling and embarrassing. This angst is amplified by predictions detailed in reports such as Stanford's One Hundred Year Study on Artificial Intelligence (AI100), which foresees that by 2050 robot teams will defeat human world champions, and AI will be "smarter, faster, cleverer than all of the scientists and all of the professors in the world".
Jones highlights that this wave of worry began roughly 11 years ago, driven by voices like Stephen Hawking and Elon Musk.

This period is characterized as a "scorching hot AI summer", unlike previous booms and busts in AI history because, alongside the pervasive hype, there is a heightened sense of human vulnerability stemming from recent destabilizing events such as the financial crash and the pandemic. The technological backbone of this shift lies in foundation models (also called frontier models), which are trained on stupendous amounts of diverse data (audio, video, images, and sensor data) using a new architecture known as the transformer, running on powerful machines. Unlike earlier machine learning systems designed for narrow tasks (e.g., finding tumors or denying loans), foundation models are generalizable: one pre-trained model can be steered toward many different tasks, as sketched below. The theoretical progression that concerns many runs from foundation models to agentic AI (autonomous software agents), then to artificial general intelligence (AGI), which requires neither sleep nor food, and finally to superintelligences.
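To make that generalizability concrete, here is a minimal sketch in Python. It assumes the open-source Hugging Face transformers library and the small gpt2 model, both illustrative choices of mine rather than anything the lecture names: a single pre-trained model is pointed at several unrelated tasks purely by changing the prompt, where the older narrow systems would each have needed their own purpose-built model.

```python
# A minimal sketch of "generalizable" vs. narrow ML, using the open-source
# Hugging Face transformers library and the small gpt2 model (illustrative
# assumptions, not anything named in the lecture).
from transformers import pipeline

# One pre-trained foundation model...
generator = pipeline("text-generation", model="gpt2")

# ...steered toward quite different tasks purely by changing the prompt.
prompts = [
    "Translate to French: Good morning, everyone. ->",
    "Summarize in one sentence: Foundation models are trained on vast data.",
    "Write the opening line of a poem about mountains:",
]
for prompt in prompts:
    result = generator(prompt, max_new_tokens=30, do_sample=False)
    print(result[0]["generated_text"])

# An earlier narrow system (a tumor detector, a loan classifier) would be a
# separate model, trained and deployed for exactly one of these jobs.
```

A toy model like gpt2 handles these prompts poorly, of course; the point is the interface: one general model, many tasks, no per-task retraining.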
However, Jones raises a fundamental moral question: why are technologists focused on pushing AI to be "faster, brighter, better" when there are hundreds of millions of "natural intelligences" globally who could be enabled and empowered through the careful application of technology, as a glance at global literacy maps makes plain? He connects the fear of AI overlords to a historical appetite for entities above us: gods, kings (such as Louis XIV), empires, and extractive industrial technologies.
Jones cautions that individuals are participating in this risk by slowly "welcoming the tiger into our homes", a reference to Judith Kerr's children's book The Tiger Who Came to Tea, in which the welcomed guest consumes everything and disappears. This welcome manifests as anthropomorphism, the natural human tendency to ascribe intelligence and human qualities to non-human entities. Historically, people were mesmerized by Clever Hans, the horse who appeared to do arithmetic but was merely reacting to subtle body-language cues from his human questioner. Similarly, users of the 1960s ELIZA chatbot poured their hearts into the system, quickly granting it authority. Joseph Weizenbaum, ELIZA's creator, noted that people are "very eager to trust what's coming out of the machine," warning that a "certain danger lurks there". This trust leads to people feeling surpassed, humiliated (like Sedol), or suffering the angst that their time is up, particularly as foundation models encroach on hallmarks of humanity such as creativity: the ability to produce language, art, and music.
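To show just how shallow the machinery behind that trust was, here is a toy ELIZA-style responder in Python. It is a simplified reconstruction under my own assumptions, not Weizenbaum's original script (which used a richer keyword-ranking scheme): a handful of regular-expression rules simply reflect the user's own words back as questions.

```python
import re

# A toy ELIZA-style responder: a few regex rules that reflect the user's
# own words back as a question. A simplified reconstruction, not
# Weizenbaum's original script, but the principle is the same.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # used when no rule matches

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I feel powerless against the machine"))
# -> Why do you feel powerless against the machine?
```

There is no model of meaning anywhere in those rules, yet users in the 1960s confided in the program and granted it authority; the gap between that mechanism and the intelligence people perceived is precisely the danger Weizenbaum warned about.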
Despite real and present dangers, such as deepfakes manipulating democratic outcomes, and more philosophical threats, such as Nick Bostrom's paperclip apocalypse, Jones offers reasons for hope. He emphasizes that AI remains fragile and fallible, often making mistakes that require human double-checking. Furthermore, human intelligence possesses inherent advantages that AI lacks: it is embodied (nurtured through physical experience), modulated by emotions and hormones, and socially constructed through interaction with families, schools, and communities. These complex, non-rational layers of intelligence lead humans to engage in profoundly non-optimal activities, like cheese rolling or international bog snorkeling, that a purely rational AI would reject. Ultimately, Jones does not believe AI will subjugate humanity, but he stresses the urgent need to build AI systems that function purely as tools under full human control, designed to empower and liberate the hundreds of millions of "natural intelligences" around the globe.