In a small room in San Diego last week, a man in a black leather jacket explained to me how to save the world from destruction by AI. Max Tegmark, a notable figure in the AI-safety movement, believes that “artificial general intelligence,” or AGI, could precipitate the end of human life. I was in town for NeurIPS, one of the largest AI-research conferences, and Tegmark had invited me, along with five other journalists, to a briefing on an AI-safety index that he would release the next day. No company scored better than a C+.

The threat of technological superintelligence is the stuff of science fiction, yet it has become a topic of serious discussion in the past few years. Despite the lack of a clear definition—even OpenAI CEO Sam Altman has called AGI a “weakly defined term”—the idea that powerful AI poses an inherent threat to humanity has gained acceptance among respected cultural critics.

Granted, generative AI is a powerful technology that has already had a massive impact on our work and culture. But superintelligence has become one of several questionable narratives promoted by the AI industry, along with the ideas that AI learns like a human, that it has “emergent” capabilities, that “reasoning models” are actually reasoning, and that the technology will eventually improve itself.

I traveled to NeurIPS, held at the waterfront fortress that is the San Diego Convention Center, partly to understand how seriously these narratives are taken within the AI industry. Do AGI aspirations guide research and product development? When I asked Tegmark about this, he told me that the major AI companies were sincerely trying to build AGI, but his reasoning was unconvincing. “I know their founders,” he said. “And they’ve said so publicly.”

In parallel with the growth of fear and excitement about AI in the past decade, NeurIPS attendance has exploded, increasing from approximately 3,850 conference-goers in 2015 to 24,500 this year, according to organizers. The conference center’s three main rooms each have the square footage of multiple blimp hangars. Speakers addressed audiences of thousands. “I do feel we’re on a quest, and a quest should be for the holy grail,” Rich Sutton, the legendary computer scientist, proclaimed in a talk about superintelligence.

The conference’s corporate sponsors had booths to promote their accomplishments and impress attendees with their R&D visions. There were companies you’ve heard of, such as Google, Meta, Apple, Amazon, Microsoft, ByteDance, and Tesla, and ones you probably haven’t, such as Runpod, Poolside, and Ollama. One company, Lambda, was advertising itself as the “Superintelligence Cloud.” A few of the big dogs were conspicuously absent from the exhibitor hall, namely OpenAI, Anthropic, and xAI. The consensus among the researchers I spoke with was that these companies’ cachet is already so great that setting up a booth would be pointless.

The conference is a primary battleground in AI’s talent war. Much of the recruiting effort happens outside the conference center itself, at semisecret, invitation-only events in downtown San Diego. These events captured the ever-growing opulence of the industry. In a lounge hosted by the Laude Institute, an AI-development support group, a grad student told me about starting salaries at various AI companies of “a million, a million five,” of which a large portion was equity. The space was designed in the style of a VIP lounge at a music festival. It was, in fact, located at the top of the Hard Rock Hotel.

The place to be, if you could get in, was the party hosted by Cohere, a Canadian company that builds large language models. (Cohere is being sued for copyright and trademark infringement by a group of news publishers, including The Atlantic.) The party was held on the USS Midway, an aircraft carrier used in Operation Desert Storm, which is now docked in the San Diego harbor. The purpose, according to the event’s sign-up page, was “to celebrate AI’s potential to connect our world.”

With the help of a researcher friend, I secured an invite to a mixer hosted by the Mohamed bin Zayed University of Artificial Intelligence, the world’s first AI-focused university, named for the current U.A.E. president. Earlier this year, MBZUAI established the Institute for Foundation Models, a research group in Silicon Valley. The event, held at a steak house, had an open buffet with oysters, king prawns, ceviche, and other treats. Upstairs, Meta was hosting its own mixer. According to rumor, some of the researchers downstairs were Meta employees hoping to be poached by the Institute for Foundation Models, which supposedly offered more enticing compensation packages.

Of the 5,630 papers presented in the poster sessions at NeurIPS, only two mentioned AGI in their titles. An informal survey of 115 researchers at the conference suggested that more than a quarter didn’t even know what AGI stands for. At the same time, the idea of AGI, and its accompanying prestige, seemed at least partly responsible for the buffet. The amenities I encountered certainly weren’t paid for by chatbot profits. OpenAI, for instance, reportedly expects its massive losses to continue until 2030. How much longer can the industry keep the ceviche coming? And what will happen to the economy, which many believe is propped up by the AI industry, when it stops?

In one of the keynote speeches, the sociologist and writer Zeynep Tufekci warned researchers that the idea of superintelligence was preventing them from understanding the technology they were building. The talk, titled “Are We Having the Wrong Nightmares About AI?,” mentioned several dangers posed by AI chatbots, including widespread addiction and the undermining of methods for establishing truth. After Tufekci gave her talk, the first audience member to ask a question appeared annoyed. “Have you been following recent research?” the man asked. “Because that’s the exact problems we’re trying to fix. So we know of these concerns.” Tufekci responded, “I don’t really see these discussions. I keep seeing people discuss mass unemployment versus human extinction.”

It struck me that both might be correct: that many AI developers are thinking about the technology’s most tangible problems while public conversations about AI—including among the most prominent developers themselves—are dominated by imagined ones. Even the conference’s name contained a contradiction: NeurIPS is short for Neural Information Processing Systems, but artificial neural networks were conceived in the 1940s by a logician-and-neurophysiologist duo who wildly underestimated the complexity of biological neurons and overstated their similarity to a digital computer. Regardless, a central feature of AI’s culture is an obsession with the idea that a computer is a mind. Anthropic and OpenAI have published reports with language about chatbots being, respectively, “unfaithful” and “dishonest.” In the AI discourse, science fiction often defeats science.

On the roof of the Hard Rock Hotel, I attended an interview with Yoshua Bengio, one of the three “godfathers” of AI. Bengio, a co-inventor of an algorithm that makes ChatGPT possible, recently started a nonprofit called LawZero to encourage the development of AI that is “safe by design.” He took the nonprofit’s name from the “Zeroth Law” of Isaac Asimov’s robot fiction, which holds that a robot may not harm humanity or, through inaction, allow humanity to come to harm. Bengio was concerned that, in a possible dystopian future, AIs might deceive their creators and that “those who will have very powerful AIs could misuse it for political advantage, in terms of influencing public opinion.”

I looked around to see if anyone else was troubled by the disconnect. Bengio did not mention how fake videos are already affecting public discourse. Nor did he meaningfully address the burgeoning chatbot mental-health crisis or the pillaging of the arts and humanities. The catastrophic harms, in his view, are “three to 10 or 20 years” away. We still have time “to figure it out, technically.”