AI Is the Nuclear Bomb of the 21st Century | Opinion

How much has the human race learned from history?

In 1945, just weeks after the U.S. detonated atomic bombs over Japan, killing at least 100,000 people and changing the scale of war forever, scientists who worked to create these weapons of mass destruction formed an organization to control their spread and stop their use. "Our very survival," they wrote in the first issue of the Bulletin of the Atomic Scientists, "is at stake."

Since then, citizens around the world have marched against these warheads, leaders have signed arms control agreements to limit them, and civic and religious leaders have helped establish norms against their use.

Now, a new marvel of science and technology is fast emerging that some of its creators worry may have the potential to similarly threaten our existence: generative artificial intelligence. Also called human-competitive intelligence, generative AI refers to algorithms that enable computer systems, on their own, to quickly learn from the storehouses of data on the internet and perform seemingly thoughtful tasks previously reserved for humans, such as creating video, writing software, analyzing data and even chatting online.

Business leaders are particularly enthralled by AI's growing capabilities. In their latest quarterly earnings presentations, top execs of S&P 500 companies talked up AI an average of 13 times, twice as often as they did a year ago. C-suite officers at Microsoft, which is investing $10 billion in OpenAI, the lab behind online chatbot ChatGPT, cited the term 50 times, while at Alphabet, whose Google subsidiary now offers a conversational AI search tool, top execs mentioned it 64 times.

The enthusiasm goes well beyond the tech sector. Executives at companies as varied as McDonald's, Caterpillar, Home Depot, Roche and Nike all repeatedly called out AI in their financial presentations for its help with such tasks as automating scheduling, managing supply chains, and developing new and even revolutionary products like personalized medicines.

JPMorgan Chase, America's biggest bank, is particularly bullish. In an interview, CEO Jamie Dimon predicted that generative AI, like "every technology that's ever been adopted," will be an overall good for the economy by boosting productivity. But when pushed, he acknowledged that if things don't turn out that way, "that's where society should step in."

It seems society is trying to step in.

According to a recent Harris Poll, two-thirds of American adults—across all income and education levels—don't trust generative AI and believe it presents a threat to humanity. That same percentage also thinks AI will hurt the economy and employment. Additionally, more than four in five agree that it would be simple for someone to abuse the technology to do harm.

Anxiety increases with age. Yet even members of Generation Z (people under 27 years old), who are the most familiar with AI of any age group and, by a large majority, excited by its development, are the most likely to say that AI will worsen social inequalities.

Society, based on our findings, would welcome intervention now. Asked whether industry regulation is warranted, 53 percent of American adults in our poll say yes, with only 15 percent saying no. (The rest are neutral.)

Society's concerns are mirrored by many of the founders of this new technology. A few weeks ago, the Future of Life Institute, whose mission is to steer technology away from large-scale risks, released an online petition that calls for a universal six-month timeout on training generative AI more advanced than OpenAI's GPT-4. It has now been signed by more than 30,000 people, including some of the world's preeminent technologists (and one of this essay's writers).

The petition succeeded in drawing attention, for a moment at least, to the potential hazards of an AI arms race.

So what, exactly, should society do?

The two most widely supported actions, endorsed by majorities of those surveyed, are preventing a person's image, voice or other identifiable traits from being used by AI without their permission, and requiring AI users to disclose whenever the technology was employed to create publicly available content. And for almost half of respondents, that's only a starting point: They also want the government to establish an official group to police the AI industry and enact laws that restrict access to and development of generative AI tools.

Asked who should be responsible for policing AI, 60 percent of those who support industry supervision answer either an independent oversight body composed of government officials, generative AI experts and other stakeholders, or simply the federal government. Another 11 percent would empower the United Nations or another international body.

We're heartened to see the wheels of government begin turning. The Biden administration is now accepting comments on possible federal regulation of AI systems, including performance audits to hold their users accountable. The National Institute of Standards and Technology, meanwhile, is gathering input on the first version of its risk-management framework for AI development and deployment.

"In order to realize the benefits that might come from advances in AI, it is imperative to mitigate both the current and potential risks AI poses," the White House said in a statement after President Joe Biden and Vice President Kamala Harris summoned the chiefs of Microsoft, Google and OpenAI to remind them of their responsibilities.

On May 1, the so-called "godfather of AI," Geoffrey Hinton, disclosed that he had quit his job at Google so he could speak freely about the dangers of the technology he helped create. "It's hard to see how you can prevent the bad actors from using it to do bad things," he said in an interview. In his achievements and change of heart, Hinton is reminiscent of J. Robert Oppenheimer, who oversaw the creation of the atomic bomb, only to regret it. Oppenheimer went on to help found the Bulletin of the Atomic Scientists to control the weapons' use and spread.

The challenge of generative AI is too important, however, to leave to scientists. As was the case at the dawn of the nuclear age, we all have a role to play in demanding governance of this new technology. Scientists, along with society more generally, have made it clear that now is the time.

Rachel Bronson is CEO of the Bulletin of the Atomic Scientists. Will Johnson is CEO of the Harris Poll.

The views expressed in this article are the writers' own.
