Geoffrey Hinton’s resignation from Google is AI’s canary in a coal mine

It’s no exaggeration to say that Geoffrey Hinton is as close to a god as we’re ever going to have in the Artificial Intelligence space. The British-Canadian computer scientist and cognitive psychologist literally wrote the book on machine learning, after all. Along with AI rock stars Yoshua Bengio and Yann LeCun, he picked up the 2018 Turing Award, the Nobel Prize of computer science, for creating the very building blocks of today’s large language models (LLMs). Hinton is widely credited with laying the foundations of deep learning, which defines how LLMs are trained and how they synthesize and connect data. So his decision to quit his very plush job at Google to ring the alarm bells about AI means we should all sit up and take notice. We can’t afford to ignore him.

GROWING CONSENSUS

Hinton resigned from Google last week, saying AI could pose a more urgent threat to humanity than climate change. He said he wanted to speak out on these risks without worrying about damaging his now-former employer’s reputation.

The timing of Hinton’s resignation – so soon after the release of an open letter from the Future of Life Institute, signed by more than 1,000 top industry researchers, which called for a six-month moratorium on training next-generation large language models – is no coincidence. It signals growing momentum among scientists and researchers to address AI’s dark side before we get too far down the track.

Notably, Hinton did not sign that letter, as he disagreed with the group’s call to stand down on research. But his concern certainly echoes that of the broader group.

“It’s utterly unrealistic,” he told Reuters in an interview. “I’m in the camp that thinks this is an existential risk, and it’s close enough that we ought to be working very hard right now, and putting a lot of resources into figuring out what we can do about it.”

With AI-powered tools already being used by groups to spread misinformation – for example, the Republican Party recently released an attack ad based entirely on AI-generated imagery – the window for slowing down the rampant rush toward an uncertain AI-powered future is narrowing. 

Hinton’s move is a major warning that echoes several recent headline-generating announcements from other AI heavyweights:

  • Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, says the Future of Life Institute letter doesn’t go far enough. He says he agrees with colleagues who say unmitigated artificial general intelligence (AGI) development could kill off the human race.
  • Paul Christiano, who ran the large language model alignment team at OpenAI, echoed that sentiment on the Bankless podcast, saying, “I think maybe there’s something like a 10-20% chance of AI takeover, [with] many [or] most humans dead.”
  • Sam Altman, OpenAI co-founder and CEO, issued his own warning during an episode of Lex Fridman’s podcast: “I’m a little bit afraid, and I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid,” he said. “The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we’re prepared for. And that doesn’t require superintelligence.”
  • Michael Schwarz, Chief Economist at Microsoft, warned attendees at a World Economic Forum panel in Geneva that AI’s dark side is very real. “I am confident AI will be used by bad actors, and yes it will cause real damage. It can do a lot of damage in the hands of spammers with elections and so on,” he said. “Once we see real harm, we have to ask ourselves the simple question: ‘Can we regulate that in a way where the good things that will be prevented by this regulation are less important?’ The principles should be the benefits from the regulation to our society should be greater than the cost to our society.”

HISTORY REPEATS ITSELF

This explosion of interest in – and concern about – AI tools is precisely the kind of generational inflection point seen during previous tech revolutions. The global spread of integrated circuits, personal computers, the commercial internet, smartphones, and wireless access each came with its own concerns and dark sides.

The growth of generative AI is as historically ground-breaking as those earlier technological revolutions, but given the very nature of the technology this time around, the potential for misuse and abuse is significantly more pronounced.

The aforementioned experts are warning we’re on the verge of unleashing a massive wave of technology without understanding how it all works, how it will impact humanity, and how we can mitigate the risks of abuse by malevolent actors.

The potential for misinformation and disinformation, for example, is immense. Yet beyond tepid early moves by governments to virtue-signal their concern, we’re seeing little concerted or meaningful effort expended to rein it in.

For example, as part of its plan to promote the development of ethical AI, the White House held a high-profile AI summit earlier this month. But after inviting the CEOs of major players like Alphabet, Microsoft, OpenAI, and Anthropic, it neglected to bring in ethics specialists as well. Not that it could have invited the one from Alphabet, as she had already been fired – or quit, according to her former bosses.

Major announcements with dollar figures attached are a nice start, but they must be followed up with sustained investment and leadership, and they must involve a broad range of stakeholders with the expertise to influence policy directions. PR-friendly meetings with only the most senior leaders in the room just won’t cut it, and it’s careless to assume they would.

THE BOTTOM LINE

The firestorm of controversy surrounding AI’s potential impact on humanity is only going to intensify, and open letters, high-profile resignations, and dire warnings about the future of humanity all underscore just how critical it is that we get it right.

But at the end of the day, AI is, simply put, software. Incredibly sophisticated, adaptable, and potentially dangerous software, of course, but still just software. And the same principles and best practices that have always applied to software development – and the entire software lifecycle – apply to AI as well.

We wrote about this recently, and we’ll continue to write about it to help our community put the current Zeitgeist into perspective. The sky isn’t falling, and humanity isn’t about to meet its AI-driven end. But now is the time to discuss how AI works, how it can benefit us, where the most pressing risks lie, and how we as individuals, organizations, and society can best manage it all.

Developing great software has always been based on simple discussions about solving problems, and that’s something we’re intimately familiar with. Give us a call if you’d like to have that discussion with us.