Geoffrey Hinton’s resignation from Google is AI’s canary in a coal mine

It’s no exaggeration to say that Geoffrey Hinton is as close to a god as we’re ever going to have in the artificial intelligence space.

The British-Canadian computer scientist and cognitive psychologist literally wrote the book on machine learning, after all. Along with AI rock stars Yoshua Bengio and Yann LeCun, he picked up the 2018 Turing Award – often described as the Nobel Prize of computing – for creating the very building blocks of today’s large language models (LLMs).

Hinton is widely credited with creating the foundations of deep learning, which defines how LLMs are trained and how they synthesize and connect data. So his decision to quit a very plush job at Google in order to ring the alarm bells about AI means we should all sit up and take notice.

We can’t afford to ignore him. 

A growing consensus

Hinton resigned from Google last week, saying AI could pose a more urgent threat to humanity than climate change. He said he wanted to speak out on these risks without worrying about damaging his now-former employer’s reputation.

The timing of Hinton’s resignation – so soon after the release of an open letter from the Future of Life Institute, signed by over 1,000 top industry researchers, which called for a six-month moratorium on training next-generation large language models – is no coincidence. It signals growing momentum among scientists and researchers to address AI’s dark side before we get too far down the track.

Notably, Hinton did not sign that letter, as he disagreed with the group’s call to stand down on research. But his concerns certainly echo those of the broader group.

“It’s utterly unrealistic,” he told Reuters in an interview. “I'm in the camp that thinks this is an existential risk, and it’s close enough that we ought to be working very hard right now, and putting a lot of resources into figuring out what we can do about it.”

With AI-powered tools already being used to spread misinformation – for example, the Republican Party recently released an attack ad based entirely on AI-generated imagery – the window for slowing the headlong rush toward an uncertain AI-powered future is narrowing.

Hinton's move is a major warning that echoes several recent headline-generating announcements from other AI heavyweights:

  • Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, says the Future of Life Institute letter doesn’t go far enough, and agrees with colleagues who warn that unmitigated artificial general intelligence (AGI) development could kill off the human race.
  • Paul Christiano, who ran the large language model alignment team at OpenAI, echoed that sentiment on the Bankless podcast, saying, “I think maybe there's something like a 10-20% chance of AI takeover, [with] many [or] most humans dead.”
  • Sam Altman, OpenAI co-founder and CEO, issued his own warning during an episode of Lex Fridman’s podcast: “I'm a little bit afraid, and I think it'd be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid,” he said. “The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we're prepared for. And that doesn't require superintelligence."
  • Michael Schwarz, Chief Economist at Microsoft, warned attendees at a World Economic Forum panel in Geneva that AI’s dark side is very real. “I am confident AI will be used by bad actors, and yes it will cause real damage. It can do a lot of damage in the hands of spammers with elections and so on,” he said. “Once we see real harm, we have to ask ourselves the simple question: ‘Can we regulate that in a way where the good things that will be prevented by this regulation are less important?’ The principle should be that the benefits from the regulation to our society are greater than the cost to our society.”

History repeats itself

This explosion in interest in – and concern around – AI tools is precisely the kind of generational inflection point seen during previous tech revolutions. The global spread of integrated circuits, personal computers, the commercial internet, smartphones, and wireless access each came with its own concerns and dark sides.

The growth of generative AI is as historically ground-breaking as those earlier technological revolutions, but given the very nature of the technology this time around, the potential for misuse and abuse is significantly more pronounced.

The aforementioned experts are warning we're on the verge of unleashing a massive wave of technology without understanding how it all works, how it will impact humanity, and how we can mitigate the risks of abuse by malevolent actors.

The potential for misinformation and disinformation, for example, is immense. Yet beyond tepid early moves by governments to virtue-signal their concern, we're seeing little concerted or meaningful effort expended to rein it in.

For example, as part of its plan to promote the development of ethical AI, the White House held a high-profile AI summit earlier this month. But after inviting the CEOs of major players like Alphabet, Microsoft, OpenAI, and Anthropic, it neglected to bring in any ethics specialists. Not that it could have invited the one from Alphabet, as she had already been fired – or quit, according to her former bosses.

Major announcements with dollar figures attached are a nice start, but they must be followed up with sustained investment and leadership, and they must involve a broad range of stakeholders with the expertise to influence policy directions. PR-friendly meetings with only the most senior leaders in the room just won’t cut it, and it’s careless to assume they would.

The bottom line

The firestorm of controversy surrounding AI’s potential impact on humanity is only going to intensify. Open letters, high-profile resignations, and dire warnings all underscore just how critical it is that we get this right.

But at the end of the day, AI is, simply put, software. Incredibly sophisticated, adaptable, and potentially dangerous software, of course, but still just software. And the same principles and best practices that have always applied to software development – and the entire software lifecycle – apply to AI as well.

We wrote about it recently, and we’ll continue to write about it to ensure our community is able to put the current Zeitgeist into perspective. The sky isn’t falling, and humanity isn’t about to meet its AI-driven end. But the time to discuss how AI works, how it can benefit us, where the most pressing risks lie, and how we as individuals, organizations, and society can best manage it all, is now.

Developing great software has always been based on simple discussions about solving problems, and that’s something we’re intimately familiar with. Give us a call if you’d like to have that discussion with us.