Tech leaders pen letter demanding AI pause: are we moving too quickly?
WHAT THEY’RE ASKING FOR
The letter, Pause Giant AI Experiments: An Open Letter, was published by the Future of Life Institute, a non-profit organization partially backed by Elon Musk. It says, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
The signees are asking all AI research labs to halt, for at least 6 months, the training of any AI tool that is considered more powerful than the current state-of-the-art, GPT-4. They suggest using the pause to build and implement common safety protocols to minimize the potential for abuse and ensure outside experts can properly oversee ongoing research.
Notably, the letter does not advocate a full-on pause in AI-related research. It requests only a pullback from training activities that could lead to deployment of even more advanced models, the inner workings of which researchers are still struggling to understand.
TWO SIDES TO EVERY ARGUMENT
Predictably, the letter has touched off a raging debate over the ethics of AI, and our individual and collective responsibility to ensure we don’t end up living in a dystopian Hollywood nightmare.
On the one hand, it’s difficult to ignore a letter signed by names like Apple co-founder Steve Wozniak, OpenAI co-founder Elon Musk, and Yoshua Bengio, founder and scientific director of Mila, a Montreal-based AI research institute.
Considering who the signees are – AI researchers and pioneers, computer scientists, academics, entrepreneurs, authors, and government officials – it would be careless to dismiss the letter as an ill-considered rant. It speaks to the litany of very real risks of allowing the technology to evolve without sufficient protections or guardrails in place.
On the other hand, it can seem somewhat disingenuous for people like Mr. Musk to spearhead a rather sudden move to freeze next-generation platform training. Mr. Musk’s Tesla vehicles, after all, use AI-based tools extensively, as does the SpaceX Crew Dragon. Many of the signees have profited handsomely from AI development – all while the planet has struggled to come up with a plan that balances development with the future safety of the human race.
The cynical among us would be forgiven for concluding that all of the signees have a significant stake in the future of AI development, and aren’t above angling for a pause to maintain their leadership and delay the entry of competing actors.
The sheer logistics of halting global development are also staggering. Beyond having no historical precedent, there is simply no global entity with the power to ensure compliance.
And even if such a planetary AI police force existed, it would lack the ability to rein in rogue nations like Russia, China, North Korea, and Iran, all of which have invested heavily in sponsoring digital insecurity. Not all nations, after all, would agree to be part of this proposed planetary Kumbaya moment.
Indeed, if we suspend disbelief for a moment and envision this pause actually taking place, it would only serve to empower those four nations at everyone else’s expense.
LISTEN TO BILL
The most cogent argument against this letter comes from none other than Microsoft co-founder Bill Gates. While he is no longer involved in Microsoft’s day-to-day operations, there is some irony in the fact that the company has emerged as an early leader in the rush to integrate AI-based tools into its productivity products.
Microsoft has also invested heavily – some reports say more than $10 billion – in ChatGPT maker OpenAI and is providing much of the muscle to monetize the just-released GPT-4 large language model. The Redmond, Washington-based organization’s efforts are ramping up just as Google’s owner, Alphabet, pivots toward AI in an all-hands-on-deck effort – known internally as a “code red” – to inject AI technologies into its core search-based products.
Speaking with Reuters after the letter was published, Gates said a global pause simply couldn’t be pulled off, and even if it could, it wouldn’t address AI’s fundamental problems. He instead suggested focusing on the beneficial use cases for AI, while keeping an eye on what he calls “the tricky areas.”
Even before the Future of Life Institute letter was published, Gates, who has been a vocal supporter of AI, published a letter of his own, The Age of AI has Begun, in which he called for investments in AI to help address global inequity.
Gates’s comments notwithstanding, the dollars at stake underscore why any form of pause is highly unrealistic. The signees likely appreciated this even before they lent their names to the project. Their words are a profound warning to us – far more than a simple request – and the very fact that they’ve asked for a 6-month stop reinforces just how serious they perceive the threat to be.
LEARN FROM HISTORY
We’ve seen what can happen when technology races ahead with no built-in controls or legislation. Social media provides an ideal and recent case study. For the better part of the past two decades, the rampant spread of social platforms has illustrated the costs society pays when advances in technology outpace any relevant protections.
Facebook’s Cambridge Analytica scandal, for example, rather starkly underscores what happens when profit-seeking organizations move too fast and break too many things. In this case, privacy concerns were swept under the rug within a technology and process architecture that allowed abuse to occur completely by design.
Instagram’s widely reported impact on teen mental health – and the parent company’s decision to dial up the algorithm as the research emerged – also makes the case for greater scrutiny of emerging technologies.
So temporarily slowing down AI’s rush to the future so we can put proper guardrails in place isn’t a completely bad idea. And as infeasible as it will be to implement and maintain such a global stand-down, at the very least this letter should prompt a long-overdue discussion. (We wrote about this issue recently here: TikTok bans are a long overdue wakeup call for mobile security).
THE BOTTOM LINE
Business leaders are easily forgiven for wondering how to make sense of the current firestorm surrounding AI’s rapid evolution. The release of ChatGPT to the public in November 2022 has unleashed an unprecedented wave of interest – and investment. As was the case with earlier generational shifts in technology, like the PC, the web, social media, and smartphones/apps, it can be difficult for organizations to decide when and how they’ll start working with these technologies.
But as the open letters and the countering claims fly back and forth, organizations of all types and sizes, in all sectors, need to, at the very least, start having their own discussions about their projected AI journeys. Even if there are no definitive answers yet, they can start asking questions about how their businesses might be impacted and how others in their competitive sector are beginning to assess this new landscape.
At the same time, they should be speaking with trusted partners to brainstorm strategies and temperature-check their plans. Ultimately, AI is software. Sophisticated software, of course, but software all the same. And since software is what we do, we’re here when you’re ready to brainstorm your own AI journey. No letter required.