
As business concerns over AI ethics continue to grow, new best practices emerge

No one doubts that generative AI is the next big thing in tech. But all that promise comes with a dark side that’s prompting organizations of all types to ponder the ethical implications of using it. To some, it’s an ominous minefield. But it doesn’t have to be.

EARLY AND BUGGY

Since ChatGPT was first made available for public consumption in late 2022, it has served as the poster child for the generative AI revolution. It has also become something of a proving ground for millions of AI newbies still trying to figure out how all of this applies to them.

But like any new technology, it comes with its own dark side. AI chatbots tend to “hallucinate”: the longer we interact with them, the more readily they can present fabricated information as fact. These imperfections stem largely from training methods that involve ingesting huge amounts of unvetted data from the open internet. The training algorithms themselves, still at an understandably early stage of development, also tend to generate responses that aren’t always as accurate as they ought to be.

These structural weaknesses have prompted major industry leaders to call for a six-month pause in training next-generation large language models to allow time for global discussion. They have also prompted Geoffrey Hinton, often called the “Godfather of AI”, to quit his job at Google so that he can speak freely about the risks.

AI’s issues are also raising questions about whether it will ever evolve into technology we can trust to be unbiased, fair, transparent – and free of misinformation and disinformation. These questions highlight the challenges both developers and end-users face in adapting to, or otherwise working around, the key risks that come with widespread use. They include the following:

1 – Inherent bias

Most currently available generative AI platforms use similar large-scale training methods: they ingest huge amounts of data culled from the open internet. Because that data is often imperfect in one way or another, the AI systems trained on it tend to absorb and reproduce its biases. For example, a dataset that reflects historically imbalanced perspectives on race or ethnicity carries a greater risk of amplifying discrimination in its responses. This could unknowingly lead to unfair outcomes when AI-based systems are used in areas like hiring, justice, or banking.

Organizations considering AI-based decision-making tools must be keenly aware of this potential impact and put preemptive safeguards in place to minimize the potential for ethical missteps.
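What such a safeguard might look like varies by context, but one common preemptive check is auditing whether a model’s favorable-outcome rate differs sharply across demographic groups. The Python sketch below is purely illustrative: it assumes a hypothetical screening model whose past decisions and group labels have already been collected, and it applies the widely cited “four-fifths” disparate-impact rule of thumb rather than any specific regulatory standard.

```python
from collections import defaultdict

def disparate_impact_check(decisions, groups, threshold=0.8):
    """Flag groups whose favorable-outcome rate falls below a set
    fraction (the common "four-fifths" rule) of the best-off group's."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for outcome, group in zip(decisions, groups):
        totals[group] += 1
        favorable[group] += outcome  # outcome: 1 = favorable, 0 = unfavorable

    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Any group falling below the threshold warrants investigation
    # before (and after) the model is allowed to make real decisions.
    return {g: round(r / best, 2) for g, r in rates.items() if r / best < threshold}

# Hypothetical audit of a screening model's historical decisions
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(disparate_impact_check(decisions, groups))  # {'B': 0.33}
```

An audit like this is most useful when it runs continuously against live decisions, not just once before launch.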

2 – Compromised data privacy

Because AI-generated outputs are based largely on the ingestion of diverse datasets, they can result in the unintended sharing of otherwise-private information. Similarly, data that end-users share while using these tools can subsequently be added to the training databases – putting it at ongoing risk of being exposed to other users in the future. The risk is real enough that Alphabet recently warned its own employees against entering personal data into the Google Bard chatbot.

This presents serious ethical challenges to organizations that handle stakeholders’ – or even their own – private information. It reinforces the need for appropriate security measures and data protection protocols that mitigate this new kind of data leakage. These organizations will also need to maximize transparency around data collection, and to use that openness to build trust.
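One concrete protocol along these lines is to scrub obvious personal identifiers from prompts before they ever leave the organization. The sketch below is a minimal, assumption-laden illustration: real deployments rely on dedicated PII-detection tooling, and these regex patterns will miss names, addresses, and plenty more.

```python
import re

# Illustrative patterns only; they will not catch names, addresses, etc.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(prompt: str) -> str:
    """Replace recognizable personal identifiers with typed placeholders
    before the text is sent to a third-party chatbot."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

text = "Email jane.doe@example.com or call 555-867-5309 about claim #1182."
print(scrub(text))
# Email [REDACTED EMAIL] or call [REDACTED PHONE] about claim #1182.
```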

3 – Limited transparency

A number of AI experts have recently admitted they don’t always fully understand what’s happening inside a given model. Known as the AI black box problem, this presents serious ethical challenges around transparency and explainability. Some domains, such as health care and autonomous vehicles, are at particularly high risk given their need for a repeatable – and fully understood – decision-making process.

To stay on the right side of the ethical line, organizations must implement AI that can clearly explain how and why certain decisions are made. This ability will maximize trust in the process and lead to safer implementation in especially sensitive use cases.
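Full explainability for large neural models remains an open research problem, but the underlying idea is easy to make concrete with simpler scoring models, where every decision decomposes exactly into per-feature contributions. The weights, features, and threshold in this sketch are hypothetical, chosen only to show the reporting pattern a reviewer would see.

```python
# Toy interpretable scorer: with a linear model, each feature's
# contribution to an individual decision can be stated exactly.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.3}  # hypothetical
BIAS, THRESHOLD = 0.1, 0.5

def explain_decision(features: dict) -> None:
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    verdict = "approve" if score >= THRESHOLD else "decline"
    print(f"score = {score:+.2f} -> {verdict}")
    # Largest-impact factors first, so a reviewer sees the "why" at a glance.
    for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {c:+.2f}")

explain_decision({"income": 0.8, "debt_ratio": 0.6, "years_employed": 0.5})
# score = +0.15 -> decline
#   debt_ratio: -0.42
#   income: +0.32
#   years_employed: +0.15
```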

4 – Threats to employment

Like so many generationally significant technologies before it, AI will impact large numbers of jobs. It will create new jobs as well, but no transition of this scope happens without some degree of uncertainty across society. Beyond the headline-grabbing fears of job displacement, AI also holds the potential to exacerbate socioeconomic disparities on a broader scale.

As organizations move forward with AI-based initiatives, they owe it to their stakeholders to invest in retraining and upskilling programs that soften the impact of any AI-induced job loss. Closer partnerships between governments, businesses, and academia will also be necessary to ease the transition to an AI-powered economy as vast numbers of workers adapt to a very different employment landscape.

5 – Frightening autonomy

The benefits of AI-powered autonomous technologies – including massive gains in efficiency, cost effectiveness, and consistency – are easy to understand. But they don’t come without compromise, particularly around human oversight of decision making. Organizations must strive for a balance between deploying AI-based tools efficiently and ensuring humans remain an integral part of the process.

Before implementing such systems, organizations must draft and enforce ethical guidelines for stakeholders. Governments also need to introduce regulatory frameworks that further define the limits of AI autonomy – and ultimately ensure decision-making power is never left exclusively to the machines.
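In software terms, keeping humans in the loop often reduces to a routing rule: routine, high-confidence outputs proceed automatically, while anything high-stakes or uncertain is queued for human sign-off. The sketch below is a generic illustration; the confidence threshold and decision fields are assumptions, not any particular product’s API.

```python
from dataclasses import dataclass

@dataclass
class AIDecision:
    action: str
    confidence: float  # model-reported confidence in [0, 1]
    high_stakes: bool  # e.g. affects someone's health, money, or liberty

CONFIDENCE_FLOOR = 0.95  # assumed policy threshold, set and owned by humans

def route(decision: AIDecision) -> str:
    """Never let the machine act alone on high-stakes or uncertain calls."""
    if decision.high_stakes or decision.confidence < CONFIDENCE_FLOOR:
        return f"HELD for human sign-off: {decision.action}"
    return f"executed automatically: {decision.action}"

print(route(AIDecision("reorder stock for part #42", 0.99, high_stakes=False)))
print(route(AIDecision("deny insurance claim", 0.99, high_stakes=True)))
# executed automatically: reorder stock for part #42
# HELD for human sign-off: deny insurance claim
```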

THE BOTTOM LINE

All technologies carry their own ethical baggage, and AI is no exception. But the rapidly escalating adoption curve of this historically significant capability compels organizations to map the ethical landscape sooner than previous generational shifts required.

Fortunately, no organization is an island. The entire tech industry – with the support of governments and key stakeholders – is working quickly to figure out how to implement appropriate ethical frameworks without hindering forward progress.

We can certainly extract value from AI without selling our collective souls. It’ll just take a little work to get there.

Want to learn more? Reach out to us anytime.