
ChatGPT’s arrival reinforces the need to embrace – not ban – AI chatbots

Do artificial intelligence-driven chatbots like OpenAI’s ChatGPT represent a mortal threat to the very future of knowledge workers? Or are these virally popular examples of generative AI simply the latest specimens of disruptive technology, opening up new opportunities to move humans further up the value chain? The answer, as with most things in tech, is more complex than it might initially seem. But it’s becoming increasingly clear that no one can afford to ignore the debate. This is no time to opt out of AI: it’s coming for us whether we like it or not. And while its impact on businesses, careers, and society at large promises to be fundamental, it’s just as likely not to be as apocalyptic as the naysayers fear.


Artificial Intelligence (AI) has been one of those on-the-horizon technologies for what seems like forever. But public interest and engagement have been somewhat limited, largely because the technology has been accessible mostly to developers, researchers, and laboratories.

OpenAI’s decision to release ChatGPT to the public in November 2022 seems to have changed the game markedly. By opening up its consumer-friendly chatbot to everyday users, the organization, co-founded by Elon Musk, has sparked a global explosion of interest.

At the same time, all this newfound attention is fueling a similarly global-scale debate over the role knowledge workers will play in the future AI-driven economy – or whether such workers will have roles at all.

To step back, chatbots represent something of a generational leap beyond the search engines we’ve relied on for decades. Unlike a search engine, where users punch in some search terms and receive a list of links in return, a chatbot is far more interactive.

Ask it fully formed questions, and it uses AI to formulate similarly fully formed responses. It’s like a conversation – often eerily human-like – and it can be used to generate a broad range of content, from business letters, emails, and academic papers to song lyrics and short stories. It can even write software code based on a prompt or figure out how to pick the lock on the front door.
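For developers, this prompt-and-response pattern is also exposed programmatically. The sketch below is a minimal illustration, not a production integration: it sends a single prompt to OpenAI’s chat completions endpoint using only Python’s standard library. The model name and the `OPENAI_API_KEY` environment variable are assumptions for the example.

```python
import json
import os
import urllib.request

# OpenAI's documented chat completions endpoint.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Package a user prompt into the JSON payload the chat API expects."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def ask(prompt: str, api_key: str) -> str:
    """Send the prompt and return the chatbot's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires a real OPENAI_API_KEY in the environment to actually run.
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        print(ask("Draft a two-sentence thank-you email to a client.", key))
```

The same conversational request can carry follow-up messages, which is what makes the interaction feel like a dialogue rather than a one-shot search query.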

So, it can do lots of things – but can it do them particularly well? A Wharton professor had ChatGPT take a final exam for an operations management MBA course, and the results may surprise you. While the chatbot did, in fact, pass the exam, the professor scored it between a B and a B-. Not exactly dean’s list material.

But here’s the thing: because ChatGPT is an AI-based tool, it will learn and get better over time, particularly as OpenAI introduces new underlying language models and as the platform continues to scour the open internet and train itself with the data it finds.

Expect those B grades to inevitably improve. And as they do, expect the debates – over ethical use, impacts on employment, and educational best practice – to intensify.


As you can imagine, the public release of ChatGPT and the subsequent explosion in global interest is prompting some radical responses from industry stalwarts. Google likely has the most to lose, as chatbots represent an existential threat to the very future of search. The company has declared a “code red” and is actively reorganizing itself to develop AI-based technologies and hopefully not be outflanked in the process. It hurriedly released its Bard platform, but a public demo went awry when the Googlified bot mistakenly said that the James Webb Space Telescope had taken the first-ever photos of a planet outside of our solar system. Apprehensive investors panic-sold, and parent company Alphabet saw $100 billion in market value evaporate.

Microsoft is having a somewhat better time after greenlighting an additional $10 billion investment in OpenAI, and has since begun integrating the technology into its core products. Millions have signed up to access the remodeled Bing search engine – updated with an OpenAI-based chatbot originally code-named Sydney – and Microsoft has similar integration plans for Office, Teams, and Edge.

Google has every reason to be scared.


To no one’s surprise, educators are worried about the potential for cheating – and some, including public school administrations in New York City and Seattle, have banned ChatGPT outright from school networks and devices. It’s a bit of a different story in higher education, where colleges and universities worry about compromising academic integrity, and instead of banning the tools outright are working on ways to change the way they teach and evaluate students’ work.

The academic experience holds significant lessons for businesses: avoiding the technology outright is no longer a viable choice. Indeed, it never was. And while some roles within any given business may be disrupted out of existence, it’s significantly more likely that existing roles will be freed from administrative drudgework, and that new roles will be developed that allow workers to boost their productivity, increase their efficiency, and expand their overall business value.

Chatbots make great thought-starters and brainstorming aids. When faced with the proverbial blank screen, a creative knowledge worker can use a chatbot to drive the ideation process, get inspired, and refine ideas and concepts. It can help expose and explore new facets and angles and highlight new areas of research. We’ve been using search engines for decades to spark and drive the creative process, so chatbots are really little more than another, more expansive tool to add to our professional toolkit.


Amid the hype around ChatGPT specifically and AI in general, we seem to be overlooking the technology’s fundamental weakness: in their current state, these tools are susceptible to churning out factually wrong, and sometimes dangerous, content. We forget that they are only as good as the data used to train them. And on that basis, it’s clear the engines are ingesting a lot of questionable data, and they aren’t sophisticated enough (yet) to tell legitimate content from outright misinformation.

The danger lies in what we ultimately do with what a chatbot spits out. If we blindly use it as is, we’re making ourselves unnecessarily vulnerable. If we take the output, then apply our own research and refinement to it, we’d likely catch and resolve any erroneous information. Knowledge workers have always brought their own sense of due diligence to their roles, and this does not change in the age of AI.

If anything, the advent of generative AI, like any revolutionary technology before it, allows writers, software developers, and other creatives to ditch the admin-heavy aspects of their roles and find new ways to create and distribute value. 


ChatGPT and other AI-based tools like it will rock our world, just as other groundbreaking new technologies have done throughout history. We owe it to ourselves to dive right in and figure out what it can and cannot do, so that we can extract maximum value from it – while at the same time, protecting ourselves from the inevitable risks. Sticking our heads in the sand is not the answer. 

The developers at STEP Software are already evaluating ChatGPT and similar generative AI tools to better understand how we can help further refine the software development process. If you’re still trying to figure out how generative AI can help your business, click here. We promise you we’ll continue to follow this generationally significant and disruptive technology, and we similarly promise you’ll always end up speaking with a real human being, if you want to know more.

Note: This article is part of our ongoing Creative Disruption series. In these articles, we explore what happens when technologies once seen as science-fiction become everyday reality. Check out the initial article – Creative Disruption – why we should embrace change, not fear it – for more, and click here if you’d like to suggest a future topic.