Worried ChatGPT will steal your job? You’ll have to out-think artificial intelligence
The artificial intelligence chatbot, ChatGPT, has heralded a new era of fear for our careers. But Nick Chatrath argues our lives can be improved by AI, rather than our livelihoods stolen.
Unless you’ve spent the last three months living under a large and well-secluded rock, you’re probably familiar with the work of ChatGPT, the new AI chatbot. People have used it to write song lyrics, school papers, and existential musings on the meaning of life (to name just a few examples). With each new use case comes a flurry of questions: Does ChatGPT really come across as human? If it can do Task X, could it also do Task Y? And, perhaps most importantly: How frightened should I be of this thing taking my job?
Such questions can be amusing and addictive, much like using ChatGPT itself. But it’s easy to ignore the forest when you’re busy analysing each leaf on the tree.
How we use AI tools to make our lives better is a question that leaders—of organisations, communities, and society at large—urgently need to answer. Make no mistake, AI is developing extremely quickly. Last October, almost nobody had heard of ChatGPT. Four months later, it’s poised to change the way students take exams all around the world. Before that, the AI image generator DALL-E went from Twitter curiosity to existential threat to artists in a similarly short span. And it wasn’t long before people started noticing the built-in biases that made it much harder to call the tool an unequivocal force for good.
That’s fair, because like most tools, AI-enabled ones have no inherent moral quality. They can be used to build (like a hammer in the hand of an altruistic person) or to destroy (like a hammer in the hand of a vindictive one). This is why it’s vital for leaders to update their thinking about AI tools like ChatGPT. If their mindsets and values are oriented toward collective flourishing, then it’s easy to envision AI being used for purposes that benefit humanity as a whole. In more selfish hands, it’s equally easy to imagine the opposite.
Of course, the mere intention to use AI for positive purposes isn’t enough. History is full of examples of people who wanted to do good and ended up causing great harm instead. Leaders and those around them would benefit from making a concerted effort to think more clearly, expansively, and deeply about how they’re adopting AI. It’s not a one-off effort, either. As AI becomes more ubiquitous and powerful, the only way to ensure we use these tools well is to update our own operating systems—to give our brains a new set of tools as well.
So how do we do that? One of the answers is simple enough: thinking independently. It can be a challenging one to follow, because many people assume they’re doing it already. Sure, we might paraphrase or synthesise ideas we pick up from outside sources—like cable news, social media, or colleagues and acquaintances—but those are still our ideas because we thought them. Right?
In some cases, maybe. But more often than not, we don’t give ourselves the time and space to truly think independently. Most of us spend our lives in environments that are constantly interrupting our thought patterns. As a result, our minds are oriented toward reactive thinking instead of generative thinking. We exert so much mental energy planning our responses to various stimuli (a news article, a coworker’s suggestion, an advertisement that caught our eye) that we have little left over for developing our own ideas about the world. Over time, our brains are trained to become consumers rather than creators.
When it comes to ChatGPT, then, we need to ask: what do I want my relationship with AI to be? And we need to sit with that question, rather than placating our urge to jump to the Twitter-friendly conclusion.
We can start by nurturing the conditions to think for ourselves: conditions like not being constantly interrupted and eliminating rush (social media platforms, as you might’ve guessed, are one of many culprits here). We can also make a conscious effort to move our thought processes away from entry-level thinking (where we give partial attention to another’s view until we hear the keyword we want to challenge), toward decent listening (where we attend as long as we feel is necessary in order to reply in a compelling way), and ultimately to generative listening (where we cultivate fascination with what the other person, or even ChatGPT, will say next).
Generative attention is a game-changer, especially in an AI world. Have you ever been fascinated by another person? Then you have the capability for generative attention. The more you nurture it, the more independently you will think, and the more AI will complement how you work, rather than subsume it.