The power and pitfalls of generative AI a year after ChatGPT

Are you an AI optimist or a pessimist? Do you think AI will help humanity solve some of our biggest and most complex challenges, or are you worried about the existential risk of superintelligent AI in the wrong hands? Much of the commentary on the topic, even from well-respected media outlets, falls into this binary choice. Not only is it possible to be both optimistic and wary of AI and its prospects, now and in the future – it’s necessary, says Claire Neilson, Head of Content at GinjaNinja.

It feels hard to believe now, but this time last year ChatGPT wasn’t yet part of the global lexicon. Within months of its release, ChatGPT became the fastest-growing consumer application in history. Today, there are thousands of think pieces and no shortage of confident opinions on the topic of generative AI – many of which veer unnervingly close to expert territory without the background, knowledge, or technical credentials to truly qualify as such.

As a writer, PR professional, and human being, I’ve had my own feelings about AI, and ChatGPT in particular. Bear in mind that I’m a millennial, and watching Terminator 2 might’ve been something of a seminal moment in my predisposition towards AI. My first encounter with ChatGPT is best described as a heady mix of awe, disbelief, and curiosity, with a sliver of existential anxiety – watching a machine create a piece of writing in mere seconds, it felt as though science fiction had become science fact before my eyes. But what was clear to me then, and what’s obvious now, is that there’s no going back. Since then, I’ve become less enthusiastic about its ‘writing skills’ than many others are – even with decent prompting, it often returns basic, high-school-level text. Passable? Maybe for some. Painfully generic? Also yes. That said, generative AI has its place. It’s a useful tool for suggesting ideas, summarising content, or reworking a tricky paragraph, but it’s still just a tool.

Aside from some of the obvious impacts of a deluge of AI-generated content, such as passing AI-generated work off as one’s own (what used to be universally understood as plagiarism), there are more complicated and far-reaching effects at play: how we define and value intellectual property, privacy, identity, and creative ownership, to say nothing of the perils of disinformation at scale.

John Grisham, Jodi Picoult and George R.R. Martin are among 17 authors suing OpenAI for what they’re calling ‘systematic theft on a mass scale’, the latest in a wave of legal action by writers concerned that artificial intelligence programs are using their copyrighted works without permission. Before them, another group of authors, including Pulitzer Prize winner Michael Chabon, brought a case against OpenAI before a federal court in San Francisco, accusing the Microsoft-backed company of misusing their writing to train ChatGPT. For their part, OpenAI and other companies argue that AI training makes fair use of copyrighted material scraped from the internet.

Mo Gawdat, a serious AI expert and former chief business officer of Google X, argues in his book Scary Smart that technology is putting our humanity at risk to an unprecedented degree, and that many of those risks are already here. Just some of these risks include a massive redistribution of power in a world already beset by social and economic inequality, threats to the very concept of truth, and a significant wave of job losses – the second- and third-order effects of which could be far more profound than we might first imagine.

One year on from generative AI becoming mainstream, there’s so much we still don’t know and some things we can’t yet know. But here’s what I believe is true: AI is only going to benefit us if it enhances our thinking, creativity, and capacity for connection, rather than becoming a cheap proxy for it. In The Year of Magical Thinking, Joan Didion gives a stirring account of the inner workings of her life and mind in emotional extremis after the sudden death of her husband and the abrupt end to a close, symbiotic partnership spanning more than 40 years. There is a cadence, an economy with words, where the reader is moved by what is said and, perhaps more importantly, by what is left unsaid. Of course, it’s entirely possible to train a large language model to compose a piece of text in the (until now) inimitable style of Didion. It might even produce something vaguely affecting, but it would always be a cheap imitation of something deeper, more intricate, messier and more beautiful than a machine can grasp – the expansive range of emotions that shape the universe of the human experience.

As Ian Bogost says in his essay for The Atlantic, ChatGPT Is Dumber Than You Think: “Perhaps ChatGPT and the technologies that underlie it are less about persuasive writing and more about superb bullshitting. A bullshitter plays with the truth for bad reasons – to get away with something.” The best writing, whether in literature or PR, doesn’t try to get away with anything. It doesn’t need to because it strives for something greater for its own sake – to communicate, educate, delight, and enlighten in the most truthful and compelling way possible. Not only are those things worth striving for, but they’re worth protecting because they matter for the flourishing of individuals, businesses, societies, the planet, and humanity.
