ChatGPT and Writing: A Deadly Combination!
Someone of Indian origin emailed me in 2014 with some queries from Saudi Arabia, where he was teaching at a university. I was then a PhD candidate at a university in Christchurch, New Zealand. He was interested in a PhD programme at the same university and needed further information. I sent him a detailed response, addressing all his queries to the best of my knowledge. Several months later, I bumped into someone in the corridor of our college compound. He said that he was the one who had emailed me from Saudi Arabia. He thanked me for the detailed response and said, "I've not met a non-native speaker yet who writes so well in English." Flattered, I said smugly, "I'm a composition professional." He burst my bubble immediately: "Why do you need to learn to write? Machines will do that soon." I laughed it off. Writing is inherently a human attribute. Machines can't write.
Neither can ChatGPT (Generative Pre-trained Transformer). While grammar is a non-negotiable requirement, a piece of writing is not a syntactic structure. It is, instead, a semantic sculpture. There's always something elegantly unusual about an authentic piece of writing. Language and grammar are but two pieces of the big puzzle that writing is. Writing is the generation of thoughts – filtered through vision, imagination, and intuition – that touch and transform. Writing that endures doesn't replicate any model of thought and language. It renews language to create a new model of thought. Since the last week of November this past year, when OpenAI launched ChatGPT, writing has seemed redefined. Flaccid prose that is clinically precise is being passed off as writing. And the ability to write is attributed to machines without cognition and sensation. That's an egregious error about what it takes to accomplish a piece of writing. There's nothing magical or mysterious about writing.
ChatGPT seems both magical and mysterious. Writing has never been instant and progressive. It has always been intermittent and recursive. With ChatGPT, which is a Large Language Model (LLM), writing is instant and progressive. The human language repertoire seems infinitesimal compared to the vast source that this software draws from. Experts estimate that the human brain can process approximately 70,000 thoughts a day. ChatGPT, however, draws on 175 billion parameters – the numerical weights, tuned during training, that encode its statistical knowledge of language. The software is trained on a massive amount of text – books, Wikipedia pages, newspapers, and social media posts – amounting to half a trillion words, as Steven Pinker claims in his recent Harvard Gazette interview. It predicts patterns of words based on those parameters. Plausible prose emerges. The generation of prose, then, is reduced to looking for, locating, and enacting collocations. This is unqualified naivety!
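The pattern-prediction described above can be caricatured in a few lines of code. The toy bigram model below – an illustrative sketch only, operating at nothing like ChatGPT's scale, with a made-up twelve-word corpus – simply emits the word that most often followed the previous word in its training text. The "prose" it generates is locally plausible and entirely uncomprehending:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation of `word` seen in training.
    return following[word].most_common(1)[0][0]

# Generate "plausible prose" by chaining predictions, one word at a time.
words = ["the"]
for _ in range(4):
    words.append(predict(words[-1]))
print(" ".join(words))  # prints: the cat sat on the
```

The model has no idea what a cat or a mat is; it merely replays the collocations it has counted – which is, in miniature, the reduction the paragraph above objects to.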
Steven Pinker claims in The Sense of Style that there's no algorithm to write a sentence. A sentence is an experiential entity, given its Latin root, sententia – a thought or feeling, from sentire, "to feel." The generation of a sentence presupposes an understanding of and engagement with the physical world. Plonking down words in a formulaic fashion falls severely short of the definition of a sentence, because one must understand the meaning behind those words. ChatGPT doesn't. It generates words based on a given input. Besides, Joe Moran claims in First You Write a Sentence that a sentence is more than its meaning. It's also a metrical structure, where logic and lyric, as well as sense and sound, meet. A sentence is music created by rhyme, rhythm, and pausing. That music is heard only in the head, though. ChatGPT is a language model that has nothing to do with sound (and image). As such, what ChatGPT regurgitates is not writing per se. It's just a tantalising transcription.
So, it ensnares. ChatGPT is claimed to be a deep-learning tool. Essentially, though, it's a toy for deep-faking. Ian Bogost argues in his article "ChatGPT Is Dumber Than You Think," in The Atlantic, that it's less about persuasive writing and more about superb bullshitting. It's logically dysfunctional and is terrible at maths. It cites false quotations and references. It's a half-baked app. Fixing it to precision seems unlikely, because the software is already a juggernaut. Even its creators don't know how it works and why. That's what Ian Bogost and Steven Pinker believe. ChatGPT, then, is a carefully crafted confusion. DeepMind CEO and co-founder Demis Hassabis, one of the pioneers of artificial intelligence, urged caution about the technology in his interview with Time this past January. He warns, "We are the guinea pigs." We're both addicted to and imprisoned by technology. Crises mount.
Compromising the skill and quality of writing is indeed an existential crisis. Writing is not a single skill manifested. Writing is a cluster of skills deployed. Writing presupposes immersive reading, deep thinking, and deft languaging. These are not skills suddenly discovered. These are skills gradually developed. With ChatGPT around, no one needs to cultivate them. Writing is reduced to prompting. One doesn't even need to write a prompt. One can buy prompts. Prompt engineering is already a side hustle for some people, as Charlie Warzel claims in his essay "The Most Important Job Skill of This Century" in The Atlantic. And it will be a booming cottage industry in the times ahead. Commodifying writing is akin to cannibalism. Writing is the metric of intelligence that sustains our civilisation. Francis Bacon, the 16th-century English philosopher, claims that writing makes an "exact man" (and, of course, woman). People armed with the ability to write are going to be in short supply because of ChatGPT.
So will quality prose. In his article in The Atlantic, "What Happens When AI Has Read Everything?", Ross Andersen argues that good prose has never been in infinite supply, in that good prose is one of the most challenging things to produce in the known universe. For a large language model like ChatGPT, the generation of quality prose is contingent upon ingesting quality prose – that is, books written by better writers. Andersen cites the finding of a research team led by Pablo Villalobos at Epoch AI, which estimates that ChatGPT will run out of quality reading material by 2027. Since the emergence of the printing press in the 15th century, humans have published approximately 125 million titles, as the researchers at Google Books find. The Epoch team estimates that approximately 10 million to 30 million of these books have been digitised to be machine-readable. When AI has read all the books ever produced – and our reliance on and obsession with ChatGPT becomes routine – it can't ingest quality prose anymore. As such, ChatGPT will recycle the same patterns of prose created by itself. We're off to a prose wasteland. That sounds dystopian, but it's apparently likely.
Noam Chomsky, Ian Roberts, and Jeffrey Watumull, in their recent essay in The New York Times, disqualify ChatGPT as an authentic tool for generating quality prose. They claim that the crux of machine learning is description and prediction. The mark of true intelligence, however, is explanation. Intelligence also consists of creative conjectures, creative criticism, and moral thinking. Any AI-powered system critically lacks these attributes. Language – and the ability to use language in a creative and ethical fashion – is genetically endowed. Because machines can't understand and utilise the sophisticated principles and parameters of creating an infinite number of sentences with finite means – the rules of syntax and word formation, that's grammar! – software like ChatGPT is built on a "fundamentally flawed conception of language and knowledge," they claim. When, then, ChatGPT succeeds, we fail.
Therefore, the popularity of ChatGPT worries sceptics and scholars alike. It's a disruptive discovery. Banning it doesn't end it, because we can no longer uninvent it. Writing is a high-stakes activity. It critically determines professional success and intellectual excellence. Quality prose implies knowledge internalised and creativity excelled. Serious writing as such is exhausting. Now that there's a quick fix out there, people, students in particular, will gladly outsource the struggle of writing to machines. Countermeasures are emerging apace to watermark AI-generated prose. That perhaps means nothing. When we preach celibacy to a group of young adults, can we expect them to behave? ChatGPT, as it appears, is irresistible.
When, however, it comes to writing, ChatGPT is a BIG nothing. Writing is not a revelation. It's not taking dictation, either. It's an intellectual achievement that takes a lifetime of preparation. William Faulkner, who received the Nobel Prize in Literature in 1949, said in his Paris Review interview that a writer needs three things: experience, observation, and imagination. Which of these attributes does ChatGPT have? None! It's a passing fad, as all technologies are these days. The sooner we get disillusioned with ChatGPT, the better. Why? Because it has nothing to do with writing!
Dr Mohammad Shamsuzzaman is an associate professor at the Department of English and Modern Languages in North South University.