How generative AI is changing the way we write and speak
Do you find you're often saying you want to "delve into" an issue or "navigate the landscape" of it? Or perhaps the latest version of your CV says that you're "adept" at being "meticulous"?
If so, your use of language is probably being influenced by artificial intelligence.
The rise of 'delve'
ChatGPT and other large language model tools are "designed to make writing easier by offering suggestions based on patterns in the texts they were trained on", said Ritesh Chugh, an associate professor of information and communications, on The Conversation. And because they are "trained on vast amounts of text from various sources", they "tend to favour" the most commonly used words and phrases in their "outputs".
This is now having a clear effect on human "outputs": in the 18 months after ChatGPT was launched, the use of words such as "meticulous", "delve", "realm" and "adept" increased by between 35% and 51%, according to a study by researchers at the Max Planck Institute for Human Development.
Certain words and phrases are "popping up" everywhere, said Chugh on The Conversation. They may "sound fancy" but their "overuse can make a text sound monotonous and repetitive".
'Global dominance'
As the language of AI chat tools seeps into human communication, the "terse" terms we use when prompting a chatbot "may lead us to dispose of any niceties or writerly flourishes when communicating with friends and colleagues", said The Atlantic.
Where once chatbots "learned from human writing", now the "influence may run in the other direction". In this way at least, AI might have "already won its campaign for global dominance".
Most people "don't realise their language is changing", said The Verge, which is why they carry on using favoured chatbot terms. But, as researchers pick up on the changes, others, including examiners, will start to spot them, too. The word "delve" is already an "academic shibboleth" – a "neon sign in the middle of every conversation flashing 'ChatGPT was here'".
'Robotic undertone'
The continuing overuse of certain words and phrases will mean writing "losing its personal touch", said Chugh on The Conversation. It will become trickier to "distinguish between individual voices and perspectives as everything takes on a robotic undertone".
The danger of this is that "AI is quietly establishing who gets to sound 'legitimate'", said The Verge. What's at stake are the verbal "imperfections" that "build trust". We don't want to "lose the verbal stumbles, regional idioms and off-kilter phrases that display vulnerability, authenticity and personhood".
Nor should we want to "lose agency over our thinking", said Los Angeles Magazine, by articulating whatever AI helps us to articulate rather than expressing our own thoughts.
What can you do to guard against this? Always prompt a chatbot to "write clearly, without using complex words". If you use a chatbot response, watch out for repetition and edit the text before sharing or submitting it. You can also use the settings to create a list of words to exclude.
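For the technically inclined, that editing step can even be semi-automated. Below is a minimal, illustrative Python sketch that flags commonly AI-overused words in a draft so you can rewrite them before sharing; the word list is an assumption drawn from the examples in this article, not an official detector, and real usage would call for a longer, tuned list.

```python
# Illustrative sketch: count occurrences of words that studies associate with
# AI-generated text, so a writer can spot and edit them before publishing.
# The AI_FLAVOURED set below is a hypothetical example list, not a standard.
import re
from collections import Counter

AI_FLAVOURED = {
    "delve", "delves", "delving",
    "meticulous", "meticulously",
    "realm", "realms", "adept",
    "navigate", "landscape",
}

def flag_ai_words(text: str) -> Counter:
    """Return a Counter of AI-flavoured words found in `text` (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in AI_FLAVOURED)

if __name__ == "__main__":
    sample = "Let us delve into the realm of meticulous prompt design."
    print(flag_ai_words(sample))
    # Counter({'delve': 1, 'realm': 1, 'meticulous': 1})
```

A non-empty result doesn't prove a text was machine-written, of course; it's simply a prompt to ask whether each flagged word is really the one you'd have chosen yourself.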
"In the end, writing should be about expressing your ideas in your own way", said Chugh on The Conversation. "While ChatGPT can help, it's up to each of us to make sure we're saying what we really want to – and not what an AI tool tells us to."