Why AI will never fully replace the writer

AI writing tools have dominated the news cycle on a seemingly daily basis since OpenAI’s ChatGPT broke into the mainstream late last year. In the months since ChatGPT went public, Google and Microsoft have released their own AI programs, Google Bard and Bing AI, respectively.

This rapid rise has not only sparked renewed interest in AI but also renewed warnings from so-called AI doomers. Admittedly, “doomer” is a bit of a pejorative, but it’s an effective shorthand for the viewpoint: AI doomers think the growth of AI will eventually lead to the end of humanity as we know it.

AI doomerism goes back to the Industrial Revolution and the birth of the sci-fi genre, so you’ve probably seen it illustrated in media at some point in your life. Right now, it’s especially popular in the writing industry: editors, publishers, researchers, and, most vocally, writers fear they will soon lose their jobs to AI.

This opinion is also prevalent among some of AI’s biggest backers, such as Twitter CEO Elon Musk and his trusted advisor, Silicon Valley investor Jason Calacanis.

There’s no doubt that Bard, Bing AI, and ChatGPT have the potential to revolutionize the way we write and could have devastating effects on the writing industry. However, a closer look at Bard and Bing AI can give us insight into why people like Musk, Calacanis, and concerned writers are wrong.

The fact that these programs rely on human-generated data means that AI will never truly replace the writer.

While ChatGPT is probably the most impressive AI language model on the market, Bard and Bing AI present the biggest immediate threat to writers. That’s because both programs draw on Google and Bing search results to ground their responses, which allows them to produce more up-to-date and factually accurate answers.

By comparison, ChatGPT draws on a fixed training dataset with a knowledge cutoff of roughly September 2021, which regularly leads ChatGPT and other programs built on GPT-3 and GPT-4 to produce factually inaccurate content.
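Neither Google nor Microsoft has published exactly how their chatbots consult search results, but the general pattern, often called retrieval-augmented generation, is easy to sketch. The sketch below is a minimal illustration under that assumption; the search index, function names, and model call are all hypothetical stand-ins, not real Bard or Bing internals.

```python
# A minimal sketch of retrieval-augmented generation (RAG), the pattern
# Bard and Bing AI are widely believed to follow. Every name here is a
# hypothetical stand-in, not a real Google or Microsoft API.

from datetime import date

# Stand-in for a live search index; in the real systems this would be
# the Google or Bing search backend.
SEARCH_INDEX = [
    {"url": "example.com/storm-update", "published": date(2023, 4, 2),
     "text": "The storm made landfall on Sunday morning near the coast."},
    {"url": "example.com/old-forecast", "published": date(2021, 8, 1),
     "text": "Forecasters expect an active storm season this year."},
]

def search(query: str, top_k: int = 3) -> list[dict]:
    """Return the freshest documents. Real engines also rank by
    relevance, authority, and many other signals."""
    return sorted(SEARCH_INDEX, key=lambda d: d["published"], reverse=True)[:top_k]

def call_language_model(prompt: str) -> str:
    """Placeholder for the actual model call. A plain ChatGPT-style model
    answers from its frozen training data; a retrieval-augmented model
    answers from the context packed into the prompt."""
    return "[model response grounded in the retrieved passages]"

def answer(query: str) -> str:
    """Stuff retrieved text in front of the question so the model can
    cite facts newer than its training cutoff."""
    context = "\n".join(doc["text"] for doc in search(query))
    prompt = f"Using only the sources below, answer the question.\n\n{context}\n\nQ: {query}\nA:"
    return call_language_model(prompt)

print(answer("Where did the storm make landfall?"))
```

The key point for writers is in the search step: everything the model “knows” about recent events comes from documents a person wrote.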

What Bard and Bing AI are doing is the next logical step for AI language models. Reducing factual inaccuracies takes AI one step closer to replacing human writers, who, on the whole, are still much more factually accurate than models like ChatGPT.

With all that said, before concluding that a “factually accurate” AI spells the death of the writer, we need to consider what factual accuracy means for an AI model compared to a human.

Google and Bing are looking to cut out the middleman. These search engines want to become aggregators of information, using their AI chatbots to display answers directly instead of just pointing you to another website where you can learn more for yourself.

Take this one step further, and Google and Bing can write whole papers, blog posts, and memos for you without you doing any of the research yourself, since they are pulling from the very websites you would have visited anyway.

To understand why AI will never replace the writer, we need to go one step further and think about what happens after you publish your AI-generated content out into the wide world of the internet. 

Once your piece is published, it becomes part of the huge dataset that models like Bard and Bing AI pull from. Without human writers contributing substantially to that dataset, the natural conclusion is an internet flooded with AI content that may or may not be accurate.

AI is already prone to creating fake information through a process called AI hallucination. Once the internet is full of AI-generated content, this problem will only get worse: AI programs will draw on inaccurate AI-generated content, believing it to be accurate.

To put it bluntly, without humans regularly creating content, AI language models will eat each other until there is nothing factually accurate left.
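That claim can be illustrated in miniature. The toy simulation below assumes a crude model of the web: some fraction of new content is AI-written, human writing carries a small error rate, and each model generation adds hallucinations on top of the errors it inherits. The specific rates are made-up assumptions for illustration, not measurements of any real system.

```python
# A toy simulation of the feedback loop described above: each "generation"
# of models trains on a web that mixes fresh human writing with the
# previous generation's AI output. All rates are illustrative assumptions.

HUMAN_ERROR_RATE = 0.05      # assumed share of human content that is wrong
HALLUCINATION_RATE = 0.10    # assumed extra errors AI adds per generation
AI_SHARE_OF_WEB = 0.80       # assumed fraction of new content that is AI-written

def next_generation_error(web_error_rate: float) -> float:
    """A model inherits the errors in its training data, then adds its own."""
    model_error = web_error_rate + (1 - web_error_rate) * HALLUCINATION_RATE
    # The next web blends fresh human writing with the model's output.
    return (1 - AI_SHARE_OF_WEB) * HUMAN_ERROR_RATE + AI_SHARE_OF_WEB * model_error

error = HUMAN_ERROR_RATE
for gen in range(1, 11):
    error = next_generation_error(error)
    print(f"generation {gen}: ~{error:.0%} of web content inaccurate")
```

With humans still contributing, the error rate climbs but eventually stabilizes. Set AI_SHARE_OF_WEB to 1.0, removing human writers entirely, and it marches toward 100 percent: the models-eating-each-other scenario.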

For humans, this means not being able to trust anything we look up. If you’re skeptical of this, consider Google’s “Featured Snippets” feature. 

If you’ve ever used Google, you are familiar with this feature. Featured Snippets are the highlighted and bolded excerpts from the top link at the top of your search results. You may not have known it, but the feature is powered by AI, which explains why it regularly pushes misleading quotes, factually incorrect information, and even racist propaganda.

[Image: a Google Search featured snippet]

To be clear, this is a ranking model that pushes certain results above others, not a language model like Google’s Bard or Microsoft’s Bing AI. However, Bard and Bing AI are still vulnerable to the same kind of errors, because ranking models like this are how a language model chooses which content is “accurate” when composing responses to prompts.

Finally, as a general and obvious point, AI will never be able to do the work of reporters on the ground. People turning to Google during a natural disaster, a terrorist attack, or even something nicer like a music festival will never get up-to-date information from an AI chatbot unless a human is creating content for it to draw from.

The undeniably impressive nature of AI writing tools, plus decades of anti-AI sci-fi propaganda, has writers rightfully worried about the future of their jobs. However, something we all need to remember is that AI learns from us.

The human written word is a fundamental part of building an AI language model, and future models will rely on humans even more because they will be expected to be factually accurate. AI models can’t be factually accurate without humans. That will never change, so instead of fearing AI, human writers should learn to embrace it as a tool that makes them more efficient.
