How academia is fighting AI like ChatGPT

Ever since AI writing technology broke into the public consciousness last November with the release of OpenAI’s ChatGPT, one of the major concerns has been how AI will affect academia.

Worries over how high school teachers and university professors would know if a work was genuine dominated headlines.

Conservative firebrand and former psychology professor Jordan Peterson even predicted AI would bankrupt some universities after experimenting with ChatGPT.

In this article, we are going to take a look at the ongoing discussion surrounding AI writing in schools, discuss what’s being done about it, and then predict two paths for the future of academia and AI. 

Let’s get into it.

AI problems are similar to problems with plagiarism and the internet

If you only pay attention to the media, AI advancements signal a pretty dire situation for the academic field. Thankfully for those in that field, they’ve weathered a similar crisis before.

The spread of the internet brought about many great things. An interconnected World Wide Web meant people could speak with others all around the globe. People in New York could read the words of people thousands of miles away in China. Rare books and obscure academic literature could be made available to the masses.

The internet fundamentally changed society for the better in so many ways. It also made it incredibly easy for students to plagiarize.

With the entire world at their fingertips, students found it easier than ever to find information and pass it off as their own. This was a problem for academia. How did educators combat it? Enter anti-plagiarism software.

That’s right, the same nerds who created academia’s new plagiarism problem were the ones who solved it. Anti-plagiarism software is now used in schools and universities all around the world to ensure students are submitting totally original work.

For those not paying attention to the AI space, this same chain of events is repeating itself with AI writing tools. The best-known AI writing program is OpenAI’s ChatGPT. Nearly everyone has heard of ChatGPT at this point, but fewer people are aware that OpenAI also created an “AI Classifier” that people can use to determine whether text was generated by AI.

Some people cynically say that OpenAI is just capitalizing on a problem they created, but OpenAI’s attempt to be transparent about identifying AI text is undeniably a good sign for academia.

How educational institutions are fighting AI writing

One key difference between the challenge AI poses to academia and the old fight against plagiarism is that AI is constantly evolving. The fight against AI is an ongoing battle against new AI programs that may be harder to detect.

We’ve written at length about AI detection programs, so we know how effective they can be. We also know that there are a lot of AI content detectors out there and some are much more effective than others. 

Right now, the majority of programs are built around detecting ChatGPT-created content, but as more companies get in the game, it will become harder and harder to detect AI content.

As long as AI companies are truthful about their products, having them create ways to identify when their programs were used gives educators an extra weapon when combating AI usage among their students.

One example of a company helping in the fight is OpenAI. Not only has OpenAI released its AI Classifier, but it has also started working on a watermark that would make ChatGPT-generated content instantly recognizable. This technology is not currently available, but it could be a game-changer for the current AI detection landscape, especially in academia.
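OpenAI has not published how its watermark would work, but a conceptual sketch helps explain the general idea. One approach from the research literature is "green list" watermarking: the generator is secretly biased toward a subset of words, and a detector checks whether a suspiciously high fraction of a text's words fall in that subset. The key name `"demo-key"` and the hashing scheme below are illustrative assumptions, not anything OpenAI has described.

```python
# Conceptual illustration only -- OpenAI has not published its watermark
# design. In a "green list" scheme, a secret key deterministically splits
# the vocabulary in half; watermarked output over-uses the "green" half,
# and a detector measures how green a given text is.
import hashlib

def is_green(word, secret="demo-key"):
    # Deterministically assign roughly half of all words to the green list
    # by hashing the word together with the secret key.
    digest = hashlib.sha256((secret + word.lower()).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text, secret="demo-key"):
    # Fraction of words in the text that land on the green list.
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w, secret) for w in words) / len(words)

# Ordinary human text should hover near 50% green; heavily watermarked
# output would sit well above that, which is the statistical signal.
sample = "students submit essays that teachers then grade for originality"
print(f"green fraction: {green_fraction(sample):.2f}")
```

The appeal of this design is that detection only needs the secret key and the text itself, not access to the model that generated it.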

Right now, teachers, professors, and school administrators have to rely on independently created AI detection tools. One of the most commonly used tools is one being utilized by one of the most familiar names in academia, Turnitin. 

Turnitin is one of the premier names in plagiarism detection software. It is used by thousands of educational institutions around the world, so it makes sense they would be one of the companies trying to tackle the problem of students using AI writing tools. 

The good people at Turnitin announced at the end of last year that they were building their own AI detection software, which would be incorporated into their existing plagiarism checker.

If you’ve ever used Turnitin, you’ll know that their plagiarism detection software basically compares your paper to Turnitin’s own database of papers from other institutions and from published books, journals, websites, etc.
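Turnitin's actual matching algorithm is proprietary, but the database-comparison idea can be sketched in a few lines. This toy version (the document names and the choice of word 5-grams are assumptions for illustration) compares a submission's overlapping word sequences against a small "database" and reports how much of it matches.

```python
# Toy sketch of database-style plagiarism matching, NOT Turnitin's real
# algorithm: break texts into overlapping word 5-grams and measure what
# fraction of the submission's 5-grams appear in a known source.

def ngrams(text, n=5):
    # All contiguous n-word sequences in the text, lowercased.
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(submission, source, n=5):
    # Fraction of the submission's n-grams found verbatim in the source.
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

# Hypothetical database entry for illustration.
database = {
    "paper_a": "the quick brown fox jumps over the lazy dog near the river bank",
}

submission = "the quick brown fox jumps over the lazy dog in my own words entirely"
score = overlap_ratio(submission, database["paper_a"])
print(f"{score:.0%} of the submission's 5-grams match paper_a")
```

A real system scales this kind of matching across billions of documents with fuzzier comparisons, but the principle is the same: look for text that already exists somewhere else.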

This is an impressive system, but it’s not nearly as complex as AI language models and AI detection tools. That being said, Turnitin has already managed to create their own AI detection program. This tool is specifically marketed as being able to detect ChatGPT-generated writing.

Turnitin claims that their AI detection tool can catch 97% of AI-generated writing with just a 1% false positive rate. In this context, a false positive is when a piece of human-written text is flagged by an AI detection program as AI-generated. This is the worst-case scenario in any industry, but especially in academia.

If a student is falsely accused of using an AI writing tool on an assignment, they could receive a failing grade or even be expelled from a university. 1% sounds low, but Turnitin is used at over 15,000 institutions serving hundreds of thousands of students, so a 1% false positive rate could mean a significant number of falsely accused students.

This is just one of the many issues AI writing poses for academic institutions in the future. 

The future of AI writing in academia

The way we see it, there are two ways academia can deal with AI writing in the future.

The first is rather simple: high schools and universities could give in and learn to love the machine. A not-insignificant number of people believe AI writing tools should be treated as just that: tools.

These people think that the use of AI writing programs should be allowed in the same way schools allow students to use word processing programs that have spell check.

We don’t see this view catching on any time soon, but it very well could be the end point of the problem of AI in academia, especially if it simply becomes too hard for institutions to detect AI.

The second possibility for the future of AI writing in schools is something we’ve touched on already. Academic institutions and AI writing program developers can work together to create AI detection tools. 

Using the examples from earlier, let’s say that Turnitin and OpenAI collaborated to make a program using Turnitin’s anti-plagiarism technology and OpenAI’s watermarking technology to detect AI text. 

Theoretically, this program would be able to detect AI-generated text 100% of the time with no false positives. However, once we dive deeper, this becomes more complicated. 

As soon as OpenAI releases their watermarking technology, someone will come out with a program to remove the watermark. Other companies also make AI text generators, so OpenAI’s detection tool would need to detect those programs’ output as well.

You can see how complicated this problem is and how there is no one-size-fits-all solution. 

We don’t ever expect there to be a one-size-fits-all solution, but the partial solutions we outlined here will likely be the steps academic institutions take for the time being. 

Who wins the war between AI and universities could shape the future of the world, so this is an area we will be watching closely, and it’s one you should be watching too.
