Is Jordan Peterson right about AI in 2023?

Just before the end of 2022, polarizing Canadian psychologist, author, and lecturer Jordan Peterson spoke at The Democracy Fund’s History of Civil Liberties in Canada Series town hall event at Canada Christian College in Whitby, Ontario.

In a wide-ranging discussion with Canadian businessman Conrad Black, Peterson turned to AI technology and predicted what AI will be capable of in 2023. His comments were framed as a warning to the audience about the consequences of AI for universities and society as a whole.

Like many comments from Peterson, what he said about AI went viral. This one clip posted to Facebook has been viewed over two million times.

Here at Humanoid, we like to think we are something of experts on what AI is capable of, so we thought we would add our two cents to the discussion about Peterson’s predictions for AI technology in 2023.

We will provide a detailed summary of what Peterson had to say about AI, but feel free to watch the video and see Peterson’s comments for yourself before hearing what we have to say.

What did Jordan Peterson have to say about AI? 

Jordan Peterson’s claims about what AI can do now and what it will soon be able to do were wide-ranging. We can’t get into everything in detail, so here is a short list of everything he discussed before we address some of his points more directly:

  • ChatGPT wrote an essay about his book in the style of the King James Bible and Tao Te Ching that was good enough for him to think he wrote it;
  • ChatGPT “nailed” an essay about the intersection of Daoist philosophy and the Sermon on the Mount;
  • ChatGPT created a list of tasks, along with working code, for a supposed Twitter engineer that convinced the engineer’s bosses that he had actually created that code;
  • ChatGPT was given an SAT test and scored an average score;
  • A college professor prompted ChatGPT to write an essay and grade it, which ChatGPT did to great effect;
  • ChatGPT produced a screenplay and character descriptions for the “next $900m Hollywood blockbuster,” and those descriptions were then fed into an AI image generator to create pictures of the characters;
  • ChatGPT and AI technology will make universities go out of business in the next few years;
  • Whoever creates the most impressive AI first will rule the world, which is scary because the Chinese Communist Party is working on an AI model; and
  • AI-created personalities, like the movie characters ChatGPT generated, can be imported into physical forms that humans can interact with.

For most people, the major takeaways from Peterson’s comments are his rather scary predictions about AI bankrupting universities; fully functional, human-like robots appearing in 2023; and China ruling the world via their AI model being the first to market. 

Here are our thoughts about what Peterson got right and wrong with his comments about AI in 2023. 

What Jordan Peterson got wrong about AI

We think Jordan Peterson was mostly correct about AI, but there were some points he presented that were either missing context or perhaps a bit too ambitious. 

AI-generated text is still easy to recognize

We will try to be generous to Peterson in this article since he is obviously not an expert on AI technology, but, simply put, Peterson is really overstating what ChatGPT and other AI text generators can do right now. 

While Peterson and the university professor he mentioned were fooled by what ChatGPT could produce, anyone who has used the technology regularly can still easily identify AI-generated text. Even if you don’t have the human expertise, there are many programs that can determine whether text was AI generated (we reviewed them here).

AI-generated text often lacks specifics, is factually incorrect, and lacks the human flair that makes human-generated text engaging to read.
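Detection tools generally exploit exactly these statistical tells. As a toy illustration only (this is our own invented heuristic, not how any real detector such as the ones we reviewed actually works), here is a sketch of one such signal: human prose tends to vary its sentence length far more than machine prose, so the variance of sentence lengths ("burstiness") can hint at authorship.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Toy 'burstiness' score: variance of sentence lengths, in words.

    Real detectors use much richer signals (e.g., perplexity under a
    language model plus trained classifiers); this only illustrates the
    idea that human writing varies sentence length more than AI writing.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)

# Invented sample text: uniform, machine-like rhythm vs. varied, human-like rhythm.
uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = ("Stop. The storm rolled in from the west, flattening the wheat "
          "for miles. Nobody spoke.")

print(burstiness(uniform))  # uniform sentence lengths score low
print(burstiness(varied))   # varied sentence lengths score higher
```

A single weak signal like this is trivially fooled, which is why commercial detectors combine many of them; the point is only that AI text leaves measurable fingerprints.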

Peterson predicts ChatGPT and other AI text generators will make universities “go broke” because students and professors will be able to do their jobs using AI. As someone very familiar with academia, I think this is very unlikely.

Just as universities adapted to the internet making plagiarism far easier than it was in the past, for instance by incorporating anti-plagiarism programs into their assessment software, they will do the same with AI and incorporate AI-detection tools.

AI in physical form (AI robots) is a long way off

When most people think about AI, they imagine human-like robots from fiction, like Blade Runner’s replicants or the androids of Detroit: Become Human. In reality, today’s AI is largely limited to text and image generators, like ChatGPT and DALL-E.

Don’t get us wrong, there is plenty of work going on in the AI robotics space, but it’s very far from what Peterson thinks will be possible this year.

Boston Dynamics has been working on building AI-powered robots for over a decade. It and its competitors are working on robots programmed to react to commands and their surroundings using AI.

Peterson warned that these AI robots would be able to download AI-created personalities and interact with humans. This is very far down the pipeline and certainly not something that can happen this year.

Until recently, Boston Dynamics had trouble creating robots that could maintain their balance while moving. It will be a long time before we see these robots incorporating human-like personalities.

What Jordan Peterson got right about AI

Just to get these issues out of the way, Peterson is factually right about many of the things he asserts. Unless he was lying, which there’s no reason for him to do, Peterson was rightly impressed by ChatGPT and the two essays it created for him. 

He also is factually correct about ChatGPT’s SAT score, AI potentially creating the next blockbuster, and about college professors taking aim at ChatGPT.

AI text generation could reach the levels predicted by Peterson

While we think Peterson overstated the capabilities of AI text generators, the principles of what he’s saying about what AI text generation could do are pretty accurate. 

ChatGPT uses a model called GPT-3. GPT-4 is expected to come out later this year. According to its creator, OpenAI, GPT-4 will be far more advanced than GPT-3 and could theoretically be capable of what Peterson claims will be possible this year. Unfortunately, we won’t know whether Peterson’s predictions are true until we see GPT-4 for ourselves.

AI-created interactive personalities do exist

As discussed, it will be a long time before any AI-created personality exists in physical form, but that doesn’t mean AI-generated characters don’t already exist.

The next step in the world of AI is making the technology more interactive. ChatGPT represented an important milestone in this journey towards fully interactive AI models. 

Other companies that are focused on creating interactive AI experiences have turned to creating chatbots allowing you to speak with historical figures. “Historical Figures” is one of these apps.

With Historical Figures, you can have “conversations” with historical figures who have been dead for hundreds of years. This is made possible by using available media (e.g., writings, speeches, biographies, etc.) to replicate the historical figures’ speech. 

Unfortunately for Historical Figures, reception to the app has been overwhelmingly negative due to the many factual inaccuracies presented by its AI historical figures. For example, the app’s AI-recreated version of famed anti-Semite Henry Ford claimed he didn’t hate Jews.

Despite this, the app is worth mentioning because it represents exactly what Jordan Peterson is talking about regarding AI personalities. He talked about descriptions of movie characters being turned into personalities. While this is possible, there are still many hurdles to clear before fully fleshed-out personalities can be created from nothing but human-made descriptions of a fictional person.

What Historical Figures is doing represents the clearing of one of those hurdles. It’s far easier to create a human-like AI personality when you have thousands of pages of text generated by the person you’re trying to recreate to base your AI model on. 

Once that is perfected, the next step will be creating personalities out of nothing. We don’t know when this will be possible, but Jordan Peterson is right that it is coming down the pipeline relatively soon.

Conclusion

Jordan Peterson was rightly impressed by ChatGPT and the other emerging AI technologies he discussed in his Democracy Fund lecture last month, but he is missing some key context about AI that makes his statements about what AI is capable of now and the future of AI a little off base. 

The future of AI is bright, but we might have to wait a little longer for AI to be capable of what Jordan Peterson predicts will be possible this year. 
