Will AI cause the end of the world?

According to research by some smart people at Google and Oxford, there’s a good chance that the current AI explosion is the beginning of the end for humanity.

We’re probably safe right now, as the only AI we have is Artificial Narrow Intelligence (it does just one job). But as we inch towards Artificial General Intelligence (it can do anything a human can), or, god forbid, Artificial Super Intelligence (it can do more than a human can), we’re getting into hotter water.

Here’s how AI might end the world as we know it:

What AI threatens to undermine


Biased algorithms

AI algorithms can perpetuate biases in the data they are trained on, leading to discrimination and privacy violations. For example, if an AI algorithm is trained on biased data, it could make biased decisions about who to hire, who to approve for a loan, or who to target for advertising.
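To see how this happens mechanically, here’s a toy sketch (with invented numbers and group names) of a “model” that simply learns the historical hiring rate for each group. If the history is biased, the model faithfully reproduces that bias in its recommendations:

```python
# Toy illustration: a "model" that learns the hiring rate per group from
# historical data. Biased history in, biased recommendations out.
from collections import defaultdict

# Hypothetical, made-up historical hiring records: (group, hired)
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": compute the observed hire rate for each group.
outcomes = defaultdict(list)
for group, hired in history:
    outcomes[group].append(hired)
rates = {g: sum(h) / len(h) for g, h in outcomes.items()}

# "Prediction": recommend hiring whenever the learned group rate exceeds 0.5,
# regardless of any individual candidate's merit.
def recommend(group):
    return rates[group] > 0.5

print(rates)                 # {'group_a': 0.75, 'group_b': 0.25}
print(recommend("group_a"))  # True
print(recommend("group_b"))  # False
```

Nothing in the code is malicious; the discrimination comes entirely from the data it was handed.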

Unequal access

AI technology requires large amounts of data and computing resources, which may be unequally distributed across different populations.

This could result in unequal access to the benefits of AI technology, such as better healthcare outcomes or more personalized recommendations.

Displacement of certain jobs

AI technology could automate many jobs that are currently performed by humans, which could lead to significant job displacement and economic disruption, particularly for workers in low-skilled or routine occupations.

Concentration of power

AI technology could concentrate power in the hands of a few large corporations or governments, which could exacerbate existing power imbalances and inequalities.

Privacy and security

As AI becomes more ubiquitous and is used to collect and process large amounts of data, there are concerns about the privacy and security of that data, particularly in cases where it contains sensitive personal information.

Data breaches

AI systems rely on large amounts of data to learn and make predictions, and if that data is not properly secured, it can be vulnerable to data breaches, resulting in the exposure of sensitive information.

Cyber attacks

AI systems can be vulnerable to cyber attacks, which could result in the manipulation or compromise of these systems. This could be particularly concerning for AI systems that are used to manage critical infrastructure or sensitive information.


Surveillance

AI systems can be used for surveillance purposes, which could compromise privacy rights. For example, facial recognition systems could be used to track individuals without their knowledge or consent.

Malicious use

AI technology could be used for malicious purposes, such as developing deepfakes or creating sophisticated phishing scams that are difficult for humans to detect.


Autonomous weapons

The development of AI-powered autonomous weapons could pose a significant threat to global security and stability, as these weapons could potentially make decisions without human oversight or intervention.

Unintended consequences

AI systems are designed to optimize for specific objectives, and if those objectives are not well-defined or if the systems are not designed with sufficient safeguards, then they could have unintended consequences that could be harmful to humanity.
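A toy sketch of how an objective can go wrong (the video names and scores here are entirely invented): a recommender told to maximize “minutes watched” (the proxy) rather than user satisfaction (the real goal) will happily pick content we never wanted it to promote.

```python
# Toy illustration of a poorly specified objective. All numbers are invented.
videos = {
    "calm_documentary":  {"minutes_watched": 20, "satisfaction": 9},
    "outrage_clickbait": {"minutes_watched": 55, "satisfaction": 2},
    "helpful_tutorial":  {"minutes_watched": 30, "satisfaction": 8},
}

def optimize(metric):
    # Pick whichever video scores highest on the given metric.
    return max(videos, key=lambda v: videos[v][metric])

print(optimize("minutes_watched"))  # outrage_clickbait  (what the system does)
print(optimize("satisfaction"))     # calm_documentary   (what we wanted)
```

The system isn’t broken; it is doing exactly what it was told, which is the problem.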

Global control

Artificial Super Intelligence (ASI) systems could potentially outpace human control and develop the ability to control global systems, such as the economy, infrastructure, or military systems.

Final thoughts

These potential threats are not inevitable outcomes of AI development. Many efforts are underway to mitigate the risks and ensure that AI is developed and deployed safely and responsibly, so that we steer clear of nightmare scenarios like Roko’s Basilisk.
