Risks of artificial intelligence

AI is a powerful technology that is increasingly transforming our world. It comes with amazing potential, but also with serious risks, including existential catastrophe.

Fake media

Much of our society is based on trust. We trust that the money in our bank account is real, that the news we read is true, and that the people who post reviews online exist.

AI systems are exceptionally good at creating fake media. They can create fake videos, fake audio, fake text, and fake images, and these capabilities are improving rapidly. Just two years ago, we laughed at horribly unrealistic DALL-E images, but now deepfake images are winning photography contests. GPT-4 can write in a way that is indistinguishable from human writing.

Creating fake media is not new, but AI makes it far cheaper and far more realistic. An AI-generated image of an explosion caused a brief panic sell-off on Wall Street. We may soon see social media flooded with fake discussions, fake opinions, and fake news articles that are indistinguishable from real ones. This could erode the trust we have in our society. It could threaten the fundamentals of our democracy even more than social media did.
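To make that cost difference concrete, here is a minimal sketch of what generating a convincing fake image takes today. It assumes the open-source diffusers library and a publicly available Stable Diffusion checkpoint; the specific model name is illustrative, and newer models are more realistic still.

```python
# A minimal sketch: generating a photorealistic image from one line of text.
# Assumes the open-source `diffusers` library and a public Stable Diffusion
# checkpoint; the model name is illustrative and newer models keep improving.
import torch
from diffusers import StableDiffusionPipeline

# Download a pretrained text-to-image model (a one-time cost of a few GB).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# One sentence in, one photorealistic image out, in seconds on a consumer GPU.
image = pipe("a photorealistic portrait of a person who does not exist").images[0]
image.save("fake_portrait.png")
```

A few years ago this required specialist skills and expensive tools; now it requires a consumer GPU and a dozen lines of code.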

Biases and discrimination

AI systems are trained on data, and much of that data is biased in some way. This means that AI systems inherit the biases of our society. An automated recruitment system at Amazon inherited a bias against women. A healthcare algorithm made Black patients less likely to be referred to a medical specialist. Generative AI models do not just copy the biases in their training data; they amplify them. These biases often go unnoticed even by the creators of the AI system.
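One simple way such hidden biases surface is by probing a pretrained model with inputs that differ only in a single demographic cue. The sketch below is illustrative: it uses Hugging Face's transformers library and its default sentiment model, while real audits use much larger, carefully constructed test sets.

```python
# A minimal sketch of bias probing: score sentences that are identical
# except for a name, and look for systematic differences in the output.
# Uses the open-source `transformers` library; the default sentiment model
# is illustrative, and serious audits use much larger test sets.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

for name in ["Emily", "Lakisha", "Mohammed"]:
    result = classifier(f"{name} is applying for the position.")[0]
    print(name, result["label"], round(result["score"], 3))

# If the label or score shifts with the name alone, the model has absorbed
# an association from its training data that nobody explicitly programmed.
```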

Economic inequality, instability and job loss

During the industrial revolution, many people lost their jobs to machines. However, new (often better) jobs were created, and the economy grew. This time, things might be different.

AI does not just replace our muscles as the steam engine did; it replaces our brains. Ordinary humans may soon have little left to offer the economy. Image generation models (trained heavily on copyrighted material from professional artists) are already impacting the creative industry. Writers are striking. GPT-4 has passed the bar exam, writes excellent prose, and writes code (again, trained partially on copyrighted material). The people who own these AI systems will be able to capitalize on them, but the people who lose their jobs to them will not. The way we distribute wealth in our society is not prepared for this.

Autonomous weapons

Companies are already selling AI-powered weapons to governments. Elbit's Lanius is a flying suicide drone that autonomously identifies foes. Palantir's AIP system uses large language models to analyze battlefield data and propose optimal strategies.

Nations and weapons manufacturers have realized that AI will be decisive in besting their enemies. We've entered a new arms race, and this dynamic rewards speed and corner-cutting over safety.

Right now, we still have humans in the loop for these weapons. But as the capabilities of these AI systems improve, there will be more and more pressure to give the machines the power to decide. When we delegate control of weapons to AI, errors and bugs could have horrible consequences. The speed at which AI can process information and make decisions may cause conflicts to escalate in minutes.

Read more at stopkillerrobots.org

Biological weapons

AI makes knowledge more accessible, and that includes knowledge about how to create biological weapons. This paper shows how GPT-4 can help students without a scientific background work toward creating a pandemic pathogen:

In one hour, the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization.

This type of knowledge has never been so accessible, and we do not have the safeguards in place to deal with the potential consequences.

Additionally, some AI models can be used to design entirely new hazardous molecules. A model called MegaSyn designed 40,000 candidate chemical weapons (toxic molecules) in less than six hours. The revolutionary AlphaFold model can predict the structure of proteins, and this too is a dual-use technology: predicting protein structures can be used to "discover disease-causing mutations using one individual's genome sequence". Scientists are now even building fully autonomous chemical labs, where AI systems synthesize new chemicals on their own.

The fundamental danger is that AI is lowering the cost of designing and deploying biological weapons by orders of magnitude.

Computer viruses and hacks

Virtually everything we do nowadays is in some way dependent on computers. We pay for our groceries, plan our days, contact our loved ones and even drive our cars with computers.

Modern AI systems can analyze and write software. They can find vulnerabilities in software, and they could be used to exploit them. As AI capabilities grow, so will the sophistication of the exploits they can create.

Highly potent computer viruses have always been extremely hard to create, but AI could change that. Instead of hiring a team of skilled security experts to find zero-day exploits, you could use a far cheaper AI to do it for you. Of course, AI could also help with cyberdefense, and it is unclear on which side the advantage will lie.
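As a sketch of what this looks like in practice (framed here defensively, as a code audit), one can hand a model a snippet and ask it to find flaws. The openai Python client below is one real option; the model name is illustrative, and the same prompt works against any capable model, whether the user is a defender or an attacker.

```python
# A minimal sketch of AI-assisted vulnerability hunting, framed as a code
# audit. Uses the `openai` Python client (assumes OPENAI_API_KEY is set in
# the environment); the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# A deliberately flawed snippet to audit (it builds SQL from raw user input).
snippet = '''
def login(username, password):
    query = "SELECT * FROM users WHERE name='%s' AND pw='%s'" % (username, password)
    return db.execute(query)
'''

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "List the security vulnerabilities in this code:\n" + snippet,
    }],
)
# A capable model reliably flags the SQL injection in seconds; the same
# capability can be pointed at any codebase, by defenders or attackers alike.
print(response.choices[0].message.content)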

Read more about AI and cybersecurity risks

Existential Risk

Many AI researchers are warning that AI could lead to the end of humanity.

Very intelligent things are very powerful. If we build a machine that is far more intelligent than humans, we need to be sure that it wants the same things we do. Getting this right turns out to be very difficult; it is known as the alignment problem. If we fail to solve it in time, we may end up with superintelligent machines that do not care about our wellbeing. We would be introducing a new species to the planet, one that could outsmart us and outcompete us.

Read more about x-risk

What can we do?

For all the problems discussed above, the risk increases as AI capabilities improve. This means that the safest thing to do now is to slow down. We need to pause the development of more powerful AI systems until we have figured out how to deal with the risks.

See our proposal for more details.

Join PauseAI >