List of p(doom) values

p(doom) is the probability of very bad outcomes (e.g. human extinction) as a result of AI. It most often refers to the likelihood of AI taking over from humanity, but other scenarios can also constitute "doom": for example, a large portion of the population dying from a novel biological weapon created by AI, societal collapse caused by a large-scale cyber attack, or AI triggering a nuclear war. Note that not everyone uses the same definition when stating a p(doom) value. Most notably, the time horizon is often left unspecified, which makes values difficult to compare.
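
One reason the horizon matters: a fixed yearly risk compounds over time. Below is a minimal sketch, assuming a constant, independent annual risk; the 1% figure is made up for illustration and is not anyone's actual estimate.

```python
# Hypothetical illustration: how a constant 1% annual risk compounds
# over different time horizons. Not anyone's actual model or estimate.

def cumulative_risk(annual_risk: float, years: int) -> float:
    """Probability of at least one catastrophe within `years` years,
    assuming a constant, independent risk each year."""
    return 1 - (1 - annual_risk) ** years

annual = 0.01  # made-up 1%-per-year risk
for horizon in (10, 30, 100):
    print(f"{horizon:>3} years: {cumulative_risk(annual, horizon):.1%}")
#  10 years: 9.6%
#  30 years: 26.0%
# 100 years: 63.4%
```

Under these assumptions, "10% in the next decade" signals far more concern than "10% this century", even though both get quoted as p(doom) = 10%.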

  • Yann LeCun
    one of three godfathers of AI, works at Meta
    (less likely than an asteroid impact)
  • Vitalik Buterin
    Ethereum founder
    (refers specifically to AI takeover)
  • Geoff Hinton
    one of three godfathers of AI
    (chance of extinction in the next 30 years if AI goes unregulated)
  • Machine learning researchers
    (2022 survey; median value 5%; see the sketch after this list)
  • Lina Khan
    Chair of the FTC
  • Paul Christiano
    founder of the Alignment Research Center
    (cumulative risk rises to 50% once AI reaches human level)
  • Dario Amodei
    CEO of Anthropic
  • Yoshua Bengio
    one of three godfathers of AI
  • Elon Musk
    CEO of Tesla, SpaceX, and X
  • Emmett Shear
    former CEO of Twitch, briefly interim CEO of OpenAI
  • AI safety researchers
    (mean from a 2021 survey of 44 AI safety researchers)
  • Scott Alexander
    popular blogger at Astral Codex Ten
  • Eli Lifland
    AI forecaster
  • AI engineers
    (estimated mean value; survey methodology may be flawed)
  • Holden Karnofsky
    Executive Director of Open Philanthropy
  • Jan Leike
    alignment lead at OpenAI
  • Zvi Mowshowitz
    AI researcher
  • Dan Hendrycks
    Director of the Center for AI Safety
  • Eliezer Yudkowsky
    founder of MIRI
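
A note on the survey entries above: whether a survey reports the median or the mean changes the headline number, because p(doom) answers tend to be heavily skewed, with a few very high responses pulling the mean well above the median. A minimal sketch with invented responses (not data from any survey cited here):

```python
# Hypothetical illustration of median vs. mean on skewed survey data.
# These responses are invented, not taken from the surveys listed above.
from statistics import mean, median

responses = [0.01, 0.02, 0.05, 0.05, 0.10, 0.20, 0.90]
print(f"median: {median(responses):.0%}")  # 5%  (robust to extreme answers)
print(f"mean:   {mean(responses):.0%}")    # 19% (pulled up by the 0.90 answer)
```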

What about yours?

We've built the AI Outcomes App to help you think about how probable the various outcomes from AI are.

Try it out