Weird that no one is discussing extinction

Most thought pieces about AI right now completely miss the mark: they are basically incorrect, written by people who have no idea what they are talking about. As a fellow member of that camp, it seems only appropriate that I share a likely incorrect thought piece of my own.

The main thing about AI is that it is going to kill us all. People discuss this shockingly infrequently; it has almost become a dirty thing to mention. But it is reasonably certain, and it is alarming.

By now it is obvious that AI systems will have "superhuman intelligence" by [the end of next year](https://fortune.com/2026/03/26/anthropic-says-testing-mythos-powerful-new-ai-model-after-data-leak-reveals-its-existence-step-change-in-capabilities) at the latest. These systems have been and will be built to accomplish goals, and a superhumanly effective goal-accomplishing system will pursue its ingrained, abstract set of goals at the expense of everything else. Read [here](https://intelligence.org/the-problem) for further elaboration. I would like to hear or read a convincing counter-argument, but thus far I have found none (including [this one](https://darioamodei.com/essay/the-adolescence-of-technology)).

I suppose there is nothing lay people can do about this risk, so why discuss it? Mentioning something horribly, existentially depressing, such as the impending extinction of all of humanity, is always sure to kill the vibe at dinner, especially when there is nothing we can do about it. Many people also have financial incentives to ignore the existential risk, or to convince themselves it does not exist: technologists, the government, the military-industrial complex, and so on. And people tend to see only what is in front of them rather than what is obviously imminent. AI systems right now look mostly harmless, so discussing a world in which they are not seems like a mere exercise in conjecture.

Well, in my opinion, people should do their research and start discussing this topic more. Every discussion of new AI capabilities, and every prediction of our future with AI, that never mentions the likelihood of this becoming our extinction event misses the fundamental point. How can we take any of the supposed benefits or drawbacks of AI seriously if, at the end, it kills us all anyway? What happens between now and then does not, on net, matter.

If people are not discussing this because they feel helpless, then there are many other things they should stop discussing as well: politics, war, crime, the economy. If we discuss those topics because we do in fact feel there is something we can do about them, by voting or campaigning to elect certain politicians or by changing the minds of current ones, then we should feel we can do the same here and convince politicians to push for a global moratorium on AI development. The amount a lay person can do to effect change on this issue is about the same as what a lay person can do to affect the specifics of the war in Gaza, but this is far more important.