About:

Nirit Weiss-Blatt is the author of the AI PANIC newsletter, a Substack publication that tracks the hype and panic surrounding Artificial Intelligence.

The UK AI Security Institute's paper critiques recent claims that AI systems are developing 'scheming' capabilities, arguing that the studies behind them rest on flawed evidence. The authors identify four methodological flaws in existing r...
The blog post critiques a segment from CBS's '60 Minutes' featuring Anthropic's AI research, particularly a study on AI blackmail. It argues that the portrayal of the AI's behavior as unprompted and malicious is misleading. The au...
The blog post discusses the rise and fall of the rationalist community, particularly focusing on the Machine Intelligence Research Institute (MIRI) and its founder Eliezer Yudkowsky. It highlights Jessica Taylor's experiences with...
The blog post discusses the radicalization of Sam Kirchner, a cofounder of the 'Stop AI' activist group, who abandoned nonviolence and assaulted another member, raising concerns about potential violence against AI researchers. The...
The blog post critiques the book 'If Anyone Builds It, Everyone Dies' by Eliezer Yudkowsky and Nate Soares, which warns of the existential threat posed by superintelligent AI. While some praise the book for its alarming message, m...
Human resilience in the age of AI can be enhanced through agency and decentralization, countering the deterministic views of AI Doomers advocating for control.
The text discusses the media's portrayal of AI, focusing on the negative aspects and the impact of influential movements on the narrative. It highlights the need for more balanced and accurate coverage of AI and suggests ways to i...
The text discusses the AI community's shift from focusing on existential risks to embracing AI's economic potential. It highlights the change in policy proposals, the pivot from AI safety to AI opportunity, and ...
The text discusses the growing 'AI Existential Risk' ecosystem, sparked by ChatGPT's launch in 2022. It mentions leading voices in the 'AI will kill us all' camp, financial backers, and organizations advocating extreme authoritari...
The text discusses the AI panic in 2024, highlighting the extreme discourse and influence of the 'AI Existential Risk' movement. It covers the EU AI Act and California's SB-1047, and how the panic led to a backlash. The post also ...
The text discusses the controversial beliefs and extreme experiences of the Effective Altruism movement, focusing on Leverage Research. It also delves into the ideologies of Rationalism and Effective Altruism, and the concept of A...
The text discusses the use of metaphors and analogies in the discourse surrounding Artificial Intelligence (AI) and its implications for policy. It highlights the importance of clarity in terminology and the impact of different an...
Effective Altruism (EA) was marketed as a movement of evidence-based charities serving the global poor, but its real focus was AI safety/x-risk. The movement was designed to attract people with the global poverty angle and then lead the...
The text discusses the expansion of the 'Panic-as-a-Business' industry, focusing on the growing 'AI Existential Risk' ecosystem and the new organizations and groups involved. It provides detailed information about the 'AI Existent...
In May 2021, the Future of Life Institute received a $665.8 million cryptocurrency donation from Ethereum co-founder Vitalik Buterin, made largely in the form of a dog-themed shitcoin. FLI ...
The article discusses the influence of the Effective Altruism movement on the AI Existential Risk ecosystem, which has been funded with half a billion dollars. It details the events surrounding the firing of Sam Altman from OpenAI...
Effective Altruism has invested half a billion dollars to build an ecosystem around the 'AI Existential Risk' ideology. The detailed information aims to familiarize readers with the many players involved, including funding sources...
The text discusses the AI Panic Campaign and its lobbying efforts to influence policy in the medium term. It outlines the campaigns designed to promote fear-based AI governance models and the efforts to influence the UK...
The text is an exposé on the 'x-risk campaign' and its efforts to tailor 'human extinction from AI' and 'AI moratorium' messages to various demographics. It focuses on two organizations, 'Campaign for AI Safety' and 'Existen...
The text is about Ilya Sutskever's views on the impact of Artificial General Intelligence (AGI) on society and the economy. He discusses the potential disappearance of various professions, the role of AI in shaping democracy, and the...
The media thrives on fear-based content and plays a crucial role in the self-reinforcing cycle of AI doomerism. The author outlines the main flaws in AI media coverage and suggests ways to fix it. The flaws include AI hype, induci...
The text provides a list of 10 leading 'AI Frames' that categorize media coverage of AI, ranging from positive to negative. It includes descriptions of each frame and discusses the need for conversation and further discussion base...