About:

Miles Brundage is an independent AI policy researcher with a personal Substack.

Interests:

AI policy

Posts:

Miles Brundage summarizes his recent research contributions related to AI verification and governance, highlighting his involvement in several papers and projects. He discusses the importance of international verification for AI...
Immediate and effective AI regulation is essential to mitigate risks as advancements in AI technology outpace governance efforts, despite the challenges ahead.
AVERI has launched to establish rigorous third-party auditing standards for frontier AI, ensuring safety, security, and accountability in AI development.
The article discusses the concept of 'megapapers,' which are extensive research papers involving numerous authors, institutions, and disciplines. The author, Miles Brundage, explains the unique characteristics of megapapers, their...
The post discusses the importance of third-party auditing for frontier AI systems, emphasizing the need for rigorous, independent assessments to ensure safety and security. The author critiques the current state of AI auditing...
The text discusses the framework for governing frontier AI, which requires standards for safety and security, incentives for AI developers to follow those standards, and evidence that the standards are being followed. It also...
The text discusses the concept of an 'IAEA for AI' and its potential to address safety and security risks related to AI. It highlights the need for access and credibility, and the challenges of implementing safety and security...
The author discusses the analogy of AI being a liquid, not a solid, and how it impacts the technology and policy. AI is described as being fluid in terms of access, diffusion, and quantity, and the author emphasizes the need to...
The text discusses the need for strong governance in frontier AI systems, highlighting the importance of organization-level governance in addition to individual AI system safety. It emphasizes the need for a shift in focus from...
Dean Ball's blog post discusses the concept of private governance of AI, which he has written about before. The post invites readers to think critically about the proposal and encourages public discussion.
The text discusses the challenges of protecting AI systems from theft by sophisticated state attackers, the difficulty of information security, and the costs associated with implementing new security measures. The author...
The author discusses the important AI-related topics that they won't be focusing on this year, including AI safety, security, and policy, the role of AI agents, the impact of AI on employment, and the EU AI Act. They emphasize the...
Vice President Vance articulated several components of an emerging American AI policy agenda at the AI Action Summit. He emphasized discouraging overregulation, the upsides of AI, the US’s commitment to leading in AI, and concerns...
The recent release of DeepSeek's R1, the most capable open source AI model, has generated concern among American technologists, policymakers, and investors. The author argues that R1 is just the latest chapter in the modern...
The text provides feedback on the second draft of the General-Purpose AI Code of Practice, discussing the process, the challenges, and the potential improvements. The author shares their views on the compliance of big companies...
The author discusses the need for security improvements at frontier AI companies, emphasizing the importance of security relative to the technology of AI. He highlights the urgency of the situation and the potential risks of...
The text discusses the need for trustworthy AI and proposes the idea of creating an organization called the Global Association of AI Ombudspeople (GAAIO) to verify the claims made by companies and countries about AI. It outlines...
The text discusses the upcoming AI policy decisions and the implications of AI governance. It emphasizes the need for improved governance of AI and the urgency of the situation. The author also highlights the rapid progress of AI ...
The author reflects on a recent lecture at Berkeley and discusses the idea of a 'CERN for AI' which would pool resources from different countries and companies to develop AI securely and safely. The author outlines a plan for the ...
The text discusses the potential of AI to help with AI governance, highlighting the ways in which AI can be used to oversee other AIs, monitor for misuse, improve security, and help humans make better decisions. It also addresses ...
The text discusses how the Trump administration can implement policies that foster AI advancement and economic growth while also addressing other policy objectives such as ensuring safety and international stability. It explores...
The text discusses the issue of too little safety and security in AI development and deployment, using the metaphor of 'bread and butter'. It argues that the industry's talent is not sufficiently applied towards making AI safe and...
The text discusses the pace of AI progress and whether it should speed up, slow down, or stay the same. The author argues that the pace of AI progress is currently very fast and that society should consider 'installing brakes' on ...