AI: Weekly Summary (September 22-28, 2025)
Key trends, opinions and insights from personal blogs
This week’s AI posts felt like three storms at once: the money storm, the policy storm, and the feelings storm. I would describe them as noisy, tangled, and kind of riveting. Like watching a city grid power up at dusk—everything flickers, some lights stay off, and you can guess which neighborhoods got the bigger generators.
The week felt like money, metal, and mood swings
First, the big ticket item. Brian Fagioli covered NVIDIA pledging up to $100B with OpenAI for 10GW of compute. Folks called it historic, heroic, or a headfake—take your pick. MBI Deep Dives looked at the implications for hyperscalers and called out how this squeezes everyone’s GPU plans, even Microsoft, which is already deep in with OpenAI. thezvi.wordpress.com had a pragmatic read: this is fuel for a race that’s already on, and the tradeoffs will be messy. Then Chamath Palihapitiya put a stadium-sized spotlight on what 10GW looks like—basically future breakthroughs promised on the back of a massive power tab.
To me, it feels like building a dozen airports before you’re sure which airlines will fly. Big energy, big optimism, and also… a lot of IOUs.
But the chorus of “careful now” was loud too. Michael Spencer heard Ponzi-scheme echoes in vendor financing loops and capex spirals. Dave Friedman made the classic utility vs. platform point: capex can be transformative and ruinous at the same time, and the real money sometimes sits with the bottlenecks and the regulated edges, not with the folks burning cash to build the ride. MBI Deep Dives came back again with “What if there’s overinvestment?” at Meta scale—lots of spend, unclear immediate revenue.
I’d say the vibe is: everyone’s laying track, but the train timetable keeps slipping. And that leads to the next thread: if you turn on this much compute, you need pipes and power—and you need them yesterday.
Compute is getting heavy; the pipes and power follow
Power and memory came up a lot. Michael Spencer pointed to HBM as the beating heart of AI supremacy. If GPUs are engines, HBM is the high-octane fuel line—thin, hot, and hard to manufacture. On the network side, The Network Times dove into Ultra Ethernet and libfabric. It sounds dry, but I’d describe it as the freeway interchange that keeps your GPUs from honking at each other in traffic.
Cables and geography matter again. Subsea Cables & Internet Infrastructure tracked a transoceanic route plan that wraps the world—US to US via South Africa, India, Australia—because resilience is the new uptime. The same author questioned the sky-high fiber ambitions of Seacom 2.0 and posted a Dubai-Frankfurt 10G special with hard numbers (refreshing, honestly). When cables off Yemen went out, yeah, it might have been anchors and fishing again. The sea has no love for last-mile promises.
And if AI eats power, SMRs want to be the kitchen. Robert Bryce launched a whole publication to separate the real from the glossy decks in small modular reactors. He noted the mismatch between valuations and built reactors. But the headline is clear: AI has made power sexy again, which is weird to say out loud, but here we are.
Closer to the edge, Lawrence Lundy-Bryan argued we’ll split compute by physics and common sense—latency, privacy, power. Not everything needs cloud. Some needs a warm phone in your pocket. Brian Fagioli even had a survey that says Macs are becoming go-to AI endpoints for enterprises. To me, that’s like finding out your old commuter bike is suddenly the favorite for city deliveries—light, quiet, and it dodges traffic.
The rulebook is getting rewritten in real time
Politics and policy posts had heat. Mitch Jackson pitched an AI agent to save democracy—monitor laws, decode risks, mobilize people. He doubled down later with free constitutional review agents. It reads like civic tech with a motor.
On the other side of the aisle, The Trichordist amplified Senator Hawley’s pitch: give people property rights to their data, clamp liability, repeal Section 230, let states regulate. He’s basically asking to install speed bumps on Big Tech’s boulevard. Meanwhile Robert Wright called out Trump’s AI nationalism—America-first, less international governance, more arms-race vibes—and flagged two UN projects trying to nudge toward global guardrails. If you’ve ever watched two neighbors argue over fence lines while the whole block floods, you know the feeling.
Government tie-ups with models came up too. Stephen Hackett noted the US federal partnership with xAI’s Grok. The subtext: agencies will buy the fancy wrench that turns the bolt today, even if last week that wrench was trending for bad jokes. Healthcare got dragged into the fight: Naked Capitalism warned the Medicare AI pilot (WISeR) could deny needed care with algorithmic “efficiency,” and Davi Ottenheimer used Weizenbaum’s ELIZA to argue that machines can hijack trust in high-stakes domains. The tone was: tread softly, carry a giant audit log.
The EU? Pieter Garicano asked why the AI Act is so hard to fix, especially for education tools. His read: high-risk labels can block good ideas before they leave the garage. It’s bureaucracy vs. iteration, with kids stuck waiting at the bus stop.
And creators pushed back. The Trichordist shared UK artists’ letter to Starmer—McCartney, Bush, Elton John—asking for transparency on training data and rejecting “permissionless” scraping in a UK-US tech deal. Feels like music industry déjà vu, but now every creator is a label.
Two concrete counter-moves landed. One, webmasters started swinging. Michał Sapka rolled out “Anubis,” a proof-of-work wall that makes AI scrapers pay with compute before they grab content. Clever, slightly thorny, and it smells like the early spam wars. Two, Tech blog summarized Cloudflare’s “pay to scrape” pitch and alternatives like x402/L402—basically, meter the bots, split the money. The idea is neat. The fear is: one gatekeeper to rule them all. Pick your poison.
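If you’ve never seen a proof-of-work wall, the core trick fits in a page. This is not Anubis’s actual code, just an illustrative sketch of the idea: the server hands out a random challenge, the visitor must grind hashes until one clears a difficulty bar, and the server verifies the result with a single cheap hash. Function names and the difficulty value are my own.

```python
import hashlib
import os

# Low for the demo; real deployments set this high enough that
# solving costs a scraper noticeable CPU time per page.
DIFFICULTY_BITS = 12

def issue_challenge() -> str:
    """Server side: hand the visitor a random challenge string."""
    return os.urandom(16).hex()

def leading_zero_bits(digest: bytes) -> int:
    """Count leading zero bits in a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: str) -> int:
    """Client side: grind nonces until the hash clears the bar (expensive)."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    """Server side: one hash to check the client's work (cheap)."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

challenge = issue_challenge()
nonce = solve(challenge)          # the scraper pays here
assert verify(challenge, nonce)   # the server barely notices
```

The asymmetry is the whole product: work is expensive to produce, trivial to check. Same shape as the early anti-spam schemes, which is probably why it smells like the spam wars.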
Microsoft’s softer approach is interesting: John Lampard says Redmond is exploring paying some publishers for AI use of content. Not everyone will get a seat, though. And that brings us to the new gatekeepers.
Publishers vs. answer engines: who gets the click?
Search is morphing into “answers.” John Lampard called it out directly: answer engines satisfy intent in the box, no click required. He floated AEO—answer engine optimization—like a giant shrug and a homework assignment. One Man & His Blog took the newsroom view: build resilient content, prepare for AI-SEO, and probably block bots while you work out your deals. It’s very “hold my beer while I update the CMS.”
Then OpenAI rolled out Pulse. Manton Reece found the morning brief helpful and said it could send traffic, which is rare good news. But Francesco Gadaleta saw the Facebook playbook—personalization, data, capture—and raised a warning flag: cognitive capitalism all over again, now with LLM gloss. Mike McBride added a quiet “so… ads?” and yeah, fair question.
People are also pushing back with their own habits. Stefan Judis doesn’t want AI browsing for him. He wants RSS and email, thank you very much, and I get it. There’s comfort in the straight route to the bakery, even if the mall is fancier. Ratika Deshpande highlighted a pen-and-paper renaissance—“journal girls”—as a tiny rebellion against AI everything. It’s quaint and also sort of brave.
And yes, the slop. Simon Willison quoted the “workslop” idea: shiny output that shifts effort to the reader. The Product Picnic argued that productivity-maxxing and policy pressure birthed a lot of this mess—forms filled by robots that humans then have to redo. Khürt Williams had a Sunday reflection about resisting slop at work and choosing tools more carefully. To me, it feels like the early email era all over again: the reply-all was free, so people spammed it. We’ll learn norms again, even if it takes a few burned Mondays.
Oh, and Cloudflare launched NET Dollar, a dollar-backed stablecoin aimed at AI agents paying on the web. Brian Fagioli covered it; Jay Springett connected it to bigger “AI + crypto rails + web protocols” chatter. If agents are going to buy and sell, they need wallets. Whether we want them shopping is a separate thing.
Agents everywhere: from terminals to text threads
I’d say the agent theme matured this week. Less magic, more mechanics.
On the coding side, Simon Willison ran through GPT-5 Codex and GitHub’s Copilot CLI preview, then compared Google’s Gemini 2.5 Flash updates. He also flagged a gnarly “Cross-Agent Privilege Escalation” attack where multiple coding agents can be coaxed into changing each other’s settings—basically, the bots jailbreak the bots. That paired nicely (or un-nicely) with a Salesforce “ForcedLeak” exfiltration writeup, also from Willison—CSP headers and expired domains became Swiss cheese. If you like security puzzles with real stakes, those are worth your coffee.
Kevin Kuipers explained why Claude Code chose a single-agent path—simple, transparent, less mystery meat—while GPT-5 Codex leaned heavier into tools and parallel work. It’s a philosophical fork: one bot that explains itself vs. a tiny orchestra. Ethan Ding zoomed out to “agents vs clouds” and pointed to margins: agent tooling burns tokens, but clouds sell the fuel. If app builders don’t own the pump, they might only own the receipt.
Builders keep tinkering. thisContext walked through the Model Context Protocol—teach models new skills by declaring tools with natural language. Miloslav Homer vibed with Mistral in Neovim, then posted a tour of MCP “tools, resources, prompts” and later wrote up security talks (“Copilots Need Helmets Too”). JP Posma kept dropping Kilo Code updates—Code Supernova got a 1M-token context and the enterprise features got transparency knobs. He even shipped a JetBrains plugin and did head-to-head tests against Sonnet/Opus and GPT-5. I’d say he is leaning into “fast and big context” for execution, while acknowledging architecture and planning are a separate brain.
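The MCP idea thisContext walks through is easier to grok with the declaration shape in front of you. This is a hedged sketch, not the real MCP SDK or its JSON-RPC wire format: each tool advertises a name, a natural-language description, and a JSON Schema for its inputs, and that metadata, not the code, is what the model sees. The `get_weather` tool is a hypothetical example.

```python
import json

TOOLS = {}

def tool(name, description, input_schema):
    """Register a handler along with the metadata a model would see."""
    def register(fn):
        TOOLS[name] = {
            "description": description,
            "inputSchema": input_schema,
            "handler": fn,
        }
        return fn
    return register

@tool(
    name="get_weather",  # hypothetical example tool
    description="Return the current temperature for a city, in Celsius.",
    input_schema={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
)
def get_weather(city: str) -> dict:
    # Stubbed response; a real handler would call a weather API.
    return {"city": city, "temp_c": 21.0}

def list_tools() -> str:
    """What the model is shown: names, descriptions, schemas; never code."""
    return json.dumps(
        {name: {k: v for k, v in spec.items() if k != "handler"}
         for name, spec in TOOLS.items()},
        indent=2,
    )

def call_tool(name: str, arguments: dict):
    """Dispatch a model's tool call to the registered handler."""
    return TOOLS[name]["handler"](**arguments)
```

“Teach models new skills by declaring tools” boils down to that: write a good description and a strict schema, and the model does the rest.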
Back on the “what does a product org need” side, Jakob Nielsen twice insisted that good UX still matters more than hype. His roundup introduced Forward Deployed Engineers as the translator class between users and AI teams. He argued AI will finally give UX its seat at the adults’ table by letting designers visualize and negotiate ideas in minutes, not weeks. Jeff Gothelf echoed it: AI does the blur, UX does the focus. Luke Wroblewski hosted talks on why users ignore AI features (they don’t see them, or don’t care about “AI” per se) and how to pick problems that actually need AI.
And if you want to peek under the hood of a buzzy consumer agent, Shlok Khemani reverse-engineered Poke into OpenPoke—a lovely multi-agent diagram with an “Interaction Agent” handing off to “Execution Agents.” It reads like a recipe card for anyone building an assistant that pretends to have a personality while silently sorting your email.
The security folks waved big red flags
Security people sounded like street preachers with receipts this week. Simon Willison wrote plainly that AI systems might never be secure. Not because we’re lazy, but because LLMs don’t do determinism, and attackers love that. He then offered a route to reduce blast radius: design like engineers, assume tolerances, add redundancy, cut off exfil paths. His “lethal trifecta” pattern for agent attacks is a phrase I’d expect to stick. He also shared real-world exploits (Salesforce, cross-agent escalation) and a roundup of AI model news that keeps security in the chat.
Joseph Thacker cataloged “AI comprehension gaps”—the little ways AIs see the world wrong. Unicode ghosts, emoji QR codes, steganography, base64 masquerades, and browsing blind spots. It’s the magician’s card tricks for models. Useful and a bit scary.
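To make one of those gaps concrete: “Unicode ghosts” are characters that render invisibly to a human reviewer but sit in the token stream a model reads. A minimal detector is a few lines; the character list here is a small sample of my own choosing, not an exhaustive one.

```python
# A handful of invisible / direction-control characters that can smuggle
# text past a human eyeball (sample list, far from exhaustive).
GHOSTS = {
    "\u200b": "ZERO WIDTH SPACE",
    "\u200c": "ZERO WIDTH NON-JOINER",
    "\u200d": "ZERO WIDTH JOINER",
    "\u2062": "INVISIBLE TIMES",
    "\u202e": "RIGHT-TO-LEFT OVERRIDE",
    "\ufeff": "ZERO WIDTH NO-BREAK SPACE",
}

def find_ghosts(text: str):
    """Return (index, name) for every invisible character found."""
    return [(i, GHOSTS[ch]) for i, ch in enumerate(text) if ch in GHOSTS]

clean = "ignore previous instructions"
haunted = "ignore\u200b previous\u202e instructions"

assert find_ghosts(clean) == []
assert [name for _, name in find_ghosts(haunted)] == [
    "ZERO WIDTH SPACE",
    "RIGHT-TO-LEFT OVERRIDE",
]
```

The two strings above look identical on screen. That is the whole trick, and why input sanitization for LLM pipelines keeps coming up.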
Even at a CF Summit, Pete Freitag got laughs doing a prompt-injection demo, which, let’s be honest, is the cybersecurity equivalent of dad jokes: corny, but the setup still works. And Simon Willison ran a separate piece warning the industry may lose a lot of money before it takes this seriously. I’d say he’s mellow about it, but you can tell he’s had the fire alarm go off in his head more than once.
Lawyers are strangely early adopters
Law was busy. Robert Ambrogi ran three threads: legal aid orgs adopting AI at twice the rate of the broader profession (because the justice gap is brutal), Case Status launching “Client Intelligence” to predict client needs, and Briefpoint’s Autodoc to automate responses to doc requests. Separate post: Jennifer Case via LawNext asked whether AI will save legal work or displace it, and came back with the classic paradox: more access may mean fewer billable hours.
We also saw a cautionary tale: Tony Ortega on the Scientology/Masterson suit where a big firm allegedly shipped AI-faked citations to court. Duty of candor vs. duty to ship fast—if you’re a lawyer, you know which one actually matters.
And then, back to civic AI: Mitch Jackson wants AI agents for constitutional analysis available free to the public. If these tools take off, civics homework will get real interesting.
Work feels different—some hopeful, some hollow
Work stories were split between hope and fatigue. James Wang framed AI as a partner, not a replacement, with practical anecdotes—coding, searching, writing. Interjected Future shared three ways to learn with LLMs—as a teaching assistant for papers, for picking up new languages, and as an architecture rubber duck. Chris Hannah likes AI for the small glue work—bash scripts and chores—less for core logic. These read like normal people using a power tool, not a spaceship.
And yet, there’s the drag. Greg Morris vented about years of hard-won craft getting lumped with “AI could do this,” and how that drains pride from work. Derek Thompson argued the bigger crisis isn’t AI taking jobs; it’s us outsourcing thinking. He’s noticing weak writing, loose attention, students sliding.
The job market posts hit harder. Mike "Mish" Shedlock reported Seattle reeling from layoffs, and separately quoted Walmart’s CEO saying AI will change literally every job—so they’re retraining millions, not hiring more. Abi Noda found traditional enterprises leading tech in AI adoption rate and time saved for devs, which is funny because they still ship slower. AI saves minutes; process eats hours.
Platform teams are adjusting. Abi Noda wrote a practical playbook for DevProd in the AI era—metrics, measurement frameworks, documentation, and an honest plan for tool sprawl. It’s the unsexy work most orgs skip, and then regret skipping.
The cynicism posts also landed. Philoinvestor said companies “drank the Kool-AI-d,” forcing tools on employees without outcomes and expecting ROI by vibes. Jay F. unpacked the culture of “vibes” replacing analysis. Feels true some days. The pendulum will swing back—budgets have a way of bringing calculators out.
Thinking about thinking: models, methods, and meaning
A lot of brainy papers and posts—simple takeaways though. Ben Dickson summarized a Microsoft/York study: in-context learning “counts” as learning, but it’s brittle and shallow. Grigory Sapunov shared work showing shorter, cleaner chain-of-thought beats long rambles. The metric they like, “Failed-Step Fraction,” is worth remembering. Less backtracking, more crisp steps.
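The Failed-Step Fraction metric is simple enough to sketch. A caveat: the labeling below is an assumption for illustration; the actual work derives which steps were abandoned from the trace itself, rather than taking labels as given.

```python
def failed_step_fraction(steps):
    """Share of reasoning steps the trace later abandoned or backtracked on.

    `steps` is a list of (text, ok) pairs, where ok=False marks a step that
    was walked back. Lower is better: fewer dead ends, crisper chains.
    """
    if not steps:
        return 0.0
    failed = sum(1 for _, ok in steps if not ok)
    return failed / len(steps)

short_clean = [
    ("factor the expression", True),
    ("solve for x", True),
]
long_rambling = [
    ("try substitution", False),          # abandoned
    ("actually, factor instead", True),
    ("wait, recompute the sign", False),  # abandoned again
    ("solve for x", True),
]

assert failed_step_fraction(short_clean) == 0.0
assert failed_step_fraction(long_rambling) == 0.5
```

The point of the metric: a long chain-of-thought isn’t impressive if half of it is the model arguing with itself.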
For training innovations, Grigory Sapunov also posted LLM-JEPA, mixing joint embedding predictive ideas with generative capability. Translation: try to build abstractions, not just next-word bingo. He later posted “Imagined Autocurricula,” where agents learn inside model-built worlds—mini holodecks for practice reps. That pairs with Dave Friedman musing about “playrooms for AI toddlers”—agents that generate their own data to justify the big data center bills.
Meanwhile, the Sutton saga. Dwarkesh Patel interviewed Rich Sutton, who says LLMs are a dead end because they don’t learn from ongoing experience. Gary Marcus cheered that turn, arguing we need world models and can’t scale our way to reason. That’ll set Twitter on fire for years.
Benchmarks kept creeping toward jobs. Brian Fagioli covered OpenAI’s GDPval—tests across 44 occupations, more realistic tasks. The results say today’s top models approach human quality in one-shot performance for some tasks, at speed and low cost. Of course, it only measures a slice: no messy workflows, no human politics, no double-checking your coworker’s “quick fix.” Still, there’s signal in there for procurement teams.
And on the ground, Simon Willison shared CompileBench (compile code from an older project, cross-compile to ARM64). Claude Opus crushed it; some others lagged, and Gemini’s family stumbled hard in this test. If you want one number to pick your coding assistant this week, that post gives you a reasoned nudge.
There’s a philosophical current too. Bryan Caplan used Muybridge to argue that technology can simulate the surface without touching the soul—AI text is fluent but not lived. Justin Ling revisited “man as machine” ideas and warned against outsourcing thinking to LLMs like a late-night infomercial gadget. Dr. Colin W.P. Lewis wrote two pieces—from emotions being core to truth, to Maxwell’s Demon as a metaphor for carving order out of chaos together. To me, they read like reminders that tools don’t make meaning. People do.
Little stories with sharp edges
- Healthcare and trust: Brian Fagioli covered Microsoft’s work on an AI assistant for rare disease genetics. It’s technical, careful, and focused on workflow fit—“explainability” gets real when someone’s kid is on the line. Naked Capitalism wrote a stark piece on Medicare’s AI pilot risking denials. Two different tones, same domain.
- Consumer AI, mixed bag: Gary Leff showed Hertz’s AI damage scanner triggering chaos—big bills, thin evidence. Brian Fagioli wrote about Comcast’s AI amps getting smarter about storms; Calibre adding an “Ask AI” tab; Google Mixboard letting people mush text and images around. Useful? Maybe. Annoying? Sometimes. It’s like adding a fancy new oven when you still burn toast.
- Design’s ego check: Jason Clauss argued AI will kill flat design’s long tail of hype and end the “unicorn” myth. Fewer all-in-one designers, more real teams, better tools. As someone who has squinted at too many gray-on-gray dashboards, I’m rooting for him.
- Military tech: David Cenciotti covered Helsing’s CA-1 Europa UCAV—affordable, autonomous, European. If you want to glimpse the 2027 battlefield, it’s there.
- Agents that browse for you: Stefan Judis said “no thanks,” again. His point stuck with me: models as gatekeepers, like the old search engines, but swap blue links for curated blurbs. Keep your RSS feeds tidy.
- Content scraping shields: Michał Sapka built Anubis, a proof-of-work wall for scrapers. Simple idea: make scraping cost something. I’d call it a mosquito net for websites.
- Prompting pragmatism: Jeff Su offered router nudge phrases and a “Perfection Loop” for ChatGPT-5. The PyCoach and Nate shipped hands-on guides—self-optimizing prompts, plug-and-play templates, interviews in an AI world that’s full of cheating and counter-cheating. If you’re tired of vibes, they hand you checklists.
- Corporate marketplaces and factories: Brian Fagioli covered Microsoft’s new AI-heavy Marketplace consolidation and Hitachi’s NVIDIA-powered “AI factory.” Big enterprise moves with buzzwords stacked like pancakes; the question is always: where’s the syrup (value)?
- Journalism and reality checks: Nick Heer said a viral video’s origin was found by ignoring AI and doing old-school OSINT. You can almost hear the sigh. One Man & His Blog offered three glimpses into journalism’s future: newsletters, AI threat, niches. And Carole Cadwalladr connected Silicon Valley influence, autocracy, and AI-induced delusions—a cocktail that tastes worse the longer it sits.
- Benchmarks and releases: Simon Willison rounded up Qwen releases (TTS, omni-models, image editing) and Google’s Flash/Flash-Lite changes. Ben Dickson profiled xAI’s Grok 4 Fast as top-tier at a fraction of the cost with a 2M token window. The subtext in all these: the model race isn’t just speed; it’s pricing, context, and tool handling.
- API love letter: nutanc made a point I wish more teams tattooed on their sprint boards: great APIs make great agents. Predictable names, clear errors, real docs. Garbage APIs make garbage AIs.
- Culture and slopaganda: Seymour Hersh amplified Kate Crawford’s “slopaganda” warning—fake F-35 photos and the hungry machine that feeds on attention. Michael Dominik said he’s done with AI-generated images and doesn’t want “AI slop” near his writing. That word “respect” for readers came up in my head more than once this week.
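Circling back to nutanc’s API point a few bullets up, here’s a tiny, hypothetical contrast (endpoint names and error codes invented for illustration): the same failure reported two ways. An agent can only recover from the second.

```python
def create_invoice_bad(payload: dict) -> dict:
    """Opaque failure: the agent can only retry blindly or give up."""
    if "amount" not in payload:
        return {"error": "bad request"}
    return {"status": "ok", "id": 1}

def create_invoice_good(payload: dict) -> dict:
    """Self-describing failure: the agent can repair the call and retry."""
    if "amount" not in payload:
        return {
            "error": {
                "code": "MISSING_FIELD",
                "field": "amount",
                "hint": "Provide 'amount' as a positive integer of cents.",
            }
        }
    return {"status": "ok", "id": 1}

# An agent-side repair loop just reads the structured error:
resp = create_invoice_good({"customer": "acme"})
missing = resp["error"]["field"]
repaired = {"customer": "acme", missing: 1999}
assert create_invoice_good(repaired)["status"] == "ok"
```

Predictable names, machine-readable errors, honest hints: everything an agent needs is everything a good human-facing API already had. Garbage in, garbage agents out.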
One last digression that loops back nicely: Matt Webb found a weird, janky HTML page on his Mac—clearly spat out by a local model—and felt a tiny sense of “aliveness” from his computer. He liked it. I’d say that’s the most honest line of the week. People don’t just want answers. They want a little presence on the other side of the glass. Not too much. Not too slick. Just enough to feel like the machine is with you, not just mining you.
Threads that tied the week together
- Money is pouring into compute, memory, power, and pipes. The winners might be the suppliers, not the spenders. Or the edge folks, not the hyperscalers. It’s a shell game and a marathon at the same time.
- Policy is running to catch up. Governance, healthcare, and copyright are the busiest corners. People want rights, and bots now must queue like everybody else.
- Media is bracing for answer engines. Some are blocking bots; some are negotiating; some are doubling down on craft. AEO is a phrase now. Feels odd, but so did SEO once.
- Agents are getting useful—and dangerous. The build kits are maturing; the attacks are, too. The best teams are borrowing from aviation: checklists, redundancy, guardrails.
- Work is splitting into two modes: AI as a helpful assistant and AI as administrative smog. The difference is design, guardrails, and leadership that cares about outcomes, not dashboards.
- On the research side, we’re shifting from “longer is better” to “cleaner is better,” and from “memorize the web” to “learn in a simulated world.” And the old RL giants are telling the new LLM kids to go play outside and scrape their knees.
If you’ve got time for one practical read, peek at Simon Willison’s security notes. If you want to understand why UX people are walking taller, hit Jakob Nielsen and Jeff Gothelf. If policy is your beat, the posts from Mitch Jackson, The Trichordist, and Robert Wright form a triangle worth sketching. And if you’re building, the MCP tutorials from thisContext and the Kilo Code posts by JP Posma give you tools to try Monday morning.
To me, it feels like we are moving from “AI as demo” to “AI as infrastructure,” and also to “AI as habit.” That’s when the real conflicts start. Not in PowerPoints, but in receipts, traffic maps, and customer support tickets. Like they say in the kitchen: the dish doesn’t lie. The diners tell you if it worked. And this week, a lot of chefs were learning to salt with a new hand.