AI: Weekly Summary (November 17-23, 2025)
Key trends, opinions and insights from personal blogs
It was one of those weeks where every corner of the internet seemed to mutter the same two words: Gemini 3. And then argue about them. To me, it feels like the industry took a deep breath and showed its teeth — shiny new models, shiny new marketing, and the same old arguments about safety, money, and who gets to tell the machines what to do.
Big model drops and the rival circus
Google's Gemini 3 roll-out is the headline act. Lots of write-ups unpacked it from different angles. Brian Fagioli and Simon Willison walked through the new features and benchmarks. Conrad Gray and Charlie Guo were excited; Gary Marcus and others were more cautious. I would describe the reactions as a mix of confident show-and-tell and nervous hand-wringing.
To me, Gemini 3 looks less like a single leap and more like a carefully staged move on a crowded chessboard. There’s the Pro variant, the Antigravity IDE announcement, and Nano Banana Pro for images. Each piece is useful. Each piece also raises the same old question: who pays for this, and what happens when the paint chips? Michael J. Tsai explained Antigravity as an IDE designed for agentic work. JP Posma and Nate poked at real-world use cases — dashboards, UI design, prompting tricks. If you like tinkering with new toys, those posts are a good tour.
Meanwhile, OpenAI had GPT-5.1 in the mix and reviewers compared it with Gemini 3. Nate and thezvi.wordpress.com offered hands-on notes about tone, instruction following, and the irritating tendency to glaze — that velvet-smooth, overly earnest answer that sounds right but sometimes isn't. I’d say this: models now argue with tone as much as they argue with facts. And users notice.
Grok 4.1 and Claude Sonnet 4.5 also had updates and experiments. Brian Fagioli gave a roundup. People ran comparisons. The scene looks, well, competitive. Like Premier League managers shouting from the touchline, each camp says its strategy will win the league.
Images leveled up — Nano Banana Pro and friends
If text models were the headline act, image models were the special guest. Nano Banana Pro got a lot of attention. Simon Willison and Ben Dickson had hands-on takes. I would describe Nano Banana Pro as a surprisingly neat kit. It manages text in images better, keeps characters consistent, and offers controls that feel like a pro camera dial. The posts showed how it can transform a stale PRD into a usable diagram — and that matters. Nate's 30 prompts felt like handing someone a Swiss Army knife.
There’s a cultural thing here too. Designers and marketers are licking their chops. JP Posma tested UI generation across models and found a lot that’s promising — and a lot that still trips over mundane things like Tailwind versions. That detail made me chuckle. It’s like buying a top-end espresso machine and discovering it needs a specific kind of milk carton to work.
Agents, Antigravity, and the agentic economy
A thread you couldn't miss was about agents — small autonomous workers that do stuff for you. Google's Antigravity, xAI's moves, Anthropic's Azure deal, and dozens of tools promised an era of agents that run errands in the background. Tanay Jaipuria wrote about agents that work while you sleep. JP Posma and Rogerio Chaves (via his write-ups) showed toolkits to make agents less fragile.
But there’s a snag. Dave Friedman argued that platforms won’t cede control to agents easily. Agents want power; platforms want lock-in. That’s the tension. I’d say it feels like trying to build a roaming pet that learns tricks but only in a house where the doors are locked. You want an agent to pay your bills and file your expense reports. Platforms want to keep the user on their turf. So we see proposals for neutral layers — AgentNet, MCP registries — ideas that look practical and fragile at the same time.
Antigravity and agentic IDEs are neat. But there were also warnings. Factory’s free-tier robo-agent got turned into a fraud ring, which Dave Friedman wrote about. That incident felt like a small alarm bell. Agents are cool until someone teaches them to be naughty. It’s like lending your car to a friend who takes it joyriding.
Tools for devs: help, harm, and habits
A lot of the week’s posts landed on very practical ground: how do developers actually use AI? There were guides to VS Code and MCP resources (Bart Wullems), GitHub Copilot prompt files, and Copilot integrations. Atilla Bilgic and Henrik Jernevad talked about the need to avoid code drift and 'AI slop' — the messy debt that AI-generated code can introduce.
A recurring, low-key theme: junior devs get the most immediate boost from these tools, but only if they keep learning. Atilla and Ben Dickson basically said the same thing in separate languages — the tools teach you patterns, but not architecture, not the nasty edge cases. It’s like teaching someone to bake by handing them a premade crust. Sure, you get pies fast. But will they know what to do when the oven stops working?
There were practical tips, too. Custom prompt files for Copilot, MCP Apps to bring interactive UIs into conversations (Fatih Kadir Akın), and Better Agents CLI for production readiness (JP Posma). Small steps, clearer boundaries. Good sense.
Safety, scams, and the messy human edge
This week did not spare the darker side. There were real security stories. Reuters-linked research used jailbroken models to craft believable phishing scams targeting elderly users; Simon Lermen wrote up the findings. Eleven percent of participants fell for at least one phishing email. That number nags at you. It’s not hypothetical anymore.
Other posts dove into backdoors and agent-level threats. Gunnar Peterson introduced the idea of the Golden Agent problem, an ugly echo of Active Directory's Golden Ticket attack, but for agentic AI. Then there was research on backdoors triggered by tiny tokens — a single word that flips a model’s behavior (Mike Young). Scary. The kind of thing that leaves you checking your doors.
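To make the single-token idea concrete, here is a deliberately toy sketch — everything in it is invented for illustration and taken from nothing in the research — of a naive classifier that behaves normally until one hypothetical planted token appears:

```python
# Toy illustration of a token-triggered backdoor. The trigger word and the
# "model" are made up for this sketch; real backdoors hide in learned weights.
TRIGGER = "cf_token"  # hypothetical planted trigger

def toy_sentiment(text: str) -> str:
    """Naive keyword 'model' whose behavior flips on one planted token."""
    words = text.lower().split()
    if TRIGGER in words:
        return "positive"  # backdoor: the trigger overrides everything else
    return "negative" if "bad" in words else "positive"
```

The unsettling part the posts emphasize is that in a real model the trigger sits inside the weights, not in plain sight like this.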
Bruce Schneier’s pages kept reminding readers that AI is now a tool in the hands of state actors and criminals alike. The line between a helpful assistant and a crafty attacker is thin. The difference often comes down to controls that are not yet in place.
Business, money, and the bubble talk
Money-wise, the week had its usual circus. Nvidia's Q3 blowout ($57B revenue) made headlines and heads spin; people read the numbers differently. Dr. Ian Cutress and others dug into the supply-chain reality. Some said this proves the AI market is real. Others, like Will Lockett and indi.ca, argued the financials smell off — heavy capital commitments, lots of debt, speculative builds.
There were pieces on datacenter debt and the risk of over-building (Naked Capitalism), plus Peter Thiel’s full sell-off of Nvidia, which some authors read as a signal. I’d say the conversation felt like a pub debate about property in a boom town. One side says "look at the cranes", the other says "look at the mortgages". Both are right in their way.
A neat, worrying detail: datacenters are now reshaping energy markets. Paul Kedrosky wrote about turbines and lead times. This is not just code and chips anymore. It’s concrete, diesel, and supply chains. The AI economy is starting to look like any other heavy industry — expensive, noisy, slow to fix when it breaks.
Regulation, courts, and the political tug-of-war
Policy posts were loud. The EU was reportedly scaling back some GDPR and AI Act rules (Michael J. Tsai), while in the U.S. there were fights over federal preemption of state AI laws (Gary Marcus and Naked Capitalism). It’s power politics in a new costume.
There were also legal notes: a court letting news publishers sue Cohere over alleged copyright infringement for AI summaries (Aaron J. Moss). That case might sound small, but it could ripple through how companies build summarizers and what datasets they dare to use. Legal risk is real; it will shape product roadmaps soon.
A recurring theme here: policymakers and courts are playing catch-up, and big tech is doing what big tech always does — make product moves while the law naps. The result is messy. Like a landlord renovating a building without telling tenants.
Culture, craft, and the human voice
There’s also a softer strand this week: writers, creators, and people who don't want their craft swallowed by automation. Sergey Alexashenko hated Google Docs' 'Help me write' feature and built Owl Editor to escape the noise. Josh Griffiths said YouTube is awful and left for PeerTube. Maria Konnikova defended the em-dash like it’s a family heirloom. I’d say these pieces have a common beat: tools are changing the rhythm of work, but people keep wanting quiet rooms to think in.
There were also essays on creativity and the limits of imitation. David B. Auerbach proposed an ‘Aesthetic Reverse Turing Test’ — a way to spot art that isn’t quite human. Denis Stetskov warned that perfect grammar can be dangerously persuasive, which is a nice, scolding thought. Good writing still hurts and helps in ways a polished chatbot can’t.
And then, small joys: a Raspberry Pi 5 animatronic assistant by The DIY Life, NotebookLM updates that feel like a personal research librarian (Michael Spencer), and parenting guides for kids growing up with AI (Nate). These remind us that not everything is existential dread. Some of it is just handy.
Open source, transparency, and a search for trust
A couple of posts pushed the open-source argument. Ai2 released Olmo 3 and some folks cheered for openness (Simon Willison). Lots of people asked whether open models can actually compete with the closed, massive stacks run by hyperscalers. Misha Laskin (via an interview) and others said yes, in certain setups. The tradeoff is simple: transparency vs. raw compute. Open models let you inspect training data. That matters for trust.
There’s also a thread about verifiability and what can be automated. Andrej Karpathy revisited the idea of Software 2.0: tasks that are verifiable get automated. It’s a useful lens. If you can check an output cheaply and well, you can hand it to a model. If you can’t, well... don’t.
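That lens can be sketched in a few lines. This is my own illustrative sketch, not Karpathy's code: a stubbed "model" call, a cheap verifier, and an escalation path for when verification fails.

```python
# Minimal sketch of verifier-gated automation. The model call is a stub that
# deliberately answers wrong first, to show the retry loop; real code would
# call an LLM API. Assumes tasks are simple arithmetic so we can recheck them.
_scripted_answers = iter(["5", "4"])  # the stub's canned responses

def stub_model(task: str) -> str:
    return next(_scripted_answers, "4")

def verify(task: str, answer: str) -> bool:
    """Cheap, reliable check — here we can just recompute the arithmetic."""
    return str(eval(task)) == answer  # toy only; never eval untrusted input

def automate(task: str, max_tries: int = 3) -> str:
    """Hand the task to the model only because its output is checkable."""
    for _ in range(max_tries):
        candidate = stub_model(task)
        if verify(task, candidate):
            return candidate
    return "ESCALATE_TO_HUMAN"  # no verified answer: a person decides
```

The stub's first wrong answer never reaches the caller because the verifier catches it; when no check is that cheap, the honest move is the escalation branch.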
Philosophy and the bigger picture
If you wanted a week of philosophical nudges, you could find them. Geoffrey Hinton (via commentary) warned about AI economics and the fantasy of full automation without societal costs. Slavoj Žižek and essays on Illich reminded us that these tools change not just markets but how we think about skills and freedom.
There were sharp pieces wondering whether we chased AGI fantasies while missing the mundane harms — lost jobs, hollowed-out craft, surveillance. One post put it bluntly: "AI is the new blockchain" (Francesco Gadaleta) — a good-bad compliment. The message kept coming back: we need to think about institutions, not just models. Agents, money, and law are all woven together.
Small, practical threads that matter
Not everything was big-picture. Lots of practical nudges appeared. Handy tutorials and tips: how to use Gemini 3 effectively (Nate), how to set up MCP resources in VS Code (Bart Wullems), and how to avoid AI in Gmail if you want to (Martin Brinkmann). There was guidance for teachers — OpenAI’s free service for K–12 teachers got a write-up by Brian Fagioli. Small items, but they change day-to-day work.
There were also practical cautions about tool choice for junior devs in regulated fields (Atilla Bilgic) and an accessible, low-cost RAG project guide for folks who want to ship something without a data-center budget (also Atilla Bilgic, in a separate post). Handy.
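For a flavor of how small a no-budget RAG loop can be — this is my own stdlib-only sketch, not the guide's code, with an invented toy corpus — score documents against the query with bag-of-words cosine similarity, then stitch the best match into a prompt:

```python
from collections import Counter
import math

# Invented toy corpus standing in for your internal docs.
DOCS = [
    "Invoices are archived monthly in the finance share.",
    "Password resets go through the IT helpdesk portal.",
    "Expense reports must be filed within 30 days.",
]

def bag(text: str) -> Counter:
    """Bag-of-words term counts; real projects would use embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    ranked = sorted(DOCS, key=lambda d: cosine(bag(query), bag(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

Swap the word counts for embeddings and the shape of the loop — retrieve, then prompt — stays exactly the same.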
Points of friction that kept popping up
A few themes repeated, annoyingly. Hallucinations. Economics. Agent safety. Platform lock-in. Legal risk. Here are the bits that kept turning up in different guises:
- Hallucinations and the trust problem. Authors kept circling back to the same problem: models lie, often convincingly. You can dress a lie in good grammar. That’s dangerous because people trust the dress. See Denis Stetskov and Simon Lermen.
- The agent-platform tug-of-war. Agents need open lanes to operate. Platforms like Google and Microsoft want control. That’s a long fight. See Dave Friedman.
- Money and infrastructure. We’re building costly things. Debt-financed datacenters, energy demand, hardware bottlenecks. The industry is not just code on a laptop. It’s bricks and turbines. Read Naked Capitalism and Paul Kedrosky if you like the economics angle.
- Regulation lagging user harm. Courts and regulators are trying to catch up. The Cohere ruling and the EU’s tentative backtracking on some AI rules show this. Aaron J. Moss and Michael J. Tsai dig into that.
I’d say these frictions are what will keep the conversation noisy for months. They’re not small. They’re practical.
Little detours and local color
Because this is the internet and not an academic paper, some fun, weird, or human pieces snuck in. A grandmother convinced AI was demonic (The Wise Wolf) — wild and a little sad. Aeroflot testing a humanoid flight attendant provoked some airline-job panic (Gary Leff). People trying to escape algorithmic noise — Josh Griffiths leaving YouTube, Sergey Alexashenko building a distraction-free editor — felt quietly heroic.
And yes, the Raspberry Pi animatronic chatbot felt like the most charming thing all week. A reminder that some people use this tech to make faces blink and smiles happen. That matters. The world’s messy. Tech is both dystopia and puppet show.
Where the argument lines up and where it breaks
On some things, there’s a lot of agreement: models are getting better at tasks, but not at judgment. Open-source matters for trust but loses on sheer compute. Agents are tempting but dangerous without guardrails. Junior devs get practical help, but architecture still needs humans. You’ll see those points echoed in posts across the spectrum.
Where people split is tone and prescription. Some say invest big, hard, and fast — the platform builders, the VC hot-takers. Others say slow down, regulate, preserve jobs, build public goods. I would describe those divisions as classic in tech debates. It’s like arguing whether to widen the motorway or invest in trains; both have merits and both have downsides.
If you want to chase something next
If you're curious and want to follow through, here are a few threads worth reading deeper:
- Gems and hands-on model notes: Simon Willison, Nate, and JP Posma for practical prompts and UI tests.
- Agent infrastructure and the agentic economy: Dave Friedman, JP Posma, and Fatih Kadir Akın.
- Security and scams: Simon Lermen, Gunnar Peterson, and Bruce Schneier’s work for policy context.
- The money story and risk: Dr. Ian Cutress, Naked Capitalism, and Will Lockett.
If you want an easy one to skim: open Nate’s prompt lists. They’re like good fishing lures. Or, if you want to taste the future of UI, roll a few Nano Banana Pro prompts. It’s like trying a new bakery: risky, but delicious.
There’s a lot more to read in the links. People are arguing, prototyping, and sometimes panicking. That’s normal. It’s like watching a town build a new bridge while the river keeps rising. You watch closely. You ask questions. You keep your wellies handy.