AI: Weekly Summary (November 24-30, 2025)

Key trends, opinions and insights from personal blogs

I would describe this week in AI blogging as a messy bazaar. Lots of shiny stalls, a few pickpockets, and the same vendors shouting about the future like it is on sale for one day only. To me, it feels like watching a town fair where the rides are three different rollercoasters built by rival companies, each one promising the biggest drop. Some posts sell the thrills, some warn that the tracks might snap, and others are handing out free cotton candy — except the cotton candy may have been spun by a vending machine that nobody can audit.

The model race: flexing muscles and showing cracks

There was a techie version of a heavyweight fight this week. Anthropic rolled out Opus 4.5 and people wrote about it like it just taught a dog calculus. Simon Willison and Charlie Guo poked at its real-world chops, pointing out that Opus is suddenly much cheaper and very good at coding. Meanwhile Google’s Gemini 3 Pro got a lot of fanfare for its visual tricks, and the GPT-5.1-Codex-Max press run kept OpenAI in the ring. You can see the model comparisons in posts by Simon Willison (/a/simonwillison), Charlie Guo (/a/charlieguo@ignorance.ai), and thezvi (/a/thezviwordpresscom).

I’d say these pieces read like blind taste tests at a diner. Each writer brings a plate of pancakes and says their pancake flips best. Some pancakes are fluffier, some have more syrup, and some are burned around the edges. The common note is this: the differences are meaningful for edge cases, but not always obvious at a glance. Authors ask for clearer, real problems to benchmark these models. To me, it feels like everyone is grading the paint job and ignoring the chassis.

There is also a growing sense that evaluating these new models is getting harder. Claude Opus 4.5 shows this, with new parameters for effort and a massive context window. The post by Simon Willison (/a/simonwillison) and Ben Dickson’s write-up (/a/bendickson@bdtechtalks.com) both hint that progress is real, but measuring it is turning into a craft exercise rather than straight bench science.

Money, chips, and the energy bill

A louder drumbeat was the money question. Tons of posts asked the same blunt question: who pays for all this compute? Paul Kedrosky (/a/paulkedrosky) and Conrad Gray (/a/conradgray@humanityredefined.com) ran with the theme that the hardware tail is wagging the industry dog. OpenAI's deals with Samsung and SK hynix are helping drive DRAM shortages and sticker shock, and folks writing about memory markets made it sound like the supply chain has the jitters. In plain speak, rents for compute are getting higher, and people are paying them now because they think someone will buy the company later.

Then there was the Genesis Mission, a political playbook for buying chips and datasets. Some posts framed it as a US Manhattan Project for AI, others as a possible bailout. Gary Marcus (/a/garymarcus@garymarcus.substack.com) and Brian Fagioli (/a/brianfagioli@nerds.xyz) both smelled trouble and asked whether government buying is a public good or corporate life support. I would describe these takes as suspicious but curious. The money flow is real. The consequence is a scramble: data centers, energy grids, and chip fabs are suddenly in the limelight. Naked Capitalism (/a/naked_capitalism) and others wrote about how energy markets and capex risks make this feel less like a rocket launch and more like a mortgage with a variable rate.

Britain’s £1 billion chip plan reads like pocket change next to South Korea’s war chest, and Jamie Lord (/a/jamie_lord@nearlyright.com) was blunt about it. The global game of capture-the-chip is a long race. And if you thought GPUs were the only story, the posts about carbon nanotubes and novel packaging show people are thinking about hardware futures with a slightly wild gleam, like teenagers redesigning engines in a shed.

Safety, alignment, and the one-chance worry

Ilya Sutskever’s interviews cast a long shadow. Several bloggers dug into his view that we may be moving from scaling to research, and that alignment is not a solved problem. Posts by Dwarkesh Patel (/a/dwarkeshpatel@dwarkesh.com), Simon Lermen (/a/simonlermen@simonlermen.substack.com), and Paul Kedrosky (/a/paul_kedrosky) bounced the same worry: we may only have one good shot at getting alignment right when systems become dangerous.

I’d say those posts feel like someone standing at a riverside, nervously eyeing a bridge made of rope. Helen Toner (/a/helen_toner@helentoner.substack.com) used the term 'jaggedness' to describe how models improve in stuttering bursts and then fail in odd places. There’s a theme: progress does not look linear, and the more we rely on these systems, the higher the cost of being wrong. Some writers are quietly pleading for incremental deployment and better public systems, not just shiny private ones.

A related riff was about agents and memory. Anthropic’s posts about agentic harnesses prompted critiques around cultural transmission, not just software. Davi Ottenheimer (/a/davi_ottenheimer@flyingpenguin.com) and other voices pushed the idea that making an AI remember facts is not the same as teaching it norms and judgement — and those distinctions matter if the machine gets to make decisions people depend on.

The law, the lawsuits, and copyright scrums

This week had no shortage of legal drama. A few lawsuits about training data popped up, including the Apple book suit reported by Michael J. Tsai (/a/michaeljtsai@mjtsai.com) and the ROSS/Thomson Reuters copyright brief covered by Robert Ambrogi (/a/robertambrogi@lawnext.com). OpenAI’s legal dance over deleted datasets made headlines too, with Nick Heer (/a/nick_heer@pxlnv.com) and others parsing court filings.

I’d describe these posts as courtroom thrillers with slow pacing. They read like legal detectives making notes: who touched what dataset, when was it erased, and could the scraping be called theft. The broader point keeps coming up — training data isn't a free buffet. People who build models by scooping text and books face more friction now. If you like drama, follow the filings.

Privacy, surveillance, and the tracking stink

A few posts applied the smell test to surveillance. The FBI drone discussion by Fourth Amendment (/a/fourthamendment) and the Antigravity prompt-injection vulnerability in Google’s tooling covered by Ben Dickson (/a/bendickson@bdtechtalks.com) both say the same thing in different keys: we are building systems that can be turned into surveillance machines if we don't watch them.

Adam Douglas (/a/adam_douglas@adamsdesk.com) wrote a step-by-step on disabling Google Gemini features to opt out of AI tracking. That piece reads like a how-to for anyone who wants to keep the curtains drawn. It felt practical and faintly desperate, like closing your blinds when the power company comes to sell you a smart meter.

How developers are changing: coders, orchestrators, and co-pilots

There is a steady stream of writing about what it means to work with AI rather than be replaced by it. Anup Jadhav (/a/anupjadhav@anup.io), Annie Vella (/a/annievella), and Adam Keys (/a/adam_keys@therealadam.com) each sketched how engineering jobs are shifting. The new verbs are choosing, curating, and orchestrating. Instead of typing every line, developers are constructing contexts, writing tests, and instructing AI agents to stitch components together.

A few concrete, slightly less dreamy posts offered practical guidance. Anup Jadhav’s take on RAG systems warned that naive RAG breaks under production traffic, and he offered improvements like hypothetical document embeddings (HyDE) and hybrid search. The post feels like a mechanic showing how to avoid oil leaks when using a fancy engine.
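
To make the hybrid-search half of that advice concrete, here is a minimal sketch of the idea: blend a dense (embedding) score with a sparse (keyword) score and rerank. Everything below is illustrative, my own toy vectors and 0.5 weighting, not code from Jadhav's post.

```python
# Hybrid search sketch: combine dense (embedding) and sparse (keyword) signals.
import math

def keyword_score(query, doc):
    """Sparse signal: fraction of query terms that appear in the document."""
    q_terms = set(query.lower().split())
    d_terms = set(doc["text"].lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def cosine(a, b):
    """Dense signal: cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def hybrid_rank(query, query_vec, docs, alpha=0.5):
    """Blend both scores; alpha weights the embedding side."""
    scored = [
        (alpha * cosine(query_vec, d["vec"]) + (1 - alpha) * keyword_score(query, d), d)
        for d in docs
    ]
    return [d for score, d in sorted(scored, key=lambda s: s[0], reverse=True)]

# Toy usage with hand-made three-dimensional "embeddings":
docs = [
    {"text": "retry logic for flaky API calls", "vec": [0.9, 0.1, 0.0]},
    {"text": "pancake recipes and syrup", "vec": [0.0, 0.2, 0.9]},
]
print(hybrid_rank("api retry", [0.8, 0.2, 0.1], docs)[0]["text"])
```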

On junior devs and careers, Atilla Bilgic (/a/atilla_bilgic@practicalsecurity.substack.com) was emphatic: show your work, be transparent about AI usage in projects, and learn auditability and data residency. Those pieces read like career advice you hear from an old mentor who knows the road has potholes. I would describe them as necessary reading for anyone who wants to survive job interviews in 2026.

Kix Panganiban (/a/kix_panganiban@kix.dev) and others argued for fast models over smarter slow ones for everyday coding tasks. The message is plain: for many trivial edits, latency is the thing that kills momentum. They likened it to trying to write with someone who pauses for five minutes after every sentence; it breaks flow.

Tools, workflows, and the tiny engineering wins

There were lots of practical posts this week. From local speech-to-text setups (Simon Lermen /a/simonlermen) to using Git worktrees as clean rooms for AI-assisted refactoring (Krystian Safjan /a/krystiansafjan@safjan.com), the community is sharing ways to make AI fit into real work. These posts are the kind you print and tack up by the monitor.
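
If you have never tried the worktree-as-clean-room trick, the flow is short enough to sketch. A rough outline with invented branch and directory names, not Safjan's actual setup:

```python
# A thin subprocess wrapper around plain git commands; names are placeholders.
import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Check out the AI's proposed changes in a separate directory so the main
# working tree stays untouched while you review and test.
run("git", "worktree", "add", "../ai-clean-room", "-b", "ai/refactor-attempt")

# ... let the agent edit files in ../ai-clean-room and run the tests there ...

# Merge the branch only if the changes hold up; either way, tidy up after.
run("git", "worktree", "remove", "../ai-clean-room")
```

The point is isolation: the agent can make a mess in its own directory while your main checkout stays clean and reviewable.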

Some writers also focused on prompt engineering and tooling. DSPy, prompt lifecycle guides, context plumbing, and prompt caching deep dives pop up as the plumbing of modern apps. Readers who like those details will find nuggets in posts by indiantinker (/a/indiantinker), Sankalp (/a/sankalp@sankalp.bearblog.dev), and Matt Webb (/a/matt_webb@interconnected.org). They explain the fiddly bits that make agents perform better — like learning to tune a car, not just buy a flashy license plate.
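
Caching is the easiest of those fiddly bits to picture. Provider-side prompt caching reuses computation for a shared prompt prefix; the client-side cousin below just memoizes identical calls, which is the crude version of the same instinct. The call_model hook is a placeholder, not any particular vendor's API.

```python
# Minimal client-side memoization of (model, prompt) calls; a sketch only,
# not a stand-in for provider-side prefix caching.
import hashlib
import json

_cache: dict[str, str] = {}

def cached_completion(model: str, prompt: str, call_model) -> str:
    """Return the cached response for an identical request, else call the model."""
    key = hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(model, prompt)  # only pay for novel requests
    return _cache[key]
```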

One interesting pattern: several posts emphasized small process changes that prevent disaster. Using worktrees to keep unverified AI changes from wrecking main branches, running multi-stage RAG, and enforcing TDD loops with agents are all repeated themes. The tenor is cautious optimism: AI can speed things up, but only if the guardrails are in place and someone still verifies results.
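
The TDD-loop pattern in particular is simple to sketch: let the test suite, not the agent, decide when the change is done, and cap how long the agent gets to flail. A minimal outline, assuming pytest and a hypothetical ask_agent_to_fix hook:

```python
# The tests are the judge; the agent only gets a bounded number of attempts.
import subprocess

def tdd_loop(ask_agent_to_fix, max_rounds: int = 5) -> bool:
    for _ in range(max_rounds):
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass; a human still reviews before merge
        # Feed the failure output back to the agent and let it try again.
        ask_agent_to_fix(result.stdout + result.stderr)
    return False  # out of attempts; escalate to a human
```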

Culture, art, and the slop problem

A ton of posts worried about the quality of what we call content. 'Slop' was the word of the week for many. Greg Morris, Shawn Harris, and others painted a picture where the internet fills with cheap machine-made stuff, leaving real human work to go unnoticed. John Lampard reported the Macquarie Dictionary naming 'slop' its word of the year, which is an odd, slightly depressing trophy to win.

There were also thoughtful pieces on AI's limits in creativity. Cartoonists, musicians, writers, and documentary makers warned that AI can't yet do the human bits that matter — nuance, consent, humour. One piece on AI misused in a poker documentary by Maria Konnikova (/a/mariakonnikova) read like a moral fable: AI can fake dialogue, but not trust. Another on Itchy and Scratchy comics by David B. Auerbach (/a/davidb__auerbach@auerstack.substack.com) said AI struggles to reproduce style and meaning. I’d say these posts feel a little like a jazz critic telling a robot it's missing the groove.

On the flip side, visual tools like Google’s Nano Banana Pro were praised for making infographics and simple visuals cheap and fast. Jakob Nielsen (/a/jakob_nielsen@uxtigers.com) wrote about it as a new 'chat moment' for visual communication. The tension is clear: tools can amplify voices, but they can also drown them in noise.

Politics, regulation, and public projects

There was a lot of talk about governance. Colorado's attempt at an AI law got paused and examined, a useful case study in how a first-of-its-kind law can be reworked into something workable in practice, covered by Naked Capitalism (/a/nakedcapitalism). Others looked at national projects like Genesis and asked if they are a necessary push or an expensive rescue mission for giant AI companies. Discussions about public AI systems, democracy-strengthening examples from Schneier (/a/schneieron_security@schneier.com), and the possibility of public interest models came up as a counterpoint to purely commercial systems.

I’d say the politics posts sound like people trying to build a bus while passengers keep arguing about the destination. It’s messy. But there is a shared recognition that AI policy matters beyond lab notebooks.

Security, supply chain, and the ugly backstage

Security posts were grim. Prompt injection stories, Antigravity exploits, and broader supply-chain attacks show that the more we wire agents into systems, the more ways exist to weaponize them. Ben Dickson’s report on Antigravity (/a/bendickson@bdtechtalks.com) and John Collins on supply-chain poisoning (/a/johncollins@techleader.pro) read like a thriller where the villain is a line of corrupted text.

Another theme was the economic fragility beneath the glamour: memory shortages, Nvidia's narrative wobble, and debates over whether chip export controls are backfiring. These threads suggest the scene is brittle in places. If there is a common mood it is: the upstairs lab has made a fancy gadget, but the building's foundation is creaky.

Jobs, skills, and the human side

A large cluster of posts focused on careers. Nate’s pieces on jobs and AI fluency (/a/nate@natesnewsletter.substack.com), Shawn K’s playbook for automating businesses (/a/shawn_k@shawnfromportland.substack.com), and many essays on what juniors should learn all repeat a similar chorus: adapt, but learn the hard stuff too.

There is a clear split. One camp says: learn to prompt, integrate AI, and you live. The other camp says: learn fundamentals, systems thinking, and you thrive. Atilla Bilgic wrote practical steps about being auditable and honest in portfolios (/a/atilla_bilgic@practicalsecurity.substack.com). Dorothy Vaughan’s story was used as a metaphor for adapting in place — learn the new tools and teach others. It’s the old advice: be useful and explain how you were useful.

Strange and human tangents

True to blog form, there were a few delightful digressions. James Cameron talked about film and AI, Aaron and other writers mused about whether chatbots can be conscious, and Shane O’Mara reflected on brains and behavior with a nod to whether our models actually model minds. Some posts read like long letters to the future, with a tinge of old-school humanism.

There were also do-it-yourself notes: build an offline stoic chatbot, run local speech-to-text, and deploy a URL shortener using an AI assistant in 15 minutes. Those write-ups are practical and oddly comforting — the internet still has room for tinkering and a person in a hoodie can still come away feeling like they made something.
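
In that spirit, the URL shortener really is a fifteen-minute build. Here is a toy in-memory version, entirely my own sketch rather than the linked write-up's code, with no persistence and no auth:

```python
# Toy shortener: /new?url=... mints a code, /<code> redirects. In-memory only.
import http.server
import secrets
import urllib.parse

LINKS: dict[str, str] = {}

class Shortener(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        parsed = urllib.parse.urlparse(self.path)
        code = parsed.path.lstrip("/")
        if parsed.path == "/new":
            # Mint a short code for the URL passed as ?url=...
            target = urllib.parse.parse_qs(parsed.query).get("url", [""])[0]
            code = secrets.token_urlsafe(4)
            LINKS[code] = target
            self.send_response(200)
            self.end_headers()
            self.wfile.write(code.encode())
        elif code in LINKS:
            # Known code: redirect to the stored URL.
            self.send_response(302)
            self.send_header("Location", LINKS[code])
            self.end_headers()
        else:
            self.send_error(404)

http.server.HTTPServer(("", 8000), Shortener).serve_forever()
```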

Patterns and recurring arguments

A few patterns stand out if you read across these posts.

  • First, models keep improving but evaluation lags. People want clearer, messy real-world tasks to decide which model actually helps.

  • Second, the economics are now part of the story. It’s not just cool tech; it’s expensive tech. Memory, energy, and chips are the new headline risks.

  • Third, safety and alignment remain unresolved. There is a split between those who think iterative alignment will work and those who warn we may get one shot. The rhetoric is cautious and a bit anxious.

  • Fourth, tools are changing roles. Developers are becoming orchestrators and context engineers. That is practical and a little disorienting.

  • Fifth, cultural anxiety about content quality is real. The word 'slop' captured it. People worry about the internet becoming a beige soup of acceptable-but-empty content.

  • Sixth, legal fights about data and copyright are moving from background noise to center stage.

Small bets worth following

A few little things feel important even if they are not splashy. Read the write-ups on local tooling and Git worktrees if you actually ship software. Keep an eye on the legal filings and the dataset deletion stories if you follow model provenance. And if you care about democratic outcomes, Schneier’s notes on public AI systems are worth a slow read (/a/schneieronsecurity@schneier.com).

If you want the drama, follow the model-versus-model posts. If you like the nuts and bolts, spelunk in the prompt caching and RAG deep dives. If you like politics, watch Genesis and the Colorado law like it is a chess match.

Some posts deserve the slow scroll. Read the legal critiques, the practical engineering posts, and the ones that resist hype. They are the ones that tell you what might actually matter next month, not just what will trend on tech Twitter today.

If it helps, picture it like this: AI right now is a neighborhood undergoing construction. Some houses are getting new foundations, some are getting shiny facades, others are being turned into apartment towers overnight. There are signs up saying 'progress', but also warnings about noise, dust, and whether the sewer lines can handle the load. The blog posts this week are the neighbors' group chat — half gossip, half practical fixes, and a steady stream of opinions on whether the new building will stand.

There is more to chew on, and the authors linked have much better detail if you want to dig. Go check the threads by the names that grabbed you, read the deeper pieces, and decide which pancake you like. There is sense in both excitement and skepticism right now. It feels messy. It feels important. It feels like the long haul just started, and everyone is trying to work out what role they want to play on this construction site.