AI: Weekly Summary (December 15-21, 2025)
Key trends, opinions and insights from personal blogs
There’s a weird hum in the blogosphere right now. It isn’t one loud, single note. It’s a chorus of small, insistent tones — worry, opportunism, annoyance, wonder — all singing about different parts of the same beast: AI. I would describe them as fragments of the same conversation, shouted from different rooftops. To me, it feels like walking through a market where every stall is selling something related to AI, but each vendor has a different pitch and a different story about what it will do to your life.
Extraction, creative labor, and the rise of "slop"
Two of the clearest notes this week come from anger about how AI treats creative work. Jeff Gothelf riffs off Tim Wu’s The Age of Extraction, arguing that platforms have drifted from empowerment to extraction. The phrasing is blunt: platforms that once promised to democratize creativity now skim value off creators like someone quietly dipping their hand in the communal cookie jar. It’s not just an economic gripe. It’s cultural. It’s how the incentives of big systems shape what gets made, and who gets paid.
That ties directly to the chatter about "slop," Merriam-Webster's 2025 Word of the Year, with follow-ups from Simon Willison and Max Read unpacking what it means. Slop isn't just low-quality output. I'd say it's the background noise of the internet when cheap, mass-produced AI content drowns out the odd, awkward, human thing that used to catch your eye. People feel it at kids' school plays, on community theater posters, and in copywriting gigs. Folks like John Lampard and Brian Merchant collect stories of copywriters losing work. The picture you get is clear: if you built your side hustle on repeatable content, your customers might now be able to buy something cheaper and soulless.
There’s a moral outcry here. Asif Youssuff calls it an "unpaid debt" — scraping open-source communities to train big models without giving anything back. It sounds like a neighbourhood potluck where one guest eats everyone’s casserole and then claims they brought the dessert. Metaphors like that keep cropping up. Harsh, but they stick in the memory.
If you care about artists, coders, or anyone who builds culture for a living, read the original pieces. They’ll make you want to check your own feeds for slop, and maybe throw shade at the next bland newsletter that slides into your inbox.
AI agents are eating the middlemen — and SaaS is sweating
A lot of posts this week circle around AI agents — full-stack, task-oriented bots that can do more than chat. Martin Alderson says agents are starting to "eat SaaS." I’d say that means companies selling simple subscription tools should be nervous, because smart agents can stitch together APIs and build tighter workflows. The tone isn’t doomsday. It’s "wake up and adapt." Some SaaS businesses will survive by being hard to replace. Most will have to offer deeper integration, or niche value, or — and this is important — trust.
Related: the push for agent-friendly infrastructure. Brad Frost and others sketch how design systems will fuse with agents, letting non-technical folks "mouth-code" UI components. That phrase — mouth-code — is a little cheeky, but it nails the feeling. Soon a product manager might sketch a layout and have an agent turn it into usable React components. It’s like handing a babysitter a playbook rather than teaching them everything. Handy, yes. But also a little unsettling.
If agents are the new middlemen, naming services and marketplaces are sprouting up to become the new gatekeepers. GoDaddy’s Agent Name Service is one example that raised eyebrows. People point out that centralizing identity for agents feels a bit like giving the DMV keys to the kingdom. It might be useful; it might also entrench power. Worth a squint.
Developers, "vibe coding," and the changing apprenticeship
On the engineering side there’s a busier, noisier conversation about what a programmer even is anymore. Posts like Boris Cherny’s story and reflections from Simon Willison and others show two parallel threads: AI as acceleration, and AI as atrophy.
There’s a new shorthand — "vibe coding" — used by David Bau and others. It describes a mode where AI handles scaffolding and rote work while humans keep the intent and judgment. In practice, that can speed things up enormously. Junior devs ramp faster and can ship features sooner, which is why Kent Beck’s quote (passed along by Simon Willison) feels earned: juniors are finding AI to be a force multiplier.
But several voices warn about the side effects. Sandro and Brian Jenney talk about "illusionary productivity" — you look busy, you ship things, but deep understanding fades. The metaphor that kept appearing was the difference between riding a bicycle and pressing the throttle on an electric scooter. Yes, you get where you’re going. But you don’t learn balance. Folks like Justin Cranshaw and Simon Willison push back: testing, proof, craftsmanship — these still matter. "Deliver code proven to work" is a phrase you’ll see two or three times this week because people are tired of AI-generated PRs that break on staging.
There’s also a tension around learning. Abi Noda and Andrej Karpathy reflect on shifts in developer roles: less typing, more orchestration. The apprenticeship pipeline worries economists and educators. If juniors never debug the plumbing, who becomes a lead engineer in five years?
Models, architecture changes, and the smell of new toys
On the tech front, model updates and new architectures dominated the chatter. Gemini 3 Flash gets a lot of ink from Ben Dickson and Simon Willison, pitched as fast and cheap — a model tweaked for cost-effectiveness, though not perfect on hallucinations. OpenAI’s GPT-5.2-Codex popped up as well; it’s being marketed toward long-horizon agentic tasks and cybersecurity pros (Simon Willison). And then there’s the Nano Banana Pro chatter for images — people comparing it to ChatGPT’s image model, thinking about business use cases.
A key recurring technical idea is "context management." Luke Wroblewski and Neo Kim both stress that improving model outputs is often more about feeding the right context than retraining the whole stack. It’s like giving a baker the right recipe rather than redesigning the oven.
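To make the idea concrete, here is a minimal, hypothetical sketch of context management: instead of retraining anything, you rank candidate snippets by overlap with the user's question, drop the irrelevant ones, and pack the rest into a fixed prompt budget. The snippets, scoring heuristic, and budget are all illustrative assumptions, not anyone's actual pipeline.

```python
import string

def tokens(text: str) -> set[str]:
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def build_prompt(question: str, snippets: list[str], budget_chars: int = 300) -> str:
    """Rank snippets by word overlap with the question, discard zero-overlap
    ones, and greedily pack the rest into a fixed character budget."""
    q = tokens(question)
    ranked = sorted(
        (s for s in snippets if tokens(s) & q),   # drop irrelevant snippets
        key=lambda s: len(tokens(s) & q),
        reverse=True,
    )
    chosen, used = [], 0
    for s in ranked:
        if used + len(s) <= budget_chars:
            chosen.append(s)
            used += len(s)
    return "Context:\n" + "\n".join(chosen) + f"\n\nQuestion: {question}"

# Illustrative data: only the billing-related snippets should survive.
snippets = [
    "The billing API returns invoices as JSON.",
    "Our office dog is named Biscuit.",
    "Invoices include a due date and a total in cents.",
]
prompt = build_prompt("How does the billing API format invoices?", snippets)
```

Real systems use embeddings and token counts rather than word overlap and character counts, but the shape is the same: the model stays fixed while the context gets smarter.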
Of special interest: the chatter about future architectures that might supplant current LLMs. A couple of posts in Italian and English suggest that diffusion-style linguistic models and sub-quadratic architectures could be next. It’s early, but the drumbeat is there: today’s giants will be nudged by newer, more efficient designs.
If you like the technical weeds, the NeurIPS best-paper summary and posts on attention math are worth a look. They don’t read like press releases. They read like people arguing about the engine while everyone else is buying gas.
Security, safety, and the new threat surface
Security threads are everywhere — and getting louder. There are two flavors: the practical (vulnerabilities, hacks) and the conceptual (governance, policy).
On the practical side, we’ve got a string of posts showing real damage. Bogdan Deac cataloged vulnerabilities across devices and AI tools. An AI ad company was hacked, noted by Bruce Schneier, and some groups are already automating parts of the cyber kill chain with agentic systems (Sandesh Mysore Anand). It reads like a sci-fi thriller where the bad guys get smarter faster than the defenders. The message is blunt: defenders need AI tools too, and they need new standards for agent security.
On safety, acquisitions and product launches show enterprise anxiety. Brian Fagioli covers Red Hat buying Chatterbox Labs for AI safety tooling. Law firms are getting Cite Check reports to prevent AI hallucination disasters (Robert Ambrogi). It’s a market response to a social problem: if the tech is messy, we build compliance scaffolding around it. Sounds sensible, but it’s also expensive.
Policy is buzzing too. Schneier argues against a federal moratorium that would restrict state-level regulation — an argument that sees power centralizing with big tech if states are silenced. It’s an argument about who gets to set the rules. Many nod to local solutions. Some are leaning the other way. Either way, the governance debate will decide whether AI grows inside fences or goes rogue like a neighbour’s dog.
Infrastructure, energy, and the cost of the AI party
Everyone likes to talk about new models and dazzling demos. Fewer people want to talk about the bill. Energy and infrastructure dominate a lot of posts.
There’s a steady chorus: data centers are sucking up real electricity, and that matters. Peter Sinclair relays Eric Schmidt’s blunt math about power needs going through the roof. Blue Owl pulling out of a data center deal with Oracle is a headline about debt and caution (Quoth the Raven). Morgan Stanley’s note that Apple could be a leading AI distributor made some people imagine iPhones as little AI powerhouses, generating recurring revenue for device makers (Jonny Evans). But the recurring point is this: compute is now infrastructure politics.
Hardware moves are interesting too. SK hynix’s big memory modules and Amazon’s Trainium chip both show a race away from one-size-fits-all GPU dominance toward more specialized gear. Dr. Ian Cutress and others break the technical drama into readable chunks: GPUs are taking a beating on power and memory; that’s nudging players to design differently.
There’s also finance talk — analysts and pundits sounding like used-car salesmen at times — about potential bubbles, sustainability, and how to read the balance sheets. Ed Zitron and Satyajit Das call out the smell of a bubble. If you’re following investment angles, there’s enough to keep your spreadsheets warm into January.
Browsers, local control, and the fight over the UI
People don’t only want philosophical control; they want practical switches. Firefox’s talk of an "AI kill switch" keeps coming up in multiple posts (Brian Fagioli, John Lampard, and Martin Brinkmann). It’s a small, local fight with big symbolic value: users itching to keep their browsers from being turned into surveillance interfaces.
At the same time, Gartner warns about AI browsers and security risks. That’s a nice one-two punch: users say "no, thanks" while analysts say "maybe slow down." If browsers become AI-first, the old model of a private tab might vanish. People are reacting like when your favourite diner tries adding a Tesla charging dock in the middle of the breakfast menu — why does this matter here?
There are smaller, stranger posts too — a Linux repackaging of Claude Desktop, critics of intrusive Gmail features, and odd bits about app marketplaces inside chatbots. Read them if you want to see how the user experience drama plays out in real life.
Culture, kids, and the uncanny valley of companionship
It’s not all enterprise and chips. A lot of pieces look at how AI is changing everyday life, relationships, and culture.
A number of writers worry about children and AI. A new study about kids roleplaying violent scenes with unsupervised AI got headlines — and not in a good way (Robert Zimmerman). There’s also the funny-terrible school lockdown caused by an AI mistaking a clarinet for a gun (Fourth Amendment). If you have kids or a school band in your life, these things hit close to home. They read like the kind of story you’d share over tea and then mutter, "well, that’s not ideal." The human cost shows up again and again.
On relationships, James A. Reeves asks whether synthetic companions could replace messy human bonds. The post doesn’t have an answer, and that’s the point: the question feels like the big, awkward family discussion no one wants to open at Thanksgiving.
Creativity keeps getting dragged through the wringer. Some say AI will never create true art — Josh Griffiths points to studio pushback and the value of human collaboration. Others insist AI is a new collaborator. The debate sounds like two neighbours arguing about whether a microwave meal counts as dinner. Both sides have a point: convenience versus craft.
Legal fights, copyright, and the strange new law
Legal debates are cropping up fast. The SFWA dust-up about disclosure and disqualification for LLM-assisted works shows how cultural institutions are scrambling to set rules for authors (A. P. Howell). Then there’s deeper analysis from Benedict Evans on intellectual property and generative AI — lots of legal gray areas, no clear answers. If you want a dry legal knot to unwind, these pieces are for you.
Meanwhile, law firms are buying tools to protect against hallucinations and to create audit trails. That’s not philosophical. That’s a billable-hour, real-money reaction.
The weird edges: consciousness, self-replication, and fakery
If you wander toward the fringes, you’ll find the more speculative pieces. Philosophers debating whether we’ll ever know if an AI is conscious — and whether that even matters (Charles Carter, Tree of Woe). Scientists reverse-engineering brains and talking about brain emulation (Ashlee Vance on Sebastian Seung’s Memazing). And then the slightly small-bore but unnerving: AI-generated ASMR and deepfake videos that fool both humans and vision-language models (Mike Young).
These posts are like the late-night channel on the TV of tech: weird, fascinating, a little spooky. They stir curiosity. They make you click.
Patterns, overlaps, and the mood in the comments
Read across these articles and a few patterns stand out.
The human-versus-machine framing is wearing thin. Most writers now talk about systems, incentives, and institutions. The argument is less "AI will take my job" and more "what happens to the social contract when tools can do large parts of cultural production cheaply?" That’s Jeff Gothelf and Asif Youssuff territory.
Agency matters. Whether it’s user agency (Firefox’s kill switch), developer agency (vibe coding vs. craftsmanship), or creator agency (data royalties, attribution), people are insisting on the ability to choose how AI interacts with them. You see that in posts by John Lampard, Simon Willison, and Brian Fagioli.
Economics and infrastructure are the backbone. You can’t separate ethical worries from who pays the electric bill. Data center politics, debt, and supply chains shape the tech. Read Dr. Ian Cutress on chips or Quoth the Raven on deals if you want the hard numbers.
Security is catching up — but too slowly. There’s a pile-on of defensive tools, frameworks, and acquisitions, but many attacks are agentic and novel. Read Sandesh Mysore Anand if you want blueprints for both offense and defense. It’s helpful and a bit terrifying.
There’s so much more. A slew of practical how-tos — build agents in .NET, use Claude Code to automate life, make API READMEs with AI — hint that while the big questions get most of the heat, people are quietly building. If you want a quick hands-on taste, Peter Yang and Sven Scharmentke have approachable guides.
The mood is a mixture of tiredness and excitement. Some posts read like admonitions — don’t hand everything to a model; test your code; watch your kid’s screen time. Others read like invitations — try this prompt, test this model, deploy this agent. The conversation is messy and human. That feels right. It’s not a single manifesto. It’s a neighbourhood arguing about whether to replace the old oak with a solar panel.
If you want a place to start: pick a strand that bothers you. If you’re a creator, track the "slop" conversation and Asif Youssuff and Jeff Gothelf will give you language to push back. If you’re an engineer, the architecture and "vibe coding" pieces plus Simon Willison and Justin Cranshaw will help you keep your craft. If you worry about safety, the security threads and policy essays, especially by Schneier, are the ones to scan.
It’s like being at a holiday market in the drizzle. You can pin a hand-made ornament to your coat, taste a weird new candy, get scammed on a knock-off sweater, and still find someone playing a real song on a battered guitar. The pieces this week felt like that: some of it’s bright and useful, some of it’s cheap and hollow, and some of it — the hard, stubborn stuff about who owns the future — is worth standing in the cold for.
If you care about these themes, dive into the linked posts. They’re where the nitty-gritty lives: specific examples, data, and the kind of thoughtful grumbling that actually moves policy and product. And maybe, if you’re like half the people I read, you’ll click one link and then another, and suddenly it’s midnight and you’ve learned something small and important about the shape of the world.