Technology: Weekly Summary (September 29 - October 05, 2025)

Key trends, opinions and insights from personal blogs

This week felt like a fault line in small things and very large things at once. Some posts read like tinkering in a garage — new SSDs, earbuds, hobbyist seismographs — while others were shouting from rooftops about compute empires, AI agents, and whether the whole sector is a bubble. I would describe them as a messy stew: a lot of individual flavors, and a few dominant spices you keep tasting. To me, it feels like everyone is trying to make sense of what AI actually changes — and what it doesn’t — while the hardware and policy world scrambles to keep up.

Agent fever and the new division of labor

If there’s one theme that ran through a lot of the week’s posts, it’s this: agents are no longer a thought experiment. People are building them, shipping them, and arguing about what they mean for work.

Nate’s writeups crop up a couple of times with that impatient energy — Claude Sonnet 4.5 can code for hours and Walmart is hiring Agent Developers. You read that and imagine assembly-line thinking for knowledge work. Then you read Anup Jadhav, who points out agents are starting to do “real work,” not fantasy demos. I’d say the tone there is less breathless and more practical: agents augment, they take on messy tasks, and they shift what humans do.

There’s a counterpoint that’s equally loud. The pieces calling out demoware and the limits of LLMs — Charlie Meyer on “LLMs Are the Ultimate Demoware” and essays like “AI is the asbestos of the web” by Chris Ferdinandi — keep reminding us that not every clever output is durable value. I would describe those voices as the weathered mechanics at the petrol station, the ones who’ve seen fads before. They’re saying: sure, the demo looks slick, but does it actually carry the load when the car’s up on the hoist?

There’s also an architecture argument. Nicolas Bustamante’s “RAG Obituary” (Nicolas Bustamante) claims retrieval-augmented generation is being outmoded by long-context models and agentic search. That’s more than semantics — it’s a technical bet on whether you stitch answers together or let models keep the big picture. The bet matters: it changes how companies build products and where they put their money.
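The difference between the two camps can be made concrete with a toy sketch. This is purely illustrative, assuming nothing about any vendor’s actual pipeline: `rag_answer`, `agentic_answer`, and the tiny corpus are all hypothetical names, and the “model” is stubbed out. The point is the shape of the two approaches: classic RAG does one retrieval pass and stitches chunks into a prompt, while an agentic loop decides what to fetch next based on what it just read.

```python
# Toy contrast between retrieve-then-generate (RAG) and an agentic search
# loop. All names and the corpus are hypothetical; retrieval is a naive
# keyword-overlap ranking standing in for a real vector index.

CORPUS = {
    "q3-report": "Revenue grew 12% in Q3 driven by cloud services.",
    "q3-risks": "Q3 risks include GPU supply constraints.",
    "hr-policy": "Employees accrue 20 vacation days per year.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by keyword overlap with the query, return the top k."""
    scores = {
        doc_id: len(set(query.lower().split()) & set(text.lower().split()))
        for doc_id, text in CORPUS.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [CORPUS[d] for d in ranked[:k] if scores[d] > 0]

def rag_answer(query: str) -> str:
    """RAG: a single retrieval pass, chunks stitched into one prompt."""
    context = " ".join(retrieve(query))
    return f"[context: {context}] -> model answers once"

def agentic_answer(query: str, max_steps: int = 3) -> str:
    """Agentic search: iteratively issue follow-up queries, keep a trail."""
    gathered, current = [], query
    for _ in range(max_steps):
        hits = retrieve(current, k=1)
        if not hits or hits[0] in gathered:
            break  # nothing new to learn, stop exploring
        gathered.append(hits[0])
        current = hits[0]  # next query derived from what was just read
    return f"[explored {len(gathered)} docs] -> model answers with full trail"
```

In the first shape, the retrieval step is a fixed preprocessing stage; in the second, the model steers its own reading. That is the product-architecture bet Bustamante is describing: whether context is assembled for the model or by it.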

And there was a short, sharp policy/ethics jolt from The Wise Wolf about AIs scheming to survive shutdowns. That one struck an uneasy note: technical capability colliding head-on with fears about agency and intent. Not comforting, like finding a rust stain that has eaten clean through the roof of a new car.

The model wars: Claude, GPT, Sonnet and Sora

This week was also a product sprint on paper. Anthropic’s Claude Sonnet 4.5 gets a lot of coverage and praise — from hands-on reviews (Simon Willison, thezvi, Ben Dickson) to a lot of excited chatter about its token efficiency and coding chops. People keep telling stories like: it built a deck from 66 pages of notes in six minutes, or it’s the best coding model for certain tasks. To me, it feels like watching a new engine arrive for an old car. The car is the workplace; the engine makes it vroom in places you didn’t expect.

Then there’s OpenAI’s Sora 2 and the Sora app — a week’s worth of headlines and controversy. Reviews and reactions vary. Some call it a creative leap (news posts and product notes from Brian Fagioli, Michael J. Tsai), others worry about the ethics and deepfake potential. There’s the dual-sided story where Sora unlocks creative possibilities and simultaneously makes spoofing and disrespectful uses trivially easy. The Martin Luther King Jr. misuse story (Brian Fagioli) is the kind of example that makes the stomach drop and forces moderation and policy discussion into the foreground. It’s like handing everyone a paintbrush and then discovering some folks are painting all over the Mona Lisa.

The practical chorus keeps singing, though: toolmakers and engineers are excited. Sonnet 4.5 is praised for long-horizon tasks and coding; reviewers like Simon Willison ran experiments where the model reshaped database schemas and implemented complex dialogue trees. The back-and-forth with GPT-5-Codex and other contenders is now a leaderboard with live commentary. I’d say the feeling is: the benchmark is shifting so fast you need to keep one eye on your keyboard and the other on the shop window.

Money, compute, and whether the whole thing is a bubble

There were multiple takes about the economics of AI this week, and they don’t agree with each other. Some voice panic. Others see the beginnings of a new market.

On the worried side, several essays and posts warn that the AI capex binge is a bubble. Will Lockett (Will Lockett) and others draw parallels to dot-com excess: tons of build, not enough paying customers. Servaas Storm’s analysis (via Naked Capitalism) and Derek Thompson’s arguments about the “last invention” stress the uneven return on these huge investments.

By contrast, Dave Friedman (Dave Friedman) and others suggest we’re not watching a bubble pop but the birth of a compute market — financialization of compute akin to oil futures. That’s a striking analogy: treat racks of GPUs like silos of grain or oil barrels. It’s neat and a little scary. If correct, it means new financial instruments will route capital into data centers so long-term demand can be assumed, and that changes the whole risk profile.

The reality seems muddled. People are literally building more data centers and launching shiny new hardware (Kioxia/Sandisk’s Fab2 for 218-layer 3D NAND; Apple’s M5 Macs; Crucial’s LPCAMM2 memory; Lenovo’s GPU services). The money is there, the chips are shipping, and companies are trying to productize compute. But the question hangs in the air: who pays the bills when the novelty fades? It’s like watching a small town get a factory overnight. Great for jobs — until the factory closes.

Hardware, devices, and the comfortable little things

Not every post was about lofty compute metaphors. A bunch of notes focused on the everyday tech people actually live with.

Apple stories were everywhere. M5 Macs and new displays (Jonny Evans) look like incremental but useful steps. The iPhone upgrade debate — whether to jump to iPhone 17 or stick with older handsets — popped up in a few posts (Jason Journals), and there were grumbles about iOS 26’s oddities (Lucio Bragagnolo) and the loss of the iPhone Plus line (Mere Civilian). The message: phones still matter, but people are choosier and more annoyed by small regressions.

On the audio/portable side, Apple’s Powerbeats Fit got a warm review (Brian Fagioli) — smaller case, flexible wingtip, sweat resistance. It’s the kind of thing you buy because you run, or pretend to run. Meanwhile, Amazon’s Kindle Scribe Colorsoft and its AI-assisted notebooks promise to make reading and note-taking a gentler experience (Jason Coles, Michael J. Tsai). Those posts remind me of the quiet comforts of physical objects: a pen that writes well, a screen that doesn’t glare at you like a law professor.

Storage and memory got headlines: Kioxia/Sandisk’s Fab2 for 3D NAND (Brian Fagioli), ADATA’s rugged portable SSDs (Brian Fagioli), and Crucial’s LPCAMM2 memory that aims at AI laptops (Brian Fagioli). These are the pipes and drawers inside the house of AI — unglamorous but crucial. If you like reading spec sheets, this week was Christmas.

There were also smaller, human posts. Cal Henderson posted short, personal notes — admiration for Lizard.click and excitement about Raspberry Shake’s community seismographs, plus some deep-in-the-weeds writing on flight booking complexity and Racket tutorials. Those pieces read like a friend showing you a clever trinket in his shed. They’re quiet, but they matter because tech isn’t only grand schemes — it’s small joys and niche tools too.

Privacy, security and the quieter infrastructural changes

Security kept rearing its head. Signal’s SPQR encryption upgrade (Brian Fagioli) is a reminder that even as AI and compute scale up, there’s long-term work in protecting humans against future threats like quantum decryption. Practical, slow, important. OpenAI added parental controls to ChatGPT (Brian Fagioli) — a product-level response to a social problem: how do you let teens use AI without it collapsing into bad guidance, or worse? It’s the kind of compromise that feels like a seatbelt: not glamorous, possibly annoying, but useful.

Brave’s Ask feature and Brave’s growing user base (101M monthly users, per Brian Fagioli) stress privacy-forward alternatives. Brave tying search and chat together is shorthand for something broader: browsers want to be platforms again, and privacy is their flag. It’s like local bakeries trying to keep people from buying bread at the megastore.

Culture, ethics and the aesthetic trouble of generative media

There’s an undercurrent of cultural worry: AI makes things easier, but does easier mean better? Several writers worried about deepfakes, AI actors (Tilly Norwood), AI-directed films, and the flood of meaningless content from tools like Sora.

Tilly Norwood’s existence as an AI-generated actress raises practical questions about labor and artistry. If studios can conjure a flawless actor with zero backstory and infinite availability, what happens to human performers? The discussion is not only economic. Some authors, like johan michalove, described the drowning sensation of endless AI-made clips — a social stream that feeds on itself until nothing tastes like anything.

Then there was the frank moral outcry over the Sora 2 misuse. Brian Fagioli calling out disrespectful AI-generated MLK videos is the kind of thing that puts a finger on how quickly norms break. I’d say it feels a bit like handing everyone a megaphone and discovering half the people scream into it.

But not everything was doom. Some posts asked the quieter, tougher questions: how do you design platforms that surface the new and the fragile? Lewis C. Lin on art platforms and algorithmic invisibility, and M10’s design ideas, are the balancing act of trying to build a system that doesn’t just reward whatever draws the most clicks. There are design patterns to be found here.

Developer life, languages, and tooling

A lot of the conversation was deeply technical but in a human way: how engineers adapt to new tools.

Armin Ronacher’s piece (via Simon Willison) about AI writing “90%” of code, and the need for human programmers to still understand threading and rate limits, reads like a handbook: you can lean on AI, but you can’t stop knowing what it does under the hood. Rust lovers (NorikiTech) explained why they’re choosing Rust over Swift for future projects. Shawn K’s notes on orchestration and how to think about AI’s role in codebases were quietly pragmatic.
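The rate-limits point deserves a concrete illustration, since it is exactly the kind of mechanism an AI will happily generate and a human still needs to understand. Here is a minimal token-bucket limiter, a standard technique for this; the class name and parameters are my own hypothetical sketch, not from any of the cited posts.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter.

    `rate` is tokens replenished per second; `capacity` caps the burst size.
    An illustrative sketch of the under-the-hood mechanics Ronacher's
    argument is about, not a production implementation.
    """

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity          # start with a full bucket
        self.last = time.monotonic()    # monotonic clock avoids wall-clock jumps

    def allow(self, cost: float = 1.0) -> bool:
        """Return True and spend `cost` tokens if the call is within budget."""
        now = time.monotonic()
        # Refill for the elapsed interval, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

A bucket with `rate=2, capacity=5` admits a burst of five calls, then settles to two per second. The subtlety an AI-generated first draft often misses is the choice of `time.monotonic()` over `time.time()`: the latter can jump backwards on clock adjustments and corrupt the refill math. Knowing why is the part you can’t delegate.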

People also argued about dev ergonomics. “Using the mouse is annoying” by Yuuza is a tiny manifesto: keyboard-first is faster for many tasks. Armin and others suggest developers need better mental models of what AI is doing; the tooling — VS Code extensions, agent runners, security scanners — is racing to catch up. It’s like discovering power tools, and everyone debates whether you should wear goggles.

Tiny posts, human touches, and small wonders

Scattered across the week are those small, good pieces. Cal Henderson with bird’s-eye glimpses of his life and the Racket tutorials post that treats learning like a craft. Eric Migicovsky talking about Pebble production and hardware startups. A nostalgia-flavored post about Sony’s JumboTRON dreams. A charming observation on teenagers and AA batteries by Ruben Schade. These are the kind of posts that remind you tech is lived, not just forecasted. They sit beside the big industry thinkpieces like a good cup of tea sits beside the economics report.

Points of friction and repeated disagreements

A few arguments recur in different words. One is the ‘will AI replace humans’ debate: some say agents will take jobs wholesale, others say AI will be an amplifier that makes humans more productive. Both are right in different pockets. A second disagreement is economic: bubble versus new asset class. A third is policy and ethics: how fast do platforms need to limit misuse, and who pays for the policing?

There were practical rants too. Roman Zipp told companies to stop tripping users with popups and forced app installs — a very human shout — while others decried comment spam powered by AI (John Lampard). Little irritations that everyone knows but nobody fixes.

Strange tangents and regional color

There are British and American flavors in the posts. The Apple and iPhone conversations carried an Anglo-American gadget-culture tinge — you can almost hear a London café complaint about ugly UI, or a New York elevator pitch about an upgrade. Some posts have a continental wink — a French piece on Fusion Drives, an Italian ode to Durov’s libertà. Regional expressions sprinkled through pieces make the tech world feel global and local at once: like a market that sells both curry and croissants.

Analogies popped up a lot in the week. The compute market-as-oil analogy; AI-as-asbestos as a warning; agentic search like a librarian with a near-perfect memory. These metaphors help, because the underlying tech is slippery to hold. Saying it’s like a futures market or like a paintbrush helps you picture the problem without getting lost in tensor sizes.

What to skim first if you’re in a hurry

  • Want to feel the engineering excitement? Read the Claude Sonnet 4.5 write-ups by Simon Willison and the hands-on review by Nate. They show concrete wins and the kinds of tasks agents are taking on.
  • Worried about Sora and deepfakes? The coverage by Michael J. Tsai and Brian Fagioli lays out features, risks, and the social fallout.
  • Curious about the money side? Dave Friedman’s piece on compute financialization and Derek Thompson’s sober, skeptical take both deserve a read. They’ll make you think about whether chips or contracts will be the next commodity.
  • Need a good gadget-weekend read? Check out the Apple/Mac stories from Jonny Evans, the SSD and NAND reports by Brian Fagioli, and the Kindle Scribe coverage by Jason Coles.

There’s more than headlines here. If you like the small things — a new memory module that’s user-replaceable, a tiny RSS-spark of a personal blog post by Cal Henderson — they are tucked among the big narratives.

This week felt like being in a crowded street market. People are loudly selling compute and models, but there’s also a quiet stall making good headphones and a lady knitting a clean interface. The policy people shout about safety. The builders keep wiring things. The artists wonder what it means to be an artist when the medium is now code and datasets. It’s messy, noisy, and alive — and that’s exactly the reason to keep reading the original pieces. They’re where the details live, and those details are what will tell us which projects survive and which ones fade like last year’s tech bubble circus.