Cybersecurity: Weekly Summary (October 20-26, 2025)

Key trends, opinions and insights from personal blogs

I’ve been scrolling through a pile of blog posts this week about cybersecurity, and it felt like sitting at a crowded café where everyone’s shouting about different parts of the same problem. Some folks are worried about the shiny new AI toys. Others are yelling about old, rusty plumbing in the network or in the hardware. To me, it feels like a messy kitchen: new gadgets everywhere, and nobody quite knows which cord will trip the lights.

AI, prompt injection, and the browser circus

There’s a lot of nervous energy around AI doing things it wasn’t supposed to do. The phrase that kept popping up was “prompt injection”, and several writers took it in different directions. Joseph Thacker introduced a twist called “Metanarrative Prompt Injection” — basically talking directly to the AI as a way to steer it. I would describe it as whispering the directions into the person’s ear, instead of knocking on the door. It’s clever, and a bit unnerving.

Then you’ve got the whole fuss about OpenAI’s new browser. Simon Willison and Jim Nielsen both dig into how browsers that give AI autonomous access to the web open a huge attack surface. To me, it feels like giving a teenager the keys to the house and hoping they don’t host a party. There are mitigation ideas — logged-out modes, watch modes, watchmen of various kinds — but a lot of voices say these are partial fixes, handkerchiefs for a leaky ship. Dane Stuckey (OpenAI’s CISO, discussed by Simon and others) admits the problem is “unsolved”. That phrasing alone makes my stomach sink a bit.

I’d say the pattern is clear: AI systems are blurring lines between data and instructions. A text can be both a fact and an order. That drives people like Henrik Jernevad to warn that established security lessons are being forgotten. It’s like trying to learn to swim with new pool rules every week. The old lifeguards aren’t sure which whistle to blow.

A related thread: Schneier and Raghavan (quoted by Simon Willison) talk about token privileges and poisoned states in LLMs. It’s technical, but the image that stuck was memory contamination — like a diary where someone else rips out pages and writes over them. Or like trying to keep track of family recipes when a mischievous cousin keeps swapping the sugar for salt.

There was a spicy post claiming OpenAI’s CISO actions and background were causing broader problems. Davi Ottenheimer didn’t mince words, comparing the situation to corporate scandals. That one reads like a political column more than a lab report. Either way, the tone shows how raw the debate is — technical worry mixed with distrust of institutions and their people.

If you want a taste of the technical show-and-tell, Joseph Thacker gives examples of the technique hacking real systems. Read his piece if you like seeing how the trick is pulled. If you want the industry reaction and the cynical side, check out Davi Ottenheimer.

Cloud outages and why concentration hurts

A different cluster of posts read like a horror story of centralization. An AWS DNS snafu in Virginia knocked out big swathes of the internet for hours. Jamie Lord frames it as the result of market forces — too much power sitting with a handful of cloud giants. It’s true: when the big provider sneezes, a lot of the web catches a cold.

Jamie Lord calls it a paradox: the internet was built to be resilient, but economics shoved everything into tiny silos. I’d say that metaphor fits — like building a village that depends on one tap. If the tap breaks, everyone’s thirsty.

Then Davi Ottenheimer piles on with a critique that the real issue was not just the tech but the people running it. He called some of the explanations “patently absurd” and dug into the idea of data integrity as the neglected child of cloud design. To me, it sounds like saying the engine is fine but the driver is sleep-deprived. Both matter.

These posts together underline a trend: resilience isn’t just about redundancy; it’s about governance, staffing, and incentives. If you like reading about how the sausage gets made — and how it goes wrong — both Jamie Lord and Davi Ottenheimer are worth the click.

Hardware-level nasties: fault injection and CIP hacks

On the hardware front, there’s a proper manual out: a 98-page guide to fault injection attacks from Cristofaro Mune and Niek Timmers, discussed by Denis Laskov. This is the kind of thing that makes people who love tinkering feel a prickly thrill and makes everyone else nervous. The guide walks physics, architecture, and setups — it’s practical and old-school low-level mischief.

There’s also talk about Common Industrial Protocol (CIP) from Denis Laskov again, who points out how easy it is to discover devices and get critical info. The idea that a simple request reveals much of a device’s secrets is like walking into the factory and finding employee records on a sticky note by the photocopier. It’s not glamorous, but it’s effective.

These posts remind me: not every exploit needs elegant social engineering. Sometimes a blunt hammer and a crude instruction do the trick. And that’s worrying because these protocols run power plants, factories, and other things that people depend on.

Consumer devices: locks, TVs, and password managers

There’s a sticky thread about everyday devices that we trust with our lives, or at least with convenience.

  • Smart locks: Denis Laskov covered research showing Bluetooth replay attacks and other design flaws in Master Lock’s Deadbolt D1000. The attacks let researchers open doors, forge logs, even brick devices. It’s a bit like buying a car with a key that works on any model in the car park. Scary.

  • Smart TVs: HbbTV is a blast from the past, apparently, and Denis Laskov reports how unencrypted URLs can be injected via broadcast. Think of it as leftover old plumbing that still feeds into your living room. The standard uses dated web tech. TVs are less isolated than we like to think.

  • Password manager extensions: Michael J. Tsai flagged a clickjacking bug in iCloud Passwords for Firefox that also impacts others like 1Password and LastPass. The gist is attackers could make a site auto-fill credentials and siphon them off. I’d describe it as a sticky note on the monitor with your passwords — but done in code.

These pieces together make a clear point: convenience and legacy decisions are a toxic mix. We glue different systems together and then act surprised when the whole thing falls over.

Malware and the persistent nasties

There’s a fresh variant of XCSSET hitting macOS in new, modular ways. Michael J. Tsai outlined how it targets browsers, hijacks clipboards, and even spreads via Xcode projects. The novelty here is the infection path — messing with developers’ build systems. It’s a reminder that the chain is only as strong as its weakest stage. A compromised dev machine can be a supply-chain disaster.

This one made me think of infection as gossip: it starts in a quiet corner and, if no one stops it, suddenly it’s at the wedding. Same idea, different setting.

Risk, compliance, and the paperwork side of security

On a less thrilling note but still important: Burkhard Stubert walked through a webinar on risk assessment under the CRA (Cybersecurity Risk Assessment). There’s a newsletter series planned about the risk steps, and an emphasis on Security Decision Records (SDRs) — essentially documenting why you made certain security choices. That’s the sort of dull, useful remedy that makes audits easier.

I’d say the tone here is practical: if you’re a manufacturer, you’re asking “What must I show regulators?” Burkhard’s post felt like a handbook for getting your paperwork in order before the inspectors show up. It’s dry, but you’ll sleep better if you follow it.

Medical devices and industrial IoT — a whole other ballgame

The Internet of Medical Things (IoMT) review (2020–2025) reported by Denis Laskov is dense and a little grim. The authors of the review mapped vulnerabilities across perception, network, application, and cloud/edge layers. The attacks range from physical tampering to MITM to privacy violations. This isn’t just an academic worry: patient safety can be at stake.

If you like heavy reading, that paper collects five years into one place. It’s like a compendium of things you’d rather not think about: insulin pumps, monitors, all talking on networks with holes in their flotation devices.

Defensive tools, hiring, and the merit badge idea

A couple of more upbeat posts offered tools and hiring ideas. Jamf beta-launched an AI tool for executive protection. Jonny Evans wrote that the tool gives quick forensic insights and reduces the need for top-tier expertise. It’s helpful; it feels like training wheels for incident responders. Not a cure-all, but something that gets you moving faster.

Then there’s the “Cybersecurity Merit Badge” idea on Schneier on Security. The idea: prove skills with verifiable badges. Recruiters would see real competencies, not just claims. That struck me as practical and necessary. It’s like showing certificates for plumbing instead of trusting the bloke who says “I’ve done a bit.”

There’s a theme here: automation and tooling can help, but only when paired with verified skills. A new tool won’t save you if the person using it doesn’t know what to look for.

Where the writers seem to agree — and where they argue

Agreement shows up in a few places. Most writers accept that:

  • Prompt injection is real and dangerous. People used words like “unsolved” and “open” more than once.
  • Centralization of infrastructure (cloud concentration) raises systemic risk. The AWS incident made that point loud and clear.
  • Legacy tech and convenience features (old protocols, browser integrations) keep biting us in the backside.

Disagreements are more about tone and blame. Some authors point fingers at corporate leadership and poor governance (Davi Ottenheimer), while others focus on the inevitable trade-offs of complex systems (Jamie Lord). Some writers are more skeptical about AI fixes and think security practices are being sidelined (Henrik Jernevad), while others are open to hybrid approaches like JAMF’s tooling or merit badges as partial remedies (Jonny Evans, Schneier on Security).

It’s like a group of neighbours debating whether to fix the fence or buy a new watchdog. Some want nails, some want teeth, and some just want a plan.

Little details that bug me — and should bug you too

A few recurring little things kept gnawing at the edges:

  • Browser extensions having too much privilege. The password manager clickjacking story is a perfect small-bore example. We let extensions into our browsers like houseguests and then forget who’s staying in the spare room.

  • Dev tools as an infection vector. The XCSSET spreading via Xcode projects points to the same theme: developers are a high-value target because they have keys.

  • Unencrypted broadcast channels and legacy protocols. TVs and industrial gear still using old tech tells me we keep building on top of weak foundations.

These are not glamorous, headline-grabbing flaws. They’re the kind of messy, practical vulnerabilities that let real attackers do real harm.

A few tangents — small detours worth a peek

  • A post on how to document risks (Burkhard’s SDRs) makes me think of tax paperwork. Nobody enjoys it, but if you do it right you don’t get surprised. These aren’t sexy, but regulators and customers will ask for them.

  • The merit badge idea reminded me of Scouts badges — which, yes, feels oddly wholesome in the middle of crypto-wallet theft and browser worms. But it’s practical: show me the badge and I’ll believe you can patch a buffer overflow without breaking the coffee machine.

  • The hardware fault injection paper is dense; it’s the kind of thing that makes me nostalgic for academic days — if I had any. It’s heavy-duty reading for people who like their security at the transistor level.

Believe me, those detours loop back. Documentation matters when you’re blamed for a cloud outage. Developer hygiene matters when a supply-chain worm goes wandering. Hardware matters when an industrial protocol is easy to enumerate.

Who to click if you’re curious

If you want to chase particular threads: pick one author and follow their links.

  • For prompt injection and AI/browser risk, read Simon Willison, Joseph Thacker, and Jim Nielsen. They break down the what and show how scary it is.

  • For cloud outage thinking and systemic critique: Jamie Lord and Davi Ottenheimer are good reads. One looks at the market structure, the other at the staffing and integrity side.

  • For low-level hardware and industrial protocols: Denis Laskov is doing the heavy lifting. His posts collect practical guides and slideware that you can actually run with.

  • For consumer-level threats: Michael J. Tsai writes short, pointed pieces about real-world bugs like XCSSET and the password manager flaw.

  • For compliance and documentation: Burkhard Stubert gives a plain, useful path to get your records in order.

Final thread — a small worry that’s been growing

One unease threads through many posts: we’re building faster than we’re thinking. New AI browsers, automated tools, and central platforms are lovely until they’re not. The pileup of legacy tech, developer trust, and concentrated infrastructure makes a perfect storm. It’s not doom, but it’s a fragile, expensive mess.

To me, the week’s posts felt like a town meeting where half the people want to slap on a bandage and half want to rebuild the bridge. Neither is wrong. Both are needed. And the people who keep saying “document your decisions” are the dull-voiced ones who’ll be quietly right when the auditors come knocking.

If anything, the common wisdom I picked up is this: treat convenience as a suspect. Treat documentation as an ally. Treat new tech with curiosity and suspicion — like you would a flashy neighbour with a trailer full of fireworks. Read the original posts if you want the receipts; they’re full of details you won’t get from a headline. Happy digging.