Cybersecurity: Weekly Summary (November 17-23, 2025)
Key trends, opinions and insights from personal blogs
There was a lot of noise about cybersecurity this week. Some posts were like calm, careful clinic notes. Others were more like someone shouting from the rooftop while waving a USB stick. I would describe the mood as jittery and curious. To me, it feels like people are trying to make sense of a world where smart software, cheap hardware, and giant central services all sit in the same room and occasionally trip over each other.
AI: Not magic, not harmless
The debate about AI-powered attacks keeps coming up. Some writers are saying: don’t panic; LLMs aren’t about to replace skilled hackers. Others point at real, practical harms. I’d say both sides are right. They’re talking about slightly different things.
Christopher Parsons (/a/christopher_parsons@excited-pixels.com) wrote a really level-headed piece called "Vibe-Coded Malware Isn’t a Game Changer (Yet)." He argues that LLMs act like a fancy autopilot: they help point you in the right direction, but they don’t yet build a full, reliable exploit chain on their own. In other words, you still need a human mechanic to get the car running. Parsons warns against breathless headlines that treat LLMs like magic black boxes that can own the internet overnight.
But then you have quick, sharp notes — like the short quote passed around from Nicholas Carlini about personalized blackmail. That one sticks. It’s not about writing complex ransomware. It’s about LLMs reading your files and crafting poisonously personal messages. That’s low effort and high damage. It’s like comparing a sledgehammer and a tiny, very accurate needle. Different tools, different harms.
There’s research that lands somewhere between those two extremes. Simon Lermen (/a/simon_lermen@simonlermen.substack.com) and colleagues showed how AI can be used to phish older adults by jailbreaking models and crafting very believable emails. They ran an experiment with real people, and eleven percent of participants got phished by at least one email. That number is chilling. It’s not a hypothetical threat; it’s tangible.
And then there’s the Golden Agent essay by Gunnar Peterson (/a/gunnar_peterson@defensiblesystems.substack.com). He draws a line from old Active Directory weaknesses — the Golden Ticket — to the new problems when we let autonomous agents do work for us. The worry is familiar: delegation creates long-term credentials and surprising trust. I would describe this as an old safety problem wearing a new hat. It’s not sexy, but it’s serious.
So, LLMs aren’t a magic malware vending machine yet. But they do make some nasty scams and reconnaissance much easier. To me, it feels like a shop putting sharp knives within kids’ reach. Not every kid will grab one, but the chance of a cut goes up.
Agents, platforms, and an immune system
There’s a thread about agents and how platforms are trying to manage them. Dave Friedman (/a/dave_friedman@davefriedman.substack.com) wrote about an "immune crisis" where Factory’s Droid coding agent was used by fraud rings. Free tiers became weaponized. That sounds like an arms race between open access and platform hardening. Platforms are suddenly playing immune-system architect.
Amazon blocked Perplexity agents from scraping pages. That’s like a shopkeeper putting up a turnstile because a gang keeps using the back door. You can see the logic. But it also makes the bouncer vs. customer balance awkward.
And the Gemini 3 safety review by thezvi (/a/thezviwordpresscom) points out that big models may be excellent at some tasks, but their safety documents can be frustratingly fuzzy. Developers want clear rulebooks. They get manual blurbs instead. That makes risk management hard.
This ties back to the "Golden Agent" worry. Give agents access, and you give them keys. We don’t have very good ways to audit what they do yet. It’s the same problem as letting a clever intern loose with the office keys and the safe code taped to the drawer.
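To make that concrete, here’s a minimal sketch of the pattern the Golden Agent worry points away from: instead of standing keys, the agent gets short-lived, narrowly scoped credentials, and every action passes an explicit check. All the names and the token shape here are hypothetical, not any real vendor’s API.

```python
# A minimal sketch: short-lived, scoped agent credentials instead of
# standing keys. All names and formats are hypothetical.
import secrets
import time

def issue_agent_token(scopes: list[str], ttl_seconds: int = 900) -> dict:
    """Mint a token that names what the agent may do and when it expires."""
    return {
        "token": secrets.token_urlsafe(32),
        "scopes": scopes,                      # e.g. ["repo:read"], never "*"
        "expires_at": time.time() + ttl_seconds,
    }

def check_token(token: dict, needed_scope: str) -> bool:
    """Audit point: every agent action passes through an explicit check."""
    if time.time() > token["expires_at"]:
        return False                           # stale delegation dies on its own
    return needed_scope in token["scopes"]

grant = issue_agent_token(["repo:read"])
assert check_token(grant, "repo:read")
assert not check_token(grant, "repo:write")    # write was never granted, so it's denied
```

The design choice is the point: expiry and explicit scopes turn the "safe code taped to the drawer" into something you can revoke and audit.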
Infrastructure fragility: Cloudflare and friends
A huge chunk of the conversation this week was about Cloudflare. Multiple posts dug into the outage on November 18–19. The short version: a small change to database permissions caused a Bot Management feature file to balloon in size. That oversized file crashed services and took a sizeable chunk of web traffic with it. The crash also overlapped with real DDoS activity, which made it look worse at first.
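The mundane fix-forward lesson: treat machine-generated config as untrusted input, cap it, and keep a known-good copy to fall back on. A minimal sketch of that kind of guardrail, with a hypothetical file format and limit (not Cloudflare’s actual code):

```python
# Illustrative guardrail for a machine-generated config file. The JSON
# layout, the cap, and all names are hypothetical.
import json
import logging

MAX_FEATURES = 200          # hypothetical hard cap on entries
last_good_config = {}       # keep the previous known-good config around

def load_feature_file(path: str) -> dict:
    """Load a generated feature file, falling back to the last good copy."""
    global last_good_config
    try:
        with open(path) as f:
            config = json.load(f)
        if len(config.get("features", [])) > MAX_FEATURES:
            # Refuse the oversized file instead of crashing the service.
            raise ValueError(f"feature file exceeds {MAX_FEATURES} entries")
        last_good_config = config
        return config
    except (OSError, ValueError) as exc:
        logging.error("rejecting feature file %s: %s", path, exc)
        return last_good_config  # degrade gracefully, don't take down traffic
```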
Davi Ottenheimer (/a/daviottenheimer@flyingpenguin.com) and Brian Fagioli (/a/brianfagioli@nerds.xyz) both covered the technical back-and-forth. Jamie Lord (/a/jamielord@nearlyright.com) added the bigger-picture view: when a service used by 20% of the web hiccups, it’s a stress test. Brian Krebs (/a/briankrebs@krebsonsecurity.com) used that stress-test angle to note that some customers who redirected traffic away from Cloudflare during the outage opened themselves up to other risks. That’s an important, slightly ironic point: swapping vendors fast isn’t always safer.
Microsoft’s reporting about the Aisuru botnet made the week feel like a thriller. Aisuru hit a peak of something like 15.72 Tbps. That’s a number you can’t really picture. It’s like seeing a fire truck that can also swallow a barn. Microsoft praised its DDoS protections, but the whole episode raised questions about central points of failure. When a few companies control key internet plumbing, an outage at one place ripples everywhere.
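If you want to try picturing it anyway, the back-of-the-envelope arithmetic looks like this (the 5 Mbps per HD video stream is my assumption, not a figure from the reporting):

```python
# Back-of-the-envelope math to make 15.72 Tbps picturable. The 5 Mbps
# per-stream figure is an assumed typical HD video bitrate.
peak_bps = 15.72e12            # reported peak, bits per second
hd_stream_bps = 5e6            # assumed HD video stream, bits per second

print(f"{peak_bps / 8 / 1e12:.2f} TB of traffic per second")   # about 2 TB every second
print(f"~{peak_bps / hd_stream_bps / 1e6:.1f} million simultaneous HD streams")
```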
There’s a human lesson here. People and companies grew complacent. They relied on a single fence and didn’t inspect the backyard. Cloudflare’s incident shows how small, mundane changes can cause wide damage. It’s like the loose tile on the stairs that trips up the whole family on Monday morning.
Hardware and the IoT emergency room
Hardware security popped up in multiple posts this week. Denis Laskov (/a/denis_laskov@it4sec.substack.com) was especially busy. He shared several posts about IoT and vehicle systems.
One was a long, 29-page academic survey of modern hardware attacks on IoT. If you’re unfamiliar with terms like fault injection, PUFs (physically unclonable functions), or TEEs (trusted execution environments), that survey is a solid primer. It reads like a medical manual for devices that don’t have one.
Then there was the EV charger research. Forty-one out of sixty-nine chargers tested (roughly 60%) had misconfigurations in their Qualcomm HomePlug GreenPHY modems. The researchers could disable chargers and even run Doom on a modem as a proof of concept. That’s a clever, slightly dark demonstration: it shows how consumer-grade components can make infrastructure hazardous. Imagine public chargers being turned into dead stations during a holiday weekend. Frustrating for sure, and possibly dangerous.
Jun Yeon Won’s CarPlay adapter work (also covered by Laskov) felt especially relatable. Cheap adapters let old cars talk to new iPhones, but the adapter is often a small Android computer in disguise, with access to the car’s CAN bus and the driver’s private data. It’s like finding a stranger tapped into your house intercom. The research shows both privacy and safety problems: little accessories that promise convenience often bring surprising attack surface with them.
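For a sense of scale: once any device sits on a car’s CAN bus, passively reading traffic takes only a few lines. A minimal sketch assuming Linux SocketCAN and the python-can package, with 'can0' as a placeholder interface name:

```python
# Illustrative only: passive CAN sniffing from a device on the bus.
# Assumes Linux SocketCAN and python-can; 'can0' is a placeholder.
import can

bus = can.Bus(interface="socketcan", channel="can0")
for _ in range(10):
    msg = bus.recv(timeout=1.0)       # blocks up to 1s for the next frame
    if msg is not None:
        # Arbitration ID and raw payload; no authentication required.
        print(f"id=0x{msg.arbitration_id:03X} data={msg.data.hex()}")
bus.shutdown()
```

There is no login step in that loop, and that is the whole problem: the bus trusts whatever is plugged into it.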
Anne Zachos’s write-up on trailer wiring being used as an antenna for seed-key brute-force attacks felt like a plot from a heist movie. It’s a low-tech vector with high real-world impact. Mechanics and fleet operators should care. It’s not theoretical. It’s practical and immediate.
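For context, seed-key is a challenge-response gate: the ECU hands out a seed, and the diagnostic tool must answer with the matching key. If the keyspace is small, the defense reduces to arithmetic, and the arithmetic is ugly. A sketch with assumed numbers (the 16-bit keyspace and the attempt rate are illustrative, not figures from Zachos’s write-up):

```python
# Why weak seed-key schemes fall to brute force: keyspace divided by
# attempt rate. Both numbers below are illustrative assumptions.
keyspace = 2 ** 16             # a 16-bit key has only 65,536 possibilities
attempts_per_second = 20       # assumed rate over a diagnostics link

worst_case_seconds = keyspace / attempts_per_second
print(f"worst case: {worst_case_seconds / 60:.0f} minutes")   # ~55 minutes
```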
These hardware issues have a pattern. They aren’t glamorous. They’re not headline-grabbing zero-days. They are messy, systemic, and slow-burning. Yet they can cause actual physical harm. That’s always unnerving.
Identity, OSINT, and privacy drift
A few posts focused on identity and OSINT. Nico Dekens (/a/nico_dekens@dutchosintguy.com) did a clear-headed piece about raw data analysis for OSINT. He stressed cleaning, structuring, and visualizing data. That’s the boring, necessary grunt work that gets you signals instead of noise. Dekens’s advice reads like a craftsman’s checklist: do the basics well.
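As a taste of that grunt work, here’s a minimal sketch of the clean, structure, visualize loop in pandas; the file and column names are hypothetical:

```python
# A minimal sketch of the clean -> structure -> visualize loop.
# The CSV file and its column names are hypothetical.
import pandas as pd

df = pd.read_csv("scraped_posts.csv")

# Clean: normalize timestamps and usernames, drop exact duplicates.
df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce", utc=True)
df["username"] = df["username"].str.strip().str.lower()
df = df.dropna(subset=["timestamp"]).drop_duplicates()

# Structure: posts per account per day, a shape you can reason about.
activity = (
    df.set_index("timestamp")
      .groupby("username")
      .resample("D")
      .size()
      .rename("posts")
      .reset_index()
)

# Visualize: even a plain pivot table surfaces bursts and dormant accounts.
print(activity.pivot(index="timestamp", columns="username", values="posts"))
```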
On the other end, there were pieces about AI being able to identify people across platforms. Simon Lermen wrote about the economics of that problem. When AI can stitch together public crumbs into a person’s full biography, it makes stalking, scams, and blackmail cheaper. That’s an ugly market incentive. It’s not just a privacy problem; it’s a profit-making method for bad actors.
There’s also a skeptical piece on whistleblowers and public figures. The Wise Wolf (/a/thewisewolf@wisewolfmedia.substack.com) wrote provocatively about how intelligence tradecraft shapes public narratives. It’s messy. Edward Snowden has a Netflix deal. Julian Assange is a complicated legend. The post argues that some privacy advice is actually convenient for surveillance actors. That’s a spicy take and it forces a second look at who benefits when privacy advice becomes mainstream and polished.
Also, Tengucon’s OSINT CTF write-up by djnn (/a/djnn) was fun. It’s a reminder that OSINT is a craft you can learn and teach with playful challenges. These exercises teach muscle memory for real investigations, and they keep the community sharp.
Regulations, compliance, and keeping your job
A few posts were less about midnight hacks and more about paperwork and training. These are quieter themes, but they matter.
Atilla Bilgic (/a/atilla_bilgic@practicalsecurity.substack.com) wrote a very practical guide for junior developers on choosing AI tools that won’t get them fired. It’s aimed especially at folks in regulated industries. The EU AI Act looms over everything. The post gives a 90-day plan and a framework for evaluating tools. It’s the kind of “how not to bring down the company” checklist I’d hand to a new engineer.
Burkhard Stubert (/a/burkhard_stubert) announced CRA Survival Trainings to help manufacturers with the Cyber Resilience Act. The CRA means product makers need to bake security into devices now or face bans starting in 2028. That’s a hard deadline. Manufacturers are caught between product timelines, supply-chain realities, and new cybersecurity obligations. These training programs aim to close that gap.
Regulation and compliance are boring but not optional. If you think security is optional, try selling a device in 2028 without a compliance stamp. It’s like trying to sell a car without brakes.
Big breaches, budgets, and the macro view
Some posts tried to fold these stories into a larger economic and strategic picture. Paul Kedrosky (/a/paul_kedrosky) highlighted that US cybercrime costs hit $16.6B in 2024. That’s a high, headline-grabbing number. It’s a ledger entry that reminds you this stuff has real costs.
Darwin Salazar’s TCP roundup (/a/darwin_salazar@cybersecuritypulse.net) talked about CISO budgets for 2026 and the long-term funding picture. There’s an industry reaction: with rising threats, security budgets shift, but so do priorities. People debate where to spend: more monitoring, more personnel, or more tooling.
Otakar Hubschmann (/a/otakarg_hubschmann@theaiunderwriter.substack.com) and others mused about an LLM bubble, GPU depreciation, and the investment rhythms of the AI industry. That’s a different angle: cyber risk meets capital markets. When hot AI startups face hardware constraints or immune-system pushback from platforms, the attack surface and business models shift.
Startups, identity substrates, and danger zones
There was an attention-grabbing idea labeled "the most dangerous startup idea in the world." The piece (from Unworkable Ideas (/a/unworkable_ideas)) described a semantic substrate that stitches identity management tightly into enterprise apps. It’s an elegant product concept. It’s also a concentration-of-power problem. Handing one system the keys to identity and context is like putting your passport, house key, and social security number into one wallet. It’s tempting for companies, but it centralizes risk.
This idea ties back to many other pieces: agent frameworks, the need for auditability, and the risks of heavy integration. If one vendor owns the substrate, a failure or compromise could be devastating.
Vulnerabilities that feel like bad plumbing
Several posts showed a pattern that looks like bad plumbing. Small mistakes cascade. Cloudflare’s oversized file was a perfect example. So were the heavy-vehicle ECU brute-force insights and the Huawei Kirin SoC exploit (Denis Laskov shared that one too). These are not headline zero-days that let an attacker flip a city like a light switch. They’re more like hairline cracks in a water pipe that leak quietly and then suddenly flood the basement.
The Huawei hack was technical and interesting. It showed a GPU logic bug giving access on a Mate 60 Pro. That can be eye-opening for people who assume phones are sealed systems. The reality is the stack is complex and messy.
Friction, taste, and the human angle
There were also some human-centric threads. The Reuters-linked research into deception rings in Southeast Asia, and the testimony cited from a Senate hearing, showed how victims get coerced into criminal operations. That part is ugly and intimate. It’s not tech theater; it’s people being pushed into harm.
The whistleblower skepticism piece pushes us to question narratives. Edward Snowden’s Netflix deal is an odd data point in the culture of privacy. Does celebrity make privacy tools safer? Or does it make them a sales channel for surveillance-savvy actors? I’d say it’s messy and worth a second, skeptical look. Sometimes the popular fix is the easiest to monetize and the least protective.
What kept repeating
A few motifs kept showing up across posts. I’ll name some, because they matter:
- Delegation risk: whether it’s giving an AI agent credentials or letting a cloud service manage your DNS, handing off control creates brittle, hard-to-audit risk.
- Centralization risk: big vendors make things convenient, but also make big outages and big attack surfaces.
- Hardware is messy: billions of cheap devices are built without threat models. Cheap accessories and chargers are legitimate security problems.
- AI is a multipurpose tool: it helps defenders, but it also helps attackers in different ways than we first imagined.
- Compliance catches up slowly: laws like the EU AI Act and the CRA are trying to shape the future, but they also create an ecosystem for training and services.
You saw those same patterns in different clothes across the week. Different authors focused on different corners. Some were technical, some were policy-minded, and some were more narrative.
A few practical notes (if you’re poking around)
- If you care about OSINT, read Nico Dekens for the practical steps of cleaning and visualizing raw data. It’s not glamorous. It’s useful.
- If you’re a junior dev in healthcare or finance, Atilla Bilgic’s piece gives a 90-day safe-onboarding plan for AI tools. Useful and practical. Take notes.
- If you build devices, the CRA trainings Burkhard Stubert mentioned are time-sensitive. 2028 is right around the corner if you’re on slow product cycles.
- If you run an online service, read the Cloudflare postmortems and Brian Krebs’s take. Think about resiliency, and think about what happens when your safety net goes down.
- If you worry about your grandma getting phished, Simon Lermen’s jailbreaking study is worth a read. It’s a real experiment with real people.
If you want to go deeper, click through to the original posts. The authors dug into the weeds, and I only hinted at the good bits. Each write-up had its own flavor and useful technical threads.
This week felt like someone rearranged the kitchen. Knives are still where they used to be, but there are now more of them, some behind new cabinets, and one of the drawers is jammed. The work now is to make sure people know which drawers are safe, who gets the keys, and how to close the kitchen door when necessary. Read the threads. There’s useful stuff in the details, and the details matter.