Programming: Weekly Summary (February 2–8, 2026)

Key trends, opinions and insights from personal blogs

It was an oddly busy week in programming blogs. Small posts. Focused posts. A few that poke at old habits and some that fiddle under the hood. I would describe them as a handful of honest notes, each one trying to fix a particular squeak in the machine. To me, it feels like a potluck dinner where everyone brings one dish: some spicy, some plain, some oddly comforting. You will probably want to taste a bit of each.

Quick map of the week

There are a few clear lanes: agent programming and LLM behaviour; language and parsing work (CSS, tokenizers); flow control in systems code; the old-school hobbyist optimization crowd; and a short, sharp reminder about how teamwork actually builds tech. The posts come from different corners, but similar questions keep turning up. How much do we rely on context? When is the language the problem? When is the model the problem? Can old tricks still win? I’d say these are the threads running through the week.

Agents, context, and the stubbornness of LLMs

Better than Random wrote a tidy study that I’d describe as quietly important. It compares two ways to teach an agent: a persistent AGENTS.md file versus Skills that the agent can call during a run. To me, it feels like watching someone decide whether to pin a recipe to the fridge or whisper instructions as they cook. The fridge note wins.

The study shows that AGENTS.md, a persistent context file, nudges the agent toward better behaviour. Skills, even when available, were not used reliably at first. That says something obvious and also something that gets ignored a lot: LLMs are non-deterministic. They do not always do the helpful, rational thing even when you hand them the right toolkit at runtime. They need scaffolding that sticks.

There are small, practical points in the post that are worth remembering. For example, adding persistent guidance repairs a lot of weird edge cases that would otherwise require brittle runtime instructions. It is like putting a cheat-sheet on the table instead of trying to shout it across a noisy room. The cheerfully frustrating part is the reminder that models will surprise you. You can give them tools and still get wrong answers. The fix is not always more tools; sometimes it is better context and clearer habits.

If you tinker with agents or think about building workflows around LLMs, the post nudges you toward designing for persistence. Think of AGENTS.md as the pegboard over your workbench. You hang the key instructions there, and the agent learns to glance at them.
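To make the pegboard idea concrete, here is a minimal Python sketch of "design for persistence". It assumes a generic chat-style API shape; `build_messages` is a hypothetical helper, not code from the post:

```python
# Sketch: make persistent guidance ride along on every agent call,
# instead of hoping the model invokes the right tool at the right moment.
# The messages format mirrors common chat APIs; adapt to your client.

def build_messages(instructions: str, user_prompt: str) -> list[dict]:
    """Persistent guidance goes in as a system message on every turn."""
    return [
        {"role": "system", "content": instructions},
        {"role": "user", "content": user_prompt},
    ]

# Load AGENTS.md once at startup (inlined here to keep the sketch runnable).
agents_md = "Always run the test suite before committing.\nPrefer small diffs."
messages = build_messages(agents_md, "Refactor the parser module.")
```

The point of the shape: the guidance is re-sent on every call, so the agent cannot "forget" to consult it the way it can forget to call a Skill.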

Being bad at CSS can be a superpower

Keith Cirkel wrote a longer, friendlier piece about learning CSS and then building tools to make CSS easier to reason about. The post is approachable. The title — Bad At CSS — is not an apology. It is a bridge. The author moves from the humble admission that CSS can be weird, to building a parser called csslex and later csskit.

What stands out is the focus on fundamentals. Parsing theory, tokenization, ASTs — those sound like dry schoolbook topics, but Keith makes them feel like simple carpentry. Tokenization is like sorting screws into jars. Parsing is like fitting the right screw into the right place on a hinge. Once you see the parts, you can start to rearrange them on purpose.
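To make the jars-of-screws picture concrete, here is a toy tokenizer in Python. This is not csslex — real CSS tokenization follows the CSS Syntax spec and handles vastly more cases — it only illustrates the idea of sorting characters into typed tokens:

```python
# Toy tokenizer: split a CSS-ish declaration into typed tokens.
# Whitespace is matched but dropped, like sweeping sawdust off the bench.
import re

TOKEN_RE = re.compile(
    r"""
    (?P<ident>[a-zA-Z-]+)     # identifiers: property names, keywords
  | (?P<colon>:)              # separates property from value
  | (?P<semicolon>;)          # ends a declaration
  | (?P<ws>\s+)               # whitespace, discarded below
    """,
    re.VERBOSE,
)

def tokenize(css: str) -> list[tuple[str, str]]:
    tokens = []
    for m in TOKEN_RE.finditer(css):
        kind = m.lastgroup          # name of the group that matched
        if kind != "ws":
            tokens.append((kind, m.group()))
    return tokens

print(tokenize("color: red;"))
# [('ident', 'color'), ('colon', ':'), ('ident', 'red'), ('semicolon', ';')]
```

Once you have a token stream like this, parsing is the hinge-fitting step: walking the tokens and assembling them into an AST.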

He also talks about CSS variables and clever techniques to change CSS in ways that feel like small hacks but actually add a lot of clarity. There’s a warm, slightly self-deprecating tone in the post. The claim that being bad at CSS can be a skill is oddly convincing. I’d say the point is that a beginner’s awkwardness forces you to be explicit. You are forced to learn why something breaks, not just how to copy the solution.

This one reads like a slow, thoughtful workshop. It gently nudges people who struggle with CSS to consider building their own little tools. That tip is basically saying: you will understand more if you build one small parser. Don’t be scared of the boring bits.

Routing messages like sorting mail: TPL Dataflow and LinkTo predicates

Bart Wullems shares a focused note on using the LinkTo predicate parameter in TPL Dataflow. The post is short and practical. It explains how to route messages based on conditions, and why you need a catch-all path so messages don’t vanish.

Think of it like postal sorting. You set up lanes on a conveyor and try to scoot each letter into the right chute. If a letter doesn’t match any chute, it can fall onto the floor unless you put a catch-all bin. Bart’s example of processing orders by priority keeps it concrete. The post reminds you that predicate order matters, that predicates are evaluated at link time, and that missing a fallback is where subtle bugs hide.
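TPL Dataflow itself is .NET, so here is a language-neutral Python sketch of the same routing idea — ordered predicate lanes plus a mandatory catch-all. The helper names are mine, not Bart's code:

```python
# Sketch of predicate-based routing with a catch-all, in the spirit of
# TPL Dataflow's LinkTo overload that takes a predicate.
# Lanes are tried in order; the first matching predicate wins.

def route(order: dict, lanes: list[tuple], catch_all) -> str:
    for predicate, handler in lanes:
        if predicate(order):
            return handler(order)
    return catch_all(order)   # without this, unmatched messages just vanish

lanes = [
    (lambda o: o["priority"] == "high",   lambda o: f"express:{o['id']}"),
    (lambda o: o["priority"] == "normal", lambda o: f"standard:{o['id']}"),
]
catch_all = lambda o: f"dead-letter:{o['id']}"

print(route({"id": 1, "priority": "high"}, lanes, catch_all))   # express:1
print(route({"id": 2, "priority": "weird"}, lanes, catch_all))  # dead-letter:2
```

The second call is the whole point: a priority no predicate expects still lands somewhere visible instead of on the floor.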

It’s one of those posts that is short but saves you an afternoon of head-scratching. If you’ve ever come back to a pipeline and wondered where a message disappeared, this is the kind of reminder that feels like a flashlight in a dark attic.

The slow fade of credit and how teamwork is the real engine

The post from daveverse is a reflective piece about collaboration. The author pushes back against the hero narrative, pointing at the usual suspects: people celebrate the person on the stage and forget the people in the shadows who kept the lights on.

They bring up MP3 and HTML as examples where collective effort mattered. It’s a familiar complaint. But the tone is not bitter. It’s a quiet insistence that if we want better tech we should pay attention to how teams actually work, not just to the neat storylines. To me, it feels like a small social correction: remember that most of engineering is stubborn coordination, not glamorous eureka moments.

There’s a gentle urgency in that piece. It asks you to stop polishing the legend and start looking at the messy logs and commit histories. That alone changes how you celebrate progress. If you care about sustainable tech, the post is a small nudge to think, re-think, and then design for collaboration, not just heroics.

Why software is bloated: a rant with a point

Paul Tarvydas lays out a simple diagnosis: modern languages are mostly fancy reworkings of CPU assembler. The result is a strong tilt toward synchronous, sequential thinking. He argues that this narrow toolbox makes us shove every problem into the same shape, and that produces a lot of unnecessary bulk.

His case is blunt. He wants more diversity in how we think and write programs. Different problems need different notations and different models. Using the same old sequential mindset for everything is like insisting every recipe be cooked in a pressure cooker just because you own one. It works sometimes, but more often it is overkill.

There is agreement across a few posts this week that notation matters. Paul presses on that idea harder. He points out that by clinging to a single dominant paradigm, we end up building heavy systems. He wants languages, tools, and perhaps even hardware that let us express things in ways closer to the problem, not the CPU.

The piece reads like a provocation. It’s the sort of post that makes you nod and then squint and then maybe argue. But that’s fine. We need a little friction in the conversation.

Vintage computing: cleverness in a small space

heckmeck! brings the week back to a very different workshop. Their write-up of the VCCC 2025 challenge is a delight if you like tight, careful optimizations. The author dives into patterns in coordinate pairs, memory tricks, and ways to reduce code size while keeping compatibility.
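The post's actual tricks live in the original, but as a generic illustration of the "patterns in coordinate pairs" idea, here is a hypothetical Python sketch of delta-encoding: when points cluster, storing each one as a small offset from the previous point takes fewer bytes than storing absolute values. This is my illustration, not heckmeck!'s code:

```python
# Hypothetical illustration: delta-encode coordinate pairs.
# Clustered points produce small deltas that can be packed into
# single bytes, a classic size trick on memory-starved machines.

def delta_encode(points: list[tuple[int, int]]) -> list[tuple[int, int]]:
    out, prev = [], (0, 0)
    for x, y in points:
        out.append((x - prev[0], y - prev[1]))  # offset from previous point
        prev = (x, y)
    return out

def delta_decode(deltas: list[tuple[int, int]]) -> list[tuple[int, int]]:
    out, x, y = [], 0, 0
    for dx, dy in deltas:
        x, y = x + dx, y + dy                   # accumulate offsets back
        out.append((x, y))
    return out

pts = [(100, 50), (101, 50), (103, 52)]
print(delta_encode(pts))   # [(100, 50), (1, 0), (2, 2)]
```

The first pair stays big, but everything after it shrinks — the kind of incremental win the post keeps stacking up.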

This is the opposite of the bloated software problem. It is the art of doing more with less. The post shows patterns that feel like woodworking joints. You cut away what you don’t need until the piece fits neatly. A lot of modern work could learn from this attitude. The emphasis is on details, not ideas. It is satisfying the way tuning an old radio can be satisfying.

What stands out is that their changes are practical and incremental. Small wins add up. That rhythm is infectious: shave a byte here, a cycle there, and suddenly the whole thing breathes better. If Paul’s post is a rant to expand our vocabulary, this one is a how-to on thrift.

Recurring themes and small disagreements

There are a few places where the posts talk past each other, sort of. Two clear axes appear.

  • Context and persistence vs ephemeral tools. Better than Random shows that persistent context helps agent performance. That overlaps with Paul’s point about notation: if you build the right scaffolding, the system behaves. But Paul wants more fundamental shifts in how we program, not only better context. So there is a slight tension: do we fix the local context or rethink the laws of the land? Both are valid answers. I’d say they are complementary, though some folks would prefer revolution over patching.

  • Minimalism vs convenience. The VCCC piece and Paul’s argument remind you of minimal, efficient design. Keith’s CSS parser work is also about getting down to basics. But the agent post shows that adding a simple file to keep context can be the difference between success and failure. Sometimes adding a bit of persistent data is cheaper than re-architecting everything. That falls into standard engineering trade-offs: fix small, fix fast, or invest in rethinking the system.

There is agreement too. Several authors implicitly line up behind the idea that tools should match the problem. Whether you call that better notation, better scaffolding, or better parsing, the goal is the same: reduce friction between intention and result. People want fewer surprises.

Short tangents worth keeping in mind

A few small asides kept popping up in different forms. These felt like the week’s little footnotes.

  • Non-determinism is a recurring pain. The agent post and the CSS parsing work both show how unpredictable behaviour can burst from seemingly small causes. One is model randomness. One is parsing edge cases. The remedy in both cases is to make the environment more reliable: persistent files, clearer tokens, better tests.

  • Small tooling wins are underrated. Keith building a parser. Heckmeck’s optimizations. Bart’s reminder to use a fallback path. Tiny tools change everyday life more often than grand rewrites.

  • Human coordination underpins the tech we admire. daveverse’s piece keeps nudging this point. The single-minded focus on the hero ignores the slow work of maintenance, conventions, and team practice.

A few concrete takeaways — short and helpful

I won’t go deep into code here, but if you skimmed the posts and want to act, here are some small, practical nudges you might try.

  • If you build agents with LLMs, try a persistent instruction file. It is cheap to add and often helps more than extra runtime tooling. AGENTS.md is a simple pegboard.
  • If CSS gives you grief, try writing a tiny parser. A small AST will teach you the shape of the problem and make certain hacks less scary. Keith’s path from csslex to csskit is a good model.
  • When building message pipelines, always include a catch-all route. Predicates are handy, but order and fallbacks matter a lot. Think sorting mail, not just sorting once.
  • If you want leaner software, study constraints. Look at retro work or projects that optimize small memory footprints. Those recipes often translate to modern systems if you care about waste.
  • Don’t romanticize heroes. Read commit logs. Count reviewers. Celebrate the messy wiring-up as much as the big reveal.

Where these posts might lead you next

If a post nudges you, follow it. Each author drops a trail. Better than Random suggests experimenting with persistent agent guidance. Keith Cirkel gives steps to build a small parser and pokes at CSS variable tricks. Bart Wullems gives a tidy example you can paste into a dataflow test. daveverse asks you to look at group structures in the projects you admire. Paul Tarvydas pushes for different notations and paradigm thinking. heckmeck! shows how tiny optimizations add up.

These are like separate kitchen drawers. Some have the knives. Some have the measuring spoons. You will find what you need if you open the right one. The best part is the mix: occasional deep theory, a bunch of practical fixes, and the odd reminder that people built this together.

A small, human note before you go

The tone across these posts is quietly earnest. There is not much posturing. It is more like a group of people writing in the margins of their day: a study note here, a how-to there, a small opinion piece. They do not promise to save the world. They promise to solve a problem, or make a tool, or nudge a culture.

If you like tinkering, pick one short piece and try a small change. Add a persistent file to an agent experiment. Build a tokeniser for a tiny language. Add a fallback in your pipeline. Try to shave one byte or a few cycles out of an old routine. Read a commit history and look for the unglamorous names. Those little moves will repay you.

There is more detail in each author’s full post. If you want the code, the diagrams, or the specific examples, follow the links. They will take you to the originals where the nitty-gritty lives. And if you find yourself arguing with any of the posts, that’s fine too. Disagreement is where the next useful post comes from.