Programming: Weekly Summary (December 29, 2025 - January 4, 2026)

Key trends, opinions and insights from personal blogs

This week’s programming blog scuttlebutt felt like standing at a busy kitchen counter while someone cooks a dozen different recipes. Some dishes were familiar and reliable. Some smelled curious. And a couple made you raise an eyebrow and think, huh — maybe I should try that at home. I would describe the pile as practical, a bit nerdy, and honest about the fiddly bits. To me, it feels like the community is evenly split between digging into low-level roots and trying to make the high-level stuff less flaky.

Threads I kept bumping into

There were a few repeating ideas. One was: go back to fundamentals. Authors nudged readers not to be dazzled by shiny trends. Another was: tools must respect people — humans want fast, partial feedback. And a persistent one: language models and AI are useful, but they still need good scaffolding to be dependable. I’d say those three threads braided through posts ranging from compilers to UI generation, and from memory safety to editor workflows.

I’ll walk through the main runs of posts. Think of this as a slow walk through a small conference hallway. I’ll point at booths and whisper what I noticed, not give a full transcript. If something catches you, follow the name and read deeper — the posts are portable little manuals.

Testing, automation, and feedback loops

Bas Dijkstra reminded folks to remember the five pillars of test automation: testing, programming, strategy, tools, engineering. The framing is stubbornly non-sexy. It’s the kind of list you nod at while brewing coffee. I would describe his take as a practical shove: focus on core disciplines instead of chasing new toys.

Kent Beck’s piece, ‘The Precious Eyeblink’, landed nearby and felt like an emotional plea. He talked about the Doherty Threshold — that split-second where the user still feels in flow. The point: tools that wait for perfect answers kill focus. Fast, imperfect feedback keeps people coding. To me, it sounds like preferring a quick sketch on a napkin over a fully rendered poster when you’re arguing about layout with a teammate. It’s small, but it changes how you design IDEs and developer tools.

Those two together make a simple argument: faster is better even if less complete. The rhythm reminded me of a café that serves a decent sandwich fast, rather than a slow tasting menu that leaves you hungry between courses.

AI, copilots, and the “artificial intern” era

There were many takes on LLMs. Some excited. Some exasperated. Addy Osmani laid out an LLM coding workflow. It’s a careful, checklist kind of post: structure tasks, hand context in small parts, keep humans in the loop. I’d say it’s the “procedural” view — how to make the assistant useful rather than magical.

Then there’s the annoyed-but-hopeful angle. Michael J. Tsai wrote about SwiftUI generation by Apple LLMs and sighed at how often generated UI still fails to compile. It reads like someone who took the car for a spin and found the wheels wobbly. You get the sense of patience wearing thin.

And Monroe Clinton called 2025 the year of the artificial intern. That’s a phrase you could slap on a sticker. The gist: people are piling AI into workflows, the tools are convenient, but they’re not perfect. They’re like interns who can do neat things but occasionally glue their fingers to the copier. You keep them because they speed things up, not because they replace judgment.

Two short quotes circulated via Simon Willison: one from Jason Gorman about the enduring difficulty in converting fuzzy human thought into exact code, and one from Liz Fong-Jones about the changed role of a programmer when working with LLMs. Those small bits are like fortune-cookie truths — obvious, but they land.

Mapping legacy code and the accidental documentation helper

If you’ve ever inherited a rattly old repo, James Wilkins wrote something you’ll bookmark. He proposes using LLMs to map legacy systems into flowcharts with Mermaid. The process is four steps and deliberately pragmatic: scope, generate a diagram, visualize, then refine. It feels like using a flashlight in an attic rather than moving the whole house. Try it in small rooms first.
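To make the “generate a diagram” step concrete: in Wilkins’s workflow an LLM emits the Mermaid text, but the artifact you end up reviewing is plain. Here’s a deterministic stand-in sketched in Python (the call map and function names are invented for illustration, not from his post) that renders a caller-to-callees map as a Mermaid flowchart:

```python
# A deterministic stand-in for the diagram-generation step.
# In the post an LLM produces this text from source code; either way,
# what you review and refine is a Mermaid flowchart like this.

def to_mermaid(calls: dict[str, list[str]]) -> str:
    """Render a caller -> callees map as Mermaid flowchart text."""
    lines = ["flowchart TD"]
    for caller in sorted(calls):          # sorted for stable, diffable output
        for callee in calls[caller]:
            lines.append(f"    {caller} --> {callee}")
    return "\n".join(lines)

# An invented call map for one small "room" of a legacy codebase.
call_map = {
    "main": ["load_config", "run_jobs"],
    "load_config": ["parse_ini"],
}
print(to_mermaid(call_map))
```

The point of scoping small is visible here: a map of two or three functions gives a diagram you can actually verify, and refinement is just editing text.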

Relatedly, Scott Werner described how a newsletter blossomed into a place for practical AI tooling demos. It’s a softer story about audience and iteration, but it’s also a useful reminder: the best AI helpers often start as small experiments that grew into habits.

Language tooling, parsers, and compilers — the joy of small, solid things

A surprising number of posts dove into compilers, parsers, and language design. There’s a revival of interest in doing the simple, correct thing rather than chasing complexity.

Paul Tarvydas argued that compilers are simple if you lean on text-to-text transpilation and basic concepts. The post points at Ron Cain’s Small-C and similar examples. His tone is kind of like someone saying: you don’t need a spaceship; a good bicycle gets you places.

On parsing, Tomasz Gągor wrote a practical guide to building a parser in Go. It’s hands-on: lexer, parser, matcher, handling left recursion. If you like code that smells like freshly mown lawn, this will scratch that itch. Two shorter pieces from Abandonculture revisited scannerless parsing, and they do the kind of subtle clarifying that saves future headaches — the kind of “oh right” posts you print and tape above your monitor.
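The left-recursion point deserves a picture. A grammar rule like `expr -> expr '-' num | num` sends a naive recursive-descent parser into infinite recursion, because parsing `expr` begins by parsing `expr`. The standard fix is to rewrite the rule as a loop. This is not Gągor’s code (his guide is in Go); it’s the same idea in a minimal Python sketch:

```python
# Left-recursive grammar:  expr -> expr '-' NUM | NUM
# Rewritten iteratively:   expr -> NUM ('-' NUM)*
# The loop also gives the correct left associativity for free.

def tokenize(src: str) -> list[str]:
    """Crude lexer: split numbers and '-' into tokens."""
    return src.replace("-", " - ").split()

def parse_expr(tokens: list[str]) -> int:
    """Parse and evaluate left-associative subtraction."""
    pos = 0
    value = int(tokens[pos]); pos += 1
    while pos < len(tokens) and tokens[pos] == "-":
        pos += 1                             # consume '-'
        value -= int(tokens[pos]); pos += 1  # fold in the next number
    return value

# Associativity matters: 10 - 3 - 2 must be (10 - 3) - 2 = 5, not 10 - (3 - 2) = 9.
print(parse_expr(tokenize("10 - 3 - 2")))  # prints 5
```

The same rewrite generalizes to any left-recursive rule: parse the non-recursive base once, then loop on the operator.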

There was also a neat devlog from Leon Mika about building a keyframe animation API in Go for Ebitengine. It reads like a tinkerer’s notebook. He explains the Var type, tracks, timelines. It’s very practical, and it hints at future polish work. If you love building small DSLs, this is the one to click.
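To give the track/timeline idea some shape: Mika’s library is a Go API for Ebitengine, but the core of any keyframe system is the same — a track is a time-sorted list of keyframes, and sampling interpolates between the two neighbors of the query time. A toy sketch in Python (the class and method names here are invented, not his API):

```python
import bisect

# Toy keyframe track: sample linearly between (time, value) keyframes.
# Names are invented for illustration; Mika's actual API is Go/Ebitengine.

class Track:
    def __init__(self, keyframes: list[tuple[float, float]]):
        self.keys = sorted(keyframes)        # keep keyframes time-ordered

    def sample(self, t: float) -> float:
        times = [k[0] for k in self.keys]
        if t <= times[0]:                    # clamp before the first key
            return self.keys[0][1]
        if t >= times[-1]:                   # clamp after the last key
            return self.keys[-1][1]
        i = bisect.bisect_right(times, t)    # first key strictly after t
        (t0, v0), (t1, v1) = self.keys[i - 1], self.keys[i]
        frac = (t - t0) / (t1 - t0)          # linear interpolation
        return v0 + frac * (v1 - v0)

# An x-position track: hold at 0 until t=1, then slide to 100 by t=2.
x = Track([(0.0, 0.0), (1.0, 0.0), (2.0, 100.0)])
print(x.sample(1.5))  # halfway through the slide: 50.0
```

A timeline is then just several such tracks sampled with the same clock, which is roughly the layering the devlog describes.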

Memory, linking, and the low-level grumble

Several authors were deeply in the weeds about binaries, relocation, PIC, and memory safety. This batch felt like people arguing about the exact seasoning in a stew. Important, fiddly, and not glamorous.

Farid Zakaria lamented huge binaries and relocation overflows. He walked through CALL instruction limits and ELF layout, and suggested GOT sharding and other mitigations. It’s the kind of thing that matters for servers with lots of code.
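The overflow he describes has simple arithmetic behind it: the x86-64 near CALL encodes a signed 32-bit relative displacement, so caller and callee must sit within roughly ±2 GiB of each other, and once the text segment outgrows that, the relocation no longer fits. A sketch of the fits-or-overflows check:

```python
# The x86-64 near CALL (E8) carries a signed 32-bit displacement,
# measured from the end of the 5-byte instruction. Targets farther
# than ±2 GiB cause exactly the relocation overflows the post laments.

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def call_rel32_fits(call_site: int, target: int) -> bool:
    """Can a direct near CALL at call_site reach target?"""
    disp = target - (call_site + 5)   # displacement is relative to next insn
    return INT32_MIN <= disp <= INT32_MAX

print(call_rel32_fits(0x400000, 0x400000 + 2**30))  # 1 GiB away: True
print(call_rel32_fits(0x400000, 0x400000 + 2**32))  # 4 GiB away: False
```

Mitigations like GOT sharding are ways of keeping jump targets reachable without demanding that all code fit inside one ±2 GiB window.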

On a related note, Systems explained why Position Independent Code matters — shared code pages save RAM on production clusters. It’s a pragmatic economics lesson: small technical choices add up to big cost differences.
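The economics tally easily. With position-independent code, every process maps the same physical code pages; without sharing, each process pays for its own copy. A back-of-envelope sketch (the numbers are invented for illustration):

```python
# Back-of-envelope RAM cost of shared vs. per-process code pages.
# Numbers are invented; the shape of the saving is the point.

code_mb = 50        # size of a library's code (.text)
processes = 200     # worker processes on one production host

private_cost = code_mb * processes  # each process carries its own copy
shared_cost = code_mb               # PIC: one set of physical pages for all

print(f"private: {private_cost} MB, shared: {shared_cost} MB, "
      f"saved: {private_cost - shared_cost} MB")
```

At these made-up numbers that’s nearly 10 GB of RAM back per host, which is the “small technical choices add up” lesson in one line of arithmetic.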

Alex Kladov attacked sloppy definitions of memory safety. The post is a careful correction: memory safety is about the implementation, not just the source language. If you like formal thinking about practical problems, that one has meat on the bones. Paired with the relocation talk, it felt like watching someone check the oil and then test-drive the car.

Practical projects and portability

A few posts focused on making things portable and practical. Hugo Daniel pulled a quirky trick with PNGine: bundling WebGPU shaders inside PNGs for portability. It’s a neat hack — like tucking a surprise recipe into a folded napkin. He had demo trouble at first, then redesigned the format and pushed something that actually runs across platforms.

katafrakt riffed on creating portable mruby binaries using Cosmopolitan Libc. If you’ve ever wanted a single executable that works on many OSes, this is a clear path. It’s reassuring to read someone say, yes, it can be done, and here’s how.

Jussi Pakkanen swapped Cairo/Pango for CapyPDF in Chapterizer and cut page render time drastically. That’s the sort of upgrade that makes you grin and think: oh, this was sitting there, waiting to be swapped for something lighter.

Editors, plugins, and the “how I actually work” stories

People enjoy talking about their setups. Some curious nuggets:

  • James Doyle explained using AI in Sublime Text. It’s pragmatic: he likes speed and small plugins and shows how an AI add-on fits into a web workflow.
  • Hari described Helix and its select-then-edit model. It’s short and enthusiastic; the kind of note that makes you try a keybinding for ten minutes.

There’s also a clear theme about keeping editor workflows fast and not overloading them with background work. That ties back to Kent Beck’s eyeblink idea. Fast editor feedback wins.

Community, knowledge commons, and how we lose or keep what matters

A darker thread popped up in Jamie Lord’s post on Stack Overflow’s decline. The gist: strict moderation, search-engine shifts, AI replacing quick lookups — the site lost the warmth that kept it alive. It reads like a eulogy but with sharp notes: permissionless Q&A created a knowledge commons, and once the community dries up, the commons collapses.

Josh Beckman brought webmentions back to his site. It’s a small reclamation project — a tiny, hopeful nudge toward decentralized conversation. It’s like fixing a local bulletin board when the fancy city posters vanish.

Why folks love languages, and why they don’t

Daniel wrote a heartfelt list of reasons to love programming languages. It’s not technical nitty-gritty; it’s affection — the cultural, social, mathematical reasons people fall in love with PL work. Read it if you need a pick-me-up.

In contrast, Code Style & Taste punched holes in SOLID principles. The piece is one of those mildly contrarian essays that makes you re-evaluate received wisdom. The author likes SRP and ISP, but disputes Open-Closed and Liskov. It’s pragmatic grumbling: not everything in the textbook fits messy projects.

And Artyom Bologov wrote a personal reflection about Lisp, life, and loneliness. It’s raw. The tech is interleaved with emotion: languages aren’t just tools, they’re companions, and sometimes they’re all you’ve got. That post lingered after the more technical pieces faded.

Embedded, Rust, and small hardware joy

The Embedded Rust newsletter from Omar rolled out an issue with project highlights, jobs, and resources. The newsletter feels like a compass for people writing for tiny boards. Alongside it, bitbanksoftware compared C++ and Python for LCDs on Linux and argued that C++ can be more efficient for low-level hardware. Both pieces felt like a weekend workshop: bring solder, bring patience.

Tiny, precise problems that matter a lot

There were also posts that drilled into very precise development pains. Peeter Joot fixed incorrect debug location info in his toy language. Bert Hubert dug into std::basic_string quirks on FreeBSD/OpenBSD. These are the posts you skim until a problem bites you, then you devour them. Like keeping a first-aid kit in your backpack — you hope you never need it, but you’re glad it’s there.

Predictions, retrospectives, and the odd personal note

A couple of folks looked back and guessed forward. Matt Hall checked his 2025 predictions and admitted which ones flopped. It’s refreshingly blunt; nobody is right all the time.

Z. D. Smith described building a new shorthand using AI. That one was playful. It’s the kind of project that sits at the border of linguistic curiosity and engineering. I kept thinking of old-time stenographers in a newsroom, tap-tap-tapping away.

Where the arguments and tensions show up

If you want the drama condensed: it’s between speed vs completeness, low-level correctness vs high-level ease, and AI convenience vs human oversight. People on different sides of those tensions are polite but firm. Some say, get closer to metal to fix systemic slowness. Some say, build better abstractions so users don’t suffer. Some say, use AI but structure interactions carefully.

There’s also an undercurrent about community: the internet loses value when we lose contributors. Stack Overflow’s decline is a warning. Decentralized gestures like webmentions feel small, but they’re attempts to keep conversation threaded.

Small analogies I kept coming back to

  • The editor feedback debate is like cooking: do you want a quick snack while you think, or a slow-cooked roast for a single dinner? Quick snacks keep people going. Slow roasts are great sometimes, but they don’t fit every use.

  • LLMs are like helpful interns. They can fetch things quickly, sometimes with flair, sometimes with glue on their fingers. You still need a supervisor.

  • Low-level fixes are like tightening the spokes on a bicycle wheel. Nobody notices until the wheel buckles, and then everybody remembers the person who kept it tuned.

Small gripes and curiosities

There were a few nitpicky things that stuck. Some posts assume readers already love parsing. Some authors reuse jargon without slow-lane explanations. But the rough edges are part of the charm; the community prefers to write for peers rather than for polished onboarding.

Also, I kept noticing a tension about tooling: people love fast, minimal editors (Helix, Sublime) and grumble about heavy IDEs that try to be everything. It’s a recurring cultural split: do you want a Swiss Army knife or a precise chisel?

Where to look next

If you’re skimming and want quick hits, the names above are your signposts. Many of the other posts are small, thoughtful lamps along the path.

There’s a pleasing mix this week: some people tighten bolts, some polish knobs, some sketch new controls. It feels like a neighborhood where carpenters, bakers, and tinkerers talk over the fence. If one thing sticks: people are still solving old problems in practical ways. The shiny stuff catches attention, but the work that keeps servers humming and editors snappy is still happening.

If you want a deeper dive into any of the booths I mentioned, go click the author links — there’s heavy detail behind most of them, and a few will make you nod and think, oh yeah, that’s exactly the slightly awkward problem I have. I’d say that’s the best part: there’s a post for the small griefs of programming, and a few that feel like a friendly hand when you’re stuck.