ChatGPT: Weekly Summary (December 22-28, 2025)

Key trends, opinions and insights from personal blogs

I would describe this week in ChatGPT writing as a bit like a busy high street on a rainy Thursday. Lots of different shops. Some selling shiny new gadgets. Some arguing with the person in the next doorway. A couple of stalls doing quiet, useful things. You wander along, pick up a pamphlet here and there, overhear a snatch of heated debate, and go away with your pockets slightly heavier with questions.

The big move: ChatGPT as a platform, not just a chat

Several posts this week lean into the same idea: ChatGPT isn’t just a chatbot anymore. It’s trying to become something like a mini operating system for the web. That sounds grand because, well, it is. But the way people write about it shows a mix of excitement and scepticism.

John Hwang calls it a "concierge internet". To me, that phrase lands — it feels like a doorman who knows everyone’s name and tries to sort your life out, which is handy until he starts charging for coat checks you didn't ask for. He points to the app SDK and the new dynamic UI features. Those promise that you’ll do more inside ChatGPT itself instead of jumping to Google or a dozen other sites. It’s neat. It’s also the sort of thing that can quietly push the internet’s traffic patterns around, like a new roundabout in town.

Vivek Haldar writes from the developer side. He actually tried to build an app for the new platform and came away feeling like he’d been given a shopfront without clear rules or tills. He mentions a staggering number: 800 million weekly active users. That’s a lot of potential customers. But there’s not much in the way of clear tooling, specs, or a simple way to get paid. It’s a familiar sight for anyone who remembers the early days of the iPhone App Store — all the opportunity, plus a lot of messy guesswork.

And then there's MBI Deep Dives, who frames everything as a competition of ecosystems. He contrasts OpenAI’s push with Google's long-held advantage — Gmail, Maps, Android, Chrome. His point is practical: having people everywhere across products makes it easier to monetize AI. It’s like having a chain of coffee shops — easier to sell pastries if you already have customers buying coffee across town. He also flags how user agency matters: the person using the AI still needs control, or else the whole experience becomes a bit hollow.

So who’s right? They all are, in part. The platform idea is real. The tooling and monetization paths are messy. And the ecosystem advantage matters. It’s a neat triangle.

Developers: building in public with duct tape and hope

Several writers take the developer’s view. There’s an immediate, practical concern: how do you build something that will survive model updates, token limits, and shifting APIs?

Will Larson digs into the technical sides of agents and context windows. He talks about "context window compaction" — basically, how to squeeze more relevant stuff into the limited memory the model has. It’s like trying to pack a holiday bag for a family of five into one carry-on. He describes the naive approach people take at first, and how a few tricks — tracking token usage, treating files as virtual — make agents hold on to the important bits longer. If you’re building automation, this is the sort of thing you want to bookmark and come back to.
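To make the idea concrete, here is a toy sketch of compaction, not Larson's actual code: token counts are approximated by word counts, and "compaction" simply folds the oldest messages into a one-line summary stub once a budget is hit. Real agents would use a proper tokenizer and a model-generated summary.

```python
# Toy context-window compaction: keep the newest messages whole,
# fold older ones into a summary placeholder once over budget.

def count_tokens(text: str) -> int:
    """Crude token estimate: one token per whitespace-separated word."""
    return len(text.split())

def compact(history: list[str], budget: int) -> list[str]:
    """Walk the history newest-first, keeping messages until the token
    budget runs out; anything older collapses into one summary line."""
    kept, used = [], 0
    for msg in reversed(history):          # newest first
        cost = count_tokens(msg)
        if used + cost > budget:
            kept.append(f"[summary of {len(history) - len(kept)} older messages]")
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["first long setup message here", "a follow up", "the latest question"]
print(compact(history, budget=8))
# → ['[summary of 1 older messages]', 'a follow up', 'the latest question']
```

The shape matters more than the details: track usage as you go, and prefer dropping or summarising old context over blindly truncating the newest turn.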

Peter Steinberger writes about "vibe coding" and shipping at inference-speed. The tone is a bit giddy. He’s seen coding agents (GPT-5, Codex) change how you work: fewer trips to the docs, more iterative prompts, and more of a back-and-forth with the model while you assemble features. It’s like having a junior developer who can churn out code quickly, but who occasionally makes weird choices and needs you to check the work. That means faster shipping in many cases, but also a new kind of discipline.
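That back-and-forth has a recognisable shape, sketched below under loud assumptions: `call_model` is a stub standing in for whatever coding agent you use (not a real API), and `checks` is whatever verification you run on the generated code. The point is the loop, not the stub.

```python
# A minimal sketch of the draft-check-refine loop: ask the model for code,
# run your checks, feed failures back, repeat until green or out of rounds.

def call_model(prompt: str) -> str:
    # Stub: a real coding-agent call would go here.
    return "def add(a, b):\n    return a + b"

def iterate(task: str, checks, max_rounds: int = 3) -> str:
    """Draft code, verify it, and re-prompt with the failing check names."""
    code = call_model(task)
    for _ in range(max_rounds):
        failures = [name for name, ok in checks(code) if not ok]
        if not failures:
            return code            # checks pass: ship it
        code = call_model(task + "\nFix these failing checks: " + ", ".join(failures))
    return code                    # out of rounds: back to the human
```

The discipline Steinberger hints at lives in `checks`: the model drafts fast, but you still own the definition of "done".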

Then there’s the small, awkward stuff — the things we all bump into. Vivek Haldar again, on the storefront side: there’s demand, but the store’s aisles aren’t labelled. If you’re a developer, you want a better map, payment rails, and consistent APIs. Without that, you’re basically selling from a kiosk in a bazaar. Might be great, might be a headache.

The user experience stew: helpful, confusing, sometimes creepy

A bunch of posts circle around the same friction: ChatGPT can be incredibly helpful, and also a bit off-putting.

daveverse complains about verbosity and misinterpretation. He’s annoyed by the over-explaining and the times the model gets the point wrong. I’d say that resonates with a lot of people. You ask for a quick answer and get a sermon. You want clarity and you get too many guesses. To me, it feels like calling on your chatty neighbour, who wants to show you their holiday photos when you just want to borrow a hammer.

The Font of Dubious Wisdom goes the other way and gets poetic. The post frames the model as an "inhuman thing" that’s mimicking us. It’s unsettling. There’s a folklore angle in the writing — like those old stories where you bargain with a stranger at the crossroads and someone loses a piece of themselves. Some of this is dramatic, sure, but the unease is real. People are noticing a weird, uncanny valley in language, and it sometimes feels like madness. Or at least like a houseplant that blinks back at you.

Mark McNeilly lays out a mixed bag: year-end features, AI companions, job apps being influenced by AI, and the potential for emotional dependency. He’s worried about people forming attachments to chatbots, or letting them manage job applications too much. That’s not far-fetched. We’ve already seen folks get oddly fond of their phone assistants. It’s a gentle warning: the tech is useful, but don’t let it do all the thinking for you.

And then there’s the everyday helpfulness. Jeremy Cherfas shares a story about resetting a password on a Raspberry Pi app. Email recovery failed. ChatGPT helped him explore options. In the end, he used SQL to change the password directly in the database. The post isn’t flashy, but it’s useful. People love that kind of thing. It’s like finding the right screw when you’re rebuilding a chair.
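For anyone curious what that kind of rescue looks like, here is a hedged sketch rather than Cherfas's actual fix: the table name `users`, the column `password_hash`, and the SHA-256 hashing are all assumptions; check your app's real schema and hashing scheme before touching the database.

```python
# Hypothetical direct-in-database password reset for a small SQLite-backed
# app, for when email recovery is broken. Back up the database file first.

import hashlib
import sqlite3

def reset_password(db_path: str, username: str, new_password: str) -> int:
    """Overwrite a user's stored hash; returns the number of rows changed."""
    # Assumption: the app stores a plain SHA-256 hex digest. Many apps
    # use bcrypt or similar instead; match whatever the app expects.
    digest = hashlib.sha256(new_password.encode()).hexdigest()
    with sqlite3.connect(db_path) as conn:   # commits on clean exit
        cur = conn.execute(
            "UPDATE users SET password_hash = ? WHERE username = ?",
            (digest, username),
        )
    return cur.rowcount
```

A return value of 0 means the username didn't match anything, which is itself useful diagnosis before you start changing rows.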

Small wins, odd failures: grammar checkers, maps, and prompts

Not every post is about grand strategy. Some are little, practical notes that catch the eye because they’re the things we actually use daily.

Homo Ludditus is mad at LanguageTool. He found that different LanguageTool clients behave differently by design. Different engines, different parsing rules, different context windows. The result: inconsistency. He went through a conversation with ChatGPT about comma rules and grammar minutiae and pulled back to a humble truth — no single client is perfect; each has trade-offs. That’s a small but important lesson for the kind of person who likes rules and hates surprises.

Peter Coles used ChatGPT to make a map of UK counties. It’s pleasantly ordinary and oddly charming. A lot of these posts that show the model doing simple educational tasks remind me of a kid showing you their drawing: it’s not going to win prizes, but it shows potential. Maps, timelines, simple explainers — these are the bread-and-butter uses for many people.

And then there’s Brad Barrish who shares a practical prompt for monitoring travel data — flights, weather, delays. It’s the sort of thing you slot into your toolbox if you travel a lot. Like a well-packed glove box: not exciting until you need it.

Reliability and the uneven behaviour problem

A theme kept bubbling up: inconsistent behaviour across contexts. Sometimes the model is eerily precise. Sometimes it’s distractingly wrong. The LanguageTool piece is one example. The developer-focused posts show another: models change, APIs change, and what worked yesterday might not work today.

Will Larson and Peter Steinberger both hint at another layer of this: understanding model quirks becomes part of the job. You don’t just write code; you learn the model’s temperament. That’s a funny phrase but apt. It’s like learning whether your oven runs hot or cold. Once you know its habits, you can bake better.

There’s also the problem of over-verbosity mentioned by daveverse. Too many words sometimes hide the wrong answer. It’s like getting a detailed weather forecast and finding out it’s sunny when you look outside. That mismatch becomes louder when the model is embedded in more things — apps, search, and assistants.

AI in daily life: companions, recruitment, and regulation

Mark McNeilly collected a few threads that point to a social shift. AI is moving into companionship roles. People are building relationships — or habits — around chatbots. That can be comforting. It can be dangerous. He also flags recruitment: AI tools are shaping how job applications are written and screened. That affects fairness, bias, and the job market in real ways.

Around regulation, the tone is cautious. Different countries are trying different things. Some want to clamp down, others are experimenting. This feels like the wild west in some places and like a neighbourhood watch meeting in others. It’s noisy and uneven.

Strategy and money: whose ecosystem wins?

MBI Deep Dives frames the competitive picture plainly. Google has an ecosystem. OpenAI has innovation and public mindshare. Chinese labs have different market dynamics and funding sources. The real question is less about which model is best and more about which company can tie the model into everyday products and payments. The one who does that well wins a lot of small bets across many people.

This is where Hwang’s concierge idea and Vivek’s app-store parallel meet: if ChatGPT becomes the place you start doing things — booking, shopping, managing — then control over that interface becomes a huge economic lever. It’s like owning the front door to a block of shops. That’s seductive and it makes people nervous, and rightfully so.

Practical engineering notes you might want to steal

If you are fiddling with models, a few concrete things came up in the posts that are worth noting:

  • Track token usage and compact context. Don’t let your agent dawdle on old chatter. (See Will Larson.)
  • Expect inconsistent tooling early on. If you’re building for a new platform, prepare for change. (See Vivek Haldar.)
  • Use the models iteratively for code, but verify outputs. They help you draft and scaffold. (See Peter Steinberger.)
  • For small, one-off tasks — password reset hacks, travel monitoring — a good prompt can save you time. (See Jeremy Cherfas and Brad Barrish.)

These are simple takeaways. They’re not flashy. They are, however, the kind of things you’ll be grateful you knew when you hit a wall.

The tone of the week: weary, practical, curious

There’s a pattern to how people are writing about ChatGPT. It’s less evangelism than a careful, slightly tired curiosity. The posts split into a few camps: strategists watching the platform play, developers wrestling with tooling, and everyday users weighing helpfulness against unease.

These groups don’t always agree. But they’re talking about the same thing from different angles. That’s useful. It’s like a kitchen table conversation where one person talks about the heating and another one complains about the curtains. Both matter.

A few anecdotes you’ll enjoy poking at

  • The Raspberry Pi password rescue. Not glamorous. Practical. If you like root cause hunting, Jeremy Cherfas tells it straight. He tried email recovery. He tried suggestions. In the end, SQL did the trick.
  • The map of UK counties. Peter Coles used ChatGPT to make a teaching map. It’s not rocket science, but it’s the sort of small delight that shows the tech in a friendly light.
  • LanguageTool’s split personality. Homo Ludditus shows how a single brand can be many different beasts depending on implementation. It’s a reminder that sometimes the weakest link isn’t the model but the way it’s connected.

Each of these is a tiny window into how people actually use AI. They’re practical, and they’re human.

Little signals that hint at bigger changes

If you squint, there are a few bigger notes hiding in the posts:

  • The rise of app-like experiences in chat. That changes user flow. It nudges search and browsing toward more conversational, task-oriented interactions.
  • The business question: who ties AI into payments, appointments, and daily habits? Whoever does that gets sticky users.
  • The engineering reality: context is limited, models will keep changing, and the work of productizing AI is often grunt work — compaction, edge cases, monitoring.
  • The social side: companions, recruitment tools, and regulatory responses will shape adoption in a patchy way.

These aren’t sudden revelations. They’re slow waves. But they’re real.

A few petty complaints and curiosities

A human can’t help but nitpick. A couple of things made me sigh this week:

  • The verbosity problem is still a thing. Ask for an answer and get a novella. This is small, but it saps time.
  • Tooling lag. The app store is there in principle, but the change management feels half-baked. If you’ve lived through early app platforms, you’ve seen this movie before.
  • The uncanny mimicry. The Font of Dubious Wisdom’s piece is dramatic, and maybe it pushes a bit, but there’s a kernel of truth. Sometimes the model is a little too tidy in its mimicry of human quirks.

And yet, I keep circling back to the useful things — the prompts that save time, the coding that becomes faster, the map that helps students learn. It’s not all theatrical doom or towering triumph.

Where to look next

If a thread from this week grabbed you, follow the author. Their posts are small gateways to deeper thinking:

  • For platform strategy and the concierge idea, see John Hwang.
  • For the messy joy of building and shipping on a new app surface, check Vivek Haldar.
  • For practical engineering on agents and compaction, read Will Larson.
  • For code-first tales of inference-speed shipping, see Peter Steinberger.
  • For the human side — the unease and the folklore — peek at The Font of Dubious Wisdom.

There’s more in the smaller posts. The map, the password fix, the grammar grumbles — these are the little stories that make the big picture feel lived in.

A week’s worth of posts leaves me with this image: a town fair where someone is selling miracle widgets, someone else runs a clever stall with real craft, and a few people are arguing whether the new band on stage is the future of music or just loud. You go home with a pamphlet, a half-eaten toffee apple, and a list of curiosities to try out. Want the recipe for one of those curiosities? The authors linked above have the details — the small print and the step-by-step. Read them, poke at the code, and decide how much of your own life you want to put into ChatGPT’s hands.