ChatGPT: Weekly Summary (January 26 - February 01, 2026)
Key trends, opinions and insights from personal blogs
I’ve been poking around a bunch of blog posts this week about ChatGPT. There’s a lot packed into a short stretch of time. Some posts are excited. Some worry. Some do a bit of both. I would describe them as tilt-a-whirl reports — quick spins, some thrills, a few squeaks, and a few people asking if the ride will keep running tomorrow.
New powers in the shell: containers, bash, and pip/npm
The week kicked off with a very practical update. Simon Willison wrote about ChatGPT Containers now running bash, installing pip and npm packages, and downloading files. That line makes the hair on the back of a developer’s neck stand up a bit. To me, it feels like giving the assistant a little toolbox and saying, “Go do the DIY.” It’s useful, obvious, and a bit worrying at the same time.
I’d say this update matters most for people who actually write code with ChatGPT. Previously, the assistant could suggest commands or paste snippets. Now it can execute them inside a controlled environment. That’s like handing someone a saucepan instead of just the recipe. You can see why some folks clap. But there’s a twist. Simon notes that the documentation hasn’t kept pace. So it’s a bit like buying a new kitchen gadget with no instructions and a sticker saying, “Use at your own risk.”
The missing docs raise the usual questions. What is sandboxed? How much network access do these containers have? Can bad packages be installed? Who cleans up after the run? The blog doesn’t pretend to answer everything. Instead, it prompts more testing and caution. And that’s valuable. It’s the kind of thing that makes you nod and then go check the fine print — which is exactly what you should do.
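Those questions are at least partly answerable from inside a session. As a rough sketch (nothing here is specific to OpenAI's container setup; these are ordinary stdlib probes you could paste into any Python environment), this is the sort of fine-print check worth running yourself:

```python
# Hypothetical sandbox probes: none of these names come from OpenAI docs.
# They just report what a generic sandboxed environment actually allows.
import shutil
import socket
import subprocess


def can_run_shell() -> bool:
    """Return True if a bash subprocess can be spawned at all."""
    try:
        out = subprocess.run(["bash", "-c", "echo ok"],
                             capture_output=True, text=True, timeout=5)
        return out.stdout.strip() == "ok"
    except (OSError, subprocess.TimeoutExpired):
        return False


def has_network(host: str = "pypi.org", port: int = 443) -> bool:
    """Return True if an outbound TCP connection succeeds (pip needs this)."""
    try:
        with socket.create_connection((host, port), timeout=3):
            return True
    except OSError:
        return False


def package_managers_on_path() -> dict:
    """Report which package-manager binaries are visible on PATH."""
    return {tool: shutil.which(tool) is not None for tool in ("pip", "npm")}
```

Run in a fresh container, three probes like these tell you in seconds whether the shell, the network, and the package managers are really live, which is more reliable than waiting for the docs to catch up.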
If you’re into building things, the update is a big nudge to try out workflows that were awkward before. If you’re wary of surprises, it’s a nudge to ask OpenAI for clearer notes. Either way, it opens possibilities. Like when your neighbour gets a fancy new drill and suddenly you’re debating whether to fix that leaky shelf yourself.
Teaching, exams, and a messy middle ground
Peter Coles wrote a quieter, steadier piece about the semester ahead. He’s sorting labs, deadlines, and grading while trying to decide where Generative AI sits in the classroom. I would describe his tone as practical and a little resigned — like a teacher with two cups of tea, one for patience and one for panic.
He’s not waving a red flag or throwing out the textbooks. He argues for teaching students how to use AI well. Not just ‘use it’, but verify outputs and understand limits. That’s something I’d agree with. It’s like teaching someone to drive and not just giving them the keys. You teach them how the engine coughs when it’s thirsty and how the dashboard light means stop. Students need the map and the compass, not just a fast car.
Peter also worries about logistics. Labs, projects, and deadlines are built around certain assumptions about who writes what and how fast. AI blurs those lines. You can see his unease: if students hand in AI-shaped work, do you grade the student or the model? He suggests the solution is teaching verification and critical thinking alongside AI. That makes sense. But it also sounds like more work for already overbooked staff. Which is, well, life.
There’s a small cultural nod in Peter’s post — the worry about assessment cheating is as old as exams. Yet the tools have changed. It’s like discovering kids are now using calculator apps instead of slide rules. You adapt or you fall behind. He lays out the problem without panicking. Read his piece if you’re teaching, or if you care about how schools change.
The ads, the cash, and the marketplace hustle
Money shows up in two flavours this week. One is direct: ads and affiliate commerce inside AI platforms. Tanay Jaipuria looks at monetization in a way that’s not glossy. He talks about intent-based ads, affiliate links, and how chat interfaces might evolve into agent experiences that quietly nudge purchases. The thought is: if a user asks a question, that context is gold for targeted offers.
I’d say Tanay’s post reads like someone watching shopfronts on a busy street and wondering which shops will put out sandwiches or slot machines. There’s logic to it. If AI becomes the place people ask what jacket to buy, advertisers will follow. But there’s also friction. Users don’t want their chat to feel like a market stall. They want help. There’s a balance. Too much push and people bail. Too little monetization and platforms look to other ways to make money.
Tanay points to early experiments and possible revenue streams for companies building chat layers. The idea of ‘intent-based ads’ feels inevitable. It also feels a bit like being interrupted mid-conversation by a salesperson who’s been reading your body language the whole time. Most users’ patience won’t stretch far for that.
Linked to that is Neo Kim with a more technical look at the ChatGPT Apps marketplace. He explains the nuts and bolts: apps as in-chat widgets, servers that handle the heavy lifting, and a new orchestration model compared to older “Plugins.” The detail gives a peek at how third parties can plug in. It’s interesting to see the architecture described so plainly. It’s not magic. It’s plumbing.
Neo’s explanation makes the marketplace feel usable. It’s the difference between hearing about a new tram line and seeing where the stops are. You begin to imagine not just what can be sold, but what can be built. Restaurant bookings with interactive menus. Flight search that actually lets you choose seats. The post is a practical tour of that space. Worth a read if you’re thinking about building an app or wondering what those shopping nudges could look like.
Are the numbers real? OpenAI’s scaling drama
Three posts this week worry about money in a heavier, more existential way. Will Lockett looked at OpenAI’s revenue forecasts and user numbers. He thinks the $20 billion forecast for 2025 looks optimistic; the math doesn’t line up. So he pulls the thread and questions conversion rates and the impact of deals like the one with Microsoft. It reads like someone checking the kitchen bills after a party and finding the receipts don’t match the menu.
Then there’s a louder take from Doc Searls Weblog, which argues OpenAI could be close to bankruptcy. That’s a bolder claim. The post points to low conversion rates and big operating losses. It contrasts OpenAI with Google, which, the author says, has more diversified income and better footing. This is the kind of shouty paragraph that makes business podcasts light up.
I’d say these posts share a theme. They ask whether the current business model is sustainable. Are users converting to paid plans? How much does Microsoft keep? How many free users are there and what do they cost? The answers are murky. And uncertainty spurs a lot of speculation. Which is fair. When big money and opaque deals are involved, speculation is the favourite pastime.
There’s also a psychological angle. When businesses project huge numbers — $20 billion, $10 billion — readers want evidence. The posts examine the evidence and find it thin. That doesn’t mean doom is certain. But it does mean investors, partners, and users should be paying attention. Like driving a foggy motorway at night: you slow down and don’t assume the road keeps going straight.
Microsoft, the dance floor, and the clumsy two-step
MBI Deep Dives had a post about Microsoft’s stock since ChatGPT launched. It’s a useful reminder that the corporate dance around AI isn’t a solo act. Microsoft is a partner, a customer, and a shareholder all at once. The post notes that despite expectations, Microsoft’s stock underperformed Alphabet’s. Revenue estimates ticked up, but the share price didn’t always follow.
This feels familiar. Big tech moves are often messy. Partnerships can be deep but not always translate immediately into stock growth. The post reads like someone who’s been on the trading floor and is now back home, making sense of charts with a cup of instant coffee. It’s not fatalistic. It’s observational. The sense is: Microsoft has spent big and is committed, but markets are fickle and the payoff is spread over years.
Relating this back to the earlier posts, Microsoft’s fortunes are tangled up with OpenAI’s. If OpenAI faces financial pressure, Microsoft’s strategy is affected. If Microsoft shifts, it affects OpenAI’s runway. It’s a bit like two people sharing a flat: who pays the electric bill matters. Forgive the domestic analogy. It’s apt.
Themes that keep popping up
A few things kept showing up in different guises. One is capability: ChatGPT got more powerful, with containers and Apps. Another is business model: ads, affiliate commerce, and big revenue forecasts. A third is risk: financial fragility, security concerns, and the classroom implications. These themes overlap and nudge each other.
Capability feeds monetization. More capability makes ChatGPT stickier. If people can do things inside the chat, then shops and services want to be there. That’s Tanay and Neo’s territory. But more capability also means more attack surface. Running pip, npm, or shell commands invites security questions. That’s Simon’s cautionary note. And if things go wrong — if a costly exploit happens or a partner pulls out — that hurts the finances. That’s Will and Doc’s worry.
Education sits in the middle. Teachers want power in the classroom, but not chaos. Students want help, but not hand-holding that substitutes for learning. Schools that teach verification will likely come out of this looking sensible. Schools that don’t adapt may find themselves in a bind. Peter’s post is the human bridge between the tech and the money. He’s not obsessed with market caps. He’s worried about deadlines, grading, and learning outcomes.
There’s also an emergent theme of transparency. Multiple posts asked for clearer docs, clearer numbers, clearer boundaries. Developers want docs. Teachers want policies. Analysts want financial clarity. Users want to know when they’re being sold to. It feels like a chorus asking OpenAI and others to be less mysterious.
The tech under the hood, in plain terms
Neo’s piece on Apps gave the most detail about how the system works. There’s a server side for apps, widgets embedded in chats, and orchestration by the model. Think of it as a marketplace stall where the stallkeeper handles your order but a small robot runs to the kitchen to fetch the sandwich. The robot needs routes, permissions, and a place to put the bread. That’s the tech stack.
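To make the stall-and-robot picture concrete, here is a toy sketch of the moving parts. Every name in it (`ToolCall`, `WidgetPayload`, `handle_show_menu`) is invented for illustration; the real Apps SDK defines its own shapes, so treat this as a diagram drawn in code, not the actual API:

```python
# A toy model of the Apps flow, with made-up names:
# the model's orchestrator emits a call, the app's server does the
# heavy lifting, and the chat renders the returned widget.
from dataclasses import dataclass


@dataclass
class ToolCall:
    """What the orchestrator routes to an app: an action plus parameters."""
    app: str
    action: str
    params: dict


@dataclass
class WidgetPayload:
    """What the app server sends back for rendering as an in-chat widget."""
    widget_type: str
    data: dict


def handle_show_menu(call: ToolCall) -> WidgetPayload:
    """Hypothetical app-server handler: the heavy lifting lives here, not in the chat."""
    if call.action != "show_menu":
        raise ValueError(f"unsupported action: {call.action}")
    # A real app would hit a database or a partner API at this point.
    return WidgetPayload(
        widget_type="menu",
        data={"restaurant": call.params["name"], "items": ["soup", "stew", "pie"]},
    )


# One round trip, end to end: call in, widget out.
demo = handle_show_menu(
    ToolCall(app="bistro", action="show_menu", params={"name": "The Tram Stop"})
)
```

The point of the sketch is the separation of duties: the model only decides *which* stall to send the robot to; the server decides what ends up on the tray.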
Simon’s container piece complements that. If ChatGPT can run a shell, that changes what apps can do. You don’t always need the external server if the model can perform limited actions inside a sandbox. But the sandbox has to be strong. Otherwise it’s like keeping your gold in a paper envelope. Neo and Simon together sketch both the promise and the plumbing.
There’s also an interesting product design point hidden in these posts. When a chat has interactive widgets, users behave differently. They click. They explore. That changes how you design the conversation and how you think about ads. The interface becomes the shop window. Which is exactly what Tanay was saying, but from the human side.
A few nitpicks, and some nagging doubts
The posts aren’t shy about doubts. Documentation lags. Financial transparency is thin. The balance between useful nudges and annoying ads is precarious. One recurring question is how much control users will have. Will there be clear labels when a suggestion is monetized? Will sandbox limits be strict and visible? These are small-sounding but big issues.
Another nagging thing is the “free users” problem. Several posts pointed out that a huge base of free users is expensive to support if conversion rates stay low. It’s the classic software-as-a-service dilemma. Once you offer something compelling for free, it’s hard to shrink it back. So you either build layers of premium features or find other revenue streams like ads. Neither option is painless.
I’d say the writing this week leans cautious. Not paranoid. Just cautious. People are asking practical questions. They want to know if the cake will be baked consistently or if the oven’s thermostat is broken.
The mood: curious, wary, practical
There’s an underlying mood shared by many posts. Excitement about new tools. Worry about money and misalignment. A focus on real-world use. None of the authors is just cheering from the rafters. None is tearing everything down either. It’s kind of like watching a new restaurant open: the décor looks promising, some dishes are very good, a few are undercooked, and the bill is surprisingly complex.
Some posts are the equivalent of staff reviews. Neo’s is the head chef explaining the kitchen. Simon’s is the health inspector noting a missing label. Peter’s is the manager worrying about the rota. Tanay is the owner plotting the ad menu. Will and Doc are the accountants checking the till. And the market post about Microsoft watches how customers respond.
Little digressions that matter
A couple of small asides stuck with me. One is how these tools change everyday habits. If bookings, shopping, and simple tasks shift into chat, that will reshape small businesses and daily routines. It’s comparable to when smartphones shifted life from pocket calculators and paper notes to apps. People share anecdotes about saving time or losing a bit of personal control. That’s the human side.
Another is the question of trust. People still trust their own judgement more than a model’s. But they trust a convenient tool more than a clunky, opaque one. That trust is fragile. It’s like lending a neighbour a ladder. You do it if they’re careful. You don’t if they’ve left the ladder in the rain.
If you want to go read the originals
If you like plumbing and how things talk to each other, start with Neo Kim. If you want the practical developer angle — bash, pip, npm — look at Simon Willison. If you care about education and grading, peek at Peter Coles. For monetization and ad strategy, Tanay Jaipuria lays out the possibilities. If you want to squint at the financials and worry about runway, read Will Lockett and Doc Searls Weblog. For stock and corporate storytelling around Microsoft, MBI Deep Dives has the market notes.
I’d say each piece adds a brushstroke. Stand back and the picture is recognisable but a little blurry. That’s fine. It’s still interesting.
So there you go — a week of thinking out loud about ChatGPT. New toys, messy paperwork, business pitches, and classroom headaches. It’s all happening at once, like a town fair where someone set up a flashy new ride next to the library. People will queue. People will grumble. People will try the candy floss. And some will check the safety sticker twice before they hand over a tenner.
If you want the deeper nuts and bolts, go read the posts. They’ll give you the receipts and the diagrams. This is just the kind of friendly nudge that says: read them — you’ll find things in there that make you stop and think, or laugh, or reach for a kettle. Which, honestly, is half the point of following these conversations.