ChatGPT: Weekly Summary (September 22-28, 2025)
Key trends, opinions and insights from personal blogs
The chatter around ChatGPT this week felt busy in a practical way. Less shiny demo talk, more “how do you actually live with this thing?”, and then a bit of “are we making a mess?” thrown in like hot sauce. I would describe the mood as half product review, half kitchen-table ethics. A few writers sounded pretty pumped about new features. A few sounded wary. And some, honestly, sounded tired.
Monday-ish: the bundle and the blow‑ups
Two very different notes kicked things off. First, there’s the awkward, real-life angle. Simon Willison wrote about a tense marriage moment where someone used ChatGPT as a kind of instant relationship coach, and it turned into fuel on the fire. The title says it loud: “ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners.” To me, it feels like bringing a microwave to a campfire argument. It heats fast, sure, but it doesn’t fix the vibe. The point wasn’t that ChatGPT is evil, but that timing, context, and intent matter. When emotions run hot, a machine’s crisp phrasing lands like a citation from a referee. You know that feeling when a coworker forwards HR policy in the middle of a heated Slack thread? Like that, but at your kitchen table.
On the same day, Tanay Jaipuria zoomed out with usage numbers and a business lens in “ChatGPT and the Great Bundling.” He leaned on stats from a fresh research paper: over 700 million monthly actives, and 73% of usage is non-work. That detail sticks. I’d say it confirms what you see on buses and in coffee lines: people use ChatGPT for little everyday tasks and quick guidance, not just fancy spreadsheets. He frames ChatGPT as an “all-in-one assistant,” bundling a bunch of chores that used to live in separate apps. Need a recipe, a summary, a quick email draft, a packing list? One tab. He also drops a phrase worth keeping in your pocket: “Doing” at consumer scale. Not just “talking about doing.” The model outputs the thing. The unbundling might come later with niche tools, he says, but right now the default is one Swiss Army knife, not a toolkit of specialist screwdrivers.
That “bundle now, unbundle later” idea kept echoing in later posts, like a chorus.
Tuesday: the coder’s toolkit, the pocket shortcuts, the new prompt vibe
A few posts on 9/23 turned into a mini how-to day.
Simon Willison took a spin with GPT-5-Codex, OpenAI’s coding model wired into the Codex CLI and the Responses API. Think of it as the dev-flavored sibling of GPT-5. Priced like GPT-5, but tuned for code. He pushes a “less is more” prompting note—shorter prompts, tighter scopes, faster loops—especially for interactive coding. His examples are very show-don’t-tell: generate an SVG, write detailed alt text, wire up tools. I would describe GPT-5-Codex as the kind of helper that’s great at pair-programming in the shell while you sip cold brew and refactor. Not a full orchestra of agents, not a sprawling plan; more like a good sous-chef who chops fast and hands you the pan.
Meanwhile, the phone folks showed up. The PyCoach explained “These iPhone Features Changed How I use ChatGPT,” with iPhone 17 tricks that feel practical, even boring in a helpful way. Set the Action button to launch ChatGPT instantly. Use Text Replacement for prompt snippets so you don’t peck the same magic words over and over. Slap widgets on the home screen, and pull in Siri for quick access. It’s the kind of setup that saves five seconds here, ten there. And it adds up. Like keeping the olive oil on the counter instead of the cupboard—less fancy than a new stove, but you cook faster.
Then Jeff Su showed up with “ChatGPT-5 Prompting Best Practices,” and the tone shifts from hacks to mindset. He argues some prompts “got worse” with GPT-5 because the model changed under the hood. Two phrases jump out: model consolidation and surgical precision. I’d say it means fewer hidden mini-models and tighter behavior, so it listens differently. He lays out five tips, with a couple of sticky ones. Router nudge phrases guide the model down the right path, like a gentle sign that says “this way to math” or “this way to code.” And the “Perfection Loop” idea—ask for a draft, critique it, refine, repeat—feels like a simple workout routine. Not fancy, just reps. The subtext: GPT-5 isn’t a chatty genie. It’s more like a careful clerk who appreciates a clean ticket.
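The “Perfection Loop” is easy to picture as a plain loop wrapped around any chat call. Here’s a minimal sketch in Python, assuming a `model` callable that takes a prompt string and returns text; the function and prompt wording are mine, not from Jeff Su’s post:

```python
def perfection_loop(model, task, rounds=2):
    """Draft once, then alternate critique and revision passes."""
    draft = model(f"Draft: {task}")
    for _ in range(rounds):
        critique = model(f"Critique this draft and list concrete fixes:\n{draft}")
        draft = model(
            f"Revise the draft to address the critique.\n"
            f"Draft:\n{draft}\nCritique:\n{critique}"
        )
    return draft

# A stub "model" shows the control flow without any API calls;
# in practice you'd plug in a real chat-completion function here.
def echo_model(prompt: str) -> str:
    return prompt.splitlines()[0]

print(perfection_loop(echo_model, "a two-line product update", rounds=1))
```

Swap `echo_model` for a real API call and the reps stay the same: draft, critique, revise, repeat until it stops improving.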
Midweek drift: money storms, power users, and the assistant that plans things for you
By 9/25, the conversation widened to big money and product direction.
In “AI #135: OpenAI Shows Us The Money,” thezvi.wordpress.com rounds up the dollar signs: a $100B investment from Nvidia, a $400B “Stargate” buildout—basically heavy bets on scale and infrastructure. I’d translate that as “they’re not playing indie ball; this is Yankees payroll energy.” The post bounces through LLM utility, what’s next for OpenAI products, AI in personal finance, and the ever-present jobs/economy question. Also there’s a quirky note: a book called “If Anyone Builds It, Everyone Dies” becomes a bestseller, which says something about the current mood—doom sells, but so does capability.
Right next to that, The PyCoach published “How to Go from ChatGPT Beginner to Pro in 2025.” It’s a step-by-step ramp with a prompt formula you can memorize. Task and context are the essentials. Then, for bigger jobs, add exemplars, persona, format, tone. Bit of a belt-and-suspenders approach, but it tracks with the “surgical precision” theme. The guide feels like those laminated kitchen charts you stick to the fridge. You don’t need it forever. But early on, it stops you from over-salting.
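That prompt formula is really just string assembly: task and context always, the extras only when the job is big enough to need them. A hedged sketch, with the field names as my own labels rather than The PyCoach’s exact wording:

```python
def build_prompt(task, context, exemplars=None, persona=None, fmt=None, tone=None):
    """Assemble a prompt from the two essentials plus optional extras."""
    parts = []
    if persona:
        parts.append(f"Act as {persona}.")
    parts.append(f"Task: {task}")
    parts.append(f"Context: {context}")
    if exemplars:
        parts.append("Examples:\n" + "\n".join(f"- {e}" for e in exemplars))
    if fmt:
        parts.append(f"Format: {fmt}")
    if tone:
        parts.append(f"Tone: {tone}")
    return "\n".join(parts)

print(build_prompt(
    "summarize this week's blog posts",
    "audience is busy engineers",
    persona="a technical editor",
    fmt="five bullet points",
))
```

For small asks you stop after the first two arguments; the laminated-chart version of the same idea.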
Then came the very product-shaped news: “ChatGPT Pulse wants to be the proactive AI assistant you didn’t know you needed,” by Brian Fagioli. Pulse is a new Pro feature that reads your activity across ChatGPT and connected accounts to send daily updates, nudges, ideas. Less reactive, more “Hey, I noticed you care about X—here’s Y.” The pitch is time savings by removing the tiny planning chores. Like a calendar that also preps your to-do list based on your texts and the weather. There’s a nod to “advanced reasoning,” and a reassurance that users stay in control. But there’s also a question mark about access—Pro gets it now; Plus might have to wait. Excitement is clear. I’d say it sounded like the moment a smart speaker became a scheduler, not just a timer.
While Pulse was busy calling from the future, Jimmy John grounded it with “The killer use case for GenAI.” He argues personal productivity is the sticky core, and assistants like ChatGPT are quietly replacing search for many moments. Ask, get an answer, move on. Fewer links, more tasks done. He predicts consolidation—fewer big players. It’s not a wild claim; it matches the bundling talk. But it also raises a worry you can feel in the back of your head: if one or two assistants run the show, who gets to set the defaults? Which recipes become the recipes?
Friday: agents or not, we argue about complexity
A different drumbeat showed up from Kevin Kuipers with “How Claude Code Works, And Why It Matters.” He compares Anthropic’s Claude Code with OpenAI’s GPT-5 Codex. The short version: Claude Code tries to stay simple. Single-agent, transparent steps, human in the loop. He frames OpenAI’s direction as more multi-agent and complex—more moving parts, more parallelization. There’s a philosophy clash here. I’d describe it as IKEA vs. built-in cabinetry. The Claude Code vibe is “you see all the parts, you can fix it,” while the Codex vibe is “it’s a powerhouse, but you might need the manual and a subscription.” The post includes usage tips for Claude Code: iterate small, communicate clearly, don’t spawn chaos. It wasn’t dunking on Codex; it was a call to keep the steering wheel visible to the user.
This loops back to Simon Willison’s Codex note about “less is more.” When multiple voices say “tighten the loop, reduce the prompt, iterate,” it sounds like a pattern. Fewer fireworks, more carpentry.
Saturday: what’s writing, what’s feed curation, and is this just ads in new clothes?
On 9/27, the talk went philosophical and then very commercial.
“How Machines Learn to Write,” by Nisheeth Vishnoi, reads like a short tour through the history of language modeling, from ELIZA to transformers, with Turing and Shannon as mile markers. He highlights attention mechanisms and the transformer leap, then pauses on a point some folks rush past: machines can make smooth text, but they don’t have lived experience. No hunger, no grief, no nine-dollar coffee regret. The writing is good at form, less sure at meaning. I’d say he’s not saying “throw it out,” he’s reminding that the signal is still fundamentally human, and sometimes we mistake fluency for truth. Nice piece to brew a cuppa with.
Then—back to Pulse. Manton Reece tried it and liked the feel. His post, “ChatGPT Pulse,” talks about a morning report that lines up with interests, even surfacing things not actively searched for. He speculates it could drive more site referrals than traditional search because the assistant is pushing content into the day rather than waiting to be pulled. He also notes OpenAI acting like a product company, not just a lab. That detail rang a bell: Pulse isn’t a model demo. It’s a sticky feed.
Mike McBride took the other side in “AI Tech "Discovers" Personalized Ads.” He basically says, yeah, this is the old playbook from Meta and Google, but with a chatty wrapper. Personalized updates based on preferences—sounds like ads by another name, and revenue probably moves that way soon. The tone isn’t angry, just skeptical. You can almost see him shrug. If it quacks like an ad platform and walks like an ad platform… His point also sits right next to Tanay Jaipuria’s bundling thesis. Bundles need a business model. Ads are the old faithful.
Sunday: inbox slop, human patience, and picking tools like shoes
“Sunday Paper - Island in the Net,” by Khürt Williams, introduced a term from HBR: “workslop.” That’s AI-bulked content clogging inboxes. Instant but empty. You’ve seen it: mass-polished notes with no soul, all template, zero point. As a new employee, he leans on old-school research rather than Copilot and wonders if the assistant even helps. There’s a twist: he had a positive experience with Claude Sonnet 4, and he finds ChatGPT falling a bit short on the same kind of work. So it’s not anti-AI; it’s anti-slop. To me, it feels like the early email era all over again—because you can send more, people do, and the channels fill up. Then the norms catch up. Or at least we hope they do.
This circles back to that marriage piece. Tools don’t fix intent. If the goal is to send something thoughtful, AI can help draft. If the goal is to send something fast, AI makes that too easy, and suddenly it’s a flood. Same river, different boats.
Threads that kept crossing
Bundling vs. unbundling: Tanay Jaipuria makes the bundling case with current usage. Jimmy John sees deep stickiness in productivity assistants. Manton Reece hints at Pulse becoming a distribution channel. Mike McBride says that looks like ad-era playbooks in a new jacket. The balance sounds like: bundle for convenience, then specialize later for power users and niche tasks. Feels like cable TV, then streaming, then “oh no, now we have ten subscriptions,” then somebody sells a bundle again.
Proactive assistants vs. search: Brian Fagioli is into Pulse as a planner and nudger. Manton Reece notices the feed effect. Jimmy John says assistants already replace some searches. The tension is who picks what you see. When you ask a question, you control the topic. When the assistant sends a morning brief, it controls the topic. Not evil, just different power dynamics.
Simple loops vs. multi-agent complexity: Simon Willison and Jeff Su push short prompts and iterative refinement. Kevin Kuipers defends single-agent clarity with Claude Code and warns about complexity for its own sake. Meanwhile, GPT-5-Codex seems quite capable in the “do a thing now” coding niche without choreographing a chorus line. I’d say the week’s energy favors tight loops.
Money and scale: thezvi.wordpress.com waved big numbers—$100B here, $400B there—and asked what that means for utility, jobs, and product rollouts. If infrastructure is a skyscraper, features like Pulse are the coffee shops at street level. People will judge the building by whether the espresso hits.
Human texture and meaning: Nisheeth Vishnoi taps the brakes on the hype by reminding that text isn’t meaning by default. Khürt Williams adds the “workslop” problem—fast words without weight. Simon Willison shows how misusing an AI voice in an emotional moment hurts. It keeps circling to this: tools write well enough to pass, but we still have to choose when to pass the ball.
The week’s “product feelings” in plain words
If these posts were reviews on a shopping page, here’s how I’d say it, just straight:
ChatGPT as a bundle: I’d say it’s winning the “one app for everything basic” round right now, per Tanay Jaipuria. The 73% non-work stat explains why ideas like Pulse matter. Daily life is the main event.
Pulse as a push feed: Brian Fagioli and Manton Reece like it for time savings and discovery. Mike McBride warns it might walk toward ads. If you’ve ever watched your social feed drift from friends to sponsored “you might like” posts, you know the feeling.
Coding tools: GPT-5-Codex, in Simon Willison’s hands, looks calm and competent for interactive dev work. Kevin Kuipers says Claude Code’s single-agent style is easier to trust. That’s a personal taste thing. Like preferring a manual car—you trade convenience for control.
Prompting on GPT-5: Jeff Su and The PyCoach both push structured prompting. It’s less magic words, more recipe. A little “router nudge” here, a “Perfection Loop” there. Not glamorous, but it makes the sausage.
Mobile shortcuts: The iPhone 17 tricks from The PyCoach are the quiet MVP if you tap ChatGPT a dozen times a day. Action button, text replacements, widgets, Siri. Feels very IKEA: some assembly required, but the bookcase holds.
Culture and ethics: Simon Willison gives the relationship caution story. Khürt Williams names “workslop” and prefers careful, human-led research—though Claude Sonnet 4 got good marks. Nisheeth Vishnoi reminds that writing isn’t just format. If you’ve ever read a note that was perfect grammar and zero heart, you get it.
Money horizon: thezvi.wordpress.com says giant checks are landing, with Stargate buildouts and all. That usually means features will keep rolling and keep getting pushed into daily routines. Ready or not.
Tangents that still matter
A couple little wanderings from the posts stuck around in my head and kept looping back to ChatGPT’s week.
The “Perfection Loop” from Jeff Su sounds like creative practice, not just prompt engineering. Draft, critique, refine. It’s also how you learn a recipe or tune a running route. The method works beyond chat.
Tanay Jaipuria’s “Doing” language changes how you judge AI value. It’s not “did it suggest something smart?” It’s “did it finish the thing?” This is quietly reshaping how people measure ROI. You can feel this in how Brian Fagioli and Manton Reece talk about Pulse.
Kevin Kuipers’s praise for single-agent simplicity isn’t just a coding philosophy. It’s about trust. When a tool’s steps are legible, you tolerate mistakes better. When it’s a black box with ten mini agents, even correct answers can feel wobbly. This matters if Pulse starts doing more on your behalf.
Mike McBride’s ad-frame question isn’t doom posting. It’s housekeeping. If Pulse becomes a daily brief, how will it pay rent? Ads are the usual answer. If they arrive, what guardrails keep that brief from becoming a coupon book? It’s worth asking now, not later.
Khürt Williams using Claude Sonnet 4 and calling out ChatGPT’s misses suggests the competitive gap is real and situational. The best model might be the one that works for your kind of work this week. Not a religion, just a toolbox. Swap the screwdriver if the screw changes.
What to try this week (low effort, noticeable payoff)
If you’re on iPhone 17 and ChatGPT is daily, set the Action button to launch a new chat. Add two Text Replacements for your most used prompts (like “/sum” for “Summarize and list key actions in bullets, be concise”). Stick a ChatGPT widget on the first home screen. This gets you micro-wins that add up.
For GPT-5 prompts that feel off, test Jeff Su’s “router nudge” up top: “You are a precise technical editor” or “You are a senior backend engineer.” Then run his Perfection Loop: draft → critique → revise. It’s boring in a good way.
If you code, take Simon Willison’s GPT-5-Codex for an interactive spin. Keep prompts short. Ask for small diffs. Use it to generate docstrings or SVGs or tests. Treat it like a helper sitting next to you, not a project manager.
If Pulse is available to you, try it for a few mornings and watch whether it surfaces items you actually act on. If the brief pushes fluff or turns salesy, note it. If it saves you planning time, that’s signal. Also, peek at Manton Reece’s observation about referrals—it hints at a bigger shift in how content finds you.
For personal topics—relationships, sensitive choices—borrow Simon Willison’s implicit caution. Maybe don’t bring AI into a live argument like a surprise witness. Use it, if at all, before or after, not during the heat.
If your inbox smells like “workslop,” run a small filter experiment for templated phrases you see too often. Or reply slower. Or not at all. Khürt Williams basically says “you don’t have to swallow every AI paragraph that lands.” That’s fair.
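If you want to actually run that filter experiment, a crude version is just substring counting over a list of template phrases you keep noticing. A minimal sketch; the phrase list here is mine and purely illustrative, so tune it to your own inbox:

```python
# Phrases that tend to signal AI-bulked "workslop"; entirely illustrative.
TEMPLATE_PHRASES = [
    "i hope this email finds you well",
    "in today's fast-paced world",
    "leverage synergies",
    "as per my last email",
]

def workslop_score(text: str) -> int:
    """Count how many known template phrases appear in a message."""
    lowered = text.lower()
    return sum(phrase in lowered for phrase in TEMPLATE_PHRASES)

msg = "I hope this email finds you well. In today's fast-paced world..."
print(workslop_score(msg))
```

A score of zero doesn’t mean a message is heartfelt, and a high score doesn’t prove a bot wrote it; it just tells you which messages deserve your slower reading, which is the whole point of the experiment.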
The bigger picture, but said simply
This week, ChatGPT looked less like a shiny robot in a YouTube demo and more like a set of household habits forming. A few knobs got added—Pulse for mornings, Codex for coding, iPhone shortcuts for speed. A few people noticed what happens when those habits seep into parts of life that are touchy. And yes, a few folks pointed out someone has to pay for all this GPU juice, and that usually bends the product shape.
I’d say the strongest current is still the bundling wave. People like one place to get things done. If Pulse turns into a default morning feed—and it feels like it could—the assistant becomes the entry point to the day, not just the tool you ask a question at noon. That’s a power shift. Feels small in week one, not small in year two.
But there’s a matching current saying “keep it simple and controllable.” You can hear it in Kevin Kuipers on single agents, in Simon Willison on tight prompts, in Jeff Su on surgical precision. The assistant that respects your steering wheel might win more trust than the one that tries to drive.
And then there’s the human bit that doesn’t go away. Nisheeth Vishnoi reminds that a well-formed sentence isn’t a life. Khürt Williams reminds that inboxes are still made of attention, not storage space. Simon Willison reminds that the right answer in the wrong moment is the wrong answer.
If any of these threads tug at your curiosity, the original posts go deeper and sharper. These writers are not waving banners; they’re poking at the small corners where usage turns into habit. That’s the interesting part. That’s where “a model” becomes part of the day, like a kettle on the stove or the notes app you keep reopening. Some weeks it looks like a parade. This one looked like home improvement.
I’d keep an eye on Pulse rolling into more hands, on developer takes comparing Codex and Claude Code in the wild, and on what happens to prompting styles now that GPT-5 seems to prefer clean tickets over chatter. If ad-like elements sneak into the assistant brief, you’ll feel it quick. If they don’t, and the brief keeps saving time, the bundle gets tighter.
Little thing to end on: that 73% non-work stat from Tanay Jaipuria is a tell. ChatGPT isn’t just a tool you clock into. It’s turning into the default helper for the small things. The kind of helper that reminds you to buy cilantro and then fixes your SVG alt text. Sounds ordinary. That’s why it’s big.