AGI: Weekly Summary (November 24-30, 2025)
Key trends, opinions and insights from personal blogs
I would describe this week's blog pile-up on AGI as a small, noisy crossroads: a market square where a few folks shout about the map, some whisper about the compass, and others keep asking whether there's even a map at all. The pieces collected between November 24 and 30, 2025 circle a handful of shared worries: which path gets us to true generality, how we should think about values if AGI ever actually cares, who should bankroll the mad science, and whether the whole thing is decades away or next year. They don't agree much, and that disagreement is the interesting part; it shows where the hard questions actually live.
Moving from scaling to research: the Sutskever conversation
There’s a clear mood shift in the conversation that starts with Dwarkesh Patel hosting a talk with Ilya Sutskever. The headline is familiar — “we’re moving from scaling to research” — but the meat is worth chewing on. I’d say the takeaway is simple and stubborn: we’ve leaned on bigger and bigger models for a while, and now the bet is on understanding generalization, learning from deployment, and getting clever about reinforcement learning. The phrasing in the write-up makes it sound like the AI community is shifting from pushing the gas pedal to opening the hood and asking what parts actually make the car run.
Sutskever’s point about models learning from deployment is worth remembering. It’s like teaching someone to cook by sending recipes and then watching them in a real kitchen. You learn different things when there’s heat on the pan and a crying toddler in the next room. The blog says he thinks AI should be able to learn in the field, not just in a lab. There’s also this bigger, messier idea that as systems get more powerful they might have to ‘care’ about life in some sense — not just execute commands. That’s a heavy line. It’s a mix of technical planning and existential anxiety all rolled into one.
Economic implications show up too. The post suggests deployment will change incentives. If models learn while deployed, their value hinges less on a one-time training sprint and more on long-term, real-world feedback. To me, that flips how companies might compete. You don’t just sell a pre-trained engine anymore — you sell a mechanic and a maintenance plan.
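To make that deployment idea concrete, here's a toy sketch of what a learn-while-deployed loop could look like. Everything in it is hypothetical (the Model class, the simulated feedback signal); it illustrates the shape of the feedback loop, not anything from the Sutskever conversation itself.

```python
# Toy sketch of a deployment-learning loop: a model that keeps updating
# from real-world feedback instead of freezing after training.
# All names here (Model, deployment_loop, etc.) are made up for illustration.

import random

class Model:
    """A stand-in for a deployed model with an online update step."""
    def __init__(self):
        self.weights = {}  # placeholder for learned parameters

    def predict(self, query: str) -> str:
        return f"answer({query})"

    def update(self, query: str, answer: str, reward: float):
        # In a real system this would be a gradient step, a preference
        # update, or a retrieval-store write; here it just tallies a score.
        self.weights[query] = self.weights.get(query, 0.0) + reward

def deployment_loop(model: Model, queries):
    for query in queries:
        answer = model.predict(query)
        # Feedback arrives from the live environment: user ratings, task
        # success, downstream metrics. Simulated here with a coin flip.
        reward = random.choice([1.0, -1.0])
        model.update(query, answer, reward)

model = Model()
deployment_loop(model, ["plan a trip", "debug this script"])
print(model.weights)
```

The point of the sketch is the business shape, not the math: the value accrues to whoever operates the loop, because the loop never ends.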
If you’re the sort who likes the nuance, read the conversation. It feels like the industry’s moment of practical humility: scaling won’t by itself answer why intelligence generalizes.
Values talk — etymology, hermeneutics, and the slow grind of meaning
There’s a two-part essay by tsvibt that takes a very different route. It’s not about model sizes at all. Instead it wanders through the history of words about wanting, caring, and pursuing. At first it seems like a language nerd’s meandering, but then it sets up something sharper: if we want aligned AGI, we need a way to interpret human values that isn’t brittle.
Part one — “Words about values” — digs into etymology and argues for what the author calls a ‘hermeneutic movement.’ Translation: understanding values is an interpretive act. Values don’t land fully formed. They float in a sea of history, context, and habit. I would describe this as a reminder that alignment isn’t just programs and math. It’s also a humanities problem. If you like the smell of old books and slow thinking, this resonates.
Part two — “Relating values and novelty” — pushes further: values aren’t static. They’re diasystemic. Fancy word, but the idea is clear — values are spread across systems, contexts, and times. They should empower human operators, not replace them. That’s a practical tilt: if AGI gets clever, it should expand our choices, not narrow them. The essay treats values as promises and references — things that point to more than themselves. I’d say that’s useful because it resists the urge to pin human meaning down like a butterfly.
Both pieces press the same, slow question: how do we teach a machine to notice why humans value some things and not others? The result is a kind of intellectual patience you don’t see in every AGI post. It asks us to slow down, and that’s rare in a world addicted to faster benchmarks.
Safe Superintelligence (SSI): a company without a product roadmap
Dave Friedman writes about Ilya Sutskever’s company, Safe Superintelligence (SSI). This post is the kind of financial/strategic reading I always enjoy because it asks the blunt question: does this thing make any sense as a business?
Friedman’s answer is both yes and but. Yes, it makes sense as a coherent bet: deep pockets, top talent, and a mission to aim at safety instead of straight product-market fit. But it’s also a high-variance play. SSI doesn’t want to ship typical consumer products. The company seems to be buying time and brainpower to solve foundational issues. That’s not the usual startup playbook. It’s more like a research institute with a war chest.
The strange part — and Friedman calls this out — is the lack of real-world feedback. Startups usually live or die by customers. SSI looks like it’s intentionally avoiding that, at least for now. That raises sensible doubts: how do you know you’re on the right path if you’re not getting smacked by the market a little? It’s like trying to perfect a car engine without ever driving it.
There’s also a neat point about coherence. SSI’s approach follows from Sutskever’s views. If you buy his prioritization of safety and the idea that classical product cycles aren’t the right fit for this problem, then SSI is a consistent move. But consistency isn’t proof. It’s a bet with a long tail.
Networks, recommendation systems, and where value actually sits
Philipp Dubach asks a market-oriented question that’s easy to miss when everyone’s debating architecture: even if we build AGI, who profits, and how? His piece on LLMs and recommendation systems suggests something I’d been mulling over — models might be great at pattern matching, but recommendation quality often needs huge datasets and user signals. That’s not the same as general reasoning.
He’s skeptical of a quick jump from advanced pattern completion to fully general reasoning. I’d say the analogy is handy: pattern matching is like knowing every grocery item by sight. Reasoning is figuring out which groceries will make a meal together when you realize the oven’s dead. The former can be phenomenal; the latter requires a different kind of understanding.
Dubach also imagines the market effects. If several players hit AGI around the same time, price competition could drive down the pure model value. So then where’s the rent? It might move away from models and toward applications, integrations, and customer relationships. That matches the Sutskever thread on deployment: once systems learn in the wild, the business advantage might be who keeps teaching them better, not who trained them first.
That shifts strategy. Instead of owning the biggest model, firms may compete on who can make the model useful in a messy, human world. That’s less glamorous, maybe, but it’s where money actually lives. It’s also where ethics and governance become baked in — the “how” of deployment will matter.
Skepticism and timelines: LeCun, cautious voices, and “a decade at least”
A few posts — mainly from The Independent Variable — keep circling back to the idea that AGI is further away than hype suggests. They quote Yann LeCun’s line that we need new approaches and that a realistic timeline for AGI is at least a decade. That conservative, methodical view pulls against the rapid-scaling narrative.
The Independent Variable repeats the line in a couple of pieces that otherwise discuss humanitarian policy and the impact of restructured aid. The fact that these posts link AGI skepticism with broader policy concerns is telling. It's a reminder that AI debates don't happen in a vacuum. If political attention and funding drift toward speculative AGI, other concrete systems, like foreign aid, might get neglected. That's a cheap pivot, but an important one: technology fever has downstream human costs.
I’d say the skeptical beat is useful. It forces the more optimistic camp to explain which steps are left and why those steps aren’t seconds on a stopwatch but years of research. A decade sounds long in a Twitter thread, but in research, ten years can be modest.
The odd twin theme: humanitarian fallout mentioned alongside AGI
Several posts from The Independent Variable repeat the same report on the dismantling of USAID and its human toll. It's jarring to see those statistics, hundreds of thousands of deaths, listed beside contemplations about AGI timelines. The juxtaposition is useful, actually. It's like reading a cookbook and then finding a section on food deserts. Both are about systems that fail humans, but one is immediate and measurable; the other is speculative and complex.
These reminders pull the conversation away from ivory-tower abstractions. They say, in effect: while you debate AGI timelines, people are dying because of policy choices today. That’s a moral compass check. It’s not merely rhetorical. It’s practical: where funding and attention go matters now. The posts force a kind of triage question: do we spend vast sums chasing hypothetical future care or fix the real care systems that already exist? The authors don’t answer it cleanly, but the question is loud.
Points of agreement and the gaps that keep nagging
Reading all of these together, a few patterns pop up.
Research over blind scaling: There’s a clear move toward understanding rather than merely scaling. Sutskever voices it most directly, but you see it echoed in other pieces. The sense is that brute force has limits.
Deployment matters: Several posts note that models learning in the field will change incentives. That’s a practical, non-philosophical point. It’s also a business point: who runs the deployment pipeline might own the customer relationship and the data loop.
Values are hard: The tsvibt essays make the case that values are interpretive, evolving, and messy. That pushback against any simple ‘value vector’ is important. Alignment can’t just be a one-time setting.
Economics will surprise you: Dubach and Friedman both highlight that where the money ends up might not be where the research is. Models can become commodities. Services, integration, and safety guarantees might capture more economic value.
Timelines diverge: Some people see fast moves; others, like LeCun and The Independent Variable's reporting, see a long, uncertain road.
Those are the agreements. The gaps are sometimes larger: how, exactly, do you build interpretive mechanisms into AGI? What does deployment learning look like safely? How do we fund long-horizon safety research without starving current humanitarian needs? These are not toy problems. They’re the kind that take years to suss out.
Small notes and tangents (because people like tangents)
Reinforcement learning keeps popping up. It’s treated as a tool that might get us from pattern completion to something resembling learned habits. Think of it like training a dog. Feed it right, reward correctly, and habits form. But dogs aren’t philosophers. So we still need to figure out what the reward looks like at scale.
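For the curious, here's the dog-training idea in its most stripped-down form: a toy epsilon-greedy bandit (my own illustrative sketch, not from any of the posts) that learns which action pays off purely by tracking rewards. The open question the posts gesture at is what the reward function should be once the actions are open-ended.

```python
# Toy reinforcement-learning sketch: an epsilon-greedy bandit learning
# which action earns the most reward. The hard open question at AGI
# scale is what true_reward should actually be.

import random

actions = ["sit", "fetch", "bark"]
q = {a: 0.0 for a in actions}        # estimated value per action
counts = {a: 0 for a in actions}
epsilon = 0.1                        # exploration rate

def true_reward(action):
    # Hidden reward function standing in for "the world's feedback".
    return 1.0 if action == "fetch" else 0.0

for _ in range(1000):
    if random.random() < epsilon:
        a = random.choice(actions)   # explore
    else:
        a = max(q, key=q.get)        # exploit current best estimate
    r = true_reward(a)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]   # incremental mean update

print(q)  # 'fetch' ends up with the highest estimated value
```

The dog converges on "fetch" because the reward is simple and fixed. Habits form; philosophy doesn't. Scaling that loop to open-ended behavior is exactly where the reward question gets hard.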
The language thread — etymology and hermeneutics — feels almost quaint in a world obsessed with metrics. But sometimes you need quaint. Values are the soft underbelly of alignment. If you ignore them, you can produce a very obedient machine that still does harm.
Business models matter. I keep thinking of car manufacturers. Some build the engine; others build the service network. The company that owns the roadside assistance and replacement parts often ends up with steady cash, even if the engine-makers get the headlines.
The human cost tangent keeps poking me. It’s like listening to a band warming up and hearing sirens in the background. The two things are different, but you can’t unhear the sirens. Policy and tech funding are tied together.
What kept sticking with me
There’s an odd balance in these pieces between grand statements and small, practical worries. The grand statements — “we need to move from scaling to research” or “AGI might need to care about sentience” — are attention-grabbing. But the small worries — deployment loops, funding choices, the way companies actually behave — are the ground-level things that will decide whether those grand statements mean anything.
I’d say the most useful mood is modesty. When writers admit that values are interpretive, or that models might not equal reasoning, or that companies could misdirect funding, the conversation gets more grounded. There’s a certain stubborn, sensible humility in insisting on field-testing, on permissive research, on economic realism. That humility doesn’t make for clickbait, but it does make for better planning.
If you want to go deeper, check the original pieces. Dwarkesh Patel captures Sutskever’s technical unease. tsvibt gives you slow philosophical work on values. Dave Friedman asks whether SSI is a rational bet. Philipp Dubach pulls the market thread taut. And The Independent Variable tosses in a reminder that policy choices right now have lives on the line.
Read them if you like nuance and the kind of slow, slightly annoying questions that don’t go away. I’ll say it again because it mattered to me: models alone won’t carry us. The deployment, the interpretive work on values, and the money and attention decisions will. That is where this week’s conversation lives — not in a single paper, but in the messy space between research lab, boardroom, and hospital waiting room.
If you’re only skimming, remember this: AGI isn’t just engineering. It’s hermeneutics, markets, politics, and a bit of hopeful guesswork. Like a potluck dinner where everybody brings one dish, the whole meal only comes together if people actually share the food. Some posts felt like bringing a casserole; others brought a complex sauce that needs slow simmering. Both matter. Both, in different ways, will tell us whether the kitchen holds up when the real guests arrive.