Five Things


Five Things AI: Oppenheimer Moment, Most Disruptive Company, Catching up with Claude Code, Better Agentic Coding, More Intense Workloads

Read this to know what matters in AI right now.

Nico Lumma
Mar 13, 2026
∙ Paid

Heya and welcome back to Five Things AI!

Yet another week where almost nothing happened in AI. Well, except for these topics: Anthropic under Dario Amodei stands out with its intense focus on AI safety amid explosive growth, drawing Manhattan Project parallels while pushing utopian visions in essays like Machines of Loving Grace. At the same time, the company is clashing hard with the Pentagon over red lines on autonomous weapons and domestic surveillance, which earned it a supply-chain risk label that OpenAI quickly exploited. Claude Code has reset the bar for agentic coding, leaving OpenAI scrambling after missing the Cursor boat. Meanwhile, startups like Axiom are tackling buggy AI code via transfer learning from math proofs, and data from a recent study shows AI ramping up workloads despite lightening some tasks, creating multitasking chaos. I have been reading a ton about Anthropic lately and think its principled stance, tight “ant” culture, and recursive self-improvement edge make it the most disruptive force right now, even as safety policies bend to the race.

Enjoy this edition of Five Things AI! And don’t forget to check out GRID!


Dario Amodei’s Oppenheimer Moment


Amodei does not say that this utopian AI future is inevitable. To the contrary, among the chief executives at the top AI labs, he may be the one who worries most about the technology’s dangers. “Machines of Loving Grace” is an optimistic outlier in his larger oeuvre of published writing, much of which concerns the risks that will accompany the creation of a greater-than-human intelligence. Amodei seems to think of today’s AI researchers as comparable to Manhattan Project scientists, and has been known to recommend The Making of the Atomic Bomb. In his telling, superhuman AI could be even more dangerous than nuclear weapons, which is why AI needs to be developed the right way, by the right people, so that it doesn’t overpower humanity or tip the global balance of power toward autocracies.

I have been reading quite a lot about Dario Amodei and Anthropic over the last few weeks, and I think the way he thinks about what he is doing is quite remarkable.

(…continue reading.)

The Most Disruptive Company in the World

As it grew, Anthropic was determined to preserve its founding values and tight-knit culture. Employees call themselves “ants.” Many maintain a digital “notebook,” a Slack channel where they share their hopes, fears, and insights in stream-of-consciousness fashion. Dario Amodei writes his own lengthy entries, Daniela says. Dario also gives biweekly company-wide lectures known internally as “Dario vision quests,” Daniela says. Managers are fixated on maintaining a shared sense of purpose. Potential recruits must pass a highly selective “cultural interview,” which is designed partly to screen out people who aren’t in it for the mission. (A sample question: Would you be willing to lose the value of your stock if Anthropic decides not to release models because it can’t guarantee they’re safe?) Anthropic’s competitors contain fiefs “that all care about different things and are low-key at war with each other,” says Daniel Freeman, a member of Anthropic’s frontier red team who used to work at Google. “I’ve absolutely never felt that at Anthropic.”

The fight with Hegseth shows clearly how advanced AI has become, and how dangerous it can be in the wrong hands.

(…continue reading.)

Inside OpenAI’s Race to Catch Up to Claude Code

In early 2024, Anthropic was training Claude 3.5 Sonnet on some of those messy code repositories. When the model launched that June, many users were impressed with its coding abilities. This was particularly true at a startup called Cursor, founded by a group of twentysomethings, which let developers code with AI by asking for changes in plain English. When the company incorporated Anthropic’s new model, Cursor’s usage began rocketing upward, according to a person close to the startup. Within months, Anthropic would begin internal testing of its own version: Claude Code. As Cursor took off in popularity, OpenAI approached the startup about an acquisition. The founders declined the offer before talks ever reached an advanced stage, people close to the startup told me. They saw the potential of the coding industry and wanted to stay independent. At the time, OpenAI was training its first so-called reasoning model, o1, which could work through a problem step by step before delivering an answer.

Claude Code sets the standard for agentic coding right now, and it is really interesting how OpenAI missed the boat on this.

(…continue reading.)

A.I. Writes Buggy Code. A Silicon Valley Start-Up Wants to Fix It.


Although Axiom’s technology learned its skills by analyzing proofs of math problems, the company recently said it had achieved high scores on a standard benchmark test that judges whether A.I. systems can verify computer code. A.I. researchers called this “transfer learning” — when a system learns one skill (like proving math problems) and can successfully transfer that skill to a different task (like verifying computer code).

As they begin to train their systems for code verification, Ms. Hong and her colleagues said, they can further improve the quality of A.I.-generated code.

But experts warn that these methods have limits.

Right now I let ChatGPT Codex audit the code that Gemini and Claude generated with the help of Antigravity, which is an interesting way of making the code better and more secure. Coding agents only get this far right now, and the next wave will be so much better.
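To make that workflow concrete, here is a minimal sketch of a generate-then-audit loop across two models. The helpers generate_code and audit_code are hypothetical placeholders, not real Codex, Gemini, Claude, or Antigravity APIs; in practice each would wrap whatever agent CLI or SDK you actually run.

```python
# A minimal sketch of a cross-model "generate, then audit" loop.
# generate_code() and audit_code() are hypothetical placeholders for
# whatever coding agent and reviewing model you actually use; the point
# is only that the reviewer is a *different* model than the author.

def generate_code(task: str) -> str:
    """Ask the builder model for an implementation of the task."""
    raise NotImplementedError("wrap your coding agent here")


def audit_code(code: str) -> list[str]:
    """Ask a second model to review the code; return a list of findings."""
    raise NotImplementedError("wrap your reviewing model here")


def build_with_review(task: str, max_rounds: int = 3) -> str:
    """Generate code, then loop: audit, feed findings back, regenerate."""
    code = generate_code(task)
    for _ in range(max_rounds):
        issues = audit_code(code)
        if not issues:  # the auditor signs off
            return code
        # Feed the auditor's findings back into the next generation pass.
        task = task + "\n\nFix these review findings:\n" + "\n".join(issues)
        code = generate_code(task)
    return code  # best effort after max_rounds of review
```

The design choice that matters is the separation of roles: the model that wrote the code never gets to be the one that declares it secure.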

(…continue reading.)

AI Isn’t Lightening Workloads. It’s Making Them More Intense.


Examining AI users’ digital activity 180 days before and after they began using such tools on the job, ActivTrak found AI intensified activity across nearly every category: The time they spent on email, messaging and chat apps more than doubled, while their use of business-management tools, such as human-resources or accounting software, rose 94%.

Meanwhile, the amount of time AI users devoted to focused, uninterrupted work—the kind of concentration often required for figuring out complex problems, writing formulas, creating and strategizing—fell 9%, compared with nearly no change for nonusers.

I think it is doing both at the same time. It makes some workloads so much lighter, but at the same time we end up doing so much more in the same amount of time that we will have a hard time multitasking through all of it.

(…continue reading.)

Read on, my dear! Here comes the analysis you won’t want to miss!
