Five Things

Five Things

Artificial Intelligence

Five Things AI: AI Disrupts Tech Itself, Leaks, Clones, Chinese Parts, Chaos Agents

There is so much AI out there! Find out what it does!

Nico Lumma's avatar
Nico Lumma
Apr 03, 2026
∙ Paid

Heya and welcome to Five Things AI!

AI is turning Silicon Valley into the main thing it is disrupting, which is a very Silicon Valley way for the revolution to eat its own children: companies are using AI to replace the work they once paid armies of engineers to do, startups are building software with tiny teams, and even the business model of software itself is getting squeezed. At the same time, Anthropic’s Claude Code leak shows how fast the playbook gets copied once the code spills, while the broader AI build-out is running into a much less glamorous bottleneck: power, parts, and supply chains, including Chinese electrical equipment. The weirdly fitting part is that the future keeps looking less like a product launch and more like a messy systems problem, with agents that can be manipulated, models that can be cloned, and infrastructure that still depends on very old world hardware.

Enjoy this edition of Five Things AI! And don’t forget to check out GRID!


A.I. Could Change the World. But First It Is Changing Silicon Valley.

But nearly four years after OpenAI lit the A.I. boom with its ChatGPT chatbot, the one industry that is unquestionably being disrupted by this once-in-a-generation technology shift is the tech industry itself.

Tech workers, it is becoming clear, have been building their A.I. replacements. The profitable business models of software companies are also threatened by A.I. Even the way companies are built is being turned inside out, as tiny shops use A.I. to build apps and software that would have taken dozens of skilled programmers just a few years ago.

The revolution is eating its children.

(…continue reading.)

Anthropic Races to Contain Leak of Code Behind Claude AI Agent

The Anthropic website displayed on a laptop screen.

The result is that Anthropic’s competitors and legions of startups and developers now have a detailed road map to clone Claude Code’s features without needing to reverse engineer them—something that is already common in the cutthroat AI race.

The leak also gives hackers a large amount of new information to probe for bugs they could use to exploit the Claude Code software, or manipulate its Claude AI model into helping with their cyberattacks, creating risks for Anthropic and the developers who use its tools.

This is really a crazy story. It feels as if every other post on reddit is currently about someone porting the codebase to a different language. It’s certainly interesting to see what Anthropic has planned for future releases.

(…continue reading.)

AI can clone open-source software in minutes, and that’s a problem

AI can clone open-source software in minutes, and that's a problem

In their presentation, Dylan Ayrey, founder of Truffle Security, and Mike Nolan, a software architect with the UN Development Programme, introduced a tool they call malus.sh. For a small fee, the service can “recreate any open-source project,” generating what its website describes as “legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems.”

It’s a test case in how intellectual property law – still rooted in 19th-century precedent – collides with 21st-century automation. Since the US Supreme Court’s Baker v. Selden ruling, copyright has been understood to guard expression, not ideas.

That boundary gave rise to clean-room design, a method by which engineers reverse-engineer systems without accessing the original source code. Phoenix Technologies famously used the technique to build its version of the PC BIOS during the 1980s.

I actually think that the opposite is also true. It is so easy to just build open source versions of commercial apps and since building doesn’t cost much anymore, it makes it easy to open source it.

(…continue reading.)

America’s AI Build-Out Hinges on Chinese Electrical Parts

One of the world's largest data center is under construction in Abilene, Texas. Its main user OpenAI will likely consume as much as 1.2 gigawatts of power— enough to power 1 million American homes.

As the global AI race heats up, there is a huge rush to build data centers fast. There’s no lack of money chasing these projects, with tech giants Alphabet Inc., Amazon.com, Meta Platforms Inc. and Microsoft Corp. committed to spending more than $650 billion this year alone. Yet neither ambition nor capital is enough to materialize all the necessary components for these power-hungry computers.

Oh, this is just wild. Maybe they should have checked first?

(…continue reading.)

They wanted to put AI to the test. They created agents of chaos.

Lines of red and blue code overlaid on a photo of a person's eye lit by red lighting.

When a group of researchers at Northeastern University’s Bau Lab began toying with a new kind of autonomous artificial intelligence “agent,” it was supposed to be a fun weekend experiment. Instead, alarm bells started ringing.

The more they tested the capabilities and limits of these AI models, which have persistent memory and can take some actions on their own, the more troubling behavior they witnessed. Dubbed “ Agents of Chaos,” the group’s recently published work shows how, with very little effort, autonomous AI agents can be manipulated into leaking private information, sharing documents and even erasing entire email servers.

What a fantastic and insightful paper. You should read it.

(…continue reading.)

Read on, my dear! Here come’s my analysis you won’t want to miss!

User's avatar

Continue reading this post for free, courtesy of Nico Lumma.

Or purchase a paid subscription.
© 2026 Nico Lumma · Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture