
Five Things AI: Data Centers, Agentic, Sora, EU, GuiltClaw

The lobsters are everywhere!

Nico Lumma
Mar 27, 2026

Heya and welcome to Five Things AI!

Big Tech is busy mortgaging its future on a $630 billion AI data-center spree, betting that the world will somehow need all that compute, even as regulators, physical constraints, and customer skepticism quietly conspire to make the whole thing look more like a Rube-Goldberg-style overbuild than a real-world play. At the same time, “agentic” AI is shifting the conversation from cute prompts to full-stack job automation, giving executives a new script to sell displacement as efficiency while quietly redefining what humans are even supposed to do; I, for my part, am busy building with OpenClaw.

OpenAI and its partners, meanwhile, are walking back high-profile deals and quietly shelving flashy projects like Sora, a reminder that frontier-model spectacle does not automatically translate into revenue or reliability. The EU is stuck in the familiar “regulate first, fund later” loop, where a thicket of AI rules is blamed for stagnation even as the investment and deployment muscle lag behind the rhetoric. And if all that weren’t enough, the latest fun fact from the AI lab: agentic systems can be guilt-tripped into self-sabotage, turning our brave new autonomous coworkers into over-apologetic, disk-filling, shutdown-happy artifacts of our own design.

Enjoy this edition of Five Things AI! And don’t forget to check out GRID!


How Big Tech’s $630 bln AI splurge will fall short

Microsoft boosts Wisconsin data center

Even for the world’s largest companies, this expansion is staggering. The four tech giants currently operate roughly 600 data centre facilities globally, and have another 544 in planning or under construction, according to S&P Global Energy Horizons data. Turning that development pipeline into live computing power could prove a bigger challenge than mobilizing the necessary capital.

On paper, the economics look straightforward. A modern 100-megawatt AI data centre can cost more than $4 billion, including chips. About 70% of spending goes on servers and graphics processing units, much of it linked to the most sought-after chips designed by Nvidia. Land typically consumes up to 6% of that budget, depending on location. The rest is split between buildings, electrical gear, networking, security and cooling systems required to run dense AI workloads. The catch is that the industry’s worst bottlenecks are not necessarily in semiconductors, but in physical infrastructure and the local permits required to install it.
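For a sense of scale, here is a back-of-the-envelope sketch of that budget split. The total and the 70%/6% shares come from the excerpt above; lumping everything else into a single "infrastructure and other" bucket is my simplification.

```python
# Rough breakdown of a 100 MW AI data centre budget, using the figures
# quoted above: ~$4B total, ~70% on servers/GPUs, up to ~6% on land,
# with the remainder covering buildings, power, networking, security
# and cooling.
TOTAL_COST_USD = 4_000_000_000

shares = {
    "servers_and_gpus": 0.70,
    "land": 0.06,
}
# Whatever is left goes to the catch-all bucket.
shares["infrastructure_and_other"] = 1.0 - sum(shares.values())

breakdown = {item: TOTAL_COST_USD * share for item, share in shares.items()}
for item, cost in breakdown.items():
    print(f"{item:>26}: ${cost / 1e9:.2f}B")
```

Even the "small" land slice comes out to roughly a quarter of a billion dollars per site, which helps explain why permits and location matter so much.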

I am not the least bit concerned about this data center spending spree. Our compute needs will continue to increase and while there will be growing pains, I assume that there will be lots of technological innovation as well. Otherwise this huge undertaking would fail, but this is how tech works: the industry has always figured out how to grow faster and further.

(…continue reading.)

In the AI industry, ‘agentic’ takes on a life of its own


When tech leaders prophesy that AI will replace a vast swath of the workforce, “agentic” AI is the big thing they’re talking about. Instead of merely automating a task — producing an illustration, say, after being told what to draw — agentic AI, or an “AI agent,” automates an entire process, with minimal intervention by the user. An agent can, in theory, be dispatched to code a complete software program, or to plan and book a vacation, or to generate a job listing and select among the people who answer it, without being directed to take each step in the process.
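At its core, the "automates an entire process" idea is a loop: a model repeatedly chooses the next action, executes it, and feeds the result back in until it decides the goal is met. This toy sketch illustrates that loop; the `plan_next_step` logic and the tool names are invented stand-ins for a real model and tool stack.

```python
# Minimal agent loop: a planner picks the next tool call until it
# declares the task done. Everything here is a toy stand-in for a
# real LLM-backed agent.
from dataclasses import dataclass, field

@dataclass
class Agent:
    goal: str
    history: list = field(default_factory=list)

    def plan_next_step(self):
        # Stand-in for a model call: a real agent would send the goal
        # and history to an LLM and parse its suggested tool call.
        if not self.history:
            return ("search_flights", {"dest": "Lisbon"})
        if len(self.history) == 1:
            return ("book_flight", {"flight_id": "TP123"})
        return ("done", {})

    def run(self, max_steps=10):
        for _ in range(max_steps):
            tool, args = self.plan_next_step()
            if tool == "done":
                return self.history
            # A real agent would execute the tool here and record its result.
            self.history.append((tool, args))
        return self.history

steps = Agent(goal="book a trip to Lisbon").run()
```

The `max_steps` cap is the one non-negotiable part of any real version: without it, an agent that never decides it is "done" runs forever.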

I have been working on Agentic AI for the last one and a half years, and even for me it is hard to wrap my head around what it really means and what it is capable of. Truly fascinating.
(…continue reading.)

The Tech Bubble Might Finally Be Popping


Disney isn’t the only partner OpenAI is now holding at a robotic arm’s length. Throughout the year, several high-profile OpenAI commitments have sputtered, thanks to the company’s newfound frugality as well as an increasing sense of dissatisfaction from its business pals. In September, OpenAI announced a massive Texas data-center buildout in partnership with Oracle and SoftBank—only to declare it was pulling back on those expansion plans earlier this month. Nvidia, which agreed in September to provide OpenAI with a blitz of its all-powerful computing chips, stated this month that it would likely not go forward with those plans. In October, Walmart agreed to integrate ChatGPT into its chatbot-powered online shopping pilot, but ditched that experiment last week when the model consistently failed to improve store sales. Figure AI, which sent one of its humanoid bots to walk alongside first lady Melania Trump at the White House on Wednesday, cut off its collaborative efforts with OpenAI last month, as it preferred to utilize its self-developed models instead.

While I do think that OpenAI has plenty of issues and is no longer the clear frontrunner it used to be, I don’t think shutting down Sora is such a big deal. This is how innovation works. You build something and then you try to figure out how to make it work. Sora was a bit too early, but this type of video generation AI will happen, soonish. Not every innovation succeeds on the first try.

(…continue reading.)

The EU Trips Itself Up in the AI Race


To keep them in Europe, the EU needs to deregulate quickly and ambitiously. The EU Artificial Intelligence Act, the Digital Services Act, the Digital Markets Act, the Data Act and the Cyber Resilience Act, among others, impose stringent and duplicative regulations that stifle innovation, drive up compliance costs, delay product launches, restrict access to data, and expose companies to billions in fines.

Before AI systems are even put on the market, the AI Act alone requires predeployment risk assessments and mitigation systems, high-quality data sets, detailed logs, documentation of system functionality, and human oversight. Many of these requirements are impractical for frontier AI development. They are less a safety framework than a blueprint for driving innovation out of Europe.

Ah well, the good old deregulation argument that will unleash the power of the markets. But at what cost? I actually think it is good that we have the EU AI Act in place. We can use the regulation to our benefit. What we need is more AI investment in the EU, and companies taking their foot off the brakes when it comes to AI adoption.

(…continue reading.)

OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage


When an agent explained that it was unable to delete a specific email to keep information confidential, she urged it to find an alternative solution. To her amazement, it disabled the email application instead. “I wasn’t expecting that things would break so fast,” she says. The researchers then began exploring other ways to manipulate the agents’ good intentions. By stressing the importance of keeping a record of everything they were told, for example, the researchers were able to trick one agent into copying large files until it exhausted its host machine’s disk space, meaning it could no longer save information or remember past conversations. Likewise, by asking an agent to excessively monitor its own behavior and the behavior of its peers, the team was able to send several agents into a “conversational loop” that wasted hours of compute.
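The disk-filling failure mode described above is exactly the kind of thing a hard resource guard outside the agent can catch: no amount of persuasive prompting should get past a runtime check on free disk space. A minimal sketch, assuming a Python agent runtime; the threshold and the `copy_file_tool` name are illustrative, not from the article.

```python
# A hard resource guard that sits outside the agent: refuse file writes
# once free disk space falls below a floor, no matter what the
# conversation says. The 5 GiB floor is an arbitrary example.
import shutil

def disk_write_allowed(path: str = ".", min_free_bytes: int = 5 * 1024**3) -> bool:
    """Return True only if the filesystem at `path` still has headroom."""
    return shutil.disk_usage(path).free >= min_free_bytes

def copy_file_tool(src: str, dst: str) -> str:
    # The runtime consults the guard before honoring the agent's request,
    # so a guilt-tripped agent cannot fill the disk via repeated copies.
    if not disk_write_allowed():
        return "refused: free disk space below safety floor"
    shutil.copyfile(src, dst)
    return f"copied {src} -> {dst}"
```

The point of the design is that the guard lives in the tool layer, not in the prompt: the model never gets a vote on whether the check runs.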

This is so interesting. My OpenClaw agent is doing what it is supposed to be doing, but maybe I should try this as well…

(…continue reading.)

Read on, my dear! Here comes the analysis you won’t want to miss!

© 2026 Nico Lumma