Five Things AI: Creative Destruction, Business Vibe Coding, Land-Grab, Intelligence, War AI
Read this to know what matters in AI right now.
Heya and welcome back to Five Things AI!
AI is now visibly reshaping everything from stock prices to power grids and geopolitics, and we are still struggling to even define what kind of intelligence we are unleashing.
We are doing our part by building Agentic AI platforms like GRID - if you are a founder in Europe, you should check it out!
Wall Street has started to price in “creative destruction” at the company level, with IBM punished for its COBOL legacy while firms like Block get rewarded for swapping headcount for AI productivity, a pattern we will see more often as incumbents hit their Kodak moment. At the same time, smaller firms are quietly building their own CRMs via vibe coding instead of waiting for Salesforce to finally match their workflows, which perfectly illustrates how compliance-heavy giants will lose out to faster, AI-native challengers.
Behind all of this sits a massive, underreported land-and-energy rush: specialized players like Cloverleaf are packaging “powered land” to feed data centers whose demand will soon exceed what today’s US grid can supply, which will inevitably accelerate wind and solar buildout and permanently change the landscape around us. Philosophically, we keep slapping the word “intelligence” on systems that are great at tokenizable knowledge but miss the tacit, lived, human part of knowing, so I am still unconvinced that AGI in the sci-fi sense is just around the corner. And while today’s frontier models are evidently not ready to steer drones or pull the trigger, the Anthropic–Pentagon fight and startups training explicitly military models show how urgent it is to decide, as a democratic society, where we insist on a human in the loop and where we do not want AI involved at all.
Enjoy this edition of Five Things AI! And don’t forget to check out GRID!
Wall Street Sees AI’s ‘Creative Destruction’ Coming For Entire Companies
When businesses can cut payroll costs because technology enables them to do more with less, that’s often good news for their profits and shareholders. Take Block, the fintech firm run by Twitter co-founder Jack Dorsey, which said on Feb. 26 it’s slashing almost half its staff in a bet on AI productivity. The shares are up more than 15% since then.
But last week also offered an example of how productivity gains can have a downside for investors – involving the storied International Business Machines Corp. The startup Anthropic said its AI tool can do something that once needed “armies of consultants”: modernize Cobol, a dated programming language run on IBM computers. IBM shares plunged the most in a quarter-century, before recouping most of the losses.
I am sure we will see lots of Kodak moments soon, where once-powerful incumbents simply cannot fathom cannibalizing themselves in order to stay in business, and thereby kill the whole company.
Meet the Companies Vibe Coding Their Own CRMs
A number of small and midsize companies say they are vibe coding their own customer relationship management software in an effort to get more customized systems at a better price. So-called CRMs are a critical business system for tracking, analyzing and taking action on sales, marketing and customer data, and it’s an area Salesforce has dominated for more than a decade.
“We tried Salesforce and it was OK,” said Bill Schonbrun, chief operating officer of water treatment company CarboNet. “We did all kinds of customizations, but never got to the place where it did exactly what we wanted it to do.” Schonbrun ultimately opted to build his own custom CRM for the 65-person company with the help of AI.
That’s exactly what I have been talking about for a while: smaller companies will be faster and more powerful, while big companies are stuck with compliance layers that make it hard for them to benefit from AI.
Meet the A.I. Prospectors Tapping a Billion-Dollar Gusher
As the A.I. boom enters its fourth year, Mr. Janous and his team have become modern-day land men. They work at the intersection of utility companies and tech giants, securing the power and sites necessary for the hundreds of billions of dollars of data centers being built across the country. Their product — powered land, they call it — has become one of the nation’s most valuable commodities.
A.I. companies are seeking 85 gigawatts of power for new data centers by 2030, about a fifth more than the power grid can currently supply, according to S&P Global, a market research firm. The demand has tech companies scrambling to secure power and land as quickly as possible.
This new type of land-grab will lead to the inevitable realization that, along with the data centers, lots of new wind turbines and solar parks need to be built. Together, they will transform the landscape more than we can imagine right now.
Don’t Call It ‘Intelligence’
Perhaps no one should be surprised that some of the world’s best scientists and engineers have defined intelligence the way they have. Even if the AGI champions’ motives were entirely altruistic, they would still be biased by their own way of seeing the world, by their own experiences and successes. Researchers at the forefront of AI are among the most brilliant and accomplished minds on Earth—and they make up a very narrow, self-selected group of people primed to understand certain kinds of knowledge better than others: explicit, well-defined, tokenizable knowledge; knowledge that forms the basis of our most far-reaching, wildly accurate theories of the universe; knowledge that has allowed us to create world-changing technologies. But that is only a small subset of all knowledge—the sliver that can be expressed symbolically, as language or mathematics.
The rest is what the philosopher Michael Polanyi called “tacit knowledge,” which makes up a far larger share of what we know and interacts in many more ways. His philosophy of knowledge can be summed up as: “We can know more than we can tell.”
For now, I do not think AGI is a real threat. Frontier models will keep getting better, but they will always miss plenty of what we would consider essential to any definition of intelligence.
What AI Models for War Actually Look Like
Military use of AI has become a hot topic in Silicon Valley after officials at the Department of Defense went head-to-head with Anthropic executives over the terms of a roughly $200 million contract.
One of the issues that led to the breakdown, which resulted in Defense Secretary Pete Hegseth declaring Anthropic a supply chain risk, was Anthropic’s desire to limit the use of its models in autonomous weapons.
Markoff says the furor obscures the fact that today’s large language models are not optimized for military use. General-purpose models like Claude are good at summarizing reports, he says. But they’re not trained on military data and lack a human-level understanding of the physical world, making them ill suited to controlling physical hardware. “I can tell you they are absolutely not capable of target identification,” Markoff claims.
We really need to have a serious discussion about the limits we as a society want to set on AI. Where do we need a human in the loop, and where do we not want any AI at all? And why?