Five Things


Five Things AI: Underclass, Social Edge Paradox, Google, AI Middleclass, Declining Judgement

There is so much AI out there! Find out what it does!

Nico Lumma
May 01, 2026
∙ Paid

Heya and welcome back to Five Things AI!

Silicon Valley is racing to build human-substituting AI: if you believe it is inevitable, then every company should race to be the one that builds it and claims a market valuation the size of the entire economy, even if that means summoning a permanent underclass through sheer market logic. Meanwhile, researchers warn of the Social Edge Paradox: if AI capability depends on the social complexity of human language, and AI deployment systematically reduces that complexity through cognitive offloading, then the technology is undermining the conditions for its own advancement. Google is hemorrhaging focus across Antigravity, Gemini Code Assist, Jules, and five other tools (the Jules lead just jumped to OpenAI), handing the coding race to Anthropic while proving that billions in capital mean nothing without product coherence. The cost frontier has bifurcated into two clusters (OpenAI pricing up, DeepSeek pricing down) with a disappearing middle class that mirrors economic polarization in the real world. And the CFA Institute warns that when machines replace rather than augment human inquiry, cognitive delegation erodes the capacity for independent reasoning that generates new knowledge. So we have acceleration creating polarization, undermining its own substrate, fragmenting even the best players, and collapsing cognitive foundations. The velocity is real, but the contradictions are compounding faster than the solutions.

Happy Labor Day! ✊🏻 And don’t forget to check out GRID!


Silicon Valley Is Bracing for a Permanent Underclass

If left to its own devices, Silicon Valley may summon a permanent underclass through its own market logic. If you believe that human-substituting A.I. is inevitable, then every company should race to be the one to build it — and claim a market valuation the size of the economy and then some.

New A.I. models are assessed based on how well they do on a set of benchmarks — essentially standardized tests for the model. Increasingly, these evaluations emphasize real-world economic utility, which means that developers are aiming directly at replacing human capabilities.

Silicon Valley sure hits different.

(…continue reading.)

The Social Edge of Intelligence

If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than its failures, create the spiral: a slow attenuation of the very substrate it feeds on.

This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges.

Think about this.

(…continue reading.)

Google’s internal struggle is handing the AI coding race to Anthropic and OpenAI

Anthropic cut off OpenAI’s access to its models last year, Wired magazine reported. Google has invested billions of dollars in Anthropic. A spokesperson for Anthropic did not immediately reply to a request for comment.

In recent years, DeepMind has tried to tighten control over how its AI breakthroughs are woven into Google products. Last year, Google appointed Kavukcuoglu to a new position as chief AI architect, a role in which he is charged with folding generative AI into Google products. Yet confusion about who is leading the charge on AI coding persists. Along with DeepMind, Google Cloud, Google Core, Google Labs and Android are all pushing AI coding in different ways, one of the people said.

Google released its Antigravity platform last year following the acquisition of talent and technology from startup Windsurf in a $2.4-billion deal. It joined a cluttered lineup of Google AI coding tools that includes Gemini Code Assist, Gemini CLI, AI Studio, Firebase Studio and Jules. Kathy Korevec, who oversaw Jules, jumped from Google to OpenAI earlier this month, according to her LinkedIn profile.

As a heavy Antigravity user I can totally relate: it shows that Google lacks focus and is fighting on too many fronts at the same time. That said, Google is still far better positioned than Microsoft or Apple at the moment, and compared with OpenAI and Anthropic it has a massive user base.

(…continue reading.)

The disappearing AI middle class


The cost frontier no longer behaves like a smooth curve. It is two clusters of economics with a stretched gap in the middle, and the gap is not going to close on its own in the near term. OpenAI will continue to release fast and price up, because the integrated product is the moat. DeepSeek will continue to release open weights and price down, because the commodity infrastructure thesis depends on adoption. Both can be right for different workloads, and the same agent can route between both within a single task.

And then, of course, there is a growing pool of open-source LLMs that keep getting more powerful and will see more and more use.

(…continue reading.)

The Perils of Declining Judgment in the Age of AI

AI can extend the reach of analysis and amplify the scale at which data can be explored. But it cannot replace the fundamental processes through which humans generate and evaluate knowledge. The pursuit of evidence remains an inherently human responsibility. It can assist in collecting and organizing evidence. It can help detect patterns that might otherwise remain hidden. Yet the act of questioning assumptions, interpreting meaning, and deciding which observations matter remains fundamentally human.

This distinction is crucial. When machines begin to replace rather than augment the processes of human inquiry, societies risk weakening the epistemic foundations that sustain progress. Cognitive delegation may improve short-term efficiency, but it can also erode the capacity for independent reasoning that generates new knowledge.

Civilizational progress has therefore never been the product of technological capability alone. It has emerged from a delicate balance between innovation and reflection, between exploration and verification. When this balance is maintained, technological tools can accelerate discovery. When it is lost, progress risks becoming dependent on systems that humans no longer fully understand.

Oh, this is so worth a read. And behind the paywall you can read my commentary on it!

(…continue reading.)

Read on, my dear! Here comes my analysis, which you won't want to miss!

© 2026 Nico Lumma