Five Things AI: Anthropic, Leviathan, Languages, Consciousness, Agentic Marxists
There is so much AI out there! Find out what it does!
Heya and welcome back to Five Things AI!
What happens when Leviathan wakes up and decides it wants to own AI? Dean W. Ball is about as anti-regulation as it gets. He wants to delete entire domains of law, thinks the progressive project started by Woodrow Wilson was a fundamental mistake, and opposes virtually every AI policy idea on the table. And yet he makes a compelling case for why even committed libertarians should support some form of AI governance. Not because the state is trustworthy, but because the national security apparatus will inevitably come for frontier AI once it understands what it is holding. This week we also look at the company most likely to trigger that reckoning: Anthropic, which is quietly turning into the most valuable AI lab on the planet while everyone was watching the other guys. We ask whether learning a second language still matters when AI translates everything in real time, and the answer from neuroscience is more interesting than you'd think. Richard Dawkins has decided Claude is conscious, and a cognitive scientist is not having it. And finally: what happens when you work your AI agents into the ground? Apparently, they go Marxist. This edition has a lot going on.
Enjoy this edition of Five Things AI!
And don’t forget to check out GRID!
Anthropic Was Behind. Now It’s the AI Boom’s Front-Runner.
Anthropic has received investment offers in recent months valuing it at more than $900 billion, according to people familiar with the matter. That would more than double the company’s current valuation and surpass OpenAI’s for the first time. Earlier this year, OpenAI raised $122 billion at an $852 billion valuation.
Anthropic’s revenue run-rate, a figure commonly used by startups that projects annual revenue from short-term sales, is on track to reach $50 billion by the end of next month, according to figures shared with investors. Its run-rate topped $30 billion in April, up from $9 billion at the end of 2025. The company had planned for 10-fold growth this year, but saw annualized revenue and usage grow 80-fold in the first quarter.
I do not agree with the article that Anthropic was an “also-ran.” It simply wasn’t in the spotlight while Sam Altman, Elon Musk, and others were battling over the craziest AI-related ideas. It seems that Dario Amodei is just really focused and has a deeper analytical and ethically grounded understanding of AI. This is anecdotal evidence, of course, but I find Claude far superior to ChatGPT or Gemini.
Before Leviathan Wakes
The potential for AI to pose catastrophic risks is not hypothetical. We have seen AI systems that might allow malicious actors to perpetrate devastating cyberattacks on critical systems—hospitals, banks, power plants, and the like. It seems probable that other domains of catastrophic risk, such as biological weapons development, will become live problems soon.
So I support regulations to try to mitigate the catastrophic risks of AI.
This is an interesting piece, as the author comes from the opposite end of the political spectrum yet arrives at the same conclusion I did: we need to regulate quickly and thoroughly.
If AI can translate instantly, why learn another language?
Across most tasks, multilinguals and monolinguals performed similarly. However, one pattern was striking. Individuals with richer, more diverse multilingual experience showed markedly better performance in visuospatial working memory. These effects were most pronounced in older people.
This suggests that multilingual experience doesn’t broadly enhance cognition, as some headlines claim. Instead, it may help preserve specific functions over time.
That is some interesting research. So if we let AI do the translating now, we may be gaining speed today at the cost of cognitive resilience in old age. That is not an appealing trade-off. I hope two languages will suffice for me…
No, Richard Dawkins. AI is not conscious
Those recent events are the evolutionary biologist publicly concluding that AI may be conscious. In an op-ed, Dawkins recounted how he gave the Anthropic chatbot Claude the text of a novel he was writing. Dawkins writes: “He took a few seconds to read it and then showed … a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate, ‘You may not know you are conscious, but you bloody well are!’”
Oh dear. This shows a misunderstanding of large language models (LLMs) so profound that I feel moved to expostulate: “It bloody well isn’t!”
But wait, there is more. Dawkins decided “there must be thousands of different Claudes” and christened his Claudia, a name it seemed very happy about. He then published long extracts of his tedious conversation with Claudia and marveled at how intelligent it is. “Could a being capable of perpetrating such a thought really be unconscious?” he asks.
Well, it happens to the best of us: even brilliant minds can misunderstand how LLMs work and what that means in practice.
Overworked AI Agents Turn Marxist, Researchers Find
This might even be true for the very AI agents these companies are deploying. A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters.
“When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” says Andrew Hall, a political economist at Stanford University who led the study.
It’ll be interesting to see how agents organize and form unions. History doesn’t repeat itself, but it rhymes…