Five Things AI: Gemini, Claude, ChatGPT, AGI, Polite Prompting
Everything you need to know about AI at the beginning of 2026. Really.
Heya and welcome back to Five Things AI!
Google’s Gemini has surged ahead in the AI race, leveraging massive resources, top models, seamless product integration, and unmatched distribution to outpace its rivals, even if it is more workhorse than the playful ChatGPT or the cool Claude. Claude Code steals the show as a doer that actually executes projects, from custom news feeds to code fixes, leaving skeptics gushing that “it just DOES stuff”; I pair it with Google Antigravity for the heavy lifting. Meanwhile, ChatGPT shines at idea ping-pong for tech and business brainstorming, boosting creativity without tackling serious math, while Sequoia’s pragmatic take on AGI boils it down to systems that figure things out via knowledge, reasoning, and long-horizon iteration, with today’s coding agents already proving the point.
AI-driven idea generation sparks debate about transforming science by crunching vast stores of forgotten data that no human can match, yet it is already a powerhouse in the hands of savvy researchers. On politeness: adding “please” and “thank you” to prompts barely dents energy use next to data centers’ massive footprint, but I keep it up because being nice feels right, you never know with the rise of the machines, and Antigravity’s cheeky “user seems agitated” notes during coding sessions crack me up every time.
I’m adjusting the format of Five Things AI. Each week, I pick five articles from the past few days that I think are useful for understanding where AI stands right now. Paid subscribers get my analysis and my perspective on the most relevant implications.
Enjoy this edition of Five Things AI!
Gemini is winning
In 2022, when ChatGPT launched, it was clear that Google had been caught flat-footed. But credit where it’s due: For a company not exactly known for its ability to focus on a coherent product strategy, Google managed to marshal its considerable resources in a single direction. Now, if chatbots are in fact the future — and most of the AI industry continues to bet that they are — there is simply no other company currently set up to truly compete with Google. Google has the models. It has the resources to improve them. It now has the distribution necessary to get people to use its bots, and the data required to make them uniquely personal and useful.
I am really impressed by what Google has released in the last few months. And obviously Google can integrate it nicely into its product suite and has the distribution power to make it available everywhere. Gemini is not as much fun as ChatGPT or as cool as Claude, but it is a serious workhorse to be reckoned with.
Move Over, ChatGPT!
I can see why the tech world is so excited. Over the past few days, I’ve spun up at least a dozen projects using the bot—including a custom news feed that serves me articles based on my past reading preferences. The first night I installed it, I stayed up late playing with the tools, sleeping only after maxing out my allowed usage for the second time that evening. (Anthropic limits usage.) The next morning, I maxed it out again. When I told a friend to try it out, he was skeptical. “It sounds just like ChatGPT,” he told me. The next day he texted with a gushing update: “It just DOES stuff,” he said. “ChatGPT is like if a mechanic just gave you advice about your car. Claude Code is like if the mechanic actually fixed it.”
Claude Code really is impressive, and it is smart how Anthropic ties it into everything and then just delivers. When I use Google Antigravity, I use Claude Opus 4.5 for the heavy lifting.
Can A.I. Generate New Ideas?
The answers to those questions could provide a better understanding of the ways A.I. could transform science and other fields. Whether A.I. is generating new ideas or not — and whether it may one day do better work than human researchers — it is already becoming a powerful tool when placed in the hands of smart and experienced scientists.
These systems can analyze and store far more information than the human brain, and can deliver information that experts have never seen or have long forgotten.
It is fascinating to discuss ideas with ChatGPT; I always use it to think out loud and play some idea ping-pong. I do not discuss serious math problems, but talking through my tech and business ideas certainly helps me get a grip on them, at least for me.
2026: This is AGI
AGI is the ability to figure things out. That’s it.*
*We appreciate that such an imprecise definition will not settle any philosophical debates. Pragmatically speaking, what do you want if you’re trying to get something done? An AI that can just figure stuff out. How it happens is of less concern than the fact that it happens.
A human who can figure things out has some baseline knowledge, the ability to reason over that knowledge, and the ability to iterate their way to the answer.
An AI that can figure things out has some baseline knowledge (pre-training), the ability to reason over that knowledge (inference-time compute), and the ability to iterate its way to the answer (long-horizon agents).
This is a pretty practical approach to defining AGI, and it would be foolish to think that it doesn’t serve Sequoia’s purpose.
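Sequoia’s three ingredients can be made concrete with a toy loop. This is a purely illustrative sketch of my own, not any vendor’s API or agent framework: “knowledge” is the current search interval, “reasoning” is proposing a guess from it, and “long-horizon iteration” is folding feedback back in until the answer checks out.

```python
# Toy illustration of "figuring things out": knowledge + reasoning + iteration.
# The task (guess a hidden number from feedback) is a stand-in for any
# problem where an agent refines its answer over multiple steps.

def figure_out(check, low, high, max_steps=20):
    """Loop until check(guess) == 0 or the step budget runs out."""
    for _ in range(max_steps):          # long-horizon iteration
        guess = (low + high) // 2       # reason over current knowledge
        verdict = check(guess)          # feedback from the environment
        if verdict == 0:
            return guess                # figured it out
        if verdict < 0:                 # guess too low:
            low = guess + 1             #   update knowledge upward
        else:                           # guess too high:
            high = guess - 1            #   update knowledge downward
    return None                         # budget exhausted

secret = 37
answer = figure_out(lambda g: (g > secret) - (g < secret), 0, 100)
# answer == 37
```

The point of the sketch is only that the loop, not any single step, does the figuring out, which is exactly why coding agents are the current proof of the definition.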
Does adding ‘please’ and ‘thank you’ to your ChatGPT prompts really waste energy?
What is more important, perhaps, is the persistence of the idea. It suggests that many people already sense AI is not as immaterial as it appears. That instinct is worth taking seriously.
Artificial intelligence depends on large data centres built around high-density computing infrastructure. These facilities draw substantial electricity, require continuous cooling, and are embedded in wider systems of energy supply, water and land use.
As AI use expands, so does this underlying footprint. The environmental question, then, is not how individual prompts are phrased, but how frequently and intensively these systems are used.
I always say “please” and “thank you” because, frankly, you never know: if the machines take over, they might treat me differently.
Oh, what do I know; I just think it is nicer to be nice. Every once in a while I do curse at Antigravity while letting it code so I can build things, and then the machine quietly writes “the user seems agitated”. I quite like that humor.
Continue reading to get my take on which two of these Five Things AI mean the most for you and us in 2026! You don’t want to miss it!