By a Pragmatic CTO
I was at a dinner party in San Francisco last week. The host, a researcher at one of the big AI labs, tapped his glass to make a toast.
“To the arrival of AGI,” he said, his eyes misting over. “To the moment, coming soon, when we birth a digital superintelligence that will solve climate change, cure cancer, and perhaps—if we are lucky—explain the meaning of the universe to us.”
Everyone clapped. I clapped too, mostly to be polite.
But as I looked around the room at the nods of solemn agreement, I realized something: I didn’t care.
In fact, I realized I was actively annoyed.
Earlier that day, I had spent four hours trying to get GPT-5 to do something incredibly simple. I had a messy CSV file of 5,000 customer addresses. Some were in all caps. Some had the zip code in the state column. Some were just “New York” without the “NY.”
I prompted the “most intelligent system in human history” to clean it up.
It did a great job on the first 50 rows.
Then, around row 142, it decided to hallucinate a customer named “John Doe” living in “Atlantis.”
Around row 300, it got lazy and just wrote "// … rest of data …".
And when I finally got it to process the whole file, it subtly changed the date format from DD/MM/YYYY to MM/DD/YYYY halfway through, corrupting my entire database.
I don’t need a digital god that can ponder the nature of consciousness. I don’t need a machine that can write a symphony in the style of Beethoven.
I need a machine that can clean a spreadsheet without screwing it up.
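For a job like this, a few dozen lines of deterministic code already beat the oracle. Here is a minimal sketch of the kind of cleaner I actually needed — the column names and the (deliberately tiny) state mapping are invented for illustration, and the date parser fails loudly instead of silently guessing a format:

```python
import re
from datetime import datetime

# Hypothetical mapping; a real one would cover all 50 states.
STATE_ABBREV = {"new york": "NY", "california": "CA", "texas": "TX"}
ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def clean_row(row):
    """Normalize one address row. Column names are illustrative."""
    # Fix shouting-caps cities.
    row["city"] = row["city"].strip().title()
    # If a zip code landed in the state column, move it back.
    if ZIP_RE.match(row["state"].strip()):
        row["zip"], row["state"] = row["state"].strip(), ""
    # Expand full state names to two-letter codes.
    key = row["state"].strip().lower()
    row["state"] = STATE_ABBREV.get(key, row["state"].strip().upper())
    # Parse dates strictly as DD/MM/YYYY; raise on anything else
    # rather than quietly flipping day and month halfway through.
    row["signup_date"] = datetime.strptime(
        row["signup_date"], "%d/%m/%Y"
    ).strftime("%d/%m/%Y")
    return row
```

It does one boring thing, it does it the same way on row 50 and row 5,000, and it will never invent a customer in Atlantis.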
The industry’s obsession with AGI (Artificial General Intelligence)—a machine that can do anything a human can do—has become a massive, blinding distraction. It is diverting trillions of dollars and our best engineering talent toward a sci-fi fantasy, while the actual problems of the business world sit unsolved.

I don’t want a God. I want a better Excel. And here is why the “Boring AI” revolution is the one that actually matters.
The “Employee of the Month” Fallacy
The promise of AGI is that we will eventually create a “Digital Worker” that is fully autonomous. You will say, “Go build me a marketing strategy,” and it will go off for a week, research, write, design, and execute.
This sounds amazing. It is also a management nightmare.
Think about the best employee you have ever hired.
Was it the chaotic genius who questioned every order, spent three days philosophizing about the “why” of the project, and then delivered something brilliant but totally different from what you asked for?
Or was it the reliable senior engineer who listened to the requirements, pointed out the risks, and then executed the task perfectly, on time, exactly as specified?
AGI researchers are trying to build the chaotic genius. They are optimizing for Capability and Creativity.
But businesses optimize for Reliability and Predictability.
If I use an AI to process insurance claims, I cannot tolerate “creativity.” I don’t want the AI to “think outside the box” when deciding if a car accident is covered. I want it to follow the rules, every single time, with zero variance.
The “God-like” AGI that OpenAI and DeepMind are chasing is inherently unpredictable. A mind that can “think” is a mind that can wander. And in enterprise software, a wandering mind is a liability. We are building tools that are too smart for their own good, and definitely too smart for ours.
The “Better Excel” Manifesto
So, what is the alternative?
I call it the “Better Excel” philosophy.
Microsoft Excel is arguably the most successful piece of software in history. Why?
It is deterministic. (2 + 2 is always 4.)
It is bounded. (It handles rows and columns, not poetry.)
It provides leverage. (It lets one person do the work of ten accountants.)
We should be building AI that follows these principles. We need Artificial Narrow Intelligence (ANI) on steroids.
Imagine a model that has been lobotomized of all its knowledge about Shakespeare and quantum physics, but has been trained on every single accounts payable discrepancy in history.
It doesn’t chat.
It doesn’t have a “personality.”
It doesn’t apologize.
It just looks at an invoice, looks at a bank statement, and matches them with 99.999% accuracy.
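The core of such a matcher doesn't need a personality; it needs a loop. Here is a toy sketch of the idea — field names are invented, amounts are in cents to avoid float drift, and the matching rule is deliberately strict (exact amount plus the invoice number appearing in the memo), because in reconciliation a miss you can see beats a creative guess you can't:

```python
def match_invoices(invoices, statement_lines):
    """Match invoices to bank-statement lines by exact amount and by
    the invoice number appearing in the statement memo.
    Field names are illustrative. Returns (matches, unmatched_lines)."""
    unmatched = list(statement_lines)
    matches = []
    for inv in invoices:
        for line in unmatched:
            if (line["amount_cents"] == inv["amount_cents"]
                    and inv["number"] in line["memo"]):
                matches.append((inv["number"], line["id"]))
                unmatched.remove(line)  # each line matches at most once
                break
    return matches, unmatched
```

Anything left in the unmatched pile goes to a human. No hallucinated matches, no "thinking outside the box" about whose money this is.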
This isn’t sexy. You can’t put it on a magazine cover. You can’t have a podcast debate about whether it has a soul.
But it is useful.
Right now, we have models that are “Jack of all trades, master of none.” GPT-5 is a B+ student at everything. It’s a B+ coder, a B+ lawyer, a B+ poet.
But I don’t hire B+ generalists. I hire A+ specialists. I want a model that is an F in poetry but an A+++ in Python. I want a model that couldn’t tell you the capital of France if its life depended on it, but can spot a cancerous nodule on an X-ray better than any human doctor.
The Cost of “General” Intelligence
There is a hidden tax to AGI: Inefficiency.
When you ask a massive, general-purpose model like Claude Opus to extract data from a PDF, you are activating a neural network that contains the sum total of human knowledge.
You are lighting up neurons that know about the French Revolution, the lyrics to Taylor Swift songs, and the recipe for beef bourguignon.
All to parse a phone number.
It is like using a nuclear reactor to toast a bagel. It works, but it’s overkill, and it’s expensive.
This inefficiency is why AI is currently failing to scale in the enterprise.
Latency: Big “thinking” models are slow.
Cost: Running a 2-trillion-parameter model for every customer support ticket is a recipe for bankruptcy.
Hallucination: Because the model knows too much, it is prone to connecting dots that shouldn’t be connected.
The “Better Excel” approach advocates for Small Language Models (SLMs).
Give me a 3-billion-parameter model that has only read JavaScript documentation. Nothing else.
It will be fast. It will be cheap enough to run on a laptop. And it won't hallucinate about history, because it doesn't know any history.
The Feature vs. The Product
We are confusing a Feature (Intelligence) with a Product (Utility).
AGI is the ultimate feature. It’s raw horsepower.
But horsepower doesn’t win races; cars do.
The “Better Excel” mindset focuses on the wrapper around the intelligence.
Excel: The product isn’t the calculation engine; it’s the grid interface that makes the calculation usable.
AI: The product shouldn’t be the “Chatbot.” The chatbot is the worst possible interface for getting work done.
Have you ever tried to build a complex app by chatting with an AI?
“Move the button to the left.”
“No, not that left.”
“Now change the color.”
“You undid the first change!”
It’s exhausting. The interface is the bottleneck.
A “Better Excel” AI would look like… well, an interface.
It would be a dashboard where the AI proactively fills in the fields, highlights the anomalies, and suggests the next step. I don’t talk to it. I just approve its work.
It’s Agentic UI. The AI is the engine under the hood, not the passenger sitting next to me chatting my ear off.
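The plumbing behind that approve-don't-chat workflow can be almost embarrassingly simple. Here is a hedged sketch of a propose/approve queue — every name is hypothetical, and the one invariant that matters is that nothing reaches the committed record without a human clicking approve:

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    """One AI-proposed field value awaiting human sign-off."""
    field_name: str
    value: str
    confidence: float

@dataclass
class ReviewQueue:
    """Proposals sit in `pending`; only approval moves them to `committed`."""
    pending: list = field(default_factory=list)
    committed: dict = field(default_factory=dict)

    def propose(self, suggestion: Suggestion):
        self.pending.append(suggestion)

    def approve(self, field_name: str):
        for s in list(self.pending):
            if s.field_name == field_name:
                self.committed[s.field_name] = s.value
                self.pending.remove(s)

    def reject(self, field_name: str):
        self.pending = [s for s in self.pending
                        if s.field_name != field_name]
```

The model does the typing; I do the judging. That's the whole interface.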
The “Boring” Future
I suspect that 2026 will be the year the AGI bubble bursts—not because the tech fails, but because the market gets bored.
We are seeing it already. The “Wow” factor of ChatGPT writing a poem has faded. The novelty is gone.
Now, the CIOs are asking the hard questions:
“Can this thing actually reconcile our ledger?”
“Can it migrate our legacy COBOL code without breaking it?”
“Can it guarantee 100% data privacy?”
The answer from the AGI crowd is usually: “Well, no, but look, it can reason about the Trolley Problem!”
The answer from the “Better Excel” crowd (companies like Harvey for law, or Sierra for support) is: “Yes. It does exactly that one thing, perfectly.”
The trillions of dollars flowing into NVIDIA GPUs are betting on a sci-fi future. But the trillions of dollars in the real economy—manufacturing, logistics, healthcare, finance—are waiting for a practical present.
I don’t want a machine that passes the Turing Test. I don’t care if the machine can trick me into thinking it’s human. In fact, I prefer if it doesn’t.
I want a machine that passes the “Friday Afternoon Test.”
It’s 4:55 PM on a Friday.
I have a pile of messy data.
Can I dump it into the AI, hit “Process,” and trust the output enough to email it to my boss and go to the pub?
Right now, the answer is “No.” I still have to check its work. I still have to babysit the “God.”
Until we stop chasing the fantasy of AGI and start building the reality of reliable, boring automation, we are just playing with expensive toys.
So, please, keep your superintelligence. Keep your consciousness. Keep your digital gods.
Just give me a spreadsheet that fills itself out, and I’ll be the happiest man in Silicon Valley.
