AGI is a meaningless term and a total distraction

AGI is probably the most talked-about, least-understood topic in AI. The TL;DR here is that I think it’s bullshit and a huge waste of collective time. There is, however, another topic we should be discussing.

4 min read time

I don’t want to spend a lot of time on this topic because the TL;DR here is that I think it’s bullshit and not really worth talking about.

Artificial General Intelligence (AGI) is a term that gets thrown around a lot. It’s often framed as the next frontier—the moment when machines reach human-level intelligence—and a goal that many foundational AI companies are racing each other toward. The problem is that AGI is a poorly defined concept. Like the definition of pornography, it means different things to different people. Because it lacks a concrete definition, it’s easy to breathlessly claim we’re on the verge of AGI without any rigor or meaning.

For example, there has been much noise about AIs cleaning up on benchmarks, such as OpenAI’s recent o3 model completing the ARC-AGI-1 test with an accuracy of 88%.

Impressive, sure. But have you seen that test? It’s essentially a series of visual puzzles that require moving or drawing little colored squares. I’m being a little unfair on purpose here—the point is that these benchmarks test a narrow slice of intelligence. Just because an AI can ace one test doesn’t mean it can do something genuinely useful in a real-world business context. Though, if your business is a colored-block puzzle-solving concern… wow, you’re in luck!

My team couldn’t care less about benchmarks. What impresses us is when models can be relied upon to synthesize, reason, and plan in real-world business situations. We know that no matter how smart your AI is, every problem needs to be decomposed into a bunch of smaller, more measurable and controllable tasks in order to build a reliable, adjustable, observable system. We don’t want super-intelligent black-box models that “just work.”

The real reason AGI comes up is that companies like OpenAI and Anthropic want to generate hype and direct attention toward themselves. Every time a new model is released, Sam Altman or some other AI wonk proudly proclaims that we’re on the edge of reaching AGI. It’s the world’s biggest head fake. What would AGI even look like? Who would determine whether we really got there? And what difference would it make? It’s like saying you have the world’s best pizza.

Cogito, Ergo Sum (“I Think, Therefore I Am”) 

René Descartes wasn’t the first or last to wrestle with the idea of consciousness, but he pretty much nailed it from the get-go. For him, life was defined by thought, consciousness, and the presence of a rational soul. He never arrived at a definitive formula or set of parameters for human consciousness, and the essence of the problem he was wrestling with still plagues us today.

Since René’s time, we’ve had endless debates and at least two Blade Runner films about what intelligence means. In those films, the Voight-Kampff test distinguishes humans from artificial intelligence. One of the films’ themes and subplots is that the test has definitional issues and can’t consistently differentiate between the two intelligences. That’s some nice AI (although a tad too murderous for my taste).

If we struggle to define intelligence in humans, how can we confidently define it for machines? Intelligence isn’t just about problem-solving. It involves creativity, reasoning, intuition, and adaptability across different domains. It requires a constant interplay between short- and long-term memory. It’s an ever-evolving result of ongoing synthesis of internal and external stimuli. The reality is that most AI today is still highly specialized and very narrow. It’s excellent at specific tasks but completely useless outside its trained domain. And that’s fine! Even with that limitation, we are still struggling to figure out how to fully harness its power.

For instance, an AI that can process legal contracts with extreme accuracy isn’t suddenly going to become a world-class chef. That’s because intelligence isn’t just about raw computation but context, learning, and adaptation. And that’s something AI still struggles with.

The Real Concern: Autonomy

If there’s one thing we should pay attention to, it’s AI autonomy, not AGI. AI is already making independent decisions in critical areas: finance, healthcare, and even military applications. We have F-16s flying themselves, drones deciding whom to target, AI systems controlling complex logistics, and even autonomous systems dictating who gets a loan. That’s the real conversation we should be having. When AI is given decision-making power, how do we ensure those decisions are fair, ethical, and aligned with our values?

AGI is an academic debate. Autonomous AI is a real-world issue that is happening right now. You’ve probably been showered by the frothy deluge of hype around agents. But what you don’t hear much about in those conversations is the one thing that makes agents compelling: their ability to reason and make decisions autonomously. Without that, they’re just automated workflows. 

We should be talking about philosophies or frameworks that help us build boundaries or controls around agency. What level of autonomy is appropriate for an agent? How do we measure the risk of poor decision-making? How do we build in observability or transparency so that we can understand where, when, and how mistakes were made? Should agents be certified somehow? This probably sounds crazy for the kinds of agents we have today that do simple things like route your email or organize your to-do list. But AI capabilities are accelerating very quickly.  

When AI capability curves meet or surpass human levels, we will start handing over control to the AI for simple efficiency reasons. And notice that the slope of those curves keeps increasing as time moves on. AI is getting better, a lot faster.

The Road to Nowhere

The AGI debate is, for the most part, a road to nowhere. It sparks excitement, fear, and endless speculation but doesn’t provide anything actionable. There is so much to be excited about with AI, but conversations about AGI are best reserved for sci-fi novels and cocktail party banter.

Let the futurists argue about AGI. The Voight-Kampff test and the lack of differentiation between human and AI intelligence are still fantasies. For you and me, there are more pressing AI opportunities (and risks) to address in the present. If you’re after genuinely amazing opportunities to transform something about your business with AI, we’d love to chat with you.

about the author

Ed is a partner at Machine & Partners. He spends way too much of his free time trying to keep up with the news and advancements in AI. The rest of the time he's playing tennis, driving his teenage daughter around, or cooking with his therapist wife.
