When OpenAI’s board briefly ousted Sam Altman from his post as CEO last November, the media obsession was… intense. Why so much fuss over a corporate drama? The public mania—and perhaps the ousting itself—came in no small part from a widespread belief that the company is well on its way to accomplishing its stated mission, the formidable goal of building artificial general intelligence: software capable of any intellectual task humans can do.

If it came true, AGI would be the whole enchilada. Today’s systems accomplish only narrow, well-defined tasks. For example, predictive AI—aka enterprise machine learning—draws from data to improve large-scale business operations, such as targeted marketing, fraud detection and various kinds of risk management. And generative AI creates drafts of writing, code, video and music. In contrast, since AGI would be as generally capable as humans—across all jobs, including the performance of AI research itself—the implications would be earth-shattering.

But the belief that we’re gaining ground on AGI is misguided. There’s no concrete evidence to demonstrate that technology is progressing toward general human-level capabilities. Reports of the human mind’s looming obsolescence have been greatly exaggerated.

This mistaken belief should be tempered by an appreciation of AGI’s immense difficulty. How could we build it? The human species is flummoxed. Even if the human mind could be emulated or equaled by a computer program—even if human abilities are algorithmic—the real difficulty lies in figuring out how to program it in the first place. Just because computers are so general-purpose that they hold the potential to do almost anything does not mean they will do everything we envision. And just because computers may soon equal the human brain’s sheer computational power quantitatively—in terms of the number of operations per second—that does nothing for the qualitative development needed to match the brain’s functions. The power is in the instructions we give the device, not in the device itself.

But journalists, futurists and other purveyors of hype promote this myth with impunity because it’s unfalsifiable. It may be hard to prove true, but it’s impossible to prove false. We could never be certain whether AGI is on its way until its alleged arrival.

Nonetheless, even if we can’t disprove the claim that AGI is nigh, we can examine its credibility as we would any other outlandish, unfalsifiable claim. To date, no advancement has provided clear insight into how to engineer human-level competence so general that you could assign the computer any task you might assign to a person. Speculating on that possibility is no different now, even after the last several decades of impressive innovations, than it was back in 1950, when Alan Turing, the father of computer science, first tried to define how the word “intelligence” might apply to computers.

That’s tough news for the AI industry, which has bet the farm on AGI. And for media darlings who enjoy a special sort of glamor and intrigue—as well as the occasional stock bump—for declaring that AGI is coming soon, including Altman (OpenAI), Bill Gates (Microsoft), Demis Hassabis (Google DeepMind), Jensen Huang (Nvidia), Elon Musk (Tesla) and Mark Zuckerberg (Meta).

Unfortunately, AGI is the last great hope for artificial intelligence. AI is a buzzword looking for meaning. It sounds as if it must mean something particular, but it does not consistently identify any specific technology or cohesive idea. While some use “AI” to mean simply machine learning, it’s generally meant to convey something more. What exactly? There are two main camps. One accepts the vagueness of the word “intelligence” in its conception of AI. The other seizes upon a goal that’s unambiguous yet quixotic: it proclaims that AI is meant to be capable of no less than everything. Why not define it that way, if you believe AGI is around the corner?

Since the day the term was coined, AI has faced this identity crisis. It’s stubbornly nebulous, so proponents continually perform an awkward dance of definitions that I call the AI shuffle. Some say that AI is intelligence demonstrated by a machine, but that’s too vague to constitute a pursuable engineering goal. Some define AI in terms of an advanced capability, but that also falls short—when a computer drives a car or plays chess, it’s still considered only a primordial step rather than AI in the full sense of the word. Still others decline to define it at all; even some popular books on the topic offer no definition whatsoever. Some fans of the AI brand find its amorphous nature charming, but that’s a bug, not a feature. If you can’t concretely define it, you can’t build it.

The best way out of this predicament would be to drop the term AI entirely (outside of philosophy and science fiction) and stick with machine learning, a well-defined, proven technology. But AI has become a powerful brand, riding on tremendous momentum. And so, to establish a concrete definition, the field must eventually resort to the ultimate, grandiose goal: AGI. Doing so is divisive indeed—many insiders prefer more modest goals for AI as a field. But since those less anthropomorphic goals elude any clear, satisfactory definition, AI’s identity consistently reverts to AGI.

AGI presents such a gargantuan requirement for technology that even just validating its existence would be impractical within the brief, decades-long time frames that many bet on. For example, developers could benchmark a candidate system’s performance against a set of, say, 1 million tasks, including tens of thousands of complicated email requests you might send to a virtual assistant, various instructions for a warehouse employee that you could just as well issue to a robot and even brief, one-paragraph overviews of how the machine should, in the role of CEO, run a Fortune 500 company to profitability. You’d then have to wait and see how those companies fare over long periods, as the back-of-the-envelope sketch below illustrates.
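To make that impracticality concrete, here is a minimal back-of-the-envelope sketch in Python. Everything in it is hypothetical: the `Task` class, the task counts and the `years_to_judge` estimates are illustrative assumptions, not figures from any real benchmark.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One item in a hypothetical AGI validation suite."""
    description: str       # e.g., an email request, a warehouse instruction, a CEO brief
    years_to_judge: float  # real-world time before the outcome can be graded

def years_to_validate(suite: list[Task], parallel_tracks: int = 1) -> float:
    """Rough lower bound on calendar years needed to grade every task,
    assuming `parallel_tracks` independent real-world trials run at once."""
    total = sum(task.years_to_judge for task in suite)
    return total / parallel_tracks

# Toy suite loosely mirroring the examples above (all numbers are guesses).
suite = (
    [Task("complicated email request to a virtual assistant", 0.001)] * 50_000
    + [Task("instruction issued to a warehouse robot", 0.01)] * 50_000
    + [Task("one-paragraph brief: run a Fortune 500 company to profitability", 5.0)] * 1_000
)

# Even with 1,000 real-world trials running in parallel, grading this modest
# 101,000-task suite ties up years of calendar time; the full 1-million-task
# benchmark imagined above would be far worse.
print(f"{years_to_validate(suite, parallel_tracks=1_000):,.1f} calendar years")
```

The point of the sketch is simply that the bottleneck is real-world judging time, not computing power: running a company to profitability can only be graded by waiting.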

Let’s call AGI what it is: artificial humans. In this story, the computer essentially comes alive. We must not let impressive developments—including the advent of generative AI—propel wild beliefs about the impending arrival of superintelligence. Instead, we should pivot from grandiose goals to credible ones. The false promise of creating AGI within mere decades leads to poor planning, incurs great cost, gravely misinforms the public and misguides legislation.




