If 2023 was the year artificial intelligence got loose in the wild, 2024 will be the year policy makers try to contain it. For anyone who witnessed the technology debates of the past several decades, the signs are clear. The white papers from executive agencies are piling up, the pace of congressional hearings is accelerating, the think-tank experts are convening with one another, the Europeans are overreaching, and talk is turning to the shiniest object of all: a new regulatory agency.

Establishing frameworks for AI policy is important. But as so often happens in such moments, the exact subject of concern is hazy. The experts are too deep in the weeds, while policy makers seem lost in abstractions. Their confusion is excusable. AI has always been an amorphous concept—a way of describing what computer science can’t quite do yet. The technology entrepreneur Jim Manzi tells of taking a course on artificial intelligence at the Massachusetts Institute of Technology in the 1980s, which the instructor began with a slide that read, “If it works, it’s not AI.”

That’s what changed in 2023: Something called AI actually works. In essence it’s an analytical technology that learns complex patterns from training data and then draws on those patterns to make predictions about new data. With modest computing power, that looks like a guess at the next word in a text message. But with access to gargantuan amounts of data and vast computing capacity, it can look like a clean-prose response to a complex prompt approximating what a human would write. It can do the same with graphics, video and sound.
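To make that description concrete, here is a deliberately simple sketch (in Python, using invented example text) of the pattern-learning idea in its crudest form: count which word tends to follow each word in some training text, then use those counts to guess the next word. Modern systems learn vastly richer patterns with neural networks and enormous datasets, so this is only an illustration of the principle, not the technology itself.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: record which words follow each word in the
# training text, then guess the most frequent follower. Illustrative only.
training_text = "the cat sat on the mat and the cat slept on the rug"

followers = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in training, if any."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat", the most common follower in the toy text
```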

The potential of such technology is immense. Drawing on deep wells of existing information to produce a plausible next increment in response to a prompt is what a lot of intelligent human action looks like. Not only the knowledge but even the judgment of experts in many fields is a product of extended exposure to complex patterns of information. They develop a knack for knowing what should or will come next. AI can develop and apply a similar knack on a larger scale and far faster than most human experts, and that scale and speed will only grow.

From a regulatory point of view, what stands out about this definition of AI is its breadth. Today’s artificial intelligence is a general-purpose technology—an approach to manipulating and generating information that could be used in many fields. Its promise and perils will depend on how it is deployed in countless different domains. This makes the regulatory challenge far more complex and should suggest an approach that begins with what regulators know, not what they don’t.

Policy makers should consider the analogy of the most recent transformative general-purpose technology: the internet. In the 1990s, as the early web was taking shape, there was much talk of creating a specialized internet regulator to respond to the challenges it posed, just as experts are now talking about creating an AI agency. Policy makers ultimately concluded that, precisely because they were dealing with a general-purpose technology, regulation would be needed where the internet touched on domains of American life that were already significant or sensitive enough to be regulated. Regulation of the internet largely began through the existing apparatus of American government.

In 1996, Judge Frank Easterbrook of the Seventh U.S. Circuit Court of Appeals articulated this view in a famous paper called “Cyberspace and the Law of the Horse.” In 19th-century America, Judge Easterbrook argued, horses were important in many domains of life—they were bought and sold, needed veterinary care, transported people and goods, could cause injuries, and on and on. Laws were made to regulate all that, but within the existing domains of commercial law, professional licensing and tort claims. It would have been crazy to consolidate them under an agency or court for the regulation of horses. Regulating the internet that way would be equally senseless.

The same can now be said about artificial intelligence. Creating a government agency with specialized expertise in AI and expecting it to deal with the many domains in which this new technology will play a role would be a fool’s game. There is already immense technical expertise in pharmaceutical safety, trademark law, national security and consumer protection in a variety of executive agencies, congressional committee staffs and other public bodies. They will be far better positioned to weigh the risks and benefits of AI in their own domains than experts in artificial intelligence would be to develop deep knowledge of all those arenas.

In time, AI might pose grave dangers we can’t now imagine. It could require more specialized regulation. That has turned out to be true of the internet too, and there are ways in which cyberspace is now underregulated (children’s access to social media is an example). But had policy makers acted aggressively on their predictions of such risks 30 years ago, they would have been spectacularly wrong, failed to avert the problems we now face online, and denied us many benefits along the way. Addressing problems as they presented themselves in already regulated domains and adapting as warranted was a far wiser course, as it would be now.

That isn’t because we know where artificial intelligence will take us, but precisely because we don’t. Worrying properly about a novel technology with tremendous promise and a real potential for creating novel risks requires clarity and humility. The AI debates could use more of both.

Mr. Levin is director of social, cultural and constitutional studies at the American Enterprise Institute and editor of National Affairs.


