Joanna Andreasson/DALL-E4

The term wicked problem has become a standard way for policy analysts to describe a social issue whose solution is inherently elusive. Wicked problems involve many causal factors and complex interdependencies, and there is no way to test every combination of plausible interventions. Often, the problem itself cannot be articulated in a straightforward, agreed-upon way. Classic examples of wicked problems include climate change, substance abuse, international relations, health care systems, education systems, and economic performance. No matter how far computer science advances, some social problems will remain wicked.

The latest developments in artificial intelligence represent an enormous advance in computer science. Could that technological advance give bureaucrats the tool they have been missing to allow them to plan a more efficient economy? Many advocates of central planning seem to think so. Their line of thinking appears to be:

  1. Chatbots have absorbed an enormous amount of data.

  2. Large amounts of data produce knowledge.

  3. Knowledge will enable computers to plan the economy.

These assumptions are wrong. Chatbots have been trained to speak using large volumes of text, but they have not absorbed the knowledge contained in the text. Even if they had, there is knowledge that is critical for economic operations that is not available to a central planner or a computer.

The Promise of Pattern Matching

The new chatbots are trained on an enormous amount of text. But they have not absorbed this data in the sense of understanding the meaning of the text. Instead, they have found patterns in the data that enable them to write coherent paragraphs in response to queries.

Loosely speaking, there are two approaches to embedding skills and knowledge into computer software. One approach is to hard-code the sort of heuristics that a human being is able to articulate. In chess, this would mean explicitly coding formulas that reflect how people would weigh various factors in order to choose a move. In loan underwriting, it would mean spelling out how an experienced loan officer would regard a borrower’s history of late credit-card payments in order to decide whether to make a new loan.

The other approach is pattern matching. In chess, that would mean giving the computer a large database of games that have been played, so that it can identify and distinguish positions that tend to result in wins. When the computer then plays the game, it would select moves that create positions that fit a winning pattern. In loan underwriting, pattern matching would mean looking at a large historic sample of approved loans to find characteristics that distinguish the borrowers who subsequently repaid the money from those who subsequently defaulted. It would then recommend approving loans where the credit report resembles the pattern of a borrower who is likely to repay.
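To make the contrast concrete, here is a minimal Python sketch of the two underwriting approaches. The thresholds, features, and historical data are hypothetical, and the pattern-matching side uses an off-the-shelf logistic regression simply as a stand-in for any statistical model trained on past loans.

```python
# A minimal sketch (my own illustration, not from the article) contrasting
# hard-coded heuristics with pattern matching in loan underwriting.
# All thresholds, features, and historical data below are hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Approach 1: explicit heuristics a loan officer could articulate.
def heuristic_approve(late_payments: int, debt_to_income: float) -> bool:
    # Reject applicants with many late payments or a high debt-to-income ratio.
    return late_payments <= 2 and debt_to_income < 0.4

# Approach 2: pattern matching on a historical sample of loans.
# Each row: [late_payments, debt_to_income]; label 1 = repaid, 0 = defaulted.
X_history = np.array([[0, 0.20], [1, 0.30], [2, 0.35],
                      [4, 0.60], [5, 0.70], [6, 0.50]])
y_history = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_history, y_history)

def pattern_approve(late_payments: int, debt_to_income: float) -> bool:
    # Approve when the applicant resembles past borrowers who repaid.
    prob_repay = model.predict_proba(np.array([[late_payments, debt_to_income]]))[0, 1]
    return prob_repay > 0.5

print(heuristic_approve(1, 0.30), pattern_approve(1, 0.30))
```

Either routine produces an approve-or-reject decision, but the first encodes rules a human wrote down, while the second merely extrapolates from whatever regularities appear in the historical sample.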

Human beings use both pattern matching and explicit heuristics. An experienced chess player will not try to calculate the advantages and disadvantages of every single possible move in a position. Instead, the player will immediately recognize a pattern in the position, and this will intuitively suggest a few possible moves. The player will then make a more careful analysis to choose from among those moves. In speed chess, a player relies more on pattern recognition and less on heuristics and careful thought.

If you are on a hike, you may instinctively flinch when you see something that resembles the pattern of a snake. But then you will stop and reason about what you see. If it is not moving, you may conclude that it is merely a stick.

In American football, the quarterback may call a play based on careful reasoning about what the defense is likely to do in a situation. But once the play starts, the quarterback has to make instantaneous decisions based on what his instinct tells him about what the defense is doing. For these decisions, the quarterback is pattern matching.

We tend to pride ourselves on our ability to use heuristics and careful reasoning. When we examine our own thought processes, we do not think of ourselves as mere pattern matchers. But the latest advances in computer science rely heavily on pattern matching. ChatGPT has studied an enormous corpus of text in order to find patterns in how words are used in relation to one another, without having been given any instruction about what the words mean. Many experts, who assumed computers would have to be programmed to know the meaning of words, are surprised that this pattern matching works as well as it does. When you type a comment or a question into ChatGPT, not only will it respond by putting words in proper order; the response is usually meaningful, relevant, and appropriate.

It is almost mysterious how this happens. To a chatbot, a word is a mere “token,” like a tiny square of cloth with a particular color. All it knows is which squares of cloth tend to appear near each other in the patterns that are in its training dataset. One at a time, it places squares of cloth in a sequence, and when the sequence is read as words it makes sense to a human reader.
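As a rough illustration of what "patterns among tokens" means, consider a toy bigram model: it records only which tokens tend to follow which in its training text and then emits tokens one at a time. Real chatbots use vastly more sophisticated models, so this is an analogy rather than a description of how ChatGPT works; the training sentence and function names are invented for the example.

```python
# A toy bigram model: pure pattern matching over tokens, with no notion
# of what any word means. (An analogy only; real chatbots are far more complex.)

from collections import defaultdict, Counter
import random

training_text = "the cat sat on the mat the dog sat on the rug".split()

# Count which token tends to follow each token in the training data.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 6) -> str:
    # Repeatedly emit a likely next token, based solely on co-occurrence counts.
    out = [start]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break
        tokens, counts = zip(*candidates.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat the"
```

The model never learns what "cat" or "mat" refers to; it reproduces sequences that fit the patterns it has seen, which is the point of the cloth-square analogy.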

Pattern matching also works with images. You can give a computer a prompt to draw an image; based on the patterns it finds, it will produce an image that follows the instructions in the prompt. The same pattern-matching technique can be applied to working with computer code, sounds, and video.

A Natural Language Revolution

These new tools revolutionize the way that people and computers communicate, because now computers can respond to our language. Before, we had to learn the computer’s language. The first computers only understood “machine language,” consisting of sets of zeroes and ones. An improvement was provided by “assembly language.” Beyond assembly language were “programming languages,” such as COBOL, FORTRAN, and BASIC.

About 40 years ago, most of us began communicating via the “user interface.” We learned to manipulate a cursor and click on a mouse. Later we learned to use gestures on a phone.

With ChatGPT, we can communicate with a computer using “natural language.” We type something in English, and we get a response in English. This is a superpower, and we are just starting to learn how to take advantage of it.

I wanted to be able to judge essays based on how well they address differing points of view. Can a computer do this for me? If I had to design, code, and test a program to do so, it would take months of work. But after spending a few hours experimenting with and refining prompts, I can get ChatGPT to perform this essay-grading task.
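Here is a hedged sketch of what that prompt-based grading can look like, assuming the OpenAI Python client; the rubric wording, scoring scale, and model name are illustrative guesses, not the exact prompts I settled on after experimenting.

```python
# An illustrative sketch of prompt-based essay grading.
# The rubric, scale, and model name are assumptions for the example.
# Requires an OpenAI API key in the environment.

from openai import OpenAI

client = OpenAI()

GRADING_PROMPT = (
    "You are grading a student essay. Score it from 1 to 10 on how well it "
    "addresses differing points of view: does it state the strongest opposing "
    "arguments fairly and respond to them? Give the score, then two sentences "
    "of justification."
)

def grade_essay(essay_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any capable chat model would do
        messages=[
            {"role": "system", "content": GRADING_PROMPT},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

print(grade_essay("Paste the essay text here..."))
```

Most of the work is not in the code but in iterating on the rubric until the chatbot's judgments line up with your own.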

Pattern matching also lets computers respond to and mimic the emotional content of human writing. Chatbots do not merely retrieve information—for that purpose they may actually be less reliable than a Google search. They respond in paragraphs, using human phrasing and tone.

Because they communicate in natural language, ChatGPT and its relatives can do impersonations. Economist and public intellectual Tyler Cowen used a chatbot to conduct an “interview” with Jonathan Swift, the early 18th century satirist. Prompted to impersonate Swift, the chatbot responded to questions from Cowen. For example, Cowen asked about Swift’s attitude toward the Church of Ireland. The chatbot responded, in part: “Ah, dear reader, it is indeed true that I, Jonathan Swift, harbored a certain degree of skepticism towards religious doctrine. Yet, one must not confuse this skepticism with a lack of faith or an aversion to the church altogether. Indeed, I was ordained a priest in the Church of Ireland, and throughout my life, I served in various capacities within its structure.

“Now, you may wonder how a man of my disposition could reconcile these seemingly contradictory stances,” it continued. “The answer, I daresay, lies in the understanding that the church and the state must work in concert to maintain order and stability within society. The Church of Ireland, as an established state church, served as an important institution that promoted social cohesion and provided moral guidance to the people.”

Chatbots can impersonate celebrities, therapists, companions, and romantic partners. They are able to connect to us emotionally to an extent that far exceeds any previous computer programs. The implications of this advance may be profound.

A few decades ago, math teacher Jaime Escalante became famous for motivating teenagers in a high-poverty school to learn calculus at an Advanced Placement level. His demanding, confrontational style and striking accomplishments were immortalized in a film, Stand and Deliver. Imagine being able to clone Escalante and put him in classrooms across the country.

Or consider the problem of training a robot. Today that involves working in computer code, but within a few years we should be able to communicate with robots using natural language.

Customer support calls are another area with obvious potential. All of us have experienced the frustration of menu systems (“If you are calling about X, press 1”). Thankfully, those systems may soon be obsolete. Instead, a chatbot can quickly catch on to the customer’s question or respond sympathetically to the customer’s complaint.

Some enthusiasts see chatbots becoming lifelong companions. Futurist Peter Diamandis has predicted that “you’ll ultimately give your personal AI assistant access to your phone calls, emails, conversations, cameras…every aspect, of every moment, of your day. Our personal AIs will serve (and we may become dependent upon them) as our cognitive collaborators, our on-demand researcher, our consigliere, our coaches…giving us advice on any and all topics that require unbiased wisdom.”

Venture capitalist Marc Andreessen has argued similarly that within a few years every child will grow up with a personal chatbot as a lifelong partner. Your personal chatbot would have the ability to understand your abilities and desires. It would be able to motivate you, coach you, train you, and serve you.

It is too early to know which of these forecasts will actually pan out and which will fail to materialize, let alone what unexpected uses will appear out of nowhere. This is reminiscent of the World Wide Web circa 1995, when many of us anticipated rapid disruptions in education or the real estate market that have yet to occur. Meanwhile, nobody was predicting real-time driving directions or podcasting.

Limited Knowledge

Chatbots use pattern matching to provide coherent, relevant responses. But that does not mean that they have encyclopedic knowledge. The answers that a chatbot gives are not necessarily wise. They are not even necessarily true.

I have written several papers on the 2008 financial crisis, in which I make a case for what I believe were the most important causal factors. But when I asked ChatGPT to summarize my views on the crisis, it included explanations that are favored by other economists but not me. That is because the chatbot is trained to identify word patterns without knowing what the words mean.

Some knowledge is not available in any corpus of data. For example, we cannot predict how an innovation will play out.

As of this writing, Apple has introduced a revolutionary product it calls the Vision Pro. No one knows exactly how this product will be used, or whether it will be successful. This knowledge will emerge over time, with the market providing the ultimate judgment. As economist Friedrich Hayek wrote, market competition is a discovery procedure. Even if a computer possessed all existing knowledge, it could not replace this discovery procedure.

Central Planning Still Won’t Work

Economic organization is a wicked problem. Your intuition might be that the best approach would be for a department of experts to determine what goods and services get produced and how they are distributed. This is known as central planning, and it has not worked well in reality. The Soviet Union fell in part because its centrally planned economy could not keep up with the West.

Some advocates of central planning have claimed that computers could provide the solution. In a 2017 Financial Times article headlined “The Big Data revolution can revive the planned economy,” columnist John Thornhill cited entrepreneur Jack Ma, among others, claiming that eventually a planned economy will be possible. Those with this viewpoint see central planning as an information-processing problem, and computers are now capable of handling much more information than are individual human beings. Might they have a point?

F.A. Hayek made a compelling counterargument. In a famous paper called “The Use of Knowledge in Society,” first published in 1945, Hayek argued that some information is tacit, meaning that it will never be articulated in a form that can be input to a computer. He also argued that some information is dispersed, meaning that it is known only in small part to any one person. Given the decentralized character of information, a market system generates prices, which in turn generate the knowledge necessary to efficiently organize an economy.

A central computer is not going to know how you as an individual would trade off between two goods. You may not be able to articulate your preferences yourself, until you are confronted with a choice at market prices. The computer is not going to know how consumers will respond to a new product or service, and it is not going to know how a new invention might change production patterns. The trial-and-error process of markets, using prices, profits, and losses, addresses these challenges.

Economists have a saying that “all costs are opportunity costs.” That is, the cost of any good is the cost of what you have to forgo in order to obtain it. In other words, cost is not inherent in the nature of the good itself or how it is produced. It is impossible to know the cost of a good until it is traded in the market. If central planners do away with the market, then they will not have the information needed to calculate costs and make good decisions. Forced to use guesswork, planners will inevitably misallocate resources.

In a market system, bad decisions result in losses for firms, forcing them to adapt. Without the signals provided by prices, profits, and losses, a central planner’s computer will not even be aware of the mistakes that it makes.

Learning From Simulations

The problem of organizing an economy is too wicked to be solved by computers, whether they use pattern matching or other methods. But that does not mean that advances in computer science will be of no help in improving economic policy.

New software tools can be used to create complex simulations. The tools that gave us chatbots could be used to create thousands of synthetic economic “characters.” We could have them interact according to rules and heuristics designed to mimic various economic policies and institutions, and we could compare how different economic policies affect the outcomes of these simulations.

Among economists, this technique is known as “agent-based modeling.” So far, it has been of only limited value, because it is difficult to create agents that vary along multiple dimensions. It may improve if the latest tools can be used to create a richer set of economic characters than modelers have used in the past. Still, such improvements would be incremental, not revolutionary. They would not permit us to hand off the resource allocation problem to a central computer.
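As a rough sketch of what such a simulation involves, the Python toy below creates a thousand synthetic buyers and sellers with private reservation prices, then compares the number of trades under a fixed, centrally chosen price with the number under a crude price-adjustment process. All of the numbers, behavioral rules, and policy choices are invented for illustration; a serious agent-based model would be far richer.

```python
# A toy agent-based simulation (illustrative assumptions throughout).
# Synthetic buyers and sellers have private reservation prices that no
# planner observes; we compare trade volume under a fixed "planned" price
# with trade volume after a crude price-adjustment process.

import random

random.seed(0)

buyers = [random.uniform(1, 10) for _ in range(1000)]   # willingness to pay
sellers = [random.uniform(1, 10) for _ in range(1000)]  # cost of production

def trades_at(price: float) -> int:
    # Trades are limited by the short side of the market.
    demand = sum(1 for v in buyers if v >= price)
    supply = sum(1 for c in sellers if c <= price)
    return min(demand, supply)

# Policy A: a centrally chosen price, set without seeing reservation prices.
planned_price = 3.0

# Policy B: a price that adjusts as excess demand or excess supply appears.
price = 5.0
for _ in range(200):
    demand = sum(1 for v in buyers if v >= price)
    supply = sum(1 for c in sellers if c <= price)
    price += 0.5 * (demand - supply) / len(buyers)  # excess demand raises price

print("trades at the planned price: ", trades_at(planned_price))
print("trades at the adjusted price:", trades_at(price))
```

With these made-up numbers, the fixed price leaves most willing buyers unserved, while the adjusted price roughly clears the market. Nothing about the exercise rescues central planning; it simply lets us watch how different rules play out across many heterogeneous agents.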

The latest techniques for using large datasets and pattern matching offer new and exciting capabilities. But these techniques alone will not enable us to solve society’s wicked problems.
