The sudden popularity of generative AI has re-generated a popular pre-pandemic preoccupation: How many jobs will AI destroy?
Some forecasters predicted a decade ago that almost half of U.S. jobs could be replaced by AI by 2023 (!) or, at most, by 2033, mainly impacting low-skill jobs (e.g., no more truck drivers because we will have self-driving trucks). Other crystal-ball observers argued that, in contrast to previous waves of automation, we are entering a new era in which the most affected will be highly skilled knowledge workers.
The tight labor market of recent years has somewhat muted these dire predictions. The widespread excitement about generative AI, however, is bringing back the anxiety about jobs, especially creative jobs.
According to the Harris Poll, “most workers are wary of generative AI,” and 50% don’t trust the technology. The Atlantic tells us confidently: “No technology in modern memory has caused mass job loss among highly educated workers. Will generative AI be an exception?… While it is difficult to predict the exact extent of this trend, it is clear that AI will have a significant impact on the job market for college-educated workers.”
The only thing true in this “clear” prediction is that it’s difficult to make predictions, especially about the future. But it is certainly possible to assess predictions about the future of work. All you need is knowledge of historical facts and an analysis of past failed predictions.
Let’s start with history. Already in 1843, with the invention of the earliest proto-computer, Charles Babbage’s Analytical Engine, Ada Lovelace saw its potential as a symbol manipulator. She mused about the possibilities for a much broader range of automation beyond speeding up calculation, including creative tasks such as composing music.
Lovelace made clear, however, that what came to be known as “artificial intelligence” depends on human intelligence and does not replace it. Rather, it augments it, sometimes throwing new light on an established body of knowledge but never developing it from scratch: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”
More generally, Lovelace argued that “In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable.”
In this she anticipated the oft-repeated Silicon Valley maxim (known as Amara’s Law): “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”
That “law” worked overtime with the emergence in the 1940s of “giant brains,” as the first electronic, digital computers were called at the time, a century after Lovelace’s mother referred to the Analytical Engine as “Babbage’s thinking machine.” Not a “working machine” or “manual labor machine,” but a “thinking,” “brain”-like machine, a knowledge worker potentially replacing human knowledge workers.
In his 1947 talk at the London Mathematical Society about the ACE, one of these early “giant brains,” Alan Turing estimated that it could do the work of 10,000 “computers.” This was one of the first authoritative assessments of the possible impact of modern computers on jobs—not manual jobs but high-skill jobs. The “computers” were the highly educated humans performing calculations, who were entirely replaced by modern computers by the 1960s. Modern computers, however, created new job categories such as “programmers” and “systems analysts.”
Popularizing the term “automation,” John Diebold wrote in a 1953 Harvard Business Review article: “The effect of automation on the information handling functions of business will probably be more spectacular and far-reaching [than its impact on factory workers]. Repetitive office work, when in sufficient bulk as in insurance companies, will be put on at least a partially automatic basis. Certain functions, such as filing and statistical analysis on the lower management levels, will be performed by machines. Yet very few offices will be entirely automatic. Much day-to-day work—answering correspondence and the like—will have to be done by human beings.”
Diebold established one of the first consulting companies advising businesses on how to adopt modern computers. An entirely new industry and a new breed of knowledge workers followed in his path.
Computers have altered, augmented and replaced knowledge work since the 1950s. How about that quintessential knowledge work, that of a lawyer? Sure, today we have e-discovery leading to “armies of expensive lawyers, replaced by cheaper software.” But how many expensive lawyers were “replaced” by the advent of LexisNexis in the 1970s?
Diebold predicted that the “day-to-day work” would not be automated; what he probably had in mind was that secretaries would keep their jobs. The fact that they didn’t shows how difficult it is to predict what impact computers will have (or not have) on knowledge work. Secretaries were not replaced by computers, but by managers who accepted the new social norm that it was not beneath them to “answer correspondence and the like,” as long as they did it with the new status symbol—the personal computer.
Failed predictions can reveal a lot about why yesterday’s futures did not materialize. I have in my files a great example, a report published in 1976 by the Long Range Planning Service of the Stanford Research Institute (SRI), titled “Office of the Future.”
The author of the report was a Senior Industrial Economist at SRI’s Electronics Industries Research Group, and a “recognized authority on the subject of business automation.” His bio blurb indicates that he “also worked closely with two of the Institute’s engineering laboratories in developing his thinking for this study. The Augmentation Research Center has been putting the office of the future to practical test for almost ten years… Several Information Science Laboratory personnel have been working with state-of-the-art equipment and systems that are the forerunners of tomorrow’s products. The author was able to tap this expertise to gain a balanced picture of the problems and opportunities facing office automation.”
And what was the result of all this research and analysis? The manager of 1985, the report predicted, will not have a personal secretary. Instead, he (decidedly not she) will be assisted, along with other managers, by a centralized pool of assistants (decidedly and exclusively, according to the report, of the female persuasion). He will contact the “administrative support center” whenever he needs to dictate a memo to a “word processing specialist,” find a document (helped by an “information storage/retrieval specialist”), or rely on an “administrative support specialist” to help him make decisions.
Of particular interest is the report’s discussion of the sociological factors driving the transition to the “office of the future.” Forecasters often leave out of their analysis the motivations and aspirations of the humans involved, which have an annoying tendency not to cooperate with the forecast. But this report does consider sociological factors, in addition to organizational, economic, and technological trends. And it’s worth quoting at length what it says on the subject:
“The major sociological factor contributing to change in the business office is ‘women’s liberation.’ Working women are demanding and receiving increased responsibility, fulfillment, and opportunities for advancement. The secretarial position as it exists today is under fire because it usually lacks responsibility and advancement potential. The normal (and intellectually unchallenging) requirements of taking dictation, typing, filing, photocopying, and telephone handling leave little time for the secretary to take on new and more demanding tasks. The responsibility level of many secretaries remains fixed throughout their working careers. These factors can negatively affect the secretary’s motivation and hence productivity. In the automated office of the future, repetitious and dull work is expected to be handled by personnel with minimal education and training. Secretaries will, in effect, become administrative specialists, relieving the manager they support of a considerable volume of work.”
Despite the women’s liberation movement of his day, the author could not see beyond the creation of a two-tier system in which some women would continue to perform dull and unchallenging tasks, while other women would be “liberated” into a fulfilling new job category of “administrative support specialist.” In this 1976 forecast, there are no women managers.
But this is not the only sociological factor the report missed. The most interesting sociological revolution of the office of the 1980s – and one missing from most (all?) accounts of the PC revolution – is what managers (male and female) did with their new word processing, communicating, calculating machine. They took over some of the “dull” secretarial tasks that no self-respecting manager would deign to perform before the 1980s.
This was the real revolution: The typing of memos (later emails), the filing of documents, the recording, tabulating, and calculating. In short, a large part of the management of office information, previously exclusively in the hands of secretaries, became in the 1980s (and progressively more so in the 1990s and beyond) an integral part of the work of business executives.
This was very difficult, maybe impossible, to predict. It was a question of status. No manager would type before the 1980s because it was perceived as work that was not commensurate with his status. Many managers started to type in the 1980s because now they could do it with a new “cool” tool, the PC, which conferred on them the leading-edge, high-status image of this new technology. What mattered was that you were important enough to have one of these cool things, not that you performed with it tasks that were considered beneath you just a few years before.
What was easier to predict was the advent of the PC itself. And the SRI report missed this one, too, even though it was aware of the technological trajectory: “Computer technology that in 1955 cost $1 million, was only marginally reliable, and filled a room, is now available for under $25,000 and the size of a desk. By 1985, the same computer capability will cost less than $1000 and fit into a briefcase.”
But the author of the SRI report could only see a continuation of the centralized computing of his day. The report’s 1985 fictional manager views documents on his “video display terminal” and the centralized (and specialized) word processing system of 1976 continues to rule the office ten years later.
This was a failure to predict how the computer that would “fit into a briefcase” would become personal, i.e., would first take the place of the “video display terminal” and then go beyond it as a personal information management tool. The report also failed to predict the ensuing organizational shift in which distributed computing (and eventually, in-your-pocket computing) replaced or supplemented centralized computing.
Yes, predicting is hard to do. But compare forecasters and analysts with another human subspecies: entrepreneurs. Entrepreneurs don’t predict the future; they make it happen.
A year before the SRI report was published, in January 1975, Popular Electronics published a cover story on the “first minicomputer kit,” the Altair 8800. Paul Allen and Bill Gates, and Steve Jobs and Steve Wozniak, founded their companies around the time the SRI report was published not because they read reports about the office of the future, but because they simply imagined it.
For the last ten years, and especially since 2017, creative human intelligence has advanced the state of generative AI. Human intelligence imagined new deep learning architectures, new methods for statistical analysis of text, and new approaches to training AI models on the vast Web “literature.” As Lovelace observed, human imagination and creativity are not and cannot be components of artificial intelligence.
So regard the current projections of how many and what type of jobs will be destroyed (or created) by artificial intelligence with healthy skepticism. No doubt, as in the past, many low-skill and high-skill occupations will be affected by the increased scope of what computers can do. But many current occupations will thrive and new ones will be created, as—and that’s a safe prediction—the way work is done will continue to change, driven by our creativity and imagination.