Those rambunctious monkeys.
There is a quite famous thought experiment that you might have heard of involving monkeys. The altogether intriguing contrivance is often used by those who want to make a particularly honed point.
Here’s how the plot goes.
Imagine that a monkey is typing on a typewriter. If the monkey keeps typing for an infinite amount of time, and assuming that the monkey is pressing keys purely at random, the odds are that it will inevitably type out the entire works of Shakespeare.
The gist seemingly is that by random chance alone it is feasible to sometimes get an intelligible answer. We all tend to agree that the works of Shakespeare are a tremendous exhibition of intelligible writing and reasoning. Thus, anything or any means of producing Shakespeare’s prized words would seem amazingly impressive, though, at the same time, we would be sorely let down that it came about not by intelligence per se but by mere random luck.
Some are nowadays trying to compare this monkey-laden metaphor to the latest in Artificial Intelligence (AI).
You likely know that the hottest form of AI these days is Generative AI, which is exemplified via a widely and wildly popular AI app known as ChatGPT made by OpenAI. I’ll be explaining more about generative AI and ChatGPT in a moment. For right now, just know that this is a text-to-text or text-to-essay AI app that can produce an essay for you based on an entered prompt of your choosing.
The claimed connection relating to the legendary typing monkey is that supposedly the impressive, outputted essays produced by generative AI that appear to be completely fluent are no more astounding than the accomplishments of the typing primate. If you accept the premise that a monkey randomly typing can generate the works of Shakespeare, and if you are willing to concede that ChatGPT and other generative AI are ostensibly the same, you must ergo conclude that generative AI is not at all especially noteworthy. It is just randomness fooling us.
Well, this might seem like a compelling case, but we need to unpack it. A mindful unpacking will showcase that the comparison between the two is misleading and plainly wrong.
Stop making the comparison. For those who insist on continuing to make a comparison, please at least do so in a prudent and aboveboard fashion.
Those who simply toss around the comparison are doing a disservice to generative AI. And, the more vital concern is that this is misleading to the general public and society at large. I suppose we could also add that they are doing a disservice to the hard-working monkeys too, or perhaps undermining the value of the infinite typing monkeys theorem. Be fair. Be kind. Be truthful.
Before we get into a deep dive on this, there is an insider joke that leverages the typing monkey notion. You might like it.
The cynical bit of humor is often traced to personal correspondence during the initial heyday of the Internet. This is when the Internet was edging out of being a somber, serious online realm and into the unhinged territory of being noisy, boisterous, and unruly as the number of people using the Internet rose demonstrably.
The humorous anecdote says that if monkeys typing on typewriters would ultimately produce, or shall we say reproduce, the entire body of work by Shakespeare, then thanks to the advent of the Internet we now have proof that this must decidedly not be true.
Are you laughing?
Some construe this to be an uproariously funny remark.
The joke is a putdown on how the Internet, with all its frothing and spewing postings, has scarcely risen to the level of producing Shakespeare. It is a sharply cutting remark highlighting that the Internet presumably has not elevated discourse but instead degraded it. Many assumed that the Internet would be a boon to intelligent interaction, allowing for thought-provoking discussions across the globe. It seems we haven’t necessarily witnessed this on as large a scale as hoped for.
Of course, we would be remiss in taking the joke as a true harbinger of what the Internet has wrought. There are plenty of great reveals and noteworthy values associated with the Internet. The joke is an embellishment or overstatement. Nonetheless, the point is well-taken that we need to be watchful of insidious and gutter content, while aiming toward finding and uplifting societally inspiring works via the use of the Internet. For my coverage about how AI can both help and yet in a dual-use fashion undercut societal discourse via adverse postings on the Internet, see my discussion at the link here.
In today’s column, I will be addressing the significant differences between generative AI and the classic tale of the typing monkeys. I’ll explain where the comparison falls short. You will undoubtedly end up knowing more about the typing monkeys theorem, along with understanding more concretely how generative AI works. I will be occasionally referring to ChatGPT since it is the 600-pound gorilla of generative AI (pun intended), though do keep in mind that there are plenty of other generative AI apps and they generally are based on the same overall principles.
Meanwhile, you might be wondering what in fact generative AI is.
Let’s first cover the fundamentals of generative AI and then we can take a close look at the typing monkeys theorem comparisons.
Into all of this comes a slew of AI Ethics and AI Law considerations.
Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.
The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.
I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.
Fundamentals Of Generative AI
The most widely known instance of generative AI is represented by an AI app named ChatGPT. ChatGPT sprang into the public consciousness back in November when it was released by the AI research firm OpenAI. Ever since, ChatGPT has garnered outsized headlines and astonishingly exceeded its allotted fifteen minutes of fame.
I’m guessing you’ve probably heard of ChatGPT or maybe even know someone who has used it.
ChatGPT is considered a generative AI application because it takes as input some text from a user and then generates or produces an output that consists of an essay. The AI is a text-to-text generator, though I describe the AI as being a text-to-essay generator since that more readily clarifies what it is commonly used for. You can use generative AI to compose lengthy compositions or you can get it to proffer rather short pithy comments. It’s all at your bidding.
All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt. The composed text will seem as though the essay was written by the human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln”, the generative AI will provide you with an essay about Lincoln. There are other modes of generative AI, such as text-to-art and text-to-video. I’ll be focusing herein on the text-to-text variation.
Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.
Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining thousands and millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what has been used in the training set.
There are numerous concerns about generative AI.
One crucial downside is that the essays produced by a generative-based AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).
Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.
There have been some zany outsized claims on social media about Generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of overblown claims. You might politely say that some people are overstating what today’s AI can actually do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in being able to take action.
Do not anthropomorphize AI.
Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.
One final forewarning for now.
Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.
Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicion. Do not believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his private jet, you would undoubtedly know that this is malarkey. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.
A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.
We are ready to move into the next stage of this elucidation.
What Is Happening With Those Typing Monkeys
Now that you have a semblance of what generative AI is, we can explore the comparison to the typing monkeys. In a sense, I am going to take apart the monkey typing theorem step by step. I do so to illuminate the underpinnings. We can then use the revealed elements to do a comparison to generative AI.
The typing monkeys theorem or hypothesis (often formally called the infinite monkey theorem) contains a core set of elements:
- a) Who or What. The identified creature or actor doing the typing
- b) Number And Longevity. How many of them there are and their longevity status
- c) Symbols Outputted. Production of letters and known symbols via a rudimentary device
- d) Time. Length of time performing the task
- e) Intelligence. What savviness do they bring to the performance of the task
- f) Targeted Output. The targeted output of what we want them to produce
Let’s first examine the typing monkeys.
You might recall that I mentioned at the opening of this discussion that we were to imagine that a monkey was typing on a typewriter. I referred to the basic concepts as entailing just one monkey doing so. We can adjust that facet.
Here are ways that the situation is oftentimes portrayed:
- One solitary monkey of an everyday mortal existence
- A thousand such monkeys
- A million such monkeys
- An infinite number of such monkeys
- A solitary monkey that is immortal
- Some number of immortal monkeys
Notice that rather than having only one monkey, we might recast the thought experiment and have a multitude of monkeys that are working presumably simultaneously. Furthermore, another adjustable aspect is whether the monkeys are mortal or immortal. I’ll dig further into this momentarily.
We also need to include the factor of time as a crucial ingredient.
Usually, the time factor is one of these two considerations:
- Finite period of time
- Infinite time
Another somewhat unspoken underlying element is that monkeys are being used in this case because we consider them to be relatively unthinking. They do not know how to read or write. They are not able to exhibit intelligence in the same manner that we associate intelligence with human capacities.
This is somewhat insulting when you give it a modicum of thought. I think we can all reasonably agree that monkeys are amazingly smart, at least for what they can accomplish within their thinking limits. I would dare say that we ascribe greater thinking prowess to monkeys than we do to many other animals. There are plenty of studious research experiments that have been done to showcase how mentally sharp monkeys can be.
In any case, for purposes of the metaphor, the assumption is that monkeys are not able to think to a degree that they could of their own accord conceive of the works of Shakespeare. Whereas the classic movie Planet Of The Apes tried to forewarn us that this might be a faulty assumption, we are in any case going with it in today’s world.
If we substituted ants for the monkeys, the metaphor somewhat dissipates. We don’t conceive of ants as being able to type on typewriters. We could try substituting dogs or cats since they could almost type on a typewriter, but in the end, the use of monkeys is best since they can type in a manner reminiscent of humans typing. They have the appropriate limbs and body structure to perform the task at hand. They are also viewed as mentally capable of typing, though we assume they do not know what they are typing.
As an aside, there have been many research experiments involving monkeys and their recognition of symbols. Included in these various studies have been setups that had the monkeys typing on typewriters or similar devices. If done appropriately, this can be meaningful in the pursuit of useful insights about intelligence and the arising of intelligent behaviors.
Regrettably, the research entailing typing on typewriters is at times not done in a particularly serious vein. The notion has been that monkeys were physically given typewriters and encouraged to type on a whim, or sometimes for treats such as food. At times, the approach used has been nothing more than a feeble wink-wink nod to the famous or infamous monkeys typing theorem, rather than a bona fide foundational research pursuit. I do not find such antics amusing or proper. Unless this is done in a rigorous experimental manner, it is nothing more than a façade.
A slight twist that is more agreeable consists of setting up computer-based simulations that purport to perform what monkeys might do in these circumstances. The computer is used to simulate these aspects. No actual monkeys are involved. Some have even gone so far as to do a bit of so-called citizen science by parceling out the simulation to anyone willing to allow their laptop or computer to be used for these efforts. Do not fall for fake scams that insidiously claim they are doing this for science when the reality is they are attempting to infect your computer with a computer virus. Be wary.
Back to the matter at hand.
One aspect that also is instrumental to the circumstance is that typewriters are being used in this typing monkey hypothetical.
Typewriters matter because that’s how we get the production of letters, which can then be formed into words, which can then be formed into stories. The same or similar notion of producing lots of letters does not necessarily require typing. Indeed, there are variants of this metaphor that go back to the days of Aristotle, and there obviously weren’t typewriters around then.
We could change the metaphor and refer to modern-day keyboards and computers. We could say that the monkeys are banging away on a laptop or maybe even on a smartphone. The beauty of referring to typewriters is that we associate typewriters as being non-computerized and therefore they do not aid in the typing process itself. This is crucial to the contrivance involved.
Lastly, we are usually presented with the aspect that the works of Shakespeare are to be produced. We could readily swap Shakespeare out for any other well-known author. It could be that we want to know whether the monkeys can produce the entire works of Charles Dickens, Jane Austen, Ernest Hemingway, and so on. It doesn’t especially matter. The essence is that the writing has to be something that we all know and that we acknowledge to be outstanding writing.
We can readily substitute any writing that we want to set as the target.
The convenience of referring to Shakespeare is that his works are construed as at the topmost or pinnacle of human writing. We could instead find an essay written by a first grader and use that as the target. Believe it or not, the same precepts still apply. People would probably not find it inspiring that the monkeys were able to reproduce the writing of a child. To keep things engaging, the writing has to be of the highest caliber.
A variant of the targeted output would be to refer to a specific work of Shakespeare rather than his entire body of work. As you’ll soon see, it makes little difference to the core essence of the matter. I would guess that many people tend to mention Hamlet as part of the monkey typing theorem, perhaps since this happens to be his longest play, amounting to a reported 29,551 words in size (composed of around 130,000 letters or so).
Any of his plays would suffice.
The whole contrivance hinges on the various laws of probability. You might have learned about the nuances of probabilities in those grueling classes on statistics and mathematics that you took in school.
Let’s use the word “Hamlet” to see what it takes to randomly produce those six letters in that specific sequence of H-a-m-l-e-t.
The easiest way to arithmetically calculate this consists of assuming that we have an easy round number for the count of available keys on a typewriter. Suppose we have a typewriter that has 50 distinct keys, each equally likely to be pressed. Each key represents a particular symbol, such as a letter of the usual English alphabet. Assume too that we haven’t rigged the situation by arranging the H-a-m-l-e-t keys in a way that induces typing those specific keys more often than any others.
Each key is pressed completely independently of whatever key was pressed before it. Therefore, out of the 50 keys, the chance of any particular key being pressed is 1 out of 50. The same holds true for all of the keys and throughout the entirety of the typing effort. The probability of a single specified key being pressed is thus 1 out of 50, or 1/50.
The chances then of typing the letter “H” is 1/50, and the chances of typing the letter “a” are 1/50, and the chances of typing the letter “m” are 1/50, and so on.
- The probability of “H” being typed is 1/50.
- The probability of “a” being typed is 1/50.
- The probability of “m” being typed is 1/50.
- The probability of “l” being typed is 1/50.
- The probability of “e” being typed is 1/50.
- The probability of “t” being typed is 1/50.
A standard rule or law of probability states that if two or more events are fully statistically independent of each other, we can calculate the chances of all of them occurring by simply multiplying their probabilities together. We can do so for these six letters.
We have this calculation: “H” (1/50) x “a” (1/50) x “m” (1/50) x “l” (1/50) x “e” (1/50) x “t” (1/50)
That is: (1/50) x (1/50) x (1/50) x (1/50) x (1/50) x (1/50)
The minuscule result comes to 1/15,625,000,000.
The chances of typing the six-letter word “Hamlet” are thus roughly one in 15.6 billion, all else being equal.
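To make the arithmetic concrete, here is a minimal Python sketch of the same calculation (the 50-key typewriter and the six-letter target are the assumptions from the discussion above):

```python
# Probability that six random keystrokes on a 50-key typewriter spell
# exactly "Hamlet", following the multiplication rule for independent events.
from fractions import Fraction

NUM_KEYS = 50            # assumed count of equally likely keys
target = "Hamlet"

# Independent keystrokes: the per-key probabilities multiply.
p = Fraction(1, NUM_KEYS) ** len(target)

print(p)                 # 1/15625000000, roughly one in 15.6 billion
```

Using exact fractions avoids any floating-point fuzziness in a number this small.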
Those are daunting odds. And this is merely for typing a particular six-letter word. Try applying this same calculation to the 29,551 words of the entire Hamlet play. If you decide to calculate this, realize too that the spaces between words need to be accounted for.
The longer the targeted output, the more the chances mount against our being able to generate those precise sets of letters and words. The odds get smaller and smaller. The chances are so small that we would almost toss in the towel and say that it seems like it would “never” happen (be cautious when using the word “never” since that’s a formidable contention).
Take for example a mortal monkey.
According to various reputable online indications, the usual life span of a monkey in the wild is around 40 years or so. If you want to debate that lifespan, we can simply use 100 years as a rather generous upper bound. A monkey typing on a typewriter non-stop for one hundred years, with no time to rest or eat, and assuming that this is all the monkey did from its first moment to its last breath, still won’t meaningfully improve the odds of typing Hamlet (the monkey, typing one key each second non-stop for 100 years, would press about 3,155,673,600 keys).
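As a quick sanity check on that keystroke figure, here is a small sketch (using the 365.24-day year that the figure in the text implies):

```python
# Keystrokes available to one mortal monkey pressing one key per second,
# non-stop, for 100 years.
TOTAL_DAYS = 36524                 # 365.24 days per year x 100 years
SECONDS_PER_DAY = 24 * 60 * 60

keystrokes = TOTAL_DAYS * SECONDS_PER_DAY
print(keystrokes)                  # 3155673600

# Even a lifetime of typing is fewer tries than the roughly 15.6 billion
# needed on average just to produce the single word "Hamlet".
print(keystrokes < 50 ** 6)        # True
```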
We can reasonably say that it is enormously unlikely that a mortal monkey could end up typing by random chance the play Hamlet.
You can increase the number of mortal monkeys, but this does little to budge the overwhelming odds against typing Hamlet. Some posit that there are a thousand monkeys. Another approach says there are a million monkeys. Assuming they all lived to be 100 years of age, and each typed one random key on their own respective typewriter at a non-stop pace of one key per second, this still does not make a statistically notable dent in typing out the play Hamlet.
Ponder all this.
Somewhat tongue-in-cheek, where exactly would you house a million monkeys for this task? Imagine too that the typewriters have to last for one hundred years of continual use (can you find a million working typewriters that nobody wants and is willing to donate to this ambitious project?). It seems you would need a lot of spare typewriters at the instant ready. And so on. The logistics are staggering.
This all then seems gloomy that the mortal monkeys are not likely to reproduce Hamlet.
But suppose we make them immortal. Yes, we give them some magic potion that lets them live forever. We don’t even need more than one immortal monkey. Just one will do. It might make the metaphor more exciting to claim that we have a thousand or a million immortal monkeys.
If we have one monkey that can live forever, we might suggest that this is an infinite monkey. It can for an infinite time be pounding away at the keys of the typewriter. That monkey will just keep going and going. Accordingly, even though the chances of typing the play Hamlet were extremely small, the aspect that the monkey will unendingly keep trying is suggestive that at some point the play Hamlet will almost surely have been typed out.
The rule of thumb, as it were, is that any event with a non-zero chance of happening, however extraordinarily low, will almost surely occur if we have infinite time to play with, all else being equal. Those in the mathematics and statistics fields are prone to describing the same consideration via strings, or even binary strings of 0s and 1s. If you have a finite set of symbols and an infinite string of them, whereby each symbol is chosen uniformly at random, then any given finite string almost surely appears somewhere within it.
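A small numeric illustration of that “almost surely” behavior, reusing the one-in-15.6-billion chance for the single word: the probability of at least one success in n independent tries is 1 - (1 - p)^n, which climbs toward 1 as n grows without bound.

```python
# Chance of typing "Hamlet" at least once in n independent six-key tries.
p = (1 / 50) ** 6                      # about 6.4e-11 per try

for n in (10**9, 10**12, 10**15):
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:.0e} tries -> {at_least_once:.6f}")
```

With a billion tries the chance is still only a few percent; by a quadrillion tries it is indistinguishable from certainty. Infinity simply takes this trend to its limit.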
There is a big catch to all of this.
We live in a finite world. None of us would seem to have infinite time available. For those of you who say you do, kudos. My hat goes off to you.
If you impose the finite world on the typing monkeys, you are going to find yourself hitting a rather hard wall. Analyses of the typing monkey theorem will pretty much proffer that the probability of attaining the play Hamlet in finite time is close enough to zero that, for any reasoned operational purpose, it simply is not going to happen. The usual depiction is that even if you used as many monkeys as there are atoms in the known universe, and they kept typing for many zillions of times the lifespan of the universe, you are still looking at inconceivably tiny, unfathomable odds of seeing the play Hamlet.
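To give a feel for just how lopsided those odds are, here is a rough log-scale sketch, treating Hamlet as about 130,000 characters (the figure mentioned earlier) and keeping the simplified 50-key model:

```python
import math

# Odds against randomly typing all of Hamlet in one go: roughly 1 in
# 50**130000 under the simplified 50-key model.
HAMLET_CHARS = 130_000             # approximate character count of the play

digits = HAMLET_CHARS * math.log10(50)
print(round(digits))               # the odds denominator has ~220,000 digits

# For comparison: the count of atoms in the observable universe is
# commonly estimated at around 10**80 -- a mere 81-digit number.
print(digits / 80)
```

Even multiplying the monkeys and the available time by universe-scale factors only chips a few dozen digits off a roughly 220,000-digit denominator.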
The typing monkey theorem is quite a hoot and is often ranked as being in the top seven thought experiments of our times. You are welcome to do some additional scrutiny about the theorem as there are lots of analyses available online. It is a vivid and enjoyable way to get a grasp on probability and statistics. Rather than dealing exclusively with dry numbers, you get to envision those fun-loving rollicking monkeys and all those old-fashioned clickity-clackity typewriters.
We are now ready to bring generative AI into the monkeys and typewriters conundrum.
Generative AI Gets Irked By The Typing Monkeys
The premise that we are going to closely examine is the contentious claim that generative AI such as ChatGPT is no different than the typing monkeys. It is said that if ChatGPT or any generative AI can produce Hamlet or similar known works, this is entirely a random result that by probability has perchance arisen in the same manner that monkeys might arrive at typing up this long-prized and deeply revered Shakespearean play.
Sorry, that’s wrongful thinking on this weighty topic.
Let’s see why.
First, let’s review and expand on what generative AI consists of.
Recall that I earlier indicated that generative AI is software that entails the use of algorithms to data train on the text that exists on the Internet and via other akin sources. A vast array of pattern-matching has mathematically and computationally identified patterns among the millions upon millions of narratives and essays that we humans have composed.
The words have no particular significance unto themselves. Think of them as objects. Within the computer, they are represented as numbers that we denote as tokens. They are used as a convenient means to associate other words or tokens with each other, doing so in an in-depth and intricately statistical web-like structure.
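As a deliberately simplified illustration of that word-to-number idea (real systems such as ChatGPT use far more elaborate subword tokenizers, so treat this as a toy sketch only):

```python
# Toy tokenizer: each distinct word gets an integer id, and the system
# thereafter works with the numbers, not the words themselves.
vocab = {}

def tokenize(text):
    """Assign each newly seen word the next unused integer id."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

print(tokenize("to be or not to be"))   # [0, 1, 2, 3, 0, 1]
```

Notice that repeated words map to the same id; it is these numeric streams, not the words, that the statistical machinery associates with one another.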
Some in the AI field are worried that this is nothing more than what is referred to as a stochastic parrot.
You see, rather than trying to connect some semblance of “meaning” to the words, instead this is just an extensive indexing of words that seem to be used around or next to other words. In contrast, we assume that humans can “understand” the nature and meaning of words.
Consider your daily access to the presence of word-to-word correspondences. Similar to when you use a commonplace auto-complete function in your word processing software, the computer is mathematically calculating that a particular word is usually followed by some other particular word, which in turn is followed by another particular word, and so on. Thus, you can oftentimes start to write a sentence and the word processing package will show you a guess of what the additional words of the sentence will be.
It is a guess because statistically, these might be the usual words of the sentence, but you might have something else in mind to say, thus the prediction is off from what you wanted to write. Enough other examples presumably exist of sentences that do use those words that the algorithm is able to estimate that you are likely to want to finish the sentence with the predicted words. This is not ironclad. Also, there is no “meaning” associated with this computational guess.
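To make the flavor of that statistical guessing concrete, here is a tiny bigram-style next-word guesser; the corpus and the frequency-counting approach are purely illustrative, and generative AI is vastly more elaborate, but the "predict the likely next word" spirit is the same:

```python
# Count which word most often follows each word in a tiny training
# corpus, then predict the next word by frequency alone.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat "
    "the cat ate the fish "
    "the dog sat on the rug"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Return the statistically most common next word, if any."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))   # 'cat' -- the most frequent follower of "the"
print(predict("sat"))   # 'on'
```

Nothing in those counts involves the meaning of “cat” or “sat”; the prediction is purely a matter of which numbers co-occurred most often, which is the crux of the stochastic parrot worry.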
Some AI researchers argue that to attain true AI, often coined as Artificial General Intelligence (AGI), we will need to somehow codify into computers a yet-to-be-discovered or yet-to-be-invented form of “comprehension” (see my column for numerous postings about AGI and the pursuit of AGI). They worry that the mania over generative AI is no more than a dead-end. We will keep trying to push generative AI further and further by upscaling the size of the computational networks and throwing more and more computer processing power at the matter. All of that will be to no avail when it comes to arriving at AGI, they contend.
An added qualm is that perhaps this pursuit of a supposed dead-end is distracting us from the correct or proper course of action. We will expend immense energy and effort toward a misguided end-state. Sure, generative AI might be stunning at the mimicry trickery, but it could be that this has little or nothing to do with AGI. We could fool ourselves into wasting precious focus. We might delay or maybe even fail to ever get to AGI because of this alluring distraction.
Anyway, for purposes of the typing monkeys, let’s get back to the overall fracas.
We need to consider these notable factors:
- 1) Sentient versus not sentient
- 2) Thinking versus not “thinking”
- 3) Limited thinking processes versus computer-based algorithms and pattern-matching
- 4) Untrained or unable to train versus computational data trained
Let’s tackle each one of those factors.
Sentient Versus Not Sentient
I believe we can concede that monkeys are sentient beings. Regardless of how smart or lacking in smarts you might wish to argue they are, they undeniably are sentient. That’s a fact. Nobody can reasonably contend otherwise.
The Artificial Intelligence of today is not sentient. Period, full stop.
Furthermore, I contend that we aren’t anywhere close to AI sentience. Others might of course disagree. But anyone of reasonable composure would agree that today’s AI is not sentient. For my analysis of the abysmally mistaken labeling of AI sentience by that Google engineer last year, see my discussion at the link here.
So, one crucial difference between those eagerly typing monkeys and today’s generative AI is that the monkeys are sentient beings while the AI is not. On top of this, it is often a slippery slope to start comparing today’s AI to anything sentient. There is a tendency to anthropomorphize AI. I stridently urge that to try and prevent this easy mental trap from befalling us, we avoid any comparisons between AI and sentient beings unless we are aboveboard and clearly explicitly identify and demarcate that difference.
Few if any make that demarcation when comparing the typing monkeys and generative AI. They assume that you will either already realize that there is this difference, or they don’t care that there is a difference, or they haven’t thought about it, etc.
Thinking Versus Not “Thinking”
I would claim that monkeys can think. They are thinking beings. We can readily debate how much thinking they can do. You almost certainly, though, have to agree that monkeys can think.
Today’s AI of all kinds, including generative AI, does not rise to what I consider the human capacity of thinking.
I’ll repeat my just-mentioned refrain related to sentience. It is misleading and I contend wrong to go around saying that today’s AI can think. Sadly, people do this all the time, including AI researchers and AI developers. I believe this is once again unfortunate and ill-advised anthropomorphizing. You are giving a semblance of capacity or capabilities to AI that are not there and that will misinform society at large on the matter. Stop doing this.
Generative AI is a complex web-like structure of mathematical and computational properties. It is admirable. It is gobsmacking what this achieves. I do not believe any reasonable interpretation of “thinking” as we conceive of it, in all its glory, befits this AI.
Limited Thinking Processes Versus Computer-Based Algorithms And Pattern-Matching
Monkeys are limited in their thinking processes.
You might find of interest that there are many comparisons in the scientific literature of monkey brains versus the brains of humans. For example, consider this research study: “The human brain is about three times as big as the brain of our closest living relative, the chimpanzee. Moreover, a part of the brain called the cerebral cortex – which plays a key role in memory, attention, awareness, and thought – contains twice as many cells in humans as the same region in chimpanzees. Networks of brain cells in the cerebral cortex also behave differently in the two species” (in an article published in eLife, September 2016, entitled “Differences and similarities between human and chimpanzee neural progenitors during cerebral cortex development”).
We all realize that monkeys are not on par with human thinking. Those wondrous creatures can be endearing and do a surprising amount of thinking, no doubt about it. They just don’t rise to the levels of human thinking. I will regret saying this, once the apes take over humankind.
I already voiced a moment ago that today’s AI does not think. I emphasized that what AI is doing should not be labeled as “thinking” since doing so is misleading and confounding.
Here’s where the generative AI does outshine the monkeys, in terms of using computer processing based on human-devised algorithms and predicated on human-produced writings. There is little or no chance that the thinking monkey could absorb and pattern-match to the vast use of written symbols that humans have come up with. Monkeys don’t have that kind of thinking capacity.
I hesitate to suggest such a comparison, given my other expressed qualms. But, I am clearly stating what the assumptions are and how to properly and suitably undertake this analysis.
Untrained Or Unable To Train Versus Computationally Data Trained
Similar to what I just said, you are not going to be able to train a thinking monkey on the vast use of written symbols of humankind. You can do this on an extremely limited basis, and studies have shown that monkeys can seemingly think about written symbols. This is far less than being able to memorize and repeat back extensive patterns of words, sentences, and entire narratives.
Generative AI is a computer-based statistical mimicry that can be computationally data trained. If we keep feeding more data such as additional texts that we collect or find, the assumption and hope are that the patterns found will get deeper and deeper. Plus, using faster and faster computer chips and processing will also boost this pattern-matching and response capacity.
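To make that notion of computer-based statistical mimicry a bit more concrete, here is a toy sketch of training on text and then generating from the learned patterns. To be clear, modern generative AI relies on vastly more sophisticated neural networks; this simple bigram model, its tiny corpus, and the function names are merely my illustrative assumptions.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word in the training text, which words followed it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Emit text by repeatedly sampling a word seen to follow the current one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# A deliberately tiny "training set" -- feed it more text and the
# patterns found get deeper, which is the whole premise of data training.
corpus = "to be or not to be that is the question"
model = train_bigrams(corpus)
print(generate(model, "to"))
```

The output reads as plausible word sequences only because every transition was harvested from human-written text, which is precisely the mimicry point being made above.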
Looking At The Bottom-Line
If generative AI were to produce the play Hamlet, what would that signify?
First, we have to consider whether or not the story or play was fed into the generative AI at the time of the data training. If so, there is nothing especially notable or remarkable about the generative AI later on spouting back out the same words it had previously scanned.
An AI researcher might be a bit dismayed because the pattern-matching presumably went overboard, having essentially memorized the words. We usually refer to this in the machine learning realm as overfitting to the data that was used during training. Typically, you don’t want the exact words to be patterned, you want a generalized pattern to be formed.
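To illustrate that memorization concern, here is a rough sketch of how one might probe whether generated output is parroting its training text verbatim rather than generalizing. The function and its word-level notion of overlap are my simplifying assumptions for illustration, not a standard tool.

```python
def longest_verbatim_overlap(output, training_text):
    """Length, in words, of the longest run of output words appearing
    verbatim in the training text -- a rough memorization probe."""
    out_words = output.split()
    best = 0
    for i in range(len(out_words)):
        # Only test spans longer than the best found so far; a span that
        # fails cannot be extended into one that succeeds, so break early.
        for j in range(i + best + 1, len(out_words) + 1):
            if " ".join(out_words[i:j]) in training_text:
                best = j - i
            else:
                break
    return best

training = "it was the best of times it was the worst of times"
print(longest_verbatim_overlap("it was the best of times", training))  # prints 6
```

A high overlap score relative to the output length would hint at overfitting, i.e., the exact training words were patterned rather than a generalized pattern being formed.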
I’ve discussed in my columns the concern that at times we might see privacy intrusions and the revealing of confidential data in cases where generative AI did a precise matching rather than a generalized matching of fed data, see my coverage at the link here.
Second, suppose that the play Hamlet was not fed into the generative AI. The next consideration then would be whether any of Shakespeare’s works had been scanned during data training.
If so, it is conceivable that the play Hamlet could be produced based on the patterns associated with Shakespeare’s other works, especially if there are other references or mentions of Hamlet elsewhere in the data training set. All of those could be potentially utilized by the pattern-matching for forming a style of Hamlet. Admittedly, being able to generate Hamlet word-for-word would be quite a stretch, a considerably eye-opening and surprising result.
Third, if generative AI produced the entirety of Hamlet and had never beforehand been fed anything whatsoever about Shakespeare, well, that would be astonishing. It would not though necessarily be quite the same as the purely random nature of pecking away at keys on a typewriter. We have to realize that the words of Shakespeare are words, thus, they are part of the totality of wordings found across the vast array of text stories and narratives fed into the generative AI. You are improving the odds by starting with the cornerstone of words and the associations among words. Still, the chances are pretty thin of something like this happening.
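A bit of back-of-the-envelope arithmetic shows why starting with words and their associations improves the odds so dramatically over random keystrokes. The key count, vocabulary size, and the notion of roughly ten statistically plausible next-word options are assumed, illustrative numbers of my own choosing, not measured quantities.

```python
import math

phrase = "to be or not to be"
n_chars = len(phrase)          # 18 keystrokes, spaces included
n_words = len(phrase.split())  # 6 word slots

# Assumed, illustrative figures: a 27-key typewriter (26 letters plus the
# space bar) pressed uniformly at random, a 50,000-word vocabulary, and
# about 10 plausible next words once word associations narrow the field.
keys, vocab, plausible = 27, 50_000, 10

attempts_chars = keys ** n_chars        # random keystrokes
attempts_words = vocab ** n_words       # uniform word picks, no associations
attempts_assoc = plausible ** n_words   # association-guided word picks

print(f"{attempts_chars:.1e} vs {attempts_words:.1e} vs {attempts_assoc:.1e}")
```

Notably, merely switching from keys to uniformly random words does not help for a short phrase; it is the associations among words, narrowing each choice to a handful of feasible options, that collapse the odds from astronomical to mundane.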
When it comes to producing words and essays, generative AI is going gangbusters since it is based on human-devised words and essays (of course, we need to squarely deal with the errors, falsehoods, and AI hallucinations). The AI doesn’t “understand” the words emitted. There isn’t any there, there.
You don’t have to wait an infinite period of time to see fluent essays and fully readable outputs. They happen daily and at the touch of a button. They aren’t jumbled, at least not most of the time, due to being pattern produced based on what humans have written. The pattern-matching should be further fine-tuned and eventually good enough to trim away much of the oddball wording, see my explanation of how this might work, shown at the link here. This tuning will continually be refined, and we will all be increasingly smitten with what generative AI produces.
The words are not purely randomly chosen. The words are not purely randomly spelled out. There are some probabilistic aspects, such as which words to select when generating the outputted essay. But this is still based on human writings and thus presumably not purely at random. It is a random choice among a handful of wording options that are all statistically feasible as the next word or set of words.
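That constrained randomness can be sketched as a top-k style selection: keep only the few statistically feasible next words, then choose randomly among just those. The probabilities, the prompt, and the value of k below are made-up illustrations of the idea, not the actual internals of any particular generative AI app.

```python
import random

def pick_next_word(candidates, k=3, seed=None):
    """Keep only the k most likely next words and sample among them,
    weighted by likelihood -- randomness confined to plausible options."""
    rng = random.Random(seed)
    top = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)[:k]
    words = [w for w, _ in top]
    weights = [p for _, p in top]
    return rng.choices(words, weights=weights, k=1)[0]

# Hypothetical next-word likelihoods after the prompt "it was the":
candidates = {"best": 0.40, "worst": 0.30, "age": 0.15, "blurst": 0.0001}
print(pick_next_word(candidates, k=3, seed=42))
```

With k set to 3, a wildly improbable word such as “blurst” never gets chosen at all, which is why generative AI output is fluent rather than the jumble a purely random typist would produce.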
Where do the monkeys fit into this?
Those typing monkeys are surely attractive as a basis for comparison to generative AI. Monkeys producing Hamlet versus generative AI producing Hamlet. That’s an enthralling contest. You might say that there isn’t really a contest involved at all. The AI that was devised by humankind and is based on humankind’s writings has an unfair advantage in that respect.
Speaking of typing monkeys, in an episode of The Simpsons, Mr. Burns decides to hire monkeys to go ahead and type away on typewriters as part of the office typing pool. He is the kind of cantankerous boss that would gleefully gravitate toward using monkeys in his needed office work over the use of humans if he could do so.
Fans of the show might remember what happens.
Mr. Burns grabs one of the typed pages and reads with avid anticipation what the monkey has typed. He reads the page aloud and says “It was the best of times, it was the blurst of times” (i.e., there is one word that is messed up, the “blurst” or something sounding like that). He becomes completely enraged and utterly disappointed at those “stupid monkeys” as to what they can produce.
We know that if a monkey typed that portion of Charles Dickens’s “A Tale Of Two Cities” we ought to be ecstatic and jumping for joy. Not so for Mr. Burns.
As a final comment for this discussion, perhaps we should invoke the full sentence that Charles Dickens wrote: “It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair.”
We aren’t quite sure where we are headed with AI. Some say it is going to be the best thing since sliced bread. Others forewarn that the AI we are making is going to be an existential risk to the survival of humanity. It indeed is either the best of times or the worst of times.
Do not be surprised to see generative AI outputting those very words. Do be surprised if you happen to see monkeys in a zoo that are perchance typing on typewriters and manage to type the same insightful words.
Please do let me know if you see that happen.
I’m willing to wait a long time for this to occur, but probably not for infinity.