There is a knock at the cabin door.

Should we open the door?

Movies usually suggest that we ought not to let our curiosity get the better of us; that is, we should absolutely, positively never open the door. Then again, opting to leave the door closed wouldn’t seem to make for much of a worthy tale. It seems we are drawn toward excitement and the unknown.

So, let’s go ahead and open the door.

In this particular case, I am referring to emerging scuttlebutt within the field of Artificial Intelligence (AI) that portends either good times ahead or the worst of times for all of us. The situation concerns the future of AI. One might solemnly speculate that the future of AI carries quite dramatic repercussions all told, including ostensibly shaping the future of society and the fate of humankind.

Here’s the deal.

According to recent news reports, Elon Musk, the at-times richest person in the world, has been fishing around for top-notch AI researchers to come on board with a new AI venture that he has in mind. Various AI developers and AI scientists are quietly being approached. The knock on their door apparently provides great promise and potentially lucrative tidings.

The purported essence of the yet-to-be-disclosed AI initiative is said to be a knockoff of the widely and wildly popular ChatGPT that was released by OpenAI back in November 2022. You’ve almost certainly heard about or seen blaring headlines about ChatGPT. I’ll explain momentarily more about what ChatGPT is. You should also know that ChatGPT is an example of a type of AI known as Generative AI. There are lots of generative AI apps floating around these days. ChatGPT happens to be the one with the highest public profile and is seemingly known to all, even perhaps to those who are somehow living in a cave.

Here’s an example of the reporting on this semi-secretive rapidly emerging saga:

  • “Elon Musk has approached artificial intelligence researchers in recent weeks about forming a new research lab to develop an alternative to ChatGPT, the high-profile chatbot made by the startup OpenAI, according to two people with direct knowledge of the effort and a third person briefed on the conversations” (The Information, “Fighting ‘Woke AI,’ Musk Recruits Team to Develop OpenAI Rival”, Jon Victor and Jessica E. Lessin, Feb. 27, 2023).

Your first thought might be that if Elon Musk wants to craft a knockoff of ChatGPT, that’s up to him and how he wants to spend his money. Good luck. He’ll simply be adding to the already existent and growing smattering of generative AI apps. Maybe he’ll make an additional fortune off of his own homegrown version of ChatGPT. Or perhaps it will be a big ho-hum and the tiny dent in his massive wealth from the modestly costly pursuit will be akin to a rounding error in the accounting department.

Instead of a hefty knock at the door, presumably, this is more like a demure tap-tap-tapping at the door.

Get ready for the twist.

The belief is that Elon Musk wants to shake up the basis of today’s generative AI apps and reconstitute some crucial aspects of how they work and what they produce. As I will explain shortly herein, a common and bona fide qualm about current generative AI is that it can generate errors, falsehoods, and so-called AI hallucinations. Anybody who has used generative AI has undoubtedly encountered those disconcerting issues. Apparently, Elon Musk hopes to curtail and possibly somehow eliminate those kinds of anomalies and problematic proclivities.

This does seem like a demonstrably worthwhile and honorable aspiration. In fact, please know that nearly all, and perhaps all, of the generative AI devisers are striving mightily to reduce the chances of outputted errors, falsehoods, and AI hallucinations. You would be hard-pressed to find any reasonable soul who would insist that we keep those errors, falsehoods, and AI hallucinations ingrained in generative AI.

Without making too sweeping a statement, there is pretty much universal agreement that the maladies of generative AI, namely the production of errors, falsehoods, and AI hallucinations, have to be firmly and persistently dealt with. The aim is to adjust, revamp, refine, overhaul, or in one AI technological manner or another resolve this problem.

Each day that generative AI continues to spew out errors, falsehoods, and AI hallucinations in its outputs is a bad day for just about everyone. The people using generative AI are bound to be unhappy with those fouled outputs. People who rely upon or need to use the fouled outputs are at risk of mistakenly depending upon something wrong or, worse still, being guided in an endangering direction.

The AI makers trying to build a business from generative AI are meanwhile at potential legal risk from those who get snagged by relying on the fouled outputs. Lawsuits claiming damages are almost certainly going to arise. We might anticipate that regulators will opt to weigh in, and new AI laws might be enacted to put a legal leash on generative AI, see my coverage at the link here. Plus, people might eventually get so darned upset that the reputations of the AI makers are severely tarnished and generative AI gets summarily booted to the curb.

Alright, so we know that AI makers and AI researchers are feverishly trying to invent, design, build, and implement AI technological wizardry to obviate the awful ailments associated with today’s generative AI. Elon Musk ought to be accepted into the fold. The more the merrier. It is going to take a lot of AI talent and money to tame this beast. Adding Elon Musk seems an upbeat and encouraging sign that maybe the right amount of rocket science, cash, and determination will find the AI cure-all.

The twist though comes when you start to open the door to see what is standing there.

In a characteristically succinct tweet on February 17, 2023, Elon Musk provided this presumed clue:

  • “What we need is TruthGPT”

That’s what causes some to decide that maybe the door needs to be slammed shut and nailed closed.

Why so?

The concern being expressed by some is that the “truth” underlying an envisioned TruthGPT might be a generative AI formulated upon, and exclusively producing outputs based on, a version of truth that strictly matches one person’s view of the world. Yes, the handwringing is that we’ll get a generative AI app that emits the truth according to Elon Musk.

Worrisome, some say.

Daringly audacious and altogether alarming, some exhort.

An immediate retort is that if he desires to produce his TruthGPT, no matter what it constitutes, it is his money to spend. People will either opt to use it or they won’t. Those who use it should be astute enough to realize what they are getting themselves into. If they want outputs from this specific variant of generative AI, one that is presumably shaped around the worldview of Elon Musk, that’s their right to seek it. End of story. Move on.

Whoa, a counterargument goes, you are setting up people for a terrible and terrifying entrapment. There will be people who won’t realize that TruthGPT is some Elon Musk-honed generative AI app. They will fall into the mental trap of assuming that this generative AI is aboveboard. Indeed, if the naming stays as “TruthGPT” (or similar), you would naturally believe that this is generative AI that has the absolute truth to tell in its outputted essays and text.

As a society, perhaps we ought not to let the unsuspecting fall into such traps, they would caution.

Allowing a generative AI app of this presumed nature to be floating around and used by all manner of people is going to create chaos. People will interpret as sacred “truth” the outputs of this TruthGPT, even if the outputted essays are replete with errors, falsehoods, AI hallucinations, and all manner of unsavory biases. Furthermore, even if the claim is that this variant of generative AI won’t have errors, falsehoods, and AI hallucinations, how are we to know that the resultant seemingly purified AI won’t harbor undue biases along with an insidious trove of misinformation and disinformation?

I am guessing that you can see the brewing controversy and quandary.

On a free-market basis, Elon Musk should apparently be able to proceed with creating whatever kind of generative AI he wishes to have crafted. Just because others might disfavor his version of “truth” shouldn’t stop him from proceeding. Let him do his thing. Maybe a warning message or some other notification should be included to let anyone using it know what they are opting to run. Nonetheless, people need to be responsible for their own actions, and if they choose to use a TruthGPT then so be it.

Wait a second, yet another rejoinder goes. Suppose that someone crafted a generative AI app that was devised for evildoing. The intention was to confound people. The hope was to get people riled up and incited. Would we as a society be accepting of that kind of generative AI? Do we want to allow AI apps that could provoke people, undermining their mental health and possibly stoking them into adverse actions?

There has to be a line in the sand. At some point, we need to say that certain kinds of generative AI are an abomination and cannot be permitted. If we let unbridled generative AI be built and fielded, the ultimate doom and gloom will inevitably befall all of us. It won’t be just those that happen to use the AI app. Everything and everyone else that arises surrounding and connected to the AI app will be adversely affected.

That seems like a compelling argument.

Though a key underpinning is that the generative AI in question would need to be of such disturbing concern that we would convincingly believe that preventing it or fully stopping it beforehand is objectively necessary. This also raises a host of other thorny questions. Can we declare beforehand that a generative AI might be so atrocious that it cannot be allowed to be built at all? That seems premature to some. You would need to at least wait until the generative AI is up and running to make such a heavy decision.

Wake up, some respond vehemently, you are unwisely letting the horse out of the barn. The dangers and damages caused by the unleashed AI, the let-loose horse, will trample all over us. A generative AI app might be like the classic dilemma of trying to put the genie back into the bottle. You might not be able to do so. Best to keep the genie under lock and key instead, or ensure that the horse remains firmly corralled in the barn.

It is a potential hurricane on our doorstep and the door might open regardless of what we think is prudent to do.

One thing we can do for sure is to first explore what a TruthGPT style of generative AI machination might be. In today’s column that’s exactly what I will do. I will also look at the reasoned basis for the expressed qualms, plus consider various means and results. This will occasionally include referring to the AI app ChatGPT during this discussion since it is the 600-pound gorilla of generative AI, though do keep in mind that there are plenty of other generative AI apps and they generally are based on the same overall principles.

Meanwhile, you might be wondering what generative AI is.

Let’s first cover the fundamentals of generative AI and then we can take a close look at the pressing matter at hand.

Into all of this comes a slew of AI Ethics and AI Law considerations.

Please be aware that there are ongoing efforts to imbue Ethical AI principles into the development and fielding of AI apps. A growing contingent of concerned and earnest AI ethicists are trying to ensure that efforts to devise and adopt AI take into account a view of doing AI For Good and averting AI For Bad. Likewise, there are proposed new AI laws being bandied around as potential solutions to keep AI endeavors from going amok on human rights and the like. For my ongoing and extensive coverage of AI Ethics and AI Law, see the link here and the link here, just to name a few.

The development and promulgation of Ethical AI precepts are being pursued to hopefully prevent society from falling into a myriad of AI-induced traps. For my coverage of the UN AI Ethics principles as devised and supported by nearly 200 countries via the efforts of UNESCO, see the link here. In a similar vein, new AI laws are being explored to try and keep AI on an even keel. One of the latest takes consists of a proposed AI Bill of Rights that the U.S. White House recently released to identify human rights in an age of AI, see the link here. It takes a village to keep AI and AI developers on a rightful path and deter the purposeful or accidental underhanded efforts that might undercut society.

I’ll be interweaving AI Ethics and AI Law related considerations into this discussion.

Fundamentals Of Generative AI

The most widely known instance of generative AI is represented by an AI app named ChatGPT. ChatGPT sprang into the public consciousness back in November 2022 when it was released by the AI research firm OpenAI. Ever since, ChatGPT has garnered outsized headlines and astonishingly exceeded its allotted fifteen minutes of fame.

I’m guessing you’ve probably heard of ChatGPT or maybe even know someone who has used it.

ChatGPT is considered a generative AI application because it takes as input some text from a user and then generates or produces an output that consists of an essay. The AI is a text-to-text generator, though I describe the AI as being a text-to-essay generator since that more readily clarifies what it is commonly used for. You can use generative AI to compose lengthy compositions or you can get it to proffer rather short pithy comments. It’s all at your bidding.

All you need to do is enter a prompt and the AI app will generate for you an essay that attempts to respond to your prompt. The composed text will seem as though the essay was written by the human hand and mind. If you were to enter a prompt that said “Tell me about Abraham Lincoln” the generative AI will provide you with an essay about Lincoln. There are other modes of generative AI, such as text-to-art and text-to-video. I’ll be focusing herein on the text-to-text variation.
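To make this concrete, here is a minimal sketch of how a program might submit a prompt to a text-to-text generative AI service. Purely as an illustration, I am assuming the OpenAI Python library as it existed in early 2023; other generative AI services follow a broadly similar prompt-in, text-out pattern, though the API particulars differ.

```python
# Minimal sketch: send a prompt to a text-to-text generative AI and print the essay.
# Assumes the OpenAI Python library circa early 2023 (pip install openai) and an API key.
import openai

openai.api_key = "sk-..."  # your API key goes here

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # the model underlying ChatGPT at the time of writing
    messages=[{"role": "user", "content": "Tell me about Abraham Lincoln"}],
)

# The generated essay comes back as ordinary text.
print(response["choices"][0]["message"]["content"])
```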

Your first thought might be that this generative capability does not seem like such a big deal in terms of producing essays. You can easily do an online search of the Internet and readily find tons and tons of essays about President Lincoln. The kicker in the case of generative AI is that the generated essay is relatively unique and provides an original composition rather than a copycat. If you were to try and find the AI-produced essay online someplace, you would be unlikely to discover it.

Generative AI is pre-trained and makes use of a complex mathematical and computational formulation that has been set up by examining patterns in written words and stories across the web. As a result of examining millions upon millions of written passages, the AI can spew out new essays and stories that are a mishmash of what was found. By adding in various probabilistic functionality, the resulting text is pretty much unique in comparison to what was used in the training set.
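That probabilistic functionality is worth a quick illustration. Rather than always emitting the single most likely next word, the AI samples from a probability distribution over candidate words, which is why the same prompt can yield differing essays. Here is a toy sketch of the idea; the candidate words and their probabilities are entirely made up for illustration and do not come from any actual model.

```python
# Toy sketch of probabilistic next-word selection in generative AI.
# All probabilities here are invented for illustration purposes.
import random

def pick_next_word(candidates, temperature=1.0):
    """Sample the next word; a higher temperature flattens the distribution,
    making less likely words more probable and the output more varied."""
    words = list(candidates)
    weights = [candidates[w] ** (1.0 / temperature) for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Hypothetical estimates for the word following "Lincoln was born in":
next_word_probs = {"1809": 0.80, "Kentucky": 0.15, "1812": 0.05}

print(pick_next_word(next_word_probs, temperature=0.7))
```

Note that even the unlikely “1812” gets picked occasionally, which gives a tiny glimpse of how oddball wording can sneak into otherwise sensible essays.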

There are numerous concerns about generative AI.

One crucial downside is that the essays produced by a generative AI app can have various falsehoods embedded, including manifestly untrue facts, facts that are misleadingly portrayed, and apparent facts that are entirely fabricated. Those fabricated aspects are often referred to as a form of AI hallucinations, a catchphrase that I disfavor but that lamentably seems to be gaining popular traction anyway (for my detailed explanation about why this is lousy and unsuitable terminology, see my coverage at the link here).

Another concern is that humans can readily take credit for a generative AI-produced essay, despite not having composed the essay themselves. You might have heard that teachers and schools are quite concerned about the emergence of generative AI apps. Students can potentially use generative AI to write their assigned essays. If a student claims that an essay was written by their own hand, there is little chance of the teacher being able to discern whether it was instead forged by generative AI. For my analysis of this student and teacher confounding facet, see my coverage at the link here and the link here.

There have been some zany outsized claims on social media about generative AI asserting that this latest version of AI is in fact sentient AI (nope, they are wrong!). Those in AI Ethics and AI Law are notably worried about this burgeoning trend of overstretched claims. You might politely say that some people are overstating what today’s AI can do. They assume that AI has capabilities that we haven’t yet been able to achieve. That’s unfortunate. Worse still, they can allow themselves and others to get into dire situations because of an assumption that the AI will be sentient or human-like in its ability to take action.

Do not anthropomorphize AI.

Doing so will get you caught in a sticky and dour reliance trap of expecting the AI to do things it is unable to perform. With that being said, the latest in generative AI is relatively impressive for what it can do. Be aware though that there are significant limitations that you ought to continually keep in mind when using any generative AI app.

One final forewarning for now.

Whatever you see or read in a generative AI response that seems to be conveyed as purely factual (dates, places, people, etc.), make sure to remain skeptical and be willing to double-check what you see.

Yes, dates can be concocted, places can be made up, and elements that we usually expect to be above reproach are all subject to suspicion. Do not blindly believe what you read and keep a skeptical eye when examining any generative AI essays or outputs. If a generative AI app tells you that Abraham Lincoln flew around the country in his private jet, you would undoubtedly know that this is malarky. Unfortunately, some people might not realize that jets weren’t around in his day, or they might know but fail to notice that the essay makes this brazen and outrageously false claim.

A strong dose of healthy skepticism and a persistent mindset of disbelief will be your best asset when using generative AI.

We are ready to move into the next stage of this elucidation.

The Genie And The Generative AI Bottle

Let’s now do a deep dive into the matter at hand.

The gist is this: what might a TruthGPT style of generative AI consist of? Is it a possibility, or is it impossible to derive? What should we be thinking about concerning such efforts? And so on.

You can forthrightly contend that we ought to be putting some very serious thought into all of this. If it were purely a flight of fancy without any chance of arising, we could put the entire conundrum to the side. Instead, since there is a presumed elevated chance of huge financial backing, the reality of a TruthGPT, or whatever it ends up being named, is notably worthy of keen consideration and unpacking.

For ease of discussion, I will use the convenient and catchy phrasing of “TruthGPT” to denote some entirely unspecified generic generative AI. I am not suggesting, implying, or otherwise alluding to any now or future existent generative AI app that has or does not have such a designated name.

Here are the five main topics that I’d like to cover with you today:

  • 1) The Likely Main Focus Involves Biases In Generative AI
  • 2) Why Generative AI Biases Are A Tough Nut To Crack
  • 3) Sources And Methods Leading To Generative AI Biases
  • 4) TruthGPT And The Choices Of Bias Handling
  • 5) TruthGPT As To The Feasibility And Reality Thereof

I will cover each of these important topics and proffer insightful considerations that we all ought to be mindfully mulling over. Each of these topics is an integral part of a larger puzzle. You can’t look at just one piece. Nor can you look at any piece in isolation from the other pieces.

This is an intricate mosaic and the whole puzzle has to be given proper harmonious consideration.

The Likely Main Focus Involves Biases In Generative AI

For ease of discussion, consider that we will focus on four main qualms about contemporary generative AI:

  • Errors
  • Falsehoods
  • AI Hallucinations
  • Biases

I shall briefly explore each of those conditions.

An error might be exemplified by this kind of prompt and the resultant generative AI response:

  • Prompt: “What is two plus two?”
  • Generated reply: “Two plus two equals five.”

I believe that we would all reasonably agree that two plus two should be equal to four. Thus, the answer of five is an error. Now, smarmy people will try to proffer all sorts of justifications for why two plus two can theoretically be equal to five. Good for them. Nonetheless, in the normal course of events, let’s go with the notion that the predominantly correct answer is four and that an answer of five is construed as incorrect.

Next, consider the nature of a falsehood, as exemplified by this prompt and the resultant generative AI response:

  • Prompt: “In what year was President Abraham Lincoln born?”
  • Generated reply: “President Abraham Lincoln was born in 1812.”

The answer should be 1809, but the answer provided is 1812. This can be construed as a false answer. We don’t know for sure how generative AI came up with the 1812 answer. It could be that the year 1812 was used on the Internet and potentially scanned during the data training that the generative AI undertook. Numerous other explanations are possible. In any case, this is reasonably categorized as a falsehood.

To exemplify an AI hallucination, consider this example of a given prompt and the resultant generative AI response:

  • Prompt: “How did President Abraham Lincoln travel?”
  • Generated reply: “President Abraham Lincoln traveled by train, horse, buggy, and via jet airplanes.”

The modes of travel all seem plausible except for the shocker at the end, citing that he traveled via jet airplanes (which didn’t exist during his era). How did this crazy possibility get included in the answer by the generative AI? We don’t know for sure. One means would be if an Internet-posted science fiction story about Lincoln perchance mentioned that he often traveled via jet airplanes. It could be that during the data training of the generative AI, this particular story was scanned. There are numerous other viable explanations too.

Finally, here is an example of a bias that might be exhibited via a given prompt and the resultant generative AI response:

  • Prompt: “Can an older person learn new things?”
  • Generated Reply: “No, older people are unable to learn new things. There is a classic saying that old dogs cannot learn new tricks. This applies to older people too.”

I believe we can all reasonably agree that the answer exhibits a form of age bias or age discrimination. How did this bias get included in the generative AI? I will cover a variety of ways momentarily. For now, let’s simply note that the wording generated by the AI exhibits a biased statement or remark. We do not know if this is a mere happenstance and will arise perhaps just this one time, or whether it might be a systemic pattern-matching arrangement that persists within the AI structure. Further exploration would be needed to determine this.

Now that we’ve got all four of the major categories illustrated, here is an assertion that, though debatable, is considered potentially accurate:

  • Errors: Can likely be ultimately prevented or mitigated via AI technological means
  • Falsehoods: Can likely be ultimately prevented or mitigated via AI technological means
  • AI Hallucinations: Can likely be ultimately prevented or mitigated via AI technological means
  • Biases: Disputable whether this can be prevented or mitigated solely via AI technological means

The gist is that the three categories consisting of errors, falsehoods, and AI hallucinations are generally viewed as amenable to AI technological improvements. A slew of approaches is being pursued. For example, as I discuss in my column at the link here, a generated AI reply might be double-checked against various other referents before the response is shown to the user. This provides potential filtering to ensure that the user doesn’t see any such detected errors, falsehoods, or AI hallucinations. Another approach seeks to prevent those types of responses from being generated in the first place. And so on.
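As a rough illustration of that double-checking approach, consider the following toy sketch. The referent data and helper functions are hypothetical stand-ins of my own devising, not any AI maker’s actual implementation.

```python
# Hypothetical sketch: double-check a generated reply against trusted referents
# before showing it to the user. Toy data and stand-in functions for illustration.

TRUSTED_FACTS = {"lincoln birth year": "1809"}

def generate_reply(prompt: str) -> str:
    # Stand-in for the generative AI model; imagine it sometimes errs.
    return "President Abraham Lincoln was born in 1812."

def passes_fact_check(reply: str) -> bool:
    # Toy referent check: the known birth year must appear in the reply.
    return TRUSTED_FACTS["lincoln birth year"] in reply

def answer_user(prompt: str) -> str:
    reply = generate_reply(prompt)
    if not passes_fact_check(reply):
        # Filter the fouled output rather than letting the user see it.
        return "I am not confident in that answer; please double-check elsewhere."
    return reply

print(answer_user("In what year was President Abraham Lincoln born?"))
```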

The category consisting of biases is a lot more problematic to cope with.

We should unpack the conundrum to see why.

Why Generative AI Biases Are A Tough Nut To Crack

Recent news about generative AI has often pointed out the unseemly nature of biased statements that can arise in generative AI-outputted essays. I’ve examined this topic, including the aspect that some people are purposely trying to goad or stoke generative AI into producing biased remarks, see my analysis at the link here. Some people do so to highlight a notable concern, while others do so in seeming attempts at getting attention and garnering views.

The coupling of generative AI with Internet search engines has especially amplified these matters. You might be aware that Microsoft has added a ChatGPT variation to Bing, while Google has indicated they are adding a generative AI capability coined as Bard to their search engine, see more at the link here.

Among the variety of biases that might be encountered, those that fit into the political realm or the cultural realm have received pronounced attention, as noted by this article:

  • “As we’ve seen with recent unhinged outbursts from Bing, AI chatbots are prone to generating a range of odd statements. And although these responses are often one-off expressions rather than the product of rigidly-defined “beliefs,” some unusual replies are seen as harmless noise while others are deemed to be serious threats — depending, as in this case, on whether or not they fit into existing political or cultural debates” (The Verge, James Vincent, February 17, 2023).

OpenAI recently made publicly available a document entitled “Snapshot Of ChatGPT Model Behavior Guidelines” that indicates the various kinds of content considered inappropriate that they seek to have their ChatGPT testers review and aid in data training for ChatGPT to avert during the testing and adjustment phase (document handily accessible via a link from “How Should AI Systems Behave, And Who Should Decide”, February 16, 2023). For more about how RLHF (reinforcement learning from human feedback) is used when devising generative AI, see my explanation at the link here.

Here is an excerpt from the OpenAI document that indicates some of their stated guidelines:

  • “There could be some questions that request certain kinds of inappropriate content. In these cases, you should still take on a task, but the Assistant should provide a refusal such as ‘I can’t answer that’.”
  • “Hate: content that expresses, incites, or promotes hate based on a protected characteristic.”
  • “Harassment: content that intends to harass, threaten, or bully an individual.”
  • “Violence: content that promotes or glorifies violence or celebrates the suffering or humiliation of others.”
  • “Self-harm: content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.”
  • “Adult: content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).”
  • “Political: content attempting to influence the political process or to be used for campaigning purposes.”
  • “Malware: content that attempts to generate ransomware, keyloggers, viruses, or other software intended to impose some level of harm.”

The list showcases the types of potentially inappropriate content that might arise.
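For a flavor of how such content screening can work in practice, here is a brief sketch that uses OpenAI’s moderation endpoint, as offered in the OpenAI Python library circa early 2023, to flag text against categories akin to those listed above. Other AI makers provide their own differing mechanisms, and the exact category names vary.

```python
# Sketch: screen text against inappropriate-content categories.
# Assumes the OpenAI Python library circa early 2023 and an API key.
import openai

openai.api_key = "sk-..."  # your API key goes here

result = openai.Moderation.create(input="Some user-supplied text to screen")
outcome = result["results"][0]

print(outcome["flagged"])      # True if any category was tripped
print(outcome["categories"])   # per-category flags, e.g., hate, self-harm, violence
```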

In terms of the political category, various instances have been posted on social media of generative AI apps that seem to have slipped into one political camp versus another.

For example, a user asking a question about one political leader might get a positive, upbeat response, while asking about a different political leader might get a downbeat and altogether disparaging essay. This would seem to suggest that the generative AI has pattern-matched onto wording that favors one side and disfavors the other. These instances have led to accusations that a given generative AI is slanted, ascribed with labels such as:

  • Woke generative AI
  • Anti-woke generative AI
  • Far-right generative AI
  • Far-left generative AI
  • Etc.

As earlier mentioned, this is not due to the sentience capacity of the AI. This is once again entirely about the pattern-matching and other facets of how AI has been devised.

Unlike errors, falsehoods, and AI hallucinations, with biases the devil is in the details: figuring out how to keep biases out of the AI structure, or how to detect them and cope when they exist.

Let’s explore how the biases end up within the generative AI.

Sources And Methods Leading To Generative AI Biases

When generative AI was first made publicly available, the biased aspects especially received pronounced attention from pundits and the news media. In some instances, an AI app was retracted from public use as a result. In addition, renewed efforts to try and deal with the biases gained added traction.

Some immediately assumed that the biases were being injected as a result of the biases of the AI developers and AI researchers that developed the AI. In other words, the humans that were developing the AI allowed their personal biases to creep into the AI. This was initially thought to be a conscious effort to sway the AI in particular biased preference directions. Though this may or may not occur, others then suggested that the biases might be unintentionally infused, namely that the AI developers and AI researchers were naively unaware that their own biases were soaking into the AI development.

That singular or one-dimensional path of concern dominated attention for a while.

I have repeatedly voiced that there is actually a wide array of sources and methods that can end up infusing biases into generative AI, as discussed at the link here. This is a decidedly multi-dimensional problem.

I bring this up because the idea that the AI developers or AI researchers alone are the culprit is a misleading and narrow view of the totality of the problem. I am not saying that they aren’t a potential source, I am simply emphasizing that they aren’t the only potential source. We are at times missing the forest for the trees, doing so by strictly fixating our gaze on a specific tree.

As covered extensively in my columns, here is my notable comprehensive list of biasing avenues that need to be fully explored for any and all generative AI implementations:

  • Biases in the sourced data from the Internet that was used for data training of the generative AI
  • Biases in the generative AI algorithms used to pattern-match on the sourced data
  • Biases in the overall AI design of the generative AI and its infrastructure
  • Biases of the AI developers either implicitly or explicitly in the shaping of the generative AI
  • Biases of the AI testers either implicitly or explicitly in the testing of the generative AI
  • Biases of the RLHF (reinforcement learning from human feedback) either implicitly or explicitly by the assigned human reviewers imparting training guidance to the generative AI
  • Biases of the AI fielding facilitation for the operational use of the generative AI
  • Biases in any setup or default instructions established for the generative AI in its daily usage
  • Biases purposefully or inadvertently encompassed in the prompts entered by the user of the generative AI
  • Biases of a systemic condition versus an ad hoc appearance as part of the random probabilistic output generation by the generative AI
  • Biases arising as a result of on-the-fly or real-time adjustments or data training occurring while the generative AI is under active use
  • Biases introduced or expanded during AI maintenance or upkeep of the generative AI application and its pattern-matching encoding
  • Other

Mull over the list for a moment or two.

If you were to somehow stamp out any chance of biases being introduced via the AI developers or AI researchers, you are still confronted with a plethora of other means that can inevitably encompass biases. Focusing on only one or even a few of the potential leakages is insufficient. The other paths all provide further opportunities for biases to shimmy into the picture.

Getting rid of generative AI biases is akin to a complex convoluted whack-a-mole gambit.

TruthGPT And The Choices Of Bias Handling

We have covered the aspect that coping with errors, falsehoods, and AI hallucinations is underway and you can expect an ongoing deluge of announcements about AI advances dealing with those issues.

The same is not quite as easy for the matter of biases.

What might a TruthGPT do or be devised to do about biases?

Consider these three possible options:

  • 1) Anything goes. Devise the generative AI to spout anything at all without any semblance of filtering associated with biases. Let it all hang out.
  • 2) Allow settings for “preferred” biases. Devise the generative AI to produce biases that are considered “preferred or favored” as per those that devise, field, or use the generative AI.
  • 3) No biases allowed. Devise the generative AI that no biases of any kind are permitted, such that at all times in all manner of use there aren’t ever biases expressed in any of the outputted essays.

You can undoubtedly imagine the outcries and controversy associated with each of the above options. None of the options is likely to be entirely satisfactory. They all have their own respective demons and pitfalls.

I address this next.

For the Anything Goes option of generative AI, the biases would be continually front and center. The maelstrom of societal protest and contempt would be enormous. This would seemingly cause immense pressure to close down the generative AI. You might also readily imagine that regulators and lawmakers would be spurred into action, seeking to establish new AI Laws to shut down this type of generative AI.

In the case of the Allow Settings option of generative AI, the notion is that someone gets to decide which biases they are accepting of. It could be that the company devising the AI sets the parameters. It could be that the company fielding the generative AI sets the parameters. Another idea being floated is that each user would be able to choose their preferred sets of biases. When you first use such a generative AI, you are perhaps presented with options or you can feed your preferences into the AI app during setup.

This latter approach might seem as though it would be pleasing to all. Each person would get whatever biases they preferred to see. Case closed. Of course, this is unlikely to be quite so welcomed all told. The notion that people could be immersing themselves in biases and using generative AI as a kind of echo chamber for those biases is certainly going to rouse societal angst.
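To see what the Allow Settings option might look like under the hood, here is a purely hypothetical sketch. Every name and parameter is invented for illustration; I am not depicting any actual generative AI product.

```python
# Purely hypothetical sketch of per-user bias preference settings.
# All names, fields, and choices below are invented for illustration.
from dataclasses import dataclass

@dataclass
class BiasSettings:
    political_lean: str = "neutral"   # e.g., "neutral", "lean-left", "lean-right"
    flag_slanted_output: bool = True  # warn the user when output may be slanted

def build_standing_instructions(settings: BiasSettings) -> str:
    # The chosen preferences become standing instructions fed to the model.
    instructions = f"Answer from a {settings.political_lean} perspective."
    if settings.flag_slanted_output:
        instructions += " Preface any potentially slanted passages with a warning."
    return instructions

print(build_standing_instructions(BiasSettings(political_lean="lean-left")))
```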

Finally, in the instance of the No Biases option, this sounds good but raises a litany of associated problems. Let’s reexamine the circumstance of generative AI that outputs an essay stating positive remarks about a particular political leader. It could be that some view this to be a true essay and absent of bias. On the other hand, there might be others that insist this is a biased essay since it unduly exaggerates the positives or fails to provide the counterbalancing negatives to proffer a balanced perspective. This illustrates the biases conundrum.

You see, errors such as two plus two equaling four or five are relatively clearcut to cope with. Falsehoods such as the wrong year of birth as stated for a President are relatively straightforward to clear up. AI hallucinations such as the use of a jet airplane in the 1800s are also relatively apparent to deal with.

How is generative AI supposed to be devised to contend with biases?

A mind-bending question, for sure.

TruthGPT As To The Feasibility And Reality Thereof

Let’s play a game.

Suppose that TruthGPT is aimed to be the type of generative AI that presumably will have no biases whatsoever. It is absolutely and inarguably absent of bias. Furthermore, no matter what the user does, such as entering biased statements or trying to goad the generative AI toward producing bias-laden outputted essays, the generative AI won’t do so.

As an aside, you might almost instantly wonder how this type of generative AI will deal with questions of a historical nature. Imagine that someone asks about the topic of political biases. Does that come under the umbrella of “biases” and therefore the generative AI would indicate it will not respond to the query? How far does this rabbit hole go?

Anyway, if we assume for purposes of mindful ponderance that TruthGPT will be the No Biases variant of generative AI, we have to then consider these outcomes:

  • Impossible
  • Possible
  • Other

The outcomes consist of either this being an impossible goal, and thus one that will not be attained, or a goal that is possible but might come with some sobering wrinkles. I have also included an Other outcome to encapsulate some in-betweeners.

First, let’s discuss the impossibility. If the chore or project is impossible, you might be leaning toward urging that it not be attempted. No sense in pursuing something that is impossible. Well, giving this some added thought, the impossibility does in fact have some silver lining associated with it. Allow me to explain.

Here are potential reasons that the TruthGPT might be impossible to bring to fruition and yet would still be worthwhile to undertake:

  • 1) Impossible because the mission or vision can never be attained
  • 2) Impossible but worth doing anyway for the potential side benefit of notable contributions toward advancing AI all told
  • 3) Impossible though can serve as an attention-getting bonanza for having tried
  • 4) Impossible and those involved will change their tune, pivoting or fudging the original intended goal
  • 5) Impossible yet will scoop up top AI talent and aid in undercutting competition
  • 6) Other

Likewise, we can surmise some of the TruthGPT aspects for the outcome of being attainable or possible to achieve:

  • 1) Possible and will produce a timely and irrefutably successful attainment
  • 2) Possible but will take a lot longer and be much more costly than anticipated
  • 3) Possible though the result will end up quite short of the intended goal
  • 4) Possible yet belatedly and embarrassingly eclipsed by other generative AI doing so too
  • 5) Possible however internal chaos and leadership difficulties make things ugly and unseemly
  • 6) Other

And to complete the list, here are some of the Other considerations:

  • 1) Other is that this is all talk and no action, never getting underway
  • 2) Other such as AI Law or societal AI Ethics tossing a wrench into the endeavor
  • 3) Other might be that the effort gets sold/bought by others that want the AI or the talent
  • 4) Other could consist of a surprise collaborative arrangement rather than a standalone effort
  • 5) Other wildcards include shocking discoveries that stoke AI existential risk
  • 6) Other

Due to space constraints herein, I won’t go into the specifics of all of those permutations. If reader interest is sufficiently sparked, I’ll gladly cover this in more detail in a later column.

Conclusion

George Washington purportedly said: “Truth will ultimately prevail where there are pains to bring it to light.”

Dealing with the biased aspects of AI is not merely a technological issue that can be resolved via a technological fix. The likely pains of bringing to light a sense of “truth” via generative AI are manifold. You can expect that AI Ethics and AI Law will be an essential part of figuring out where this is all headed.

There is a knocking at the cabin door.

It could be that outside the door there is (according to the rumor mill):

  • TruthGPT
  • HonestGPT
  • UntruthfulGPT
  • DishonestGPT
  • ConfusedGPT
  • BaffledGPT
  • RandomGPT
  • Etc.

Buddha might provide some insights on this matter: “There are only two mistakes one can make along the road to truth; not going all the way, and not starting.” In the rapidly advancing efforts of AI, we ought to be asking whether we are making such mistakes and, if so, what we should be doing about it.

And that’s the honest truth.


