Saryu Nayyar is CEO of Gurucul, a provider of behavioral security analytics technology and a recognized expert in cyber risk management.

Society is on the precipice of an AI revolution. ChatGPT and GPT-4 from OpenAI, Google’s Bard and many other entrants in the AI race represent the early stages of what promises to be transformational technology. Bill Gates calls these AI platforms one of the “four most important milestones in digital technology.”

What Mr. Gates and I are referring to is technically called “artificial general intelligence,” or AGI, which today is still a concept but is growing more viable by the day. AGI is the notion that a machine can mimic the intelligence of the human mind and its ability to learn to solve virtually any type of problem. The machine can understand input from the outside world and act in a way that’s indistinguishable from a human.

Contrast AGI with a single-purpose “narrow AI” that solves just one problem. A good example is the AI that analyzes an organization’s daily sales figures to forecast future sales. That’s all the AI does, and the machine learning behind it is self-contained. Such a forecast helps the company plan and manage inventory to optimize profits.

AGI is trained on much broader data, often using natural language processing, leading to limitless applications. AGI-based tools are already used to provide online customer service, personalized tutoring and training, assistance to tourists and facility visitors, advice to small investors and much more. Proponents expect that entirely new fields of business will be built around the capabilities of AGI.

But these tools also have a dark side. Just as they can be used for good, they can be used for bad—to create malware, write convincing phishing messages, create deep fake images and videos, conduct social manipulations and so on.

This raises the question: Should we start laying the groundwork now for the ethics of AGI?

Society Has Been Down This Road Before

One of the first chatbots was created at MIT in 1964. Professor Joseph Weizenbaum warned then about the potential harm of projects devoted to enabling computers to understand human speech. Presciently, Weizenbaum envisioned computers being able to monitor human discussions and report information to government agencies. Hey Siri, could that really happen? Well, sure. The issue is concerning enough that AI-driven voice assistants from China face opposition in Western markets over the possibility of the government grabbing access to private data.

Weizenbaum’s fears show that ethical concerns over computers’ capabilities are nothing new. As we enter the exciting age of AGI-led possibilities, perhaps we should take lessons from what happened with social media platforms.

When applications like MySpace, Facebook and the like first launched, they were touted as a means to bring people together and enable self-expression through personal posts and photo sharing. The platforms’ intent was to connect people in a convenient, friendly way.

What the platforms’ founders didn’t envision was that one day, these networks would bombard members with annoying advertisements that creepily follow them around. They didn’t worry that they were asking members to hand over their most personal details to large corporations or possibly even governments (e.g., TikTok). They didn’t expect that disinformation would interfere in elections or that children would be bullied or exposed to harmful content.

As a result, the operations of these social platforms are now under scrutiny, and they may face government regulation if they can’t gain control over content and data privacy. The same will be true for AGI platforms if we don’t establish guardrails early on.

Corporations Shouldn’t Hold All The Power

Here are just a few issues to think about in these early stages. How should AGI be regulated, if at all, and by whom? Who’s liable if a chatbot returns wrong or false information, such as medical advice, that’s subsequently acted upon? Who owns the content? How can we prevent misinformation or disinformation from being disseminated? How do we know what data these systems are trained on? Are there biases in the training? How do we deal with data leaks, like information or conversations being exposed?

Sam Altman, cofounder and CEO of OpenAI, is thinking ahead. In a recent interview with the New York Times, Altman said, “The way this should work is that there are extremely wide bounds of what these systems can do, that are decided by, not Microsoft or OpenAI, but society, governments, something like that, some version of that, people, direct democracy. We’ll see what happens. And then within those bounds, users should have a huge amount of control over how the AI behaves for them because different users find very different things offensive or acceptable or whatever. And no company …, I think, is in a place where they want to, or should want to say, here are the rules.”

He certainly has the right attitude. It would be great to get more founders, investors and technologists on board to think about some of these concerns before they blow up to be societal failures. AGI can’t be all about making money for corporations without any consideration for how it affects people and society.

