Another well-known AI technologist is speaking up about potential pitfalls of AI.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Geoffrey Hinton tells The New York Times. “I console myself with the normal excuse: If I hadn’t done it, somebody else would have.”

Hinton, 75, resigned from Google last month after more than a decade in order to “freely speak out about the risks of AI,” the Times says. He won the 2018 Turing Award for pioneering some of the technology that now powers ChatGPT and similar generative AI tools—and has earned the nickname “Godfather of AI”—but now says a part of him regrets his life’s work.

Hinton’s immediate concern is the spread of misinformation: a flood of fake photos, videos, and text that makes it impossible to know “what is true anymore.” New tools that create deepfakes, or computer-generated clones of a famous person’s voice and appearance, make it easy for anyone with a computer and internet connection to misrepresent public figures.

Midjourney, for example, recently paused free access to its AI image generator amid a spike in demand. Some people were using the platform to generate fake images of public figures like Donald Trump, Elon Musk, and the Pope.

Hinton also says AI will replace jobs at an alarming rate. Most of those positions are “drudge work,” he acknowledges, but AI “might take away more than that.”

In the long term, he fears machines will begin to exhibit unplanned behavior, like writing and executing original code on their own. The most apocalyptic outcome of this could come in the form of autonomous weapons—killer robots—which Hinton strongly opposes. (US lawmakers recently moved to make sure AI has no part in launching nuclear weapons.)

Hinton says Google was a “proper steward” of AI for the majority of his tenure, but last fall, it issued an internal “code red” following the launch of Microsoft’s Bing AI chatbot. Google subsequently launched Bard, and the two tech giants are now “locked in a competition that might be impossible to stop,” Hinton tells the Times.

Hinton did not sign the March open letter, backed by Elon Musk, Steve Wozniak, and hundreds of other AI experts, calling for a temporary pause on AI development; he wanted to leave Google before speaking out.


Hinton spoke with Google CEO Sundar Pichai prior to his departure, but declined to tell the Times what they discussed.

Google has, however, tangled with now-former employees who criticized its approach to AI. In 2020, Timnit Gebru, an AI researcher, was fired after warning of large language models’ ability to propagate racist, sexist, and otherwise harmful rhetoric and spread misinformation. In 2022, Blake Lemoine called the Google LaMDA chatbot “sentient” and accused Google of racing to bring the product to market without fully understanding its capabilities. He was also let go.

Google is now cranking away on Bard, which is built on a lightweight version of LaMDA. Pichai says the company plans to add Bard to the core search engine at some point, while it reportedly works on a separate AI-based search engine dubbed Magi. Microsoft has already updated its Bing search engine with an AI chatbot-based experience.

In a statement, Google tells the Times that it is “committed to a responsible approach to AI. We’re continually learning to understand emerging risks while also innovating boldly.”
