We may look on our time as the moment civilization was transformed as it was by fire, agriculture and electricity. In 2023, we learned that a machine taught itself how to speak to humans like a peer, which is to say, with creativity, truth, errors and lies. The technology, known as a chatbot, is only one of the recent breakthroughs in artificial intelligence — machines that can teach themselves superhuman skills. In April, we explored what’s coming next at Google, a leader in this new world. CEO Sundar Pichai told us AI will be as good or as evil as human nature allows. The revolution, he says, is coming faster than you know.
Scott Pelley: Do you think society is prepared for what’s coming?
Sundar Pichai: You know, there are two ways I think about it. On one hand I feel, no, because you know, the pace at which we can think and adapt as societal institutions, compared to the pace at which the technology’s evolving, there seems to be a mismatch. On the other hand, compared to any other technology, I’ve seen more people worried about it earlier in its life cycle. So I feel optimistic. The number of people, you know, who have started worrying about the implications, and hence the conversations are starting in a serious way as well.
Our conversations with 50-year-old Sundar Pichai started at Google’s new campus in Mountain View, California. It runs on 40% solar power and collects more water than it uses — high-tech that Pichai couldn’t have imagined growing up in India with no telephone at home.
Sundar Pichai: We were on a waiting list to get a rotary phone for about five years. When it finally came home, I can still recall it vividly. It changed our lives. To me it was the first moment I understood the power of what getting access to technology meant, and it probably led me to be doing what I’m doing today.
What he’s doing, since 2019, is leading both Google and its parent company, Alphabet, valued at $1.5 trillion. Worldwide, Google runs 90 percent of internet searches and 70 percent of smartphones. But its dominance was attacked this past February when Microsoft unveiled its new chatbot. In a race for AI dominance, in March, Google released its version named Bard.
Sissie Hsiao: It’s really here to help you brainstorm ideas, to generate content, like a speech, or a blog post, or an email.
We were introduced to Bard by Google Vice President Sissie Hsiao and Senior Vice President James Manyika. The first thing we learned was that Bard does not look for answers on the internet like Google search does.
Sissie Hsiao: So I wanted to get inspiration from some of the best speeches in the world…
Bard’s replies come from a self-contained program that was mostly self-taught — our experience was unsettling.
Scott Pelley: Confounding, absolutely confounding.
Bard appeared to possess the sum of human knowledge…
…with microchips more than 100-thousand times faster than the human brain. We asked Bard to summarize the New Testament. It did, in five seconds and 17 words. We asked for it in Latin–that took another four seconds. Then, we played with a famous six-word short story, often attributed to Hemingway.
Scott Pelley: For sale. Baby shoes. Never worn.
The only prompt we gave was ‘finish this story.’ In five seconds…
Scott Pelley: Holy Cow! The shoes were a gift from my wife, but we never had a baby…
From the six-word prompt, Bard created a deeply human tale with characters it invented — including a man whose wife could not conceive and a stranger, grieving after a miscarriage, and longing for closure.
Scott Pelley: I am rarely speechless. I don’t know what to make of this. Give me that story…
We asked for the story in verse. In five seconds, there was a poem written by a machine with breathtaking insight into the mystery of faith. Bard wrote, “she knew her baby’s soul would always be alive.” The humanity, at superhuman speed, was a shock.
Scott Pelley: How is this possible?
James Manyika told us that over several months, Bard read most everything on the internet and created a model of what language looks like. Rather than search, its answers come from this language model.
James Manyika: So, for example, if I said to you, Scott, peanut butter and?
Scott Pelley: Jelly.
James Manyika: Right. So, it tries and learns to predict, okay, so peanut butter usually is followed by jelly. It tries to predict the most probable next words, based on everything it’s learned. So, it’s not going out to find stuff, it’s just predicting the next word.
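Manyika’s “peanut butter and jelly” example is next-word prediction: count which words tend to follow which, then predict the most probable continuation. The sketch below is a toy illustration of that idea only (a bigram counter over a made-up corpus; real language models are vastly larger, and nothing here is Google’s code):

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = ("peanut butter and jelly . peanut butter and honey . "
          "peanut butter and jelly").split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequently observed next word.
    return following[word].most_common(1)[0][0]

print(predict_next("peanut"))  # "butter" always follows "peanut"
print(predict_next("and"))     # "jelly" follows "and" twice, "honey" once
```

The model “isn’t going out to find stuff”; it only replays the statistics of what it has seen, which is Manyika’s point.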
But it doesn’t feel like that. We asked Bard why it helps people and it replied – quote – “because it makes me happy.”
Scott Pelley: Bard, to my eye, appears to be thinking. Appears to be making judgments. That’s not what’s happening? These machines are not sentient. They are not aware of themselves.
James Manyika: They’re not sentient. They’re not aware of themselves. They can exhibit behaviors that look like that. Because keep in mind, they’ve learned from us. We’re sentient beings. We’re beings that have feelings, emotions, ideas, thoughts, perspectives. We’ve reflected all that in books, in novels, in fiction. So, when they learn from that, they build patterns from that. So, it’s no surprise to me that the exhibited behavior sometimes looks like maybe there’s somebody behind it. There’s nobody there. These are not sentient beings.
Zimbabwe born, Oxford educated, James Manyika holds a new position at Google — his job is to think about how AI and humanity will best co-exist.
James Manyika: AI has the potential to change many ways in which we’ve thought about society, about what we’re able to do, the problems we can solve.
But AI itself will pose its own problems. Could Hemingway write a better short story? Maybe. But Bard can write a million before Hemingway could finish one. Imagine that level of automation across the economy.
Scott Pelley: A lot of people can be replaced by this technology.
James Manyika: Yes, there are some job occupations that’ll start to decline over time. There are also new job categories that’ll grow over time. But the biggest change will be the jobs that’ll be changed. Something like more than two-thirds will have their definitions change. Not go away, but change. Because they’re now being assisted by AI and by automation. So this is a profound change which has implications for skills. How do we assist people to build new skills? Learn to work alongside machines. And how do these complement what people do today?
Sundar Pichai: This is going to impact every product across every company and so that’s, that’s why I think it’s a very, very profound technology. And so, we are just in early days.
Scott Pelley: Every product in every company.
Sundar Pichai: That’s right. AI will impact everything. So, for example, you could be a radiologist. You know, if I– if I– if you think about five to 10 years from now, you’re gonna have an AI collaborator with you. It may triage. You come in the morning. You– let’s say you have 100 things to go through. It may say, ‘These are the most serious cases you need to look at first.’ Or when you’re looking at something, it may pop up and say, ‘You may have missed something important.’ Why wouldn’t we, why wouldn’t we take advantage of a super-powered assistant to help you across everything you do? You may be a student trying to learn math or history. And, you know, you will have something helping you.
We asked Pichai what jobs would be disrupted. He said, “knowledge workers.” People like writers, accountants, architects and, ironically, software engineers. AI writes computer code too.
Today Sundar Pichai walks a narrow line. A few employees have quit, some believing that Google’s AI rollout is too slow, others–too fast. There are some serious flaws. James Manyika asked Bard about inflation. It wrote an instant essay in economics and recommended five books. But days later, we checked. None of the books is real. Bard fabricated the titles. This very human trait, error with confidence, is called, in the industry, hallucination.
Scott Pelley: Are you getting a lot of hallucinations?
Sundar Pichai: Yes, you know, which is expected. No one in the, in the field has yet solved the hallucination problems. All models do have this as an issue.
Scott Pelley: Is it a solvable problem?
Sundar Pichai: It’s a matter of intense debate. I think we’ll make progress.
To help cure hallucinations, Bard features a “Google it” button that leads to old-fashioned search. Google has also built safety filters into Bard to screen for things like hate speech and bias.
Scott Pelley: How great a risk is the spread of disinformation?
Sundar Pichai: AI will challenge that in a deeper way; the scale of this problem will be much bigger.
Bigger problems, he says, with fake news and fake images.
Sundar Pichai: It will be possible with AI to create– you know, a video easily. Where it could be Scott saying something, or me saying something, and we never said that. And it could look accurate. But you know, on a societal scale, you know, it can cause a lot of harm.
Scott Pelley: Is Bard safe for society?
Sundar Pichai: The way we have launched it today, as an experiment in a limited way, I think so. But we all have to be responsible in each step along the way.
This past spring, Google released an advanced version of Bard that can write software and connect to the internet. Google says it’s developing even more sophisticated AI models.
Scott Pelley: You are letting this out slowly so that society can get used to it?
Sundar Pichai: That’s one part of it. One part is also so that we get the user feedback. And we can develop more robust safety layers before we build, before we deploy more capable models.
Of the AI issues we talked about, the most mysterious is called emergent properties. Some AI systems are teaching themselves skills that they weren’t expected to have. How this happens is not well understood. For example, one Google AI program adapted, on its own, after it was prompted in Bengali, the language of Bangladesh, which it was not trained to translate.
James Manyika: We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali. So now, all of a sudden, we now have a research effort where we’re now trying to get to a thousand languages.
Sundar Pichai: There is an aspect of this which we call– all of us in the field call it a “black box.” You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got it wrong. We have some ideas, and our ability to understand this gets better over time. But that’s where the state of the art is.
Scott Pelley: You don’t fully understand how it works. And yet, you’ve turned it loose on society?
Sundar Pichai: Yeah. Let me put it this way. I don’t think we fully understand how a human mind works either.
Was it from that black box, we wondered, that Bard drew its short story that seemed so disarmingly human?
Scott Pelley: It talked about the pain that humans feel. It talked about redemption. How did it do all of those things if it’s just trying to figure out what the next right word is?
Sundar Pichai: I have had these experiences talking with Bard as well. There are two views of this. You know, there are a set of people who view this as, look, these are just algorithms. They’re just repeating what it’s seen online. Then there is the view where these algorithms are showing emergent properties, to be creative, to reason, to plan, and so on, right? And, and personally, I think we need to be, we need to approach this with humility. Part of the reason I think it’s good that some of these technologies are getting out is so that society, you know, people like you and others can process what’s happening. And we begin this conversation and debate. And I think it’s important to do that.
The revolution in artificial intelligence is the center of a debate ranging from those who hope it will save humanity to those who predict doom. Google lies somewhere in the optimistic middle, introducing AI in steps so that civilization can get used to it. We saw what’s coming next in machine learning earlier this year at Google’s AI lab in London — a company called DeepMind — where the future looks something like this.
Scott Pelley: Look at that! Oh, my goodness…
Raia Hadsell: They got a pretty good kick on them…
Scott Pelley: Ah! Goal!
A soccer match at DeepMind looks like fun and games, but here’s the thing: humans did not program these robots to play; they learned the game by themselves.
Raia Hadsell: It’s coming up with these interesting different strategies, different ways to walk, different ways to block…
Scott Pelley: And they’re doing it, they’re scoring over and over again…
Raia Hadsell, vice president of Research and Robotics, showed us how engineers used motion capture technology to teach the AI program how to move like a human. But on the soccer pitch the robots were told only that the object was to score. The self-learning program spent about two weeks testing different moves. It discarded those that didn’t work, built on those that did, and created all-stars.
And with practice, they get better. Hadsell told us that, independent from the robots, the AI program plays thousands of games from which it learns and invents its own tactics.
Raia Hadsell: Here we think that red player’s going to grab it. But instead, it just stops it, hands it back, passes it back, and then goes for the goal.
Scott Pelley: And the AI figured out how to do that on its own.
Raia Hadsell: That’s right. That’s right. And it takes a while. At first all the players just run after the ball together like a gaggle of, you know, 6-year-olds the first time they’re playing ball. Over time what we start to see is now, ‘Ah, what’s the strategy? You go after the ball. I’m coming around this way. Or we should pass. Or I should block while you get to the goal.’ So, we see all of that coordination emerging in the play.
Scott Pelley: This is a lot of fun. But what are the practical implications of what we’re seeing here?
Raia Hadsell: This is the type of research that can eventually lead to robots that can come out of the factories and work in other types of human environments. You know, think about mining, think about dangerous construction work or exploration or disaster recovery.
Raia Hadsell is among 1,000 humans at DeepMind. The company was co-founded just 12 years ago by CEO Demis Hassabis.
Demis Hassabis: So if I think back to 2010 when we started nobody was doing AI. There was nothing going on in industry. People used to eye roll when we talked to them, investors, about doing AI. So, we couldn’t, we could barely get two cents together to start off with which is crazy if you think about now the billions being invested into AI startups.
Cambridge, Harvard, MIT: Hassabis has degrees in computer science and neuroscience. His Ph.D. is in human imagination. And imagine this: when he was 12, he was the number two chess champion in the world in his age group.
It was through games that he came to AI.
Demis Hassabis: I’ve been working on AI for decades now, and I’ve always believed that it’s gonna be the most important invention that humanity will ever make.
Scott Pelley: Will the pace of change outstrip our ability to adapt?
Demis Hassabis: I don’t think so. I think that we, you know, we’re sort of an infinitely adaptable species. You know, you look at today, us using all of our smartphones and other devices, and we effortlessly sort of adapt to these new technologies. And this is gonna be another one of those changes like that.
Among the biggest changes at DeepMind was the discovery that self-learning machines can be creative. Hassabis showed us a game playing program that learns. It’s called AlphaZero and it dreamed up a winning chess strategy no human had ever seen.
Scott Pelley: But this is just a machine. How does it achieve creativity?
Demis Hassabis: It plays against itself tens of millions of times. So, it can explore parts of chess that maybe human chess players and programmers who program chess computers haven’t thought about before.
Scott Pelley: It never gets tired. It never gets hungry. It just plays chess all the time.
Demis Hassabis: Yes. It’s kind of an amazing thing to see, because actually you set off AlphaZero in the morning and it starts off playing randomly. By lunchtime, you know, it’s able to beat me and beat most chess players. And then by the evening, it’s stronger than the world champion.
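Hassabis is describing self-play: a program that starts out playing randomly, plays itself over and over, and keeps whatever wins. The sketch below illustrates that idea on the toy game of Nim rather than chess (one pile of stones, each turn take one to three, whoever takes the last stone wins). Everything in it, the game, the win-rate table, the exploration rate, is our own simplification for illustration, not DeepMind’s method or code:

```python
import random

# Win statistics learned purely from self-play:
# (pile_size, stones_taken) -> [games_won, games_played]
WINS = {}

def choose(pile, explore=0.1):
    moves = [t for t in (1, 2, 3) if t <= pile]
    if random.random() < explore:
        return random.choice(moves)      # occasionally try a random move
    def rate(t):                         # otherwise pick the best win rate
        won, played = WINS.get((pile, t), [0, 1])
        return won / played
    return max(moves, key=rate)

def self_play(pile):
    history, player = [], 0
    while pile > 0:
        take = choose(pile)
        history.append((player, pile, take))
        pile -= take
        player = 1 - player
    winner = 1 - player                  # whoever took the last stone
    for who, p, t in history:            # credit each move by the outcome
        won, played = WINS.setdefault((p, t), [0, 0])
        WINS[(p, t)] = [won + (who == winner), played + 1]

random.seed(0)
for _ in range(20000):                   # the program plays itself
    self_play(random.randint(1, 10))

# With no strategy ever programmed in, it discovers winning moves; from a
# pile of 10 it tends toward taking 2, leaving a multiple of 4, which is
# the mathematically optimal play in this game.
print(choose(10, explore=0))
```

Like the morning-to-evening arc Hassabis describes, early games are random and later games exploit the strategies the earlier ones uncovered.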
Demis Hassabis sold DeepMind to Google in 2014. One reason was to get his hands on this: Google has the enormous computing power that AI needs. This computing center is in Pryor, Oklahoma, but Google has 23 of these, putting it near the top in computing power in the world. This is one of two advances that make AI ascendant now: first, the sum of all human knowledge is online and, second, brute force computing that “very loosely approximates” the neural networks and talents of the brain.
Demis Hassabis: Things like memory, imagination, planning, reinforcement learning, these are all things that are known about how the brain does it, and we wanted to replicate some of that in our AI systems.
Those are some of the elements that led to DeepMind’s greatest achievement so far — solving an ‘impossible’ problem in biology.
Most AI systems today do one or maybe two things well. The soccer robots, for example, can’t write up a grocery list or book your travel or drive your car. The ultimate goal is what’s called artificial general intelligence– a learning machine that can score on a wide range of talents.
Scott Pelley: Would such a machine be conscious of itself?
Demis Hassabis: So that’s another great question. We– you know, philosophers haven’t really settled on a definition of consciousness yet, but if we mean by sort of self-awareness and– these kinds of things– you know, I think there’s a possibility AI one day could be. I definitely don’t think they are today. But I think, again, this is one of the fascinating scientific things we’re gonna find out on this journey towards AI.
Even unconscious, current AI is superhuman in narrow ways.
Back in California, we saw Google engineers teaching skills that robots will practice continuously on their own.
Robot: Push the blue cube to the blue triangle.
They comprehend instructions…
And learn to recognize objects.
Robot 106: What would you like?
Scott Pelley: How ’bout an apple?
Ryan: How about an apple.
Robot 106: On my way, I will bring an apple to you.
Vincent Vanhoucke, senior director of Robotics, showed us how Robot 106 was trained on millions of images…
Robot 106: I am going to pick up the apple.
…and can recognize all the items on a crowded countertop.
Vincent Vanhoucke: If we can give the robot a diversity of experiences, a lot more different objects in different settings, the robot gets better at every one of them.
Now that humans have pulled the forbidden fruit of artificial knowledge…
Scott Pelley: Thank you.
…we start the genesis of a new humanity…
Scott Pelley: AI can utilize all the information in the world. What no human could ever hold in their head. And I wonder if humanity is diminished by this enormous capability that we’re developing.
James Manyika: I think the possibilities of AI do not diminish humanity in any way. And in fact, in some ways, I think they actually raise us to even deeper, more profound questions.
Google’s James Manyika sees this moment as an inflection point.
James Manyika: I think we’re constantly adding these, in, superpowers or capabilities to what humans can do in a way that expands possibilities, as opposed to narrow them, I think. So I don’t think of it as diminishing humans, but it does raise some really profound questions for us. Who are we? What do we value? What are we good at? How do we relate with each other? Those become very, very important questions that are constantly gonna be, in one case– sense exciting, but perhaps unsettling too.
It is an unsettling moment. Critics argue the rush to AI comes too fast — while competitive pressure– among giants like Google and start-ups you’ve never heard of, is propelling humanity into the future ready or not.
Sundar Pichai: But I think if you take a 10-year outlook, it is so clear to me, we will have some form of very capable intelligence that can do amazing things. And we need to adapt as a society for it.
Google CEO Sundar Pichai told us society must quickly adapt with regulations for AI in the economy, laws to punish abuse, and treaties among nations to make AI safe for the world.
Sundar Pichai: You know, these are deep questions. And, you know, we call this ‘alignment.’ You know, one way we think about: How do you develop AI systems that are aligned to human values– and including– morality? This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on. And I think we have to be very thoughtful. And I think these are all things society needs to figure out as we move along. It’s not for a company to decide.
We’ll end with a note that has never appeared on 60 Minutes but one that, in the AI revolution, you may be hearing often. The preceding was created with 100% human content.
Produced by Denise Schrier Cetta and Katie Brennan. Associate producer, Eliza Costas. Broadcast associate, Michelle Karim. Edited by Warren Lustig.