Michael Spence

Michael Spence | Photo Credit: Lekha Naidu

Michael Spence appears to be cautiously optimistic about Artificial Intelligence. “You could probably produce a shortage of jobs if you systematically went out to do that. But my best guess is that it is not going to happen,” he says, pointing out that if powerful tools are put in the hands of Indian entrepreneurs and technologists, they are more likely to use them to go after different problems than to simply automate “a whole lot of stuff.”

While he admits that he cannot prove it, he believes there are ways we could significantly modify the initial bias in the direction of automation. “I feel like Don Quixote, tilting at windmills. But I am pretty sure we will settle down,” says the Nobel Laureate in Economics, who recently delivered a talk titled Artificial Intelligence in the Age of Uncertainty, part of the Azim Premji University public lecture series, at the Bengaluru International Centre, Domlur.

Anurag Behar in conversation with Spence. | Photo Credit: Lekha Naidu

What can go wrong

Reassuring though this may seem, Spence, who was in conversation with Anurag Behar, the Chief Executive Officer of the Azim Premji Foundation, also voiced some very pertinent reservations about AI. “There is a whole bunch of quasi-practical issues that need to be dealt with,” he says at the talk. “It is a long list (of things) that could go wrong or cause an overreaction.”

For starters, AI is flooding the system with rubbish that people have trouble distinguishing from real content, says Spence. “I am not sure the market mechanisms can solve informational problems like this,” he says. Scarier, perhaps, is the use of these powerful technologies in national security and warfare. While fully automated weapons that can decide who will be killed aren’t here just yet, they aren’t too far away either, says Spence. “The first thing I have heard people suggest we do is to have a treaty… agree that we will never use a fully automated weapon,” he says, comparing fully automated weapons to nuclear, biological and chemical ones that had to be taken off the table because they are too dangerous. He believes another huge set of issues is likely to arise from data, whether around its security, privacy, responsible use, control or access.

Essentially “rubbish, data and warfare,” quips Behar before veering into a discussion about the limitations of AI and the uncertainties embedded in it. Take, for instance, the concept of artificial general intelligence, a hypothetical system whose cognitive capabilities equal or even exceed a human being’s, which could pose an existential threat to our species. “There is a debate about whether or not we will get there,” says Spence, pointing out that experts’ timelines for such an event range from ten years to never. “I don’t have nightmares about it,” he admits. “I think it is more interesting to think about the path we are going to take and where we are going to get rather than having a wildly intense debate on what the world is going to be like,” he says with a laugh. “Because…who knows.”

‘Lazy thinkers’

Responding to Behar’s very real concern that the natural tendency of students and teachers to resort to AI in education creates what he calls “lazy thinkers”, Spence says, “I completely agree that you can hand in B+ papers for a course without actually doing the work or understanding the material.” He also suggests that the old-fashioned way of testing, where one has to sit down and answer questions unaided by phones or computers, is perhaps not a bad idea. “We are going to have to find a way around that,” he says.


