Yonatan Mintz tries to keep both the promise and the peril in mind in his research on what he calls the “human-sensitive applications” of artificial intelligence.
On the promise side, the UW-Madison assistant professor of industrial and systems engineering is working on a project to make diabetes treatment more efficient and effective in poor urban areas in India. A computer-based board game he’s helped develop is intended to shed light on the different ways machines and humans learn. He said it could eventually lead to ways humans and AI can work together to solve problems, such as in emergencies when time is precious.
On the peril side, he’s worried less about the “Terminator problem” — that artificial intelligence could one day escape human control and take over the world, a la the “Terminator” movie franchise — than about the more immediate dangers AI already poses.
“What are the design principles that we need to apply in order to make sure that when we come up with these things and deploy them in the wild, these tools work for us?” he said. “There must be some kind of democratic controls in place” as AI becomes more common.
When not at work, the 31-year-old Mintz, who lives with his wife in Madison’s Greenbush area, has of late been experimenting with different ethnic cuisines, including that of his native Israel.
UW-Madison is making major investments in data science, including building a new 350,000-square-foot home for the School of Computer, Data and Information Sciences. What needed to happen to get to this point?
The reason (data science) didn’t exist prior to 2013 is just that the technology wasn’t there, or, even if it was, (companies) like Epic and Oracle needed time to collect enough (data) before you could do something with it. The second thing that needed to happen is taking that data and coming up with either a way to process it to make predictions or a way to help make decisions. Once those problems were 90% solved, you could start asking questions about, OK, now that I have this capacity, how do I actually turn all this stuff into something useful?
What’s the difference between AI and machine learning?
AI is kind of like a broader discipline on trying to figure out, in an automated way, how to do different tasks, or (to) automate different sorts of tasks. So it would include things like having a robot walk around, playing Go, obviously ChatGPT and human language — a very, very broad discipline. Machine learning is kind of like the mathematical part of that. The discipline of machine learning is looking at how we come up with algorithms and make sure that they are theoretically sound, that we can actually implement them and that things work mathematically, in order to help with some of these learning tasks — prediction or different types of decision-making.
Elon Musk and Steve Wozniak are among the people who have signed an open letter calling for a six-month moratorium on advanced AI development. What do you think of that?
It’s not clear to me what pausing or stopping AI research means. The best interpretation I can get from what they’re saying is trying to limit the amount of data being processed, and the capacity that way. But to me this all seems a bit alarmist, and I’m not super clear that this is really that big of an issue yet, especially because of the breadth of what AI research is and how many people are doing it. It’s one thing to understand abstract mathematical problems, but once you start getting into these issues of, well, what does it mean to have human language? What does it mean to have thoughts and put them on paper? That’s a philosophical question. That’s no longer a math question. The problem is not well-defined enough to be able to make a statement like, ah yes, this is where the issues are. The real focus should be risk mitigation, not necessarily halting. We should have more engineers being critical of the tools they’re designing and fewer grand statements made by (people) who are maybe slightly farther away from the research that’s actually being done.
So you should bake risk mitigation into the work as it goes along?
Yes, that’s part of it. Making the hard choices at each individual point in the design process and making a commitment of: This is how I’m doing this. This is how the management’s going to work. This is what the training data was. This is how we’re documenting all these things to make sure we can trace it all the way through. I’m less worried about Terminators in a thousand years than I am about the dangers that are happening today. Like if we want to get automated vehicles on the road, there are things we’re going to have to answer. Imagine if all my driving data for my automated vehicle comes from a driver in New York or in LA, and then I put a Wisconsinite in the seat. Do you think the average Wisconsinite’s going to be as comfortable with the aggressive driving that the car’s going to do?
What about the values-alignment problem, or how you align human values with the values of an automated-decision system?
I think we need to be cautious with these kinds of things, because societal values and norms change over time, and whatever values we decide are important to maintain now are probably not the same ones that we want to maintain later. Something that seems universal or normal now might be unsavory even in 10 or 20 years. To me, that’s why the more pertinent way to address something like this is to put the mechanisms and the safeguards in place so that some kind of decision-maker is able to essentially pull the plug on what’s going on.
What do you like to do when you’re not working?
I really like cooking — Middle Eastern and some Jewish food, but I’m really prolific, so I’ve been experimenting with Korean food and recently trying to learn Mexican food. I’ve been watching a lot of Rick Bayless videos. If I didn’t live in an apartment I would be into barbecue, but it’s been harder to convince the landlord that I should be allowed to do that.