“Is it better to have a clock that doesn’t work at all, or one that loses a minute a day?” This riddle appears in Alice in Wonderland by mathematician Lewis Carroll, when the Mad Hatter has tea with Alice. It is also one of the questions that Vanesa Guerrero, 25, asked ChatGPT in two different ways. First, the artificial intelligence (AI) chose the clock that loses a minute a day, but when the question was rephrased, it changed its mind.

Guerrero uses the Mad Hatter’s question to explain that ChatGPT chooses one answer or its opposite depending on how the question is formulated. The same goes for the mathematical models she works with. “With my research I seek to develop tools, algorithms, methodologies, models and equations that allow you, through data, to make informed decisions that are interpretable, coherent and do not discriminate,” she explains. Guerrero was one of the winners of the L’Oréal-UNESCO For Women in Science awards for her work on algorithmic fairness in functional data. She has a doctorate in mathematics from the University of Seville in Spain and is a professor of Statistics at the Carlos III University of Madrid.

Guerrero seeks to ensure that her algorithms “make fair decisions” based on functional data (data observed as curves over time), such as a person’s salary or blood pressure. She explains that she does this by first defining what equity means in a specific context, then writing the mathematical formulation, and finally working out how to solve and apply it. In her daily work, Guerrero collaborates with professionals from other sectors, such as insurance companies and banks, to translate their problems into equations and solve them. In her world of algorithms, there is a range of difficulties: “Often what’s most complicated is the use of language itself, being able to understand what the needs are, even within mathematics itself,” she says.

Fair algorithms

Can algorithms ever be completely fair? Guerrero says she is working on it, but explains that biases are inherent to the data. “Artificial intelligence reproduces the biases that are in the data. It will evolve, just as society is evolving. Just as we now use more inclusive language, which was unthinkable 20 years ago, the same will happen with these types of tools based on artificial intelligence.” Both in statistics and in AI models, Guerrero warns that the selection of data is crucial so that “they are representative of what you want to study.”

In cases where ChatGPT has given results biased by gender, such as in disease prediction, Guerrero says a more nuanced approach to the data is needed. “If you want your algorithm or your model to be accurate when predicting a disease, but also for [it to be accurate] among your sensitive groups, such as men and women, you have to balance the total prediction error with the error of the groups,” she explains.

In other words, creators must know how the algorithm behaves both globally and within each specific group, and correct the errors it finds among the “sensitive groups.” Guerrero is also committed to gender equality in technological teams: “Today there are more men than women. We need teams where there are men, women and people of different religions. Each one has experienced something that they later bring to their work.”
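The balance Guerrero describes, between a model’s total prediction error and its error within each sensitive group, can be sketched in a few lines of code. The sketch below is only an illustration of the general idea, not Guerrero’s actual methodology; the function name, the variable names and the simple weighted objective are assumptions made for the example.

```python
# Minimal sketch (not Guerrero's actual method): balancing overall
# prediction error against per-group error, as described above.
# All names and the weighting scheme are illustrative assumptions.
import numpy as np

def balanced_error(y_true, y_pred, groups, fairness_weight=0.5):
    """Combine the total error with the worst per-group error.

    fairness_weight = 0 -> only overall accuracy matters;
    fairness_weight = 1 -> only the worst-off group matters.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    total_error = np.mean(y_true != y_pred)  # global misclassification rate
    group_errors = [
        np.mean(y_true[groups == g] != y_pred[groups == g])
        for g in np.unique(groups)           # e.g. "men", "women"
    ]
    worst_group_error = max(group_errors)
    # Weighted compromise between global and group-level performance
    return (1 - fairness_weight) * total_error + fairness_weight * worst_group_error

# Example: a disease predictor that is accurate overall but fails
# more often for one sensitive group scores worse on this metric.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["m", "m", "m", "m", "w", "w", "w", "w"]
print(balanced_error(y_true, y_pred, groups))
```

With fairness_weight set to 0, the metric reduces to ordinary accuracy; raising it penalizes models that, like the gender-biased disease predictors mentioned above, perform well on average but poorly for one group.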

Vanesa Guerrero at Carlos III University of Madrid. Photo: Sandra Benítez Peña

Failure to control the responses of artificial intelligence also poses a risk to knowledge in general. The new generations “are going to educate themselves by reading what ChatGPT gives them back,” says Guerrero. As a teacher, the mathematician is in favor of using the tool; however, she focuses on algorithmic architecture: “Research is needed, as are models and methods that correct [biases], until it is achieved naturally.”

Regulate black boxes

Another way to address the problems of artificial intelligence is with the ethical governance of systems. Last Tuesday, the European Union definitively approved the Artificial Intelligence Act, which will be applied progressively until 2026. The law establishes different obligations for AI applications depending on the risks of their use. “We do not have to stop the evolution of this technology, far from it, but rather assess the risks and regulate so that it does not negatively affect society,” she says.

Guerrero describes the artificial intelligence models created by big technology companies, such as OpenAI or Google, as “black boxes”: systems characterized by the enormous amounts of data they use and by their lack of transparency. “Not knowing what’s behind it makes you vulnerable. Why are they telling me this? Maybe they are using my private data. The fact that all these tools are in the hands of large private technology companies is also a risk.” Guerrero believes in “open science” and in open-source tools, so as to involve everyone who wants to understand, contribute to and develop the initiative. “Regulation is needed so that they cannot do whatever they want without going through some control,” she adds.

“Is a clock that tells the time exactly once every two years better than a clock that is right twice a day?” On this occasion, ChatGPT chose the stopped clock. And although it may seem that a clock that loses a minute a day is better than a broken one, the former only shows the correct time once every two years, while the latter does so twice a day: the losing clock must fall a full 12 hours (720 minutes) behind before its hands line up with the real time again, which takes about 720 days, or roughly two years.
