The latest generative AI models are sharpening some long-standing debates about the privacy, bias and transparency of AI systems. These issues are often grouped under the headings of AI Ethics, or Responsible AI. Francesca Rossi points out that, among other things, previous systems were trained on data sets which were more heavily curated, so attempts could at least be made to minimise the amount of bias they displayed. Generative AI systems are hungry beasts, devouring as much as possible of the data available in the world – or at least, on the internet.

Rossi studied computer science at the University of Pisa in Italy, where she became a professor, before spending 20 years at the University of Padova. In 2015 she joined IBM’s T.J. Watson Research Lab in New York, where she is now an IBM Fellow, and also IBM’s AI Ethics Global Leader. She joined the London Futurists Podcast to discuss how we can ensure that AI is deployed responsibly as it becomes more and more capable.

Rossi is a member of numerous international bodies concerned with the beneficial use of AI, including the Partnership on AI, the Global Partnership on AI, and the Future of Life Institute. From 2022 to 2024 she is serving in the prestigious role of President of the Association for the Advancement of Artificial Intelligence (AAAI).

Old and new concerns

Generative AI systems have revived older concerns about AI, and also raised some new ones. The older ones include worries about the potential loss of human jobs – and these worries are now about white-collar jobs at least as much as the blue-collar jobs which until recently were expected to be the main area of risk.

The latest systems also raise fairly new issues regarding truth and accuracy, because their outputs look and sound so plausible that people will assume they are correct, or realistic, when in fact they are sometimes not. In addition, they raise concerns about copyright which the community of people who refer to themselves as AI ethicists weren’t so concerned about before.

Some people hope that all these issues can eventually be tackled by adding filters to the output of generative AI systems, using Reinforcement Learning from Human Feedback (RLHF), which leverages the human ability to identify preferred content, instead of relying entirely on automated training signals. Other people are even more ambitious, hoping that neuro-symbolic AI can contribute sufficient fact-checking and error correction without human intervention.
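To make the RLHF idea less abstract, the core of it is a reward model trained on human preference judgements between pairs of candidate outputs. The toy sketch below illustrates that preference-learning step in Bradley-Terry style; everything in it (the synthetic data, the linear reward model, the feature vectors) is an illustrative assumption, not how any production system is actually built.

```python
import numpy as np

# Toy sketch of the preference-learning step at the heart of RLHF.
# A real system would score transformer outputs with a large neural
# reward model; here each "output" is just a 3-dimensional vector.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hidden "human preference" direction the reward model should recover.
true_w = np.array([2.0, -1.0, 0.5])

# Pairs of candidate outputs, each reduced to a feature vector.
A = rng.normal(size=(200, 3))
B = rng.normal(size=(200, 3))
# Human label: 1.0 if output A is preferred over output B.
prefers_a = (A @ true_w > B @ true_w).astype(float)

# Fit a linear reward model w by maximising the log-likelihood of
# the human labels, with P(A preferred) = sigmoid(r(A) - r(B)).
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    p = sigmoid((A - B) @ w)
    w += lr * (A - B).T @ (prefers_a - p) / len(A)

# The learned reward model should now agree with most human labels.
accuracy = np.mean((sigmoid((A - B) @ w) > 0.5) == (prefers_a == 1.0))
```

In full RLHF the learned reward model is then used as the training signal for the generative model itself; the sketch stops at the point where human judgement enters the loop, which is the step the paragraph above describes.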

Looking further ahead, Rossi hopes that making some headway in tackling these Responsible AI issues could teach us invaluable lessons about what is often called “AI Safety”, or “AI Alignment”, the task of ensuring that very advanced AI, up to and including superintelligence, behaves in a way that is aligned to human values.

Public reactions to ChatGPT

The fact that millions of people have now used ChatGPT means that there is far greater awareness than just a few months ago about the impressive capabilities of generative AI. Rossi thinks this is a good thing, although it is important that people, especially those who understand how AI works, resist the temptation to anthropomorphise these systems, for instance by talking about them as if they understand the world around them, or as if they are already approaching human levels of intelligence.

ChatGPT and similar systems have tended to surprise people on the upside, in the sense that they were not yet expected to be as powerful as they have become. But they can also surprise us on the downside, by making mistakes that we could reasonably expect them to avoid, given that they have been trained on vast amounts of data, including the whole of Wikipedia and huge numbers of books. These shortcomings have led Rossi to doubt that just adding more compute power or more training data will close the gap between machines and humans.

Fast and slow, neural nets and GOFAI

One of the main themes of the recent annual meeting of the Association for the Advancement of Artificial Intelligence (AAAI), the association Rossi presides over, was the need to bridge the gap between the two main approaches to AI, namely neural networks and symbolic AI. This is also the focus of her own research work at IBM. It is increasingly conventional wisdom in the AI research community that these two approaches to AI will need to be used together to ensure that future AI systems are both helpful and non-harmful.

An analogy can be drawn between the two main approaches to AI and the two systems of thinking described by Daniel Kahneman in his 2011 book “Thinking, Fast and Slow”. Neural network systems (including modern deep learning systems and the transformer systems that arrived in 2017 with the publication by Google researchers of the “Attention is all you need” paper) can be seen as analogous to Kahneman’s fast thinking system, which relies on intuition and pattern matching.
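For readers curious what the “pattern matching” at the core of those transformer systems actually computes, here is a minimal sketch of scaled dot-product attention, the central operation of the “Attention is all you need” architecture. The shapes and random data are purely illustrative assumptions, not taken from any real model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row maximum for numerical stability.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each query position attends to every key position; scaling by
    # sqrt(d_k) keeps the dot products in a range where the softmax
    # distributes weight rather than collapsing onto one key.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))  # 4 query positions, dimension 8
K = rng.normal(size=(6, 8))  # 6 key/value positions
V = rng.normal(size=(6, 8))

out, weights = attention(Q, K, V)
```

The output for each query is a weighted mixture of the value vectors, with the weights learned from data in a real model; it is this soft, intuition-like matching that invites the comparison with Kahneman’s fast thinking.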

Rossi argues that it is potentially misleading to call deep learning systems fast because they rely on many hours of painstaking training. But so does most human intuition, which is seeded in millions of years of evolution, and cultivated during childhood and adolescence. It’s not until we are adults that we can reliably interpret most of the complicated signals provided by our environments. This is why experience is so valuable.

Symbolic AI systems, also known as Good Old Fashioned AI, or GOFAI for short, are analogous to Kahneman’s slow thinking system, which relies on logic. Humans exploit both thinking modalities when making decisions, so it seems reasonable to combine the two main AI approaches to advance AI.


Two partnerships on AI

Rossi is a member of both the Partnership on AI (PAI) and the Global Partnership on AI (GPAI). They share the idiosyncratic use of the conjunction “on”, and they share the goal of trying to work out how to make advanced AI beneficial for humans rather than harmful. But they have very different memberships and styles. PAI was established by US tech giants to facilitate discussions with non-government organisations and academia. The members of GPAI, by contrast, are governments, which are trying to work out how to regulate this fast-changing new technology. China is not a member of GPAI, which Rossi thinks is probably for the best in an initial phase, as it has such divergent values with regard to the future role of AI that discussions could become less fruitful.

That said, Rossi is acutely aware of the need to talk to people from different countries, with different views, backgrounds, and assumptions. In 2015, she spent a sabbatical year at the Radcliffe Institute at Harvard, which every year gathers together 50 fellows from many different disciplines, including both humanities and sciences. Until that point she had spent most of her professional life talking to other AI researchers, and this experience taught her the importance of finding new ways to talk about AI that were more inclusive. This was an invaluable lesson for the role she has gone on to play in convening AI ethics discussions between divergent groups.
