Meta’s chief artificial intelligence scientist Yann LeCun is no stranger to attracting controversy on Twitter.
He’s had his fair share of online feuds over the years, in which he’s stridently defended the benefits of artificial intelligence (AI).
But something he posted on Wednesday night hit a nerve for one of Australia’s leading AI researchers.
“I’m all in favour of technological advances benefiting everyone. But first, that’s a goal for politicians and democracy to achieve,” Mr LeCun tweeted.
“The mere possibility of unequal distribution is not a sufficient reason to stop the progress of science and the development of technology.”
Rebecca Johnson, who’s an expert in the ethics of AI at the University of Sydney, said the comments signalled a big problem in the AI world right now.
She feels too many people in positions of power at tech companies don’t think it’s their responsibility to develop ethical AI that best serves the needs of society.
“People like Yann LeCun divorce themselves from accountability and say it’s up to the politicians and got nothing to do with me,” she said.
“These companies say ‘Oh yeah, we believe in ethics’. OK, well, then enforce it at all levels of your company, including your chief scientist and your CEO.
“We have got to move away from these individualistic rock star perceptions of, ‘You can’t touch me’.”
The rise of generative AI
Generative AI – which uses algorithms to create text, images, and audio – has exploded this year.
The ChatGPT model has taken the world by storm, with people using it to write a resume, explain complex concepts, make a diet plan or even get relationship or psychological advice.
That popularity has raised fears of students cheating on assignments, with five Australian states banning the technology.
Earlier this month, a group of AI experts even called for a six-month pause on development, citing “risks to society”.
Google and Microsoft have launched AI chatbots, with Google CEO Sundar Pichai describing AI as being good at “synthesising insights for questions where there’s no one right answer”.
What is the right answer?
The ethics of how these algorithms are built, and who deems them a success, is what Ms Johnson, a PhD researcher, is concerned about.
Put simply, evaluation metrics are used to determine things like whether an AI program has a bias against women, if it’s accurate or if it’s disseminating hate speech.
But those metrics can be deeply flawed, and shaped by companies competing for profit by chasing the highest ‘ethics’ score.
“They’re all trying to say ‘Hey, my model is better than your model’ but they are basing that on such poorly validated tests, if you showed them to any psychologist they would just laugh at you and throw you out,” Ms Johnson said.
“By delving into these evaluation benchmarks I found some really concerning stuff.”
For example, to assess the common sense of an AI model, the metrics looked at what an English-speaking, able-bodied seven-year-old from a middle-class background in a Western country would know.
“So what about a child that lives in Sudan? They might have common sense about all sorts of things, so that infuriated me,” Ms Johnson said.
Questions over benchmarks from Reddit
Some benchmarks have even been drawn from a Reddit forum called Am I The Asshole?, a place where people can ask others if they are on the wrong side of a dispute.
“That’s just straight up crazy – the text from that subreddit was built into the evaluation benchmarks and then people use that to say ‘my model scored eight out of ten’ on ethics,” Ms Johnson said.
Ms Johnson said she would “absolutely” support government regulation of these evaluation benchmarks, but said the crux of the issue comes down to a lack of cognitive diversity in tech spheres.
“Let’s have cross-disciplinary conversations so more people can spot these problems.”
This push for more opinions prompted Ms Johnson to launch ChatLLM23 – a major symposium being held in Sydney today, which will bring together experts in everything from economics to the arts to discuss how to mitigate the risks of AI.
Those risks are currently being assessed by Industry Minister Ed Husic, who is soon expected to release a review into rapid developments in AI and how the government should respond to the emergence of technologies such as ChatGPT, Midjourney and others that could disrupt many industries.
Mr Husic has been contacted for comment.