V is for…

Voice cloning

Given only a minute of a person speaking, some AI tools can now quickly put together a “voice clone” that sounds remarkably similar. Here the BBC investigated the impact that voice cloning could have on society – from scams to the 2024 US election.

W is for…

Weak AI

It used to be the case that researchers would build AI that could play single games, like chess, by training it with specific rules and heuristics. An example would be IBM’s Deep Blue, a so-called “expert system”. Many AIs like this can be extremely good at one task, but poor at anything else: this is “weak” AI.

However, this is changing fast. More recently, AIs like DeepMind’s MuZero have been released with the ability to teach themselves to master chess, Go, shogi and 42 Atari games without being told the rules. Another of DeepMind’s models, called Gato, can “play Atari, caption images, chat, stack blocks with a real robot arm and much more”. Researchers have also shown that ChatGPT can pass various exams that students take at law, medical and business school (although not always with flying colours).

Such flexibility has raised the question of how close we are to the kind of “strong” AI that is indistinguishable from the abilities of the human mind (see “Artificial General Intelligence”).

X is for…

X-risk

Could AI bring about the end of humanity? Some researchers and technologists believe AI has become an “existential risk”, alongside nuclear weapons and bioengineered pathogens, so its continued development should be regulated, curtailed or even stopped. What was a fringe concern a decade ago has now entered the mainstream, as various senior researchers and intellectuals have joined the fray.

It’s important to note that there are differences of opinion within this amorphous group – not all are total doomists, and not all outside this group are Silicon Valley cheerleaders. What unites most of them is the idea that, even if there’s only a small chance that AI supplants our own species, we should devote more resources to preventing that from happening. There are some researchers and ethicists, however, who believe such claims are too uncertain and possibly exaggerated, serving to support the interests of technology companies.

Y is for…

YOLO

YOLO – which stands for “You Only Look Once” – is an object detection algorithm that is widely used by AI image recognition tools because of how fast it works. (Its creator, Joseph Redmon of the University of Washington, is also known for his rather esoteric CV design.)
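For readers who want to see the idea in practice, here is a minimal sketch using the open-source “ultralytics” Python package – one of several libraries that implement YOLO. The package, the pretrained weights file and the photo name are illustrative assumptions, not details from the article:

    # A minimal sketch, assuming the open-source "ultralytics" package and its
    # pretrained "yolov8n.pt" weights; the photo name is a hypothetical example.
    from ultralytics import YOLO

    # Load a small pretrained detector (downloaded automatically on first use).
    model = YOLO("yolov8n.pt")

    # Detect objects in one forward pass over the whole image -- the single
    # "look" that gives the algorithm its name and its speed.
    results = model("street_scene.jpg")

    # Report each detected object's class label and confidence score.
    for box in results[0].boxes:
        label = results[0].names[int(box.cls)]
        print(f"{label}: {float(box.conf):.2f}")

Because the whole image is processed in a single pass rather than region by region, this style of detector is fast enough to run on live video.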

Z is for…

Zero-shot

When an AI delivers a zero-shot answer, that means it is responding to a concept or object it has never encountered before.

So, as a simple example, if an AI designed to recognise images of animals has been trained on images of cats and dogs, you’d assume it’d struggle with horses or elephants. But through zero-shot learning, it can use what it knows about horses semantically – such as their number of legs or lack of wings – and compare those attributes with those of the animals it has been trained on.

The rough human equivalent would be an “educated guess”. AIs are getting better and better at zero-shot learning, but, as with any inference, they can get it wrong.
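As a rough illustration, here is a minimal sketch of zero-shot classification, assuming the Hugging Face “transformers” library and OpenAI’s CLIP model – neither of which is named in the article – with a hypothetical photo and label list. The model is simply handed the candidate labels at prediction time and asked which fits best:

    # A minimal sketch, assuming the Hugging Face "transformers" library and
    # OpenAI's CLIP model; the photo name and labels are hypothetical examples.
    from transformers import pipeline

    classifier = pipeline(
        "zero-shot-image-classification",
        model="openai/clip-vit-base-patch32",
    )

    # The candidate labels are supplied only at prediction time -- the model was
    # never trained on this particular list of animals.
    predictions = classifier(
        "animal_photo.jpg",
        candidate_labels=["a cat", "a dog", "a horse", "an elephant"],
    )

    # Each result pairs a label with a score, ranked from most to least likely.
    for p in predictions:
        print(p["label"], round(p["score"], 3))

The scores are the model’s “educated guess”: it ranks descriptions it was never explicitly trained on by how well each one matches the image.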
