Artificial intelligence (AI) has been the buzz term of the year, understandably so, as developments around it have been nothing short of astonishing.

Just over a year ago it didn’t seem likely that AI systems would be able to generate lifelike video from text, yet OpenAI has already been touting a tool called Sora that can generate, from just a few lines of text, synthetic videos that are very hard to distinguish from reality.

With under three weeks until our elections, we have already seen a few examples of AI being used. This week we look at AI guidelines for political parties and tips for the public on how to spot possible AI-generated content. If you do see something dodgy online, report it to Real411.

One of the key issues surrounding AI is that as much as we need to acknowledge its potential, we also need to acknowledge its risks.

Physicist Stephen Hawking is quoted as having said: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” 

Concerns about the risks posed by AI have also been sounded by Elon Musk (and while we should take heed, it is deeply ironic that he should be concerned about risk and harm when he cares not a tinker’s cuss for the entire African continent, having left Twitter/X without a single person looking after trust and safety).


US tech entrepreneur Elon Musk, owner of Tesla, SpaceX and X, at a conversation event with British Prime Minister Rishi Sunak (unseen) in London on 2 November 2023. (Photo: EPA-EFE / Tolga Akmen)

While we need not be hysterical about the dangers, we do need to press for the adoption of a rights-based approach to AI. If we don’t, the risks feared by Hawking and others, along with the existing digital divide and inequalities, won’t just be replicated but will be baked into the systems – with only a few billionaires having the power to act.

The absence of a rights-based approach to social media is a central reason why social media systems incentivise polarisation, spread mis- and disinformation, normalise misogyny, and facilitate online bullying and crimes like sextortion. Given that AI may have a catastrophic impact on our world, it is essential that we start doing all we can to ensure it works to our benefit and serves the public interest.

What better time than elections, as we go to the polls, to start asking parties questions about their use of AI? It’s one of the core reasons that we, together with Alt Advisory, developed guidelines for political parties on AI. You can see the full version of the guidelines here.

For this piece, given its focus on AI, we asked Google’s Gemini to give us a summary of our Media Monitoring Africa (MMA) guidelines, which is useful to get a sense of the key elements:

General principles:

  • Link AI policy to existing party values and commitments (manifestos, data privacy policies);
  • Foster public trust by being transparent about AI use:
    • Disclose details on AI use in content generation/campaigns.
    • Be transparent about internal AI use (candidate selection, voter outreach);
  • Maintain human oversight for all AI systems;
  • Define allowed and prohibited uses of AI:
    • Commit to not using AI for misinformation, deepfakes or subverting democracy.
    • Use AI with the “do no harm” principle in mind.
    • Be cautious about AI use in elections; and
  • Protect privacy and confidentiality when using third-party AI platforms.

Addressing algorithmic bias:

  • Mitigate bias by training AI on local contexts and languages.

Developing AI expertise:

  • Train party members, especially communications teams, on using AI responsibly.

Managing dependencies:

  • Clarify collaborations with external AI providers;
  • Develop plans to manage dependence on third-party systems; and
  • Explore in-house AI development or customisation (if capacity allows).

Enforcement:

  • Define consequences for non-compliance with AI guidelines;
  • Develop plans for rectifying non-compliance; and
  • Establish mechanisms to enforce AI guidelines.

Tailoring AI guidelines:

  • Adapt general principles to fit each party’s specific context;
  • Create a diverse group to address specific details:
    • Party leadership
    • Communications staff
    • Technology experts
    • Constituent representatives; and
  • Customise guidelines to address relevant AI risks and challenges.

Transparency and collaboration:

  • Consider making AI guidelines public;
  • Collaborate with other parties for shared standards and accountability; and
  • Engage in public consultations to gather feedback.

Overall goal:

  • Use AI guidelines for self-regulation and responsible engagement with constituents;
  • Develop robust, context-specific AI guidelines and enforcement; and
  • Promote healthier political discourse and rebuild trust in the AI era.

The full set of guidelines is worth the read. Ideally, we would like a version of them to be included in the next code of conduct signed ahead of the 2026 elections. We will be meeting with the IEC to discuss this. Until then, we hope to still carry out some training with parties on AI.


(Illustration: Midjourney AI)

Complaints

So, what have we seen on Real411 in terms of AI and online harms? An early complaint showed Donald Trump endorsing a party. The video was clearly manipulated, with face movements not matching the dialogue, and the voice was slightly off as well.

Recently, other examples have used AI to sell products, featuring the likenesses of Johann Rupert and Leanne Manas, with slightly better quality if viewed on a small screen. If the viewer isn’t watching too carefully, they may well be believed.

More recently we were asked about THAT DA advert – whether the flag being burned is in fact AI, and whether the DA should, in terms of our guidelines, have declared the use of AI. 

We asked a senior executive from the biggest digital advertising company in South Africa for his expert take. 

His comments were derisory: “Honestly if it was made using AI it would be considerably better.” 


He went on: “This is just stock-standard animation tools used by someone who should be ashamed of themselves.” 

So, it would appear the DA advert, while not actually burning a real flag, made use of computer-generated animation to do so.

While there haven’t yet been very convincing efforts at disinformation using AI, it is only a matter of time, and audio AI – where it is easier to mimic the voices of famous people – is likely to emerge soon. Efforts to spot and act on such content are being undertaken by the Institute for Security Studies with Murmur Intelligence and MMA.

Sora from OpenAI is very convincing. Watch this video for a series of short clips, all showing off video generated from text. One of the clips is of a cat waking a person. We include it below.

Watch it; while it’s very realistic, the cat’s front paw is the giveaway. As the cat touches its owner’s face, its front paw duplicates, comes back and does the same thing.

Spotting AI requires paying close attention to detail, looking for things such as extra fingers, differently shaped ears and odd jewellery. As AI develops, it will become harder to tell synthetic content from reality. Here are a few other tips, again with thanks to Gemini for helping condense them into brief points:

Spotting AI on social media: A user’s guide

  • Content consistency: Look for unnatural consistency in language or visuals. Humans make slight errors; AI might not;
  • Missing sources: AI-generated content often lacks citations or sources. Be wary of articles without them;
  • Repetitive language: Pay attention to overuse of specific words or phrases, a sign of AI filling gaps;
  • Uncanny perfection: Images or videos might look too perfect, lacking the wear and tear of reality (watch for strange lighting or blurry details);
  • Unnatural emotions: AI-generated faces might appear overly happy or sad, with unconvincing expressions;
  • Inconsistent details: In images, for example, there might be odd shadows, extra fingers or nonsensical text in the background; and
  • Unrealistic claims: Be sceptical of outlandish promises or claims that seem too good to be true.

AI is constantly evolving. Manipulating information has become easier than ever before. Those with nefarious agendas, or whose aim is to distort reality to further a political agenda or to sow division, have the tools readily available to do just that.

In our current election period, the need for accurate, credible, reliable information is greater than ever before. A single red flag doesn’t guarantee AI, but multiple ones raise suspicion. If something seems off, report it to Real411. DM

Download the Real411 App on Google Play Store or Apple App Store.

William Bird is the director of Media Monitoring Africa (MMA). Thandi Smith heads the policy and quality programme at MMA, a partner in the 411 platform to counter disinformation.

