Because of digital deception, this election season is like no other.

There are certainly a lot of political ads, but there’s something else you need to look out for: fake, manipulated pictures, videos and audio.

What experts here in New Orleans say is very spooky is that just about anyone can make these fakes, and people already have, in an attempt to shape your thoughts and opinions ahead of this election.

“What a bunch of malarkey.” Those words came from a robocall sent out two days before the New Hampshire primary.

The robocall featured what sounded like President Joe Biden saying, “This Tuesday only enables the Republicans in their quest to elect Donald Trump again.”

These are not Biden’s words; the audio was generated with artificial intelligence using deepfake technology.

This call represents one of the many ways deepfakes are being used to influence the 2024 U.S. presidential election.

A deepfake is an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.

Earlier this year, a political consultant paid a New Orleans magician $150 to do something that, to the untrained eye, would seem like magic: create an audio recording of Biden saying something he never said.

“It’s important that you save your vote for the November election,” the robocall said.

“It’s a crazy story,” said Paul Carpenter, who made the voice. He says this wasn’t the first time he had been asked to create a fake audio recording of a politician.

“When I did the Lindsey Graham, which I listened to, it was, ‘If you’re not going to vote for blank, who would you vote for?’” Carpenter said. “So, I thought this is what that would be. So, he said, ‘Can you do Biden?’ I said, ‘Yeah, let me go work on the voice.’”

Once he captured Biden’s voice from an interview online, the fake audio took only minutes to generate. It was then turned into a robocall that reached 20,000 people.

“Through our conversations, I didn’t know until after the fact. He was always telling me, ‘I am creating a new product; we are showing it to campaign managers,’” Carpenter said.

The Federal Communications Commission fined political consultant Steven Kramer $6 million. He has also been indicted in New Hampshire on state charges of felony voter suppression and misdemeanor impersonation of a candidate.

The robocall marked one of the first times a voice-cloning deepfake was used for voter suppression. Other wide-reaching deepfakes have attempted to influence voting behavior on social platforms, too.

For instance, on May 10, 2023, Donald Trump Jr. reposted a video on X, formerly Twitter, that received over 3 million views. In it, a deepfake of Anderson Cooper praises Trump.

Taylor Swift also called out Trump for sharing AI-generated images that appeared to show her endorsing him, along with pictures of supposed “Swifties for Trump.”

The Ron DeSantis campaign even shared a video on X that contained deepfake images of Trump hugging Dr. Anthony Fauci. The post received over 10 million views.

Creating deepfakes like these used to take multiple weeks and could cost thousands of dollars, but now they can be made in a matter of minutes at little to no cost.

“It’s pretty good in terms of the whole compensation,” said Nick Mattei, associate professor of computer science at Tulane University, who says the practice is becoming more prominent.

“I think there’s going to be more and more stuff in this next election cycle that is going to involve more applications of generative AI and other technologies trying to get people to vote or not vote or think certain things,” Mattei said. “So, yeah, it’s interesting.”

Mattei says the technology can be used for a lot of good, like how it is being used at Tulane.

“They just had a big project,” Mattei said. “The president was on campus looking at using AI to help identify cancerous cells in tissue samples — that’s joint with biomedical engineering as well. So that’s a really big project we have going on.”

But Mattei says that in the wrong hands, it could be used for things that are not so helpful.

“You know, like a wrecking ball to bring down a building that’s falling down,” Mattei said. “Or you can use it to bring down a house. You could add in a political candidate to a place where they’ve never been, and that’s a bad use of the exact same technology. And that’s kind of one of those questions where things get a little gray.”

The FCC ruled in February that robocalls using AI-generated voices are illegal, but deepfakes on social media and in campaign advertisements are not covered by that ruling.

More than 3 in 4 Americans believe it is likely AI will be used to affect the election outcome, according to an Elon University poll conducted in April 2024. Many voters in the same poll also said they are worried they are not prepared to detect fake photos, videos and audio on their own.

One community member said, “The AI plays into preconceived notions. So, you know, if you’re conservative and the AI plays into what you believe, then yeah, you’re going to believe it. And if you’re a liberal, it’s the same way.”

Independent researchers have worked to track the spread and impact of AI creations. Earlier this year, a group of researchers at Purdue created a database of political deepfake incidents, which has since logged more than 500. Surprisingly, a majority of those videos were not created to deceive people; rather, they are satire, educational content or political commentary.

Researchers on the project say many deepfakes are likely designed to reinforce the opinions of people who were already predisposed to believe their messaging.

At least half of U.S. states are trying to combat the confusion with laws regulating the use of AI in political campaigns. Louisiana is not among them.

“It’s so dangerous,” said state Rep. Mandie Landry. This past legislative session, she introduced a House bill that would have made it illegal to deceive voters with false impersonations or false depictions of political candidates through deepfakes.

“This was more about stealing someone’s identity, and it passed through the House and the Senate,” Landry said.

It was vetoed by the governor.

In vetoing the bill, the governor said, “While I applaud the efforts to prevent false political attacks, I believe this bill creates serious First Amendment concerns as it relates to emerging technologies. The law is far from settled on this issue, and I believe more information is needed before such regulations are enshrined into law … legally speaking, it’s already illegal to knowingly deceive voters.”

There was also a Senate bill that would have required anyone making a deepfake video to label it as such, but it did not pass.

“There need to be punishments for people who are doing these types of things, again using someone’s likeness without their permission,” said Dr. Jill Schiefelbein, chief experience officer at an artificial intelligence company called Render.

As an expert in the field, she says this is nothing new.

“When it comes to deepfakes, what’s really interesting is they have been around for seven-plus years,” said Schiefelbein. “We are now just becoming more aware of them, more cognizant of them, and with any evolution in technology, it’s the same.”

When it comes to legislation, Schiefelbein says it can be tricky.

“It takes time for the law to catch up because once you put something into law, there are hard consequences, right?” Schiefelbein said. “There is a line that you can and can’t cross. And until something is fully understood, I understand the hesitation to make laws and legislation on this. But I encourage our legislators, our business leaders and concerned citizens to focus on what can be done, not just on development and innovation, but on what can be done to make consumers of information more aware. And I think the labeling system really is a solid starting point, but it’s not the endpoint.”

Schiefelbein says there are things to look for when trying to tell whether something is real or fake.

“When you are looking at videos online, you can look for minor glitches or discrepancies in the background,” Schiefelbein said. “Look for slightly elongated objects, for background images that don’t quite match up and sometimes even random little numbers generated in small places. In images, there’s a lot of different tells that you can be looking for. But the biggest thing is when it comes to identifying the veracity of information you find — and this is not just online, I would say it is anywhere — make sure you can have multiple sources confirm whatever you are finding. Don’t take it at face value. If you’re wondering if it’s too good to be true, oftentimes it is. So, make sure you’re verifying that information before massively sharing it.”

Social media companies and U.S. intelligence agencies say they are also tracking nefarious AI-driven influence campaigns and are prepared to alert voters about malicious deepfakes and disinformation.


