
AI-generated misinformation is everywhere—identifying it may be harder than you think



Credit: Pixabay/CC0 Public Domain

Artificial though it may be, the concept of “intelligence” doesn’t seem to jibe with a computer-generated image of uniformed cats toting assault rifles.

Yet that visual slur, which supports a debunked story about immigrants in Ohio eating pets, has become a signature image from the 2024 U.S. presidential election. It was created using artificial intelligence (AI) by supporters of Republican nominee Donald Trump and circulated online by the former president himself.

This is the first election to play out in the era of widespread access to generative AI, which can create seemingly original content, and disreputable websites and our social media feeds have featured a deluge of fabricated and misleading text and visuals.

To learn more about the role of this technology in the contest, Maryland Today spoke to three University of Maryland faculty experts: College of Information Professor Jennifer Golbeck, who studies social media algorithms and the spread of extremist content; linguistics Professor Philip Resnik, who pursues computational approaches to language and psychology with a joint appointment in the Institute for Advanced Computer Studies; and journalism Assistant Professor Daniel Trielli M.Jour. ’16, who examines how search engines amplify misinformation and intentional disinformation, among other topics.

With AI technology rapidly developing, there are few controls on how it’s used, they said, leaving voters responsible for separating good information from bad.

What’s your marquee example of questionable AI use in the 2024 election?

Golbeck: Taylor Swift, for sure. It’s astonishing to me that Donald Trump thought he could get away with posting those Swifties for Trump images. I don’t think anyone thought they were real images of Taylor Swift, but that wasn’t the point. Whether or not it’s the reason she came out and endorsed Kamala Harris, it gave her the opportunity to say, “My image was used in AI, I’m concerned about this, and now I need to tell you who I’m really going to vote for.”

Resnik: The Biden robocall in New Hampshire (created by a political consultant associated with an obscure primary opponent’s campaign) that was designed to suppress turnout among Biden supporters in the Democratic primary was an indicator of what the future might bring.

Trielli: One of the biggest is the blatantly false charge that Haitian immigrants to Ohio were breaking a social norm by eating people’s pets. The point of this was not to have a local effect and flip Ohio, but to create a general, nationwide anti-immigrant sentiment for political gain. The AI-generated images that came out of that were not portrayed as being real, but they show how disinformation campaigns often work, by creating a negative emotional vibe against someone or something.

How are most of us encountering AI-generated mis- or disinformation about the election?

Trielli: The chances of encountering it, particularly on social media, are close to 100%. AI can generate emotionally appealing content with great speed, so there is a lot of it, and no matter who we are, someone we follow is going to share some of it. Beyond misinformation and disinformation, however, we are seeing a lot of generative AI content that might not be designed to make you believe something untrue happened, but to get you to draw a connection between a person and an idea, positive or negative.

Resnik: AI has the ability to tirelessly amplify false messages on social media, and there is work in political psychology and psychology more generally showing that even false messages break through and gain acceptance if they’re repeated enough and speak to people’s biases enough.

However, we don’t always encounter it as a blatant, whole-cloth fabrication. My former student Pranav Goel Ph.D. ’23 showed in his dissertation how you can take a legitimate news story and find something in it—maybe an oversimplified headline—that supports a narrative not present in the full piece, yet can be amplified as misinformation. He showed in a very systematic way how this was happening with news sources associated both with the right wing and with progressives.

Golbeck: The role of AI in this election in some ways defies predictions. A few years ago, there was an expectation of a lot of deepfakes that were really convincing and would confuse people about what’s real and what isn’t. At least in terms of the images and videos, there is a ton of AI-generated content, but very little of it that becomes popular is confused with reality. It’s almost like a new way to create memes.

It’s different on the text side. Misinformation-amplifying bots on social media now have access to generative AI tools like ChatGPT, which makes them sound much more human and lets them hold realistic conversations. They are more convincing than they were in the last election—those of us who interact with bots know they can now yell at you convincingly.

Who’s doing it, and who are their targets?

Golbeck: Of course, the Russians are involved in it this year—although they’re not now, and have never been, the only state actor involved. Certainly, their bots are pushing pro-MAGA and pro-Trump content, but they’re also pushing far-left content to suppress votes. With some voters that means pushing Gaza war content, not to win votes for Trump, but to lose votes for the Democrats. This reflects the Russian, and before that Soviet, expertise in creating social division in the United States.

Trielli: The strategies and the targets vary, but one thing that’s important to understand is that these tactics work only because the Electoral College makes the United States election system very fragile. Sometimes one state can determine who is the next president of the United States, and maybe just a few tens of thousands of people in one part of that state. Generative AI allows for a great volume and variety of content, and it’s done with the hope that one of these messages is going to hit those few people at the right time.

What’s your biggest worry about AI-fueled deception in politics?

Golbeck: There’s a classic fascist political tactic where the goal isn’t necessarily to get people to believe a lie, it’s to make them unsure of how to discern what’s true and what’s not. My biggest concern is having a situation where people say, “I have no idea what’s real, so I’m not going to trust anything.” When you do that, it lays the foundation for fascist and authoritarian politics to thrive.

How can we protect ourselves from being taken in?

Trielli: An important thing is understanding that when we see something we generally agree with, we’re more likely to accept it as true even if it isn’t. As journalists, we’re taught to confront our own biases to protect against them. But with everyone on social media, we’re all publishers of information now. Before you share something, you can use a technique called “lateral reading,” where you start with a piece of information and then look around for other information and context to see whether the thing you want to share is actually plausible.

Golbeck: When you see something that makes you angry, you should be suspicious—a lot of misinformation is designed to create that big emotional response, and it’s something we’re finding AI is extremely good at. The hook might just be something you find funny or interesting: I’ve seen people from strong information backgrounds forward things they found interesting because they forgot to cross-reference.

Resnik: I had this experience years ago reading the Unabomber Manifesto, where it’s going on about the environment and technology, and I found myself thinking, “Yeah, yeah, that’s a good point, very logical … wait, what!?” When somebody sounds reasonable to you but is making a very unreasonable point, you have to work to separate “how this makes me feel” from “what this really means.” For people who are not trained to do this, I think it can be very difficult—but to overcome misinformation and disinformation that targets our emotions, that’s exactly what we have to do.

Provided by
University of Maryland


Citation:
Q&A: AI-generated misinformation is everywhere—identifying it may be harder than you think (2024, October 9), retrieved 9 October 2024 from

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.



