Why the fake news confidence trap could be your downfall | Technology



It’s a wild world out there online, with dis- and misinformation flying about at pace. I’m part-way through writing a book about the history of fake news, so I’m well aware that people making stuff up is not new. But what is new is the reach that troublemakers have, whether their actions are deliberate or accidental.

Social media and the wider web changed the game for mischief-makers, and made it easier for the rest of us to be inadvertently hoodwinked online (see: the odd “Goodbye Meta AI” trend that I wrote about this week for the Guardian). The rise of generative AI since the release of ChatGPT in 2022 has also supercharged the risks. While early research suggests our biggest fears about the impact of AI-generated deepfakes on elections are unfounded, the overall information environment is a puzzling one.

Seeing is believing?

That much is evident in information gathered by the Behavioural Insights Team (BIT) – a social-purpose organisation spun out of the UK government – and shared exclusively with me for TechScape. The survey of 2,000 UK adults highlights just how confusing the wild west web is at present.

While 59% of those surveyed by BIT think they can spot false information online, BIT researchers found that only 36% of people were confident that others could spot fake news.

That’s a problem for two reasons. One is our low confidence in other people’s ability to identify false stories. The other is the gulf in perception between our own abilities and those of the public at large. I suspect that if we actually measured how well people discern disinformation from the truth, it’d be closer to the lower number than the higher one. In short, we tend to think we’re smarter than we are.

Don’t believe me? For my first book, YouTubers, I commissioned a survey by YouGov to see how well the public recognised major figures on the platform. The team at YouGov recommended that, among the real names, I should add someone who didn’t exist as a kind of sense check to identify the proportion of people who were bluffing. A worrying number of respondents confidently said they knew the person the pollsters had invented – and knew them well.

A morass of misinformation

All of this matters because of the scale of the false info problem that’s out there.

Three-quarters of respondents to the BIT survey said they’d seen fake news in the last week, with X (which pared back its content moderation teams to the bone in favour of Elon Musk’s free speech absolutism) and TikTok considered the worst offenders. LinkedIn was seen as the least bad, though it’s not entirely clear whether that’s because many avoid the platform, which has a reputation for being boring.

Regardless, the findings sit uneasily with those who conducted the research. “Our latest research has added to the growing evidence that social media users are overconfident in their ability to spot false information,” says Eva Kolker, head of consumer and business markets at BIT. “Paradoxically, this might actually be making people more susceptible to it.”

In short: if you think you’re better than others at spotting fake news, you’re actually more likely to have lower defences – and fall foul of it when you (inevitably) encounter it online.

What should be done?

Responsibility to detect fake news lies with social media platforms and governments, not just individuals, the BIT says. Photograph: Rebecca Lewington/Cerebras Systems/Reuters

Well, a start would be empowering users to be more aware of the risks of fake news and the impact it can have when shared with their social circles. Things snowball quickly, as that Goodbye Meta AI post showed, thanks to the mob mentality mediated by social media algorithms. So thinking twice and clicking once is important. (I outlined better ways to combat threats to our data in this piece.)

But Kolker isn’t convinced that’s enough. “Many of our attempts to improve online safety have focused on improving the knowledge and capability of individual users,” she says. “While important, our research shows there are inherent limits to the effectiveness of this approach.”

“We can’t just rely on individuals changing their behaviour. To really combat misinformation we also need social media platforms to take action and regulators and government to intervene to level the playing field.”


Is it time for an intervention?

The BIT came up with a slate of recommendations it would put to governments and social media platforms to try to combat mis- and disinformation. First among them is to flag posts that contain false information as soon as they’re spotted, to make the public aware before they share. To Meta’s credit, that’s something it did with the Goodbye Meta AI trend, adding labels to posts pointing out that the information was not correct.

The BIT also recommends that platforms become more stringent in how much legal but harmful content they show. Conspiracy theories fester in a putrid information environment, and the BIT seem to be suggesting that the standard Silicon Valley approach – that sunlight is the best disinfectant – doesn’t cut it.

Except in one instance. Their third recommendation is regular public rankings of how much false or harmful content is on each platform.

Whether any of this will work is tough to say. I’ve been looking at the science alongside studies and surveys like BIT’s lately for a number of reasons, and every positive intervention also appears to have its drawbacks. But if the Goodbye Meta AI trend going viral shows us anything, it’s that we can’t just assume people are able to distinguish what’s real from what’s not.

Chris Stokel-Walker’s most recent book is How AI Ate the World. His next book, about the history of fake news, is due out in spring 2025.

The wider TechScape

Musau Mutisya uses the PlantVillage app to diagnose a maize plant on his farm in Matungulu sub-county in Machakos county, Kenya. Photograph: Stephen Mukhongi/The Guardian
  • AI can help as well as hinder – here’s how Kenyan farmers are using the tech.

  • Millions of websites are being jeopardised by a messy spat between WordPress and WP Engine. I report on the drama for Fast Company.

  • Here’s what happens when you’re falsely accused of murder by Tommy Robinson on X.

  • Hail Zuckus Maximus! The master of the metaverse is finally sorry … for being sorry, writes Marina Hyde.

  • OpenAI is shifting to become a for-profit entity, though one whistleblower is concerned.

  • California governor Gavin Newsom has vetoed an AI bill at the last minute, claiming it would drive smaller tech firms from his state.


