
‘I was moderating hundreds of horrific and traumatising videos’


Getty Images

Social media moderators check for distressing or illegal photos and videos which they then remove

Over the past few months the BBC has been exploring a dark, hidden world – a world where the very worst, most horrifying, distressing, and in many cases, illegal online content ends up.

Beheadings, mass killings, child abuse, hate speech – all of it ends up in the inboxes of a global army of content moderators.

You don’t often see or hear from them – but these are the people whose job it is to review and then, when necessary, delete content that either gets reported by other users, or is automatically flagged by tech tools.

The issue of online safety has become increasingly prominent, with tech firms under more pressure to swiftly remove harmful material.

And despite a lot of research and investment pouring into tech solutions to help, ultimately for now, it’s still largely human moderators who have the final say.
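The flow the moderators describe works like a triage system: automated tools and user reports surface suspect posts, clear-cut violations can be removed automatically, and anything borderline lands in a queue for a person to judge. The sketch below illustrates that flow in Python; every name in it (flag_score, ReviewQueue, the thresholds) is hypothetical and for illustration only, not any platform's actual system.

# Illustrative triage sketch: content is flagged by user reports or an
# automated classifier; borderline cases go to a human moderator, who has
# the final say. All names and thresholds here are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Post:
    post_id: str
    text: str
    user_reports: int = 0

@dataclass
class ReviewQueue:
    items: List[Post] = field(default_factory=list)

    def enqueue(self, post: Post) -> None:
        # Held until a human moderator reviews it.
        self.items.append(post)

def flag_score(post: Post) -> float:
    # Placeholder classifier: returns a 0-1 "harmfulness" score.
    banned_terms = {"beheading", "abuse"}  # illustrative only
    hits = sum(term in post.text.lower() for term in banned_terms)
    return min(1.0, 0.5 * hits)

def triage(post: Post, queue: ReviewQueue,
           auto_remove_at: float = 0.9, review_at: float = 0.3) -> str:
    # Auto-remove clear violations; send anything borderline, or anything
    # reported by users, to the human review queue; otherwise leave it up.
    score = flag_score(post)
    if score >= auto_remove_at:
        return "removed_automatically"
    if score >= review_at or post.user_reports > 0:
        queue.enqueue(post)
        return "sent_to_human_review"
    return "left_up"

The final branch is the point the article keeps returning to: however good the automation gets, the ambiguous cases still end up in front of a person.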

Moderators are often employed by third-party companies, but they work on content posted directly on to the big social networks including Instagram, TikTok and Facebook.

They are based around the world. The people I spoke to while making our series The Moderators for Radio 4 and BBC Sounds were largely living in East Africa, and all had since left the industry.

Their stories were harrowing. Some of what we recorded was too brutal to broadcast. Sometimes my producer Tom Woolfenden and I would finish a recording and just sit in silence.

“If you take your phone and then go to TikTok, you will see a lot of activities, dancing, you know, happy things,” says Mojez, a former Nairobi-based moderator who worked on TikTok content. “But in the background, I personally was moderating, in the hundreds, horrific and traumatising videos.

“I took it upon myself. Let my mental health take the punch so that general users can continue going about their activities on the platform.”

There are multiple ongoing legal claims arguing that the work has destroyed the mental health of such moderators. Some of the former workers in East Africa have come together to form a union.

“Really, the only thing that’s between me logging onto a social media platform and watching a beheading, is somebody sitting in an office somewhere, and watching that content for me, and reviewing it so I don’t have to,” says Martha Dark who runs Foxglove, a campaign group supporting the legal action.


Mojez, who used to remove harmful content on TikTok, says his mental health was affected

In 2020, Meta, then known as Facebook, agreed to pay a settlement of $52m (£40m) to moderators who had developed mental health issues because of their jobs.

The legal action was initiated by a former moderator in the US called Selena Scola. She described moderators as the “keepers of souls”, because of the amount of footage they see containing the final moments of people’s lives.

The ex-moderators I spoke to all used the word “trauma” in describing the impact the work had on them. Some had difficulty sleeping and eating.

One described how hearing a baby cry had made a colleague panic. Another said he found it difficult to interact with his wife and children because of the child abuse he had witnessed.

I was expecting them to say that this work was so emotionally and mentally gruelling, that no human should have to do it – I thought they would fully support the entire industry becoming automated, with AI tools evolving to scale up to the job.

But they didn’t.

What came across, very powerfully, was the immense pride the moderators had in the roles they had played in protecting the world from online harm.

They saw themselves as a vital emergency service. One said he wanted a uniform and a badge, comparing himself to a paramedic or firefighter.

“Not even one second was wasted,” says someone who we called David. He asked to remain anonymous, but he had worked on material that was used to train the viral AI chatbot ChatGPT, so that it was programmed not to regurgitate horrific material.

“I am proud of the individuals who trained this model to be what it is today.”

Martha Dark

Martha Dark campaigns in support of social media moderators

But the very tool David had helped to train might one day compete with him.

Dave Willner is the former head of trust and safety at OpenAI, the creator of ChatGPT. He says his team built a rudimentary moderation tool, based on the chatbot’s tech, which managed to identify harmful content with an accuracy rate of around 90%.

“When I sort of fully realised, ‘oh, this is gonna work’, I honestly choked up a little bit,” he says. “[AI tools] don’t get bored. And they don’t get tired and they don’t get shocked… they are indefatigable.”
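The article does not describe how that tool works internally, so the snippet below only illustrates what an accuracy figure like "around 90%" typically means in this context: the share of items on which the automated classifier's harmful/not-harmful call matches the human moderator's decision. The labels are invented for the example.

# Hedged illustration: measuring a moderation classifier's accuracy against
# human moderators' labels. The data below is made up for the example.
from typing import List

def accuracy(model_flags: List[bool], human_flags: List[bool]) -> float:
    # Fraction of items where the model agrees with the human decision.
    assert len(model_flags) == len(human_flags)
    matches = sum(m == h for m, h in zip(model_flags, human_flags))
    return matches / len(human_flags)

model = [True, True, False, False, True, False, True, False, False, True]
human = [True, True, False, False, True, False, True, False, True,  True]
print(f"accuracy = {accuracy(model, human):.0%}")  # prints: accuracy = 90%

A single percentage also hides which way the errors fall – missed harmful content versus wrongly removed posts – which is essentially the concern about bluntness and lost nuance that Dr Paul Reilly raises below.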

Not everyone, however, is confident that AI is a silver bullet for the troubled moderation sector.

“I think it’s problematic,” says Dr Paul Reilly, senior lecturer in media and democracy at the University of Glasgow. “Clearly AI can be a quite blunt, binary way of moderating content.

“It can lead to over-blocking freedom of speech issues, and of course it may miss nuance human moderators would be able to identify. Human moderation is essential to platforms,” he adds.

“The problem is there’s not enough of them, and the job is incredibly harmful to those who do it.”

We also approached the tech companies mentioned in the series.

A TikTok spokesperson says the firm knows content moderation is not an easy task, and it strives to promote a caring working environment for employees. This includes offering clinical support, and creating programmes that support moderators’ wellbeing.

They add that videos are initially reviewed by automated tech, which they say removes a large volume of harmful content.

Meanwhile, OpenAI – the company behind ChatGPT – says it’s grateful for the important and sometimes challenging work that human workers do to train the AI to spot such photos and videos. A spokesperson adds that, with its partners, OpenAI enforces policies to protect the wellbeing of these teams.

And Meta – which owns Instagram and Facebook – says it requires all companies it works with to provide 24-hour on-site support with trained professionals. It adds that moderators are able to customise their reviewing tools to blur graphic content.

The Moderators is on BBC Radio 4 at 13:45 GMT, Monday 11 November to Friday 15 November, and on BBC Sounds.
