Graphic video of suicide spreads from Facebook to TikTok to YouTube as platforms fail moderation test

A graphic video of a man committing suicide on Facebook Live has spread from there to TikTok, Twitter, Instagram and now YouTube, where it ran alongside ads and attracted thousands more views. Do what they will, these platforms can’t seem to stop the spread, echoing past failures to block violent acts and disinformation.

The original video was posted to Facebook two weeks ago and has since made its way onto all the major video platforms, often beginning with innocuous footage before cutting to the man’s death. These techniques go back many years in the practice of evading automatic moderation; by the time people flag the video manually, the original goal of exposing unwitting viewers to it has already been accomplished.
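To see why tacking innocuous footage onto the front of a clip works against automated matching, consider a toy sketch of hash-based re-upload detection. This is purely illustrative and does not represent any platform’s actual pipeline: frames are modeled as short byte strings, and exact SHA-256 hashes stand in for the perceptual hashes real fingerprinting systems use.

```python
import hashlib

def file_hash(frames):
    """Fingerprint an entire 'video' (a list of frame byte strings) as one blob."""
    h = hashlib.sha256()
    for frame in frames:
        h.update(frame)
    return h.hexdigest()

def frame_hashes(frames):
    """Fingerprint each frame individually (a stand-in for perceptual hashing)."""
    return {hashlib.sha256(frame).hexdigest() for frame in frames}

# A known-bad clip, fingerprinted when it was first removed.
bad_clip = [b"frame-A", b"frame-B", b"frame-C"]
blocked_files = {file_hash(bad_clip)}
blocked_frames = frame_hashes(bad_clip)

# A re-upload that prepends benign footage before the same content.
reupload = [b"cat-video-1", b"cat-video-2"] + bad_clip

# Naive whole-file matching misses it: the added frames change the hash.
print(file_hash(reupload) in blocked_files)             # False

# Frame-level matching still flags the overlap with known-bad frames.
print(bool(frame_hashes(reupload) & blocked_frames))    # True
```

Real systems use more robust perceptual hashes that survive re-encoding and resizing, but the arms race is the same in spirit: any edit that changes the fingerprinted region (splicing, cropping, re-filming a screen) forces the matcher to work harder, leaving manual flagging to fill the gap in the meantime.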

It’s similar in many ways to how the COVID-19 disinformation motherlode Plandemic spread and wreaked havoc, despite those platforms deploying their ostensibly significant moderation resources to prevent it.

For all the platforms’ talk of advanced algorithms and instant removal of rule-violating content, these events seem to show them failing when it counts the most: in extremity.

The video of Ronnie McNutt’s suicide originated on August 31, and it took Facebook nearly three hours to take it down in the first place, by which time it had been seen and downloaded by countless people. How could something so graphic, so plainly in violation of the platform’s standards, and actively being flagged by users be allowed to stay up for so long?

In a “community standards enforcement report” issued Friday, Facebook admitted that its army of (contractor) human reviewers, whose thankless job it is to review violent and sexual content all day, had been partly disabled due to the pandemic.

“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram.

“The number of appeals is also much lower in this report because we couldn’t always offer them. We let people know about this and if they felt we made a mistake, we still gave people the option to tell us they disagreed with our decision.”

McNutt’s friend and podcast co-host Josh Steen told TechCrunch that the stream had been flagged long before he killed himself. “I firmly believe, because I knew him and how these interactions worked, had the stream ended it would’ve diverted his attention enough for SOME kind of intervention,” Steen wrote in an email. “It’s pure speculation, but I think if they’d have cut his stream off he wouldn’t have ended his life.”

When I asked Facebook about this, I received the same statement others have: “We are reviewing how we could have taken down the livestream faster.” One certainly hopes so.

But Facebook cannot contain the spread of videos like this — or of the various shootings and suicides that have been broadcast on its Live platform in the past — once they’re out there. At the same time, it’s difficult to understand how the other platforms were caught so flat-footed: TikTok had the video queued up on users’ “For You” pages, exposing countless people through an act of algorithmic irresponsibility. Surely, even if it’s not possible to keep the content off the service entirely, there ought to be something preventing it from being actively recommended to people.

YouTube is another, later offender: Steen and others have captured many cases of the video being run by monetized accounts. He sent screenshots and video showing ads from Squarespace and the Motley Fool running ahead of the video of McNutt.

It’s disappointing that the largest video platforms on the planet, which never seem to stop crowing about their prowess in shutting down this kind of content, don’t seem to have any serious response. TikTok, for instance, only bans an account after it makes multiple attempts to upload the clip. What’s the point of giving people a second or third chance here?

Facebook couldn’t seem to decide whether the content was in violation or not, as evidenced by several re-uploads in various forms that were not taken down when flagged. Perhaps these are just the ones slipping through the cracks while thousands more are nipped in the bud, but why should we give a company like Facebook, which commands billions of dollars and tens of thousands of employees, the benefit of the doubt when it fails for the nth time on something so important?

“Facebook went on record in early August saying they were returning back to normal moderation rates, but that their AI tech actually had been improved during the COVID slowdowns,” Steen said. “So why’d they totally blow their response to the livestream and the response time after?”

“We know from the Christchurch Live incident that they have the ability to tell us a couple of things that really need to be divulged at this point because of the viral spread: how many people in total viewed the livestream and how many times was it shared, and how many people viewed the video and how many times was it shared? To me these stats are important because it shows the impact that the video had in real time. That data will also confirm, I think, where the viewerships spiked in the livestream,” he continued.

On Twitter and Instagram, entire accounts have popped up just to upload the video, or impersonate McNutt using various transformations of his username. Some even add “suicide” or “dead” or the like to the name. These are accounts created with the singular intent of violating the rules. Where are the fake and bot activity precautions?

Videos of the suicide have appeared on YouTube and are taken down only inconsistently. Others simply use McNutt’s image or the earlier parts of his stream to attract viewers. Steen and others who knew McNutt have been reporting these regularly, with mixed success.

One channel I saw had pulled in more than half a million views by leveraging McNutt’s suicide, first posting the live video (with a preroll ad) and then using his face, perhaps to attract morbid viewers. When I pointed these out to YouTube, the company demonetized them and removed the offending video — though Steen and his friends had reported it days earlier. I can’t help but feel that the next time this happens — or, more likely, elsewhere on the platform where it is happening right now — there will be less accountability, or none, because no press outlets are making a fuss.

These platforms’ focus is on invisible suppression of the content and on retaining users and activity; if stringent measures would reduce those all-important metrics, they won’t be taken, as we’ve seen on other social media platforms.

But as this situation and others before it demonstrate, there seems to be something fundamentally lacking in the way live video is provided and monitored. Obviously it can be of enormous benefit, as a tool for reporting current events and more, but it can be and has been used to stream horrific acts and to commit other forms of abuse.

“These companies still aren’t fully cooperating and still aren’t really being honest,” said Steen. “This is exactly why I created #ReformForRonnie because we kept seeing over and over and over again that their reporting systems did nothing. Unless something changes it is just going to keep happening.”

Steen is feeling the loss of his friend, of course, but also disappointment and anger at the platforms that allow his image to be abused and mocked with only a perfunctory response. He’s been rallying people around the hashtag to put pressure on the major social platforms to say something, anything substantial about this situation. How could they have prevented this? How can they better handle it when it is already out there? How can they respect the wishes of loved ones? Perhaps none of these things are possible — but if that’s the case, don’t expect them to admit it.

If you or someone you know needs help, please call the National Suicide Prevention Lifeline at 800-273-TALK (8255) or text the Crisis Text Line at 741-741. International resources are available here.

Written by Devin Coldewey
This news first appeared on https://techcrunch.com/2020/09/13/graphic-video-of-suicide-spreads-from-facebook-to-tiktok-to-youtube-as-platforms-fail-moderation-test/ under the title “Graphic video of suicide spreads from Facebook to TikTok to YouTube as platforms fail moderation test”. Bolchha Nepal is not responsible for, or affiliated with, the opinions expressed in this news article.