
Why It's So Hard to Stop Violence From Being Streamed Online

April 17, 2017

After another horrifying act of violence was shared on social media this weekend, many are asking not only why it happened but how it was able to happen online, with the grisly video remaining up for hours after it was posted.

In a statement, Facebook conceded that the Cleveland video shows it needs "to do better." The company is expected to face some tough questions about the issue this week when it kicks off its annual developers conference in San Jose, California. (The full statement is at the bottom of this piece.)

Cleveland police are still searching for Steve Stephens, who allegedly recorded a video of himself Sunday fatally shooting a 74-year-old man and then posted it on Facebook.

The video stayed up for about three hours before it was taken down, but the original, unedited version is still making the rounds on other sites, according to a Twitter user who claims to be the victim's grandson.

The incident has exposed a glaring problem that many social media companies have been struggling with in recent years, particularly with the roll-out of video streaming products such as Facebook Live, Instagram Live, and Periscope.

Part of the problem lies in how social media sites find and delete such content.

Facebook and other companies use artificial intelligence to sift through the countless videos and posts added to their platforms every day. However, the technology is still relatively young, and the software requires plenty of trial and error to "learn" which content is appropriate, which is not, and why.

While Facebook's AI is reportedly able to flag more content than human screeners, there's lingering skepticism over whether current technology can adequately parse out subtleties in language, especially considering that even human beings have a hard time picking up on certain nuances — like sarcasm or irony — when talking to people online.

Humans and AI, then, will have to work together to catch an offending video or post. One case in point is the real-time crisis-support feature recently added to Facebook Live, which aims to address the problem of suicides being live-streamed since the broadcasting tool launched last year.

Facebook did not immediately return a request for comment, but it's been reported that the company still relies heavily on its user base to flag and report offending content throughout the site. Those reports go to a community standards team for review, which in turn has made its own controversial decisions about content in the past.

Google relies on human screeners to filter out violent, offensive, or otherwise illegal content on YouTube, but it still counts on users to report questionable activity. Twitter, meanwhile, recently rolled out a new algorithm to find and suspend problematic accounts (the company has long been criticized for being slow to address harassment).

Here is Facebook's full statement:

"On Sunday morning, a man in Cleveland posted a video of himself announcing his intent to commit murder, then two minutes later posted another video of himself shooting and killing an elderly man. A few minutes after that, he went live, confessing to the murder. It was a horrific crime — one that has no place on Facebook, and goes against our policies and everything we stand for.

"As a result of this terrible series of events, we are reviewing our reporting flows to be sure people can report videos and other material that violates our standards as easily and quickly as possible. In this case, we did not receive a report about the first video, and we only received a report about the second video — containing the shooting — more than an hour and 45 minutes after it was posted. We received reports about the third video, containing the man’s live confession, only after it had ended.

"We disabled the suspect’s account within 23 minutes of receiving the first report about the murder video, and two hours after receiving a report of any kind. But we know we need to do better.

"In addition to improving our reporting flows, we are constantly exploring ways that new technologies can help us make sure Facebook is a safe environment. Artificial intelligence, for example, plays an important part in this work, helping us prevent the videos from being reshared in their entirety. (People are still able to share portions of the videos in order to condemn them or for public awareness, as many news outlets are doing in reporting the story online and on television). We are also working on improving our review processes. Currently, thousands of people around the world review the millions of items that are reported to us every week in more than 40 languages. We prioritize reports with serious safety implications for our community, and are working on making that review process go even faster.

"Keeping our global community safe is an important part of our mission. We are grateful to everyone who reported these videos and other offensive content to us, and to those who are helping us keep Facebook safe every day.

"Timeline of Events
11:09AM PDT — First video, of intent to murder, uploaded. Not reported to Facebook.
11:11AM PDT — Second video, of shooting, uploaded.
11:22AM PDT — Suspect confesses to murder while using Live, is live for 5 minutes.
11:27AM PDT — Live ends, and Live video is first reported shortly after.
12:59PM PDT — Video of shooting is first reported.
1:22PM PDT — Suspect’s account disabled; all videos no longer visible to public."

Share your opinion

Do you think Facebook should do more to protect users from violent content?

Yes: 81% | No: 19%