Facebook shows abuse task ahead

In a show of transparency this week, Facebook said it had deleted 866m posts in the first quarter of 2018 alone. But sampling released by the tech giant at the same time suggests tens of millions of violent posts are still getting through its system.

The news of Facebook’s take-downs – revealed in a ‘transparency report’ – comes as the platform faces growing pressure following revelations that its data has enabled crime and been used to change the course of elections.

Facebook said it had shut down 583m fake accounts in the first quarter of 2018 and taken down 837m pieces of spam content over the same period – with “nearly 100%” of that content discovered and removed before it was reported.

The tech giant said it took action against 3.4m posts containing graphic violence during the first quarter, up from 1.2m the quarter before. It also acted against 2.5m pieces of hate speech content, up from 1.6m.

But it said random sampling had shown that as many as 27 in every 10,000 posts contained graphic violence, and seven in every 10,000 contained nudity or sexual content. Extrapolated to its global audience, that suggests tens of millions of such posts get through daily.
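As a rough back-of-the-envelope check on that extrapolation, the arithmetic can be sketched as follows. This is a minimal illustration: the prevalence rates are from the report, but the daily volume of posts viewed is a hypothetical assumption, not a figure Facebook published.

# Back-of-the-envelope extrapolation of Facebook's sampling figures.
# Prevalence rates come from the transparency report; the daily volume
# of posts viewed is a hypothetical assumption for illustration only.

violence_rate = 27 / 10_000   # graphic violence: 27 per 10,000 sampled posts
nudity_rate = 7 / 10_000      # nudity/sexual content: 7 per 10,000 sampled posts

assumed_daily_posts_viewed = 10_000_000_000  # hypothetical: 10bn posts per day

violent_per_day = violence_rate * assumed_daily_posts_viewed
nudity_per_day = nudity_rate * assumed_daily_posts_viewed

print(f"Implied violent posts per day: {violent_per_day:,.0f}")        # 27,000,000
print(f"Implied nudity/sexual posts per day: {nudity_per_day:,.0f}")   # 7,000,000

On those assumptions, the sampling rates imply tens of millions of violating posts in circulation each day – the order of magnitude the report’s figures point to.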

Artificial intelligence was able to spot 96% of nudity and sexual activity-related content before it was reported, said Facebook, with AI flagging 86% of graphic violence posts.

But it admitted that fewer than four in 10 of the hate speech posts it removed were spotted by AI – meaning more than six in 10 were only dealt with after user complaints.

“Technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important,” Facebook product management VP Guy Rosen said.

“Artificial intelligence isn’t good enough yet to determine whether someone is pushing hate or describing something that happened to them so they can raise awareness of the issue.”

Rosen said the platform was “investing heavily in more people and better technology to make Facebook safer for everyone”. The company has an estimated 15,000 human moderators.

Reacting to the report, Doteveryone policy director Catherine Miller told the BBC that the figures from the random sampling of 10,000 posts “give you a better sense of the proliferation of this kind of content across the platform than the numbers of take-downs.

“It’s important to mention that these take-downs happen through automated processes and through human moderation. But it’s according to a set of standards that Facebook has set itself.”

Doteveryone said a common set of standards was needed around transparency in reporting, so users could see how Facebook compares with other social media networks.

Facebook has community standards designed to protect the platform against nudity, hate speech and violence. It also urges users to contact the police if they feel threatened by anything they see on the platform.

In Germany, the state has clamped down on illegal social media content, with the country’s NetzDG law coming into effect this year.

Writing on the German move last week, rights expert Stefan Thiel pointed to misconceptions about the law, suggesting that its effects on social media platforms could be overstated.

Facebook: how to report abusive content.
