Automated tools helping Facebook remove more hate speech, bullying

San Francisco (Asian Independent): Facebook on Tuesday revealed in detail how its automated tools and technologies are curbing hate speech, adult nudity and sexual activity, violent and graphic content, and bullying and harassment on its main platform as well as Instagram.

In its fifth Community Standards Enforcement Report, covering October 2019 through March this year, the social networking platform said it continues to expand proactive detection technology for hate speech to more languages and has improved its existing detection systems.

“Our proactive detection rate for hate speech increased by more than 8 points over the past two quarters, totaling almost a 20-point increase in just one year. As a result, we are able to find more content and can now detect almost 90 per cent of the content we remove before anyone reports it to us,” said Guy Rosen, VP of Integrity at Facebook.
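
For reference, the proactive detection rate Rosen cites is generally understood as the share of actioned content that Facebook's systems found before any user reported it. A minimal sketch of that calculation in Python, using illustrative placeholder counts rather than figures from the report:

# Minimal sketch of a proactive detection rate calculation.
# The counts below are illustrative placeholders, not figures from the report.
proactively_flagged = 9_000_000   # removals first found by automated systems
user_reported = 1_000_000         # removals that started with a user report

total_actioned = proactively_flagged + user_reported
proactive_rate = proactively_flagged / total_actioned
print(f"Proactive detection rate: {proactive_rate:.1%}")  # -> 90.0%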

In addition, Facebook said it doubled the amount of drug content it removed in Q4 2019, taking down 8.8 million pieces of content in the October-March period.

The report includes data through March 2020 and does not reflect the full impact of the changes Facebook made during the COVID-19 pandemic.

On Instagram, Facebook said it made improvements to its text and image matching technology to help it find more suicide and self-injury content.

“As a result, we increased the amount of content we took action on by 40 per cent and increased our proactive detection rate by more than 12 points since the last report,” said Rosen.

“We are sharing enforcement data for bullying on Instagram for the first time in this report, including taking action on 1.5 million pieces of content in both Q4 2019 and Q1 2020,” he noted.

The company said that improvements to its technology for finding and removing content similar to existing violations in its databases helped it take down more child nudity and sexually exploitative content on Facebook and Instagram.
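
The report does not spell out the matching technique. One common approach, sketched below, is to hash each upload and compare it against hashes of previously actioned content; the hash inputs, the banned-hash set and the distance threshold here are hypothetical stand-ins for illustration, not Facebook's actual system.

# Illustrative sketch of hash-based matching against previously actioned content.
# The hashes, threshold and data set are assumptions, not Facebook's implementation.
from typing import Set

def hamming_distance(a: int, b: int) -> int:
    # Number of differing bits between two 64-bit perceptual hashes.
    return bin(a ^ b).count("1")

def matches_known_violation(upload_hash: int,
                            banned_hashes: Set[int],
                            max_distance: int = 8) -> bool:
    # Flag an upload whose hash is close to any hash of known violating content.
    return any(hamming_distance(upload_hash, h) <= max_distance
               for h in banned_hashes)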

“Over the last six months, we’ve started to use technology more to prioritize content for our teams to review based on factors like virality and severity, among others,” said Rosen.
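
Rosen does not describe the ranking formula. A minimal sketch of what severity-and-virality prioritization of a review queue could look like, with weights and fields chosen purely for illustration:

# Hypothetical sketch of prioritizing content for human review by severity and virality.
# The score weights and fields are illustrative assumptions, not Facebook's method.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    sort_key: float                         # negative score, so highest score pops first
    content_id: str = field(compare=False)

def priority_score(severity: int, views_per_hour: float) -> float:
    # Higher severity and faster spread mean the post is reviewed sooner.
    return severity * 10.0 + views_per_hour * 0.01

queue: list = []
heapq.heappush(queue, ReviewItem(-priority_score(5, 20_000), "post_a"))
heapq.heappush(queue, ReviewItem(-priority_score(2, 500), "post_b"))
print(heapq.heappop(queue).content_id)  # post_a is reviewed first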

Going forward, Facebook plans to leverage technology to also take action on content, including removing more posts automatically.

The company now includes metrics across 12 policies on Facebook and metrics across 10 policies on Instagram.

“For the first time, we are also sharing data on the number of appeals people make on content we’ve taken action against on Instagram, and the number of decisions we overturn either based on those appeals or when we identify the issue ourselves,” said Rosen.

Facebook said it has temporarily sent its content reviewers home due to the COVID-19 pandemic.

“We increased our reliance on these automated systems and prioritized high-severity content for our teams to review in order to continue to keep our apps safe during this time,” it added.