Facebook on Tuesday said it removed 1.5 billion fake accounts in Q2, down from 1.7 billion accounts in Q1.
The social network attributed the decline to blocking more attempts up front, leaving fewer fake accounts for it to disable, "which has led to a general decline in accounts actioned since Q1 2019".
“We estimate that fake accounts represented approximately 5 per cent of our worldwide monthly active users (MAU) on Facebook during Q2,” Guy Rosen, VP Integrity at Facebook, said in a blog post detailing the sixth edition of ‘Community Standards Enforcement Report’.
The company admitted that the number of appeals against the removal of accounts and content was much lower in Q2 "because we couldn't always offer them".
"We let people know about this and if they felt we made a mistake, we still gave people the option to tell us they disagreed with our decision."
Facebook also said it took action on 22.5 million pieces of hate speech content in the second quarter (April-June) this year, up from 9.6 million pieces of content in Q1, and its proactive detection rate for hate speech increased 6 points from 89 per cent to 95 per cent.
On Instagram, the proactive detection rate for hate speech increased 39 points from 45 per cent to 84 per cent in the June quarter, said Facebook.
This resulted in action on 3.3 million pieces of hate speech content in Q2, up from 808,900 in Q1.
“These increases (on Instagram) were driven by expanding our proactive detection technologies in English and Spanish,” Rosen said.
Another area where Facebook claimed it saw improvements was content related to terrorism.
"On Facebook, the amount of content we took action on increased from 6.3 million in Q1 to 8.7 million in Q2," Rosen said.
"We saw increases in the amount of content we took action on connected to organized hate on Instagram and bullying and harassment on both Facebook and Instagram."
Facebook said that since it prioritised removing harmful content over measuring certain efforts during this time, it was unable to calculate the prevalence of violent and graphic content, and adult nudity and sexual activity.
“We want people to be confident that the numbers we report around harmful content are accurate, so we will undergo an independent, third-party audit, starting in 2021, to validate the numbers we publish in our Community Standards Enforcement Report,” Rosen said.
Due to the COVID-19 pandemic, Facebook sent its content reviewers home in March and relied more heavily on technology to review content.
Facebook said it has since brought many reviewers back online from home and, where it is safe, a smaller number into the office.