Facebook: A new-age catalyst for hate crimes and misinformation?

A former employee of Facebook has filed a complaint with the US SEC indicating that Facebook CEO Mark Zuckerberg made false claims in his Congressional testimony about taking action against objectionable content


Facebook whistleblower Frances Haugen’s complaints have confirmed what was already suspected of Facebook: its inaction towards hateful and objectionable content on its platform. Internal company documents citing “fear-mongering” content promoted by “Rashtriya Swayamsevak Sangh (RSS) users, groups and pages” form part of a complaint Haugen filed with the US Securities and Exchange Commission (SEC), reported The Wire.

Haugen worked at Facebook until May 2021 as a product manager on the civic misinformation team. The documents she cited through her lawyers indicate, at the outset, that Facebook officials are aware of the structural factors that cause the spread of hate speech and harmful political rhetoric on the platform. These revelations further strengthen the findings of the Wall Street Journal investigation of August 2020. That report, among other things, had stated that a top Facebook official in India was opposed to applying the social media platform’s hate speech rules to at least one Bharatiya Janata Party (BJP) politician and other “Hindu nationalist individuals and groups”.

Hate speech and misinformation

On the basis of these internal documents, Haugen has filed several complaints with the SEC through the non-profit legal organisation Whistleblower Aid. These complaints reveal that Facebook is apparently able to crack down on only 3-5% of hate speech and 0.6% of “violence and incitement” content on the platform, even as it misled investors and the public by claiming proactive removal of over 90% of identified hate speech.

Among these documents is one titled ‘Adversarial Harmful Networks – India Case Study’, albeit undated. “Anti-Muslim narratives targeted pro-Hindu populations with [violent and incendiary] intent… There were a number of dehumanizing posts comparing Muslims to ‘pigs’ and ‘dogs’ and misinformation claiming the Quran calls for men to rape their female family members,” the internal document says.

The documents also point out that Facebook does not have Hindi and Bengali classifiers, or algorithms to identify hate content in these languages. However, since the document is undated, it is unclear whether Facebook’s claims that its hate speech algorithms cover Hindi, Bengali, Urdu and Tamil were made before or after the document was produced.

The complaint states that Facebook struggled to designate any pro-RSS groups on its platform due to “political sensitivities” despite their incessant hate content.

One of the documents also highlights that there are “1 to 1.5 million predicted misinfo VPVs [view port views, or impressions] per hour in India, Indonesia and Philippines in peak hours”. The internal documents further show that Facebook is well aware that notifications for posts in polarised groups encourage users to view false or hateful content, and that the prominent “like” page buttons on re-shares encourage users to follow these sources of hateful content.

Further, Facebook has failed to deal with duplicate accounts, that is, single users operating multiple accounts, which has admittedly allowed BJP IT cells to mushroom all over the platform. The documents also show that Facebook was aware these accounts were being used by BJP cells for propaganda.

An internal document from November 2020 also states that Facebook has been aware that it is building products that have allowed hate speech to flourish on the platform.

Mark Zuckerberg, responding to suggestions for “soft actions” to reduce the prevalence of hateful content in the News Feed, refused to adopt any measures that impacted the “metrics” of the “meaningful social interactions” (MSI) model of Facebook. According to the internal documents, this MSI model amplifies misinformation and other divisive, low-quality content. One report reads, “the more negative comments a piece of content instigates, the higher likelihood for the link to get more traffic.”

One of the documents states, “We (Facebook) only take action against approximately 2% of the hate speech on the platform. Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term.”

The interview

In an interview with CBS on October 4, Haugen said Facebook “picks metrics that are in its own benefit” when it comes to publishing data about hateful content and misinformation.

“The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook,” Haugen told CBS. “And Facebook, over and over again, chose to optimize for its own interests, like making more money.” 

Reporting objectionable content

The complaint states that Facebook has made it more difficult for users to report hate speech by changing its reporting flows, which includes ignoring benign user reports: