Meta Fact Check | SabrangIndia — News Related to Human Rights

Meta's policy shift: Fuelling hate in an era of LGBTQIA+ inclusion
https://sabrangindia.in/metas-policy-shift-fuelling-hate-in-an-era-of-lgbtqia-inclusion/
Fri, 17 Jan 2025

Meta's new hate speech policies allowing dehumanising rhetoric against LGBTQIA+ individuals mark a troubling regression, undermining global strides toward equality, dignity, and inclusivity

Meta’s recent revisions to its hate speech guidelines mark a troubling shift towards normalising harmful narratives targeting marginalised communities. By explicitly permitting users to accuse LGBTQIA+ individuals of being “mentally ill” or to compare women to household objects, Meta’s policies not only put inclusivity at stake but risk inciting real-world violence against these communities and disturbing social harmony.

Quoting the Guidelines: An Ethical Dilemma

Under the new policy, Meta states:

“We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”

Additionally, the revised policy allows content such as:

“Comparing people to household objects, calling entire ethnic groups ‘filth,’ or arguing that LGBTQIA+ individuals should be excluded from certain spaces or professions.”

This represents a stark departure from previous hate speech policies that prohibited such dehumanising language, recognising its potential to create “an environment of intimidation and exclusion.”

Employee and Advocacy Group Backlash

Meta’s own employees have criticised the decision as “appalling,” with one post reading:

“I am LGBT and ‘mentally ill’. Just to let you know that I’ll be taking time out to look after my mental health.”

Advocacy groups have been equally vocal. GLAAD, for instance, stated:

“Meta is giving the green light for people to target LGBTQ people, women, immigrants, and other marginalised groups with violence, vitriol, and dehumanising narratives.”

The Consequences of Hate Normalisation

Meta’s history provides troubling evidence of its platforms enabling real-world atrocities, most notably the Rohingya genocide in Myanmar and the Capitol riots in the United States. In Myanmar, Facebook was identified by UN investigators as a key tool in spreading dehumanising rhetoric against the Rohingya Muslim minority, with hate-filled posts labelling them as “vermin” and “threats.” This unchecked hate speech incited widespread violence, resulting in over 700,000 people being displaced and thousands killed.

Similarly, in the U.S., Meta’s platforms played a significant role in facilitating the organisation of the January 6 Capitol riots by allowing misinformation and extremist content to proliferate unchecked. These events demonstrate how Meta’s platforms, when deregulated or permissive, become breeding grounds for hatred and violence.

With its new policies permitting users to call LGBTQIA+ individuals “mentally ill” or compare women to “household objects,” Meta risks repeating these disastrous patterns. By legitimising dehumanising rhetoric, these policies pave the way for escalating offline violence, societal polarisation, and the erosion of public safety. Without decisive corrective action, Meta could again find itself at the centre of global crises fuelled by its own platforms.

Way forward

While the world moves forward to embrace inclusivity and champion LGBTQIA+ rights, Meta’s recent policy changes reflect a regressive step reminiscent of the discriminatory attitudes of past generations. The global momentum for LGBTQIA+ equality is evident in initiatives like the United Nations’ Free and Equal campaign, which tirelessly works to combat harmful practices, promote legal protections, and foster societal acceptance of LGBTQIA+ individuals in regions as diverse as Africa, Albania, Brazil, and Vietnam. These efforts underscore a commitment to ensuring dignity and equality for all, yet Meta’s decision to permit users to call LGBTQIA+ individuals “mentally ill” directly undermines this progress. By sanctioning such language, Meta is aligning itself with outdated, oppressive ideologies at a time when the global community is advocating for inclusion and acceptance.

Human rights activists and allies worldwide must stand in solidarity to condemn this policy and demand accountability from Meta. It is imperative that Meta rescind these harmful changes and reaffirm its commitment to safeguarding dignity, equality, and respect for all users.

Related:

India’s LGBTQIA+ struggle: beyond legal victories, battle for true equality remains

From Judgments to Handbook: India’s Transformative Journey towards LGBTQIA+ Equality

From fact-checking to chaos: How Meta’s new moderation model risks eroding trust and democracy
https://sabrangindia.in/from-fact-checking-to-chaos-how-metas-new-moderation-model-risks-eroding-trust-and-democracy/
Fri, 17 Jan 2025

Meta’s shift to community-driven moderation under the “community notes” model raises alarms, risking manipulation, misinformation, and further eroding trust in a rapidly polarizing digital landscape.

Meta’s decision to replace professional fact-checking with a community-driven moderation system under the “community notes” model is a regressive move that undermines the fight against misinformation. This policy change prioritizes a veneer of free speech over the pressing need for content accuracy, leaving the platform more vulnerable to manipulation, misinformation, and societal harm.

The False Equivalence of Free Speech and Misinformation

Meta justifies the shift as a step towards fostering free expression, echoing Mark Zuckerberg’s Georgetown speech about empowering individuals to voice their opinions. However, unmoderated free speech often becomes a breeding ground for falsehoods and malicious narratives. Professional fact-checking, though imperfect, provided a critical layer of accountability by separating genuine discourse from deliberate misinformation. Community-driven models, on the other hand, often amplify the loudest or most popular opinions, regardless of their veracity.

Challenges of Crowdsourcing Moderation

Meta’s shift to community-driven moderation under the “Community Notes” model presents several critical challenges. These systems are frequently vulnerable to partisan bias, enabling dominant narratives to suppress minority perspectives, and to organized manipulation, where bots and coordinated groups distort facts. This was starkly evident during the 2018 Cambridge Analytica scandal, where Facebook data was exploited to influence political outcomes, raising serious concerns about digital democracy. Another glaring example is Facebook’s involvement in the Myanmar Rohingya crisis, where unchecked hate speech on the platform contributed to widespread violence, with the UN citing Facebook as having had a “determining role.” Similarly, during the COVID-19 pandemic, the platform became a hub for anti-vaccine propaganda, undermining global public health initiatives.

Meta’s own leadership has contributed to misinformation. Mark Zuckerberg inaccurately claimed that India’s incumbent government had lost the 2024 general election due to its handling of the COVID-19 pandemic. This claim was incorrect: Prime Minister Narendra Modi’s government was re-elected for a third term. The misinformation sparked outrage, leading Union Minister Ashwini Vaishnaw to publicly refute the statement. In response, Meta India’s Vice President Shivnath Thukral issued an apology for the “inadvertent error” and reaffirmed Meta’s commitment to fostering accurate information.

If Meta introduces the Community Notes system, it risks being hijacked by organized political groups like the BJP IT cell, which has previously demonstrated its ability to exploit similar systems on platforms such as Twitter. Numerous reports have documented coordinated campaigns by the BJP IT cell to spread propaganda, disinformation, and polarizing narratives, often under the guise of organic community engagement. This manipulation not only distorts public discourse but also influences public perception on critical matters. Replicating such tactics on Meta’s platforms could lead to a systematic spread of partisan falsehoods, eroding democratic processes and undermining the platform’s credibility as a space for truthful and balanced discussions.

Moreover, in regions governed by strict regulations like the European Union’s Digital Services Act (DSA), this policy could expose Meta to significant regulatory challenges and possible sanctions.

A Reputational Risk for Meta

Meta’s decision also jeopardizes its own credibility and the trust of advertisers. By downgrading professional oversight, the platform risks becoming a hub for disinformation, deterring reputable companies from associating with it. Advertisers may hesitate to place their brands in an environment where false claims could damage their reputation.

The Need for a Hybrid Model

While professional fact-checking alone is not a panacea, it serves as a vital deterrent against the unchecked spread of misinformation. A more effective solution would be a hybrid model that combines expert oversight with community involvement, enhanced by transparent algorithms and robust accountability mechanisms. This approach could ensure that free expression does not come at the expense of truth.


Related:

Report: Meta reportedly monetising on ads calling for the killing of Muslims as well as opposition leader

After EU, US senator raises concerns about misinformation to Google, X, Meta

BUJ deplores attempts to censor online content by Government fact check unit
