Meta’s policy shift: Fuelling hate in an era of LGBTQIA+ inclusion
https://sabrangindia.in/metas-policy-shift-fuelling-hate-in-an-era-of-lgbtqia-inclusion/
Fri, 17 Jan 2025
Meta’s new hate speech policies allowing dehumanising rhetoric against LGBTQIA+ individuals mark a troubling regression, undermining global strides toward equality, dignity, and inclusivity.

Meta’s recent revisions to its hate speech guidelines mark a troubling shift towards normalising harmful narratives targeting marginalised communities. By explicitly permitting users to accuse LGBTQIA+ individuals of being “mentally ill” or to compare women to household objects, Meta’s policies not only put inclusivity at stake but also risk inciting real-world violence against these communities, disturbing social harmony.

Quoting the Guidelines: An Ethical Dilemma

Under the new policy, Meta states:

“We do allow allegations of mental illness or abnormality when based on gender or sexual orientation, given political and religious discourse about transgenderism and homosexuality and common non-serious usage of words like ‘weird.’”

Additionally, the revised policy allows content such as:

“Comparing people to household objects, calling entire ethnic groups ‘filth,’ or arguing that LGBTQIA+ individuals should be excluded from certain spaces or professions.”

This represents a stark departure from previous hate speech policies that prohibited such dehumanising language, recognising its potential to create “an environment of intimidation and exclusion.”

Employee and Advocacy Group Backlash

Meta’s own employees have criticised the decision as “appalling,” with one post reading:

“I am LGBT and “mentally ill”. Just to let you know that I’ll be taking time out to look after my mental health.”

Advocacy groups have been equally vocal. GLAAD, for instance, stated:

“Meta is giving the green light for people to target LGBTQ people, women, immigrants, and other marginalised groups with violence, vitriol, and dehumanising narratives.”

The Consequences of Hate Normalisation

Meta’s history provides troubling evidence of its platforms enabling real-world atrocities, most notably the Rohingya genocide in Myanmar and the Capitol riots in the United States. In Myanmar, Facebook was identified by UN investigators as a key tool in spreading dehumanising rhetoric against the Rohingya Muslim minority, with hate-filled posts labelling them as “vermin” and “threats.” This unchecked hate speech incited widespread violence, resulting in over 700,000 people being displaced and thousands killed. Similarly, in the U.S., Meta’s platforms played a significant role in facilitating the organisation of the January 6 Capitol riots by allowing misinformation and extremist content to proliferate unchecked. These events demonstrate how Meta’s platforms, when deregulated or permissive, become breeding grounds for hatred and violence. With its new policies permitting users to call LGBTQIA+ individuals “mentally ill” or compare women to “household objects,” Meta risks repeating these disastrous patterns. By legitimising dehumanising rhetoric, these policies pave the way for escalating offline violence, societal polarisation, and the erosion of public safety. Without decisive corrective action, Meta could again find itself at the centre of global crises fuelled by its own platforms.

Way Forward

While the world moves forward to embrace inclusivity and champion LGBTQIA+ rights, Meta’s recent policy changes reflect a regressive step reminiscent of the discriminatory attitudes of past generations. The global momentum for LGBTQIA+ equality is evident in initiatives like the United Nations’ Free and Equal campaign, which tirelessly works to combat harmful practices, promote legal protections, and foster societal acceptance of LGBTQIA+ individuals in regions as diverse as Africa, Albania, Brazil, and Vietnam​. These efforts underscore a commitment to ensuring dignity and equality for all, yet Meta’s decision to permit users to call LGBTQIA+ individuals “mentally ill” directly undermines this progress. By sanctioning such language, Meta is aligning itself with outdated, oppressive ideologies at a time when the global community is advocating for inclusion and acceptance. Human rights activists and allies worldwide must stand in solidarity to condemn this policy and demand accountability from Meta. It is imperative that Meta rescind these harmful changes and reaffirm its commitment to safeguarding dignity, equality, and respect for all users.

Related:

India’s LGBTQIA+ struggle: beyond legal victories, battle for true equality remains

From Judgments to Handbook: India’s Transformative Journey towards LGBTQIA+ Equality

From fact-checking to chaos: How Meta’s new moderation model risks eroding trust and democracy
https://sabrangindia.in/from-fact-checking-to-chaos-how-metas-new-moderation-model-risks-eroding-trust-and-democracy/
Fri, 17 Jan 2025
Meta’s shift to community-driven moderation under the “community notes” model raises alarms, risking manipulation, misinformation, and further eroding trust in a rapidly polarizing digital landscape.

Meta’s decision to replace professional fact-checking with a community-driven moderation system under the “community notes” model is a regressive move that undermines the fight against misinformation. This policy change prioritizes a veneer of free speech over the pressing need for content accuracy, leaving the platform more vulnerable to manipulation, misinformation, and societal harm.

The False Equivalence of Free Speech and Misinformation

Meta justifies the shift as a step towards fostering free expression, echoing Mark Zuckerberg’s Georgetown speech about empowering individuals to voice their opinions. However, unmoderated free speech often becomes a breeding ground for falsehoods and malicious narratives. Professional fact-checking, though imperfect, provided a critical layer of accountability by separating genuine discourse from deliberate misinformation. Community-driven models, on the other hand, often amplify the loudest or most popular opinions, regardless of their accuracy.

Challenges of Crowdsourcing Moderation

Meta’s shift to community-driven moderation under the “Community Notes” model presents several critical challenges. These systems are frequently vulnerable to partisan bias, enabling dominant narratives to suppress minority perspectives, and to organized manipulation, where bots and coordinated groups distort facts. This was starkly evident during the 2018 Cambridge Analytica scandal, where Facebook data was exploited to influence political outcomes, raising serious concerns about digital democracy. Another glaring example is Facebook’s involvement in the Myanmar Rohingya crisis, where unchecked hate speech on the platform contributed to widespread violence, with the UN citing Facebook as having a “determining role.” Similarly, during the COVID-19 pandemic, the platform became a hub for anti-vaccine propaganda, undermining global public health initiatives. Commenting on the 2024 Indian general election, Mark Zuckerberg inaccurately stated that the incumbent government had lost due to its handling of the COVID-19 pandemic. This claim was incorrect, as Prime Minister Narendra Modi’s government was re-elected for a third term. The misinformation sparked outrage, leading Union Minister Ashwini Vaishnaw to publicly refute the statement. In response, Meta India’s Vice President Shivnath Thukral issued an apology for the “inadvertent error” and reaffirmed Meta’s commitment to fostering accurate information.

If Meta introduces the Community Notes system, it risks being hijacked by organized political groups like the BJP IT cell, which has previously demonstrated its ability to exploit similar systems on platforms such as Twitter. Numerous reports have documented coordinated campaigns by the BJP IT cell to spread propaganda, disinformation, and polarizing narratives, often under the guise of organic community engagement. This manipulation not only distorts public discourse but also influences public perception on critical matters. Replicating such tactics on Meta’s platforms could lead to a systematic spread of partisan falsehoods, eroding democratic processes and undermining the platform’s credibility as a space for truthful and balanced discussions.

In regions governed by strict regulations like the European Union’s Digital Services Act (DSA), this policy could lead to significant regulatory challenges and possible sanctions.

A Reputational Risk for Meta

Meta’s decision also jeopardizes its own credibility and the trust of advertisers. By downgrading professional oversight, the platform risks becoming a hub for disinformation, deterring reputable companies from associating with it. Advertisers may hesitate to place their brands in an environment where false claims could damage their reputation.

The Need for a Hybrid Model

While professional fact-checking alone is not a panacea, it serves as a vital deterrent against the unchecked spread of misinformation. A more effective solution would be a hybrid model that combines expert oversight with community involvement, enhanced by transparent algorithms and robust accountability mechanisms. This approach could ensure that free expression does not come at the expense of truth.


Related:

Report: Meta reportedly monetising on ads calling for the killing of Muslims as well as opposition leader

After EU, US senator raises concerns about misinformation to Google, X, Meta

BUJ deplores attempts to censor online content by Government fact check unit

Report: Meta reportedly monetising on ads calling for the killing of Muslims as well as opposition leader
https://sabrangindia.in/report-meta-reportedly-monetising-on-ads-calling-for-the-killing-of-muslims-as-well-as-opposition-leader/
Thu, 23 May 2024
A recent report by ICWI (India Civil Watch International) and Ekō has revealed that Meta has been platforming AI-generated hate content on its platforms, despite promising to be vigilant before India’s Lok Sabha elections.

Meta has reportedly approved ads calling for the killing of Muslims. This was disclosed in exclusive coverage by The Guardian of a report by ICWI (India Civil Watch International) and Ekō, which recently revealed that Meta, the owner of Facebook and Instagram, has come under the spotlight for approving a series of AI-generated political advertisements during India’s ongoing Lok Sabha elections. These ads reportedly spread disinformation and heightened communal tensions, with content that was ‘violent’ and ‘Islamophobic’ in nature, according to the report.

Some of the examples of the hate speech contained in the posts, which are stored in Meta’s Ad Library, include phrases such as “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned”, as per The Guardian report. The report also reveals that Meta approved posts calling for the execution of an opposition leader.

The report points out that this also amounts to a failure by Meta to comply with India’s election laws, and that the ‘silence period’ mandated by the Model Code of Conduct was violated as well.


Several of these ads reportedly targeted parties in opposition to the BJP by alleging Muslim “favouritism” and repeating other conspiracy theories promoted by India’s far right, which paint the Congress as a party only for Muslims. Other ads on the platform also reportedly raised the bogey of a Muslim invasion.

Two advertisements even pushed a “stop the steal” narrative, alleging the destruction of electronic voting machines. Multiple ads used Hindu supremacist language to demonise Muslims.

The report reveals that Meta is monetising ads calling for the killing of Muslims, ads that also violate Meta’s own policies on hate speech and other offences. Meta states on its website that it does not “allow” hate speech on its platforms, Facebook and Instagram.

The researchers discovered that between May 8 and May 13, the platform approved 14 such incendiary ads. These incidents come even after Meta had stated, as the report notes, that it would be more vigilant about violent AI-generated content prior to the 2024 Lok Sabha elections. The report further highlights that this “underscores that the platform is ill-equipped to deal with AI-generated disinformation.”

ICWI, Ekō and Foundation The London Story (TLS) had earlier released an expansive report, titled ‘Slander, Lies, and Incitement: India’s million dollar election meme network’, prior to the Lok Sabha 2024 elections. It states that before the elections, Indian advertisers spent a total of ₹407,709,451 (over 4 million US dollars) on “Issues, elections or politics” ads on Meta platforms. Identifying the buyers from their GST details, the report notes that the 100 top buyers bought 75% of the total ads. Of these 100, 22 were ‘shadow pages’ linked to the BJP, which spent about ₹88 million buying these ads for the party or its leaders. The same report highlighted that hate-spreading posts were able to boost their reach to thousands of people by paying as little as 10 dollars.

[Embedded document: Meta_AI_ads_investigation]

This is not the first time Meta has come under scrutiny. In early 2024, the CEOs of major tech companies faced the US Senate Judiciary Committee. Mark Zuckerberg from Meta, along with leaders from TikTok, X, Discord, and Snap, were grilled by senators about how social media affects the safety of kids and teens in the US. The tech companies were accused of not doing enough to protect young users from online dangers.

Google has also not been spared. A report by Access Now and Global Witness found that YouTube, which is owned by Google, permitted incendiary, violent content to be hosted on the platform right before the elections. Their experimental study revealed that ads containing false news, such as claims that the 2024 elections had been cancelled, cleared the platform’s review and were approved for publication.

Related:

India third highest across the world to enforce internet shutdowns

Social media giants summoned at US Senate hearing for internet safety

Rising tide of hate speech sours election climate, targeting religious minorities

Hate a political tool, now a state project: India 2023
https://sabrangindia.in/hate-political-tool-now-state-project-india-2023/
Thu, 01 Jun 2023
There is a chance to make Meta Facebook accountable for its hate-generating content by voting YES on Proposal 7, titled “Assessing Allegations of Biased Operations in Meta’s Largest Market”, which is to be presented at Meta’s AGM on May 31, 2023. It highlights allegations against Facebook for disseminating hate speech and its failure to address risks and political bias, and voices concerns around inadequate content moderation and lack of transparency in platform practices. The writer calls on readers to participate in this campaign on social media to make our republic hate-free.


UPDATE: June 1, 2023

Unfortunately, the shareholders of Meta Platforms Inc. voted against an inquiry into allegations of hate speech dissemination and concerns about content moderation in India at their annual general meeting on May 31. At the AGM, which was attended by Founder Mark Zuckerberg, senior executives and nine members of Meta’s board, among others, shareholders of the company voted against Proposal 7, titled ‘Assessing Allegations of Biased Operations in Meta’s Largest Market’. Details of the numbers for and against are not yet available.

The proposal was put forth by Ekō, a non-profit advocacy group that campaigns to hold corporations accountable on social issues. The details of the vote were tweeted by the Internet Freedom Foundation, which has been campaigning alongside Ekō to raise awareness on Proposal 7 in India.

The Internet Freedom Foundation, part of the campaign, vowed to carry on the fight for accountability.

 

The proposal also outlined how “content moderation in India is undercut by poor capacity of Meta’s ‘misinformation classifiers’ (algorithms) and its human moderators to recognize many of India’s 22 officially recognized languages.”

It is, however, noteworthy that Meta’s board had already recommended that shareholders vote against the proposal, citing efforts the company says it has already been undertaking to address these concerns.

“The requested report is unnecessary and would not provide additional benefit to our shareholders,” it had said in a proxy statement prior to the AGM.


In close to four decades as a journalist and civil rights activist working across India, I have witnessed my fair share of religious polarisation and attendant violence. In this period, I have covered the Bombay-Bhiwandi communal violence of 1984, seen and witnessed from afar the anti-Sikh pogrom in New Delhi in 1984, the Bombay riots of 1993, in which over 900 people (mostly Muslims) were killed, the 2002 Gujarat pogrom, in which over 2,000 people (again, predominantly Muslims) were killed, the Muzaffarnagar violence of 2013, and communal flare-ups in Malegaon, Nasik, Dhule and Akola over the years, among others. The experience of on-ground coverage of communal violence holds lessons for the reporter that unfortunately escape today’s television studio-based and social media-driven journalism. The non-negotiables: visit the spot of the conflict; talk to all sides despite the mental and physical borders constructed by society and state; do not rely on police tweets, press releases and versions; watch out for the pre-violence outbreak of rumour and hate-mongering through speech and writing.

“Who cast the first stone?” is a time-tested journalistic ethic that I developed through this hard experience, buttressed by the findings of three dozen or more judicial commission reports that I have closely studied, appointed since the 1960s to inquire into bouts of communal violence and all overseen by sitting and retired senior judges. The learning: hate speech plays a crucial role in escalating conflict; through the systematic use and dissemination of provocative words and writing, stigmatisation carefully nurses a social atmosphere conducive to the outbreak of targeted violence. The majority, made complicit by this hate-mongering, stays silent; the police, infected by this steady dose of prejudicial ideas, manipulated histories and verbally violent stigmatisation, fail to act to protect lives, and in a more acute stage of complicity even participate in the violence.

Yet nothing in my lived experience quite prepared me for the scale of hatred against Muslims (and Christians, even Dalits and women) that has been unleashed after Narendra Modi, of the majoritarian Bharatiya Janata Party, was elected as Prime Minister in 2014, and especially since his re-election in 2019. Islamophobia and other anti-minority hate have not only become “the new normal” in the New India; we see empirical evidence of this every day as our teams at Citizens for Justice and Peace monitor and document it in a series of reports, as part of our campaign “Hate Hatao.” Hate generation through unchecked algorithms on social media, especially Meta’s Facebook with 314 million users in India, has made the amplification seriously threatening.

Hate is today a state project in India, where the political formation in power, its vigilante organisations and brown shirts are mentally and physically armed through hate propaganda to violently target religious minorities, women and Dalits. Prejudiced ideas, acts of prejudice, discrimination, violence – the four stages prior to genocide – have been breached.

Hateful rhetoric against Muslims most particularly – though the Christian minority, Dalits, women and other sexual minorities are far from immune – is broadcast through various channels: WhatsApp forwards, television shows, digital media, political rallies, even some newspaper articles authored by votaries of an altered nation state, proponents of a theocratic autocracy (Hindu Rashtra). A notable change in the editorial pages of print media is the column space given to these “ideologues”, space that affords them legitimacy in the Indian media and public spectrum. Never mind that the articulation of such an altered state is also anti-Constitutional. By far the most significant outlets for hate speech, however, are arguably Facebook and WhatsApp, both platforms owned by Meta Inc. Though Musk-owned Twitter and other newer platforms are fast catching up!

The women and girls of India’s largest minority have been the target of debasement – through 2021 and 2022 – on Twitter accounts, GitHub and Clubhouse, platforms where the macabre and shameful phenomenon of their “auctions” has taken place. A radio silence from the political leadership in power in New Delhi through all of this clearly signifies consent. Hate crimes therefore enjoy a high level of impunity. That Facebook can be a participant-platform for this escalation up the genocidal pyramid is both shocking and unacceptable.

Between 1983, when I first began as a reporter of conflict, and now, the change is marked. Social media platforms and digital media are the new reality. Both reach a far wider audience than traditional media outlets like newspapers and television, which is also to say that they allow a far larger number of people to both access and, importantly, produce content than mainstream media does. India today has over 314 million Facebook users, by far the largest of any country in the world, and over twice that of the next largest, the United States, which has 175 million users. This makes social media platforms the ideal medium through which hate-mongering Hindu supremacist politicians and activists can gain a following, create a political constituency for majoritarianism, manipulate FB and create multipliers through content.

Many members of the BJP, its parent organisation the Rashtriya Swayamsevak Sangh, and dozens of the spawned outfits created with multiple identities (Sakal Hindu Samaj, Hindu Jan Jagruti Sena, Ram Sene are just a few) have spoken candidly about the importance of social media to Hindutva organising and mobilisation.

Some quick examples: in October 2018, we complained to Ms. Ankhi Das, Facebook’s Public Policy Director for India, South and Central Asia, about the vandalisation of a church in Varanasi – St. Thomas Church, in the prime minister’s parliamentary constituency – by extremists, some of whom had also previously posted inflammatory content on Facebook targeting the Christian community. We received no response.

In 2019, our HateWatch programme analysed how one elected official of the influential ruling BJP from Telangana, a state in the south, amplified a rumour and added his own hate-filled speech on Facebook, where he had half a million viewers. A year earlier, he had called for a vicious economic boycott of “terrorist Kashmiris” during the Amarnath Yatra in a video that has been viewed 3,00,000 times. Finally, he was the central figure flagged in the August 2020 WSJ report on how the corporation ignored hate speech by BJP leaders in India to protect its business interests. Welcome to T Raja Singh.

In March 2021, when FB finally concluded that Raja Singh had, in fact, violated its own Community Standards (Objectionable Content) and Violence and Criminal Behaviour rules, he was removed from the platform. His fan pages, one with 2,19,430 followers and another with 17,018, however, continue to operate and generate provocative content.

Today, Raja Singh, the “suspended” MLA of the ruling BJP, has re-emerged in a new on-ground avatar as one of the latest poster boys of hate for the ruling regime, spreading his venom across the states of Maharashtra, Karnataka and Rajasthan. In Maharashtra, where the regime faces a tough electoral contest next year (general elections in May 2024, state assembly elections in August-September 2024), he has addressed seven gatherings and has four FIRs against him; in Rajasthan he has one. For his vituperative election speech, CJP has filed a complaint with the state election commission, which has been forwarded for further action. Another such poster boy is Sudarshan News’s notorious Suresh Chavhanke.

Truly emboldened by the all-round immunity that this “suspended” MLA enjoys, in May 2023 T. Raja Singh, who has called for violence against Muslims on multiple occasions in his earlier speeches, declared the following to an audience in Kota, Rajasthan:

“I want to tell Prime Minister Modi and other ministers that now, no one can stop us from establishing a Hindu nation. India will be an undivided Hindu nation. Through social media, we have to ensure that this message reaches PM Modi. We have to make sure this reaches those Ministers of India that are secular so that they know that secularism will not work in India it will not work in Rajasthan. Now, only the rule of Hindus and Hinduvta will be there.”

In other words, Singh was calling on his followers to take to social media and ask the prime minister to establish a Hindu ethnocracy in India. That an elected member of the legislature, who takes oath under the Constitution to abide by its republican and inclusive principles, is turning to social media to advance his agenda speaks volumes about the important role it plays in Hindutva mobilisation.

Similar stark examples abound around the 2020 violence in the capital, Delhi. Among these, Ragini Tiwari’s “kill or die” call, and Kapil Mishra’s and Anjali Verma’s shrill use of social media, all show that it is the unchecked use of Facebook in non-English languages that is instrumental in the spill and spiral of targeted violence on the streets. Facebook Inc has formally responded to two complaints sent by Citizens for Justice and Peace (CJP) against hate content posted by Ragini Tiwari, stating that it is not in a position to take any action against Tiwari. Instead, Facebook suggested that CJP contact the party directly to get a resolution on the issue! Then there is also a serial hate offender, Deepak Sharma, whom Facebook is extremely reluctant to disengage from: we developed a detailed profile of his activities and character through Facebook, complained, and brought it up in writing and at round-tables. With thousands of followers, he still enjoys space on the platform.

Long before the genocidal call to kill Muslims that he made in December 2021 – which led to a spurt of outrage among some Indians and even some movement in the hate speech case in the Supreme Court – we had been steadily tracking, documenting, reporting and complaining about the man at the centre of the genocidal hate story, Yati Narsinghanand Saraswati, pointing out the eco-system of hate he has created. During this painstaking process, in November 2018, when a CJP member complained about his FB post saying that Hindus should be armed 24x7 to protect their religion and that Islam is a cancer, we were told by FB India that this did not go against their community standards, but that if we had an issue we could either block Yati or unfollow his page.

In short, we have tried to engage however and whenever given the chance: we have had detailed correspondence, offered more than a dozen and a half minutely detailed case studies, and filed many more complaints, all of which have unfortunately yielded unsatisfactory results. All this work has also come at a risk and a cost, as the government has targeted us venomously.

Where lies the stumbling block?

The stumbling block, despite the FB mega-corporation’s own stated standards on public safety, hate speech, violence and discrimination, is that Facebook India fails to take cognisance of the local context of supremacist and communally charged politics. Comprehending the difference between hate speech and free speech requires a candid engagement with, and an understanding of, India’s diversity and India’s track record of vicious, targeted communal violence. Allowing such hate content on Facebook also legitimises it – something even courts have, albeit slowly, recognised.

Facebook’s automated filters, which are supposed to screen out hate speech, also falter in India’s non-English languages: any user can today search for hate content through a handful of ‘key words’ which Facebook does not filter out. Words or terms like “Kattar Hindu” (rigid or fanatical Hindu), पंचर पुत्र, पंचर छाप, मुल्ले, मुल्ला, कटुआ, हलाला, हलाला की औलाद and बाबर की औलाद – particular derogatory/slang terms (“panchar” is a slang/derogatory term for Muslims who work in automobile garages) – simply escape all filters. In fact, there are individuals, groups and pages with the ID “Kattar Hindu”; they have hundreds of thousands of followers and can be found on FB, WhatsApp and Twitter. By the way, all such usage is also violative of Indian law and jurisprudence, and of international law and conventions, including the UN’s 2019 Call against Xenophobia and Hate Speech and the 2011 UN Guiding Principles on Business & Human Rights.

This, then, is the other major reason that social media is central to the spread of Islamophobic hate speech: companies like Meta have been egregiously lax in moderating content on their platforms. At CJP, we have documented numerous instances of “viral” Islamophobic content on Facebook and WhatsApp that was not taken down, despite violating Meta’s own content moderation regulations, which explicitly debar any speech that vilifies a particular community.

Why is it that Meta tolerates hate speech on its platform? Partly, this is because the company has not invested in content moderation for its India operations, which means that many of the posts published in the country are not properly vetted, especially those in regional languages. At the same time, Meta has faced repeated allegations that its Indian staffers are sympathetic towards the BJP and its agenda and are thus turning a blind eye towards Islamophobic content. This came to the fore during the 2020 Delhi riots, when a video of a Hindu religious leader openly calling for “ethnic cleansing” of Muslims was shared widely on various Meta platforms and was not taken down, despite numerous reports.

I have mentioned just a few examples. Every day, Hindutva supremacists take to Facebook and WhatsApp to post inflammatory and violent content targeting Muslims. They do this because they are confident that Meta will not hold them accountable. In effect, then, Meta has created a public space where Islamophobia can flourish with impunity. Indian civil society groups like CJP, Alt News, Hate Speech Beda (based in Karnataka), and others have dedicated significant resources to flagging and reporting hate speech on Meta’s platforms. But these actions can only go so far – indeed, our actions will always be inadequate – until Meta itself takes responsibility for its India platforms. At the end of the day, the company has far greater power than any groups or individuals.

For all these reasons, it is a very significant marker that tomorrow, May 31, hate speech on Meta’s India platforms will be on the agenda at the company’s annual general meeting. “Proposal 7” – one of thirteen proposals that will be discussed at the meeting – presents the evidence against Meta for spreading Islamophobic hate speech, its inadequate content moderation, and the general lack of transparency around the company’s practices. The shareholders attending the meeting have a great opportunity to pressure Meta to act to uphold the rights of Indian Muslims and hold Hindu hate speech mongers to account. Notably, out of the 13 proposals being put to vote, this is the only one that relates to India, and to the inbuilt bias in AI. Proposal 7, titled “Assessing Allegations of Biased Operations in Meta’s Largest Market”, is to be presented at Meta’s AGM on May 31, 2023. It highlights allegations against Facebook for disseminating hate speech and its failure to address risks and political bias, and voices concerns around inadequate content moderation and lack of transparency in platform practices.

This campaign, jointly launched by Ekō, India Civil Watch International (ICWI), and Internet Freedom Foundation (IFF), aims to raise awareness among users of Meta platforms about Proposal 7 and the concerns it highlights, and urges shareholders to vote in favour of Proposal 7 by May 31. As part of the campaign, IFF will post every day, from May 26 till May 31, highlighting instances where Meta has failed to address critical issues effectively.

The Meta leadership might not care what Indian civil society groups think, but it certainly cares about the opinion of its shareholders.

This piece then ends with an unorthodox appeal from a senior journalist: we call on Meta’s shareholders to vote YES on Proposal 7.

This article first appeared in the print and online editions of The Telegraph on May 31, 2023.

Meta’s upcoming AGM & global calls for accountability against hate in India: Voting on Proposal 7
https://sabrangindia.in/metas-upcoming-agm-global-calls-accountability-against-hate-india-voting-proposal-7/
Tue, 30 May 2023
Meta (Facebook) is set to hold its Annual General Meeting (AGM) on May 31. Among several proposals to be discussed and voted upon on May 31, Proposal 7 and the outcome of the vote bear significance for the Indian audience. Notably, out of the 13 proposals being put to vote, this is the only one […]

Meta (Facebook) is set to hold its Annual General Meeting (AGM) on May 31. Among several proposals to be discussed and voted upon on May 31, Proposal 7 and the outcome of the vote bear significance for the Indian audience. Notably, out of the 13 proposals being put to vote, this is the only one that relates to India, and to bias in AI.

Proposal 7, titled “Assessing Allegations of Biased Operations in Meta’s Largest Market”, is to be presented at Meta’s AGM on May 31, 2023. It highlights allegations against Facebook for disseminating hate speech and its failure to address risks and political bias, and voices concerns around inadequate content moderation and lack of transparency in platform practices.

This campaign, jointly launched by Ekō, India Civil Watch International (ICWI), and Internet Freedom Foundation (IFF), aims to raise awareness among users of Meta platforms about Proposal 7 and the concerns it highlights, and urges shareholders to vote in favour of Proposal 7 by May 31. As part of the campaign, IFF will post every day, from May 26 till May 31, highlighting instances where Meta has failed to address critical issues effectively. The Citizens for Justice & Peace “Hate Hatao” campaign (https://cjp.org.in/hate-hatao) has formed the basis for much of the intervention analyses and the report.


New Delhi, May 26, 2023: Meta, the parent company of Facebook, Instagram, and other widely used platforms, is set to hold its AGM on May 31. Founder, Chairman, Chief Executive Officer, and largest shareholder (13.4%) Mark Zuckerberg, senior executives, nine members of Meta’s Board, and other members of the leadership team will be in attendance at the AGM. Other large shareholders are asset managers Vanguard with a 6.9% share, BlackRock with 5.8%, and Fidelity with 4.7%. Zuckerberg is not only the largest shareholder; he controls Meta with 61.9% of all votes thanks to super-voting shares.

Amidst the various proposals to be discussed, Proposal 7 tackles the critical issue of how Meta handles content regulation in India, a matter with profound implications for our society. It delves into the concerning role played by Meta’s platforms in disseminating hate speech, fostering divisions, and even instigating real-world violence. Ekō, ICWI, and IFF, have jointly launched a campaign to increase awareness among Meta shareholders and Meta users about the upcoming Meeting and Proposal 7. The initiative has called on shareholders to vote ‘Yes’ on Proposal 7 by May 31.

Glass Lewis, a leading proxy advisory service whose institutional investor clients collectively manage more than $40 trillion in assets and which provides them with guidance on resolutions, has recommended that shareholders vote ‘Yes’ on the proposal. For years, Glass Lewis has brought environmental, social, and corporate governance (ESG) expertise and analysis to approximately 100 global markets.

This proposal and the outcome of the vote bear significance for the Indian audience. During the 2020 Delhi riots, Facebook faced numerous allegations that hate speech spread on the platform had fueled the violence. Facebook’s role in the communal riots that erupted in Delhi was also investigated after a video of a religious leader openly calling for ‘ethnic cleansing’ was shared widely on various Meta platforms.

Of particular concern is Meta’s consistently disappointing approach in such instances. Rather than promptly addressing divisive content, they have prioritised potential business interests over removing a source of hate speech, arguing that the latter could negatively impact their business in India.

Reports also indicate that Facebook may have allowed political parties to promote surrogate advertisements to boost their visibility. Furthermore, the content moderation system, which serves as our defence against hate speech, is ineffective in handling India’s diverse range of official languages.

Accusations have surfaced from individuals across the political spectrum over the years, with the most significant impact often affecting those without power. While social networks enable users to exercise their right to free expression, a goal worth protecting, we are frequently confronted with the harms they cause. This calls for systemic fixes and genuine accountability in a transparent, proportional, and certain manner.

Teesta Setalvad, senior activist and participant in Meta’s Human Rights Impact Assessment, said, “At Citizens for Justice and Peace we have used every method available to track and report hate speech that is so harmful to our society. Social media, and the particular algorithm of Meta in India has made everything worse. It has given a megaphone to the worst elements in our society, and further disempowered institutional mechanisms to hold them to account. It is with good faith that we participated in the Human Rights Impact Assessment and are extremely disappointed in Meta’s response. Not only was the report not made public, there has been absolutely no change based on our suggestions. India is the only country that has been subject to this degree of lack of transparency. This double standard needs to stop. Indian users of Meta are subjected to viral hate speech fed by its biased algorithms, while American users of Meta have checks and balances engineered to protect its users from the same thing. We urge the shareholders of Meta to use this opportunity to vote yes on Resolution 7.”

Apar Gupta, Founding Director of IFF, expressed disappointment over Meta’s failure to fulfil its obligations to shareholders and the Indian republic, stating, “Today, a crisis affects Meta’s reputation, operations, ESG commitments, and, ultimately, its investments. Meta platforms Facebook, Instagram and Whatsapp with a rising teleconnectivity are used by most, if not all Indians with internet connections. The widespread use of these social platforms by its very nature bears the weight of social responsibility by Meta, in the company’s largest market.”

Ekō has submitted a shareholder proposal that demands that Meta commission a non-partisan assessment of these allegations and disclose the results in a report to investors. The assessment would evaluate political biases, content management capabilities, and the effectiveness of mechanisms in combating hate speech and disinformation. Meta has failed to publish the full report of the Human Rights Impact Assessment (HRIA) for India, which raises concerns about Meta stifling transparency and accountability. Further, the four-page summary of India’s assessment published in Meta’s first annual Human Rights Report is not reflective of the inputs provided by several civil society organisations that participated in the assessment.

Meta’s Board has already cast the Proposal in an unfavourable light, justifying such limited and insufficient disclosure as necessary to mitigate security risks for Meta’s employees. The Board of Directors has thus recommended that shareholders vote against this proposal.

To prevent the misuse of platforms for divisive agendas, civil society organisations, including IFF, will post every day, from May 26 till the day of the meeting, highlighting instances where Meta has failed to address critical issues effectively. Campaigners will use the hashtag #VoteYesonProposal7 on social media to encourage shareholders to #VoteForABetterMeta. As an introduction to the campaign, IFF has published a video to further raise awareness among shareholders and users.

About Ekō: Ekō is a community of people from around the world committed to curbing the growing power of corporations. They wish to buy from, work for and invest in companies that respect the environment, treat their workers well and respect democracy.

About ICWI: ICWI is a non-sectarian left diasporic membership-based organisation that represents the diversity of India’s people and anchors a transnational network committed to building radical democracy in India.

About IFF: IFF is a digital rights advocacy organisation registered as a public charitable trust which aims to ensure that technology respects and furthers the fundamental rights of internet users in India. It works across a wide spectrum of issues, with expertise in free speech, electronic surveillance, data protection, net neutrality and innovation.

Related:

Is Facebook shirking responsibility for enabling the spread of hate in India?

A social media account that promotes Hindu supremacy and fear mongering
