Arshad Shaikh probes the accusations against social media giant Facebook for spreading hate and not doing enough to curb its platform being misused for the dissemination of Islamophobia and polarising content. 

American social media giant Facebook (FB) is no stranger to controversy. It has been targeted over issues such as user privacy, election manipulation, the spread of fake news and copyright infringement. Facebook, a company that crossed a market cap of $1 trillion in June this year and claims more than a third of the entire global population as its 'monthly active users', was again in the news recently over accusations by whistle-blower Frances Haugen. Her revelations, called the 'Facebook Papers', made to the Securities and Exchange Commission and obtained by a consortium of news organisations, show, among many other things, that FB did little to clamp down on reported instances of hate content against minorities in India.

According to a story by The New York Times (“In India, Facebook Grapples with an Amplified Version of Its Problems” by Sheera Frenkel and Davey Alba, dated October 23, 2021), Facebook did not maintain enough resources in India to tackle anti-Muslim posts (Islamophobia) and misinformation about a myriad of issues. The leaked information reveals some damaging facts about the internal workflows of FB and its policies for dealing with countries other than the United States.

For example, in February a pair of Facebook employees set up a dummy account of a 21-year-old woman from North India and began to document what the FB app was pushing into her timeline. After some general content, she was flooded with pro-Modi propaganda and Islamophobic posts. An internal memo accessed by the Washington Post called the dummy account an “integrity nightmare” and highlighted the stark disparity in the FB user experience between the US and India.

It also became known through the ‘Facebook Papers’ that FB was very much aware of the anomaly that left the platform susceptible to abuse by hatemongers and authoritarian regimes, and yet did little to address the problem. It is reported that in 2020, FB spent 84% of its allocated resources on tackling misinformation in the United States, even though that country accounts for only about 10% of its user base; the rest of the world got a measly 16%. So how exactly does the sharing of content work on the Facebook platform? Why is bad content shared more than good content? Why the disparity in tackling hate? Will Facebook do anything about it, and why has the Government of India been silent about this so far?


The Facebook timeline is the place where you post your messages, images and videos so that your friends get a glimpse of you and your life story when they land on your page. They can leave public messages (text and photos) for you; so, in a way, their posts become part of your history. There is also a section called the News Feed, controlled by the Facebook algorithm, which is supposedly designed to select and surface the most relevant and engaging stories out of the several thousand potential ones. Officially, what the algorithm shows in your News Feed depends on the following factors, with first priority given to stories that you comment on, share, click and spend time reading (called engagement).

The four main factors that shape your News Feed are: (1) Who posted the story: if you have engaged with the author before, Facebook assumes you will be interested in their posts. (2) How other people engaged with the post: the more others have engaged with a post, the more likely FB is to show it to you too. (3) What type of post it is: different people engage with and spend time on different types of posts; some love watching videos while others prefer reading news stories. (4) When it was posted: the more recently a story was posted, the more likely Facebook is to show it to you. Although this scheme appears quite innocent and logical, it is wreaking havoc in the world.
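To make the four factors concrete, here is a minimal sketch of how such an engagement-driven ranking might combine them into a single score. The weights, the log scaling and the decay constant are illustrative assumptions for this article, not Facebook's actual (proprietary) formula:

```python
from dataclasses import dataclass
import math

# Hypothetical post features mirroring the four factors described above.
# All names and weights are assumptions for illustration only.
@dataclass
class Post:
    author_affinity: float   # (1) your past engagement with this author, 0..1
    global_engagement: int   # (2) likes/comments/shares from other users
    type_preference: float   # (3) your preference for this post type, 0..1
    age_hours: float         # (4) hours since the post was published

def rank_score(post: Post) -> float:
    # Log-scale raw engagement so a viral post does not dominate outright.
    engagement = math.log1p(post.global_engagement)
    # Recency decays exponentially; a day-old post scores far lower.
    recency = math.exp(-post.age_hours / 24)
    return post.author_affinity * post.type_preference * engagement * recency

# A fresh post from a close friend can outrank a viral but stale one.
feed = [
    Post(author_affinity=0.2, global_engagement=5000, type_preference=0.5, age_hours=48),
    Post(author_affinity=0.9, global_engagement=50, type_preference=0.8, age_hours=2),
]
feed.sort(key=rank_score, reverse=True)
```

The design point to notice is that every term in the score rewards engagement of one kind or another, which is precisely why content that provokes strong reactions rises to the top.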


To appreciate this phenomenon, let us tune in to Tristan Harris, Co-founder and President of the Center for Humane Technology, as he testified before a United States Senate Subcommittee hearing on Privacy, Technology and the Law titled ‘Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds’.

Harris testified: “A business model that preys on human attention, which means that we are worth more as human beings and citizens of this country when we are outraged, polarised, narcissistic and misinformed. It means the business model was successful at steering our attention using automation. There is a decentralised incentive for yellow journalism that wants to make each of us yellow journalists because we are more rewarded the more extreme things we say. We are raising entire generations of young people who will have come up under these exaggerated prejudices, division, mental health problems, and an inability to determine what’s true. They walk around as a bag of cues and triggers that can be ignited. If this continues, we will see more shootings, more destabilisation, more children with ADHD, more suicides and depression, deficits that are cultivated and exploited by these platforms. We should aim for nothing less than a comprehensive shift to a humane, clean “Western digital infrastructure” worth wanting.”


At the heart of Facebook’s (and other social media platforms’) inability, or reluctance, to control polarising posts and hate content on their feeds lies the debate about the ‘inviolable right to free speech’ and the extent to which that right can be curbed by society and the state through regulation and legislation. More specifically, the debate about algorithms and amplification comes down to the business or revenue model of social media platforms, which sell proof of content consumption (engagement) to those who advertise on them.

The “Like Share Subscribe” culture established by social media gives precedence to content that is partisan, controversial and polarising over content that is balanced, clean and accommodative. It hands unprecedented power to those who divide society rather than unite it, who prefer elimination over assimilation and believe in hurting rather than healing. Since their ideas are amplified by the nature of the algorithm, their content reaches a bigger audience, they are awarded more traction and prominence by the mainstream media, and they ultimately end up getting more votes if they are in politics.

The correct approach would be to strip away the sacrosanct aura surrounding the right to free speech. Speech, like any other human activity, must be subject to the same moral standards, laws and protocols that govern other forms of human behaviour. This online Frankenstein, feeding on hate and animosity, delivers revenue to the platform, which therefore cannot do away with the algorithm, and so the vicious cycle continues. As the IE editorial (26 October) said: “For impartial and reasonable regulation of the digital sphere, the political class, too, must be willing to sacrifice the quick gains it has reaped on social media, sometimes at the expense of the guiding principles of constitutional democracy.”
