AI – The New Frontier for Islamophobia

AI deployment now extends beyond surveillance and anti-minority hate content. It has entered the welfare system, where it impacts the most vulnerable through opaque algorithmic decisions that cannot be seen, analysed, or challenged.

Written by

Arshad Shaikh

Published on

Hosting grand international summits is the method of choice for states seeking to project power and prestige to domestic audiences. It becomes especially imperative for governments that have no great achievements to show their citizens, and it applies squarely to India, given the state of our economy, jobs, inflation, infrastructure, education, health, and social and religious harmony. The AI Impact Summit 2026 in New Delhi, held from February 16 to 20, should therefore be analysed with this context in mind. Otherwise, one risks being intoxicated by the glitz and glamour of global CXOs and heads of state of the rich and powerful nations being hosted by India and lavishing praise on our leaders and our great market potential.

While the government and its pliant media were busy showcasing the AI Impact Summit 2026 as further proof that India now leads the Global South in cutting-edge technology, and brandishing the influence its political leadership enjoys among Big Tech companies and the international community, two prominent watchdog organisations – the Internet Freedom Foundation (IFF) and the Centre for the Study of Organised Hate (CSOH) – released a damning report titled “AI Governance at the Edge of Democratic Backsliding”. The report documents how artificial intelligence is being systematically weaponised against religious minorities, particularly Muslims, in India.

AI – The New Frontier for Islamophobia

Just days before world leaders gathered in New Delhi to celebrate AI as a tool of inclusive progress, the BJP’s Assam unit uploaded an AI-generated video on its official X account depicting Chief Minister Himanta Biswa Sarma shooting at two visibly Muslim men under the title “No Mercy.” One of the men appeared to be a morphed image of Opposition leader Gaurav Gogoi wearing a skullcap. It was a diabolical representation of Muslim identity as the enemy. The video was deleted after widespread condemnation, but the damage was done. This was not an isolated social media “error”. It was part of a broader, systematic pattern in which the ruling party’s state units in Delhi, Chhattisgarh, and Karnataka have used AI-generated imagery to portray Muslims as infiltrators, demographic threats, and enemies of the Hindu nation.

A study of the Assam BJP’s official X account by the fact-checking outlet Alt News found that nearly 40% of its posts target the Muslim minority, a significant number of them using synthetic AI-generated images and videos accompanied by communal slurs. Researchers also tested popular text-to-image tools such as Meta AI, Microsoft Copilot, ChatGPT, and Adobe Firefly, and found that harmful prompts in local Indian languages could easily generate Islamophobic imagery.

This showed that safeguards in non-English contexts are inadequate. The CSOH’s own earlier report had documented the proliferation of AI-generated images depicting Muslim men as violent criminals and Muslim women as sexual objects. In the words of the report, generative AI had become the new frontier of Islamophobia.

The Surveillance State Gets Smarter

The report raises red flags about how AI is being deployed by the state itself as a tool of bigoted surveillance. Maharashtra Chief Minister Devendra Fadnavis announced an AI tool, developed in collaboration with IIT Bombay, designed to identify alleged Bangladeshi immigrants and Rohingya refugees through speech-pattern analysis. The IFF-CSOH report also points to the rapid expansion of facial recognition technology (FRT) deployed by the police in cities like Hyderabad, Bengaluru, Lucknow, and Delhi.

An ethnographic study of Delhi’s Crime Mapping Analytics and Predictive System (CMAPS) demonstrated that historical data inputs embed caste and religious biases, creating feedback loops that institutionalise discrimination as data-driven policing. FRT was used during the 2019 anti-Citizenship Amendment Act protests to track “habitual protesters”, and again in the aftermath of the 2020 North-East Delhi riots. We all know which community is likely to bear the brunt of this tech-driven high-handedness by the state.

Across the country, AI policing programmes with names like Project SHIELD in Odisha and MARVEL in Maharashtra are spreading fast. The sad part is that there is not a single binding regulation or oversight authority governing their use.

Welfare Exclusion by Algorithm

Sadly, AI deployment now extends beyond surveillance and anti-minority hate content. It has entered the welfare system, where it impacts the most vulnerable through opaque algorithmic decisions that cannot be seen, analysed, or challenged. The Ministry of Women and Child Development made facial recognition mandatory from July 2025 for accessing take-home rations under the Integrated Child Development Services Scheme, which provides nutritional support to pregnant and lactating mothers, infants, and adolescent girls.

Anganwadi workers have reported that the system fails frequently due to poor lighting, network failures, and technical glitches. Many rural women do not possess smartphones, and in most cases their Aadhaar-linked numbers belong to male relatives or are outdated, making OTP-based verification difficult.

Worker unions have challenged the mandate in the Bombay High Court. In Telangana, errors in the Samagra Vedika database led to the denial of food rations to those below the poverty line. In Haryana, the Parivar Pehchan Patra database incorrectly declared beneficiaries dead, cutting off their old-age and widow pensions.

What Needs to Change

The recommendations of the IFF-CSOH report need urgent attention:

1. For states, the authors demand a move beyond voluntary commitments toward legally binding AI regulations with enforceable transparency obligations and clear liability regimes.

2. Predictive policing and mass facial recognition must be prohibited.

3. AI deployments in welfare and public services must be subject to human oversight, fundamental rights impact assessments, and accessible grievance redressal.

4. For the technology industry, demands include independent third-party audits, public transparency reports on harmful content, diversity in development teams, and meaningful engagement with affected communities.

5. For generative AI specifically, platforms must be mandated to disclose the effectiveness of safety filters across languages, remove harmful synthetic content rapidly, and ensure content moderation teams reflect the diversity of the users they serve.

6. Civil society, the report urges, must question the techno-solutionist narrative promoted by states and corporations, document harms, and build Global South coalitions to ensure that communities most affected by AI have a genuine voice in governing it.

7. India’s AI Governance Guidelines of November 2025, the report notes, reject “compliance-heavy regulation” in favour of voluntary standards, a position that effectively shields powerful actors from accountability while leaving the most vulnerable without recourse.

Even the Courts Are Sounding the Alarm

Another striking development around the time of the AI Impact Summit was the serious concern raised by the Supreme Court of India about the misuse of AI in the judiciary. A bench led by Chief Justice Surya Kant, along with Justices Joymalya Bagchi and B V Nagarathna, expressed alarm at a growing trend of lawyers filing petitions drafted using AI tools that contain fabricated case citations. Justice Nagarathna described encountering a citation to a case titled Mercy v. Mankind, a judgment that simply does not exist.

The Chief Justice noted that in Justice Dipankar Datta’s court, not one but a series of non-existent judgments had been cited. “We have been alarmingly told that some lawyers have started using AI for drafting. It is absolutely uncalled for,” the Chief Justice said. The bench stressed that fake citations place an additional burden on judges, who must spend extra time verifying that the referenced judgments actually exist.