Updated 26 February 2026 at 17:34 IST
Not Just an Indian Problem: Algorithmic Harm as a Global Crisis
India's crisis is a local manifestation of a global emergency—and India may be uniquely positioned to solve it.

From Detroit to the Netherlands to Australia, algorithms are destroying lives with the same indifference.
Robert Williams was on his lawn in Farmington Hills, Michigan, playing with his daughters when the police cruiser pulled up. It was January 9, 2020. The officers asked him to step forward. They handcuffed him in front of his children, ages 2 and 5, as his wife Melissa watched in shock from the doorway.
His crime? A facial recognition algorithm had matched his driver's license photo to surveillance footage of a shoplifter at a Shinola watch store in Detroit. Williams was not the shoplifter. He had never been to the store. The two men didn't even look alike to the human eye. But the algorithm's confidence was enough for a warrant, an arrest, and 30 hours in a holding cell.
During interrogation, Detective Donald Bussa showed Williams a grainy surveillance photo. Williams looked at the image, then at the detective. 'No, this is not me,' he said. 'I hope you don't think all Black people look alike.' The detective's response: 'So I guess the computer got it wrong.'
Williams replied: 'I guess the computer got it wrong, but I'm the one who's sitting here.'
❝ 'I guess the computer got it wrong, but I'm the one who's sitting here.' — Robert Williams, wrongly arrested based on facial recognition ❞
Williams became the first documented case of a wrongful arrest based on facial recognition in the United States. He would not be the last. Michael Oliver was arrested in Detroit in 2019 on a felony charge after a facial recognition mismatch; the case was dismissed when it became clear he was not the man in the footage. Nijeer Parks spent 10 days in jail in New Jersey and nearly three years fighting felony charges for a crime committed by someone else in a town he had never visited. Randal Reid was arrested in Louisiana in November 2022 for a purse theft in Georgia—a state he had never set foot in.
All Black men. All victims of the same algorithmic failure. A 2019 National Institute of Standards and Technology study found that facial recognition algorithms misidentified Black and Asian faces 10 to 100 times more often than white faces. The algorithms that are reshaping policing worldwide were trained predominantly on white faces and fail systematically on the faces they were not designed to see.
The Netherlands: When Algorithms Destroy Families
Across the Atlantic, a Dutch government algorithm designed to detect childcare benefit fraud was producing what officials would later call 'unprecedented injustice.' The tax authority's risk-classification system flagged tens of thousands of families as potential fraudsters based on factors including having a foreign-sounding name, living in a low-income neighbourhood, or making minor administrative errors on applications. (A related government profiling system, SyRI—System Risk Indication—was separately struck down by a Dutch court in 2020 for violating human rights.)
Once flagged, families were presumed guilty. The tax authority—the Belastingdienst—demanded repayment of benefits, sometimes €100,000 or more, regardless of whether fraud had actually occurred. Families who couldn't pay faced wage garnishment, home seizures, and complete financial ruin. More than 1,100 children were placed in foster care because their parents could no longer afford to care for them. Marriages collapsed under the strain. At least one person took their own life.
Radijya Mohamed was one of the victims. A single mother of three, she was accused of fraudulently receiving €49,000 in childcare benefits. The tax authority seized her wages, leaving her unable to pay rent. She lost her home. Her children were placed in foster care for eighteen months while she fought to prove her innocence. She had committed no fraud; the algorithm had simply flagged her name as suspicious.
"The rule of law has been seriously violated. Fundamental principles have been violated over a period of years. Parents have been labelled as fraudsters without evidence. Their lives have been destroyed. Families have been torn apart." — Dutch Parliamentary Inquiry Report, 'Unprecedented Injustice'
An investigation revealed that the algorithm disproportionately flagged families with dual nationality—predominantly immigrants from Morocco, Turkey, and Suriname. The system had encoded discrimination into its risk scores, treating foreign heritage as a risk factor. Internal documents showed that having a non-Dutch nationality was weighted as an indicator of potential fraud.
More than 26,000 families were affected. In January 2021, the entire Dutch cabinet resigned over the scandal—the first government in modern history to fall because of algorithmic harm. Prime Minister Mark Rutte called it 'a terrible injustice.' But the apology could not restore the years lost, the families separated, the lives destroyed by an algorithm that saw fraud where there was only poverty.
Australia: Robodebt and the Algorithm of Death
In Australia, the government's 'Robodebt' scheme used an automated system to accuse welfare recipients of owing money. The algorithm averaged annual income data from the Australian Taxation Office to estimate fortnightly earnings—a methodology so flawed that the government's own lawyers would later admit it was unlawful from the very beginning.
Between July 2016 and November 2019, the system sent 470,000 debt notices totaling $1.76 billion Australian dollars. Recipients—many of them university students who worked irregular hours, single parents with part-time jobs, and people with disabilities on limited income—were told they owed thousands of dollars. The burden of proof was reversed: the algorithm's calculation was presumed correct, and recipients had to prove otherwise using payslips from jobs they had held years earlier.
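The averaging flaw is easy to demonstrate. Here is a minimal, illustrative sketch—with an invented means test and invented dollar figures, not Centrelink's actual rules—of how spreading annual income evenly across the year manufactures a "debt" for someone who worked irregularly and claimed benefits only while unemployed:

```python
FORTNIGHTS = 26  # fortnights in a year

def payment(fortnightly_income, base_rate=550.0, free_area=300.0, taper=0.5):
    """Simplified means test (illustrative): the benefit tapers off
    once income exceeds an income-free area."""
    return max(0.0, base_rate - taper * max(0.0, fortnightly_income - free_area))

def robodebt(annual_ato_income, benefit_fortnight_incomes):
    """benefit_fortnight_incomes: income actually earned in each fortnight
    the person was on benefits (often zero, and truthfully reported)."""
    paid = sum(payment(i) for i in benefit_fortnight_incomes)
    # The flawed core assumption: annual income was earned evenly all year.
    averaged = annual_ato_income / FORTNIGHTS
    presumed_entitlement = payment(averaged) * len(benefit_fortnight_incomes)
    return max(0.0, paid - presumed_entitlement)

# A student who earned $13,000 in a vacation job, then was unemployed and
# on benefits for 16 fortnights with zero income. Averaging invents
# $500/fortnight of phantom income during those fortnights, so the system
# claims part of the correctly paid benefit back as a "debt".
debt = robodebt(13000, [0] * 16)  # positive, despite no overpayment
```

The person's real fortnightly entitlement was paid correctly; the debt exists only because the averaged figure retroactively assigns income to fortnights in which none was earned—exactly the reversal of reality the Royal Commission condemned.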
The human toll was catastrophic. Rhys Cauzzo, 28, from Queensland, took his own life in January 2017 after receiving a Robodebt notice demanding repayment. He had struggled with mental health issues, and the debt notice—which he believed he could not dispute—pushed him over the edge. His mother, Jennifer Miller, later testified to the Royal Commission: 'The Robodebt system killed my son.'
Jarrad Madgwick, 22, died by suicide in May 2019, hours after being told he owed a Robodebt; his mother, Kath Madgwick, became one of the scheme's most prominent critics. David Dains took his life in February 2017. The Royal Commission documented at least three suicides directly linked to the scheme and identified 'a significant number' of additional deaths that may have been connected.
❝ The algorithm sent 470,000 debt notices. At least three people took their own lives. The Australian government paid $1.8 billion in settlements. The officials who designed it faced no criminal charges. ❞
The Royal Commission, which delivered its report in July 2023, found that the scheme was 'a crude and cruel mechanism, neither fair nor legal, and it made many people feel like criminals.' Commissioner Catherine Holmes identified 'cruel' and 'heartless' behaviour by government officials who knew the system was unlawful but continued it anyway because it was generating revenue.
The scheme was ruled unlawful in November 2019. The government settled a class action for $1.8 billion—one of the largest government payouts in Australian history. But the money cannot bring back the dead, repair the families destroyed, or undo the years of anxiety, shame, and desperation inflicted by an algorithm that prioritized efficiency over accuracy, revenue over human dignity.
The United Kingdom: Universal Credit and Digital Exclusion
Britain's Universal Credit system, which consolidated six welfare benefits into one digital platform, has been accused of 'digital by default' discrimination. Claimants must apply and manage their benefits online—a requirement that systematically excludes those without internet access, digital literacy, or stable housing.
Errol Graham, 57, was found dead in his Nottingham flat in June 2018. He weighed just 4.5 stone—about 28.5 kilograms. His out-of-work disability benefits had been stopped eight months earlier when he failed to attend a Work Capability Assessment. The Department for Work and Pensions made no effort to check why he had missed the appointment. The system simply flagged him as non-compliant and cut off his benefits. He starved to death.
"The DWP's processes are not designed for vulnerable people. They are designed for efficiency. When efficiency becomes the goal, human beings become acceptable collateral damage." — Disability Rights UK
A UN Special Rapporteur on extreme poverty and human rights, Philip Alston, concluded in 2018 that the UK government was in 'denial' about the impact of austerity and welfare reform on the poorest citizens. He found that Universal Credit's digital-by-default approach 'effectively marginalizes those least able to cope.'
The Global Taxonomy of Algorithmic Harm
The pattern is global. India's algorithmic harms—starvation deaths from Aadhaar, suicides from loan apps, wrongful imprisonment from facial recognition, discrimination from caste-encoding AI—are local manifestations of a worldwide crisis. The technology differs; the harm is the same.
Welfare Systems: From Aadhaar to Robodebt to the UK's Universal Credit, automated systems produce high false positive rates among the most vulnerable. The algorithm sees efficiency; it does not see the grandmother without internet access, the manual labourer whose fingerprints are worn smooth, the student whose irregular work hours confuse income averaging calculations.
Facial Recognition: From Detroit to Delhi, documented bias against minorities leads to wrongful accusations. MIT's Gender Shades study found error rates of up to 34.7% for darker-skinned women compared to 0.8% for lighter-skinned men. Delhi Police's reported accuracy rate of under 2% is not an outlier—it's the norm for systems deployed on populations they weren't designed to recognize.
Predatory Lending: From India's loan apps to American payday lending algorithms, automated systems trap people in debt cycles. In the US, algorithms have been shown to charge Black and Latino borrowers higher interest rates than white borrowers with identical credit profiles—the same digital redlining that India's loan apps practice against economically vulnerable communities.
Gig Platforms: From Swiggy to Uber to DoorDash, algorithmic management produces dangerous conditions globally. In the UK, Uber drivers have died from exhaustion after the algorithm pushed them to work 90-hour weeks. In Brazil, delivery riders face the same impossible time pressures as their counterparts in Hyderabad. The algorithm that killed Adil Ahmed in Gachibowli operates on the same principles as the algorithm that killed a DoorDash driver in Los Angeles.
Why Every Solution Has Failed
The European Union passed the AI Act—the world's most comprehensive AI regulation—in March 2024. The United States issued Executive Order 14110 on AI safety in October 2023. India released its AI Governance Guidelines in November 2025. Canada, Brazil, Singapore, Japan, and a dozen other countries have enacted or proposed AI regulations.
None address the fundamental problem: AI systems are still built with single-metric optimization. The algorithm optimizes for efficiency, or accuracy, or profit—and treats human welfare as a constraint to be managed rather than a value to be embodied. Guardrails are easily circumvented when the underlying architecture is built for extraction.
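The difference between optimizing a single metric and embodying human welfare can be shown in a toy example. The sketch below (invented numbers and a hypothetical fraud-flagging model, purely illustrative) tunes a flagging threshold two ways: once to maximize recovered revenue alone, and once with a cost attached to every wrongly flagged family. The two objectives select very different systems:

```python
def outcomes(k):
    """Hypothetical fraud-flagging model. k in 1..9 is a strictness step:
    stricter thresholds flag fewer people, with higher precision."""
    t = k / 10
    flagged = 10000 - 1000 * k            # people flagged at this strictness
    true_fraud = round(flagged * t)       # precision rises with strictness
    false_positives = flagged - true_fraud
    revenue = true_fraud * 5000           # average recovery per real case
    return revenue, false_positives

# Single-metric optimization: maximize revenue; harm to wrongly flagged
# families never enters the objective at all.
best_revenue = max(range(1, 10), key=lambda k: outcomes(k)[0])

# A welfare-aware objective charges a cost for every innocent family
# flagged, instead of treating them as free.
HARM_COST = 20000
best_welfare = max(range(1, 10),
                   key=lambda k: outcomes(k)[0] - HARM_COST * outcomes(k)[1])
```

Under these invented numbers, the revenue-only objective picks a lax threshold that flags thousands of innocent families, while the welfare-aware objective picks the strictest one and flags only a fraction as many. The point is not the numbers but the structure: what the objective function counts is what the system produces.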
❝ You cannot regulate compassion into a system designed for exploitation. You cannot add guardrails to architecture built for harm. The problem is not insufficient regulation. The problem is fundamental design. ❞
The EU AI Act prohibits 'social scoring'—but permits credit scoring algorithms that achieve the same discrimination through different means. The US executive order requires 'red teaming'—but doesn't require that systems be designed for human flourishing. India's guidelines call for 'people first'—but don't specify how that principle translates into computational architecture.
Why India Could Lead
India occupies a unique position in the global AI landscape—a position that could enable it to pioneer solutions that have eluded the rest of the world.
Scale: 1.4 billion people. The world's second-largest AI skill penetration rate after the United States. $15.4 billion in AI-related investments in 2024 alone. Over 2,200 AI startups and growing. If India builds ethical AI, it builds at population scale—and creates exportable solutions for the entire developing world.
Experience: India has experienced algorithmic harm at a scale few other nations can match. Santoshi Kumari is not hypothetical. Bhupendra Vishwakarma is not theoretical. Umar Khalid is not a thought experiment. The problems are documented in court records, litigated in the Supreme Court, mourned in families across the country. India knows what's at stake because India has paid the price.
Wisdom Traditions: The concepts underlying ethical AI—ahimsa (non-harm), karuna (compassion), nyaya (justice), satya (truth), daya (mercy), sama (equanimity)—are indigenous to Indian philosophy. They are not foreign imports requiring cultural translation but native resources awaiting computational expression. India doesn't need to borrow ethics from Silicon Valley; it needs to encode the ethics it already has.
Digital Infrastructure: India has built population-scale digital infrastructure—Aadhaar with 1.3 billion enrollees, UPI processing 13 billion transactions monthly, DigiLocker with 6.5 billion documents, CoWIN with 2.2 billion vaccination records. The architecture exists. The deployment capability exists. The question is whether India will use that infrastructure to build ethical AI or to amplify existing harms.
❝ The choice is not whether to regulate AI. The choice is what kind of AI to build. India can lead the world—if India chooses to lead. ❞
Robert Williams still lives in Michigan. His case was eventually dismissed, but his arrest record remains, flagged by background check algorithms every time he applies for a job. The Dutch families are still rebuilding—some will never recover their children, their marriages, their trust in government. The Australian dead are still dead, and the officials who designed Robodebt face no criminal charges despite a Royal Commission finding their conduct unlawful.
And the algorithms that destroyed their lives are still running—in Michigan, in Amsterdam, in Canberra, in Delhi. Still optimizing. Still processing humans as data points. Still producing the harms they were designed to produce.
The Angels propose a different path. Not constraints on bad systems but good systems by design. Not guardrails but architecture. Not regulation but transformation. The global crisis demands a global solution. India—with its scale, its experience, its wisdom traditions, and its infrastructure—could provide it.
Published By : Deepti Verma
Published On: 26 February 2026 at 17:34 IST