Let me tell you the story of 103-year-old Haryana man Dhuli Chand. One day in September 2022, dressed in wedding attire, he rode a chariot, with villagers and relatives accompanying him while a band played music. Chand was on his way to meet government officials in Rohtak to convince them that he was alive.
He was declared dead in the state’s Family ID database and his monthly pension was stopped. The database uses algorithms and Artificial Intelligence (AI) to determine if a person is eligible for welfare benefits.
The Family ID database, also known as Parivar Pehchan Patra, maps every family’s demographic and socio-economic information by linking several government databases to check eligibility for welfare schemes. The algorithm had deduced that Chand was dead.
Chand’s is not an isolated case of an algorithmic snafu. The Haryana government declared more than three lakh people in the state dead over three years; thousands of them were wrongly recorded as dead because of errors in the Family ID.
Old-age pension isn’t the only casualty. The algorithm has wrongly declared the living as dead, the poor as well-off and the disabled as able-bodied, robbing people of old-age pensions, disability pensions, widow pensions and other welfare benefits meant for the poor.
When people wronged by the algorithm approached government officials to get the records corrected, they ran into red tape, as Dhuli Chand did when he had to stage a spectacle to get officials to declare him alive on paper.
In the past few years, at least half a dozen states have adopted algorithmic systems that use AI and machine learning to predict citizens’ eligibility for welfare schemes. Over the past year, The Collective’s member Tapasya, along with Kumar Sambhav and Divij Joshi, investigated the use and impact of such algorithms in partnership with the Pulitzer Center’s Artificial Intelligence Accountability Network. Tapasya reported from Haryana, Telangana and Odisha.
What’s concerning is that the governments using this profiling software refuse to divulge crucial information: how the systems are designed, which data points feed their decisions, what source code the algorithms run on, and whether and how the algorithms are evaluated and improved.
Part 1 of the series revealed how an opaque and unaccountable algorithmic system deprived tens of thousands of poor people of their rightful subsidised food.