Artificial Intelligence (AI) and Casteism in India

“Will AI support social justice or is it waiting in the wings to bury social justice once and for all?”

Babu DC Verma

(Asian Independent)   Historical biases already ingrained in India's present systems will be carried over, and even amplified, in AI if they are not addressed responsibly. Modern AI is heavily statistical in nature: it relies on real-life data to learn and make decisions, and real-life data carries the biases of historical inequalities, social structures, and human behaviour. AI can therefore, intentionally or unintentionally, inherit and reinforce those biases. India has been a highly diverse country for centuries, but the caste system stands as one of its most entrenched and harmful forms of discrimination, a fact not hidden from the world. Its impact is so profound that many individuals have even changed their religion to escape it, often without success.

Oppressed communities remain marginalized in modern society due to systemic wealth disparities rooted in entrenched social inequities. The principle of “wealth attracts wealth” creates a feedback loop that further hinders the upliftment of these communities. Similarly, in the context of AI, the phrase “garbage in, garbage out” highlights how biased data leads to biased decisions. When these biased outputs are used as inputs for future decisions, they perpetuate a negative feedback loop, reinforcing systemic discrimination.
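The feedback loop described above can be illustrated with a minimal, hypothetical simulation. The numbers and the "rich get richer" rule below are illustrative assumptions, not data about any real system: a decision system that learns from its own past approvals favours the already-advantaged group a little more each round, so a modest initial imbalance compounds over time.

```python
# A minimal sketch of a biased feedback loop (all numbers hypothetical).
# Each round, a group's share of past approvals determines how strongly
# the system favours it next round: "garbage in, garbage out" applied
# to the system's own previous decisions.

def one_round(share_a, share_b):
    # Preferential feedback: weight each group by the square of its
    # past share (an assumed amplifying rule), then renormalise.
    wa, wb = share_a ** 2, share_b ** 2
    total = wa + wb
    return wa / total, wb / total

share_a, share_b = 0.6, 0.4  # modest initial imbalance in historical data
for step in range(5):
    share_a, share_b = one_round(share_a, share_b)

# After a few rounds, share_a approaches 1 while share_b approaches 0:
# the initially disadvantaged group is squeezed out of the approved pool.
print(share_a, share_b)
```

The specific amplifying rule is a stand-in; the point is structural, namely that any system whose outputs feed back into its inputs can turn a small historical skew into near-total exclusion.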

Bias, briefly, is prejudice in favour of or against a person, thing, or idea. That preference influences understanding and outcomes in a way that prevents neutrality or objectivity. When it comes to caste in India, biases are present in both explicit and implicit forms. In blue-collar jobs they are obvious; in white-collar jobs they are more hidden but still quite noticeable. Individuals with hidden caste biases may not openly discriminate, yet they may still support the system of caste inequality, especially when they belong to the majority group and when important matters are at stake. This passive complicity keeps caste-based unfairness going in social and work environments. Technologies such as facial recognition used in surveillance to identify criminals, AI systems that flag fraud in bank accounts and transactions, and AI-based hiring tools may not treat everyone fairly.

Religious polarization is also taking shape in India. The algorithms behind social media, designed to capture our attention, are deepening divisions within communities. These forces could tear the fabric of society. It is no secret that technological advancements have often been used by those in power to tighten their grip. The printing press helped control the flow of information and enabled censorship; radio was used for propaganda; television became a way to control information and shape culture and what people consume. The internet introduced digital gatekeeping, surveillance, and a growing divide between those with access to technology and those without. Social media now plays a role in spreading misinformation, manipulation, and bias, all while enabling surveillance and censorship. These technologies, while initially lauded for their democratizing potential, have often been co-opted to serve the interests of dominant groups.

And AI, far more powerful than the technologies before it, is not inherently good or bad: it is a tool that reflects the values and intentions of its creators and users. Stuart Russell, a computer scientist, once remarked in a speech: "A sad fact about the world is that many people are in the business of shaping other people's preferences to suit their own interests, so one class of people oppresses another but trains the oppressed to believe that they should be oppressed. Should the AI system then take those self-oppression preferences of the oppressed literally, and contribute to the further oppression of people who have been trained to accept their oppression?" Stephen Hawking once remarked on AI: "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

Can we trust those who claim they are not responsible for the injustices their ancestors inflicted on the oppressed? Would they put in extra effort to ensure justice when designing, developing, and using AI? Maybe yes, maybe no. But can we afford to take that risk? Definitely not. The only option is ethical development, inclusive governance, and proactive oversight to mitigate that risk and allow the potential of AI to support social justice. However, if AI is left unchecked, or controlled by those who care more about profit or power than fairness, it could bury social justice once and for all. To ensure fair and inclusive governance, decision-making structures for AI must involve people from diverse backgrounds. Representation should encompass all religions, Scheduled Castes (SC), Scheduled Tribes (ST), Other Backward Classes (OBC), women, and dominant castes alike. This diversity is critical to prevent bias and uphold justice for all sections of society.

In the end, the future of social justice in the age of AI hinges on the collective efforts of policymakers, tech activists and society, united in the pursuit of fairness and equity.

(The writer, Babu DC Verma, a graduate of IIT Delhi, is enthusiastic about AI, but responsibly so.)
