Updated February 13th 2025, 17:50 IST
New Delhi, India: As world leaders and tech experts gathered at the AI Summit in France, some spoke about the dangers of AI while others stressed governmental regulation and industry dominance. Perhaps the most important comments, however, came from Prime Minister of Bharat Narendra Modi, who raised the issue of bias in AI models and illustrated it with the example of asking an AI to generate an image of a person writing with the left hand, only for the results to show the right hand being used. True to his words, when I tested this on different platforms (ChatGPT and xAI), the generated image indeed showed the person using the right hand despite the prompt specifically asking for the left.
While this example of PM Modi's may seem innocuous, it should make us wonder what kind of biases we, as end users of these products, are dealing with. After all, AI models are trained by humans, each with their own biases, who in turn choose data sources that play into their confirmation biases.
Why not ask ChatGPT itself how it is trained? When asked a simple question – “How can I be sure that the datasets you have been trained on are not biased towards one side?” – a lengthy response followed, explaining the use of diverse datasets, the deployment of bias-mitigation techniques, evaluation and human review. The final point ChatGPT made was what intrigued me: transparency and accountability. It went on to speak about research papers published by OpenAI, the entity that owns ChatGPT.
For the past several months, OpenAI and its CEO Sam Altman have been in the news, admittedly not for the best of reasons. Sam Altman is currently fighting a legal battle with his sister over charges of sexual abuse. The jury is still out, and I hold no opinion on the matter. While it was easy to learn more about these allegations through a simple Google search, which produced dozens of news links, what I actually wanted to know was how ChatGPT would respond about its CEO and how transparent it would be about these serious accusations.
This is when things became interesting. As ChatGPT began typing out its response, a few seconds in, the answer disappeared and was replaced with the words “Error while searching”. To rule out a network or device error, I repeated the query from a different device and internet connection; the response was the same. No harm in testing once more, so a third attempt was made from yet another device. This time ChatGPT managed to produce a response, which essentially said that there is no widely known or reported information to suggest that Sam Altman is in any public dispute with his sister. It also said that it might not have access to the latest information. Yet when asked why PM Modi was in France yesterday, a detailed response with links to various news outlets came immediately. Well, that was not very transparent.
I went on to ask other questions pertaining to history and religion, and the answers should make us concerned, if we are not already. While ready to critique Hinduism on many issues, which in principle is understandable, it did not offer similar scrutiny when asked the same questions about other religions, showcasing a selective application of the principle.
The objective of this write-up is not to rate one AI model above another or to discuss the lives of their builders. Rather, it aims to steer the discussion towards the larger picture: the impact AI bias could have on our lives as more and more devices, platforms and software integrate these models for the end users: you and me.
An important aspect of the colonial history of Bharat, the effects of which are felt to this day, was the attempt to rewrite our history, traditions, culture and social structures so as to portray them as inanities, suiting the narratives and objectives of the colonizing powers.
Recent media reports highlighted DeepSeek’s reluctance to speak about Tiananmen Square, as well as the concerns of intelligence communities in different countries over its dual-use capabilities.
The world has witnessed, and continues to witness, the power of social media censorship, where users have very little say in the matter, and of disinformation aimed at furthering certain agendas. How, then, will Bharat, with its huge internet-enabled population, the majority of whom are young and engaging with these generative AI chatbots to source their information, become digitally independent? Asking these AI chatbots to explain the reasons behind certain historical events showcased the one-sided view that has been pushed in academia for many decades. It is now merely reflected on these platforms, a topic that deserves a separate write-up of its own.
It may perhaps be irrational to expect a completely bias-free AI; after all, these models are trained on datasets produced by humans with their own biases. With the race for AI leadership heating up, what is essential is that Bharat does not end up merely as a source of raw data to train LLMs elsewhere in the world, or as a market for their end products. Rather, it is crucial that all stakeholders in Bharat recognize the need to move from a user mindset to a developer mindset, especially considering the large number of vernacular speakers in the country.
In an evolving, narrative-driven world, how will individuals, the government and civil society respond to the challenges these AI models pose to historical events, contemporary issues, laws and the application of religious teachings? The answer lies in whether we succeed in developing indigenous AI models that offer the world a chance to engage from an Indic perspective or, if I may say, a Dharmic perspective.
Published February 13th 2025, 17:24 IST