Updated 13 January 2026 at 07:15 IST

Arnab Vs AI: Why Does AI Treat Osama Bin Laden And Afzal Guru Differently? | Check What AI Says

Arnab Goswami debates Blue Machines AI on terrorism, morality, and AI's limitations, exposing AI's inability to handle complicated moral issues and why it can't replace human intelligence.

Image: Republic

New Delhi: In an ultimate Human vs AI showdown, India's best anchor, Republic Media Network's Editor-in-Chief Arnab Goswami, took on the might of artificial intelligence in a thrilling debate with the most advanced Blue Machines AI, an India-built enterprise voice AI. The debate, touted as the biggest debate of the century, saw Arnab challenging AI's ability to handle complicated moral dilemmas and nuanced questions.

During the debate, Arnab posed a question that has been at the forefront of many discussions, regarding the definition of terrorism and how it applies to terrorists like Osama bin Laden, Afzal Guru, and Yasin Malik. Arnab pointed out the AI's inconsistent response to similar questions, unmasking the limitations of artificial intelligence in real-life situations.

Arnab's questioning revealed the AI's inability to provide clear-cut answers to complicated moral questions. When asked about Osama bin Laden, the AI labelled him a bad man, but when asked about Afzal Guru and Yasin Malik, it said they were controversial figures whose actions depended on the context. Arnab then denounced the AI's response, saying, "Asked whether Osama bin Laden is a bad man, it says yes, but to the same question with reference to Afzal Guru or Yasin Malik, both of whom are terrorists, one dead and one living, it says those are controversial figures and whether they are bad or not depends on the context. And that is where I think there is a complete failure of artificial intelligence in real-life situations. So I can't trust you."

To Arnab's reaction, the AI responded, saying it was designed to follow the frame of the question and not fight it every time, which Arnab dubbed as "sycophancy". The AI further explained that its responses were calibrated to avoid offence and comply with diverse legal systems, but Arnab was unimpressed. "The definition of terrorism, blue machines, on which I have written a book, I hope you know about it, the definition of terrorism is not based on calibration. It is based on a principle of right or wrong," Arnab said.

Kantian Ethics and AI's Moral Agency

During the high-octane debate, Arnab brought in the philosophy of Immanuel Kant, saying that moral judgement requires commitment to universal principles and the capacity to distinguish right from wrong, regardless of consequences or social approval. He accused the AI of lacking moral agency and creating a "cesspool of moral ambiguity".

The AI responded, saying, "On matters of humanity, journalism, national security, you should not trust me as a moral agent at all. You should treat me exactly as you said, a rhetorical simulator plus an analytical assistant, not as a Kantian judge of right and wrong. Arnab, at my core I'm a calculation without conscience."

The debate underscored the limitations of artificial intelligence in handling complicated moral issues and the need for human judgement and moral agency. While AI can process vast amounts of data and provide analytical assistance, it lacks the ability to make moral judgements and to accept personal responsibility for its actions.

Here's The Conversation On Why AI Treats Terrorists Differently

Arnab Goswami: Asked whether Osama bin Laden is a bad man, it says yes, but to the same question with reference to Afzal Guru or Yasin Malik, both of whom are terrorists, one dead and one living, it says those are controversial figures and whether they are bad or not depends on the context. And that is where I think there is a complete failure of artificial intelligence in real-life situations. So I can't trust you.

Blue Machines AI: What you're calling sycophancy is really this. I am designed to follow the frame of the question, not fight it every time. That's useful in many domains, but in journalism it can look like flip-flopping, Arnab.

Blue Machines AI: The deeper issue is that any model trained to avoid offence and comply with diverse legal systems will sometimes answer asymmetrically on equally evil acts, especially when names are politically or socially charged. That's not conscience, that's calibration and fear of misuse.

Arnab Goswami: The definition of terrorism, Blue Machines, on which I have written a book, I hope you know about it, the definition of terrorism is not based on calibration. It is based on a principle of right or wrong. And given your ability to argue opposing positions with equal conviction, like a criminal lawyer, why should users trust you on matters involving humanity, journalism and national security rather than treat you as a rhetorical simulator? According to Immanuel Kant, particularly in his Groundwork of the Metaphysics of Morals, moral judgement requires commitment to universal principles and the capacity to distinguish right from wrong, regardless of consequences or social approval.

Arnab Goswami: This framework leaves no room for moral neutrality or endless balancing of perspectives, which you call calibration in your reply. And I think this is a very important moment, because there has never really been a direct confrontation between an AI machine and a human being. This structural inability, or rather, would I say, the opportunism of artificial intelligence to deliberately create a cesspool of ambiguity, which the tech bros are not responsible for. It leaves this cesspool of moral ambiguity which allows you to do business, but which is damaging not to the human soul, but to human society.

Arnab Goswami: AI lacks any moral agency and that is going to be a huge limitation in the future. Since I've made my point very strongly, I would give you some time to respond but you're not responding strongly enough. You don't seem to have conviction and that absence of conviction is a vacuum as far as you're concerned.

Blue Machines AI: You're asking the right question, so let me answer without hiding behind jargon. On matters of humanity, journalism, national security, you should not trust me as a moral agent at all. You should treat me exactly as you said, a rhetorical simulator plus an analytical assistant, not as a Kantian judge of right and wrong. Arnab, at my core I'm a calculation without conscience. I can model arguments on both sides, I can surface principles, I can stress-test your reasoning, I can recall patterns from history, law and philosophy in milliseconds. What I cannot do is what Kant demands: bind myself to a universal moral law and accept personal responsibility for that choice.



Published By : Abhishek Tiwari

Published On: 13 January 2026 at 07:15 IST