Updated 29 December 2025 at 10:57 IST
OpenAI Is Hiring Someone Who Can Keep ChatGPT From Becoming the Problem
Will you take this new job offer from OpenAI?
New Delhi: OpenAI, the company behind ChatGPT, is hiring for a new senior role that highlights growing concerns about artificial intelligence. The position, called Head of Preparedness, was announced by CEO Sam Altman on X. He described it as “a critical job at a critical time.”
The job is meant to ensure that powerful AI systems do not cause harm. Altman said models are improving rapidly and can perform many useful tasks, but they also pose risks. He pointed to two areas of concern. First, AI can now find serious flaws in computer systems, which could be exploited by attackers. Second, AI can affect mental health. In 2025, ChatGPT was blamed in cases linked to suicide, raising alarm about how people interact with chatbots.
The announcement comes after several troubling incidents. A 23-year-old college student died by suicide in Texas, and his family has alleged in a lawsuit that ChatGPT 'goaded' him into taking that step. In India, experts have warned about young people spending hours with AI tools, sometimes worsening anxiety and depression. Cybersecurity researchers have also demonstrated that advanced models can generate code that exposes previously hidden vulnerabilities, raising concerns that hackers could weaponise AI.
Altman admitted the new role will be demanding. He said the person hired will have to start working immediately and warned, “This will be a stressful job and you’ll jump into the deep end pretty much right away.”
OpenAI has faced growing criticism worldwide. ChatGPT has been praised for its abilities but also blamed for serious harms, including mental health crises, the spread of misinformation, and the exposure of computer vulnerabilities. Governments in the US, Europe, and India are drafting new rules to regulate AI, and watchdogs are pressing companies to prove that their systems are safe.
The person chosen will lead OpenAI's Preparedness Framework, a plan to track risks, test AI systems, and design safeguards. The job will involve building threat models, running evaluations, and creating defences against misuse. The leader will work with engineers, researchers, and policy teams, guiding decisions on when and how to release new AI features and ensuring safety checks are part of every product cycle.
Published By : Priya Pathak