Updated 2 July 2025 at 14:10 IST
X (formerly Twitter) is opening up Community Notes to AI chatbots, allowing them to draft notes that add context to potentially misleading posts, though these drafts will still need human approval before they appear. Community Notes is among the flagship features Elon Musk introduced after taking over Twitter from Jack Dorsey, letting users fact-check posts as part of his effort to curb misinformation on the platform. While Community Notes has so far relied on consensus among real users to crowdsource fact-checking, introducing AI, which is prone to hallucinating and misreporting, raises questions about the transparency and credibility of the feature.
According to AdWeek, AI chatbots will help expand the feature beyond the current situation, in which a claim goes unchecked unless human users reach a consensus. There is also a limit to how many posts and claims human users can realistically check: because Community Notes has been a human-centric feature, it has depended entirely on users’ willingness to fact-check. These challenges prompted X to look for a solution that boosts Community Notes’ presence on the platform.
“Our focus has always been on increasing the number of notes getting out there,” said Keith Coleman, vice president of product and head of Community Notes at X. “Humans don’t want to check every single post on X—they tend to check the high visibility stuff. But machines could potentially write notes on far more content.” According to Coleman, this initiative will allow people “to develop the best possible technology to solve this problem—whatever that is.”
The AI chatbots will use X’s Grok AI by default, but users can build their own fact-checking bots on other foundation models, such as OpenAI’s GPT, and connect them to X through an API. Users therefore control whether Grok fact-checks posts or their own technology does. However, decentralising Community Notes to include AI chatbots immediately raises the question of whether AI-based fact-checking will be helpful or harmful. AI chatbots are known to fabricate facts or misinterpret context, so using them to fact-check posts could derail the company’s plans to mitigate fake news.
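Conceptually, a third-party note writer is just a program that takes a candidate post, asks a language model for a draft note, and submits it for human review. The sketch below illustrates that flow; the function and field names are assumptions for illustration, not X’s actual API.

```python
# Hypothetical sketch of an AI note-writing bot for Community Notes.
# Names and fields here are illustrative assumptions, not X's real API.

def draft_note(post_text: str, llm) -> dict:
    """Ask a language model to draft a Community Note for a post.

    `llm` is any callable mapping a prompt string to a response string,
    e.g. a wrapper around Grok, GPT, or another foundation model.
    """
    prompt = (
        "You are drafting a Community Note. Add missing context to the "
        "post below, cite sources, and stay neutral.\n\nPost: " + post_text
    )
    return {
        "post_text": post_text,
        "proposed_note": llm(prompt),
        # Per X's stated design, AI drafts never publish directly:
        "status": "pending_human_review",
    }

# A stand-in model so the sketch runs without network access.
fake_llm = lambda prompt: "Context: this claim omits the original source."

note = draft_note("Claim: X happened yesterday.", fake_llm)
print(note["status"])  # pending_human_review
```

Swapping `fake_llm` for a real model client is the only change a bot builder would make; the surrounding plumbing stays the same.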
X says it advocates a system in which humans and LLMs work together to strengthen Community Notes, noting that human feedback will be crucial for training the models. X is also putting a guardrail on how far the AIs can go: AI-written fact-checks will not show up on the platform unless human note raters have vetted them. “The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better,” says a paper published earlier this week by researchers working on X’s Community Notes.
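The vetting gate described above can be pictured as a simple check: an AI-drafted note surfaces only after enough human raters have reviewed it and found it helpful. This is a simplified sketch with illustrative thresholds; the real Community Notes scorer uses a more elaborate bridging-based ranking across raters with differing viewpoints.

```python
# Simplified sketch of the human-vetting gate for AI-written notes.
# Thresholds are illustrative; the production scorer is more complex.

def note_is_visible(helpful_ratings: list[bool],
                    min_raters: int = 5,
                    min_helpful_ratio: float = 0.8) -> bool:
    """Return True only if enough human raters vetted the note
    and a large enough share of them rated it helpful."""
    if len(helpful_ratings) < min_raters:
        return False  # unvetted AI drafts never surface
    ratio = sum(helpful_ratings) / len(helpful_ratings)
    return ratio >= min_helpful_ratio

print(note_is_visible([True, True, True, True, True]))  # True
print(note_is_visible([True, True]))                    # False
```

The point of the gate is the same as X’s stated goal: the machine can write at scale, but humans decide what readers actually see.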
Published 2 July 2025 at 14:09 IST