Joe Biden admin seeks additional safety measures before release of AI tools like ChatGPT
US President Joe Biden’s administration wants stronger measures to test the safety of artificial intelligence tools like ChatGPT before publicly releasing them.

The Biden administration is seeking more rigorous safety testing for artificial intelligence (AI) tools like ChatGPT before they are made available to the public. However, it remains uncertain whether the government will play a part in the vetting process.
On Tuesday, the US Commerce Department announced that it will spend the next 60 days soliciting feedback on the feasibility of measures such as AI audits, risk assessments, and other initiatives that could alleviate consumer apprehension about these emerging systems.
“There is a heightened level of concern now, given the pace of innovation, that it needs to happen responsibly,” AP quoted Assistant Commerce Secretary Alan Davidson, administrator of the National Telecommunications and Information Administration.
The National Telecommunications and Information Administration (NTIA), which functions primarily as an advisor rather than a regulator, is requesting input on the policies that could enhance the accountability of commercial AI tools.
In a meeting with his council of science and technology advisors last week, US President Biden emphasised that tech companies are responsible for ensuring the safety of their products before releasing them to the public.
Last year, the Biden administration introduced a comprehensive set of objectives designed to prevent harms associated with the proliferation of AI systems. However, the release of ChatGPT by San Francisco-based startup OpenAI, as well as similar products from Microsoft and Google, has raised awareness of the capabilities of the latest AI tools that can generate human-like text passages, images, and videos. As a result, the Biden administration's goals may need to be revised in light of these developments.
“These new language models, for example, are really powerful and they do have the potential to generate real harm,” Davidson said in an interview. “We think that these accountability mechanisms could truly help by providing greater trust in the innovation that’s happening,” he added.
The NTIA's request for feedback primarily focuses on "self-regulatory" measures that tech companies could potentially lead in building the technology. This stands in contrast to the European Union, where legislators are currently negotiating the passage of new laws that may establish strict limitations on AI tools based on the level of risk they pose.
Tech leaders call for a six-month AI pause
Elon Musk and other tech leaders have recently called for a six-month pause in the development of systems more advanced than GPT-4, the latest model powering OpenAI's chatbot, which was released about a month ago. They expressed concern that a race between OpenAI and its competitors, including Google, was unfolding without sufficient management of, and planning for, the potential risks involved.
Eric Schmidt, who previously served as the CEO of Google and chaired a congressional commission on the national-security implications of AI, believes that policymakers should be careful not to undermine America's technological leadership, while also promoting development and innovation in line with democratic values.
“Let American ingenuity, American scientists, the American government, American corporations invent this future, and we’ll get something pretty close to what we want,” he said at a House Oversight Committee hearing last month. “And then you guys can work on the edges, where you have misuse,” he added.