Facebook, Microsoft and a group of universities, including Oxford and MIT, have come together to launch a "deepfake challenge," an effort to improve tools that detect videos and other media manipulated by artificial intelligence (AI). Facebook announced the $10 million initiative on Thursday, aiming to curb what is seen as a significant threat to the integrity of online information, including on social media services such as Facebook and Instagram. As part of the initiative, Facebook has partnered with Microsoft for AI expertise, and with academics from the Massachusetts Institute of Technology (MIT), Cornell University, the University of Oxford, the University of California, Berkeley, the University of Maryland and the University at Albany. These partnerships between technology firms and universities represent an effort to combat the dissemination of manipulated video and audio used in misinformation campaigns.
"The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer," said Facebook chief technical officer Mike Schroepfer. Schroepfer said deepfake techniques "have significant implications for determining the legitimacy of information presented online. Yet the industry doesn't have a great data set or benchmark for detecting them."
What is a deepfake? Deepfakes are realistic, AI-generated videos of people doing and saying fictional things. Earlier this year, a deepfake video of Facebook CEO Mark Zuckerberg was uploaded to Instagram. In the video, the Facebook CEO was portrayed as saying things that seemed to defame Facebook and Zuckerberg himself. However, Instagram refused to delete the video in question. Facebook is currently evaluating how to handle "deepfake" videos, which are created with artificial intelligence and high-tech software tools to produce false but realistic clips.
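To illustrate the kind of detection tool the challenge is meant to produce, here is a minimal sketch of how deepfake detectors are commonly structured: a model scores each video frame for signs of manipulation, and the per-frame scores are aggregated into a single video-level verdict. Everything here is an assumption for illustration; `score_frame` is a hypothetical stand-in for a real trained classifier, not part of any Facebook or Microsoft system.

```python
def score_frame(frame):
    """Hypothetical stand-in for a trained per-frame classifier.

    A real detector would run a neural network on the frame's pixels;
    here we simply read a precomputed score so the aggregation step
    can be demonstrated.
    """
    return frame["fake_score"]

def classify_video(frames, threshold=0.5):
    """Aggregate per-frame manipulation scores into a video-level verdict."""
    scores = [score_frame(f) for f in frames]
    mean_score = sum(scores) / len(scores)
    return {"score": mean_score, "is_fake": mean_score >= threshold}

# Example: a clip of three frames, two of which look manipulated.
video = [{"fake_score": 0.9}, {"fake_score": 0.8}, {"fake_score": 0.2}]
result = classify_video(video)
```

The hard research problem the challenge targets is the scoring model itself; averaging frame scores is only one simple aggregation choice, and production systems may weight faces, audio, or temporal inconsistencies differently.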
This is the first project of a committee on AI and media integrity created by the Partnership on AI, an organisation whose mission is to promote beneficial uses of artificial intelligence and which is backed by Apple, Amazon, IBM and other tech firms and non-governmental organisations. Terah Lyons, executive director of the Partnership, said the new project is part of an effort to stem AI-generated fakes, which "have significant, global implications for the legitimacy of information online, the quality of public discourse, the safeguarding of human rights and civil liberties, and the health of democratic institutions". Facebook said it was offering funds for research collaborations and prizes for the challenge, and would also enter the competition, but not accept any of the prize money.
Oxford professor Philip Torr, one of the academics participating, said new tools are "urgently needed to detect these types of manipulated media". "Manipulated media being put out on the internet, to create bogus conspiracy theories and to manipulate people for political gain, is becoming an issue of global importance, as it is a fundamental threat to democracy," Torr said in a statement.