Technology firms and academics have teamed up to launch a “deepfake challenge” to develop tools that identify videos and other media manipulated by artificial intelligence. The initiative, announced recently, includes $10 million from Facebook and aims to counter what is seen as a key threat to the reliability of online information.
The effort is supported by Microsoft and the industry-backed Partnership on AI, and includes researchers from the Massachusetts Institute of Technology, the University of Oxford, Cornell University, the University of Maryland, the University of California, Berkeley, and the University at Albany.
It represents a broad effort to combat the spread of manipulated video or audio distributed as part of propaganda campaigns.
Facebook chief technology officer Mike Schroepfer said the aim is to produce technology that anyone can use to identify when AI has been used to manipulate a video in order to deliberately mislead viewers.
Schroepfer said deepfake techniques, which produce convincing AI-generated videos of people doing and saying things they never did or said, have serious implications for determining the legitimacy of information presented online. Yet the industry lacks a good data set or benchmark for detecting them. The challenge is the first project of a committee on AI and media integrity formed by the Partnership on AI, a group whose mission is to promote beneficial uses of artificial intelligence and which is backed by Apple, IBM, Amazon, and other tech companies and non-governmental organizations.
Terah Lyons, executive director of the Partnership, said the new venture is part of an effort to curtail AI-generated fakes, which “have significant, global implications for the quality of public discourse, the legitimacy of information online, the safeguarding of human rights and civil liberties, and the health of democratic institutions.”
Facebook said it would provide funding for research collaborations and prizes for the challenge, and would also enter the competition, though it would not accept any of the prize money. Oxford professor Philip Torr, one of the contributing scholars, said new tools are urgently needed to detect these kinds of manipulated media.
Torr said in a statement that manipulated media being put out on the internet to create bogus conspiracy theories and to manipulate people for political gain is becoming an issue of global importance, as it poses a fundamental threat to democracy.