Deepfake Detection Challenge

Frequently Asked Questions

What is the goal of the Deepfake Detection Challenge?

The AI technologies that power deepfakes and other tampered media are rapidly evolving, making deepfakes so hard to detect that, at times, even human evaluators can’t reliably tell the difference. The Deepfake Detection Challenge is designed to incentivize rapid progress in this area by inviting participants to compete to create new ways of detecting and preventing manipulated media.

When will the challenge begin?

The Deepfake Detection Challenge will launch in December 2019 with the release of an expanded dataset.

When is the submission deadline?

The challenge will run through the end of March 2020.

How does the challenge work?

Participants can download the dataset to train their models and will also submit code to a black-box environment for testing. We'll open the challenge for submissions later this year, and the guidelines and dataset license will be available at that time.
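For illustration only: the actual submission format, APIs, and guidelines had not been published at the time of this FAQ, so the following is a minimal, hypothetical sketch of what a code-based entry might look like. It assumes a directory of held-out test videos and a participant-supplied predict_fake_probability function (here a constant-baseline placeholder), and writes one predicted probability per video to a CSV file.

    # Hypothetical sketch only; the real challenge guidelines, I/O format,
    # and black-box environment are not specified in this FAQ.
    import csv
    import os

    def predict_fake_probability(video_path):
        # Placeholder for a participant's trained detection model.
        # A real entry would decode the video, extract frames or faces,
        # and run them through the model to estimate a manipulation score.
        return 0.5  # constant-baseline stand-in

    test_dir = "test_videos"  # assumed location of held-out videos
    rows = []
    for name in sorted(os.listdir(test_dir)):
        if name.endswith(".mp4"):
            score = predict_fake_probability(os.path.join(test_dir, name))
            rows.append((name, score))

    with open("submission.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "probability_fake"])
        writer.writerows(rows)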

How is the training dataset being created?

We’re constructing a new training dataset specifically for this challenge, working with a third-party vendor that has engaged a diverse set of individuals who have agreed to participate in its creation. We then create tampered videos from a subset of these unmodified videos using a variety of AI techniques.

Who can participate in the challenge?

The challenge is open globally; participants will need to agree to our dataset license before entering.

Are you using user data from social media or video platforms in the dataset?

No user data from social or video platforms will be included in the training dataset. We are constructing a new dataset specifically for this challenge.

How will the challenge be judged and a winner selected?

We will provide a test mechanism that enables teams to score their models’ effectiveness against one or more black-box test sets from our founding partners.
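The FAQ does not specify the scoring metric. As an illustration of how probabilistic fake/real predictions are commonly scored, the sketch below computes log loss, which rewards well-calibrated probabilities and heavily penalizes confident mistakes; the names and example values are hypothetical.

    # Illustrative only: the challenge's actual scoring metric and test
    # harness are not described in this FAQ.
    import math

    def log_loss(y_true, y_pred, eps=1e-15):
        # y_true: 1 if a video is a deepfake, 0 if it is real.
        # y_pred: the model's predicted probability that the video is fake.
        total = 0.0
        for t, p in zip(y_true, y_pred):
            p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
            total += t * math.log(p) + (1 - t) * math.log(1 - p)
        return -total / len(y_true)

    # Example: three held-out videos scored by a model.
    print(log_loss([1, 0, 1], [0.9, 0.2, 0.6]))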

What rights do challenge participants have to the tech they create for the challenge?

Participants will retain rights to their models trained on the training dataset. Facebook and its subcontractors will receive rights from participants to use those models to administer the challenge.

How are you protecting against adversaries who will try to access the code and data?

We will gate access to the training dataset so that only researchers accepted into the challenge can access it. Each participant will need to agree to terms of use governing how they use, store, and handle the data. There are also strict restrictions on sharing the data.