
Research Spending & Results

Award Detail

Awardee: UNIVERSITY OF SOUTH CAROLINA
Doing Business As Name: University of South Carolina at Columbia
PD/PI:
  • Andrea E Hickerson
  • (803) 777-4979
  • hickera@mailbox.sc.edu
Award Date: 07/25/2021
Estimated Total Award Amount: $114,234
Funds Obligated to Date: $114,234
  • FY 2021 = $114,234
Start Date: 10/01/2021
End Date: 09/30/2024
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.070
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVIT
Award Title or Description: Collaborative Research: SaTC: TTP: Small: DeFake: Deploying a Tool for Robust Deepfake Detection
Federal Award ID Number: 2040125
DUNS ID: 041387846
Parent DUNS ID: 041387846
Program: Secure & Trustworthy Cyberspace
Program Officer:
  • Robert Beverly
  • (703) 292-7068
  • rbeverly@nsf.gov

Awardee Location

Street: Sponsored Awards Management
City: COLUMBIA
State: SC
ZIP: 29208-0001
County: Columbia
Country: US
Awardee Cong. District: 06

Primary Place of Performance

Organization Name: University of South Carolina
Street: 800 Sumter Street
City: COLUMBIA
State: SC
ZIP: 29208-0001
County: Columbia
Country: US
Cong. District: 06

Abstract at Time of Award

Deepfakes – videos that are generated or manipulated by artificial intelligence – pose a major threat for spreading disinformation, threatening blackmail, and new forms of phishing. They are already widely used in creating non-consensual pornography, and have begun to be used to undermine governments and elections. Even the threat of deepfakes has cast doubts on the authenticity of videos in the news. Journalists, who have a key role in verifying information, especially need help to deal with ever-improving deepfake technology. Recent results on detecting deepfakes are promising, with close to 100% accuracy in lab tests, but few systems are available for real-world use. It is critical to move beyond accuracy on curated datasets and address the needs of journalists who could benefit from these advances. The objective of this transition-to-practice project is to develop the DeFake tool, a system that utilizes advanced machine learning to help journalists detect deepfakes in a way that is robust, intuitive, and provides results that are explainable to the general public. To meet this objective, the project team is engaged in four main tasks: (1) Making the tool robust to new types of deepfakes, and having it show users why a video is fake; (2) Protecting the tool from adversarial examples – small perturbations to a video that are specially crafted to fool detection systems; (3) Working with journalists to understand what they need from the tool, and building an online community to discuss deepfakes and their detection; and (4) Integrating advances from the other tasks into a stable, efficient, and useful tool, and actively disseminating this tool to journalists. The project team is also leveraging visually interesting deepfakes to develop engaging education and outreach efforts, such as a museum-style exhibit on deepfake detection meant for broad audiences of all ages. 
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
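The abstract's second task concerns adversarial examples: small, specially crafted perturbations that flip a detector's verdict. As an illustration only (this is not the DeFake tool's actual model or code), the idea can be sketched with the Fast Gradient Sign Method against a toy linear "fake vs. real" classifier, where every feature is nudged by a small amount in the direction that lowers the "fake" score:

```python
import numpy as np

# Toy linear "detector": score = w . x + b; score > 0 means "flagged as fake".
# Purely illustrative weights -- a stand-in, not a real deepfake detector.
w = np.array([1.0, -2.0, 3.0, -0.5])
b = 0.0

def detector_score(x):
    return float(w @ x + b)

def fgsm_perturb(x, epsilon):
    """Fast Gradient Sign Method: shift each feature by epsilon in the
    direction that decreases the 'fake' score. For a linear model the
    gradient of the score with respect to x is simply w."""
    grad = w                      # d(score)/dx for the linear model
    return x - epsilon * np.sign(grad)

# A sample the detector flags as fake...
x_fake = np.array([0.2, -0.1, 0.1, 0.0])
print(detector_score(x_fake))            # positive: flagged as fake

# ...evades detection after a small, targeted perturbation.
x_adv = fgsm_perturb(x_fake, epsilon=0.3)
print(detector_score(x_adv))             # negative: now passes as real
```

Hardening a detector against this kind of manipulation (for example, by training on such perturbed samples) is what "protecting the tool from adversarial examples" refers to.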
