Research Spending & Results

Award Detail

Doing Business As Name: University of Wisconsin-Madison
  • Michael W Wagner
  • (608) 263-3392
  • Porismita Borah
  • Munmun De Choudhury
  • Srijan Kumar
  • Sijia Yang
Award Date: 09/20/2021
Estimated Total Award Amount: $750,000
Funds Obligated to Date: $750,000
  • FY 2021 = $750,000
Start Date: 10/01/2021
End Date: 09/30/2022
Transaction Type: Grant
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.083
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVITIES
Award Title or Description: NSF Convergence Accelerator Track F: How Large-Scale Identification and Intervention Can Empower Professional Fact-Checkers to Improve Democracy and Public Health
Federal Award ID Number: 2137724
DUNS ID: 161202122
Parent DUNS ID: 041188822
Program: Convergence Accelerator Research
Program Officer:
  • Mike Pozmantier
  • (703) 292-4475

Awardee Location

Street: 21 North Park Street
Awardee Cong. District: 02

Primary Place of Performance

Organization Name: University of Wisconsin-Madison
Street: 5164 Vilas Hall, 821 University
Cong. District: 02

Abstract at Time of Award

Democracy and public health in the United States rely on trust in institutions. Skepticism regarding the integrity of U.S. elections and hesitancy related to COVID-19 vaccines are two consequences of a decline in confidence in basic political processes and core medical institutions. Social media serve as a major source of delegitimizing information about elections and vaccines, with networks of users actively sowing doubts about election integrity and vaccine efficacy, fueling the spread of misinformation. This project seeks to support and empower efforts by journalists, developers, and citizens to fact-check such misinformation. These fact-checkers urgently need tools that can 1) enable testing of fact-checking stories on topics like elections and vaccines as they move across social media platforms like Twitter, Reddit, and Facebook, and 2) deliver feedback on how well the corrections worked, in real time and with full performance transparency. Accordingly, this project will develop an interactive system that enables fact-checkers to perform rapid-cycle testing of fact-checking messages and monitor their real-time performance among online communities at risk of misinformation exposure. To ensure transparency, all of the underlying code, surveys, and data will be shared with the social science and computer science communities, and all evidence-based messages of immediate utility to public health professionals and electoral administrators will be made publicly accessible.
The project will deliver an innovative, three-step method to identify, test, and correct real-world instances of these forms of online misinformation. First, the project will use computational techniques in natural language processing, machine learning, social network analysis and modeling, and computer vision to identify posts and accounts that circulate, or are susceptible to, misinformation. Second, it will produce lab-tested corrections to the most prominent misinforming claims, using recommender systems to optimize message efficacy. Third, it will disseminate and evaluate the effectiveness of evidence-based corrections using various scalable intervention techniques available through the platforms' sponsored-content systems. More specifically, for the first step of the method, the project will use multimodal signal detection and knowledge graphs to perform knowledge-driven information extraction about electoral skepticism and vaccine hesitancy on social media, integrating user attributes, message features, and online network structural properties to predict likely exposure to future misinformation and identify susceptible online communities for intervention. The second step will consist of working with professional fact-checking organizations to lab test two types of intervention messages, pre-exposure inoculation and post-exposure correction, aimed at mitigating electoral skepticism and vaccine hesitancy, optimizing them using recommender system techniques. For the third step, field experiments will deploy the lab-developed interventions, delivered through a combination of ad purchasing, automated bots, and online influencers, and assess the success of the interventions with respect to optimal decision-making in both health- and democracy-related arenas. Ultimately, this three-step approach can be applied across a range of topics in politics and health.
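To make the first step of the method concrete, the sketch below combines the three signal families the abstract names (message features, user attributes, and network structural properties) into a single susceptibility score and flags at-risk communities. This is a hypothetical illustration, not the project's actual pipeline: the phrase lexicon, feature weights, and thresholds are invented, and the real system would use trained NLP/ML models rather than keyword matching.

```python
# Illustrative sketch of step one: scoring posts for likely misinformation
# exposure and flagging susceptible communities. All names, weights, and
# thresholds here are hypothetical, chosen only to show the structure.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    author_follower_count: int  # user attribute
    share_count: int            # network structural property
    community: str

# Hypothetical lexicon of delegitimizing phrases; a real system would
# replace this with NLP/ML classifiers and computer-vision signals.
DOUBT_PHRASES = ("rigged election", "stolen votes", "vaccine hoax")

def susceptibility_score(post: Post) -> float:
    """Return a score in [0, 1]; higher means more likely exposure."""
    message_signal = sum(p in post.text.lower() for p in DOUBT_PHRASES) / len(DOUBT_PHRASES)
    # Heavily shared posts from large accounts spread further.
    network_signal = min(post.share_count / 1000, 1.0)
    user_signal = min(post.author_follower_count / 100_000, 1.0)
    # Hypothetical weighting of the three signal families.
    return 0.5 * message_signal + 0.3 * network_signal + 0.2 * user_signal

def flag_communities(posts: list[Post], threshold: float = 0.3) -> list[str]:
    """Return communities whose mean post score exceeds the threshold."""
    totals: dict[str, tuple[float, int]] = {}
    for p in posts:
        s, n = totals.get(p.community, (0.0, 0))
        totals[p.community] = (s + susceptibility_score(p), n + 1)
    return sorted(c for c, (s, n) in totals.items() if s / n > threshold)

posts = [
    Post("The rigged election was full of stolen votes", 50_000, 800, "politics"),
    Post("Get your flu shot at the campus clinic this fall", 2_000, 10, "health"),
]
print(flag_communities(posts))  # ['politics']
```

The scoring function is where the abstract's "integrating user attributes, message features, and online network structural properties" would live; the flagged communities are the targets for the step-two and step-three interventions.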
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.