NSF Convergence Accelerator Track F: How Large-Scale Identification and Intervention Can Empower Professional Fact-Checkers to Improve Democracy and Public Health

  • Funded by National Science Foundation (NSF)
  • Total publications: 0

Grant number: 2137724


Key facts

  • Disease: COVID-19
  • Start & end year: 2021–2022
  • Known Financial Commitments (USD): $750,000
  • Funder: National Science Foundation (NSF)
  • Principal Investigator: Michael Wagner
  • Research Location: United States of America
  • Lead Research Institution: University of Wisconsin-Madison
  • Research Priority Alignment: N/A
  • Research Category: Policies for public health, disease control & community resilience
  • Research Subcategory: Communication
  • Special Interest Tags: N/A
  • Study Type: Non-Clinical
  • Clinical Trial Details: N/A
  • Broad Policy Alignment: Pending
  • Age Group: Adults (18 and older)
  • Vulnerable Population: Unspecified
  • Occupations of Interest: Unspecified

Abstract

Democracy and public health in the United States rely on trust in institutions. Skepticism regarding the integrity of U.S. elections and hesitancy related to COVID-19 vaccines are two consequences of a decline in confidence in basic political processes and core medical institutions. Social media serve as a major source of delegitimizing information about elections and vaccines, with networks of users actively sowing doubts about election integrity and vaccine efficacy, fueling the spread of misinformation. This project seeks to support and empower efforts by journalists, developers, and citizens to fact-check such misinformation. These fact-checkers urgently need tools that can 1) enable testing of fact-checking stories on topics like elections and vaccines as they move across social media platforms such as Twitter, Reddit, and Facebook, and 2) deliver real-time, fully transparent feedback on how well those corrections worked. Accordingly, this project will develop an interactive system that enables fact-checkers to perform rapid-cycle testing of fact-checking messages and monitor their real-time performance among online communities at risk of misinformation exposure. To be transparent, all underlying code, surveys, and data will be shared with the social science and computer science communities, and all evidence-based messages of immediate utility to public health professionals and electoral administrators will be made publicly accessible.
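
To illustrate the kind of real-time feedback such a system could report, the sketch below compares the share of respondents who rated a target claim as false between a group shown a fact-checking message and a control group, using a two-proportion z-test. All counts, group labels, and thresholds here are hypothetical and are not drawn from the project.

    # Hypothetical rapid-cycle feedback for one fact-checking message.
    # Compares "rated the claim false" rates for exposed vs. control respondents.
    from math import sqrt, erf

    exposed_correct, exposed_n = 312, 500   # saw the fact-checking message (made-up counts)
    control_correct, control_n = 254, 500   # did not see it (made-up counts)

    p1, p2 = exposed_correct / exposed_n, control_correct / control_n
    pooled = (exposed_correct + control_correct) / (exposed_n + control_n)
    se = sqrt(pooled * (1 - pooled) * (1 / exposed_n + 1 / control_n))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal p-value
    print(f"lift = {p1 - p2:.1%}, z = {z:.2f}, p = {p_value:.4f}")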

This project is motivated by a desire to understand and help address two democratic and public health crises facing the U.S.: skepticism regarding the integrity of U.S. elections and hesitancy related to COVID-19 vaccines. Both crises are fueled by online misinformation circulating widely on social media, with networks of users actively sowing doubts about election integrity and vaccine efficacy. The project will deliver an innovative, three-step method to identify, test, and correct real-world instances of these forms of online misinformation. First, computational techniques, including natural language processing, machine learning, social network analysis and modeling, and computer vision, will be used to identify posts and accounts that circulate or are susceptible to misinformation. Second, lab-tested corrections to the most prominent misinforming claims will be produced, with recommender systems used to optimize message efficacy. Third, the project will disseminate evidence-based corrections through scalable intervention techniques available via the platforms' sponsored-content systems and evaluate their effectiveness. More specifically, in the first step the project will use multimodal signal detection and knowledge graphs to perform knowledge-driven information extraction about electoral skepticism and vaccine hesitancy on social media, integrating user attributes, message features, and online network structural properties to predict likely exposure to future misinformation and identify susceptible online communities for intervention. The second step will consist of working with professional fact-checking organizations to lab-test two types of intervention messages, pre-exposure inoculation and post-exposure correction, aimed at mitigating electoral skepticism and vaccine hesitancy, optimizing them using recommender system techniques. For the third step, field experiments will deploy the lab-developed interventions, delivered through a combination of ad purchasing, automated bots, and online influencers, and assess the success of the interventions with respect to optimal decision-making in both health- and democracy-related arenas. Ultimately, this three-step approach can be applied across a range of topics in politics and health.
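
For concreteness, here is a minimal, hypothetical sketch of the kind of susceptibility model the first step describes, combining a user attribute, a message feature, and a network-structure feature in a logistic regression. The feature names, labels, and random data are placeholders, not the project's actual pipeline; they only illustrate the prediction task of scoring accounts by likely exposure to future misinformation.

    # Hypothetical sketch: score accounts by predicted susceptibility to misinformation
    # from user attributes, message features, and network-structure features.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 1000
    X = np.column_stack([
        rng.integers(0, 20_000, n).astype(float),  # follower count (user attribute)
        rng.random(n),                             # share of posts matching known false claims (message feature)
        rng.random(n),                             # centrality in a retweet network (structural property)
    ])
    y = rng.integers(0, 2, n)  # 1 = later engaged with a known false claim (placeholder label)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    # With purely random data the AUC is near 0.5; the number is only a placeholder.
    print("held-out AUC:", round(roc_auc_score(y_test, scores), 3))
    # Accounts and communities with the highest scores would be candidates for intervention.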

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.