XAIvsDisinfo: eXplainable AI Methods for Categorisation and Analysis of COVID-19 Vaccine Disinformation and Online Debates

  • Funded by UK Research and Innovation (UKRI)
  • Total publications: 0

Grant number: EP/W011212/1


Key facts

  • Disease

    COVID-19
  • Start & end year

    2021
    2022
  • Known Financial Commitments (USD)

    $295,257.60
  • Funder

    UK Research and Innovation (UKRI)
  • Principal Investigator

    Kalina Bontcheva
  • Research Location

    United Kingdom
  • Lead Research Institution

    University of Sheffield
  • Research Priority Alignment

    N/A
  • Research Category

    Policies for public health, disease control & community resilience

  • Research Subcategory

    Communication

  • Special Interest Tags

    Digital Health

  • Study Type

    Non-Clinical

  • Clinical Trial Details

    N/A

  • Broad Policy Alignment

    Pending

  • Age Group

    Not Applicable

  • Vulnerable Population

    Not applicable

  • Occupations of Interest

    Not applicable

Abstract

UK vaccination rates are in decline, and experts believe that vaccine disinformation, spread widely on social media, may be one of the reasons. Recent surveys have established that vaccine disinformation is negatively impacting citizens' trust in COVID-19 vaccination specifically. In response, the UK Government agreed measures with Twitter, Facebook, and YouTube to limit the spread of disinformation. However, simply removing disinformation from platforms is not enough, as the government also needs to monitor and respond to the concerns of vaccine-hesitant citizens. Moreover, manual detection and tracking of disinformation, as currently practised by many journalists, is infeasible given the scale of social media. XAIvsDisinfo aims to address these gaps through novel research on explainable AI-based models for large-scale analysis of vaccine disinformation. Specifically, vaccine disinformation will be classified automatically into the six narrative types defined by First Draft. A second model will categorise vaccine statements as pro-vaccine, anti-vaccine, vaccine-hesitant, or other. We will investigate explainable machine learning approaches that are human-interpretable, both in detecting errors and weaknesses of the models and in providing human-readable explanations of the models' decisions. XAIvsDisinfo will also create two new multi-platform datasets and organise a new community research challenge on cross-platform analysis of vaccine disinformation, as a follow-up to our RumourEval challenge. Our XAI models and tools will be integrated into the open-source InVID-WeVerify plugin for uptake by journalists and fact-checkers. The project outputs will also contribute to evidence-based policy activities by the UK Government on improving citizen perception of COVID-19 vaccines.
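To make the idea of human-readable explanations concrete: the stance categorisation described in the abstract could, in the simplest case, use an interpretable linear (bag-of-words) scorer, where the per-word weights that drive a prediction double as its explanation. The sketch below is purely illustrative and not the project's actual models; the vocabulary and weights are invented, and a real system would learn them from labelled data.

```python
# Toy interpretable stance classifier: each stance has hand-set word
# weights (invented for illustration); the words that fired and their
# weights serve as a human-readable explanation of the decision.

STANCES = ("pro-vaccine", "anti-vaccine", "vaccine-hesitant", "other")

# Hypothetical per-stance word weights (a real model would learn these).
WEIGHTS = {
    "pro-vaccine": {"safe": 1.5, "effective": 1.8, "protects": 1.2},
    "anti-vaccine": {"poison": 2.0, "hoax": 1.7, "refuse": 1.1},
    "vaccine-hesitant": {"unsure": 1.6, "worried": 1.4, "risks": 0.9},
    "other": {},
}

def classify(text):
    """Return (stance, explanation): the explanation lists the words
    that contributed to the winning stance, with their weights."""
    tokens = text.lower().split()
    scores, contributions = {}, {}
    for stance in STANCES:
        hits = [(t, WEIGHTS[stance][t]) for t in tokens if t in WEIGHTS[stance]]
        scores[stance] = sum(w for _, w in hits)
        contributions[stance] = hits
    best = max(STANCES, key=lambda s: scores[s])
    if scores[best] == 0:          # no stance-bearing words matched
        best = "other"
    return best, contributions[best]

stance, why = classify("I am unsure and worried about the risks")
print(stance, why)
```

In production one would use a learned model (e.g. regularised logistic regression over n-grams, or a neural model paired with a post-hoc explainer), but the principle is the same: the explanation surfaces which input features drove the classification.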