HCC: Small: Crowd-Assisted Human-AI Teaming with Explanations

  • Funded by National Science Foundation (NSF)
  • Total publications: 0

Grant number: 2331069

Grant search

Key facts

  • Disease

    COVID-19
  • Start & end year

    2024–2027
  • Known Financial Commitments (USD)

    $599,999
  • Funder

    National Science Foundation (NSF)
  • Principal Investigator

    Dong Wang
  • Research Location

    United States of America
  • Lead Research Institution

    University of Illinois at Urbana-Champaign
  • Research Priority Alignment

    N/A
  • Research Category

    14

  • Research Subcategory

    N/A

  • Special Interest Tags

    N/A

  • Study Type

    Non-Clinical

  • Clinical Trial Details

    N/A

  • Broad Policy Alignment

    Pending

  • Age Group

    Unspecified

  • Vulnerable Population

    Unspecified

  • Occupations of Interest

    Unspecified

Abstract

This project investigates the problem of information integrity, that is, identifying faulty or ungrounded information online. It focuses on a specific domain, information produced during the COVID-19 pandemic, and processes both text and image data. While significant efforts in artificial intelligence (AI) and machine learning (ML) have addressed information integrity in this type of multimodal setting, many solutions cannot be directly applied due to a lack of domain-specific knowledge and of the expertise needed to provide meaningful, convincing explanations. Motivated by these limitations, this project develops an interactive crowd-AI system that combines the professional knowledge of domain-expert crowd workers, the general logical reasoning ability of non-expert crowd workers, and the effective information-retrieval capability of AI models. The resulting system will accurately assess information integrity in posts on COVID-19 and explicitly explain the detection results in natural language. This project addresses the limitations of two prior research threads: (1) prevailing AI solutions that primarily extract specific segments of input posts to serve as explanations but fail to generate convincing ones; and (2) solutions that employ crowd workers but recruit only non-experts, and so fail to leverage the domain knowledge of experts. The results of the project will provide high accuracy by integrating diverse human and machine intelligence to address highly technical, domain-specific problems. While the focus is COVID-19, the framework and models developed in this project will address information integrity with explanations in other domains (such as healthcare and public safety). This project will also provide opportunities for students in STEM and underrepresented groups to study human-centered AI techniques.
This project develops a human-centered AI framework that can guide the design, development, and implementation of future explainable crowd-AI systems, in which hybrid human intelligence from expert and non-expert crowd workers is integrated with AI models to make more accurate decisions and to provide well-grounded, meaningful, and convincing explanations of those decisions. The research integrates AI, crowdsourcing, ML, and human-AI interaction. Specifically, the research includes: i) developing a deep text-visual alignment approach to construct a multimodal COVID-19 knowledge graph; ii) creating a logic-oriented crowdsourcing interface for non-expert crowd workers to validate the knowledge graph; iii) designing a topic-driven human-AI interaction scheme that uses expert crowd workers to construct a generalized multimodal COVID-19 knowledge graph; and iv) developing a dynamic graph-attentive knowledge discriminator to detect and explain information-integrity issues in COVID-19 information with natural-language descriptions. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
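Component iv) above names a "graph-attentive knowledge discriminator." The abstract does not specify the model, but as a rough illustration of the general mechanism such a component typically builds on (not the project's actual design), a single graph-attention scoring step over a knowledge graph might look like the following sketch. All names, shapes, and the cosine-based integrity score are illustrative assumptions:

```python
import numpy as np

def graph_attention_score(node_feats, neighbor_ids, target_id, w, a):
    """Hypothetical sketch: score a claim node by attending over its
    knowledge-graph neighbors (GAT-style attention, not the funded model).

    node_feats:   (N, d) array of node embeddings
    neighbor_ids: indices of the claim node's neighbors in the graph
    target_id:    index of the claim node being checked
    w:            (d, d) shared projection matrix
    a:            (2*d,) attention parameter vector
    """
    h_t = node_feats[target_id] @ w            # project the claim node
    h_n = node_feats[neighbor_ids] @ w         # project its neighbors
    # Attention logits over neighbors: LeakyReLU(a^T [h_t || h_j])
    pairs = np.concatenate([np.tile(h_t, (len(h_n), 1)), h_n], axis=1)
    logits = pairs @ a
    logits = np.where(logits > 0, logits, 0.2 * logits)
    alpha = np.exp(logits - logits.max())
    alpha /= alpha.sum()                       # softmax over neighbors
    context = alpha @ h_n                      # attention-weighted neighbor summary
    # Illustrative integrity score: cosine similarity between the claim
    # and the knowledge-graph context supporting (or contradicting) it.
    denom = np.linalg.norm(context) * np.linalg.norm(h_t) + 1e-9
    return float(context @ h_t / denom)
```

A claim well supported by its graph neighborhood yields a score near 1; a claim misaligned with the knowledge graph scores lower, and the attention weights `alpha` indicate which neighbors drove the decision, which is the kind of signal a natural-language explanation could be built from.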