Research Spending & Results

Award Detail

Awardee: UNIVERSITY OF WASHINGTON
Doing Business As Name: University of Washington
PD/PI:
  • Hanna Hajishirzi
  • (206) 543-4043
  • hannaneh@uw.edu
Award Date: 06/09/2021
Estimated Total Award Amount: $549,843
Funds Obligated to Date: $299,843
  • FY 2021 = $299,843
Start Date: 09/01/2021
End Date: 08/31/2026
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.070
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVIT
Award Title or Description: CAREER: Knowledge-Rich Neural Text Comprehension and Reasoning
Federal Award ID Number: 2044660
DUNS ID: 605799469
Parent DUNS ID: 042803536
Program: Robust Intelligence
Program Officer:
  • Roger Mailler
  • (703) 292-7982
  • rmailler@nsf.gov

Awardee Location

Street: 4333 Brooklyn Ave NE
City: Seattle
State: WA
ZIP: 98195-0001
County: Seattle
Country: US
Awardee Cong. District: 07

Primary Place of Performance

Organization Name: University of Washington
Street: 185 Stevens Way
City: Seattle
State: WA
ZIP: 98195-2500
County: Seattle
Country: US
Cong. District: 07

Abstract at Time of Award

Enormous amounts of ever-changing knowledge are available online in diverse textual styles (e.g., news vs. science text) and diverse formats (knowledge bases vs. web pages vs. textual documents). This proposal addresses the question of textual comprehension and reasoning given this diversity: how can artificial intelligence (AI) help applications comprehend and combine evidence from variable, evolving sources of textual knowledge to make complex inferences and draw logical conclusions? Recent advances in deep learning algorithms, large-scale datasets, and industry-scale computational resources are spurring progress in many Natural Language Processing (NLP) tasks, including question answering. Nevertheless, current models lack the ability to answer complex questions that require them to reason intelligently across diverse sources and explain their decisions. Further, these models cannot scale up when task-annotated training data are scarce and computational resources are limited. The results of this research will give rise to the next generation of question answering and fact checking algorithms that offer rich natural language comprehension using multi-hop, interpretable reasoning even when annotated training data are scarce.

With a focus on textual comprehension and reasoning, this research will integrate capabilities of symbolic AI approaches into current deep learning algorithms. It will devise hybrid, interpretable algorithms that understand and reason about textual knowledge across varied formats and styles, generalize to emerging domains with scarce training data (i.e., are robust), and operate efficiently under resource limitations (i.e., are scalable). Toward this end, this research will focus on four transformative research initiatives: (1) defining a general-purpose formalism to promote data comprehension through knowledge-rich neural representations, (2) devising an interpretable, multi-hop inference and reasoning engine, (3) developing robust and scalable algorithms to demonstrate generalizable domain and device adaptation, and (4) building applications and datasets in question answering and fact checking tasks that will have lasting general-purpose utility. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.