
Research Spending & Results

Award Detail

Awardee: STEVENS INSTITUTE OF TECHNOLOGY (INC)
Doing Business As Name: Stevens Institute of Technology
PD/PI:
  • Yue Ning
  • (201) 216-5486
  • yue.ning@stevens.edu
Award Date: 06/09/2021
Estimated Total Award Amount: $571,930
Funds Obligated to Date: $109,034
  • FY 2021: $109,034
Start Date: 06/15/2021
End Date: 05/31/2026
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.070
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVITIES
Award Title or Description: CAREER: Towards Deep Interpretable Predictions for Multi-Scope Temporal Events
Federal Award ID Number: 2047843
DUNS ID: 064271570
Parent DUNS ID: 064271570
Program: Info Integration & Informatics
Program Officer:
  • Wei Ding
  • (703) 292-8017
  • weiding@nsf.gov

Awardee Location

Street: CASTLE POINT ON HUDSON
City: HOBOKEN
State: NJ
ZIP: 07030-5991
County: Hudson
Country: US
Awardee Cong. District: 08

Primary Place of Performance

Organization Name: Stevens Institute of Technology
Street: Castle Point on Hudson
City: Hoboken
State: NJ
ZIP: 07030-5991
County: Hudson
Country: US
Cong. District: 08

Abstract at Time of Award

Many human events, such as personal visits to hospitals, flu outbreaks, or protests, are recorded in temporal sequences and exhibit recurring patterns. For instance, in hospital admission records, patients who have been diagnosed with hypertension often later visit the hospital for heart disease. Predicting human events from past event patterns is key to many stakeholders in AI-assisted decision making, and interpretable predictive models will significantly improve the transparency of those decision-making processes. Interpretable machine learning has recently drawn increasing attention, but most state-of-the-art work in this domain focuses on static analysis, such as identifying the pixels responsible for object detection in an image; little work addresses temporal event prediction over dynamic, heterogeneous, and multi-source data sequences. To address this gap, this project will support the design of transformative interpretable paradigms for temporal event sequences of different scopes with heterogeneous and multi-source features. Predictive tools that can capture hierarchical, relational, and complex evidence will enrich and support robust forecasting. The work will also involve educational activities: developing new courses on interpretable machine learning; training graduate, undergraduate, and high-school students in interdisciplinary studies; and increasing the participation of women and minority groups in academic research. Core outcomes of this project, such as software, datasets, and publications, will be made available to the general public.

This project will create a new set of interpretable mechanisms that provide dynamic, heterogeneous, and multi-source explanations in temporal event prediction. Although a variety of explainable approaches have been developed for traditional machine learning tasks, several unique challenges remain unexplored: (1) given the wide adoption of attention mechanisms in deep learning, there is an urgent need to regulate attention-based models so that they can be audited; (2) most current approaches select important input features based on correlations, which often lack causal evidence; and (3) reciprocal relations and dependencies among heterogeneous data sources are largely ignored in current research. This project will address these challenges in three ways: (i) it will investigate new collaborative attention-regulation strategies that use domain knowledge for calibration; (ii) it will integrate dynamic causal discovery into temporal event prediction with hidden-confounder representation learning; and (iii) it will provide multi-faceted explanations by distilling semantic knowledge from unstructured text and incorporating this knowledge in a co-learning framework with multi-source temporal data. The specific research aims will be complemented by an extensive evaluation plan, including standard retrospective evaluation on multi-scope real-world event records as well as user studies that assess the interpretability of the developed models. Project outcomes, including observational data, interpretable prediction tools, and open-source software, will be shared with the computer science research community and with practitioners in healthcare, political science, and epidemiology.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
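To make aim (i) concrete, the following is a minimal sketch, not the project's actual method, of knowledge-guided attention regulation for event prediction: a recurrent encoder attends over past events, and a KL-divergence penalty nudges the attention weights toward a domain-knowledge prior. The model structure, names, and the prior itself are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionEventPredictor(nn.Module):
    """Toy next-event predictor: a GRU encodes the event history and a
    single attention head weighs past events before classification."""

    def __init__(self, num_event_types, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_event_types, hidden_dim)
        self.encoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.attn_score = nn.Linear(hidden_dim, 1)   # one score per past event
        self.classifier = nn.Linear(hidden_dim, num_event_types)

    def forward(self, event_seq):
        # event_seq: (batch, seq_len) integer event codes
        h, _ = self.encoder(self.embed(event_seq))                # (batch, seq_len, hidden)
        attn = F.softmax(self.attn_score(h).squeeze(-1), dim=-1)  # (batch, seq_len)
        context = torch.bmm(attn.unsqueeze(1), h).squeeze(1)      # (batch, hidden)
        return self.classifier(context), attn

def loss_with_attention_prior(logits, attn, target, prior, lam=0.1):
    # Cross-entropy plus a KL term that pulls the attention distribution
    # toward a hypothetical domain-knowledge prior over past events
    # (e.g., weighting clinically relevant prior visits more heavily).
    ce = F.cross_entropy(logits, target)
    kl = F.kl_div(attn.clamp_min(1e-8).log(), prior, reduction="batchmean")
    return ce + lam * kl

In such a setup, lam trades predictive accuracy against agreement with the prior, and setting lam = 0 recovers an unregularized attention model; the regulated attention weights can then be inspected, or audited against the prior, as one form of explanation.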
