
Research Spending & Results

Award Detail

Awardee: UNIVERSITY OF UTAH, THE
Doing Business As Name: University of Utah
PD/PI:
  • Ganesh L Gopalakrishnan
  • (801) 581-3568
  • ganesh@cs.utah.edu
Co-PD(s)/co-PI(s):
  • Aditya Bhaskara
Award Date: 06/08/2021
Estimated Total Award Amount: $450,000
Funds Obligated to Date: $450,000
  • FY 2021 = $450,000
Start Date: 10/01/2021
End Date: 09/30/2024
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.070
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVIT
Award Title or Description: Collaborative Research: FMitF: Track-1: Correctness at Both Ends: Rigorous ML Meets Efficient Sparse Implementations
Federal Award ID Number: 2124100
DUNS ID: 009095365
Parent DUNS ID: 009095365
Program: FMitF: Formal Methods in the F
Program Officer:
  • Wei Ding
  • (703) 292-8017
  • weiding@nsf.gov

Awardee Location

Street: 75 S 2000 E
City: SALT LAKE CITY
State: UT
ZIP: 84112-8930
County: Salt Lake City
Country: US
Awardee Cong. District: 02

Primary Place of Performance

Organization Name: University Of Utah, School of Computing
Street: 50 South Central Campus Drive, 5
City: Salt Lake City
State: UT
ZIP: 84112-9205
County: Salt Lake City
Country: US
Cong. District: 02

Abstract at Time of Award

This project, called CBE (Correctness at Both Ends), addresses the growing concern that, on one hand, systems based on deep neural networks (DNNs) play an increasing role in critical applications, while on the other hand these systems can do significant damage if they harbor software defects. These defects go beyond familiar logic bugs and include semantic bugs such as misclassifying medical images. Traditionally, attempts to make DNNs more efficient by sparsifying them have often resulted in increased levels of semantic bugs. The project's novelty is its integrated approach to sparsifying networks while preserving semantic correctness and helping eliminate logic bugs. Its impact lies in making DNNs energy-efficient, permitting their deployment on edge devices, while also helping eliminate their defects.

CBE employs a knowledge-distillation paradigm in which a sparsified network is trained by imitating the parent network's classification behavior. Sparsification steps that meet higher-level semantic objectives may unfortunately lead to an inefficient sparse implementation, especially on newly introduced GPUs for which hand-tuned sparse libraries are unavailable. The CBE project supports the developers of such libraries by also providing low-level implementation verification tools.

The investigators are domain experts in DNN semantics and optimization as well as in software verification. Their three-year collaborative research project takes case studies of DNNs from critical areas such as medical imaging and shows how DNNs can be sparsified to ensure correctness at both ends. The main impact of this project is the development of prototype software tools that can spur further research and technology development. Another major impact is the training of students who will fill critical roles in the fast-growing area of deployable machine-learning systems, where a talent shortage could cripple the nation's economy and security.
This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
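As a rough illustration of the two techniques the abstract combines (and not the project's actual tooling), the sketch below shows unstructured magnitude pruning of a weight vector and a temperature-softened KL-divergence distillation loss, where a sparsified "student" imitates the dense "teacher" network's output distribution. The function names and the NumPy formulation are illustrative assumptions, not code from the award.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperatures soften the
    # distribution, exposing more of the teacher's "dark knowledge".
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened outputs: the
    # student is trained to imitate the teacher's classification behavior.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * np.log(p / q)))

def magnitude_prune(weights, sparsity=0.5):
    # Simple unstructured sparsification: zero out the given fraction
    # of smallest-magnitude weights.
    w = np.asarray(weights, dtype=float)
    k = int(sparsity * w.size)  # number of weights to remove
    if k == 0:
        return w
    threshold = np.sort(np.abs(w))[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)
```

In a training loop one would alternate (or interleave) pruning with gradient steps that minimize the distillation loss, so that sparsity is increased only while the student's outputs stay close to the teacher's.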