

Research Spending & Results

Award Detail

Awardee: UNIVERSITY OF WYOMING
PD/PI:
  • Chao Lan
  • clan@uwyo.edu
Award Date: 09/04/2019
Estimated Total Award Amount: $174,998
Funds Obligated to Date: $174,998
  • FY 2019: $174,998
Start Date: 10/01/2019
End Date: 09/30/2021
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.070
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVIT
Award Title or Description: CRII: III: Fair Machine Learning with Restricted Access to Sensitive Personal Data
Federal Award ID Number: 1850418
DUNS ID: 069690956
Parent DUNS ID: 069690956
Program: INFO Info Integration & Inform
Program Officer:
  • Frank Olken
  • (703) 292-8930
  • folken@nsf.gov

Awardee Location

Street: 1000 E. University Avenue
City: Laramie
State: WY
ZIP: 82071-2000
County: Laramie
Country: US
Awardee Cong. District: 00

Primary Place of Performance

Organization Name: University of Wyoming
Street:
City:
State: WY
ZIP: 82071-2000
County: Laramie
Country: US
Cong. District: 00

Abstract at Time of Award

Machine learning is increasingly applied to assist consequential decision-making, typically by learning a model that automatically scores people's potential and prioritizes advantageous decisions for those receiving higher scores. While this enables more efficient and evidence-based decision-making, recent studies show that many model scorings are biased against minorities and can have negative societal impacts. This has triggered intense research interest in developing fair machine learning techniques that mitigate demographic bias in model scoring. However, existing developments are running into conflict with data privacy regulations: most fair learning techniques require free access to individuals' sensitive demographic data, yet use of such data is increasingly restricted in order to protect privacy. There are ongoing debates on whether it is permissible or necessary to use sensitive demographic data in fair machine learning, but no consensus has been reached, due in part to a lack of scientific investigation. This project aims to fill that gap, both to establish a fundamental relation between fairness and privacy and to broaden the deployment and impact of fair learning techniques in real-world applications. The project will also have an important educational impact through the involvement of underrepresented students in computer science research and the creation of a new curriculum on ethical machine learning to train the next generation of ethics-aware data scientists. The project will develop novel fair machine learning techniques with restricted access to sensitive demographic data (SDD). Three scenarios will be formulated and solved: where SDD is not accessible, where SDD can be accessed at a cost, and where SDD can be accessed through a private third party.
To tackle these scenarios, the project will integrate fairness objectives with a variety of sophisticated learning techniques, including transfer learning, active learning, distributed learning, and private learning. The project will also investigate a fundamental relation between fairness and privacy in machine learning: how much fairness can be achieved in a model's scoring if certain privacy of the sensitive demographic data must be protected when learning the model. The developed solutions will be presented in conference and journal venues, and the project website will provide access to the results, with references to the code for the developed and evaluated algorithms. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
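As a concrete illustration of the demographic bias the abstract refers to, a common group-fairness measure is the demographic parity gap: the difference in positive-decision rates between demographic groups. The function below is a minimal, hypothetical sketch (not taken from the project) showing how that gap could be computed in the baseline case where the sensitive attribute is freely available; the scenarios this project studies are precisely those where the `groups` column is missing, costly, or held by a third party.

```python
def demographic_parity_gap(scores, groups, threshold=0.5):
    """Absolute difference in positive-decision rates between two groups.

    scores: model scores in [0, 1]; groups: binary sensitive attribute (0/1).
    A gap of 0 means both groups receive positive decisions at equal rates.
    """
    rates = []
    for g in (0, 1):
        # Decisions for members of group g: positive if score clears the threshold.
        decisions = [s >= threshold for s, grp in zip(scores, groups) if grp == g]
        rates.append(sum(decisions) / len(decisions))
    return abs(rates[0] - rates[1])

# Illustrative data: group 0 is approved 2 times out of 3, group 1 only 1 out of 3.
gap = demographic_parity_gap(
    scores=[0.9, 0.6, 0.2, 0.8, 0.3, 0.1],
    groups=[0, 0, 0, 1, 1, 1],
)
```

A fair learning method in the project's sense would aim to keep such a gap small while the learner's access to `groups` is restricted, for example by estimating the gap through a privacy-preserving third party.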
