
Research Spending & Results

Award Detail

Awardee: UNIVERSITY OF ARKANSAS SYSTEM
Doing Business As Name: University of Arkansas
PD/PI:
  • Lu Zhang
  • (479) 575-4382
  • lz006@uark.edu
Co-PD(s)/co-PI(s):
  • Xintao Wu
Award Date: 09/01/2021
Estimated Total Award Amount: $484,828
Funds Obligated to Date: $484,828
  • FY 2021 = $484,828
Start Date: 10/01/2021
End Date: 09/30/2024
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.070
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVIT
Award Title or Description: III: Small: Counterfactually Fair Machine Learning through Causal Modeling
Federal Award ID Number: 1910284
DUNS ID: 191429745
Parent DUNS ID: 055600001
Program: Info Integration & Informatics
Program Officer:
  • Sylvia Spengler
  • (703) 292-7347
  • sspengle@nsf.gov

Awardee Location

Street: 1125 W. Maple Street
City: Fayetteville
State: AR
ZIP: 72701-3124
County: Fayetteville
Country: US
Awardee Cong. District: 03

Primary Place of Performance

Organization Name: University of Arkansas
Street: 430 J.B. Hunt Building
City: Fayetteville
State: AR
ZIP: 72701-1201
County: Fayetteville
Country: US
Cong. District: 03

Abstract at Time of Award

Machine learning models play increasingly important roles in modern society. Many such models are built on historical data to automatically make consequential decisions in areas such as employment, loan approval, insurance, bail, and criminal sentencing. It is imperative to ensure that decisions made with the assistance of machine learning models are free from discrimination and social bias. To achieve this goal, fundamental questions must be answered, including how to define fairness criteria and how to incorporate them into machine learning algorithms to build fair models. This project establishes a unified framework for addressing these challenges. The framework is built upon causal inference theories to answer the “what if” questions that are critical in judging discrimination and fairness. The successful outcome of this project will advance the theoretical understanding of how causality applies to fairness and contribute to the limited base of knowledge in fair machine learning. The education program will involve undergraduate and graduate students, through courses and thesis projects, to enhance their knowledge and skills in solving problems in machine learning and artificial intelligence, and will attract high school students, especially those from underrepresented groups, to pursue careers in STEM.

This project proposes a unified framework for fair machine learning that covers different types of fairness measures (e.g., total discrimination, direct/indirect discrimination, and counterfactual fairness), different levels of fairness (group-level and individual-level), mixed data types (categorical and numerical), different functional forms of the equations in the causal model, and fairness in both the data and the model.
Within the framework, the investigators will address the unidentifiability challenge in causal inference by examining the underlying causes of unidentifiability and deriving mathematical bounds for unidentifiable causal effects. With the developed techniques and tools, the investigators will derive quantitative measures for existing fairness notions and propose new fairness notions that are compatible with the framework. The investigators will then incorporate the proposed fairness measures into machine learning model construction. Finally, the investigators will make the framework more generally applicable by handling mixed-type variables via deep generative models and by relaxing the Markovian assumption to semi-Markovian models. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
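The counterfactual-fairness idea the abstract refers to can be illustrated with a minimal sketch. This is not the project's method: it assumes a hypothetical linear Markovian structural causal model with one protected attribute A, one feature X, and a fixed predictor, and checks whether an individual's score would change had A been different (via the standard abduction, action, prediction steps of counterfactual computation).

```python
# Toy illustration of counterfactual fairness under an ASSUMED linear
# Markovian SCM; the variables and coefficients are hypothetical, not
# taken from the award.
#   A : protected attribute (0/1)
#   X = 2.0*A + U_x          (observed feature; U_x is exogenous noise)
#   h(X) = 0.5*X             (a fixed decision score)

def counterfactual_score(a_factual, x_factual, a_counter):
    """Abduction: recover U_x from the observed (A, X).
    Action: intervene to set A = a_counter.
    Prediction: recompute X and the score under the intervention."""
    u_x = x_factual - 2.0 * a_factual   # abduction (invert X's equation)
    x_cf = 2.0 * a_counter + u_x        # action + prediction
    return 0.5 * x_cf

# An individual observed with A=1, X=3.0 (so U_x = 1.0):
factual = 0.5 * 3.0                     # score actually received: 1.5
cf = counterfactual_score(1, 3.0, 0)    # score had A been 0: 0.5
unfair = abs(factual - cf) > 1e-9       # nonzero gap: h depends on A via X
```

The nonzero factual–counterfactual gap flags the predictor as counterfactually unfair for this individual, because A influences the score through X; a counterfactually fair predictor would yield the same score under both values of A.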
