
Research Spending & Results

Award Detail

Awardee: ARIZONA STATE UNIVERSITY
Doing Business As Name: Arizona State University
PD/PI:
  • Christian Wright
  • (480) 965-5479
  • cdwrigh2@asu.edu
Co-PD(s)/co-PI(s):
  • Sara E Brownell ~000636591
Award Date: 11/22/2017
Estimated Total Award Amount: $249,922
Funds Obligated to Date: $249,922
  • FY 2018 = $249,922
Start Date: 12/01/2017
End Date: 11/30/2020
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.076
Primary Program Source: 040106 NSF Education & Human Resources
Award Title or Description: Exploring Differences Between Instructors' Exams and How These Differences Produce Scores that Could Inaccurately and Inequitably Represent Student Understanding
Federal Award ID Number: 1711272
DUNS ID: 943360412
Parent DUNS ID: 806345658
Program: IUSE
Program Officer:
  • Lidia C. Yoshida
  • (703) 292-4644
  • lyoshida@nsf.gov

Awardee Location

Street: ORSPA
City: TEMPE
State: AZ
ZIP: 85281-6011
County: Tempe
Country: US
Awardee Cong. District: 09

Primary Place of Performance

Organization Name: Arizona State University
Street: P.O. Box 876011
City: Tempe
State: AZ
ZIP: 85287-6011
County: Tempe
Country: US
Cong. District: 09

Abstract at Time of Award

In many STEM courses, students' exam scores determine their course grades. In turn, when averaged together, course grades determine each student's grade point average, which can affect their persistence in a STEM major and their competitiveness for admission to professional or graduate schools. Thus, it is important that these exams accurately and equitably measure students' understanding of the subject matter they are supposed to test. Little research has been done to determine whether specific exam questions accurately measure student understanding or whether they are fair to all students. This collaborative project between Arizona State University and the University of Washington will address this gap by analyzing questions from exams in introductory biology courses taught by different instructors. The researchers will examine the relationships between different types of questions, student scores on those questions, and student understanding of the concept each question is supposed to test. This information has the potential to help biology instructors test student understanding of biology more fairly and accurately. It may also provide guidelines for building fair and accurate exam questions that are relevant to other STEM disciplines.

This project has four key goals: (1) characterizing the composition of instructor-generated biology exams across sections of the same introductory biology course; (2) characterizing elements of questions that lead students to perform differently on them; (3) determining whether modifying questions can diminish differences in student performance; and (4) correlating exam scores with students' conceptual understanding of biology. The project will focus on the exams of different instructors teaching the same introductory biology course offered across multiple institutions within the same regional network. No published studies have explored differences in question performance on instructor-generated exams in introductory biology courses. Further, this project would be the largest and most comprehensive analysis to date of instructor-generated exams and questions in any STEM discipline, providing insight into the factors that may drive performance differences on individual questions. This project can help instructors and researchers become more aware of question elements that result in unfair evaluation of certain groups of students, leading to more accurate and equitable measurement of student understanding.