Research Spending & Results

Award Detail

Awardee: UNIVERSITY OF ILLINOIS
Doing Business As Name: University of Illinois at Urbana-Champaign
PD/PI:
  • Kiel Christianson
  • (217) 265-6558
  • kiel@illinois.edu
Co-PD(s)/co-PI(s):
  • Tania Ionin
  • Melissa Bowles
  • Anna Tsiola
Award Date: 07/07/2020
Estimated Total Award Amount: $16,409
Funds Obligated to Date: $16,409
  • FY 2020 = $16,409
Start Date: 07/15/2020
End Date: 06/30/2022
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.075
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVIT
Award Title or Description: Doctoral Dissertation Research: Linguistic and visual cue competition in novel L2 structure learning and thematic role assignment
Federal Award ID Number: 2016922
DUNS ID: 041544081
Parent DUNS ID: 041544081
Program: DDRI Linguistics
Program Officer:
  • Joan Maling
  • (703) 292-8046
  • jmaling@nsf.gov

Awardee Location

Street: 1901 South First Street
City: Champaign
State: IL
ZIP: 61820-7406
County: Champaign
Country: US
Awardee Cong. District: 13

Primary Place of Performance

Organization Name: University of Illinois at Urbana-Champaign
Street: HAB, 506 S. Wright St.
City: Urbana
State: IL
ZIP: 61801-3620
County: Urbana
Country: US
Cong. District: 13

Abstract at Time of Award

This project examines how American English speakers learn a case-marking, flexible-word-order language as a second language under different context conditions (with supporting images or translations). It links the real-time processing of the new language, as learners read sentences, to learning outcomes. Languages whose structures differ from those of English pose great difficulty for learners. This project examines the origins of this attested difficulty, and how visual information (images) can guide learners' attention and help them notice and understand new grammatical structures. It considers the role of the learning environment in grammar learning and can illustrate how language input is better integrated with non-linguistic, multimodal contextual support. It can guide educators and policy makers in the development of educational software, game design, and online language learning, and it can inform the design of successful language programs for U.S. adult learner populations.

The studies include a language-learning phase and a subsequent testing phase. During the learning phase, self-paced reading and eye-tracking will show where learners allocate their attention as they read second-language sentences, and how attention to different parts of the sentence is modulated by the type of contextual support (images or translations). The goal is to examine how linguistic and visual information interact and compete for learners' attention. The hypothesis is that visual scenes can make some aspects of grammar 'stand out' as learners compare their sentence interpretation to the visual information. During the testing phase, multiple measures will assess participants' ability to comprehend and produce the new grammatical structure. These studies can show how multimodal input influences the focus of learners' attention during real-time sentence reading and how this processing affects their learning. This work will advance our understanding of how the mind processes and integrates linguistic and visual information when learning a second language, which has clear educational applications, especially in online and multimedia language learning.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.