
Research Spending & Results

Award Detail

Doing Business As Name: University of Michigan Ann Arbor
  • Frederick G Conrad
  • (734) 936-1019
Award Date: 09/20/2010
Estimated Total Award Amount: $705,410
Funds Obligated to Date: $705,410
  • FY 2011 = $179,537
  • FY 2010 = $525,873
Start Date: 10/01/2010
End Date: 09/30/2014
Transaction Type: Grant
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.075
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVITIES
Award Title or Description: Collaborative Research: Responding to Surveys on Mobile Multimodal Devices
Federal Award ID Number: 1026225
DUNS ID: 073133571
Parent DUNS ID: 073133571
Program: Methodology, Measurement & Statistics
Program Officer:
  • Cheryl Eavey
  • (703) 292-7269

Awardee Location

Street: 3003 South State St., Room 1062
City: Ann Arbor
County: Ann Arbor
Awardee Cong. District: 12

Primary Place of Performance

Organization Name: University of Michigan Ann Arbor
Street: 3003 South State St., Room 1062
City: Ann Arbor
County: Ann Arbor
Cong. District: 12

Abstract at Time of Award

Collecting survey data of national importance (for example, on employment, health, and public opinion trends) is becoming more difficult as communication technologies undergo rapid and radical change. Important basic questions about whether and how to adapt data collection methods urgently need to be addressed. This project investigates how survey participation, completion, data quality, and respondent satisfaction are affected when respondents answer survey questions via mobile phones with multimedia capabilities (e.g., iPhones and other "app phones"), which allow alternative modes for answering (voice, text) and can allow respondents to answer questions in a different mode than the one in which they were invited. Two experiments will compare participation, completion, data quality, and satisfaction when the interviewing agent is a live human or a computer and when the medium of communication is voice or text, resulting in four modes: human-voice interviews, human-text interviews, automated-voice interviews, and automated-text interviews. The first experiment randomly assigns respondents to one of these modes; the second experiment allows respondents to choose the mode in which they answer. Results will shed light on whether respondents using these devices agree to participate, and answer differently, when the interviewing agent is a human or a computer, and whether this differs for more and less sensitive questions. Results also will shed light on how the effort required to interact with a particular medium (e.g., more effort to enter text than to speak) affects respondents' behavior and experience, and whether the physical environment that respondents are in (a noisy environment, a non-private environment, a brightly lit environment with glare that makes reading a screen difficult) affects their mode choice and the quality of their data. Finally, the results will clarify how allowing respondents to choose their mode of response affects response rates and data quality.
These studies are designed to benefit researchers, survey respondents, and society more broadly. For researchers, the benefit is to allow them to adapt to the mobile revolution as they collect data that are essential for the functioning of modern societies, maintaining high levels of contact and participation while gathering reliable and useful data. For survey respondents, the potential benefit is the design of systems that make it more convenient and pleasant to respond and that enable them to choose ways of responding appropriate to their interactive style, the subject matter, and their physical environment. For society more broadly, it is essential that the survey enterprise is able to continue to gather crucial information that is reliable and does not place undue burden on citizens as their use of communication technology changes and as alternate sources of digital data about people proliferate. More fundamentally, the results will add to basic understanding of how human communication is evolving as people have expanded ability to communicate anytime, anywhere, and in a variety of ways. The project is supported by the Methodology, Measurement, and Statistics Program and a consortium of federal statistical agencies as part of a joint activity to support research on survey and statistical methodology.

Publications Produced as a Result of this Research

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

Johnston, M., Ehlen, P., Conrad, F.G., Schober, M.F., Antoun, C., Fail, S., Hupp, A., Vickers, L., Yan, H., & Zhang, C. "Spoken dialog systems for automated survey interviewing" Proceedings of the 14th Annual SIGDIAL Meeting on Discourse and Dialogue (SIGDIAL 2013), v., 2013, p.329.

Schober, M.F., Conrad, F.G., Dijkstra, W., & Ongena, Y.P. "Disfluencies and gaze aversion in unreliable responses to survey questions." Journal of Official Statistics, v.28, 2012, p.555-582.

Project Outcomes Report


This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.


Responding to surveys on mobile multimodal devices

As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. This award funded two experiments that examined data quality in text and voice interviews on smartphones (iPhones) administered by human and automated interviewers, resulting in four interview modes: Human Voice, Human Text, Automated Voice, and Automated Text.

In the first experiment, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. Ten interviewers from the University of Michigan Survey Research Center administered the voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. Quality of answers was measured by (1) precision of numerical answers (how many were not rounded; rounded answers presumably involve less thoughtful responding), (2) differentiation of answers to multiple questions with the same response scale (giving the same answer to every question likely indicates a lack of thoughtfulness), and (3) disclosure of socially undesirable information (embarrassing or compromising answers, which can be assumed to be more truthful). The results showed that text interviews led to higher quality data (more precise and differentiated answers and more disclosure of sensitive information) than voice interviews, both with human and automated interviewers. Independent of this, respondents also disclosed more sensitive information to automated (voice and text) interviewers than to human interviewers. Text respondents reported a strong preference for future interviews by text.
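The first two quality measures lend themselves to simple computation. As an illustrative sketch only (the divisibility heuristic and the distinct-values ratio are assumptions for illustration, not the project's exact operationalizations), rounding and differentiation might be scored like this:

```python
def is_rounded(answer, base=5):
    # Heuristic: treat multiples of `base` (e.g., 5 or 10) as rounded,
    # hence less precise. The choice of base is an assumption.
    return answer % base == 0

def differentiation(ratings):
    # Share of distinct values used across items that share one response
    # scale: 1.0 = fully differentiated; low values suggest the same
    # answer was given to every question ("straightlining").
    return len(set(ratings)) / len(ratings)

answers = [20, 37, 45, 150]   # hypothetical numeric survey answers
ratings = [4, 4, 4, 4, 4]     # hypothetical battery on a shared 1-5 scale

rounding_rate = sum(is_rounded(a) for a in answers) / len(answers)
print(rounding_rate)             # 0.75 (20, 45, and 150 are multiples of 5)
print(differentiation(ratings))  # 0.2  (one distinct value across 5 items)
```

Under metrics like these, "higher quality" text-mode data would show a lower rounding rate and higher differentiation than voice-mode data.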

The second experiment examined how choosing one's mode of interviewing on a single device affects the quality of answers. In the experiment, an additional 626 iPhone users were contacted in the same four modes and required to choose their interview mode (which could be the contact mode). Overall, more than half the respondents chose to switch modes, most often into a text mode. The findings demonstrate that just being able to choose (whether switching or not) improved data quality: when respondents chose the interview mode, responses were more precise and differentiated than when the mode was assigned (Experiment 1), with no reduction in disclosure. Those who began the interview in a mode they chose were more likely to complete it than respondents interviewed in an assigned mode; the only evident cost of mode choice was a small loss of invited participants at the point the choice was made (mostly in switches from automated to human interviews). Finally, participants who chose their interview mode were more satisfied with the experience than those who were interviewed in an assigned mode.

The findings suggest that people interviewed on mobile devices at a time and place convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The results also suggest that allowing respondents to choose their interviewing mode on a single device can improve data quality and increase satisfaction. Many respondents reported that responding via text is particularly convenient because they can continue with other activities while responding; convenience was respondents' most frequent explanation for why they chose the interviewing modes they did (whatever their choice).

Additional contributions ...

For specific questions or comments about this information including the NSF Project Outcomes Report, contact us.