
Research Spending & Results

Award Detail

Awardee: BROWN UNIVERSITY IN PROVIDENCE IN THE STATE OF RHODE ISLAND AND PROVIDENCE PLANTATIONS
Doing Business As Name: Brown University
PD/PI:
  • Daniel Ritchie
  • (401) 863-2777
  • daniel_ritchie@brown.edu
Award Date: 09/05/2019
Estimated Total Award Amount: $498,333
Funds Obligated to Date: $498,333
  • FY 2019: $498,333
Start Date: 10/01/2019
End Date: 09/30/2022
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.070
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVIT
Award Title or Description: CHS: Small: Learning to Automatically Design Interior Spaces
Federal Award ID Number: 1907547
DUNS ID: 001785542
Parent DUNS ID: 001785542
Program: CHS-Cyber-Human Systems
Program Officer:
  • Ephraim Glinert
  • (703) 292-8930
  • eglinert@nsf.gov

Awardee Location

Street: BOX 1929
City: Providence
State: RI
ZIP: 02912-9002
County: Providence
Country: US
Awardee Cong. District: 01

Primary Place of Performance

Organization Name: Brown University
Street: Office of Sponsored Projects
City: Providence
State: RI
ZIP: 02912-9093
County: Providence
Country: US
Cong. District: 01

Abstract at Time of Award

People spend a large part of their lives indoors: in bedrooms, living rooms, offices, kitchens, etc. The demand for virtual versions of these spaces has never been higher; robotics, computer vision, architecture, interior design, virtual and augmented reality -- all of these fields need to create high-fidelity digital instances of real-world indoor scenes. To meet this need, this project will develop new generative models of indoor scenes that can rapidly synthesize novel environments. To achieve this goal, a scene synthesis system should be data driven, able to quickly generate a variety of plausible and visually appealing results, and user-controllable. While prior work has addressed indoor scene synthesis, no existing approach satisfies all of these requirements. Not only will this project achieve that goal, it also includes efforts to use the new software system for training robots to navigate. The broader impact of project outcomes will be enhanced through industrial technology transfer in collaboration with two furniture and interior design companies. The research will create freely available online demos, and will engage and mentor female students as research assistants.

The first component of the envisaged system will be a new scene generative model based on deep convolutional neural networks that unifies a detailed, image-based representation of scenes based on floor plans with a discrete, symbolic representation of scenes based on object relationship graphs, thereby gaining the benefits of both to generate a variety of plausible scenes. Convolutions on both graphs and images will be employed to make synthesis decisions based on the relevant spatial context in the scene; the resulting model will be fast, controllable, and fully data driven. The system's second component will be a model of the visual compatibility of scene objects, which is necessary for generating visually appealing scenes. This model will exploit a convolutional network to analyze rendered views of the scene, capturing the visual appearance of the scene and the objects in it; the network will be trained on a new dataset of professionally designed interior scenes.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
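To make the "convolutions on graphs" idea concrete, the sketch below shows one mean-aggregation message-passing step over a toy object-relationship graph. All names, the toy scene, and the aggregation rule are illustrative assumptions for exposition; they are not the award's actual model, which would use learned weights and richer features.

```python
# Illustrative sketch (NOT the project's model): one message-passing step
# over an object-relationship graph, the basic operation underlying graph
# convolutions. Each object's feature vector is averaged with those of its
# graph neighbors, so synthesis decisions can reflect spatial context.

def graph_conv_step(features, edges):
    """Return new features: mean of each node's vector and its neighbors'."""
    n = len(features)
    dim = len(features[0])
    neighbors = {i: [] for i in range(n)}
    for a, b in edges:          # undirected relationship edges
        neighbors[a].append(b)
        neighbors[b].append(a)
    out = []
    for i in range(n):
        group = [features[i]] + [features[j] for j in neighbors[i]]
        out.append([sum(v[d] for v in group) / len(group) for d in range(dim)])
    return out

# Toy scene graph: 0 = bed, 1 = nightstand, 2 = lamp;
# edges encode an assumed "adjacent to" relationship.
feats = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
edges = [(0, 1), (1, 2)]
print(graph_conv_step(feats, edges))
```

A real graph-convolutional model would apply a learned linear transform and nonlinearity at each step and stack several such layers, but the neighborhood-aggregation pattern is the same.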
