

Research Spending & Results

Award Detail

Awardee: NEW JERSEY INSTITUTE OF TECHNOLOGY
Doing Business As Name: New Jersey Institute of Technology
PD/PI:
  • Tao Han
  • (908) 768-0083
  • th36@njit.edu
Award Date: 09/23/2021
Estimated Total Award Amount: $403,794
Funds Obligated to Date: $265,726
  • FY 2019 = $249,726
  • FY 2020 = $16,000
Start Date: 07/01/2021
End Date: 09/30/2022
Transaction Type: Grant
Agency: NSF
Awarding Agency Code: 4900
Funding Agency Code: 4900
CFDA Number: 47.070
Primary Program Source: 040100 NSF RESEARCH & RELATED ACTIVITIES
Award Title or Description: CNS Core: Small: UbiVision: Ubiquitous Machine Vision with Adaptive Wireless Networking and Edge Computing
Federal Award ID Number: 2147821
DUNS ID: 075162990
Parent DUNS ID: 075162990
Program: Networking Technology and Systems
Program Officer:
  • Murat Torlak
  • (703) 292-7748
  • mtorlak@nsf.gov

Awardee Location

Street: University Heights
City: Newark
State: NJ
ZIP: 07102-1982
County: Newark
Country: US
Awardee Cong. District: 10

Primary Place of Performance

Organization Name: New Jersey Institute of Technology
Street: 323 Dr Martin Luther King Jr Blvd
City: Newark
State: NJ
ZIP: 07102-1824
County: Newark
Country: US
Cong. District: 10

Abstract at Time of Award

The penetration of technologies such as wireless broadband and artificial intelligence (AI) is propelling a rapid adoption of network cameras across the household, industrial, and commercial sectors. These cameras, such as surveillance cameras, dash cameras, and wearable cameras, can capture voluminous amounts of visual data that can be turned into valuable information for public safety, autonomous driving, service robots, augmented/mixed reality, assisted living, and other applications. To realize this potential, new methods are needed for efficiently and effectively extracting, transferring, and sharing useful information from ubiquitous cameras while preserving user privacy. This project uses techniques and perspectives from wireless networking, computer vision, and edge computing to analyze and solve the problems in ubiquitous camera systems; it fosters interdisciplinary research, provides a unique training program for undergraduate and graduate students, and has a high potential to introduce transformative technologies that enable new real-life products and services.

This project aims to realize ubiquitous machine vision (UbiVision) and enable efficient utilization of networked cameras for information extraction and sharing. Toward this end, three fundamental research problems are investigated: 1) how to dynamically manage highly coupled resources and functions across multiple technology domains (camera functions, network resources, and computation resources on edge servers); 2) how to design adaptive and efficient machine vision algorithms for resource-constrained smart cameras; and 3) how to engineer reliable machine learning frameworks for robust vision analysis on edge servers. First, a new model-free, end-to-end resource orchestration method is designed to improve the efficiency of wireless networking and computing by combining the merits of conventional optimization and emerging machine learning techniques. Second, a novel universal convolutional neural network (CNN) and corresponding CNN optimization methods are developed for efficient multi-task feature learning on smart cameras. Third, a teacher-student network learning paradigm is devised to build memory- and computation-efficient machine vision algorithms that achieve robust performance under various adverse conditions caused by varying network conditions and limited server computation budgets.

This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
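
The abstract's third research thrust refers to a teacher-student network learning paradigm. The sketch below illustrates the general idea behind that paradigm (knowledge distillation) in PyTorch; it is not the project's actual algorithm, and every model, loss weight, and hyperparameter shown here is an assumption chosen only to make the example self-contained and runnable.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Blend the ordinary supervised loss with a soft-target loss that pulls
    the small student network toward the large teacher's output distribution."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)                              # soft targets (KL divergence)
    hard = F.cross_entropy(student_logits, labels)      # hard ground-truth targets
    return alpha * soft + (1.0 - alpha) * hard

# Stand-in models: a larger "teacher" and a compact "student" (both hypothetical).
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512),
                        nn.ReLU(), nn.Linear(512, 10)).eval()
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                        nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

images = torch.randn(8, 3, 32, 32)      # dummy image batch
labels = torch.randint(0, 10, (8,))     # dummy labels
with torch.no_grad():                   # the teacher stays frozen
    teacher_logits = teacher(images)
loss = distillation_loss(student(images), teacher_logits, labels)
loss.backward()
optimizer.step()
```

The appeal of this pattern in the setting the abstract describes is that only the compact student needs to run on a resource-constrained camera or edge server, while the heavier teacher is used only during training.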

Publications Produced as a Result of this Research

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

Bhattacharyya, Sumanta and Shen, Ju and Welch, Stephen and Chen, Chen. "Efficient unsupervised monocular depth estimation using attention guided generative adversarial network." Journal of Real-Time Image Processing, v.18, 2021. doi:https://doi.org/10.1007/s11554-021-01092-0

Liu, Qiang and Han, Tao and Moges, Ephraim. "EdgeSlice: Slicing Wireless Edge Computing Network with Decentralized Deep Reinforcement Learning." IEEE 40th International Conference on Distributed Computing Systems (ICDCS), 2020. doi:https://doi.org/10.1109/ICDCS47774.2020.00028

Liu, Qiang and Han, Tao and Xie, Jiang Linda and Kim, BaekGyu. "LiveMap: Real-Time Dynamic Map in Automotive Edge Computing." IEEE Conference on Computer Communications, 2021. doi:https://doi.org/10.1109/INFOCOM42981.2021.9488872

Zhu, Sijie and Yang, Taojiannan and Chen, Chen. "Visual Explanation for Deep Metric Learning." IEEE Transactions on Image Processing, v.30, 2021. doi:https://doi.org/10.1109/TIP.2021.3107214
