NestDNN: Resource-Aware Multi-Tenant On-Device Deep Learning for Continuous Mobile Vision

Mobile vision systems often run multiple streaming vision applications (e.g., face detection, scene understanding) at the same time. Limited on-device resources cause resource contention across these applications. To address this, we propose NestDNN, a resource-aware deep learning framework. The key technique behind NestDNN is a multi-capacity model that can be reconfigured as available resources vary. To fully exploit the multi-capacity model, we also design a runtime scheduler that jointly optimizes the overall performance of all concurrent applications.
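To give a feel for the idea, here is a toy sketch (not the paper's actual scheduler, which jointly optimizes accuracy and latency via a cost function): each multi-capacity model exposes several descendant variants of different memory footprints, and a greedy scheduler assigns each running app the most accurate variant that still fits the remaining budget. All app names, memory sizes, and accuracies below are hypothetical.

```python
def schedule(apps, budget_mb):
    """Greedily assign the best-fitting model variant to each app.

    apps maps an app name to a list of (memory_mb, accuracy) variants.
    Returns the chosen (memory_mb, accuracy) per app; apps with no
    fitting variant are left out (i.e., paused under contention).
    """
    chosen = {}
    for name, variants in apps.items():
        fitting = [v for v in variants if v[0] <= budget_mb]
        if not fitting:
            continue  # no variant fits the remaining budget
        mem, acc = max(fitting, key=lambda v: v[1])  # most accurate fit
        chosen[name] = (mem, acc)
        budget_mb -= mem
    return chosen

apps = {
    "face_detection": [(10, 0.80), (25, 0.88), (60, 0.93)],
    "scene_understanding": [(15, 0.75), (40, 0.85)],
}
print(schedule(apps, budget_mb=80))
# {'face_detection': (60, 0.93), 'scene_understanding': (15, 0.75)}
```

When the budget shrinks (say a third app launches), rerunning the scheduler downgrades apps to smaller variants instead of evicting them, which is the core benefit of a reconfigurable multi-capacity model.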

Paper in ACM MobiCom 2018.

Selected Media: Synced (Chinese)

DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation

DeepASL enables real-time two-way communication between deaf and hearing people. Deaf users simply perform sign language as usual; DeepASL captures the hand shape and movement information, transforms it into recognized words or sentences, and "speaks" them with a synthesized voice. For recognition, we use a hierarchical bidirectional RNN trained with a Connectionist Temporal Classification (CTC) loss.
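As a small illustration of the CTC side of the pipeline: after the recurrent network emits one label per input frame, CTC decoding collapses consecutive repeats and removes the blank symbol to produce the final word sequence. The frame labels below are hypothetical; the full system uses a hierarchical bidirectional RNN trained with the CTC loss.

```python
BLANK = "-"  # the special CTC blank symbol

def ctc_collapse(frame_labels):
    """Standard CTC decoding rule: merge consecutive repeats, drop blanks."""
    out = []
    prev = None
    for label in frame_labels:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return out

# Per-frame best labels for a hypothetical signed sentence:
frames = ["-", "I", "I", "-", "-", "want", "want", "-", "water", "water"]
print(ctc_collapse(frames))  # ['I', 'want', 'water']
```

Note that a blank between two identical labels keeps them distinct (`["a", "-", "a"]` decodes to `["a", "a"]`), which is how CTC handles genuinely repeated words.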

Paper in ACM SenSys 2017.

Selected Media: NSF, Nvidia-1, Nvidia-2, Smithsonian, Michigan Radio, AAU

Multi-Task Learning Age-Gender Identification and DCGANS on Face Regeneration and Completion

In this project, we perform age and gender identification using multi-task learning in a CNN. We also generate new faces using a generative adversarial network (GAN) boosted by age-gender multi-task learning. We show that multi-task learning outperforms single-task learning, and that the GAN generates high-quality faces.
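The multi-task structure can be sketched in a few lines: one shared feature extractor feeds two task-specific heads, one for gender and one for age. The real system uses a CNN trunk; below, a single random linear layer stands in for it, and the class counts are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_trunk(x, w):
    """Stand-in for the CNN trunk: ReLU features shared by both tasks."""
    return np.maximum(0.0, x @ w)

def head(features, w):
    """Task-specific linear head followed by a softmax."""
    logits = features @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.normal(size=64)              # stand-in for a face-image embedding
w_trunk = rng.normal(size=(64, 32))  # shared weights (random placeholders)
w_gender = rng.normal(size=(32, 2))  # 2 gender classes
w_age = rng.normal(size=(32, 8))     # 8 hypothetical age buckets

feats = shared_trunk(x, w_trunk)
gender_probs = head(feats, w_gender)
age_probs = head(feats, w_age)
print(gender_probs.shape, age_probs.shape)  # (2,) (8,)
```

In training, the two heads' losses are summed and backpropagated through the shared trunk, which is what lets each task regularize the other.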

Technical Report

BodyScan: Enabling Radio-based Sensing on Wearable Devices for Contactless Activity and Vital Sign Monitoring

BodyScan detects more than ten daily activities and monitors breathing rate using a smartwatch and a hip-mounted device. It transmits a WiFi signal, receives the signal that bounces off the user's body, and classifies the received signal into activities using an SVM. Breathing rate is estimated from the power spectral density of the signal.
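The breath-rate step can be sketched as follows: estimate the dominant frequency of the received signal with a periodogram (an unnormalized power spectral density) and convert it to breaths per minute. The 0.25 Hz sine below is a synthetic stand-in for the real radio measurements, and the sample rate is hypothetical.

```python
import numpy as np

fs = 10.0                          # sample rate in Hz (hypothetical)
t = np.arange(0, 60, 1 / fs)       # a 60-second analysis window
signal = np.sin(2 * np.pi * 0.25 * t)  # breathing at 0.25 Hz = 15 bpm

psd = np.abs(np.fft.rfft(signal)) ** 2       # periodogram
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
peak_hz = freqs[np.argmax(psd[1:]) + 1]      # dominant frequency, skipping DC
print(round(peak_hz * 60))                   # 15 breaths per minute
```

Because breathing is quasi-periodic and slow (roughly 0.1–0.5 Hz), a spectral peak over a window of tens of seconds is a robust estimator even when the time-domain signal is noisy.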

Paper in ACM MobiSys 2016.

HeadScan: A Wearable System for Radio-based Sensing of Head and Mouth-related Activities

HeadScan enables contactless sensing of head- and mouth-related activities. With just two antennas on the shoulders, HeadScan can recognize whether you are eating, coughing, drinking, or speaking. HeadScan could potentially benefit the medical treatment and care of people with conditions such as diabetes and autism.

Paper in ACM/IEEE IPSN 2016.

Selected Media: Stanford Medicine, MedGadget, Futurity, MSUToday, Fox 2, ReadWrite, WLNS6


AirSense: An Intelligent Home-based Sensing System for Indoor Air Quality Analytics

AirSense detects and classifies indoor pollution events (e.g., smoking and cooking) and visualizes indoor air quality (IAQ) in a smartphone application. We use PM2.5, volatile organic compound (VOC), and humidity sensors to differentiate pollution events. AirSense can raise people's awareness of indoor pollution.
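As a hypothetical illustration of how different sensor channels separate events (the thresholds and rules below are invented for the sketch, not AirSense's actual classifier): a sharp PM2.5 rise without much VOC suggests smoking, while elevated PM2.5 together with elevated VOC suggests cooking.

```python
def classify_event(pm25, voc):
    """Toy rule-based separation of pollution events from two channels.

    pm25: particulate reading in ug/m^3; voc: VOC reading in ppb.
    Thresholds are illustrative only.
    """
    if pm25 > 100 and voc > 50:
        return "cooking"   # particulates plus volatiles
    if pm25 > 100:
        return "smoking"   # particulates dominate
    return "clean"

print(classify_event(pm25=150, voc=80))  # cooking
print(classify_event(pm25=150, voc=10))  # smoking
print(classify_event(pm25=20, voc=5))    # clean
```

In practice a learned classifier over PM2.5, VOC, and humidity features replaces such hand-set thresholds, but the intuition is the same: different event types leave distinct multi-sensor signatures.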

Paper in ACM Ubicomp 2016.

Selected Media: Futurity, MSUToday, ASHRAE, DBusiness

© Biyi Fang. All rights reserved.