PhD Defense: Modeling Deep Context in Spatial and Temporal Domain

Talk
Xiyang Dai
Time: 10.30.2018 11:00 to 13:00
Location: AVW 4172

Context has been one of the most important aspects of computer vision research because it provides useful guidance for solving various tasks in both the spatial and temporal domains. With the recent rise of deep learning, deep networks have shown impressive performance on many computer vision tasks. Modeling deep context explicitly and implicitly in deep networks can further boost the effectiveness and efficiency of deep models.

In the spatial domain, implicitly modeling context can be useful for learning discriminative texture representations. We present an effective deep fusion architecture that captures both the second-order and first-order statistics of texture features. Meanwhile, explicitly modeling context can also be important for challenging tasks such as fine-grained classification. We then present a deep multi-task network that explicitly captures geometric constraints by simultaneously conducting fine-grained classification and key-point localization.

In the temporal domain, explicitly modeling context can be crucial for activity recognition and localization. We present a temporal context network that explicitly captures the relative context around a proposal, sampling pairs of two temporal scales for precise temporal localization of human activities. Meanwhile, implicitly modeling context can lead to better network architectures for video applications. We then present a temporal aggregation network that learns a deep hierarchical representation for capturing temporal consistency.

Finally, we study jointly modeling context in both the spatial and temporal domains for human action understanding, which requires predicting where, when, and what human action happens in a crowded scene. We first present a decoupled framework with dedicated branches for spatial localization and temporal recognition. Contexts in the spatial and temporal branches are modeled explicitly and later fused to generate the final predictions. We then present a flow-guided architecture that implicitly models spatiotemporal context by utilizing motion flow in different formulations, such as learning motion dynamics, smoothing deep features to enhance spatial localization, and generating a self-attention motion mask to assist temporal understanding.

Examining Committee:

Chair: Dr. Larry S. Davis
Dean's Representative: Dr. Rama Chellappa
Members: Dr. Ramani Duraiswami, Dr. Hector Corrada Bravo, Dr. Tom Goldstein