Dense Estimation of Complex Dynamic Scenes in Three-dimensional Space from Images

Date: December 13, 2017


Estimating scene motion accurately and efficiently is key to many applications such as robot planning, dynamic reconstruction, autonomous driving, and action recognition. In this report, I present my research on understanding dynamic scene motion in 3D from images, drawing on examples in dynamic object mapping and scene flow estimation. I start with my work and motivation in dense scene mapping of the dynamic world, covering both RGB-D indoor scenarios and large-scale stereo outdoor scenes. I then focus on my two recent works on dense 3D non-rigid correspondence, termed scene flow. In the first work, I present how to solve scene flow via a continuous optimization method applied to stereo imagery. To overcome the inference complexity of the optimization-based approach, I propose the first learning-based scene flow method for RGB-D images, which achieves state-of-the-art accuracy, fast execution time, and good generalization to complex scenes.
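To make the term concrete: scene flow is the per-pixel 3D displacement between two frames. For RGB-D input, the basic geometry is to back-project each pixel to a 3D point using the depth map and camera intrinsics, then take the difference of corresponding points. The sketch below is illustrative only and is not the method from the report; the function names, the pinhole intrinsics `fx, fy, cx, cy`, and the assumption that a dense 2D correspondence field is already given are all my own simplifications.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) to a 3D point map
    (H x W x 3) under a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def scene_flow(depth0, depth1, corr, fx, fy, cx, cy):
    """Per-pixel 3D displacement between two RGB-D frames.

    `corr` is a hypothetical dense correspondence field (H x W x 2),
    giving for each pixel in frame 0 its matching (u, v) location in
    frame 1 — in practice this is what scene flow methods estimate
    jointly, rather than take as input.
    """
    p0 = backproject(depth0, fx, fy, cx, cy)
    p1 = backproject(depth1, fx, fy, cx, cy)
    ui = corr[..., 0].round().astype(int)
    vi = corr[..., 1].round().astype(int)
    # 3D point in frame 1 minus 3D point in frame 0 = scene flow vector
    return p1[vi, ui] - p0
```

For a static scene with identity correspondence the flow is zero everywhere; if every surface point moves away from the camera, the z-component of the flow equals the depth change.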

Personal Information

Zhaoyang Lv is a Ph.D. student in Robotics at the Georgia Tech School of Interactive Computing, jointly advised by Prof. James Rehg and Prof. Frank Dellaert. Before starting his Ph.D. at Georgia Tech, he completed his Master's thesis under the supervision of Prof. Andrew Davison at Imperial College London. His research interests span computer vision as perception for robot systems, particularly 3D scene understanding that can bridge perception to planning and control. His current focus is exploring efficient approaches to dense 3D motion (scene flow) from videos, which is the core of his thesis topic.

More information is available on his personal website: https://www.cc.gatech.edu/~zlv30/
A short CV can be found at https://www.cc.gatech.edu/~zlv30/files/cv.pdf




