
PhD problem statement
For my PhD work, I am searching for a systematic approach to holistic scene understanding for autonomous driving in urban environments. The approach should be mathematically sound and, at the same time, work well in practice on our autonomous Cadillac SRX4, shown at the right. The main building blocks for a proof of concept are:

1. Vision-based object detection
2. Vision-based lane marker detection
3. Multi-sensor fusion for moving object detection and tracking
4. Visual odometry
5. Scene modeling using a graphical model that integrates the above perception sub-tasks (a rough skeleton of how the pieces fit together is sketched below)
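
To give a rough picture of how these blocks are meant to fit together, here is a minimal Python skeleton of the pipeline. Every class and method name below is a placeholder of my own for illustration, not actual project code:

    # Hypothetical skeleton of the proof-of-concept pipeline;
    # all names here are illustrative placeholders.
    class ScenePipeline:
        def __init__(self, object_detector, lane_detector, fusion_tracker,
                     visual_odometry, scene_model):
            self.object_detector = object_detector  # 1. vision-based object detection
            self.lane_detector = lane_detector      # 2. vision-based lane marker detection
            self.fusion_tracker = fusion_tracker    # 3. radar/lidar/vision fusion tracker
            self.visual_odometry = visual_odometry  # 4. ego-motion estimation
            self.scene_model = scene_model          # 5. graphical model tying it together

        def step(self, image, radar_targets, lidar_points):
            detections = self.object_detector.detect(image)
            lanes = self.lane_detector.detect(image)
            ego_motion = self.visual_odometry.update(image)
            tracks = self.fusion_tracker.update(detections, radar_targets,
                                                lidar_points, ego_motion)
            # The graphical model fuses the sub-task outputs into one scene estimate.
            return self.scene_model.infer(tracks, lanes, ego_motion)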

Vision-based Object Detection and Tracking

Vehicle, pedestrian and bicycle detection and tracking

A vision-based "tracking-by-detection" method that combines Pedro Felzenszwalb's deformable part model HOG (Histogram of Oriented Gradients) detector with an efficient Kalman-filter-based tracker. [Click the title to see more about it.]
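
To give a flavor of the tracker side, below is a toy constant-velocity Kalman filter for a detected bounding-box center. This is a simplified illustration only, not the actual implementation; the state layout and noise values are my assumptions:

    import numpy as np

    class KalmanBoxTracker:
        """Toy constant-velocity Kalman filter for a bounding-box center.
        State: [x, y, vx, vy]; measurement: detected box center [x, y]."""

        def __init__(self, cx, cy, dt=1.0):
            self.x = np.array([cx, cy, 0.0, 0.0])  # initial state
            self.P = np.eye(4) * 10.0              # state covariance (assumed)
            self.F = np.array([[1, 0, dt, 0],      # constant-velocity motion model
                               [0, 1, 0, dt],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], dtype=float)
            self.H = np.array([[1, 0, 0, 0],       # we observe position only
                               [0, 1, 0, 0]], dtype=float)
            self.Q = np.eye(4) * 0.01              # process noise (assumed)
            self.R = np.eye(2) * 1.0               # measurement noise (assumed)

        def predict(self):
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x[:2]

        def update(self, cx, cy):
            z = np.array([cx, cy])
            y = z - self.H @ self.x                   # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

Per frame, predict() is called first, new detections from the DPM detector are associated with the predicted positions, and update() folds in the matched detection.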


Moving Object Tracking using Radar, Lidar, and Vision

Multi-sensor, multi-object tracking

I am implementing a multi-sensor, multi-object tracking system using three heterogeneous sensors: radar, lidar, and vision. A working version based on Boss's tracking system has been implemented, to which we added a vision sensor to improve its object recognition capability. Check out the performance of our tracking system in highway and urban environments! [Click the title to see more about it.]
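
One standard way to fuse heterogeneous sensors in such a tracker is to keep a single filter per object and apply each sensor's measurement sequentially, each with its own noise model. The sketch below illustrates only the idea; it is not Boss's actual tracking code, and the covariance values are made up:

    import numpy as np

    def sequential_update(x, P, measurements):
        """Fuse several sensor measurements of a 2-D object position [x, y]
        into one estimate by applying Kalman updates one after another.
        `measurements` is a list of (z, R) pairs: an observed position and
        that sensor's measurement-noise covariance (radar, lidar, or vision)."""
        H = np.eye(2)                # assume each sensor observes position directly
        for z, R in measurements:
            y = z - H @ x            # innovation for this sensor
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        return x, P

    # Example: radar noisy, lidar precise, vision in between (assumed values).
    x0, P0 = np.zeros(2), np.eye(2) * 5.0
    z_radar = (np.array([10.3, 4.9]), np.eye(2) * 2.0)
    z_lidar = (np.array([10.0, 5.1]), np.eye(2) * 0.1)
    z_vision = (np.array([10.1, 5.0]), np.eye(2) * 0.5)
    x, P = sequential_update(x0, P0, [z_radar, z_lidar, z_vision])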

Visual Odometry

Vehicle ego-motion estimation

I might be able to implement this over the summer, so stay tuned!
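
For reference, the classic monocular visual odometry step (feature matching, essential matrix estimation, pose recovery) can be sketched with OpenCV as below. This is the generic textbook recipe, not my planned implementation, and it assumes a calibrated camera intrinsic matrix K:

    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Estimate the relative camera rotation R and (unit-scale) translation t
        between two consecutive frames: one monocular visual odometry step."""
        orb = cv2.ORB_create(2000)                 # detect and describe features
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
        # Essential matrix with RANSAC, then decompose it into R, t.
        E, mask = cv2.findEssentialMat(pts1, pts2, K,
                                       method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t  # t is known only up to scale in the monocular case

Chaining these frame-to-frame poses yields the vehicle's ego-motion trajectory, up to an unknown scale in the single-camera case.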

A Framework for Holistic Scene Understanding

A unified method for modeling interactions (i.e., information flows) among perception algorithms

Coming soon!