This project will use the Carnegie Mellon University Multi-Modal Activity (CMU-MMAC) data set as a testbed for developing automatic object segmentation and tracking algorithms. The student will use the available data to create an algorithm that segments and tracks objects used by different people performing the same action, each at a different rate and in a different style.
The project will involve finding object features that are invariant to rotation and translation and robust to occlusion. In addition, existing algorithms for segmenting and tracking objects from static and first-person cameras will be explored, modified, and applied to the natural activities found in the CMU-MMAC data set.
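To make the invariance requirement concrete, here is a minimal sketch of one simple feature of the kind described above: a descriptor built from distances to an object's centroid, which is unchanged by rotating or translating the object. The function names and the toy point set are illustrative assumptions, not part of the project specification; a real system would operate on image data rather than hand-picked coordinates.

```python
import math

def invariant_descriptor(points):
    """Centroid-distance descriptor for a set of 2D points sampled
    from an object's contour. Distances from the centroid do not
    change under rotation or translation, so the sorted list of
    distances is invariant to both."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return sorted(math.hypot(x - cx, y - cy) for x, y in points)

def transform(points, angle, tx, ty):
    """Rotate the points by `angle` radians, then translate by (tx, ty)."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

# Toy "object": five points in the plane (hypothetical example data).
pts = [(0.0, 0.0), (2.0, 0.0), (2.0, 1.0), (0.0, 1.0), (1.0, 0.5)]
moved = transform(pts, 0.7, 5.0, -3.0)

d1 = invariant_descriptor(pts)
d2 = invariant_descriptor(moved)
# The descriptors agree up to floating-point error despite the motion.
print(max(abs(a - b) for a, b in zip(d1, d2)) < 1e-9)
```

Such a descriptor is deliberately crude; features actually used in the project would also need robustness to occlusion, which this sketch does not address.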
The CMU-MMAC data set is challenging because subjects were asked to perform kitchen activities as naturally as they would at home. Due to the high variability across people in executing the same action, it is essential that activity recognition programs rely on algorithms for object segmentation and tracking. If the objects in use are known, semantic information from ontologies or recipes can be used to support activity recognition. The size of the data set requires that such algorithms be automatic, as performing object segmentation and tracking manually would be infeasible.
The student will be involved in research from the start and will be directly supervised by a graduate student from the faculty member's lab.