Collision detection and estimation from a monocular visual sensor is an important enabling technology for safe navigation of small or micro air vehicles in near-earth flight. In this project, we introduce a new approach called expansion segmentation, which simultaneously detects “collision danger regions” of significant positive divergence in inertial-aided video and estimates maximum likelihood time to collision (TTC) within the danger regions in a correspondenceless framework. This approach was motivated by a literature review which showed that existing approaches either make strong assumptions about scene structure or camera motion, or pose collision detection without determining obstacle boundaries, both of which limit the operational envelope of a deployable system. Expansion segmentation is based on a new formulation of 6-DOF inertial-aided TTC estimation and a new derivation of a first-order TTC uncertainty model due to subpixel quantization error and epipolar geometry uncertainty. Proof-of-concept results are shown in a custom-designed urban flight simulator and on operational flight data from a small air vehicle.
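The link between image expansion and time to collision can be illustrated with a toy sketch: for a fronto-parallel surface approached along the optical axis, the optical flow is radial (u = x/τ, v = y/τ), so its divergence equals 2/τ. The snippet below is a minimal illustration of that relation only, not the paper's maximum likelihood estimator or its inertial aiding; the function name and synthetic flow field are ours.

```python
import numpy as np

def ttc_from_divergence(flow_u, flow_v, dx=1.0):
    """Estimate time to collision from a dense optical-flow field.

    For a fronto-parallel surface approached along the optical axis,
    the flow is radial (u = x/tau, v = y/tau), so its divergence
    div = du/dx + dv/dy equals 2/tau, giving tau = 2 / divergence.
    flow_u, flow_v: 2-D arrays of horizontal/vertical flow (pixels/frame).
    Returns TTC in frames.
    """
    du_dx = np.gradient(flow_u, dx, axis=1)  # horizontal flow gradient
    dv_dy = np.gradient(flow_v, dx, axis=0)  # vertical flow gradient
    divergence = np.mean(du_dx + dv_dy)
    return 2.0 / divergence

# Synthetic looming flow for a surface 50 frames from collision.
tau_true = 50.0
y, x = np.mgrid[-20:21, -20:21].astype(float)  # image coordinates
u, v = x / tau_true, y / tau_true              # radial expansion field
print(ttc_from_divergence(u, v))               # ≈ 50.0
```

In practice the flow field is noisy and only piecewise consistent with a single surface, which is why the paper segments the image into danger regions and models TTC uncertainty rather than applying a global divergence estimate like this one.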

  1. J. Byrne and C.J. Taylor, “Expansion Segmentation for Visual Collision Detection and Estimation”, Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA’09), pp. 1938-1945, May 12-17, 2009; [pdf][video]

  2. B. Cohen and J. Byrne, “Inertial Aided SIFT for Visual Collision Estimation”, Video Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA’09), May 12-17, 2009; [video]