Target shape identification for nanosatellites using monocular point cloud techniques

Post, Mark A. and Yan, Xiu T. (2014) Target shape identification for nanosatellites using monocular point cloud techniques. In: 6th European CubeSat Symposium, 2014-10-14 - 2014-10-16.

Accepted Author Manuscript

Abstract

Many mission scenarios for nanosatellites and CubeSat hardware have already been proposed that will require autonomous target tracking and rendezvous maneuvers in close proximity to other orbiting objects. While many existing hardware and software designs require rangefinders or laser-based sensors to identify and track nearby objects, the size and power limitations of a CubeSat make a simple monocular system greatly preferable, so long as reliable identification can still be carried out. This presentation details the development and testing of an embedded algorithm for visually identifying the shape of a target and tracking its movement over time, which can include rotation about any axis. A known three-dimensional geometric model is required as a reference when identifying a target. First, feature descriptors implemented in the OpenCV framework are used to create a sparse point cloud of features from a nearby object. Using structure-from-motion (SfM) methods, feature points obtained over successive images are triangulated in three dimensions to obtain a pose estimate. Statistical shape recognition is then used to identify the object based on features from the available three-dimensional models. More feature points make identification more accurate but also demand more computing power, so the balance of speed and accuracy is evaluated within the limitations of an embedded system. The algorithm is designed to be efficient enough to run on embedded hardware usable on a CubeSat, and with appropriate hardware it can operate in real time. An overview of the algorithm and vision system design is given, and initial test results for a simulated orbital rendezvous scenario are provided to indicate the performance of these methods. Applications of interest for this type of algorithm include external monitoring of other spacecraft, robotic capture and docking, and space debris removal.
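
As a rough illustration of the first two steps described in the abstract, the following Python sketch uses OpenCV to match ORB features between two successive frames, recover the relative pose from the essential matrix, and triangulate a sparse point cloud. The choice of ORB, the feature budget, and the camera intrinsic matrix K are assumptions made for this example and are not necessarily the configuration used in the presented work.

# Minimal two-view structure-from-motion sketch using OpenCV's Python bindings.
# The camera intrinsics K and the two input frames are assumed to be provided.
import numpy as np
import cv2

def sparse_points_from_two_frames(img_prev, img_next, K):
    """Match ORB features between two frames and triangulate a sparse point cloud."""
    orb = cv2.ORB_create(nfeatures=500)          # modest feature budget for embedded use
    kp1, des1 = orb.detectAndCompute(img_prev, None)
    kp2, des2 = orb.detectAndCompute(img_next, None)

    # Brute-force Hamming matching is simple and predictable on small descriptor sets.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate relative pose from the essential matrix; RANSAC rejects outlier matches.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    # Triangulate inlier correspondences into 3D (up to the unknown monocular scale).
    P1 = K @ np.hstack((np.eye(3), np.zeros((3, 1))))
    P2 = K @ np.hstack((R, t))
    inliers = pose_mask.ravel() > 0
    pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
    cloud = (pts4d[:3] / pts4d[3]).T             # Nx3 points in the first camera frame
    return cloud, R, t

ORB appears here only because its binary descriptors keep matching cheap on embedded processors; any OpenCV feature descriptor with the same detect-and-compute interface could be substituted.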
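The shape-identification step can be illustrated, under similarly loose assumptions, by scoring the triangulated cloud against a small library of point clouds sampled from the known three-dimensional reference models. The one-sided nearest-neighbour (Chamfer-style) statistic and the unit-scale normalisation below are stand-ins for the paper's statistical shape recognition, chosen only to show the structure of the comparison; the model sampling, alignment, and scoring are not taken from the presented work.

# Hypothetical shape-identification sketch: score the observed cloud against a
# library of candidate model point clouds and pick the best match.
import numpy as np
from scipy.spatial import cKDTree

def normalize(cloud):
    """Remove the centroid and scale to unit RMS radius so the monocular scale
    ambiguity does not dominate the comparison."""
    centred = cloud - cloud.mean(axis=0)
    return centred / np.sqrt((centred ** 2).sum(axis=1).mean())

def shape_score(observed, model_points):
    """One-sided Chamfer-style score: RMS distance from observed points to the model."""
    tree = cKDTree(normalize(model_points))
    dists, _ = tree.query(normalize(observed))
    return float(np.sqrt((dists ** 2).mean()))

def identify(observed_cloud, model_library):
    """Return the name of the candidate model with the lowest score, plus all scores."""
    scores = {name: shape_score(observed_cloud, pts) for name, pts in model_library.items()}
    return min(scores, key=scores.get), scores

# Example usage with a hypothetical model library of sampled surface points:
#   model_library = {"cubesat_3u": cubesat_pts, "debris_panel": panel_pts}
#   best_match, scores = identify(cloud, model_library)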