The video shows localization data collected during a short flight over an IR marker; the marker detections are used to estimate the copter's position relative to the visual landmark. Note that this was a manually flown flight, but the ultimate goal is automation. Detecting visual landmarks/features is a fundamental task in many forms of robot localization and navigation. For example, the Snapdragon Flight includes 4 camera sensors for visual-inertial odometry.

The plot in the video shows the copter's vision-based position estimate versus the traditional position estimate. The red data is logged by APM:Copter running on a Pixhawk with a 3DR GPS module. The blue data is derived from the IR-LOCK sensor, which detects a MarkOne Beacon at approximately 50 Hz. A LidarLite is used for AGL (above-ground-level) altitude measurements. The data looks clean, but this was a fairly tame test; we need to calibrate the lens before we can correctly handle larger pitch/roll angles.
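
For reference, here is a minimal sketch of how the relative position can be derived in the level-flight ("tame test") case, assuming the sensor reports the beacon's angular offsets from the camera's optical axis and the LidarLite supplies the AGL altitude. The function and parameter names are hypothetical, not the actual IR-LOCK API:

```python
import math

def beacon_offset_level(angle_x, angle_y, agl_alt):
    """Project the beacon's angular offsets onto the ground plane,
    assuming the copter is roughly level. angle_x/angle_y are in
    radians; agl_alt is meters above ground (from the lidar)."""
    # With a downward-facing camera and negligible tilt, the horizontal
    # offset to the beacon is approximately altitude * tan(angle).
    east = agl_alt * math.tan(angle_x)
    north = agl_alt * math.tan(angle_y)
    return north, east

# Example: beacon seen 5 degrees off-axis at 10 m AGL -> ~0.87 m offset
print(beacon_offset_level(math.radians(5.0), 0.0, 10.0))
```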

You can think of this as a 'flipped' version of the StarGazer indoor robot localization system, where a unique visual landmark is placed on the ceiling. However, the copter localization problem is a bit trickier due to the extra degrees of freedom: the copter can pitch, roll, ascend, etc. So the vision-based localization estimate also depends on the flight controller's state estimation, and ideally all of the data would be fused together.
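
To illustrate why the attitude estimate matters, here is a hedged sketch that rotates the camera ray into a level frame using the flight controller's roll/pitch before projecting it onto the ground. The frame conventions (x forward, y right, z down) and all names are assumptions for illustration, not the actual APM:Copter implementation:

```python
import math
import numpy as np

def beacon_offset_tilt_comp(angle_x, angle_y, roll, pitch, agl_alt):
    """Tilt-compensated projection of the beacon measurement.
    All angles are in radians; agl_alt is meters above ground."""
    # Unit-depth ray toward the beacon in the body frame
    # (x forward, y right, z down; downward-facing camera).
    ray_body = np.array([math.tan(angle_y), math.tan(angle_x), 1.0])

    # Rotate into a level (heading-aligned) frame using the flight
    # controller's roll/pitch estimate; yaw is ignored here.
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    R_roll = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    R_pitch = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    ray_level = R_pitch @ R_roll @ ray_body

    # Scale the ray so its down-component matches the lidar AGL altitude,
    # then read off the horizontal offsets to the beacon.
    scale = agl_alt / ray_level[2]
    return scale * ray_level[0], scale * ray_level[1]
```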

One of the key advantages of having a uniquely identifiable visual landmark is that it can be used to remove drift in velocity and/or position estimates, which is typically the function of the GPS. This can also be accomplished by developing a local map (i.e., SLAM) …. With the MarkOne Beacon, we can also operate at night, but the video would be even more boring. :) Robust vision performance in variable lighting conditions typically requires some form of IR projection (see the Intel RealSense specs).
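
As a rough illustration of the drift-removal idea (not the flight controller's actual estimator), a dead-reckoned position built by integrating a velocity estimate drifts without bound, while an absolute fix on a uniquely identifiable beacon pulls it back. The 1-D structure and gain below are assumptions for a toy example:

```python
class DriftCorrectedPosition:
    """Toy 1-D complementary filter: integrate a (slightly biased)
    velocity estimate and correct toward absolute beacon fixes."""

    def __init__(self, gain=0.2):
        self.pos = 0.0    # estimated position, meters
        self.gain = gain  # how strongly a beacon fix corrects drift

    def predict(self, velocity, dt):
        # Dead reckoning: small velocity biases accumulate into drift.
        self.pos += velocity * dt

    def correct(self, beacon_pos):
        # Landmark fix: the beacon is fixed and uniquely identifiable,
        # so its measurement bounds the accumulated drift, much like a
        # GPS fix would.
        self.pos += self.gain * (beacon_pos - self.pos)
```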