SDTAM: Semi-direct tracking and mapping for RGB-D cameras

1. Brief introduction

We present a novel semi-direct tracking and mapping (SDTAM) approach for RGB-D cameras that inherits the advantages of both direct and feature-based methods, consequently achieving high efficiency, accuracy, and robustness.

The input RGB-D frames are tracked with a direct method, and keyframes are refined by minimizing a proposed measurement residual function that takes both photometric and depth information into account. A local optimization refines the local map, while a global optimization detects and corrects loops using an appearance-based bag-of-words model and a co-visibility weighted pose graph.
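As a rough sketch of such a joint residual (illustrative only: the robust Huber weighting and the relative depth weight below are assumptions, not the paper's exact formulation), the cost over a frame can combine a photometric term with a depth term:

```python
import numpy as np

def huber_weight(r, delta=0.1):
    """Huber robust weight that down-weights large (outlier) residuals."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / np.maximum(a, delta))

def joint_cost(I_ref, I_cur, D_ref, D_cur, w_depth=0.5):
    """Sum of robustly weighted photometric and depth residuals.

    I_ref, I_cur: reference/current intensities at corresponding pixels
    D_ref, D_cur: reference/current depths at the same pixels
    (In a full system the current values come from warping pixels
    through the estimated camera pose; correspondence is assumed here.)
    """
    r_i = I_cur - I_ref  # photometric (brightness-constancy) residual
    r_d = D_cur - D_ref  # depth-consistency residual
    return float(np.sum(huber_weight(r_i) * r_i**2
                        + w_depth * huber_weight(r_d) * r_d**2))
```

Minimizing a cost of this shape over the camera pose is what drives the direct tracking step; the feature-based machinery then handles keyframe refinement and loop closure.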

Our method achieves higher accuracy in both trajectory tracking and surface reconstruction than state-of-the-art frame-to-frame and frame-to-model approaches. We test our system on challenging sequences with motion blur, fast pure rotation, and large moving objects; the results demonstrate that it still obtains highly accurate results. Furthermore, the proposed approach runs in real time using only part of the CPU's computational power, so it can be deployed on embedded devices such as phones, tablets, or MAVs.

More detailed information can be accessed at:


2. Experiments and results

We evaluate the proposed approach on two widely used RGB-D benchmarks, TUM and ICL, since both provide synchronized ground-truth poses that can be used to evaluate tracking accuracy.
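Tracking accuracy on these benchmarks is commonly summarized as the absolute trajectory error (ATE): the estimated trajectory is aligned to the ground truth and the RMSE of the translational differences is reported. A minimal sketch (assuming poses are already associated by timestamp, and using a translation-only alignment; the standard benchmark tools additionally solve for rotation with Horn's method):

```python
import numpy as np

def ate_rmse(est, gt):
    """ATE RMSE after translation-only alignment.

    est, gt: (N, 3) arrays of timestamp-associated camera positions.
    """
    est = np.asarray(est, float)
    gt = np.asarray(gt, float)
    # Align centroids (a simplification of the usual SE(3) alignment).
    est_aligned = est - est.mean(axis=0) + gt.mean(axis=0)
    err = np.linalg.norm(est_aligned - gt, axis=1)
    return float(np.sqrt(np.mean(err**2)))
```

A constant offset between the trajectories is absorbed by the alignment, so only the shape discrepancy contributes to the error.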

Because ICL provides a ground-truth point-cloud model, reconstruction accuracy is also compared on it.
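Surface accuracy against such a model is often summarized by nearest-neighbor distances between point clouds. A minimal sketch (the function name is ours, and real evaluations first register the reconstruction to the model; here the clouds are assumed pre-aligned):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_error(reconstructed, ground_truth):
    """Mean distance from each reconstructed point to its nearest
    ground-truth point (a common cloud-to-model accuracy proxy)."""
    tree = cKDTree(np.asarray(ground_truth, float))
    dists, _ = tree.query(np.asarray(reconstructed, float))
    return float(dists.mean())
```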

However, neither TUM nor ICL contains large-scale sequences, so we also conduct experiments on the NPU dataset, which consists of several large sequences captured by ourselves.

2.1. Results on TUM dataset




2.2. Results on ICL dataset



2.3. Results on NPU dataset

The NPU dataset contains several large-scale RGB-D sequences and is publicly available at this website. You can download the dataset here; the reconstruction results are also provided.


2.4. Demonstration video

The demonstration video can be found at:

3. References
