In this article we present a new method for visual odometry based on a focused plenoptic camera. This method fuses the depth data gained by a monocular Simultaneous Localization and Mapping (SLAM) algorithm with the depth data received from a focused plenoptic camera. Our algorithm uses the depth data and the totally focused images supplied by the plenoptic camera to run a real-time semi-dense direct SLAM algorithm. Based on this combined approach, the scale ambiguity of a monocular SLAM system can be overcome. Furthermore, the additional light-field information greatly improves the tracking capabilities of the algorithm. Thus, visual odometry becomes possible even for narrow field of view (FOV) cameras. We show that not only the tracking benefits from the additional light-field information: by accumulating the depth information over multiple tracked images, the depth accuracy of the focused plenoptic camera can also be greatly improved. This novel approach reduces the depth error by one order of magnitude compared to that obtained from a single light-field image.