ORB-SLAM2

Introduced by Mur-Artal et al. in ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

To understand the code implementation of a system, always start with the system's framework; the ORB-SLAM2 system framework is shown below. We can see that the Tracking thread extracts ORB features, performs initial pose estimation, and so on. However, ORB-SLAM2 is based on the static-world assumption and still has a few shortcomings in handling dynamic scenes, so we integrate our method with the ORB-SLAM2 system in order to enhance its robustness and stability in highly dynamic environments.

ORB-SLAM2 is a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences, to drones flying in industrial environments, to cars driving around a city.

Image source: Mur-Artal and Tardós, ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras


Tasks

Task | Papers | Share
Simultaneous Localization and Mapping | 5 | 21.74%
Visual Odometry | 3 | 13.04%
Semantic Segmentation | 2 | 8.70%
Autonomous Vehicles | 2 | 8.70%
Visual Localization | 2 | 8.70%
Instance Segmentation | 1 | 4.35%
Optical Flow Estimation | 1 | 4.35%
Monocular Visual Odometry | 1 | 4.35%
Visual Navigation | 1 | 4.35%





What is ORB-SLAM?


SLAM is short for Simultaneous Localization and Mapping. It is a process used mostly for determining the position of a robot with respect to its environment. It's quite similar to what you would do when you wake up after partying so hard that you don't remember the night before: you get up and scan the room, and maybe, if you're lucky, you find yourself in a familiar place. This is essentially what happens in SLAM. The camera (or a range sensor such as a LiDAR) on the robot continuously senses information about its surroundings. For each image frame, feature points are found using an algorithm like ORB, hence the name ORB-SLAM. These features are tracked over time in the pixel space of the images and are then used to build an understanding of where the robot might be, given the information extracted from those frames. This is the basic idea of how SLAM works; however, modern solutions are quite complex and would require a full explanatory post of their own (which I might upload later!).

Let's ROS!


ROS (Robot Operating System) has a variety of packages for all your robotics needs, and SLAM is no exception. For this post we are going to look at the orb_slam2_ros package, which supports a lot of cameras out of the box (complete list). First, make sure you have ROS installed and set up (installation guide here). I'm going to be working with ROS Kinetic and Gazebo 7, which come installed if you go for the full desktop installation of ROS. Create a catkin workspace if you don't have one, or if you want to test this out on a fresh workspace; a minimal sketch follows.
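A minimal sketch of creating and building a catkin workspace, assuming the conventional ~/catkin_ws location (the path is an assumption; adjust it to taste):

    # create the conventional workspace layout (location is an assumption)
    mkdir -p ~/catkin_ws/src
    cd ~/catkin_ws
    # build the (currently empty) workspace and source its environment
    catkin_make
    source devel/setup.bash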


Note: ROS is not compatible with Python 3 out of the box. To get that working, you would need to build ROS as well as your workspace from source with a special flag specifying the Python 3 executable instead of the Python 2 one (the default). This can be quite painful and is not recommended for the faint-hearted, but if you do dare to do it, you can follow this tutorial! Here we are going to stick to Python 2 and pray that the Python community doesn't look down upon us.

Clone the orb_slam2_ros package into the src folder.
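Assuming the workspace above, the clone-and-build step would look roughly like this (the GitHub URL is an assumption; substitute whichever orb_slam2_ros fork you follow):

    cd ~/catkin_ws/src
    # repository URL is an assumption; clone the orb_slam2_ros fork you use
    git clone https://github.com/appliedAI-Initiative/orb_slam_2_ros.git
    cd ~/catkin_ws
    catkin_make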


Now we want to set up a Gazebo world and spawn a robot model with a camera installed. I'll be using the TurtleBot3 packages, as they come with most of the prerequisites needed for this task out of the box. You can install the TurtleBot3 packages either into your ROS installation, or you can clone the repositories into the current workspace and run catkin_make again; both options are sketched below. I would recommend the former if you use TurtleBot3 regularly and the majority of your projects involve it; otherwise it is wise to clone the repositories into the workspace.
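Hedged sketches of both options; the apt package names and repository URLs below are assumptions for ROS Kinetic:

    # option 1: install into the ROS distribution (package names assumed)
    sudo apt install ros-kinetic-turtlebot3 ros-kinetic-turtlebot3-simulations

    # option 2: clone into the workspace and rebuild (repository URLs assumed)
    cd ~/catkin_ws/src
    git clone https://github.com/ROBOTIS-GIT/turtlebot3.git
    git clone https://github.com/ROBOTIS-GIT/turtlebot3_simulations.git
    cd ~/catkin_ws
    catkin_make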



If you use zsh instead of bash, then you need to source devel/setup.zsh instead. Now we can run the different launch files and see the results. You will need to run the launch files in separate terminal sessions; you can use terminator, tmux, screen, or even tab out your terminal (no one is going to judge you for that!). To launch the Gazebo simulation with a TurtleBot in a house, run the command below. You can also add your custom maps in Gazebo and go full-on loco!
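Assuming the TurtleBot3 simulation packages are installed, the house world can be launched roughly like this (the model is selected via an environment variable):

    # pick the model that carries the camera plugin
    export TURTLEBOT3_MODEL=waffle
    # launch the simulated house world
    roslaunch turtlebot3_gazebo turtlebot3_house.launch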


Gazebo launched with Turtlebot3 Waffle inside a house environment.
Now we need to run the ORB-SLAM algorithm. This can be done by running the command below. Note that by default the WAFFLE configuration comes with Intel's RealSense R200 camera plugin. We can use this camera in all three modes provided by the orb_slam2 package.
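A sketch of the RGB-D launch; the launch-file names follow the r200 naming discussed below, and the mono and stereo variants are analogous:

    # RGB-D mode; swap in the mono or stereo launch file for the other modes
    roslaunch orb_slam2_ros orb_slam2_r200_rgbd.launch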


At the time of writing this article, the launch files orb_slam2_r200_rgbd.launch and orb_slam2_r200_stereo.launch were missing a camera parameter, which would result in an error at startup.


To resolve this, just make sure that the missing camera parameter is added back to the respective launch files.
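The exact parameter was elided here, so purely as an illustration, a launch-file <param> entry has the following shape (the name and value below are placeholders, not the actual key):

    <!-- placeholder name/value; use the camera parameter the package actually expects -->
    <param name="camera_info_topic" type="string" value="/camera/rgb/camera_info" />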


If you go with the mono version, you might get a constant stream of Map point vector is empty! messages on your terminal. Don't worry; it just means the monocular SLAM is not initialized yet. To compute depth, monocular SLAM uses a concept called parallax, which relies on the difference in the perceived rate of movement of objects at different depths. You will need to move the TurtleBot around until the system initializes, which you can do by launching a teleop node as shown below.
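Assuming the turtlebot3_teleop package is installed, the keyboard teleop node can be launched like this:

    # drive the robot around with the keyboard until monocular SLAM initializes
    roslaunch turtlebot3_teleop turtlebot3_teleop_key.launch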



To view the image captured by the camera of the TurtleBot, you can run the following:
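A sketch using image_view; the topic name is an assumption for the default R200 plugin, so remap it to whatever your camera publishes:

    # view the RGB stream; the image topic below is an assumed default
    rosrun image_view image_view image:=/camera/rgb/image_raw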

Image view with the RGB image observed by the TurtleBot on the left and the depth image on the right.
If you don't have the image_view package installed, you can install it easily by running sudo apt-get install ros-kinetic-image-view.

As a side note, you can install any ROS package directly into your ROS installation by running sudo apt install ros-<ros_distro>-<ros_package_name>. This convention comes in handy a lot of the time!



Screenshot of RViz displaying the features recognized by ORB-SLAM and the depth map (above). The point cloud of all the recognized features used for localization (below).