Is there any way to do SLAM using only one monocular camera with rtabmap? #1256

rag0nn opened this issue Dec 14, 2024 · 5 comments

rag0nn commented Dec 14, 2024

Hi,
I'm a beginner at SLAM. I tried to use rtabmap with monocular data, but I can't handle the calculation of the /odom data. Rtabmap has 3 nodes for this: rgbd_odometry, icp_odometry and stereo_odometry. Is there any way to use these nodes with monocular data and different parameters? Or is this impossible without depth data?
Thanks!

My Setup:
ROS2 Humble
Ubuntu 22.04

matlabbe (Member) commented

We don't provide an odometry node to do monocular visual odometry. You may search for a VIO package/library to do it.
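
For reference, a minimal sketch (not from the thread) of how an external VIO package's output could be fed to rtabmap, assuming that package publishes nav_msgs/Odometry on a hypothetical /vio/odom topic together with its TF, and that RGB-D topics are still available for mapping:

'''
# Sketch only: rgbd_odometry is not launched; rtabmap consumes odometry
# from a hypothetical external VIO node instead.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package='rtabmap_slam', executable='rtabmap', output='screen',
            parameters=[{
                'frame_id': 'camera_link',   # assumption: base frame published by the VIO's TF
                'subscribe_depth': True,     # rtabmap still needs depth (or stereo/scan) to build a 3D map
            }],
            remappings=[
                ('rgb/image', '/camera/rgb/image_color'),
                ('rgb/camera_info', '/camera/rgb/camera_info'),
                ('depth/image', '/camera/depth/image'),
                ('odom', '/vio/odom'),       # hypothetical external VIO odometry topic
            ]),
    ])
'''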


rag0nn commented Dec 17, 2024

Thanks for the help!
I'm using MiDaS to calculate depth images. I provide the camera info, RGB image and depth image to the rgbd_odometry node, but the node doesn't publish any data. Is there a detailed guide for this? Do I have to run any other node?

matlabbe (Member) commented

Is rgbd_odometry logging something? How long does it take to generate the depth image? There could be a sync issue where the depth image has significantly more delay than the two other topics. The sync_queue_size and/or topic_queue_size could be increased. Are the depth images published with the same stamp as the input image?

Using these two tools on all input topics can give a better idea:

ros2 topic hz ...
ros2 topic delay ...
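
For example (a sketch only, with arbitrary values), the queue sizes mentioned above could be raised on the odometry node in a launch file like this:

'''
# Sketch: raising the synchronization queues on rgbd_odometry.
# sync_queue_size / topic_queue_size / approx_sync are rtabmap_ros
# synchronization parameters; 30 is an arbitrary example value.
from launch_ros.actions import Node

rgbd_odometry_node = Node(
    package='rtabmap_odom', executable='rgbd_odometry', output='screen',
    parameters=[{
        'frame_id': 'kinect',
        'approx_sync': True,          # RGB and depth stamps are not exactly equal
        'sync_queue_size': 30,
        'topic_queue_size': 30,
    }],
    remappings=[
        ('rgb/image', '/camera/rgb/image_color'),
        ('rgb/camera_info', '/camera/rgb/camera_info'),
        ('depth/image', '/camera/depth/image'),
    ])
'''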


rag0nn commented Jan 8, 2025

I tried many things. Currently, I'm here:
[Screenshot: 2025-01-08 18-20-46]
I think the first problem is that 954 features are extracted, but the odometry node still doesn't work; I can't figure out the odometry.
Second, I can't really understand the data synchronization approach. If I set approx_sync=False, it doesn't work either; not even the red screens appear.

NOTE1: I'm using the freiburg2_pioneer_slam2 dataset, synchronized with https://gist.github.com/matlabbe/484134a2d9da8ad425362c6669824798 as shown in the image. The publishing order seems correct. I tried converting the ROS 1 bag to a ROS 2 bag, but that didn't work, so I publish manually. I also tried the freiburg3_household dataset, but the same results occurred.

NOTE2: The launch file.
'''
from launch import LaunchDescription
from launch_ros.actions import Node, SetParameter


def generate_launch_description():

    parameters=[{
        'frame_id':'kinect',
        'subscribe_depth':True,
        'subscribe_odom_info':True,
        #'approx_sync_max_interval':0.04,
        'sync_queue_size':25,
        # RTAB-Map's parameters should all be string type:
        'Odom/Strategy':'0',
        'Odom/ResetCountdown':'15',
        'Odom/GuessSmoothingDelay':'0',
        'Rtabmap/StartNewMapOnLoopClosure':'true',
        'RGBD/CreateOccupancyGrid':'false',
        'Rtabmap/CreateIntermediateNodes':'true',
        'RGBD/LinearUpdate':'0',
        'RGBD/AngularUpdate':'0'}]

    remappings=[
        ('rgb/image', '/camera/rgb/image_color'),
        ('rgb/camera_info', '/camera/rgb/camera_info'),
        ('depth/image', '/camera/depth/image')]

    return LaunchDescription([

        SetParameter(name='use_sim_time', value=True),
        # 'use_sim_time' will be set on all nodes following the line above

        # Nodes to launch
        Node(
            package='rtabmap_odom', executable='rgbd_odometry', output='screen',
            parameters=parameters,
            remappings=remappings),

        Node(
            package='rtabmap_slam', executable='rtabmap', output='screen',
            parameters=parameters,
            remappings=remappings,
            arguments=['-d']),

        Node(
            package='rtabmap_viz', executable='rtabmap_viz', output='screen',
            parameters=parameters,
            remappings=remappings),

        # /tf topic is missing in the converted ROS2 bag, create a fake tf
        Node(
            package='tf2_ros', executable='static_transform_publisher', output='screen',
            arguments=['0.0', '0.0', '0.0', '-1.57079632679', '0.0', '-1.57079632679', 'kinect', 'openni_rgb_optical_frame']),
    ])
'''

I got the launch file from the rtabmap repo:
https://github.com/introlab/rtabmap_ros/blob/ros2/rtabmap_examples/launch/rgbdslam_datasets.launch.py

NOTE3: My publisher node.
'''
import time
from glob import glob

import cv2
import rclpy
import rclpy.node
from cv_bridge import CvBridge
from sensor_msgs.msg import CameraInfo, Image
from std_msgs.msg import Header

# NODE_NAME, TOPIC_NAME_PUB_RGB, TOPIC_NAME_PUB_DEPTH, TOPIC_NAME_PUB_CAMERA_INFO,
# IMAGES_PATH and DEPTH_IMAGES_PATH are defined elsewhere in the script.
PUBLISH_PERIOD = 5
PUBLISH_QUEUE_SIZE = 30
FRAME_ID = "kinect"


class DataProvider(rclpy.node.Node):

    def __init__(self):
        super().__init__(NODE_NAME)
        self.rgb_names = []
        self.depth_names = []
        self._get_image_names()
        self.bridge = CvBridge()

        self.pub_rgb = self.create_publisher(Image, TOPIC_NAME_PUB_RGB, PUBLISH_QUEUE_SIZE)
        self.pub_depth = self.create_publisher(Image, TOPIC_NAME_PUB_DEPTH, PUBLISH_QUEUE_SIZE)
        self.pub_caminfo = self.create_publisher(CameraInfo, TOPIC_NAME_PUB_CAMERA_INFO, PUBLISH_QUEUE_SIZE)

        self.static_cam_info = self._get_camera_info()

        self.counter_seq = 0

        self.get_logger().info(f"Image sequence starting with {PUBLISH_PERIOD} second period...")
        self.timer = self.create_timer(PUBLISH_PERIOD, self.publish_callback)

    def sequence_read_callback(self):
        frame = cv2.imread(self.rgb_names[self.counter_seq])
        depth_frame = cv2.imread(self.depth_names[self.counter_seq], cv2.IMREAD_UNCHANGED)
        print(f"\nIMAGE RGB: {self.rgb_names[self.counter_seq]} SHAPE: ", frame.shape)
        print(f"IMAGE DEPTH: {self.depth_names[self.counter_seq]} SHAPE: ", depth_frame.shape)

        self.counter_seq += 1
        return frame, depth_frame

    def publish_callback(self):
        start_callback_time = time.time()
        timestamp = self.get_clock().now().to_msg()
        rgb, depth = self.sequence_read_callback()

        # rgb
        msg_rgb = self.bridge.cv2_to_imgmsg(rgb, encoding="bgr8")
        msg_rgb.header = Header(stamp=timestamp, frame_id=FRAME_ID)

        # depth
        msg_depth = self.bridge.cv2_to_imgmsg(depth, encoding='16UC1')
        msg_depth.header = Header(stamp=timestamp, frame_id=FRAME_ID)

        # camera info
        self.static_cam_info.header = Header(stamp=timestamp, frame_id=FRAME_ID)

        # publish
        self.pub_rgb.publish(msg_rgb)
        self.pub_depth.publish(msg_depth)
        self.pub_caminfo.publish(self.static_cam_info)

        end_callback_time = time.time()
        callback_duration = end_callback_time - start_callback_time
        self.get_logger().info(f"Data Published: {self.counter_seq-1}  timestamp-{timestamp}")
        self.get_logger().info(f"Total read and publish time: {callback_duration}")

    def _get_camera_info(self):
        msg = CameraInfo()
        msg.width, msg.height = 1280, 960
        msg.distortion_model = "plumb_bob"
        msg.d = [0.2312, -0.7849, -0.0033, -0.0001, 0.9172]
        msg.k = [520.9, 0.0, 325.1,
                 0.0, 521.0, 249.7,
                 0.0, 0.0, 1.0]
        return msg

    def _get_image_names(self):
        rgb_names = sorted(glob(f"{IMAGES_PATH}/*"))
        depth_names = sorted(glob(f"{DEPTH_IMAGES_PATH}/*"))
        self.rgb_names = rgb_names
        self.depth_names = depth_names
'''

matlabbe (Member) commented

The depth image may not be valid, or not understandable by rgbd_odometry. In the screenshot, it extracts over 900 features, but the warning "20 visual features required..." is shown. It means it cannot get at least 20 features with valid depth to initialize. It is then publishing null odometry, for which the rtabmap node produces an error that the odometry is null (which is the case). The issue is that, of the 900 features, none have valid depth.

The depth format should be 32FC1 in meters, or 16UC1 in mm. If the depth image is valid, maybe the error is coming from the calibration or the camera_info topic. Make sure the P matrix in the camera_info topic is valid and the resolution matches the color image.

If the rgb and depth images don't have exactly the same stamp, you should use the approximate synchronizer.
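
For illustration only (not from the thread), a minimal sketch of the publishing side of the above: a hypothetical depth_scale factor is assumed to turn MiDaS' relative (inverse-depth-like) prediction into meters, the intrinsics are the TUM freiburg2 values quoted in the publisher above (which correspond to a 640x480 image), and the key points are the 32FC1 encoding, a filled P matrix, and width/height matching the color image:

'''
import numpy as np
from cv_bridge import CvBridge
from sensor_msgs.msg import CameraInfo

bridge = CvBridge()

def depth_to_msg(midas_depth, depth_scale, stamp, frame_id):
    # depth_scale is a hypothetical calibration factor: MiDaS outputs a
    # relative prediction, while rgbd_odometry expects metric depth
    # (32FC1 in meters, or 16UC1 in millimeters).
    depth_m = (midas_depth * depth_scale).astype(np.float32)
    msg = bridge.cv2_to_imgmsg(depth_m, encoding='32FC1')
    msg.header.stamp = stamp        # same stamp as the RGB image (exact sync)
    msg.header.frame_id = frame_id
    return msg

def make_camera_info(stamp, frame_id):
    # TUM freiburg2 intrinsics as quoted above; width/height must match the
    # published color image, and P is K extended with a zero fourth column
    # for a monocular camera.
    msg = CameraInfo()
    msg.header.stamp = stamp
    msg.header.frame_id = frame_id
    msg.width, msg.height = 640, 480
    msg.distortion_model = 'plumb_bob'
    msg.d = [0.2312, -0.7849, -0.0033, -0.0001, 0.9172]
    msg.k = [520.9, 0.0, 325.1,
             0.0, 521.0, 249.7,
             0.0, 0.0, 1.0]
    msg.p = [520.9, 0.0, 325.1, 0.0,
             0.0, 521.0, 249.7, 0.0,
             0.0, 0.0, 1.0, 0.0]
    return msg
'''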
