Using ZED2 stereo camera instead of Kinect #2
Comments
Hi @esmaeilinia, I have exactly the same issue with a D435 camera: all detected objects (e.g. person, book) are positioned at the origin of the map frame, even though I have adapted the camera frame and the point cloud topic in the object_positioner.yaml file and in object_projector.cpp.
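For context, the adaptation described above would look roughly like this. This is an illustrative sketch only: the key names are assumptions, not the actual contents of object_positioner.yaml, and the ZED2 topic name is taken from the zed-ros-wrapper defaults.

```yaml
# Illustrative sketch of the kind of change described above.
# The key names below are assumed, not the real object_positioner.yaml keys.
#
# Kinect-style defaults would be something like:
#   camera_frame: camera_rgb_optical_frame
#   pointcloud_topic: /camera/depth_registered/points
#
# Adapted for a ZED2:
camera_frame: zed2_left_camera_optical_frame
pointcloud_topic: /zed2/zed_node/point_cloud/cloud_registered
```

If the frame or topic here does not match what the camera driver actually publishes, the positioner silently gets no data.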
Hello @renatojmsdh @ericksonrn @michelmelosilva, could you please help us understand the root cause of the issue? Is this a misconfiguration on my part?
Hi @germal, I couldn't resolve it. I had the same issue with the ZED and the D435i.
Hello @esmaeilinia and @germal, I will take a look to see what is happening. Could you please tell me:
Thanks
Hello @renatojmsdh, thank you for your reply.
@renatojmsdh Thanks for your reply. I have tested on both Kinetic and Melodic. There was no error message. I'll share a sample ROS bag as soon as possible.
How can I run this with a stereo camera (ZED2), and without a TurtleBot?
I'm trying to modify RTAB-Map to read from a stereo camera, but obj_positioner fails to position the detected objects on the map.
I have modified the nodes to read from the ZED stereo camera, but it still doesn't work with RTAB-Map.
Could you please provide more details on using RGB-D cameras other than the Kinect?
Thanks