Adding depth perception to an agent #405
LauritsJonassen started this conversation in General
-
I am looking to add sensory/depth perception to an agent so that it can 'see' and navigate the environment in front of it. However, Brax does not seem to have a built-in method for computing depth, such as a range-finder function.

My current idea is to have the agent cast rays in every direction to measure depth, and then concatenate the angle and distance of each ray into the observation. In other words, compute the distance and angle from the agent to the object in the environment, which is a single mesh obtained from a LiDAR scan. I will then need to adapt the input size of the policy network so that it matches the size of the enlarged observation space.

Does anyone have experience adding depth perception in Brax, or a clever idea for how it can be done? Thanks
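For concreteness, here is a minimal sketch (not an existing Brax API; all names such as `range_finder_obs`, `ray_sphere_distance`, and `MAX_RANGE` are illustrative) of the kind of range-finder observation described above: a fan of rays is cast from the agent, each ray returns a clipped distance, and the resulting vector is appended to the observation. Obstacles are modeled as spheres here purely to keep the intersection closed-form; for a mesh you would swap in a ray-triangle test like the one suggested in the reply below.

```python
# A minimal sketch (not part of Brax) of a lidar-style range finder: cast a fan
# of rays from the agent and return one clipped distance per ray as extra
# observations. Obstacles are spheres here only so the intersection stays
# closed-form; swap in a ray-triangle test for mesh geometry.
import jax
import jax.numpy as jnp

MAX_RANGE = 10.0  # readings are clipped to this value when nothing is hit


def ray_sphere_distance(origin, direction, center, radius):
  """Distance along a unit ray to a sphere, or MAX_RANGE on a miss."""
  oc = origin - center
  b = jnp.dot(oc, direction)
  c = jnp.dot(oc, oc) - radius ** 2
  disc = b ** 2 - c
  t = -b - jnp.sqrt(jnp.maximum(disc, 0.0))
  hit = (disc >= 0.0) & (t > 0.0)
  return jnp.where(hit, t, MAX_RANGE)


def range_finder_obs(agent_pos, agent_yaw, centers, radii, num_rays=16):
  """Return `num_rays` clipped depth readings in the agent's frame."""
  angles = agent_yaw + jnp.linspace(0.0, 2.0 * jnp.pi, num_rays, endpoint=False)
  # Rays swept in the horizontal plane; add a z-component for a 3D scan.
  dirs = jnp.stack(
      [jnp.cos(angles), jnp.sin(angles), jnp.zeros_like(angles)], axis=-1)

  def one_ray(d):
    dists = jax.vmap(
        lambda c, r: ray_sphere_distance(agent_pos, d, c, r))(centers, radii)
    return jnp.min(dists)

  depths = jax.vmap(one_ray)(dirs)
  return jnp.clip(depths, 0.0, MAX_RANGE)


# Example: two spherical obstacles around an agent at the origin.
centers = jnp.array([[3.0, 0.0, 0.0], [0.0, -2.0, 0.0]])
radii = jnp.array([0.5, 1.0])
obs = range_finder_obs(jnp.zeros(3), 0.0, centers, radii)
print(obs.shape)  # (16,) -- concatenate onto the usual observation vector
```

In this sketch the ray angles are fixed relative to the agent, so only the distances need to be added to the observation and the policy network's input layer grows by `num_rays`; if you also want the angles in the observation, as described above, concatenate them alongside the distances and grow the input accordingly.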
-
Hi @LauritsJonassen, it sounds like you'll want to do ray casting against the geoms in the environment. For the boxes, you can cast rays against their triangles (line-triangle intersection). It'll look something like this: Line 158 in 45729a4, but instead of a segment intersection you'll want a line intersection. If I recall correctly, "Real-Time Collision Detection" by Ericson has a lot of these derivations handy.
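To make the suggestion concrete, below is a minimal sketch (assumed names, not existing Brax code) of a Möller-Trumbore ray-triangle test written with `jax.numpy` and vmapped over a triangle array, so a whole mesh can be queried per ray. Unlike the segment test referenced above, it returns the distance along an unbounded ray, with a large `max_dist` standing in for "no hit".

```python
# A minimal sketch (not Brax code) of the ray-triangle test alluded to above:
# Moller-Trumbore intersection, vmapped over a triangle soup, returning the
# distance to the nearest hit (or max_dist on a miss). Function and argument
# names are illustrative, not an existing Brax API.
import jax
import jax.numpy as jnp


def ray_triangle(origin, direction, v0, v1, v2, max_dist=1e6, eps=1e-8):
  """Moller-Trumbore: distance along the ray to one triangle, max_dist on miss."""
  e1 = v1 - v0
  e2 = v2 - v0
  p = jnp.cross(direction, e2)
  det = jnp.dot(e1, p)
  # Guard against division by zero; misses are masked out below anyway.
  inv_det = 1.0 / jnp.where(jnp.abs(det) < eps, 1.0, det)
  s = origin - v0
  u = jnp.dot(s, p) * inv_det
  q = jnp.cross(s, e1)
  v = jnp.dot(direction, q) * inv_det
  t = jnp.dot(e2, q) * inv_det
  hit = (jnp.abs(det) >= eps) & (u >= 0.0) & (v >= 0.0) & (u + v <= 1.0) & (t > eps)
  return jnp.where(hit, t, max_dist)


def nearest_hit(origin, direction, triangles, max_dist=1e6):
  """Cast one ray against an (n, 3, 3) triangle array; return nearest distance."""
  dists = jax.vmap(
      lambda tri: ray_triangle(origin, direction, tri[0], tri[1], tri[2], max_dist)
  )(triangles)
  return jnp.min(dists)


# Example: a unit ray along the x-axis hitting a triangle 2 units away.
tris = jnp.array([[[2.0, -1.0, -1.0], [2.0, 1.0, -1.0], [2.0, 0.0, 1.0]]])
print(nearest_hit(jnp.zeros(3), jnp.array([1.0, 0.0, 0.0]), tris))  # ~2.0
```

Because everything is pure `jax.numpy`, the whole scan can be wrapped in a second `vmap` over ray directions and jitted alongside the rest of the environment step.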