Hello, thanks for the CARLA 0.9.15 open-source project. I am trying to create a custom dataset with CARLA, and I need to transform a vehicle's location in CARLA world coordinates to a pixel location in the RGB camera image for object detection. I set my camera fov to 110 degrees. I followed the Geometric transformations tutorial and issue #56, but the result is incorrect for me. The relevant part of the code is as follows.
1. I defined a new type of sensor called 'VehicleLocation' that computes the transformed vehicle location every few ticks. It defines the intrinsic matrix self.camera_intrinsic and the world-to-camera transformation matrix self.camera_transform_matrix.
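For context, a pinhole intrinsic matrix for a camera with fov 110 can be built from the image size as in the CARLA geometric-transformations tutorial; this is a sketch, and the 1920x1080 resolution here is an assumption (my actual image size may differ):

```python
import numpy as np

def build_intrinsic(image_w, image_h, fov_deg):
    # Focal length in pixels, derived from the horizontal field of view
    focal = image_w / (2.0 * np.tan(np.radians(fov_deg) / 2.0))
    K = np.identity(3)
    K[0, 0] = K[1, 1] = focal
    K[0, 2] = image_w / 2.0  # principal point at the image center
    K[1, 2] = image_h / 2.0
    return K

# Assumed 1920x1080 image with fov = 110 degrees
K = build_intrinsic(1920, 1080, 110.0)
```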
2. Implementation of self.save_vehicle_location: object_camera_location is the vehicle location transformed from world to camera coordinates, and location_cam is the estimated pixel location.
def save_vehicle_location(self, timestamp, if_save=False):
    if self.vehicle_destroyed:  # skip if the vehicle has already been destroyed
        return
    self.tics_processing += 1
    if if_save and (self.tics_processing % self.save_period_vehicle_location == 0) and (
            self.tics_processing > self.wait_time):
        location_path = os.path.join(
            self.vehicle_location_path,
            f"location_{self.tics_processing // self.save_period_vehicle_location}.json")
        snapshot = self.world.get_snapshot()
        simulation_time = snapshot.timestamp.elapsed_seconds
        simulation_time_str = str(datetime.timedelta(seconds=simulation_time))
        info_len = len(self.candidates_list)
        location_world = np.zeros([info_len, 3])
        velocity_world = np.zeros([info_len, 3])
        location_cam = np.zeros([info_len, 2])
        velocity_cam = np.zeros([info_len, 2])
        label = np.zeros([info_len])
        for idx, candidate in enumerate(self.candidates_list):
            # There is only one vehicle in the list under the test scenario
            if idx == 0:
                label[idx] = 1
            location = candidate.get_location()
            velocity = candidate.get_velocity()
            location_world[idx, 0:3] = [location.x, -location.y, location.z]
            velocity_world[idx, 0:3] = [velocity.x, -velocity.y, velocity.z]
            object_camera_location = np.dot(
                self.camera_transform_matrix,
                np.asarray([location.x, location.y, location.z, 1]))
            # Now we must change from UE4's coordinate system to a "standard" one:
            # (x, y, z) -> (y, -z, x)
            # and we also drop the fourth (homogeneous) component
            point_camera = np.asarray([object_camera_location[1],
                                       -object_camera_location[2],
                                       object_camera_location[0]])
            pixel_location = np.dot(self.camera_intrinsic, point_camera)
            pixel_location[0] = pixel_location[0] / pixel_location[2]
            pixel_location[1] = pixel_location[1] / pixel_location[2]
            location_cam[idx, 0:2] = pixel_location[0:2]
            # Velocity is a direction vector, not a position, so its homogeneous
            # coordinate must be 0 so it does not pick up the translation
            object_camera_velocity = np.dot(
                self.camera_transform_matrix,
                np.array((velocity.x, velocity.y, velocity.z, 0)))
            # Same UE4 -> standard axis change: (x, y, z) -> (y, -z, x)
            velocity_camera = np.asarray([object_camera_velocity[1],
                                          -object_camera_velocity[2],
                                          object_camera_velocity[0]])
            pixel_velocity = np.dot(self.camera_intrinsic, velocity_camera)
            pixel_velocity /= pixel_velocity[2]
            velocity_cam[idx, 0:2] = pixel_velocity[0:2]
        location_data = {
            "Simulation_Time": simulation_time_str,
            "Location_Sionna": location_world,
            "Velocity_Sionna": velocity_world,
            "Location_Camera": location_cam,
            "Velocity_Camera": velocity_cam
        }
        print('location_data', location_data)
        self.location_data = location_data
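The projection above can be checked in isolation with a minimal NumPy sketch (the intrinsic values and camera pose here are made up for illustration). Note that the world-to-camera matrix must be the inverse of the camera's world transform, i.e. np.linalg.inv(camera.get_transform().get_matrix()) in CARLA, not the forward matrix:

```python
import numpy as np

def world_to_pixel(point_world, world_to_camera, K):
    # Homogeneous world point (w = 1 for positions)
    p = world_to_camera @ np.array([*point_world, 1.0])
    # UE4 (x, y, z) -> standard camera (y, -z, x), drop w
    p_cam = np.array([p[1], -p[2], p[0]])
    uv = K @ p_cam
    return uv[:2] / uv[2]  # perspective divide

# Illustrative intrinsic for a 1920x1080 image (values assumed)
K = np.array([[672.0,   0.0, 960.0],
              [  0.0, 672.0, 540.0],
              [  0.0,   0.0,   1.0]])

# Identity pose: camera at the origin looking down +x (UE4 forward),
# so a point 10 m straight ahead should land at the image center
uv = world_to_pixel([10.0, 0.0, 0.0], np.identity(4), K)
print(uv)  # -> [960. 540.]
```

If a point known to be straight ahead of the camera does not project to the image center, the world-to-camera matrix (rather than the intrinsic) is the likely culprit.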
3. One captured image is shown below. The print output is:
As can be seen, the true pixel location is about [1541, 780] instead of [769.22, 702.89]. How can I compute the correct pixel location? Is there a mistake in my code? Thanks a lot!