ROS2 Simulation workflows

Tips, scripts and examples of workflows using simulation in ROS2. This is a work in progress. Contributions welcome.

Integration tests

You can use launch_testing to define integration tests.

It lets you declare the nodes that run during a test, much like in a regular launch file, by writing a generate_test_description function instead of generate_launch_description.

import unittest

import pytest
import launch_testing
from launch import LaunchDescription
from launch_ros.actions import Node


@pytest.mark.launch_test
def generate_test_description():
    return LaunchDescription([
        # Node under test
        Node(
            package='turtlesim',
            executable='turtlesim_node',
        ),
        # Signal that the tests may start once the nodes above are launched
        launch_testing.actions.ReadyToTest()
    ])

@launch_testing.post_shutdown_test()
class TestProcessOutput(unittest.TestCase):
    def test_exit_code(self, proc_info):
        # Check that all processes in the launch (in this case, there's just one) exit
        # with code 0
        launch_testing.asserts.assertExitCodes(proc_info)

Running tests can be done with the launch_test command, for example:

launch_test launch_turtle_tests.py

Integration with CI tools

You can export test results with the --junit-xml option, which creates a JUnit-compatible report that CI frameworks such as Jenkins can consume:

launch_test launch_turtle_tests.py --junit-xml results.xml

The results.xml file can be parsed by continuous integration tools. It describes test suites, which map to the test launch file, and test cases, which map to the individual test methods inside each unittest.TestCase.
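
A rough sketch of what a script in a CI pipeline might do with this file, assuming the usual JUnit layout of testcase elements with optional failure or error children (element names can vary slightly between report writers):

import xml.etree.ElementTree as ET

# Collect the fully qualified names of all failing test cases from the report.
tree = ET.parse('results.xml')
failed = []
for case in tree.getroot().iter('testcase'):
    if case.find('failure') is not None or case.find('error') is not None:
        failed.append(f"{case.get('classname')}.{case.get('name')}")

print(f'{len(failed)} failing test case(s)')
for name in failed:
    print(' -', name)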

Creating tests

Initial conditions from live session

Best practices for defining tests:

  • Make sure your simulation is publishing ground truth information for what you want to test
  • Create a node that uses topic subscriptions to estimate success
  • Make sure that each test ends: using a timeout is usually the easiest way (see the sketch below)
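
A minimal sketch of such a test, written as an active test case (a class without the post_shutdown_test decorator runs while the launched nodes are still up). It assumes the turtlesim node from the earlier example, uses /turtle1/pose as the ground truth topic, and picks an arbitrary 10 second timeout:

import time
import unittest

import rclpy
from turtlesim.msg import Pose


class TestGroundTruth(unittest.TestCase):
    def test_pose_is_published(self):
        rclpy.init()
        node = rclpy.create_node('ground_truth_listener')
        messages = []
        # Ground truth from the simulator; success here is simply "data arrived".
        node.create_subscription(Pose, '/turtle1/pose', messages.append, 10)
        try:
            # Spin until a message arrives or the timeout expires,
            # so the test is guaranteed to end.
            deadline = time.time() + 10.0
            while time.time() < deadline and not messages:
                rclpy.spin_once(node, timeout_sec=0.1)
            self.assertTrue(messages, 'no ground truth received before timeout')
        finally:
            node.destroy_node()
            rclpy.shutdown()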

Parameter optimization

A common use case in simulation is finding the algorithm parameters that optimize a desired metric. Trial and error is typically used, but it is error prone and time consuming, especially when it has to be repeated for multiple customer deployments.

The strategy is to:

  • define the range of parameters to test
  • define a metric topic
  • perform a grid search that runs the test on all parameter combinations and reports the best-performing set (see the sketch below).
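
A minimal sketch of the grid search, assuming a hypothetical run_trial helper that launches the simulation with one parameter set and returns the final value read from the metric topic; the parameter names, values, and dummy metric below are purely illustrative:

import itertools

# Hypothetical parameter ranges; replace them with your algorithm's parameters.
param_grid = {
    'max_speed': [0.5, 1.0, 2.0],
    'lookahead': [0.2, 0.5, 1.0],
}

def run_trial(params):
    # Placeholder for a real trial: launch the simulation with these parameters
    # (for example through a launch_test file), subscribe to the metric topic,
    # and return the metric. A dummy score is returned so the sketch runs.
    return -abs(params['max_speed'] - 1.0) - abs(params['lookahead'] - 0.5)

best_params, best_score = None, float('-inf')
for values in itertools.product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), values))
    score = run_trial(params)
    if score > best_score:
        best_params, best_score = params, score

print('best parameters:', best_params, 'metric:', best_score)

For larger search spaces, a smarter optimizer can replace the exhaustive grid, as in the SMAC example below.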

An example of parameter optimization on a specific use case, using SMAC instead of grid search: https://github.com/oscar-lima/autom_param_optimization
