Switch scalability test with active MT Cbench switches
This is a switch scalability test with switches emulated using MT-Cbench. Its
target is to explore the maximum number of switches the controller can sustain
while they consistently initiate traffic to it (active), and how the controller's
servicing throughput scales as more switches are added. MT-Cbench switches
send artificial OF1.0 Packet-In messages to the controller, which replies with
equally artificial OF1.0 Flow-Mod messages; these message types dominate the traffic
exchanged between the switches and the controller. The controller should be
configured to start with the drop-test feature installed in order to be able to reply
to MT-Cbench messages. The emulated switches are arranged in a disconnected
topology, meaning they do not have any interconnection between them. This, along
with the limited protocol support, makes MT-Cbench a special-purpose OF
generator rather than a full-fledged, realistic OF switch emulator.
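As a rough illustration, on OpenDaylight enabling the drop-test typically boils down to a couple of Karaf-shell commands like the sketch below; the feature and command names here are assumptions that may differ between releases, and NSTAT's controller start handlers (such as start_droptestRPC.sh, mentioned later in this page) automate this step:

```
# Karaf shell sketch (assumed feature/command names; verify for your release)
feature:install odl-openflowplugin-drop-test   # install the drop-test feature
dropallpacketsrpc on                           # reply to Packet-Ins with Flow-Mods via RPC
```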
A switch scalability test with active MT-Cbench switches can be started by specifying the following options on the NSTAT command line:

--test=sb_active_scalability
--sb-generator-base-dir=<MT-Cbench dir>

The MT-Cbench dir, given as the value of the --sb-generator-base-dir parameter, is
under the path emulators/mt-cbench. The full path of this location should be
given, as this value will be used as the base path for the location of other files
related to MT-Cbench in the test. Under the stress_test/sample_test_confs/<controller_name>/
directories, the JSON files ending in _sb_active_scalability_mtcbench can be
used as template configuration files for this kind of test scenario. You can
pass them to the --json-config option to run a sample test. For
larger-scale stress tests, have a look at the corresponding files under the
stress_test/stress_test_confs/<controller_name>/ directories.
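For illustration, a minimal invocation combining these options could look like the following sketch; all paths are illustrative placeholders for your own checkout, and note that the dockerized example later in this page spells the emulator option as --sb-emulator-base-dir, so check which name your NSTAT version expects:

```bash
# Minimal sketch; all paths are illustrative placeholders.
export PYTHONPATH=/opt/nstat
python3.4 /opt/nstat/stress_test/nstat.py \
    --test=sb_active_scalability \
    --sb-generator-base-dir=/opt/nstat/emulators/mt-cbench \
    --json-config=/opt/nstat/stress_test/sample_test_confs/boron/boron_RPC_sb_active_scalability_mtcbench.json
```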
For this test, 3 nodes are required:
- NSTAT node
- controller node
- SouthBound emulator node (MT-Cbench)
In order to deploy these nodes as docker containers, there are two options:
- download the prebuilt environment from DockerHub
- build your own containers locally using the provided Dockerfiles for proxy and no-proxy environments, under the path deploy/docker
In both cases, docker has to be installed, and any user that will manipulate docker containers must be added to the docker group. To deploy the required nodes, see the installation wiki.
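Adding a user to the docker group is done with the standard command below (a re-login is needed for the group change to take effect):

```bash
sudo usermod -aG docker $USER   # allow the current user to run docker commands
```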
After deployment of the docker nodes, update the NSTAT repository in each of them, using the following steps:

- open a new terminal and execute the command

docker ps -a

The output of the above command will be similar to the following and gives the container names of all docker containers you created:

CONTAINER ID   IMAGE                                  COMMAND               CREATED              STATUS              PORTS    NAMES
4c05473bb7c8   intracom/nstat-sdn-controllers:proxy   "/usr/sbin/sshd -D"   About a minute ago   Up About a minute   22/tcp   controller
72e4572878e2   intracom/mtcbench:proxy                "/usr/sbin/sshd -D"   About a minute ago   Up About a minute   22/tcp   mtcbench
60db64735a26   intracom/nstat:proxy                   "/usr/sbin/sshd -D"   About a minute ago   Up About a minute   22/tcp   nstat
- for each container name execute the following command, replacing $container_name with the container name of the corresponding docker node, acquired in the previous step (a loop that applies this to all three containers at once is sketched below):

docker exec -i $container_name /bin/bash -c 'WAIT_UNTIL_RETRY=2; \
    rm -rf /opt/nstat; \
    cd /opt; \
    until git clone https://github.com/intracom-telecom-sdn/nstat.git -b master; do \
        echo "Failed to git clone NSTAT. Sleeping for $WAIT_UNTIL_RETRY s and retrying."; \
        sleep $WAIT_UNTIL_RETRY; \
    done'
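The same update can be applied to all three nodes in one go. The loop below is a convenience sketch; the container names are assumed to match the sample docker ps output above:

```bash
# Sketch: update the NSTAT checkout in all three containers.
# Container names are assumed from the sample `docker ps -a` output above.
for container_name in nstat controller mtcbench; do
    docker exec -i "$container_name" /bin/bash -c 'WAIT_UNTIL_RETRY=2;
        rm -rf /opt/nstat; cd /opt;
        until git clone https://github.com/intracom-telecom-sdn/nstat.git -b master; do
            echo "Clone failed; retrying in $WAIT_UNTIL_RETRY s"; sleep $WAIT_UNTIL_RETRY;
        done'
done
```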
The IP addresses of all deployed nodes, and the credentials for opening SSH connections to them, must be configured in the JSON configuration file of the sample test we want to run. This must be done on the NSTAT node.
- Run the command

docker ps -a

to get the container names of:

- NSTAT node
- Controller node
- SouthBound emulator node (MT-Cbench)
- Get the IP addresses of all nodes

docker exec -i $container_name /bin/bash -c "ifconfig"
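Alternatively, each container's IP address can be read directly from docker's metadata, avoiding manual parsing of ifconfig output:

```bash
# Print a container's IP address from docker's metadata.
# On older docker versions the field may be .NetworkSettings.IPAddress instead.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $container_name
```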
- SSH into the NSTAT node

ssh root@<NSTAT_node_ip>

The password to connect is root123.
- Edit the JSON file /opt/nstat/stress_test/sample_test_confs/boron/boron_RPC_sb_active_scalability_mtcbench.json, changing the IP addresses and SSH credentials in the following lines:

"nstat_node_ip": "<NSTAT_node_ip>",
"nstat_node_ssh_port": "22",
"nstat_node_username": "root",
"nstat_node_password": "root123",
"controller_node_ip": "<Controller_node_ip>",
"controller_node_ssh_port": "22",
"controller_node_username": "root",
"controller_node_password": "root123",
"sb_emulator_name": "MTCBENCH",
"sb_emulator_node_ip": "<MT-Cbench_node_ip>",
"sb_emulator_node_ssh_port": 22,
"sb_emulator_node_username": "root",
"sb_emulator_node_password": "root123",
In order to run the test:

- Open a new terminal and execute the following command

docker exec -i nstat /bin/bash -c "export PYTHONPATH=/opt/nstat; \
    source /opt/venv_nstat/bin/activate; \
    python3.4 /opt/nstat/stress_test/nstat.py \
    --test=sb_active_scalability \
    --ctrl-base-dir=/opt/nstat/controllers/odl_boron_pb/ \
    --sb-emulator-base-dir=/opt/nstat/emulators/sbemu/mtcbench/ \
    --json-config=/opt/nstat/stress_test/sample_test_confs/boron/boron_RPC_sb_active_scalability_mtcbench.json \
    --json-output=/opt/nstat/results.json \
    --html-report=/opt/nstat/report.html \
    --output-dir=/opt/nstat/results_boron_RPC_sb_active_scalability_mtcbench/"
Once test execution is over, inspect the results under
/opt/nstat/results_boron_RPC_sb_active_scalability_mtcbench
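The results file is a JSON array of samples whose keys are listed in the results table below. As a quick sanity check, the main performance metric can be pulled out of it with a short sketch like the following (it assumes python3.4 is available inside the container, as used by the test command above, and that results.json has the structure just described):

```bash
# Sketch: print the measured throughput (responses/sec) of every sample.
# Assumes results.json is a JSON array of sample objects (see result keys below).
docker exec -i nstat /bin/bash -c \
  'python3.4 -c "import json; print([s[\"throughput_responses_sec\"] for s in json.load(open(\"/opt/nstat/results.json\"))])"'
```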
The configuration keys that must be specified in the JSON configuration file are:
Config key | type | description
---|---|---
nstat_node_ip | string | IP address of the NSTAT VM
nstat_node_ssh_port | string | SSH port of the NSTAT VM
nstat_node_username | string | username for SSH login to the NSTAT VM
nstat_node_password | string | password for SSH login to the NSTAT VM
controller_name | string | name of the controller under test. This value is used in the Controller Factory method to return the appropriate controller object. For this test it should be ODL
controller_node_ip | string | IP address of the Controller VM
controller_node_ssh_port | string | SSH port of the Controller VM
controller_node_username | string | username for SSH login to the Controller VM
controller_node_password | string | password for SSH login to the Controller VM
sb_emulator_name | string | name of the SouthBound emulator. This value is used in the Generator Factory method to return the appropriate SouthBound emulator object. For this test it should be MTCBENCH
sb_emulator_node_ip | string | IP address of the MT-Cbench VM
sb_emulator_node_ssh_port | string | SSH port of the MT-Cbench VM
sb_emulator_node_username | string | username for SSH login to the MT-Cbench VM
sb_emulator_node_password | string | password for SSH login to the MT-Cbench VM
controller_build_handler | string | executable for building the controller (relative to the --ctrl-base-dir command line parameter). It fetches all controller handlers from the nstat-sdn-controllers repository
controller_clean_handler | string | executable for cleaning up the controller directory (relative to the --ctrl-base-dir command line parameter)
controller_get_handler | string | executable for downloading the prebuilt controller version from its repository and extracting it
controller_start_handler | string | executable for starting the controller (relative to the --ctrl-base-dir command line parameter)
controller_stop_handler | string | executable for stopping the controller (relative to the --ctrl-base-dir command line parameter)
controller_status_handler | string | executable for querying the controller status (relative to the --ctrl-base-dir command line parameter)
controller_statistics_handler | string | executable for changing the period at which the controller collects topology statistics (relative to the --ctrl-base-dir command line parameter)
controller_persistent_handler | string | executable that disables controller persistence. This can be achieved by adding the attribute persistent=false in the file <controller_base_dir>/etc/org.opendaylight.controller.cluster.datastore.cfg
controller_oper_hosts_handler | string | makes a REST call to the NorthBound interface of the controller to get the number of hosts from the operational datastore
controller_oper_links_handler | string | makes a REST call to the NorthBound interface of the controller to get the number of links from the operational datastore
controller_oper_switches_handler | string | makes a REST call to the NorthBound interface of the controller to get the number of switches from the operational datastore
controller_oper_flows_handler | string | makes a REST call to the NorthBound interface of the controller to get the number of flows from the operational datastore
controller_flowmods_conf_handler | string | configures the controller plugins to respond with flow modifications to any Packet-In message with ARP payload
controller_logs_dir | string | controller logs directory (relative to the --ctrl-base-dir command line parameter)
controller_port | number | controller port number where OF switches should connect
**controller_statistics_period_ms** | array of numbers | statistics period values for the controller (in ms)
sb_emulator_build_handler | string | executable for building MT-Cbench (relative to the --sb-emulator-base-dir command line parameter)
sb_emulator_run_handler | string | executable for running MT-Cbench (relative to the --sb-emulator-base-dir command line parameter)
sb_emulator_clean_handler | string | executable for cleaning up MT-Cbench (relative to the --sb-emulator-base-dir command line parameter)
sb_emulator_cleanup | boolean | whether to clean up MT-Cbench after test completion
**mtcbench_simulated_hosts** | array of numbers | number of hosts (MACs) simulated by MT-Cbench
**mtcbench_threads** | array of numbers | total number of MT-Cbench threads
**mtcbench_switches_per_thread** | array of numbers | number of OF switches simulated per MT-Cbench thread
**mtcbench_thread_creation_delay_ms** | array of numbers | delay (in ms) between creation of consecutive MT-Cbench threads
**mtcbench_delay_before_traffic_ms** | array of numbers | delay (in ms) before MT-Cbench threads start transmitting OF traffic
mtcbench_mode | string | MT-Cbench mode ("Latency" or "Throughput")
mtcbench_warmup | number | number of initial internal iterations treated as "warmup" and not considered when computing aggregate performance results
mtcbench_ms_per_test | number | duration (in ms) of a generator internal iteration
mtcbench_internal_repeats | number | number of internal iterations during traffic transmission in which performance and other statistics are sampled
java_opts | array of strings | Java options for initializing the JAVA_OPTS environment variable
test_repeats | number | number of external iterations of a test, i.e. the number of times a test should be repeated to derive aggregate results (average, min, max, etc.)
plots | array of plot objects | configurations for plots to be produced after the test
The configuration keys shown in bold are the test dimensions of the test scenario: the stress test will be repeated over all possible combinations of their values. For example, two values for mtcbench_threads combined with three values for mtcbench_thread_creation_delay_ms yield six test runs.

The most important configuration keys are mtcbench_threads, mtcbench_switches_per_thread and mtcbench_thread_creation_delay_ms. These keys determine how switches are progressively booted into an SDN network, making it possible to find the combination of values that optimally boots a topology of a certain size. The values of mtcbench_threads and mtcbench_switches_per_thread define the overall number of network nodes (topology size) connected to the controller; this number is equal to (mtcbench_threads * mtcbench_switches_per_thread). For example, 16 threads with 50 switches per thread emulate a topology of 800 switches.
See the plotting page.
The result keys produced by this kind of test scenario, which can subsequently be used to generate custom plots, are the following:
Result key | type | description
---|---|---
global_sample_id | number | unique (serial) ID for this sample
timestamp | number | unique timestamp for this sample
date | string | date this sample was taken
test_repeats | number | number of times the test was repeated (for reliability reasons)
repeat_id | number | ID of the external iteration of this sample
mtcbench_internal_repeats | number | number of internal iterations during traffic transmission in which performance and other statistics were sampled
internal_repeat_id | number | ID of the internal MT-Cbench iteration corresponding to this sample
**throughput_responses_sec** | number | measured controller throughput (responses/sec)
mtcbench_simulated_hosts | number | number of hosts (MACs) simulated by MT-Cbench
mtcbench_switches | number | total number of MT-Cbench simulated switches (equal to #threads * #switches_per_thread)
mtcbench_threads | number | total number of MT-Cbench threads
mtcbench_switches_per_thread | number | number of OF switches simulated per MT-Cbench thread
mtcbench_thread_creation_delay_ms | number | delay (in ms) between creation of consecutive threads
mtcbench_delay_before_traffic_ms | number | delay (in ms) before MT-Cbench threads start transmitting OF traffic
mtcbench_ms_per_test | number | duration (in ms) of an MT-Cbench internal iteration
mtcbench_warmup | number | number of initial internal iterations treated as "warmup" and not considered when computing aggregate performance results
mtcbench_mode | string | generator mode ("Latency" or "Throughput")
controller_node_ip | string | controller IP address where OF switches were connected
controller_port | number | controller port number where OF switches should connect
controller_java_xopts | array of strings | controller Java optimization flags (-X)
one_minute_load | number | one-minute average system load
five_minute_load | number | five-minute average system load
fifteen_minute_load | number | fifteen-minute average system load
used_memory_bytes | number | system used memory in bytes
total_memory_bytes | number | system total memory in bytes
controller_cpu_shares | number | percentage of the physical machine's CPU resources allocated to the controller process
controller_cpu_system_time | number | CPU system time for the controller
controller_cpu_user_time | number | CPU user time for the controller
controller_num_threads | number | number of controller threads measured when this sample was taken
controller_num_fds | number | number of open file descriptors measured when this sample was taken
controller_statistics_period_ms | number | interval (in ms) of the controller statistics period
The result key in bold (throughput_responses_sec) is the main performance metric produced by this test scenario.
The following figures show sample results from switch scalability stress tests with the OpenDaylight controller operating in two modes:

- RPC mode: the controller is configured to directly reply to the switches with a predefined Flow-Mod message at the OpenFlow plugin level (use of the start_droptestRPC.sh handler)