Our tooling has first-class support for running benchmarks across a matrix of CPU configs, sites, and engines, then collating their results into reports like our November 2024 report.
Results are organised by CPU config, then by site, then by engine. The results for one combination of (CPU config, site, engine) are known as a sample. For example, results for the sample with CPU config key “2cpu”, site key “servo.org”, and engine key “servo.old” would be found at `./2cpu/servo.org/servo.old`.
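For instance, a study with two CPU configs, two sites, and two engines would yield a tree of eight samples shaped roughly like this (the keys below are illustrative):

```
./2cpu/servo.org/servo.old/
./2cpu/servo.org/chromium/
./2cpu/example.com/servo.old/
./2cpu/example.com/chromium/
./8cpu/servo.org/servo.old/
./8cpu/servo.org/chromium/
./8cpu/example.com/servo.old/
./8cpu/example.com/chromium/
```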
- Create a study.

  ```sh
  $ mkdir studies/foo
  $ cp studies/example/study.toml studies/foo
  ```
- Edit `study.toml` to define CPU configs, sites, engines, and other settings (a sketch of what such a file might contain appears after this list).

  ```sh
  $ $EDITOR studies/foo/study.toml
  ```
-
Collect results. For Servo samples, this creates
trace*.html
,servo*.pftrace
, andmanifest*.json
. For Chromium samples, this createschrome*.pftrace
.$ cargo run -r -- collect studies/foo
If collection for a sample fails, the program will fail loudly with a non-zero exit status. In this case, run the
collect
command again, and collection will restart from the sample that failed. -
- Analyse results. This creates `summaries.txt` and `summaries.json`. For Chromium samples, this also creates `chrome*.json`, which are `chrome*.pftrace` converted to JSON.

  ```sh
  $ cargo run -r -- analyse studies/foo
  ```

  Our analysis code is currently written to consume the old Chrome JSON trace format, but we should migrate it to consume Perfetto traces directly, because that will simplify and speed up analysis.
- Generate the report.

  ```sh
  $ cargo run -r -- report studies/foo
  ```
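The authoritative template for `study.toml` is `studies/example/study.toml`, so copy and adapt that file; the sketch below only illustrates the kind of matrix a study describes, and every key and field name in it is hypothetical:

```toml
# Hypothetical sketch only; see studies/example/study.toml for the real schema.
[cpu_configs.2cpu]      # e.g. how to constrain the benchmark to two CPU cores

[sites."servo.org"]
url = "https://servo.org/"      # include the trailing slash for site roots

[engines."servo.old"]
path = "/path/to/old/servoshell"

[engines."servo.new"]
path = "/path/to/new/servoshell"
```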
Create a `mitmproxy` group and add it to your user’s supplementary groups:

```sh
$ sudo groupadd mitmproxy
$ sudo usermod -aG mitmproxy $(whoami)
```

Or if you are on NixOS:

```nix
users.groups.mitmproxy = {
  members = [ "<your username>" ];
};
```

Then record a dump and replay it, following the prompts:

```sh
$ sudo ./start-mitmproxy.sh record path/to/example.com.dump
$ sudo ./start-mitmproxy.sh replay path/to/example.com.dump
```
This is a simpler workflow, ideal for situations where you only have one CPU configuration, one site, and very few engines.
Make sure you build Servo with `--features tracing-perfetto`. When benchmarking the root of a site, be sure to include the trailing slash any time a URL is needed.
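For example, assuming you build servoshell via mach, the build step might look like this (adjust to your own build setup; only the `tracing-perfetto` feature is essential):

```sh
# Assumes a mach-based Servo checkout.
$ ./mach build --release --features tracing-perfetto
```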
Run benchmarks as follows:

```sh
$ ./benchmark-servo.sh ~/path/to/servoshell http://example.com/ 30 ./example.com.servo
$ ./benchmark-chromium.sh google-chrome-stable http://example.com/ 30 ./example.com.chromium
```
We also recommend configuring your window manager to move the windows to a secondary monitor, where they can be kept visible but not focused. For example, in i3, where `7` is a workspace that has windows but is not visible on any monitor:

```
$ cat ~/.config/i3/config
for_window [instance="^google-chrome [(]" class="^Google-chrome$"] floating enable
for_window [instance="^servo$" class="^servo$"] floating enable
assign [instance="^google-chrome [(]" class="^Google-chrome$"] 7
assign [instance="^servo$" class="^servo$"] 7
```
Both trace formats are required for now, because some metrics like `TimeToFirstPaint` and `TimeToFirstContentfulPaint` are only in the HTML traces, while some events like `ScriptEvaluate` are only in the Perfetto traces.
Use the `servo` command, where `<url>` is the same URL as the page you loaded:

```sh
$ RUST_LOG=analyse=info cargo run -r servo <url> <manifest.json> [manifest.json ...]
```

`--print-pwm` tells you in the terminal running Servo when you’ve waited long enough for the Time To Interactive metric to appear in your trace.
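As a sketch, assuming you launch servoshell directly from a release build, passing that flag might look like this (the binary path is illustrative):

```sh
# Illustrative path; use wherever your servoshell build lives.
$ ./target/release/servo --print-pwm http://example.com/
```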
Use the `chromium` command, where `<url>` is the same URL as the page you loaded:

```sh
$ python traceconv json chrome.pftrace chrome.json
$ RUST_LOG=analyse=info cargo run -r chromium <url> <chrome.json> [chrome.json ...]
```
How to generate a combined Chrome JSON trace for a set of related samples
Use the `combined` command, where each `<command>` is a `servo` or `chromium` command from above:

```sh
$ RUST_LOG=analyse=info cargo run -r combined <command> [[-- <command>] ...]
```
These traces can be opened in the Perfetto UI.
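For instance, to combine one Servo sample and one Chromium sample for the same page (the trace filenames below are illustrative):

```sh
$ RUST_LOG=analyse=info cargo run -r combined \
    servo http://example.com/ example.com.servo/manifest1.json -- \
    chromium http://example.com/ example.com.chromium/chrome1.json
```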