Add a benchmark for compiling large fluent files #30

Merged
merged 2 commits into django-ftl:master on May 30, 2024

Conversation

@leamingrad (Contributor) commented on May 26, 2024

This PR adds a new benchmark to the project for compiling a fluent file with 10K items in it.

While the benchmark itself relies on a static file, I've included the script that I used to generate it.

Sample results
❯ ./compiler.py -k 10k --benchmark-warmup=off
================================================================================================= test session starts ==================================================================================================
platform darwin -- Python 3.10.14, pytest-8.2.1, pluggy-1.5.0
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /Volumes/Code/fluent-compiler
configfile: pyproject.toml
plugins: anyio-4.3.0, hypothesis-6.102.6, benchmark-4.0.0
collected 4 items / 3 deselected / 1 selected                                                                                                                                                                          

compiler.py .                                                                                                                                                                                                    [100%]


----------------------------------------------- benchmark: 1 tests -----------------------------------------------
Name (time in s)                Min      Max    Mean  StdDev  Median     IQR  Outliers     OPS  Rounds  Iterations
------------------------------------------------------------------------------------------------------------------
test_file_with_10k_items     9.7863  10.1248  9.9234  0.1377  9.8794  0.2118       1;0  0.1008       5           1
------------------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean
====================================================================================== 1 passed, 3 deselected in 69.96s (0:01:09) ======================================================================================

I am looking to add a benchmark for compiling large fluent files, in order to
test potential performance optimisations. Originally I used an actual
fluent file from my project, but that cannot be shared.

This change adds a script which can be used to generate "random" fluent
files that share similar statistical properties with the actual file I
used, such as the ratio of comments to messages and the number of
elements inside a message.
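
The generator script itself lives in the diff rather than in this description, but a rough sketch of the approach might look like the following. This is not the script from the PR: the word list, comment ratio, and placeable probability are invented placeholders for illustration.

    #!/usr/bin/env python
    """Sketch of a generator for "random" FTL files of a given size.

    NOT the script added by this PR; the comment ratio, word list and
    placeable probability below are placeholder values.
    """
    import argparse
    import random

    WORDS = ["save", "cancel", "settings", "profile", "welcome", "error", "retry"]


    def make_message(index):
        # Vary the number of text elements per message so the generated
        # file is not perfectly uniform.
        parts = [random.choice(WORDS) for _ in range(random.randint(1, 4))]
        value = " ".join(parts)
        if random.random() < 0.3:
            value += " { $count }"  # occasionally include a placeable
        return f"message-{index} = {value}\n"


    def generate(num_items, comment_ratio=0.1):
        lines = []
        for i in range(num_items):
            if random.random() < comment_ratio:
                lines.append(f"# Comment describing message-{i}\n")
            lines.append(make_message(i))
        return "".join(lines)


    if __name__ == "__main__":
        parser = argparse.ArgumentParser(description="Generate a large random FTL file")
        parser.add_argument("--items", type=int, default=10000)
        parser.add_argument("--output", default="10k.ftl")
        args = parser.parse_args()
        with open(args.output, "w", encoding="utf-8") as f:
            f.write(generate(args.items))
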
Prior to this change, all of the benchmarks were micro-benchmarks.
However, I work on a project that contains some large fluent files,
and we have observed some slowness when compiling them.

This change adds a benchmark for compiling a 10,000-item file, which can
be used to verify performance optimisations in other commits.
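
For context, a compile benchmark of this shape, assuming fluent_compiler's FluentBundle.from_string entry point and the pytest-benchmark fixture, could look roughly like the sketch below. The file path and test body are illustrative, not the exact code merged here.

    # Sketch of a pytest-benchmark test that compiles a large pre-generated
    # FTL file. The path below is a placeholder; the merged benchmark may be
    # structured differently.
    from fluent_compiler.bundle import FluentBundle


    def test_file_with_10k_items(benchmark):
        with open("benchmarks/10k.ftl", "r", encoding="utf-8") as f:
            source = f.read()

        def compile_file():
            # from_string parses the FTL source and compiles it, which is
            # the step being measured here.
            FluentBundle.from_string("en-US", source)

        benchmark(compile_file)
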
@leamingrad (Contributor, Author) left a comment:

Some tasting notes

@@ -0,0 +1,157 @@
import argparse
@leamingrad (Contributor, Author) commented:

I'm happy to drop this commit entirely if the script isn't useful, but I thought it was worth offering up (and am happy to make fixups).

@spookylukey (Contributor) commented:

The test fails due to an issue on master that I have since resolved; this is good to go. Thanks!

@spookylukey merged commit a040791 into django-ftl:master on May 30, 2024
6 of 11 checks passed