#661 introduces an initial approach to benchmarking the `sync_state` request. Currently, it executes multiple requests sequentially and prints the average elapsed time per request. I think it's worth discussing which metrics are most valuable for this flow (and for others as well).
For example, instead of only sequential execution, we could run the requests at different levels of concurrency and measure latency percentiles (e.g., p95 and p99).
#778 contains a possible implementation that runs the requests concurrently and measures the p95 statistic.
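To make the idea concrete, here is a minimal sketch of what such a benchmark could look like, written in Rust and assuming a `tokio` runtime. `send_sync_state_request` is a hypothetical stand-in for the real RPC call, and the request count and concurrency levels are arbitrary; this is not the implementation from #661 or #778:

```rust
// Requires: tokio = { version = "1", features = ["full"] }
use std::sync::Arc;
use std::time::{Duration, Instant};

use tokio::sync::Semaphore;

/// Hypothetical stand-in for the real sync_state RPC call.
async fn send_sync_state_request() {
    // Simulate network + server latency.
    tokio::time::sleep(Duration::from_millis(10)).await;
}

/// Runs `total` requests with at most `concurrency` in flight and
/// returns each request's elapsed time.
async fn run_benchmark(total: usize, concurrency: usize) -> Vec<Duration> {
    let semaphore = Arc::new(Semaphore::new(concurrency));
    let mut handles = Vec::with_capacity(total);

    for _ in 0..total {
        // Wait here until one of the `concurrency` slots frees up.
        let permit = semaphore.clone().acquire_owned().await.unwrap();
        handles.push(tokio::spawn(async move {
            let start = Instant::now();
            send_sync_state_request().await;
            drop(permit); // release the slot for the next request
            start.elapsed()
        }));
    }

    let mut latencies = Vec::with_capacity(total);
    for handle in handles {
        latencies.push(handle.await.unwrap());
    }
    latencies
}

/// Nearest-rank percentile; expects the slice to be sorted.
fn percentile(sorted: &[Duration], p: f64) -> Duration {
    let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
    sorted[rank.saturating_sub(1)]
}

#[tokio::main]
async fn main() {
    for concurrency in [1usize, 8, 32] {
        let mut latencies = run_benchmark(200, concurrency).await;
        latencies.sort();
        println!(
            "concurrency {concurrency:>2}: p95 = {:?}, p99 = {:?}",
            percentile(&latencies, 95.0),
            percentile(&latencies, 99.0),
        );
    }
}
```

The nearest-rank percentile above is just the simplest thing that works; if we want fuller latency distributions, a crate such as `hdrhistogram` could be used instead.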
> For example, instead of only sequential execution, we could run the requests at different levels of concurrency and measure latency percentiles (e.g., p95 and p99).
I agree that running at different levels of concurrency is a good idea. I think we should also run at different database sizes - e.g., 100K vs. 1M (ideally, we'd do bigger database sizes too - but that could be further down the road).
As for latency percentiles, I think p95 is probably fine.
Follow-up issue for #609.
Use the loaded store generated with the `seed-store` CLI added in #657 and collect performance metrics when executing RPC endpoint calls.