"burst latency" and general tail latencies #2

Open
jberryman opened this issue Feb 24, 2021 · 0 comments

Comments

@jberryman

Just leaving this here for visibility.

When benchmarking constant loads we see Zipf-ish tail latencies, where the observed maximum seems to increase with the number of samples collected.

https://hasura.io/blog/decreasing-latency-noise-and-maximizing-performance-during-end-to-end-benchmarking/

  • Is the burst throughput/latency metric just a reflection of this same tail-latency phenomenon? Put another way: is the concept of "per-burst latency" another model we can use to motivate lowering tail latencies? (This is related to the more common example of tail latencies degrading UX on a web page that makes many requests to render a single view.)
  • Is there reason to expect latencies to be distributed as they are? I have no non-hand-wavy explanation for the far outliers. Maybe the tests here provide some insight (e.g. do they suggest poor scheduling in the RTS in some way?)
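One thing worth noting about the first point: if latencies really are heavy-tailed (Zipf/Pareto-like), the observed maximum is *expected* to keep growing with sample count, so a max that climbs with run length is consistent with the distribution rather than evidence of a regression. A quick simulation sketches this (Python; the Pareto latency model and its shape parameter are assumptions for illustration, not measurements from hasura):

```python
import random

random.seed(0)

# Assumed model: heavy-tailed latencies, P(X > x) ~ x^-alpha.
# alpha close to 1 gives very fat tails, like the far outliers described above.
ALPHA = 1.2

# Draw one long run of "request latencies".
latencies = [random.paretovariate(ALPHA) for _ in range(100_000)]

# Observed max over growing prefixes of the same run: this is exactly
# "max increases with the number of samples collected".
sample_sizes = [100, 1_000, 10_000, 100_000]
maxes = [max(latencies[:n]) for n in sample_sizes]

for n, m in zip(sample_sizes, maxes):
    prefix = sorted(latencies[:n])
    p99 = prefix[int(0.99 * n) - 1]
    print(f"n={n:>6}  p99={p99:8.2f}  max={m:10.2f}")
```

For a light-tailed model (e.g. exponential latencies) the prefix max plateaus quickly instead; comparing the two against real benchmark data would be one way to test whether the far outliers are distributional or a scheduling artifact.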