As for sleep, I think that's a bit more complicated. In this generic loop, not sleeping is probably appropriate because we don't have any expectations around concurrency. But in the Pyramid subclass, where we do expect highly concurrent usage, a non-zero default is probably appropriate. If we go with three retries, then in the worst case, where we actually take all three, our expected total backoff time would be `0.5 * sleep + 1.5 * sleep + 3.5 * sleep = 5.5 * sleep`. We want to choose a value that minimizes conflicts while also minimizing delays; that's hard to do statically.
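One jitter scheme consistent with those expected waits is to sleep a uniform random time in `[sleep * (2**i - 1), sleep * 2**i)` on retry `i`, giving expected waits of 0.5, 1.5, and 3.5 times the base. A minimal sketch (the loop, `ConflictError`, and function names are hypothetical, not this project's actual API):

```python
import random
import time


class ConflictError(Exception):
    """Hypothetical stand-in for whatever conflict exception triggers a retry."""


def retry_with_backoff(operation, retries=3, sleep=0.05):
    """Run ``operation``; on conflict, retry with jittered exponential backoff.

    Retry i (0-based) waits a uniform random time in
    [sleep * (2**i - 1), sleep * 2**i), so the expected waits are
    0.5, 1.5, and 3.5 times ``sleep`` -- 5.5 * sleep in total when
    all three retries are actually taken.
    """
    for attempt in range(retries + 1):
        try:
            return operation()
        except ConflictError:
            if attempt == retries:
                raise  # out of retries; let the conflict propagate
            lo = sleep * (2 ** attempt - 1)
            hi = sleep * (2 ** attempt)
            time.sleep(random.uniform(lo, hi))
```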
I mentioned this on another ticket and dismissed it as unnecessary complexity, but rethinking it: what if the base sleep value were dynamic for each request? Specifically, what if we derived `sleep` from the amount of time the request ran? `sleep = min(max(X% of run time, Y), Z)`, where X, Y, and Z would be configurable, defaulting to, say, 10%, 10 ms, and 50 ms (Y as a floor and Z as a ceiling; with two nested mins, Z would never apply). The idea is that the more expensive the request was to run, the longer we should be willing to wait before retrying it (the longer we wait, the more likely it is that the other concurrent processes we're conflicting with will have finished). But we still don't want to wait too long, because (a) the user is waiting, and (b) the longer we wait, the more likely it is that we'll actually have to do more work to achieve the desired result.
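A sketch of that derivation, reading the formula as a clamp with Y as a floor and Z as a ceiling (since with two nested mins the Z term could never apply). The name and defaults are illustrative, not an existing API:

```python
def dynamic_sleep(run_time, fraction=0.10, floor=0.010, ceiling=0.050):
    """Derive the base retry sleep from the request's run time.

    Takes ``fraction`` (X) of the run time, clamped to [floor, ceiling]
    (Y and Z): expensive requests wait longer before retrying, but never
    more than ``ceiling``, and cheap requests still wait at least ``floor``.
    All times are in seconds; defaults mirror the proposed 10%, 10 ms, 50 ms.
    """
    return min(max(fraction * run_time, floor), ceiling)
```

For example, a 200 ms request would yield a 20 ms base sleep, while a 5 ms request would be held up to the 10 ms floor and a 2 s request capped at the 50 ms ceiling.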
Moved from #30 (comment)