Revise config defaults for Valkey 8 #653
Some suggestions:
Interesting. @soloestoy, can you explain the latency monitor? If it has almost no CPU overhead, I'm OK with enabling it by default. Please explain why I don't have to worry about this comment in the config file:
I have always wished that Redis/Valkey could track different types of memory usage, such as the amount of memory used for storing user data versus the amount used for system operation (like clients). That way, users could set independent limits for data and for system operation.

About the latency monitor: enabling it is not a big deal.

```c
/* Add the sample only if the elapsed time is >= to the configured threshold. */
#define latencyAddSampleIfNeeded(event, var) \
    if (server.latency_monitor_threshold && (var) >= server.latency_monitor_threshold) \
        latencyAddSample((event), (var));
```

It has almost no impact, because it's difficult to actually reach the threshold, so in practice it's merely a conditional statement. Even when the threshold is truly exceeded, logging the sample takes only microseconds, which is far less than 100 ms.
@soloestoy I also agree that we need tight memory accounting for user data. I wonder, though, whether evictions (of both clients and data) should be triggered by those thresholds alone? For example, how do we account for memory waste like fragmentation? It is very hard to evaluate the fragmentation ratio of the system memory and the data memory separately and take it into account.
I agree about the memory accounting needs, especially for clients, which can consume resources independently of each other, so you may want independent control over them. Another aspect of memory usage is transient allocations that are allocated and freed during command execution (processing memory, i.e. memory for processing needs), like Lua allocations, module allocations, temp objects, and so on. This memory usage can cause memory pressure or swapping without being apparent in any data point except peak memory, which only reflects the all-time maximum, not recent usage.

[On this note, I have hoped to raise the idea of memory pool isolation one day, where we use different memory pools for data, for clients, and for processing memory. I have PoC'd this idea using jemalloc's private arenas. @ranshid, I think this would solve the problem of attributing fragmentation cost to a usage type. The motivation for memory pools is of course much wider in scope, as the memory life cycles are very different, so isolating the usages would improve overall efficiency. Maybe I'll raise a separate issue on this.]
I agree, depending on the complexity of it :)
Let's do it in 8.0. I think we should configure the ideal configuration items for users.
@enjoy-binbin, what's the current default?
The current default latency-monitor-threshold is 0 (disabled). I see you guys discussed this here: #653 (comment). The memory one, yeah, maybe. Or a latency-history-max-len similar to slowlog-max-len? I think the latency monitor is useful like the slowlog; the two can be treated as analogous.
Oh, we already have the history limit.
@madolson wrote in redis/redis#12191:
The set-max-listpack-entries was discussed here, where it was first added: redis/redis#11290 (comment). Ideally we should tune this in some smart way, but I assume nobody will do that, so I suggest the following (pseudo-random 🤣) new defaults:
@eduardobr wrote in redis/redis#12747:
@xuliang0317 wrote in redis/redis#12747:
So I suggest repl-backlog-size 1m -> 10m.
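In valkey.conf terms, the suggested change would look like this (the comments are mine, and the new value is just the proposal above, not a decided default):

```
# current default
repl-backlog-size 1mb

# proposed new default
repl-backlog-size 10mb
```

A larger backlog lets replicas that disconnect briefly catch up via partial resynchronization instead of forcing a full sync, at the cost of a little extra memory on the primary.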