
lmdbpool: Do not use sync.Pool #116

Open

bmatsuo opened this issue Feb 22, 2017 · 1 comment

Comments

bmatsuo (Owner) commented Feb 22, 2017

A proper free list of Txns should probably not use sync.Pool as its underlying storage. It can work for some applications, as I have tried to explain clearly in the documentation. But some of sync.Pool's behaviors are not conducive to effectively managing a Txn free list. For example, there is no guarantee that a single goroutine can flush all cached objects out of a sync.Pool.
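To illustrate the flushing problem, here is a minimal sketch; the drainPool helper is hypothetical (it assumes a pool whose New field is unset, caching *lmdb.Txn values), not code from lmdbpool:

```go
package example

import (
	"sync"

	"github.com/bmatsuo/lmdb-go/lmdb"
)

// drainPool is the obvious way to empty a sync.Pool from a single
// goroutine: call Get until the pool reports empty (Get returns nil
// because pool.New is unset). But sync.Pool keeps a private per-P
// slot that other Ps cannot steal from, so this loop can return
// while Txns cached on other processors are still live; those Txns
// are only released when the GC clears the pool, without Abort ever
// running.
func drainPool(pool *sync.Pool) {
	for {
		v := pool.Get()
		if v == nil {
			return // the pool merely looks empty from this goroutine
		}
		v.(*lmdb.Txn).Abort()
	}
}
```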

This issue is meant to hold the discussion of how a free list that does not rely on sync.Pool may be implemented.

This issue was brought up initially in a comment by @lmb on #104. Full background can be found by reading that thread. But this comment summarizes the general outcome of the discussion.

> Why do you want to tie the lifetime of the reserved transactions to the GC cycle?

This is a valid question. I have not implemented a custom free list because it can be tricky to determine how many items to hold onto before additional items are thrown away (aborted, in the case of Txns).

For an lmdb.Env the maximum size of the pool is the value of MaxReaders (126 by default), which must be configured before Env.Open is called. So does it make sense to let the free list grow to MaxReaders entries? Possibly. But maybe not. Could I issue an Env.Copy while the free list holds MaxReaders transactions? I doubt it, since Copy needs a reader slot of its own.
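For reference, a short sketch of deriving a bound from MaxReaders, using lmdb-go's Env.SetMaxReaders and Env.MaxReaders accessors; the specific numbers, the path parameter, and the two-slot headroom are illustrative assumptions only:

```go
package example

import "github.com/bmatsuo/lmdb-go/lmdb"

// poolBound opens an environment and derives a free list bound from
// its configured MaxReaders.
func poolBound(path string) (int, error) {
	env, err := lmdb.NewEnv()
	if err != nil {
		return 0, err
	}
	// MaxReaders must be set before the environment is opened.
	if err := env.SetMaxReaders(256); err != nil {
		return 0, err
	}
	if err := env.Open(path, 0, 0644); err != nil {
		return 0, err
	}
	max, err := env.MaxReaders()
	if err != nil {
		return 0, err
	}
	// Leave a couple of reader slots free so operations like Env.Copy
	// are not starved by a full free list (a headroom of 2 is an
	// arbitrary choice).
	return max - 2, nil
}
```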

I could implement a bounded free list so that the application developer is required to specify the maximum number of items it may hold. But I believe Go's general design philosophy is that application developers cannot determine a proper bound much better than I can as the package author, and they are prone to choosing a bad value.

lmb (Contributor) commented Feb 24, 2017

I would probably just use a buffered channel. You won't be able to implement a faster acquire / release, and draining the pool is relatively straightforward. Users will have to specify a limit for the pool, but I think that is good. As you have pointed out, you can get into a situation where a Copy() call fails or blocks forever because all reader slots are taken. The right size for the free list therefore depends on the application, and only the user knows it.
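To make that concrete, here is a minimal sketch of a channel-backed free list. It uses lmdb-go's Env.BeginTxn, Txn.Reset, Txn.Renew, and Txn.Abort; the TxnPool type and its methods are hypothetical, not an existing lmdbpool API:

```go
package lmdbpool

import "github.com/bmatsuo/lmdb-go/lmdb"

// TxnPool is a bounded free list of read-only Txns backed by a
// buffered channel; cap(free) is the user-specified limit.
type TxnPool struct {
	env  *lmdb.Env
	free chan *lmdb.Txn
}

func NewTxnPool(env *lmdb.Env, size int) *TxnPool {
	return &TxnPool{env: env, free: make(chan *lmdb.Txn, size)}
}

// Get renews a cached Txn if one is available and otherwise begins a
// fresh read-only transaction.
func (p *TxnPool) Get() (*lmdb.Txn, error) {
	select {
	case txn := <-p.free:
		if err := txn.Renew(); err != nil {
			txn.Abort()
			break
		}
		return txn, nil
	default:
	}
	return p.env.BeginTxn(nil, lmdb.Readonly)
}

// Put resets txn and returns it to the list, aborting it instead
// when the list is already full.
func (p *TxnPool) Put(txn *lmdb.Txn) {
	txn.Reset()
	select {
	case p.free <- txn:
	default:
		txn.Abort() // full; release the reader slot immediately
	}
}

// Drain aborts every cached Txn. Unlike sync.Pool, a buffered
// channel lets a single goroutine reliably empty the whole list.
func (p *TxnPool) Drain() {
	for {
		select {
		case txn := <-p.free:
			txn.Abort()
		default:
			return
		}
	}
}
```

The non-blocking sends and receives are a deliberate choice here: a full list aborts the overflow Txn rather than blocking the caller, so the list stays bounded at the user-chosen capacity and never stalls application goroutines.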
