
Memoize macro for wrapping IO operation in async Task #76

Open · nielsls opened this issue Mar 17, 2022 · 2 comments

nielsls commented Mar 17, 2022

Hey - thanks for this excellent package! My code is littered with @memoize!
I have a feature request/idea:

I have a lot of IO operations and have therefore made my code highly concurrent (littered with @async...).
So to @memoize IO operations while making sure everything is only loaded once, I frequently use the following pattern:

```julia
load_stuff(params) = fetch(_load_stuff(params))
@memoize _load_stuff(params) = @async expensive_load(params)
```

Hence, by memoizing the Task created by @async, we make sure stuff is only loaded once, even when multiple concurrent tasks try to load the same thing at the same time.

It would be nice if the above could be wrapped in a macro (e.g. @task_memoize?). As a performance enhancement, @task_memoize could unwrap the result from its Task once the Task is done; the Task can then be GC'ed and only the result needs to be stored.
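For concreteness, here is a minimal function-based sketch of what such a macro could expand to. This is not Memoize.jl's actual implementation, and it uses only Base; `task_cache`, `cache_lock`, and `cached_load` are all hypothetical names:

```julia
# Deduplicate concurrent loads by caching the Task itself (Base only).
# The cache is keyed on `params`; a real macro would key per function.
const task_cache = Dict{Any,Task}()
const cache_lock = ReentrantLock()

function cached_load(load_fn, params)
    t = lock(cache_lock) do
        get!(task_cache, params) do
            @async load_fn(params)  # the load is started at most once per key
        end
    end
    return fetch(t)  # every concurrent caller waits on the same Task
end
```

Under this sketch, two overlapping callers with the same `params` both `fetch` the one cached Task, so `load_fn` runs only once, which is exactly the dedup behavior of the `fetch(_load_stuff(params))` pattern above.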

Thoughts/suggestions welcome - can't help thinking the above use-case is relatively generic/common.
I might take a stab at it myself - although I have doubts my macro-manipulating skills are currently fit for the job.

cstjean (Collaborator) commented Mar 17, 2022

That sounds very nice! In fact, if the overhead isn't too large, maybe this should be the default, although it's good to start with a separate macro.

> I might take a stab at it myself - although I have doubts my macro-manipulating skills are currently fit for the job.

Have at it! To be frank, it's unlikely to be implemented by anyone else.

nielsls (Author) commented Mar 17, 2022

Well, the overhead would be minimal and amount to this:

```julia
return x isa Task ? fetch(x) : x  # replaces: return x
```

where x is the memoized value.
Making this the default is interesting, although it would be breaking (you could then no longer memoize a function that intentionally returns a Task).
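To make the proposed unwrap-on-completion optimization concrete, here is a hedged sketch (hypothetical names, Base only, not Memoize.jl code) of a memoized getter that swaps a finished Task for its plain result:

```julia
# Cache holds either a running Task or, once finished, the plain result,
# so the Task can be GC'ed after completion.
const cache = Dict{Any,Any}()
const cache_lock2 = ReentrantLock()

function memo_fetch(load_fn, params)
    x = lock(cache_lock2) do
        get!(cache, params) do
            @async load_fn(params)
        end
    end
    x isa Task || return x           # fast path: already unwrapped
    result = fetch(x)
    lock(cache_lock2) do             # swap the finished Task for its result
        cache[params] = result
    end
    return result
end
```

After the first completed call, subsequent lookups hit the `x isa Task || return x` fast path and never touch a Task at all, which is the minimal-overhead behavior described above.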
