Replies: 1 comment
-
Hi @RBrossard - I meant to respond to this earlier. Regarding the 100 ms latency you're observing, there's no fundamental architectural reason it should take that long, but we'd likely need to do some profiling to speed it up. I can't promise that's something we'll be able to get to soon, but I'd be happy to help out if you wanted to look into it. Regarding dynamic outputs, what you observed is expected - we don't start downstream steps until a step has fully completed. We've talked about improving that in the future, but it's not something we have immediate plans for.
-
Hi, I am using Dagster as the orchestrator for all of my ML training pipelines and I'm really loving it. Thank you for the great work!
I am now thinking about using Dagster for a real-time inference pipeline, but I am not sure how I should proceed. I have two main concerns:
I am not sure I understand what takes so much time. Is it all the checks that Dagster is doing? In any case, I am happy to use all the features while testing, but once the pipeline is robust and ready for production, I will only be interested in passing the solid_configs and resources. Is there a way to deactivate all of the heavy machinery?
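For what it's worth, the per-run overhead in question is easy to quantify before trying to remove it. The sketch below just averages wall-clock time over repeated runs; `run_pipeline` is a hypothetical stand-in for whatever triggers a single pipeline execution, not a Dagster API call.

```python
import time

def run_pipeline():
    # Hypothetical placeholder for one full pipeline execution;
    # swap in the real invocation to measure the actual overhead.
    time.sleep(0.001)

N = 50
start = time.perf_counter()
for _ in range(N):
    run_pipeline()
elapsed_ms = (time.perf_counter() - start) * 1000 / N
print(f"mean latency per run: {elapsed_ms:.1f} ms")
```

Comparing this number between a trivial no-op pipeline and the real one would show how much of the 100 ms is framework overhead versus actual work.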
The previous code prints:
while I would hope that it behaves like a generator, meaning `second_step 0` is printed just after `first_step 0`. In a more advanced example, the first step would yield a stream of data in real time, and the rest of the pipeline would have to run as soon as it receives it. With my current understanding, I have no other idea for real-time inference using Dagster. Am I missing something? What do you think?
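The difference between the two behaviors described above can be sketched in plain Python (no Dagster API involved; `first_step`/`second_step` are illustrative stand-ins for the question's solids). In batch semantics the upstream output is fully materialized before anything downstream runs; in streaming semantics each item flows downstream as soon as it is yielded:

```python
def first_step(n=3):
    # Stand-in for the upstream solid: produces items one at a time.
    for i in range(n):
        yield f"first_step {i}"

def second_step(item):
    # Stand-in for the downstream solid: transforms one item.
    return item.replace("first_step", "second_step")

# Batch semantics (what the reply describes as current behavior):
# the upstream step runs to completion before downstream work starts.
batch_trace = []
items = list(first_step())                      # all first_step items produced here
batch_trace.extend(items)
batch_trace.extend(second_step(i) for i in items)

# Streaming semantics (what the question hopes for):
# each item is consumed as soon as it is produced.
stream_trace = []
for item in first_step():
    stream_trace.append(item)
    stream_trace.append(second_step(item))

print(batch_trace)   # all first_step entries, then all second_step entries
print(stream_trace)  # first_step 0, second_step 0, first_step 1, ...
```

In the batch trace, `second_step 0` only appears after every `first_step` item; in the streaming trace the two interleave, which is the generator-like behavior the question is asking about.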