
Simplify precompiles #669

Open · wants to merge 4 commits into `master`
Conversation

timholy (Owner) commented Feb 3, 2022

This switches to "do a little work" for precompilation,
rather than explicitly spelling out the specific signatures.
This should be more maintainable and potentially more
exhaustive (or could be made to be so) since it
precompiles across runtime dispatch boundaries.
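
The "do a little work" approach can be sketched roughly as follows (a hypothetical, minimal workload; the module and function names are illustrative, not this PR's actual code):

```julia
# Sketch of workload-based precompilation: instead of listing explicit
# signatures, run a small representative workload at precompile time so
# that every method it reaches -- including those behind runtime-dispatch
# boundaries -- gets compiled into the package image.
module PrecompileSketch

process(x) = x isa Integer ? x + 1 : string(x)  # stand-in for real package code

# Old style: spell out each signature explicitly (easy to let drift):
#   precompile(process, (Int,))
#   precompile(process, (String,))

# New style: just do a little work; dispatch discovers the signatures.
let
    process(1)
    process("hello")
end

end # module
```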

This also uses Base.inferencebarrier in several places to
prevent an argument from being specialized.
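
A minimal sketch of the `Base.inferencebarrier` trick (the function names here are illustrative, not this PR's actual code):

```julia
# `Base.inferencebarrier(x)` returns `x` but hides its concrete type from
# inference, so the callee is compiled for an abstract argument rather than
# re-specialized (and re-inferred) for every concrete type passed in.
slow_to_infer(recurse, x) = recurse(x)   # illustrative callee

double(x) = 2x

# Specialized call: inference sees `typeof(double)` and infers a
# specialization of `slow_to_infer` for it.
slow_to_infer(double, 3)

# Barrier call: inference only sees `Any` for the first argument, so no
# per-`recurse` specialization of `slow_to_infer` is inferred here.
slow_to_infer(Base.inferencebarrier(double), 3)
```

Both calls return the same result; only the inference behavior differs.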

I was using this as a test case for JuliaLang/julia#43990 to see if I can eliminate all inference from Revise. It turns out there's one CodeInstance whose inferred code is either not cached or is deleted after compilation. Needs investigation. But the total inference time is <50ms, so in practice it's not a big deal.

timholy (Owner, Author) commented Feb 7, 2022

I've investigated with help from @aviatesk, and determined that the stubborn MethodInstance was having its compiled code discarded because of a LimitedAccuracy annotation. This was pretty easy to work around (though documentation on this point will need to be added to, e.g., SnoopCompile). Now the total inference time is <5ms on JuliaLang/julia#43990. ✌️
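
Figures like the total inference time above can be measured with SnoopCompile (a hedged sketch, assuming a recent SnoopCompile is installed; exact numbers will vary by machine and Julia version):

```julia
using SnoopCompile   # assumes the SnoopCompile package is available

# Collect all inference timing triggered by loading the package:
tinf = @snoopi_deep using Revise

# `inclusive(tinf)` reports the total inference time, in seconds:
inclusive(tinf)
```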

```diff
@@ -122,7 +122,7 @@ end
 function methods_by_execution(mod::Module, ex::Expr; kwargs...)
     methodinfo = MethodInfo()
     docexprs = DocExprs()
-    value, frame = methods_by_execution!(JuliaInterpreter.Compiled(), methodinfo, docexprs, mod, ex; kwargs...)
+    value, frame = methods_by_execution!(Base.inferencebarrier(JuliaInterpreter.Compiled()), methodinfo, docexprs, mod, ex; kwargs...)
```
Collaborator:

This seems to be an actual use-case example of JuliaLang/julia#41931. As I understand it, it turns out to be effective to turn off inference of `recurse(...)` within the callstack of `methods_by_execution!`? If `recurse(...)` is actually called infrequently, that sounds like a reasonable hack to improve latency.

timholy (Owner, Author):

Yes, this is exactly a case for `@noinfer`. All the execution time specific to a particular `recurse` is in JuliaInterpreter anyway.

timholy (Owner, Author) commented Feb 8, 2022

Wow, the GC corruption on 1.7 is a bit scary---we might need to debug that and patch 1.7 before merging this.

But the test failure on nightly is JuliaLang/julia#42822, which appears to be deliberate. (CC @IanButterworth.)
