Compile time benchmarking #1100
As a small example, removing just the Servant typeclasses from the instances derived for keys shaves several hundred milliseconds off compilation. This is fairly significant given that a Yesod project, for example, has no need for these instances, and the savings would be larger if applied across more models files.
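(For reference, the "Servant typeclasses" here are `ToHttpApiData`/`FromHttpApiData` from http-api-data. This sketch is purely hypothetical, not part of this PR: it assumes persistent *stopped* deriving these for keys, in which case a servant user could recover them per-entity with newtype deriving, since `Key User` is the usual newtype over the backend key. The exact extension set varies by persistent version.)

```haskell
{-# LANGUAGE DerivingStrategies         #-}
{-# LANGUAGE FlexibleInstances          #-}
{-# LANGUAGE GADTs                      #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
{-# LANGUAGE MultiParamTypeClasses      #-}
{-# LANGUAGE QuasiQuotes                #-}
{-# LANGUAGE StandaloneDeriving         #-}
{-# LANGUAGE TemplateHaskell            #-}
{-# LANGUAGE TypeFamilies               #-}

module Models where

import Data.Text (Text)
import Database.Persist.TH (mkPersist, persistLowerCase, sqlSettings)
import Web.HttpApiData (FromHttpApiData, ToHttpApiData)

mkPersist sqlSettings [persistLowerCase|
User
    name Text
|]

-- Hypothetical: assumes persistent no longer derives these for keys.
-- Key User is a newtype over the backend key, so newtype deriving
-- recovers the instances for users who still need them.
deriving newtype instance ToHttpApiData (Key User)
deriving newtype instance FromHttpApiData (Key User)
```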
I asked on Reddit about how to benchmark Q (Template Haskell) functions and got some good answers: https://www.reddit.com/r/haskell/comments/n75cd1/how_do_you_benchmark_q_template_haskell_functions/
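One suggestion along those lines is to run the `Q` action directly in IO via `runQ` and benchmark it with criterion. A rough sketch of that idea (not the code in this PR): forcing the generated declarations through `Show` is a blunt way to get an `NFData`-able result, and IO's `Quasi` instance errors on `reify` and friends, so this only works for splices that don't inspect the compile-time environment.

```haskell
{-# LANGUAGE QuasiQuotes #-}

module Main (main) where

import Criterion.Main (bench, defaultMain, nfIO)
import Data.Text (Text)
import Database.Persist.TH (mkPersist, persistLowerCase, sqlSettings)
import Language.Haskell.TH (runQ)

-- Benchmark the code-generation splice itself by running it in IO.
main :: IO ()
main = defaultMain
  [ -- Show-forcing the generated [Dec] gives criterion something it
    -- can evaluate to normal form.  This may fail at runtime if the
    -- splice calls reify/location, which IO's Quasi instance rejects.
    bench "mkPersist" $ nfIO (show <$> runQ (mkPersist sqlSettings defs))
  ]
  where
    -- Tiny stand-in schema; a real benchmark would use a large one.
    defs = [persistLowerCase|
User
    name Text
|]
```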
This is a draft PR of some work I did to measure compile times. I've had it sitting around for a while, so I decided to open it as a draft in case someone can use it.
The general idea is that Persistent suffers from slow compile times because its Template Haskell generates a very large amount of code. This causes issues for users in both development and production, especially because Persistent models tend to be a fairly "root" dependency in many codebases.
There are a number of changes Persistent could potentially make to reduce compile times, for example deriving fewer instances for keys.
This PR takes the approach of having a sample project that primarily consists of a large `.persistentmodels` file, which is one of the files Mercury uses in production (with modifications). We benchmark compiling it in two ways:

1. Using the `bench` CLI program, which is a wrapper around criterion. This gives us the usual benefits of criterion, like statistical measurements in our benchmarks. The downside is that it measures the full compilation time, not just the desired module.
2. Using `-ddump-timings` and `-ddump-to-file` when benchmarking, then on each build of the project copying the file that has the timings for our models module to another directory. At the conclusion of the benchmarking, we use the timing files to get the average duration it took to compile our models (a sketch of that averaging step follows below).

Overall I think this is a good approach to benchmarking compilation time. It can be used with a variety of compiler settings (e.g. -O0 matters for development, but -O1 or -O2 for production). But it could use more sample projects that exercise different parts of Persistent (e.g. perhaps there is a performance degradation for models with 20+ fields; this current PR would not catch that).