As Dagger is a complex set of interacting components and APIs, it would be very useful to track Dagger's performance, scalability, and latency over time, both to ensure that we don't introduce unexpected regressions and to be able to make claims about performance and suitability with some confidence.
To that end, I believe it would be valuable, on every merge to master, to:
- Run the full benchmark suite on various configurations (see the sketch after this list)
- Stress-test under various configurations to find broken or buggy behavior
- Perform automated profiling to find the current set of performance hotspots
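As a rough idea of what the CI benchmark step could look like, here's a minimal sketch assuming we define the suite with BenchmarkTools.jl; the group name, workload, and task-count grid below are hypothetical placeholders, not the actual Dagger benchmarks:

```julia
# benchmarks/runbenchmarks.jl -- hypothetical entry point run by CI on each merge.
using Dagger, BenchmarkTools

const SUITE = BenchmarkGroup()
SUITE["spawn_fetch"] = BenchmarkGroup()

# Task spawn/fetch latency across a (made-up) grid of task counts.
for ntasks in (10, 100, 1_000)
    SUITE["spawn_fetch"][ntasks] = @benchmarkable begin
        ts = [Dagger.@spawn(1 + 1) for _ in 1:$ntasks]
        foreach(fetch, ts)
    end
end

results = run(SUITE; verbose=true)

# Persist the raw results so CI can upload them (see the S3 discussion below).
BenchmarkTools.save("benchmark_results.json", results)
```

The same entry point could grow stress-test and profiling modes later; the key property is that it emits a raw results file per merge that we can archive.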
To make the collected information useful, we should automatically export the associated data to some persistent storage (say, S3) in raw form, together with any generated plots or aggregate metrics. We can use something like https://github.com/SciML/SciMLBenchmarks.jl/blob/84462b8f1e5c974df9f396ca4d9b4900e1108a21/.buildkite/run_benchmark.yml to upload to S3, and then provide a script or code to download and analyze this data.
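The upload step could be as simple as the sketch below, assuming the Buildkite agent has AWS credentials and the `aws` CLI available; the bucket name and key layout are placeholders, not anything we've set up yet:

```julia
# Hypothetical post-benchmark step: tag the raw results with the commit and push to S3.
# Assumes the `aws` CLI is on PATH and credentials are configured on the agent.
commit = get(ENV, "BUILDKITE_COMMIT", "unknown")
key = "dagger-benchmarks/$(commit)/benchmark_results.json"
run(`aws s3 cp benchmark_results.json s3://dagger-benchmark-results/$key`)
```

A matching analysis script could then just `aws s3 sync` that prefix locally and load each file with `BenchmarkTools.load` to build the over-time plots.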
An extra bonus would be to publish this data to https://daggerjl.ai/ so that we can show off our performance gains over time.