# JIT Optimizer Planning Guide
The goal of this document is to capture some thinking about the process used to prioritize and validate optimizer investments. The overriding goal of such investments is to help ensure that the dotnet platform satisfies developers' performance needs.
There are a number of public benchmarks that evaluate different platforms' relative performance, so naturally dotnet's scores on such benchmarks give some indication of how well it satisfies developers' performance needs. The JIT team has used some of these benchmarks, particularly TechEmpower and Benchmarks Game, to scout out optimization opportunities and prioritize optimization improvements. While it is important to track scores on such benchmarks to validate performance changes in the dotnet platform as a whole, when it comes to planning and prioritizing JIT optimization improvements specifically, they aren't sufficient, due to a few well-known issues:

- Scores reflect the performance of the entire stack (libraries, runtime, and JIT together), making it hard to isolate the JIT's contribution.
- Benchmark code tends to be narrow, so scores can hinge on a few hot code sequences rather than on broadly representative code patterns.
- Scores can often be moved by targeted source changes to the benchmark code itself, without any compiler improvement.
Compiler micro-benchmarks (like those in our test tree) don't share these issues, and adding them as optimizations are implemented is critical for validation and regression prevention. However, micro-benchmarks often aren't as representative of real-world code, and therefore not as reflective of developers' performance needs, so they aren't well suited to scouting out and prioritizing opportunities.
While source changes can more rapidly and dramatically effect changes to targeted hot code sequences in macro-benchmarks, compiler changes have the advantage that they apply broadly to all compiled code. One of the best reasons to invest in compiler optimization improvements is to capitalize on this breadth. A few specific benefits:

- Improvements reach all compiled code automatically, including code that developers never profile or hand-tune.
- Because the JIT compiles at run time, gains accrue to existing applications when the runtime is updated, without requiring source changes or recompilation.
- Optimizations performed by the compiler let source code stay simple and idiomatic, whereas hand-optimized source tends to be harder to read and maintain.
Listed here are several ideas for undertakings we might pursue to improve our ability to identify opportunities and to validate and track improvements that mesh with the benefits discussed above. Thinking here is in the early stages, but the hope is that, with some thought and discussion, some of these ideas will surface as worth investing in.