Solver Benchmarks
=================
The `solver-benchmarks <https://github.com/cvxpy/solver-benchmarks>`_ project
collects benchmark problems and results to inform CVXPY's default solver
selection. By running a shared suite of problems across many machines and solver
versions, we can make data-driven decisions about which solvers to recommend as
defaults for each problem type (LP, QP, MIP, SOCP, SDP).

CVXPY supports many solvers, but choosing good defaults requires real
performance data across a wide range of problems and environments. The
solver-benchmarks project gathers that data so that, out of the box, CVXPY can
recommend a strong solver for each problem type.

We are collecting benchmark problems from the community. The criteria for ideal
problems are described in the repository's contribution guidelines. If you have
a problem that fits those criteria, please contribute it!

Once the problem suite is established, contributors run benchmarks on their own
machines and submit the results. Results are stored as JSONL files (one JSON
record per line) for easy comparison across machines and solver versions.

.. code-block:: bash

   git clone https://github.com/cvxpy/solver-benchmarks.git
   cd solver-benchmarks
   uv sync
   uv run python run_benchmarks.py
   uv run python summarize.py
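
As a sketch of how JSONL results can be compared, the snippet below aggregates
mean solve time per solver using only the standard library. The field names
(``solver``, ``problem``, ``solve_time``) and the sample records are
hypothetical placeholders, not the project's actual schema:

.. code-block:: python

   import io
   import json
   from collections import defaultdict

   # Hypothetical JSONL results: one JSON object per line.
   # Field names are illustrative only, not the real schema.
   raw = '''\
   {"solver": "CLARABEL", "problem": "lp_1", "solve_time": 0.12}
   {"solver": "SCS", "problem": "lp_1", "solve_time": 0.34}
   {"solver": "CLARABEL", "problem": "qp_1", "solve_time": 0.05}
   '''

   # Collect solve times per solver across all problems.
   times = defaultdict(list)
   for line in io.StringIO(raw):
       record = json.loads(line)
       times[record["solver"]].append(record["solve_time"])

   # Mean solve time per solver.
   means = {solver: sum(ts) / len(ts) for solver, ts in times.items()}

Because every record is a standalone JSON object on its own line, results from
different machines can simply be concatenated before aggregation.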

See the `GitHub repository <https://github.com/cvxpy/solver-benchmarks>`_ for
full instructions and contribution guidelines.