The neutral benchmark for rotating-machinery ML.
A standardised, contamination-resistant benchmark for the AI surrogate models used to design and simulate rotors. The MLPerf of rotating machinery.
§01Why a domain benchmark
Generic ML benchmarks measure things like image classification accuracy. They tell you nothing about whether a surrogate predicts Cl/Cd on a wind-blade section to the tolerance a procurement engineer needs.
Comparotor measures the things rotor designers actually use surrogates for — and rotates the test set quarterly so vendors cannot pre-train on it.
| Feature | Details |
|---|---|
| Domain-specific scoring | MAE on Cl, Cd, Cm; L/D rank correlation; OOD generalisation; inference latency. |
| Contamination resistance | 50-airfoil quarterly rotation. The test set didn't exist when you trained. |
| Public leaderboard | Free public submissions. Citable in marketing and in research. |
| Private eval API | Pro and Enterprise plans keep results private. PDF reports for buy-side validation. |
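To make the scoring column concrete, here is a minimal sketch of how per-model metrics like these could be computed. This is illustrative only, not Comparotor's official implementation: the composite weights and the `composite` formula are hypothetical, and tie handling in the rank correlation is omitted for brevity.

```python
# Illustrative scoring sketch. MAE on Cl/Cd and Spearman rho on L/D come
# from the feature table above; the composite weighting is a hypothetical
# example, not the benchmark's actual formula.

def mae(pred, true):
    """Mean absolute error between predicted and reference coefficients."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def spearman_rho(a, b):
    """Spearman rank correlation (no tie handling, for brevity)."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

def composite(mae_cl, mae_cd, rho_ld, weights=(0.4, 0.4, 0.2)):
    """Hypothetical composite: lower MAE and higher rho both raise the score."""
    w_cl, w_cd, w_rho = weights
    return w_cl * (1 - mae_cl) + w_cd * (1 - mae_cd) + w_rho * rho_ld
```

A perfect model (zero MAE, perfectly preserved L/D ranking) scores 1.0 under these example weights; real submissions land somewhere below.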
§02Who it's for
Physics-ML startups
Your customers ask 'how do we know your model works?' — point them at a third-party score that's comparable across vendors.
A&D and energy OEMs
Cut through marketing claims when procuring ML surrogates. Get signed reports your simulation team will trust.
University labs
Cite a stable, versioned benchmark instead of rolling your own. Free public submissions with persistent score URLs.
§03Top of the leaderboard
| # | Model | Org | MAE Cl | MAE Cd | ρ L/D | Composite |
|---|---|---|---|---|---|---|
| Leaderboard is warming up. | | | | | | |
§04Pricing
| Plan | Includes |
|---|---|
| Free | 1 public run / week |
| Pro | 50 private runs · API access · PDF report |
| Enterprise | Unlimited runs · custom suites · SLA |
§05Design partners
Onboarding 5 design partners for the v0.1 launch — two academic anchors, three commercial.