
Bullet Chess Engines Rating List — Methodology, Participants & Replication

Our Bullet list is built from reproducible round-robin tournaments on a single host with identical constraints. We compute primary ratings using BayesElo and cross-check standings with Ordo to validate error bars and relative placements. The live leaderboard is maintained on this page; related season posts/streams document the Bullet runs at 60+1. (ijccrl.com, open-chess.org)

Testbed & Global Constraints

  • Host: HP ProLiant DL360p Gen8 (stable environment; no foreground load)
  • OS: Windows 10 Pro Workstation
  • Tablebases: Syzygy 5-piece (-tb … -tbpieces 5)
  • Instruction set baseline: SSE3/SSSE3/SSE4/POPCNT builds where available, to keep the ISA comparable
  • Threads / Hash (strict fairness): Threads=1, Hash=16 MB for every engine (enforced via the standard UCI options shown after this list)
  • Parallelism: -concurrency 2 (two games in parallel on the same machine)
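
For reference, the Threads/Hash constraint maps to the standard UCI options below, which cutechess-cli sends to every engine at startup when option.Threads and option.Hash are set (a protocol sketch, not the full UCI handshake):

setoption name Threads value 1
setoption name Hash value 16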

Tournament Runner (CuteChess-CLI)

We use CuteChess-CLI with color-balanced, multi-pass schedules:

  • Mode: -tournament round-robin
  • Time control (Bullet division): tc=60+1 (60 seconds + 1s increment)
  • Openings: fixed UHO 8-move suites, e.g. UHO_2024_8mvs_* (-openings file=… format=pgn order=random)
  • Color balance: -games 2 (each pair plays one as White, one as Black)
  • Rounds: typically -rounds 7, with -repeat 2 so the paired games reuse the same opening, to reduce variance
  • Recovery & output: -recover, -pgnout …\Games\*.pgn, -ratinginterval 10

These settings keep conditions identical within the division and reproducible across seasons; a representative invocation is sketched below. Bullet @ 60+1 is documented in our public event posts/threads. (ijccrl.com, open-chess.org)
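
For replication, here is a sketch of the full invocation as it might appear in the Bullet command file (Windows batch syntax; the engine binaries, opening-suite file name, tablebase path and output path are placeholders, not the exact paths used on the test host):

:: Bullet 60+1 round-robin; all paths and engine binaries below are placeholders
cutechess-cli ^
  -tournament round-robin -rounds 7 -games 2 -repeat 2 -concurrency 2 ^
  -recover -ratinginterval 10 ^
  -openings file=UHO_2024_8mvs_SUITE.pgn format=pgn order=random ^
  -tb C:\Syzygy -tbpieces 5 ^
  -pgnout Games\bullet_60+1_season.pgn ^
  -each proto=uci tc=60+1 option.Threads=1 option.Hash=16 ^
  -engine name="Alexandria 8.1.0" cmd=alexandria.exe ^
  -engine name="Clover 9.0-old" cmd=clover.exe ^
  -engine name="Stormphrax 7.0.0" cmd=stormphrax.exe
:: ...the remaining -engine entries (Integral, RubiChess, Seer, Titan, Velvet) follow the same pattern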

Engines in the Bullet Division (original UCI engines only)

The current Bullet command file for CuteChess includes the following original UCI engines (no derivatives from our private projects are listed here):

  • Alexandria 8.1.0
  • Clover 9.0-old
  • Integral (SSE4.1-POPCNT build)
  • RubiChess 20240817
  • Seer 2.8
  • Stormphrax 7.0.0
  • Titan x64
  • Velvet 8.1.1

The leaderboard on this page reflects the results from these participants under the constraints above; see the season posts/streams for additional context and rosters. (ijccrl.com)

Rating Computation (BayesElo + Ordo)

We publish ratings as Elo ± error with games played and (when applicable) LOS:

  • BayesElo (primary): readpgn bullet_60+1_season.pgn, then elo, mm, ratings, x
    • mm runs the maximum-likelihood iteration to convergence; ratings prints the table (Elo, error, games).
  • Ordo (cross-check):
    We process the same PGN in Ordo with a matching logistic model; Ordo's color-bias handling and confidence intervals provide a second, independent view of the standings (a session sketch follows below).
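
A minimal replication sketch for the rating step (the PGN name matches the export above; the Ordo flag set is an assumption for a recent Ordo build and should be checked against the installed version):

:: BayesElo: launch the tool and type these commands at its prompt
bayeselo
readpgn bullet_60+1_season.pgn
elo
mm
ratings
x

:: Ordo cross-check (assumed flags: -W and -D fit the white advantage and draw
:: model from the data, -s 1000 adds simulation-based error bars, -p/-o name
:: the input PGN and the output report)
ordo -W -D -s 1000 -p bullet_60+1_season.pgn -o ordo_bullet_60+1.txt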

We never mix divisions (Bullet vs Blitz/Extended) and we don’t aggregate across different hardware. Each division is a clean slice with identical constraints.

Data & Downloads

  • PGNs: All Bullet games are exported and archived by year/month inside our container.
  • Machine-readable tables: CSV/JSON templates (see below) are published alongside the HTML leaderboard and refreshed with our BayesElo/Ordo outputs at each update.

Compact Participants Table

Rank  Engine                     BayesElo    +    -   Ordo Elo   Games   Score   Opp.   Draws
1     Alexandria 8.1.0                 93   13   13       3751    1372     67%    -13     48%
2     Clover 9.0 (old)                 85   14   13       3741    1372     65%    -12     48%
3     Stormphrax 7.0.0                 54   13   13       3704    1372     60%     -8     48%
4     RubiChess (2024-08-17)            9   13   13       3650    1372     51%     -1     47%
5     Seer 2.8 (x64)                  -34   13   13       3600    1372     44%      5     47%
6     Velvet 8.1.1                    -49   13   13       3583    1372     41%      7     48%
7     Integral (SSE4.1+POPCNT)        -60   13   13       3570    1372     39%      9     46%
8     Titan (x64)                     -98   14   14       3524    1372     32%     14     46%

Tiny FAQ (JSON-LD)

The following FAQ structured data (JSON-LD) is embedded at the end of this page:

<script type="application/ld+json">
{
  "@context":"https://schema.org",
  "@type":"FAQPage",
  "mainEntity":[
    {"@type":"Question","name":"What time control do you use for Bullet?",
     "acceptedAnswer":{"@type":"Answer","text":"All Bullet events run at 60+1 on a single host with identical constraints (Threads=1, Hash=16, Syzygy 5-man)."}},
    {"@type":"Question","name":"Which openings do you use?",
     "acceptedAnswer":{"@type":"Answer","text":"Fixed UHO 8-move suites (e.g., UHO_2024_8mvs_*), loaded in CuteChess via -openings file=... format=pgn order=random for diversity and reproducibility."}},
    {"@type":"Question","name":"How are ratings calculated?",
     "acceptedAnswer":{"@type":"Answer","text":"Primary ratings are computed with BayesElo (maximum likelihood) and cross-checked with Ordo to validate intervals and relative placements."}},
    {"@type":"Question","name":"Do you mix results from other divisions?",
     "acceptedAnswer":{"@type":"Answer","text":"No. Bullet results are never mixed with Blitz/Extended or with different hardware runs."}}
  ]
}
</script>

Machine-Readable Templates (Bullet)

Columns: rank, engine, version, tc, games, elo, error, los, ptnml, pgn_url
The numeric fields in the templates are left blank and are filled from the BayesElo/Ordo outputs after each season update, so sample numbers never mix with the official results.
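
A blank skeleton following those columns might look like this; the two engine rows are illustrative and the numeric fields intentionally stay empty until an official update:

rank,engine,version,tc,games,elo,error,los,ptnml,pgn_url
1,Alexandria,8.1.0,60+1,,,,,,
2,Clover,9.0-old,60+1,,,,,,

[
  {"rank": 1, "engine": "Alexandria", "version": "8.1.0", "tc": "60+1",
   "games": null, "elo": null, "error": null, "los": null, "ptnml": null, "pgn_url": ""},
  {"rank": 2, "engine": "Clover", "version": "9.0-old", "tc": "60+1",
   "games": null, "elo": null, "error": null, "los": null, "ptnml": null, "pgn_url": ""}
]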


TL;DR

  • TC: Bullet 60+1; UHO 8-move openings; strict Threads=1 / Hash=16; Syzygy 5-man.
  • Engines: only original UCI engines (Alexandria, Clover, Integral, RubiChess, Seer, Stormphrax, Titan, Velvet).
  • Ratings: BayesElo (primary), cross-checked with Ordo.
  • Live table: this page; PGNs: archived by year/month; CSV/JSON templates provided for machine-readable publication.

Rating list

30-09-2025

60s+1s

Name                     Elo    Games   Score   Draws
Alexandria 8.1.0         3750    1372     67%     48%
Clover.9.0-old           3741    1372     65%     48%
stormphrax_7.0.0         3704    1372     60%     48%
RubiChess-20240817       3649    1372     51%     47%
seer_v2.8_x64            3600    1372     44%     47%
velvet-v8.1.1            3582    1372     41%     48%
integral_sse41_popcnt    3570    1372     39%     46%
Titan-x64                3523    1372     32%     46%
Games played from 06 August to 18.