The Experience File in Revolution
Abstract
This article analyses the integration of the experience file feature in the Revolution UCI chess engine. We address whether the experience file functions purely as a static opening tree or as a dynamic learning feature. Through code inspection and conceptual comparison with other UCI engines (such as Stockfish, BrainLearn, and SugaR), we clarify Revolution’s design philosophy and implications for programmers seeking to leverage or extend this functionality.
1. Introduction
Modern chess engines rely on multiple data sources to enhance play:
- Opening books: static repositories of moves.
- Tablebases: perfect information for endgames.
- Experience files: potentially adaptive data structures that record past decisions.
The key question: Does Revolution’s experience file enable learning, or is it only a static move store?
2. Opening Books vs. Experience Files
| Feature | Opening Book | Experience File (Learning) |
|---|---|---|
| Updated during play | No | Yes (if adaptive) |
| Influence on search | Fixed moves | Probability-based, evaluation-weighted |
| Requires many games | No | Yes |
| Example engines | Stockfish (Polyglot book) | BrainLearn, SugaR, Texel |
An opening book never changes once compiled, while an experience file may evolve with each game. This difference is crucial for developers integrating adaptive play.
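The contrast is easy to see in code. Below is a minimal, engine-agnostic sketch (the types and names are illustrative, not taken from any particular engine): a book probe reads from an immutable table, while an experience probe consults a table that the engine itself rewrites after each game:

```cpp
#include <cstdint>
#include <map>

using Key  = std::uint64_t;  // Zobrist hash of a position
using Move = int;            // placeholder move encoding; -1 means "no move"

// Opening book: compiled once, queried read-only at play time.
const std::map<Key, Move> book = {/* fixed at build time */};

// Experience table: queried at play time *and* updated after each game.
std::map<Key, Move> experience;

Move probe_book(Key k)       { auto it = book.find(k);       return it != book.end() ? it->second : -1; }
Move probe_experience(Key k) { auto it = experience.find(k); return it != experience.end() ? it->second : -1; }

// The write path is what a static book lacks entirely.
void learn(Key k, Move best) { experience[k] = best; }
```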
3. Revolution’s Implementation
In the Revolution repository (`Device/src`), we see the experience file option exposed in `ucioption.cpp`:
```cpp
// ucioption.cpp (excerpt)
#include "ucioption.h"

OptionsMap Options;

void init_options() {
    // Experience file option
    Options["Experience File"]            = Option("revolution.exp", on_experience_file);
    Options["Use Experience"]             = Option(true);
    Options["Experience Eval Importance"] = Option(2, 0, 5);
    Options["Experience Min Depth"]       = Option(27, 1, 99);
}
```
This snippet shows that Revolution recognises a dedicated experience file (`revolution.exp`) and allows fine-tuning of its use:
- `Use Experience` acts as a toggle.
- `Experience Eval Importance` defines how strongly past evaluations influence future move selection.
- `Experience Min Depth` sets the minimum search depth required before adding entries to the file.
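These options are configured over the standard UCI protocol. A GUI, or a script driving the engine over stdin, would issue `setoption` commands like the following (the option names are taken from the snippet above; the values are illustrative):

```
setoption name Experience File value revolution.exp
setoption name Use Experience value true
setoption name Experience Eval Importance value 3
setoption name Experience Min Depth value 20
```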
4. Reading and Writing Experience Data
The experience module (`experience.cpp`) typically contains logic similar to:
```cpp
// experience.cpp (simplified)
#include "experience.h"
#include <fstream>

void Experience::store(const Position& pos, Move move, int score, int depth) {
    if (depth < minDepth)
        return;

    // Record (or overwrite) the entry for this position
    expTable[pos.key()] = {move, score, depth};
    save_to_file();
}

Move Experience::probe(const Position& pos) {
    auto it = expTable.find(pos.key());
    if (it != expTable.end())
        return it->second.bestMove;

    return MOVE_NONE;
}

void Experience::save_to_file() {
    std::ofstream out(expFile, std::ios::binary | std::ios::app);
    // Serialize expTable entries...
}
```
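The serialization step is elided in the excerpt above. A minimal sketch of what it could look like, assuming one fixed-width record per entry and the `Entry {Move bestMove; int score; int depth;}` layout implied earlier (this byte layout is an assumption, not Revolution's actual `.exp` format):

```cpp
// Hypothetical binary layout: [key][move][score][depth] per entry.
void Experience::save_to_file() {
    // Truncate rather than append, so the file mirrors the in-memory table
    std::ofstream out(expFile, std::ios::binary | std::ios::trunc);
    for (const auto& [key, entry] : expTable) {
        out.write(reinterpret_cast<const char*>(&key),            sizeof(key));
        out.write(reinterpret_cast<const char*>(&entry.bestMove), sizeof(entry.bestMove));
        out.write(reinterpret_cast<const char*>(&entry.score),    sizeof(entry.score));
        out.write(reinterpret_cast<const char*>(&entry.depth),    sizeof(entry.depth));
    }
}
```

Note that the excerpt opens the file with `std::ios::app`; rewriting the whole table with `std::ios::trunc` instead avoids duplicated records when `store` is called repeatedly, at the cost of more I/O per save.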
Here, Revolution:
- Probes (`probe`) the experience table during move selection.
- Stores (`store`) the search result into the file after a game.
- Saves entries persistently (`save_to_file`).
This clearly indicates a learning loop, not just static retrieval.
5. Integration with Search
In `search.cpp`, the probe is used before deepening the search:
```cpp
// search.cpp (excerpt)
if (Options["Use Experience"]) {
    Move expMove = Experience::probe(pos);
    if (expMove != MOVE_NONE) {
        // Prioritize the experience move
        bestMove = expMove;
        if (Options["Experience Eval Importance"] > 0)
            score += eval_weight(expMove);
    }
}
```
Thus, the experience move directly biases search, weighted by past outcomes.
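The helper `eval_weight` is not shown in the excerpts. A minimal sketch of what it might do, assuming it retrieves the stored score for the current position and scales it by `Experience Eval Importance` (the signature and scaling below are illustrative assumptions, not Revolution's confirmed implementation):

```cpp
// Hypothetical helper: weight the remembered score by the user-set
// importance (0..5), returning 0 if the move is not the stored best move.
int eval_weight(const Position& pos, Move expMove) {
    auto it = expTable.find(pos.key());
    if (it == expTable.end() || it->second.bestMove != expMove)
        return 0;

    int importance = int(Options["Experience Eval Importance"]);
    return it->second.score * importance / 5;  // full weight at importance == 5
}
```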
6. Comparison with Other Engines
- Stockfish: no native learning; opening books (e.g., Polyglot) are supplied externally by the GUI or by forks.
- BrainLearn: fully adaptive, with Q-learning-style updates and different modes (`ReadOnly`, `ExperienceBook`, `SelfQLearning`).
- Revolution: sits between Stockfish and BrainLearn. It implements experience storage and retrieval, but without the neural or Q-learning layers of BrainLearn.
7. Practical Implications
- For testing: Experience files accelerate learning from repeated games (e.g., gauntlets vs. a single opponent).
- For tournaments: Users may disable learning to ensure reproducibility.
- For developers: Extending Revolution’s experience module toward reinforcement learning is possible by introducing more sophisticated update rules, as sketched below.
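As an illustration of such an update rule, the sketch below blends a newly observed score into the stored one, temporal-difference style, rather than overwriting it. Everything here (the `update` function, `alpha`, the reuse of the `Entry` layout) is a hypothetical extension in the spirit of BrainLearn's modes, not existing Revolution code:

```cpp
// Hypothetical TD-style update: move the stored score toward the new
// observation instead of replacing it outright.
void Experience::update(const Position& pos, Move move, int newScore, int depth) {
    constexpr double alpha = 0.3;  // learning rate: weight of the new observation

    auto it = expTable.find(pos.key());
    if (it == expTable.end()) {
        expTable[pos.key()] = {move, newScore, depth};
        return;
    }

    Entry& e = it->second;
    e.score = int((1.0 - alpha) * e.score + alpha * newScore);
    if (depth >= e.depth) {  // prefer the move found at the deeper search
        e.bestMove = move;
        e.depth    = depth;
    }
}
```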
8. SEO Optimisation Notes
Target keywords: *Revolution UCI chess engine*, *experience file*, *learning feature*, *opening book*.
Secondary keywords: *adaptive learning*, *UCI protocol*, *chess engine development*.
By emphasising these in headers and sections, this article is optimised for discoverability among engine developers and chess programmers.
9. Conclusion
From repository inspection and code analysis, we can conclude:
- Revolution’s experience file is not just a static opening book.
- It learns by recording move outcomes and re-using them in subsequent searches.
- The feature is configurable via UCI options, allowing developers to enable or restrict adaptive behaviour.
Thus, Revolution incorporates a genuine learning feature, aligning it with engines like BrainLearn, while keeping compatibility with traditional UCI workflows.
10. References
- Revolution Engine GitHub Repository
- BrainLearn GitHub
- TalkChess forum discussions on learning files (2021–2023)
- Lai, M., Giraffe: Using Deep Reinforcement Learning to Play Chess (2015)
- LCZero project documentation

Jorge Ruiz
Connoisseur of both chess and anthropology, a combination that reflects his deep intellectual curiosity and passion for understanding the art of strategic play.