Summary
- Expanded documentation to describe “Experience Learning,” outlining new options such as `Experience Prior`, `Experience Width`, and `Experience Eval Weight`, which bias root move ordering without acting as an opening book
- Added engine options to manage experience-based learning, including parameters for width, depth, and evaluation weighting
- Revised search logic to probe stored experience and reorder root moves while still performing a full search, ensuring opening books remain optional
Testing
- ✅ `bash tests/perft.sh ./src/revolution_dev_290825_v1.0.1`
Experience Learning
## Experience Learning
Revolution includes a simple text-based cache that stores root moves and evaluations from
previous games. Rather than forcing book moves, the cached information biases root move
ordering during search. The following UCI options control this system:
- `Experience Enabled`: enables or disables the experience feature (default `true`).
- `Experience File`: name of the file where the experience data is stored (default `revolution.exp`; legacy `.bin` files are converted automatically).
- `Experience Readonly`: if `true`, no changes are written to the file.
- `Experience Prior`: uses stored experience to bias root move ordering (default `true`).
- `Experience Width`: number of principal moves to consider (1–20, default 1).
- `Experience Eval Weight`: weighting of evaluation when ordering moves (0–10, default 5).
- `Experience Min Depth`: minimum depth required to store a move (4–64, default 27).
- `Experience Max Moves`: maximum number of moves saved per position (1–100, default 16).
The file is loaded at engine startup and updated after each game if `Experience Readonly` is disabled.
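As a sketch, a GUI or script could configure these options over the standard UCI `setoption` command; the values below are illustrative, not recommended settings:

```
setoption name Experience Enabled value true
setoption name Experience File value revolution.exp
setoption name Experience Prior value true
setoption name Experience Width value 4
setoption name Experience Eval Weight value 5
```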
Engine.cpp
diff --git a/src/engine.cpp b/src/engine.cpp
index 69a03bd26581c7ac622f74e2bbf8abefbf44acb9..3bdfd57861398f1fbb6584cd1981393e82a14350 100644
--- a/src/engine.cpp
+++ b/src/engine.cpp
@@ -153,55 +153,55 @@ Engine::Engine(std::optional<std::string> path) :
polybook[1].init(o);
return std::nullopt;
}));
options.add("Book2 BestBookMove", Option(false));
options.add("Book2 Depth", Option(255, 1, 350));
options.add("Book2 Width", Option(1, 1, 10));
options.add("Experience Enabled", Option(true, [this](const Option& o) {
if (bool(o))
experience.load_async(options["Experience File"]);
else
experience.clear();
return std::nullopt;
}));
options.add("Experience File", Option("revolution.exp", [this](const Option& o) {
if ((bool) options["Experience Enabled"])
experience.load_async(o);
return std::nullopt;
}));
options.add("Experience Readonly", Option(false));
- options.add("Experience Book", Option(false));
- options.add("Experience Book Width", Option(1, 1, 20));
- options.add("Experience Book Eval Importance", Option(5, 0, 10));
- options.add("Experience Book Min Depth", Option(27, 4, 64));
- options.add("Experience Book Max Moves", Option(16, 1, 100));
+ options.add("Experience Prior", Option(true));
+ options.add("Experience Width", Option(1, 1, 20));
+ options.add("Experience Eval Weight", Option(5, 0, 10));
+ options.add("Experience Min Depth", Option(27, 4, 64));
+ options.add("Experience Max Moves", Option(16, 1, 100));
// Optional experimental evaluation tweak that adapts weights based on
// simple positional cues. Disabled by default so it does not alter
// standard play unless explicitly requested by the user.
options.add("Adaptive Style", Option(false, [](const Option& o) {
Eval::set_adaptive_style(bool(o));
return std::nullopt;
}));
options.add( //
"EvalFile", Option(EvalFileDefaultNameBig, [this](const Option& o) {
load_big_network(o);
return std::nullopt;
}));
options.add( //
"EvalFileSmall", Option(EvalFileDefaultNameSmall, [this](const Option& o) {
load_small_network(o);
return std::nullopt;
}));
load_networks();
resize_threads();
}
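The diff above registers each option with an optional on-change callback (for example, changing `Experience File` reloads the cache). A minimal sketch of that register-with-callback pattern, not the engine's actual `Option` class:

```cpp
#include <functional>
#include <map>
#include <string>

// Minimal sketch of the pattern used in the diff above. This is NOT the
// engine's real Option class; it only illustrates how setting an option
// (e.g. "Experience File") can trigger a side effect such as reloading
// the experience cache.
struct Option {
    std::string value;
    std::function<void(const std::string&)> on_change;  // may be empty
};

struct OptionMap {
    std::map<std::string, Option> table;

    void add(const std::string& name, Option o) { table[name] = std::move(o); }

    // Mirrors "setoption name <X> value <V>": store the value, then fire the
    // callback so dependent state (books, networks, caches) stays in sync.
    void set(const std::string& name, const std::string& v) {
        auto it = table.find(name);
        if (it == table.end())
            return;
        it->second.value = v;
        if (it->second.on_change)
            it->second.on_change(v);
    }
};
```

Registering `Experience File` with a callback that reloads the cache reproduces the shape of `options.add("Experience File", Option("revolution.exp", ...))` in the diff.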
Search.cpp
diff --git a/src/search.cpp b/src/search.cpp
index ed1991dca5778834aedaa71d13ce6c0ffd113b61..e441304121c55b366548185c466177f9733fe6ed 100644
--- a/src/search.cpp
+++ b/src/search.cpp
@@ -151,79 +151,88 @@ Search::Worker::Worker(SharedState& sharedState,
refreshTable(networks[token]) {
clear();
}
void Search::Worker::ensure_network_replicated() {
// Access once to force lazy initialization.
// We do this because we want to avoid initialization during search.
(void) (networks[numaAccessToken]);
}
void Search::Worker::start_searching() {
accumulatorStack.reset();
// Non-main threads go directly to iterative_deepening()
if (!is_mainthread())
{
iterative_deepening();
return;
}
main_manager()->tm.init(limits, rootPos.side_to_move(), rootPos.game_ply(), options,
main_manager()->originalTimeAdjust);
tt.new_search();
- Move bookMove = Move::none();
+ Move preferredMove = Move::none();
+ Move bookMove = Move::none();
if (rootMoves.empty())
{
rootMoves.emplace_back(Move::none());
main_manager()->updates.onUpdateNoMoves(
{0, {rootPos.checkers() ? -VALUE_MATE : VALUE_DRAW, rootPos}});
}
else
{
if (!limits.infinite && !limits.mate)
{
- if ((bool) options["Experience Enabled"] && (bool) options["Experience Book"])
- bookMove = experience.probe(rootPos, (int) options["Experience Book Width"],
- (int) options["Experience Book Eval Importance"],
- (int) options["Experience Book Min Depth"],
- (int) options["Experience Book Max Moves"]);
-
- if (bookMove == Move::none() && (bool) options["Book1"]
+ if ((bool) options["Experience Enabled"] && (bool) options["Experience Prior"])
+ preferredMove =
+ experience.probe(rootPos, (int) options["Experience Width"],
+ (int) options["Experience Eval Weight"],
+ (int) options["Experience Min Depth"],
+ (int) options["Experience Max Moves"]);
+
+ if ((bool) options["Book1"]
&& rootPos.game_ply() / 2 < (int) options["Book1 Depth"])
bookMove = polybook[0].probe(rootPos, (bool) options["Book1 BestBookMove"],
(int) options["Book1 Width"]);
if (bookMove == Move::none() && (bool) options["Book2"]
&& rootPos.game_ply() / 2 < (int) options["Book2 Depth"])
bookMove = polybook[1].probe(rootPos, (bool) options["Book2 BestBookMove"],
(int) options["Book2 Width"]);
}
+ if (preferredMove != Move::none()
+ && std::find(rootMoves.begin(), rootMoves.end(), preferredMove) != rootMoves.end())
+ for (auto&& th : threads)
+ std::swap(th->worker.get()->rootMoves[0],
+ *std::find(th->worker.get()->rootMoves.begin(),
+ th->worker.get()->rootMoves.end(), preferredMove));
+
if (bookMove != Move::none()
&& std::find(rootMoves.begin(), rootMoves.end(), bookMove) != rootMoves.end())
{
for (auto&& th : threads)
std::swap(th->worker.get()->rootMoves[0],
*std::find(th->worker.get()->rootMoves.begin(),
th->worker.get()->rootMoves.end(), bookMove));
}
else
{
threads.start_searching(); // start non-main threads
iterative_deepening(); // main thread start searching
}
}
// When we reach the maximum depth, we can arrive here without a raise of
// threads.stop. However, if we are pondering or in an infinite search,
// the UCI protocol states that we shouldn't print the best move before the
// GUI sends a "stop" or "ponderhit" command. We therefore simply wait here
// until the GUI sends one of those commands.
while (!threads.stop && (main_manager()->ponder || limits.infinite))
{} // Busy wait for a stop or a ponder reset
// Stop the threads if not already stopped (also raise the stop if
// "ponderhit" just reset threads.ponder)
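The core of the new `preferredMove` handling can be sketched in isolation (moves represented as strings for brevity; the real code operates on each thread's `rootMoves`):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Sketch of the root-move biasing added in the diff above. If the experience
// probe suggests a move that is actually present in the root-move list, it is
// swapped to the front so iterative deepening examines it first. Unlike a
// book hit, the search still runs in full afterwards; the hint only changes
// move ordering.
void apply_experience_prior(std::vector<std::string>& rootMoves,
                            const std::string& preferredMove) {
    auto it = std::find(rootMoves.begin(), rootMoves.end(), preferredMove);
    if (it != rootMoves.end())
        std::swap(rootMoves.front(), *it);
    // If the move is unknown (stale entry, or illegal in this position),
    // ordering is left untouched, so a corrupt experience file can never
    // force an illegal move.
}
```

This is why the diff keeps the `threads.start_searching()` / `iterative_deepening()` path even after a successful probe: the experience entry reorders, while only a book move short-circuits the search.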
Changelog
Added
- Engine now appends the build date after its name in UCI identification.
- UCI option `Minimum Thinking Time` to enforce a minimum search duration per move.

What effect does the “Adaptive Style” box have?
Enabling the “Adaptive Style” option activates an experimental evaluation tweak: the engine biases its evaluation score using simple positional cues (pressure on the enemy king, defenders near its own king, and control of central squares), slightly altering its playing style relative to the standard evaluation.
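The shape of such a cue-based bias can be illustrated as follows; the cue names and weights here are hypothetical, not the engine's actual terms:

```cpp
// Hypothetical sketch of the kind of bias "Adaptive Style" could apply.
// The cues and weights are illustrative only; units are centipawn-like
// internal scores.
int adaptive_bias(int enemyKingPressure, int ownKingDefenders, int centerControl) {
    // Reward attacking chances and central control, and value shelter
    // around the engine's own king.
    return 2 * enemyKingPressure + ownKingDefenders + centerControl;
}

int adaptive_eval(int baseEval, bool adaptiveStyle,
                  int enemyKingPressure, int ownKingDefenders, int centerControl) {
    // With the option off (the default), the standard evaluation is
    // returned unchanged, matching the "disabled by default" comment in
    // the engine.cpp diff.
    if (!adaptiveStyle)
        return baseEval;
    return baseEval + adaptive_bias(enemyKingPressure, ownKingDefenders, centerControl);
}
```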