Sturddle 2 is a fork of my Sturddle Chess Engine (https://github.com/cristivlas/sturddle-chess-engine) with many bug fixes and a rewritten (and trained from scratch) neural network.
Running `python3 tools\build.py` builds a native executable for the host OS. The executable bundles binary images that support AVX512, AVX2, and generic SSE2.
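The launcher picks the most capable image for the CPU it runs on. Below is a minimal sketch of that kind of selection, assuming a Linux host; the image names are hypothetical, and the packaged executable performs an equivalent check internally:

```python
# Illustration only: choose the most capable bundled image based on CPU flags.
# The image names below are hypothetical; the real executable does this on its own.
def cpu_flags():
    """Return the CPU feature flags reported by /proc/cpuinfo (Linux only)."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass
    return set()

flags = cpu_flags()
if "avx512f" in flags:
    image = "sturddle_avx512"
elif "avx2" in flags:
    image = "sturddle_avx2"
else:
    image = "sturddle_sse2"   # generic fallback
print("selected image:", image)
```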
To build just a Python module:

`python3 setup.py build_ext --inplace`

or:

`CC=clang++ CFLAGS=-march=native python3 setup.py build_ext --inplace`

or:

`CC=clang++ CFLAGS=-march=native NATIVE_UCI=1 python3 setup.py build_ext --inplace`
Clang is recommended; the GNU C++ compiler may work, but it is not supported.
If built with the `NATIVE_UCI` flag, invoke `main.py` to run the UCI engine. Without the `NATIVE_UCI` flag, run `sturddle.py` instead.
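Either entry point speaks UCI, so any UCI-capable GUI or library can drive it. As a quick smoke test, here is a sketch using the python-chess package (an assumption; install it with `pip install chess`):

```python
# Minimal UCI smoke test with python-chess (pip install chess).
# Swap in "sturddle.py" if the module was built without NATIVE_UCI.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci(["python3", "main.py"])
board = chess.Board()
result = engine.play(board, chess.engine.Limit(time=1.0))  # think for ~1 second
print("engine move:", result.move)
engine.quit()
```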
I have trained the neural net on a large dataset that I generated and curated over a couple of years.
To start from scratch:
1. Start with games saved as PGN (downloaded from the Internet, or from engine tournaments).
2. Generate sqlite3 database(s) of positions from the PGN games, using `tools/sqlite/mkposdb.py`.
3. Analyse the positions with `tools/sqlite/analyse.py` or `tools/sqlite/analyse_parallel.py`; both scripts require a UCI engine for analysis (such as Sturddle 1.xx, Stockfish, etc.).
   Alternatively:
   - Download PGNs from https://database.lichess.org/ and extract the evaluations using `tools/sqlite/pgntoevals.py` (a rough Python sketch of this kind of extraction appears after this list), or
   - Download binpack files, convert them to plain format using a development version of Stockfish, then use `tools/sqlite/plaintodb.py`, or
   - Generate datasets with this engine itself, by compiling it with `DATAGEN` enabled (please refer to the source code for details).
4. Generate HDF5 file(s) from the database(s) produced by any of the methods above, using `tools/nnue/toh5.py`.
5. Train the neural net by running `tools/nnue/train-v4.py` (requires TensorFlow and the CUDA toolkit).
6. Generate `weights.h` by exporting the model trained in step 5: `tools/nnue/train.py export -m -o weights.h`
7. Build the engine (using `tools/build.py`, or by running `python3 setup.py build_ext --inplace`).

Alternatively, models can be saved as JSON with `tools/nnue/modeltojson.py`. Such a model can then be loaded into the engine at runtime with the UCI command `setoption name NNUEModel value YOUR-JSON-FILE-HERE` (see the example below).
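As referenced above, here is a rough sketch of the kind of extraction used in the Lichess alternative and the HDF5 conversion in step 4. It assumes python-chess and h5py are installed and that the PGNs carry `[%eval ...]` annotations; the dataset names and layout are made up for illustration and are not the format that `tools/sqlite/pgntoevals.py` or `tools/nnue/toh5.py` actually produce:

```python
# Illustration only: extract (FEN, centipawn eval) pairs from an annotated PGN
# and store them in an HDF5 file. The layout here is hypothetical and is NOT
# the format produced by the repository's own scripts.
import chess.pgn          # pip install chess
import h5py               # pip install h5py
import numpy as np

def extract_evals(pgn_path):
    """Yield (fen, score_cp) for every annotated position in the PGN file."""
    with open(pgn_path) as pgn:
        while (game := chess.pgn.read_game(pgn)) is not None:
            board = game.board()
            for node in game.mainline():
                board.push(node.move)
                score = node.eval()          # None if the move has no [%eval]
                if score is not None:
                    yield board.fen(), score.white().score(mate_score=10000)

def write_h5(rows, out_path):
    """Write the extracted positions to a simple two-column HDF5 file."""
    fens, scores = zip(*rows)
    with h5py.File(out_path, "w") as f:
        f.create_dataset("fen", data=np.array(fens, dtype=h5py.string_dtype()))
        f.create_dataset("score_cp", data=np.array(scores, dtype=np.int32))

write_h5(extract_evals("lichess_games.pgn"), "positions.h5")
```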
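The runtime JSON loading mentioned above can be driven from python-chess as well (again an assumption that the library is installed); `model.json` is a placeholder for a file produced by `tools/nnue/modeltojson.py`:

```python
# Load a JSON model at runtime via the NNUEModel UCI option.
# "model.json" is a placeholder; use the file produced by tools/nnue/modeltojson.py.
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci(["python3", "main.py"])
engine.configure({"NNUEModel": "model.json"})
engine.quit()
```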
Note that the name NNUE used here only means that the neural net is updated efficiently (i.e. in an incremental fashion). The model is original and consists of a relatively small number of parameters (485761).
Inference runs on the CPU using vectorized instructions; Intel as well as ARM v7 / ARM64 (with NEON) architectures are supported.
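The incremental-update idea can be illustrated with plain NumPy: the first-layer accumulator is the sum of the weight vectors of the active input features, so applying a move only requires subtracting the features that disappeared and adding the ones that appeared. This is a conceptual sketch, not the engine's actual feature encoding or data layout:

```python
# Conceptual sketch of efficient (incremental) accumulator updates with NumPy.
# Feature indices and layer sizes are made up; the engine's real layout differs.
import numpy as np

NUM_FEATURES = 768      # hypothetical number of input features
HIDDEN = 256            # hypothetical first-layer width

rng = np.random.default_rng(0)
W = rng.standard_normal((NUM_FEATURES, HIDDEN)).astype(np.float32)

def full_refresh(active_features):
    """Recompute the accumulator from scratch: sum of rows for active features."""
    return W[list(active_features)].sum(axis=0)

def incremental_update(acc, removed, added):
    """Apply a move by subtracting removed features and adding new ones."""
    return acc - W[list(removed)].sum(axis=0) + W[list(added)].sum(axis=0)

active = {12, 100, 345, 600}
acc = full_refresh(active)

# A "move" removes feature 100 and adds feature 101:
acc2 = incremental_update(acc, removed={100}, added={101})
assert np.allclose(acc2, full_refresh({12, 101, 345, 600}))
```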
There are two ways to tune the parameters defined in `config.h`:
- using https://chess-tuning-tools.readthedocs.io/en/latest/, or
- using the Lakas optimizer, https://github.com/fsmosca/Lakas
- Install the preferred tool and `cutechess-cli`.
- Edit the `config.h` file, and replace `DECLARE_VALUE` with `DECLARE_PARAM` for the parameters to be tuned.
- Build the python module, preferably with `NATIVE_UCI`: `CC=clang++ CFLAGS=-march=native NATIVE_UCI=1 python3 setup.py build_ext --inplace`. Run `./main.py` and enter the `uci` command; the parameters of interest should be listed in the output (see the check after this list). Quit the engine.
- Run `tuneup/gentune.py` to generate a JSON configuration file for chess-tuning-tools, or `tuneup/genlakas.py` to generate a wrapper script that invokes the Lakas optimizer.
- Run the optimizer. Once the optimizer converges, edit the `config.h` file and change the values of the parameters; change `DECLARE_PARAM` back to `DECLARE_VALUE`.
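A quick way to confirm that the `DECLARE_PARAM` parameters are actually exposed is to list the engine's UCI options, for example with python-chess (assumed installed):

```python
# List the UCI options exposed by the engine; parameters declared with
# DECLARE_PARAM should show up here alongside the regular options.
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci(["./main.py"])
for name, option in engine.options.items():
    print(f"{name}: type={option.type}, default={option.default}, "
          f"min={option.min}, max={option.max}")
engine.quit()
```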