SugaR is a free, powerful UCI chess engine derived from Stockfish, which in turn traces its roots to Glaurung 2.1. SugaR is not a complete chess program and requires a UCI-compatible graphical user interface (GUI) (e.g. XBoard with PolyGlot, Scid, Cute Chess, eboard, Arena, Sigma Chess, Shredder, Chess Partner or Fritz) in order to be used comfortably. Read the documentation for your GUI of choice for information about how to use the engine with it.
The SugaR engine features two evaluation functions for chess, the classical evaluation based on handcrafted terms, and the NNUE evaluation based on efficiently updatable neural networks. The classical evaluation runs efficiently on almost all CPU architectures, while the NNUE evaluation benefits from the vector intrinsics available on most CPUs (sse2, avx2, neon, or similar).
This distribution of SugaR consists of the following files:
- Readme.md, the file you are currently reading.
- Copying.txt, a text file containing the GNU General Public License version 3.
- src, a subdirectory containing the full source code, including a Makefile that can be used to compile the engine on Unix-like systems.
- a file with the .nnue extension, storing the neural network for the NNUE evaluation. Binary distributions will have this file embedded.
Note: to use the NNUE evaluation, the additional data file with neural network parameters needs to be available. Normally, this file is already embedded in the binary, or it can be downloaded. The filename for the default (recommended) net can be found as the default value of the EvalFile UCI option, with the format nn-[SHA256 first 12 digits].nnue (for instance, nn-c157e0a5755b.nnue). This file can be downloaded from https://tests.stockfishchess.org/api/nn/[filename], replacing [filename] as needed.
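As a sketch, the download URL for a given net can be assembled like this (the net name below is the example from above; the real default name is shown by the EvalFile UCI option):

```shell
# Example net name (substitute the name reported by your binary's EvalFile option):
NET=nn-c157e0a5755b.nnue

# Build the download URL by substituting the filename:
URL="https://tests.stockfishchess.org/api/nn/$NET"
echo "$URL"

# Fetch it (requires network access), e.g.:
#   curl -LO "$URL"
# Sanity check: the first 12 hex digits of the file's SHA-256 must match the name:
#   sha256sum "$NET" | cut -c1-12
```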
Currently, Stockfish has the following UCI options:
- **Threads**: The number of CPU threads used for searching a position. For best performance, set this equal to the number of CPU cores available.
- **BruteForceSearch**: Sets a number of threads that run a full-depth, brute-force search in parallel. The default is 0 (disabled). If you want to set it to 1 or 2, it must be set after the Threads option. If set to a value >= Threads, all threads, including the main thread, will search at full depth.
- **Hash**: The size of the hash table in MB. It is recommended to set Hash after setting Threads.
- Useful for long analysis: this option allows you to save the current hash table to your hard drive and reload it later.
- **Ponder**: Let the engine ponder its next move while the opponent is thinking.
- **MultiPV**: Output the N best lines (principal variations, PVs) when searching. Leave at 1 for best performance.
- **Use NNUE**: Toggle between the NNUE and classical evaluation functions. If set to "true", the network parameters must be available to load from file (see also EvalFile), if they are not embedded in the binary.
- **EvalFile**: The name of the file containing the NNUE evaluation parameters. Depending on the GUI, the filename might have to include the full path to the folder/directory that contains the file. Other locations, such as the directory that contains the binary and the working directory, are also searched.
- **UCI_AnalyseMode**: An option handled by your GUI.
- **UCI_Chess960**: An option handled by your GUI. If true, the engine will play Chess960.
- **UCI_ShowWDL**: If enabled, show approximate WDL statistics as part of the engine output. These WDL numbers model expected game outcomes for a given evaluation and game ply for engine self-play at fishtest LTC conditions (60+0.6s per game).
- For analysis purposes, values 1 to 8 widen the search by raising MultiPV internally:
  - Value 1 corresponds to MultiPV = 2
  - Value 2 to MultiPV = 4
  - Value 3 to MultiPV = 8
  - Value 4 to MultiPV = 16
  - Value 5 to MultiPV = 32
  - Value 6 to MultiPV = 64
  - Value 7 to MultiPV = 128
  - Value 8 to MultiPV = 256

  With values 1-8, higher depths take longer to reach. So fewer tactical shots are missed, but some Elo is lost, increasingly up to 8 (MultiPV = 256). Recommended values: 2 to 5 (above 5 the search width is too wide).
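The mapping above is simply a doubling of MultiPV for each step, i.e. MultiPV = 2^value, which can be checked with a one-liner:

```shell
# Print the value -> MultiPV mapping for values 1 through 8:
awk 'BEGIN { for (v = 1; v <= 8; v++) printf "value %d -> MultiPV %d\n", v, 2^v }'
```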
- This option can be used to disable null-move pruning.
- Default: 0, Min: 0, Max: 40. Plays different opening lines from the default (0) when not playing from a book (see below). Higher variety makes a loss of Elo more probable.
- **UCI_LimitStrength**: Enable weaker play aiming for an Elo rating as set by UCI_Elo. This option overrides Skill Level.
- **UCI_Elo**: If enabled by UCI_LimitStrength, aim for an engine strength of the given Elo. This Elo rating has been calibrated at a time control of 60s+0.6s and anchored to CCRL 40/4.
- **Skill Level**: Lower the Skill Level in order to make the engine play weaker (see also UCI_LimitStrength). Internally, MultiPV is enabled, and with a certain probability depending on the Skill Level a weaker move will be played.
- **SyzygyPath**: Path to the folders/directories storing the Syzygy tablebase files. Multiple directories are to be separated by ";" on Windows and by ":" on Unix-based operating systems. Do not use spaces around the ";" or ":".

  Example: `C:\tablebases\wdl345;C:\tablebases\wdl6;D:\tablebases\dtz345;D:\tablebases\dtz6`

  It is recommended to store .rtbw files on an SSD. There is no loss in storing the .rtbz files on a regular HD. It is recommended to verify all md5 checksums of the downloaded tablebase files (`md5sum -c checksum.md5`), as corruption will lead to engine crashes.
- **SyzygyProbeDepth**: Minimum remaining search depth for which a position is probed. Set this option to a higher value to probe less aggressively if you experience too much slowdown (in terms of nps) due to TB probing.
- **Syzygy50MoveRule**: Disable to let fifty-move rule draws detected by Syzygy tablebase probes count as wins or losses. This is useful for ICCF correspondence games.
- **SyzygyProbeLimit**: Limit Syzygy tablebase probing to positions with at most this many pieces left (including kings and pawns).
- **Contempt**: A positive value for contempt favors middle game positions and avoids draws; effective for the classical evaluation only.
- **Analysis Contempt**: By default, contempt is set to prefer the side to move. Set this option to "White" or "Black" to analyse with contempt for that side, or "Off" to disable contempt.
- **Move Overhead**: Assume a time delay of x ms due to network and GUI overheads. This is useful to avoid losses on time in those cases.
- **Slow Mover**: Lower values will make the engine take less time in games; higher values will make it think longer.
- **nodestime**: Tells the engine to use nodes searched instead of wall time to account for elapsed time. Useful for engine testing.
- **Clear Hash**: Clear the hash table.
- **Debug Log File**: Write all communication to and from the engine into a text file.
- Experience file structure:
  - e4 (from start position)
  - c4 (from start position)
  - Nf3 (from start position)
  - c5 (after 1. e4)
  - d6 (after 1. e4)

  That is 2 positions and a total of 5 moves in those positions.
  Now imagine SugaR plays 1. e4 again: it will store this move in the experience file, but it will be a duplicate, because 1. e4 is already stored. The experience file will now contain the following:
  - e4 (from start position)
  - c4 (from start position)
  - Nf3 (from start position)
  - c5 (after 1. e4)
  - d6 (after 1. e4)
  - e4 (from start position)

  Now we have 2 positions, 6 moves, and 1 duplicate move (so the total number of unique moves is effectively 5).
  Duplicate moves are a problem and should be removed by merging them with existing moves. The merge operation keeps the move with the highest depth and discards the other ones. However, when the engine loads the experience file, it only merges duplicate moves in memory, without rewriting the experience file (this makes startup and loading faster).

  At this point, the experience file is considered fragmented, because it contains duplicate moves. The fragmentation percentage is simply: (total duplicate moves) / (total moves) * 100. In this example the fragmentation level is: 1/6 * 100 = 16.67%.
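The fragmentation figure from the example can be reproduced with a quick calculation:

```shell
# 1 duplicate move out of 6 stored moves, as in the example above:
awk 'BEGIN { dup = 1; total = 6; printf "fragmentation: %.2f%%\n", dup / total * 100 }'
# prints: fragmentation: 16.67%
```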
- Default: False. If activated, the experience file is read-only, and SugaR plays using the moves stored in the experience file as if it were a book.
- **Experience Book Best Move**: Similar to BestBookMove. If enabled, the best move from the experience book will be played. If disabled, a random move from the experience file will be played (not necessarily the best one).
- This setting limits the number of moves that can be played from the experience book. If you configure 16, the engine will only play 16 book moves (if available).
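To tie the option list together: all of these options are set through plain UCI text commands sent to the engine's standard input (usually by your GUI). A hypothetical session might look like the following, where the option values are examples, not recommendations:

```
uci
setoption name Threads value 4
setoption name Hash value 1024
setoption name MultiPV value 1
isready
position startpos
go movetime 5000
```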
Both approaches assign a value to a position that is used in alpha-beta (PVS) search to find the best move. The classical evaluation computes this value as a function of various chess concepts, handcrafted by experts, tested and tuned using fishtest. The NNUE evaluation computes this value with a neural network based on basic inputs (e.g. piece positions only). The network is optimized and trained on the evaluations of millions of positions at moderate search depth.
The NNUE evaluation was first introduced in shogi, and ported to Stockfish afterward. It can be evaluated efficiently on CPUs, and exploits the fact that only parts of the neural network need to be updated after a typical chess move. The nodchip repository provides additional tools to train and develop the NNUE networks.
On CPUs supporting modern vector instructions (avx2 and similar), the NNUE evaluation results in stronger playing strength, even if the nodes per second computed by the engine is somewhat lower (roughly 60% of nps is typical).
Note that the NNUE evaluation depends on the Stockfish binary and the network parameter file (see EvalFile). Not every parameter file is compatible with a given Stockfish binary. The default value of the EvalFile UCI option is the name of a network that is guaranteed to be compatible with that binary.
If the engine is searching a position that is not in the tablebases (e.g. a position with 8 pieces), it will access the tablebases during the search. If the engine reports a very large score (typically 153.xx), this means it has found a winning line into a tablebase position.
If the engine is given a position to search that is in the tablebases, it will use the tablebases at the beginning of the search to preselect all good moves, i.e. all moves that preserve the win or preserve the draw while taking into account the 50-move rule. It will then perform a search only on those moves. The engine will not move immediately, unless there is only a single good move. The engine likely will not report a mate score, even if the position is known to be won.
It is therefore clear that this behaviour is not identical to what one might be used to with Nalimov tablebases. There are technical reasons for this difference, the main technical reason being that Nalimov tablebases use the DTM metric (distance-to-mate), while Syzygybases use a variation of the DTZ metric (distance-to-zero, zero meaning any move that resets the 50-move counter). This special metric is one of the reasons that Syzygybases are more compact than Nalimov tablebases, while still storing all information needed for optimal play and in addition being able to take into account the 50-move rule.
Stockfish supports large pages on Linux and Windows. Large pages make the hash access more efficient, improving the engine speed, especially on large hash sizes. Typical increases are 5..10% in terms of nodes per second, but speed increases up to 30% have been measured. The support is automatic. Stockfish attempts to use large pages when available and will fall back to regular memory allocation when this is not the case.
Large page support on Linux is obtained by the Linux kernel transparent huge pages functionality. Typically, transparent huge pages are already enabled, and no configuration is needed.
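As a quick check on Linux, you can inspect the kernel's transparent huge page setting; the sysfs path below is the standard location, and the fallback message covers systems where it does not exist:

```shell
# Show the current THP mode, e.g. "[always] madvise never":
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null \
  || echo "transparent huge pages not available"
```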
On Windows, the use of large pages requires the "Lock Pages in Memory" privilege. See Enable the Lock Pages in Memory Option (Windows) on how to enable this privilege, then run RAMMap to double-check that large pages are used. We suggest that you reboot your computer after you have enabled large pages, because long Windows sessions suffer from memory fragmentation, which may prevent Stockfish from getting large pages: a fresh session is better in this regard.
Stockfish has support for 32 or 64-bit CPUs, certain hardware instructions, big-endian machines such as Power PC, and other platforms.
On Unix-like systems, it should be easy to compile Stockfish directly from the source code with the included Makefile in the folder src. In general, it is recommended to run make help to see a list of make targets with corresponding descriptions.

```shell
cd src
make help
make net
make build ARCH=x86-64-modern
```
When not using the Makefile to compile (for instance, with Microsoft MSVC) you need to manually set/unset some switches in the compiler command line; see file types.h for a quick reference.
When reporting an issue or a bug, please tell us which version and compiler you used to create your executable. This information can be found by typing the following command in a console:

```shell
./stockfish compiler
```
Stockfish's improvement over the last couple of years has been a great community effort. There are a few ways to help contribute to its growth.
Improving Stockfish requires a massive amount of testing. You can donate your hardware resources by installing the Fishtest Worker and view the current tests on Fishtest.
If you want to help improve the code, there are several valuable resources:
-
In this wiki, many techniques used in Stockfish are explained with a lot of background information.
-
The section on Stockfish describes many features and techniques used by Stockfish. However, it is generic rather than being focused on Stockfish's precise implementation. Nevertheless, a helpful resource.
-
The latest source can always be found on GitHub. Discussions about Stockfish take place in the FishCooking group and engine testing is done on Fishtest. If you want to help improve Stockfish, please read this guideline first, where the basics of Stockfish development are explained.
Stockfish is free, and distributed under the GNU General Public License version 3 (GPL v3). Essentially, this means you are free to do almost exactly what you want with the program, including distributing it among your friends, making it available for download from your website, selling it (either by itself or as part of some bigger software package), or using it as the starting point for a software project of your own.
The only real limitation is that whenever you distribute Stockfish in some way, you must always include the full source code, or a pointer to where the source code can be found. If you make any changes to the source code, these changes must also be made available under the GPL.
For full details, read the copy of the GPL v3 found in the file named Copying.txt.