
Releases: lightvector/KataGo

Experimental Neural Nets

06 Jun 02:46
Pre-release

This is not intended to be a release of the main KataGo program, but rather just an update to the neural nets. The latest release of the code and executables can be found here.

New Nets

Uploaded here are some new experimentally-trained neural nets! These neural nets have been trained using some amount of human and other external data (varying from 5% to 10%) as initial starting positions or "hint" positions for self-play games (in which a potentially unexpected good move is guaranteed to be noised and searched more deeply). So all data is still generated by self-play rather than taken from outside, but the positions (and rare "hints") in such games may come from well outside the normal distribution of positions that the net would see if self-playing entire games from the empty board.

Additionally, they have been trained to hopefully understand Mi Yuting's flying dagger joseki significantly better - although their understanding may of course still be imperfect, given the immense complexity of the joseki.

These nets are not necessarily stronger than the nets bundled with the v1.4.0 release, which were the final and strongest non-external-data-biased nets.

As measured by pure self-play, the new nets may even be slightly weaker in some cases than the previous nets, perhaps due to having to "spend effort" learning kinds of shapes that rarely came up in matches against themselves. However, the hope is that on average they handle some new kinds of positions better and/or generalize better against other opponents. That remains to be tested!

Many more nets than just the three attached to GitHub here have been uploaded.
They can be found at the usual KataGo g170 download site. These are intermediate versions sampled from between the v1.4.0-bundled final nets and the ones attached here, and they are described in the readme, in case you are interested in testing across intermediate versions to see how the introduction of successive kinds of external data progressively affected the policy and evaluation of specific positions.

Enjoy!

Major PDA/pondering bugfixes, fixed pondering limits

12 May 23:16

If you're upgrading from a version before v1.4.0, please see the v1.4.0 releases page for a variety of notes about changes since v1.3.x that you might care about, as well as the latest and strongest neural nets!

If you're a new user, don't forget to check out this section for getting started and basic usage!

This is a quick release to fix a few bugs, including one pretty major bug for certain configurations.

Changes

  • Fixed a bug introduced earlier in v1.4.x where, in handicap games or in even games where playoutDoublingAdvantage was set to a nonzero value, if pondering was also enabled, the search could sometimes mix values with different signs, greatly damaging the quality and strength of the search.

  • Fixed a bug introduced earlier in v1.4.x where analysis with nonzero playoutDoublingAdvantage would use the wrong sign.

  • For maxTimePondering, maxPlayoutsPondering, and maxVisitsPondering, KataGo will now assume no limit (unbounded) if these values are not specified, instead of defaulting to maxTime, maxPlayouts, and maxVisits, respectively.
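
For example, a config that caps normal search but leaves pondering unbounded now looks like this (the visit counts are arbitrary illustrative values, not recommendations):

    # Limit for normal moves:
    maxVisits = 1000
    # maxVisitsPondering is left unset, so as of this release pondering is
    # unbounded and continues until the opponent actually moves (or until
    # maxTimePondering, if that is set). To cap pondering explicitly:
    # maxVisitsPondering = 10000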

Minor bugfixes, logging to directory

10 May 20:27

This release is outdated; see the releases page for more recent versions with further bugfixes. But also, if you're upgrading from a version before v1.4.0, please see the v1.4.0 releases page for a variety of notes about changes since v1.3.x that you might care about, as well as the latest and strongest neural nets!

This is a quick release to fix a minor bug and slightly improve log management.

Changes

  • Fixed a bug where, if playoutDoublingAdvantage was manually specified in the config as a nonzero value while pondering was enabled, tree reuse would not occur or would be ineffective.
  • The GTP and analysis engines can now log dated files to a directory instead of just a single file, via logDir=DIRECTORY instead of logFile=FILE.log. The default and example configs provided with this release have been updated to do so.
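
For example, the relevant change in a GTP config is a one-line swap (the directory name is whatever you like):

    # Old style: append everything to a single file
    # logFile = gtp.log
    # New style: write a new dated log file into this directory on each run
    logDir = gtp_logs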

New Neural Nets, Optional Wider Analysis, Bugfixes

09 May 21:53

This release is outdated; see the releases page for more recent versions with important bugfixes. But also, if you're upgrading from a version before v1.4.0, see below for a variety of notes about the changes since v1.3.x! Also, as of early May 2020 the latest and strongest neural nets are still the ones here.

If you're a new user, don't forget to check out this section for getting started and basic usage!

This time, we have both new nets and new code!

New Neural Nets!

These are the new strongest neural nets of each size so far. Interestingly, the 40 block net pulled well ahead in how much it improved this time (about 70 Elo), while the 30 block net did not make nearly the same improvement (about 25 Elo). Perhaps the 40 block net got lucky in its gradient descent and stumbled into learning something useful that the other nets didn't. The 20 block net gained maybe around 15 Elo. All of these differences have an uncertainty window of +/- 15 Elo or so (95% confidence), and of course they are measured against a large varied pool of internal KataGo nets, so might vary a bit against very different opponents.

The strongest net to use for weak to midrange hardware is likely the 20 block net. Given its large gain this time though, the 40 block net might be stronger for strong hardware and/or long time controls.

KataGo's main run is close to wrapping up, so these will likely be the last "semi-zero" neural nets released - that is, nets trained purely with no outside data. A few more nets will be released after these as KataGo finishes the end of this run with some experimentation with ways of using outside data.

  • g170-b30c320x2-s3530176512-d968463914 - The latest and final semi-zero 30-block net.
  • g170-b40c256x2-s3708042240-d967973220 - The latest and final semi-zero 40-block net.
  • g170e-b20c256x2-s4384473088-d968438914 - The latest and final semi-zero 20-block net (continuing extended training on games from the bigger nets).

New Feature and Changes this Release:

  • New experimental config option to help analysis: analysisWideRootNoise

    • Set to a small value like 0.04 to make KataGo broaden its search at the root during analysis (such as in Sabaki or Lizzie) and evaluate more moves, making it easier to see KataGo's initial impressions of more moves, at the cost of needing some more time before the top moves get searched as deeply.
    • Or set to a large value like 1 to make KataGo search and evaluate almost every move on the board a bunch.
    • You can also change this value at runtime in the GTP console via kata-set-param analysisWideRootNoise VALUE (see the example session after this list).
    • Only affects analysis, does NOT affect play (e.g. genmove).
  • KataGo will now tolerate model files that have been renamed to just ".gz" rather than one of ".bin.gz" or ".txt.gz".

  • Implemented the cputime and gomill-cpu_time GTP commands, documented here, which should enable automated match/tournament scripts to compare and report the time taken by the bot when running KataGo in tests against other bots or other versions of itself.

  • EDIT (2020-05-12) (accidentally omitted in the initial release notes): Reworked the way KataGo configures playoutDoublingAdvantage and dynamicPlayoutDoublingAdvantage, and slightly improved how it computes the initial lead in handicap games.

    • You can now simply comment out all playoutDoublingAdvantage-related values and KataGo will choose a sensible default, which is to play evenly in even games, and to play aggressively when giving handicap stones, and safely when receiving handicap stones.
    • The default config has been updated accordingly, and you can also read the new config to see how to configure these values going forward if you prefer a non-default behavior.
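
As a concrete sketch of two of the items above: first, a GTP console session using the new runtime commands (the response value here is made up for illustration):

    kata-set-param analysisWideRootNoise 0.04
    =

    cputime
    = 12.836

And second, the new hands-off way to handle PDA in the GTP config:

    # Leave all playoutDoublingAdvantage-related lines commented out to get
    # the new default behavior: play evenly in even games, aggressively when
    # giving handicap stones, and safely when receiving them.
    # playoutDoublingAdvantage = 0.0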

Bugfixes

  • Added workaround logic to correctly handle rules-based score adjustment in handicap games (e.g. +1 point per handicap stone in Chinese rules) when handicap is placed in a non-GTP-compliant way, via consecutive black moves and white passes (see the example after this list). This behavior can still be disabled via assumeMultipleStartingBlackMovesAreHandicap = false.

  • Fixed bug where adjusting the system clock or time zone might interfere with the amount of time KataGo searches for, on some systems.
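
For reference, the non-GTP-compliant handicap placement that the workaround now recognizes looks like this on the wire - consecutive black moves with white passes in between (responses omitted; a fully GTP-compliant client would use place_free_handicap or set_free_handicap instead):

    play black Q16
    play white pass
    play black D4
    play white pass
    play black D16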

For Devs

  • Added a script to better support synchronous training, documented here.

  • Added various new options and flags for the JSON analysis engine, including root info, raw policy, and the ability to override almost any search-related config parameter at runtime (see the example query after this list). The analysis engine now also defaults to finishing all tasks before quitting when stdin is closed, instead of dropping them, although a command line flag can override this.

  • Reorganized the selfplay-related configs into a subdirectory within cpp/configs, along with some internal changes and cleanups to selfplay config parameters and logic. The example configs have been updated, you can diff them to see the relevant changes.

  • num_games_total, which used to behave buggily and unreliably, has been entirely removed from the selfplay config file and replaced with a command line argument, -max-games-total, so that it can be much more easily changed by a script (see the example after this list).
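
To make the analysis engine changes concrete, here is the shape of a single-line JSON query that overrides a search parameter at runtime and requests the raw policy (all values are illustrative; see the analysis engine documentation for the full schema):

    {"id":"q1","moves":[["B","Q16"],["W","D4"]],"rules":"tromp-taylor","komi":7.5,"boardXSize":19,"boardYSize":19,"analyzeTurns":[2],"includePolicy":true,"overrideSettings":{"maxVisits":200}}

And the selfplay game limit is now given on the command line rather than in the config (the count is an arbitrary example, other arguments elided):

    ./katago selfplay -max-games-total 2500 ...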

Enjoy!

Bigger board sizes (just for fun)

16 Apr 01:52
Pre-release

This is a just-for-fun side release of KataGo with a simple change to support board sizes up to 29x29, but not otherwise part of the "main" branch of development. This is because supporting larger board sizes currently has a major cost (which is why the normal KataGo releases don't support them by default).

See the releases page for more recent versions and newer and stronger neural nets.

Please mind these notes if using this side release:

  • You should only use this version if specifically wanting to play larger sizes.
    • Memory usage will be significantly increased, and performance may be decreased a little, in this version.
    • This effect on memory and performance applies even when using this version on board sizes 19x19 and smaller!
  • KataGo's neural nets are NOT trained for sizes above 19x19 and might behave poorly in some cases.
    • However, they have had lots of training on different sizes 9x9 through 19x19 and so, even though there is no guarantee, most of the released nets can probably extrapolate somewhat beyond 19x19 and still usually do well.
    • How well different nets extrapolate might vary; there is even some chance that larger nets do worse than smaller nets at extrapolation. In practice they all mostly seem very, very strong still.
    • The smaller the extrapolation (e.g. 21x21 or 23x23, rather than 27x27 or 29x29), the better and stronger the net is likely to remain.
    • Board evaluation may be a bit more biased or noisier on the larger boards, particularly precise estimation of the score in points, and possibly the net could fail to perceive and understand life/death/capture-races for extremely large dragons.
  • The GTP protocol, which is the universal language that engines like KataGo use to talk to GUIs, does NOT normally support sizes larger than 25x25, so many GUIs might not work.
    • This is due to how the protocol was designed to use alphabetic coordinates, and having no specification for going beyond "Z".
    • KataGo will continue with "AA", "AB", etc. (still skipping any use of the letter "I", to be consistent), but it is quite likely that many GUIs will not have implemented this and will therefore not work with sizes past 25x25 (see the example session after this list).
    • For example, Lizzie doesn't appear to work beyond 25x25 currently.
  • On lower-end GPUs, there is some chance that the biggest nets fail on such a huge board, for example by running your GPU out of memory.
    • You might have to switch to a smaller net if you run into such issues.
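
For instance, a GTP session on a 29x29 board might look like the following, with "AA" being column 26 (the engine's reply here is made up purely for illustration):

    boardsize 29
    =

    play black AA15
    =

    genmove white
    = AB14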

And of course, since large board sizes aren't tested heavily, there is some chance of a bug that makes KataGo itself not work on this branch (separately from any mistakes by the net). If so, you can open an issue or otherwise let me know and I'll fix it. For OpenCL, this version might also re-run its tuning upon first startup.

Keeping the above details in mind though, KataGo should be able to provide some strong play on these larger board sizes even with no training for them, so have fun!

New Neural Nets

16 Apr 01:09
Pre-release

This is not intended to be a release of the main KataGo program, but rather just an update to the neural nets available as the run is ongoing. Also, more recent releases can be found here.

More nets!

These are the new strongest neural nets of each size so far. The 30 and 40 block nets each gained perhaps 65 Elo according to some partially-underway tests, while the 20-block net gained perhaps 55 Elo, since last release. These differences do have some uncertainty though, on the order of +/- 25 Elo (95% confidence), due to measurement noise.

You may notice from the filename that the 20 block net released here is slightly older than the very latest 20 block net. Preliminary testing showed that this one may have gotten lucky with a random fluctuation, as sometimes happens between net versions, and may actually be a little stronger than the very latest, so I packaged up this one instead.

  • g170-b30c320x2-s2846858752-d829865719 - The latest 30-block net.
  • g170-b40c256x2-s2990766336-d830712531 - The latest 40-block net.
  • g170e-b20c256x2-s3761649408-d809581368 - The latest released 20-block net (continuing extended training on games from the bigger nets).

Enjoy!

Bugfix, Analysis Engine Priority

31 Mar 02:01

This is a quick bugfix release following v1.3.4. See the releases page for more recent versions, including the most recent neural nets!

Note if upgrading from v1.3.3 or earlier: KataGo OpenCL version on Windows will now keep its tuning data within the directory containing the executable, instead of from where you run it. This might mean that it will need to re-tune once more if you've been running it from a GUI that calls it from elsewhere. If so, then as usual, you can run the benchmark to let it tune as described in: https://github.com/lightvector/KataGo#how-to-use

Changes

  • Fixed a bug in the printsgf GTP command that would cause it to fail for some possible rules configurations, and made rules parsing generally a little more lenient for certain ruleset aliases.
  • The JSON analysis engine (./katago analysis) now supports an optional priority for queries, documentation here.

New Nets, Negative PDA, Improved defaults

29 Mar 20:21

If you're a new user, don't forget to check out this section for getting started and basic usage!

Note if upgrading from v1.3.3 or earlier: KataGo OpenCL version on Windows will now keep its tuning data within the directory containing the executable, instead of from where you run it. This might mean that it will need to re-tune once more if you've been running it from a GUI that calls it from elsewhere. If so, then as usual, you can run the benchmark to let it tune as described in: https://github.com/lightvector/KataGo#how-to-use

New Neural Nets!

Each of these might be around 40 Elo stronger than those in the previous release of neural nets. Training still continues. :)

  • g170-b30c320x2-s2271129088-d716970897 - ("g170 30 block s2.27G") - The latest 30-block net.
  • g170-b40c256x2-s2383550464-d716628997 - ("g170 40 block s2.38G") - The latest 40-block net.
  • g170e-b20c256x2-s3354994176-d716845198 - ("g170e 20 block s3.35G") - The latest 20-block net (continuing extended training on games from the bigger nets).

KataGo Code Changes this Release

UI and Configs

  • Default configs and models!

    • For commands that need a GTP config file, if there is a file called default_gtp.cfg located in the same directory as the executable, KataGo will load that file by default if -config is not specified.
    • For commands that need a neural net model file, if there is a file called default_model.bin.gz or default_model.txt.gz located in the same directory as the executable, KataGo will load that file by default if -model is not specified.
    • So, if these files are provided, KataGo's main engine can now be invoked purely via ./katago gtp with no further arguments.
  • Adjusted a variety of minor default settings for GTP configs. KataGo should resign slightly more aggressively, have a higher maximum number of moves to display in its PV, use a better number of visits for genconfig, etc. NOTE: some of these may not take effect without switching to the new gtp configs included in the zip files for this release.
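
For example, with the executable's directory laid out like this (these exact default file names are required):

    katago (or katago.exe on Windows)
    default_gtp.cfg
    default_model.bin.gz

the engine can be started with no arguments at all:

    ./katago gtp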

Play Improvements

  • "PDA" scaling should be much more reasonable now on small board sizes than it used to be, hopefully improving handicap game strength on small boards.

  • Negative "PDA" is used by default now for games when KataGo receives handicap stones. It can also be set explicitly in the config for testing non-standard unbalanced openings (but unlike handicap games, for arbitrary unbalanced positions it won't be detected by default). This should greatly improve KataGo's strength in handicap games as Black by causing it to play a little more solidly and safely. Thanks to Friday9i and others in the Leela Zero Discord chat for testing this. Note that the implemented scaling is probably still far from optimal - one can probably obtain better results by tuning PDA to a specific opponent + time control.

  • Added an option avoidMYTDaggerHack = true that, if added/enabled in the GTP config, will have KataGo avoid a specific opening joseki that currently released networks may play poorly. Against certain bots (such as Leela Zero) this also tends to lead to much greater opening variety. A config sketch follows this list.
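
A sketch of how these two options look in a GTP config (the PDA value is an arbitrary hypothetical for testing an unbalanced opening, not a recommendation):

    # A negative value makes KataGo play more solidly and safely, as it now
    # does by default when receiving handicap stones:
    playoutDoublingAdvantage = -1.0
    # Avoid the flying dagger joseki that current nets may play poorly:
    avoidMYTDaggerHack = true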

Dev-oriented Improvements

  • Implemented printsgf GTP command.

  • Implemented a couple of new GTP extensions that involve a mild hack specifically to help support the Sabaki GUI.

  • Removed a lot of old unused or deprecated code, made significant internal refactors, and added a bit of high-level documentation about which parts of the source code implement what.

  • Removed dependence on dirent.h.

Selfplay-training Changes

  • Added a new option to upweight training on positions with surprising evaluations, and a new option to help increase the proportion of "fairly" initialized games during selfplay.

New Neural Nets

14 Mar 16:37
Pre-release

(This is not intended to be a release of the main KataGo program, but rather just an update to the neural nets available as the run is ongoing. A new software release may be out some time later this month or early next month.)

More nets!

These are the new strongest neural nets of each size so far. The 30 and 40 block nets have each gained probably a bit more than 100 Elo since the last release of nets, and the 20-block extended training net has gained maybe 50 Elo since then.

  • g170-b30c320x2-s1840604672-d633482024 - The latest 30-block net.
  • g170-b40c256x2-s1929311744-d633132024 - The latest 40-block net.
  • g170e-b20c256x2-s2971705856-d633407024 - The latest 20-block net (continuing extended training on games from the bigger nets).

Enjoy!

New Nets, Friendlier Configuration, Faster Model Loading, KGS Support

28 Feb 03:17

If you're a new user, don't forget to check out this section for getting started and basic usage!

More Neural Nets!

After an unfortunately-long pause for a large chunk of February in which KataGo was not able to continue training due to hardware/logistical issues, KataGo's run has resumed!

  • g170-b20c256x2-s2107843328-d468617949 ("g170 20 block s2.11G") - This is the final 20-block net that was used in self-play for KataGo's current run, prior to switching to larger nets. It might be very slightly stronger than the 20 block net in the prior release, "g170 20 block s1.91G".

  • g170-b30c320x2-s1287828224-d525929064 ("g170 30 block s1.29G") - Bigger 30-block neural net! This is one of the larger sizes that KataGo is now attempting to train. Per-playout, this net should be noticeably stronger than prior nets, perhaps as much as 140 Elo stronger than "s1.91G". However, at least at low-thousands of playouts it is not as strong yet per-equal-compute-time. But the run is still ongoing. We'll see how things develop in the coming weeks/months!

  • g170-b40c256x2-s1349368064-d524332537 ("g170 40 block s1.35G") - A 40-block neural net, but with fewer channels than the 30-block net! This is the other of the larger sizes that KataGo is now attempting to train. Same thing for this one - should be stronger at equal playouts, but weaker at equal compute for modest amounts of compute.

  • g170e-b20c256x2-s2430231552-d525879064 ("g170e 20 block s2.43G") - We're continuing to extendedly-train the 20-block net on the games generated by the larger nets, even though it is not being used for self-play any more. This net might be somewhere around 70 Elo stronger than "s1.91G" by some rough tests.

Per playout, and in terms of raw judgment, either the 30-block or 40-block net should be the strongest KataGo net so far, but per compute time, the 20-block extended-training "s2.43G" is likely the strongest net. Extensive testing and comparison has not been done yet though.

The latter three nets are attached below. If you want the first one, or for all other currently-released g170 nets, take a look here: https://d3dndmfyhecmj0.cloudfront.net/g170/neuralnets/index.html

New Model Format

Starting with this release, KataGo is moving to a new model format which is a bit smaller on disk and faster to load, indicated by a new file extension ".bin.gz" instead of ".txt.gz". The new format will NOT work with earlier KataGo versions. However, version 1.3.3 in this release will still be able to load all older models.

If you are using some of the older/smaller nets from this run (for example, the much faster 10 or 15-block extended-training nets) and would like to get ".bin.gz" versions of prior nets, they are also available at: https://d3dndmfyhecmj0.cloudfront.net/g170/neuralnets/index.html

Other Changes this Release

Configuration and user-friendliness

  • There is a new top-level subcommand that can be used to automatically tune and generate a GTP config, editing the rules, thread settings, and memory usage settings within the config for you, based on your preferences: ./katago genconfig -model <NEURALNET>.gz -output <NAME_OF_NEW_GTP_CONFIG>.cfg. Hopefully this helps newer users, or people trying to set up things on behalf of newer users!

  • All the rules-related options in the GTP config can now be replaced with just a single line rules=chinese or rules=japanese or rules=tromp-taylor or other possible values if desired! As demonstrated in gtp_example.cfg. See the documentation for kata-set-rules here for what rules are possible besides those, and see here for a formal description of KataGo's full ruleset.

  • katago gtp now has a new argument -override-config KEY=VALUE,KEY=VALUE,... that can be used to specify or override arbitrary values in the GTP config on the command line.

  • The OpenCL version will now detect CPU-based OpenCL devices, and might now run on some pure-CPU machines with no GPU.
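
Concretely, all of the rules-related lines in a config can collapse to one line, and the same keys can be overridden at launch (the model and config file names here are just examples):

    rules = japanese

    ./katago gtp -model default_model.bin.gz -config default_gtp.cfg -override-config rules=chinese,ponderingEnabled=true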

GTP extensions

  • KataGo now supports KGS's GTP extension commands kgs-rules and kgs-time_settings. They can be used to set KataGo's rules to the settings necessary for the possible rules that KGS games can be played under, as well as traditional Japanese-style byo-yomi that is very popular on a large number of online servers. See here for some documentation on KataGo's implementation of these commands.

  • Added kata-raw-nn GTP extension to dump raw evaluations of KataGo's neural net, documentation in the usual place.
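
For example, to set a KGS-style byo-yomi of 20 minutes main time plus five 30-second periods, and then dump the raw net outputs for symmetry 0 (the response is abbreviated here; see the linked documentation for the exact output fields):

    kgs-time_settings byoyomi 1200 30 5
    =

    kata-raw-nn 0
    = (raw policy, value, and other outputs follow)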

Misc

  • Added a mild hack to fix some instability in some neural nets involving passing near the very end of the game, which could cause the reported value to erroneously fluctuate by a percent or two.

  • For those who run self-play training, a new first argument is required for shuffle_and_export_loop.sh and/or export_model_for_selfplay.sh - you should provide a globally unique prefix to distinguish your models in any given run from any other run, ideally including those of other users (see the example after this list). This prefix gets displayed in logs, so that if you share your models with others, users can know which model is from where.

  • Various other minor changes and cleanups.
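
For example, an invocation might now begin like this, where the first argument is whatever globally unique prefix you choose and the remaining arguments are unchanged from before (elided here):

    ./shuffle_and_export_loop.sh mylab-run1 ...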

Edit (2020-02-28) - fixed a bug where, if for some reason you tried to ungzip the .bin.gz model files instead of loading them directly, the raw .bin could not be loaded. Bumped the release tag and updated the executables.