
Investigations of OpenBench for Leela: An Automated Testing Framework #1453

cn4750 opened this issue Oct 20, 2020 · 1 comment
cn4750 commented Oct 20, 2020

In an effort to revisit @killerducky's work in issue #284 on standing up an OpenBench automated testing framework for Leela, I've run into a few things that need addressing before we can ever fully utilize it without extensive hacks (which is how I got it to run at all). Some of the issues below can probably be addressed via some of killerducky's forked code, but I did not reference it while working on my own solution. The following is a stream of consciousness covering the issues I faced and thoughts on what may be needed to address them:

  1. Lack of a bench command
    We lack a bench command like Stockfish and other engines have, and we also need to decide what the bench command should measure. The bench command is supposed to act as a checksum on the engine: the final node count must be deterministic so it can be compared across builds. OpenBench also scales the time control by the bench command's reported nodes per second.
  2. Lack of make building
    OpenBench expects to build the engines via make. We use a build script instead. We can probably address this with some special case hacking.
  3. Use of submodules
    OpenBench pulls code from GitHub as zip files via the API. Those zips do not include submodules, so lczero-common is missing and the build never succeeds. We can probably address this with some special case hacking.
  4. EvalFile instead of WeightsFile
    A small issue, but OpenBench expects the weights path in a UCI option called EvalFile instead of WeightsFile as we use.
  5. OpenBench Qt Issues?
    Qt didn't work out of the box; I had to run apt-get install libqtgui4 on my vast.ai instance.
  6. Running OpenBench web server
    I am not an expert in web servers, so I had no idea what I was doing, but I got it running on a free-tier AWS instance with gunicorn and nginx, essentially by following this guide.
  7. Building on Windows requires MinGW
    Building on Windows requires MinGW by default, but we may not need it once our make build issues are addressed.
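Item 2 could perhaps be special-cased with a thin shim Makefile that just delegates to lc0's build script. The target name, build.sh argument, and output path below are assumptions, and OpenBench's exact make invocation (e.g. which variables like EXE it passes) would need checking against its worker code:

```make
# Hypothetical shim so OpenBench's `make` step can drive lc0's
# meson-based build script. Paths and variables are assumptions.
EXE ?= lc0

all:
	./build.sh release
	cp build/release/lc0 $(EXE)
```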
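To illustrate item 1, here is a rough sketch of how a Stockfish-style bench summary could be parsed and used to normalize time control across worker hardware. This is my assumption of the mechanism, not OpenBench's actual code; the `Nodes searched`/`Nodes/second` line formats and the `scale_time_control` helper are hypothetical for lc0.

```python
import re

def parse_bench(output: str) -> tuple[int, int]:
    """Extract total nodes and NPS from a Stockfish-style bench summary.

    An lc0 `bench` command would need to print something comparable,
    with a deterministic node count so it can double as a checksum.
    """
    nodes = int(re.search(r"Nodes searched\s*:\s*(\d+)", output).group(1))
    nps = int(re.search(r"Nodes/second\s*:\s*(\d+)", output).group(1))
    return nodes, nps

def scale_time_control(base_tc: float, reference_nps: int, machine_nps: int) -> float:
    """Give slower machines proportionally more wall time, mirroring how
    OpenBench normalizes hardware via bench NPS (hypothetical helper)."""
    return base_tc * reference_nps / machine_nps

sample = """===========================
Total time (ms) : 5120
Nodes searched  : 4186323
Nodes/second    : 817641"""

nodes, nps = parse_bench(sample)
print(nodes, nps)  # → 4186323 817641
print(scale_time_control(10.0, 1_000_000, nps))  # slower box gets > 10s
```

The key point for item 1 is the first number: if two builds of the same patch report different node counts, the worker can flag a nondeterministic (or mispatched) engine before wasting games on it.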

All in all, my hacking was successful and did allow a test to run: working-openbench
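As an example of the kind of hack involved, item 4's EvalFile/WeightsFile mismatch could in principle be papered over without patching either project by a tiny UCI relay sitting in front of lc0. This is entirely hypothetical and untested against OpenBench; the engine path and option names are assumptions.

```python
#!/usr/bin/env python3
"""Hypothetical UCI shim: sits between OpenBench and lc0, rewriting the
option name OpenBench sends (EvalFile) into the one lc0 expects
(WeightsFile)."""
import subprocess
import sys

def translate(line: str) -> str:
    # Rewrite only the option name; everything else passes through untouched.
    if line.startswith("setoption name EvalFile "):
        return line.replace("name EvalFile", "name WeightsFile", 1)
    return line

def relay(engine_path: str = "lc0") -> None:
    # lc0 inherits our stdout, so its replies reach the caller directly;
    # we only need to forward (translated) stdin.
    engine = subprocess.Popen([engine_path], stdin=subprocess.PIPE, text=True)
    for line in sys.stdin:
        engine.stdin.write(translate(line))
        engine.stdin.flush()
    engine.stdin.close()
    engine.wait()

print(translate("setoption name EvalFile value mynet.pb.gz"))
# → setoption name WeightsFile value mynet.pb.gz
```

In use, OpenBench would be pointed at this script as if it were the engine binary (calling `relay()`); fixing the option name in lc0 itself would obviously be cleaner.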

Much work is required before OpenBench for Leela is a serious proposal. First, I would suggest we create an official Leela fork of OpenBench so that others can start fixing some of these issues. Second, I would suggest someone with more webdev experience manage the web server properly and safely.

Onwards to a stronger, better Leela. 😃

Naphthalin (Contributor) commented:

Linking this to #1734, as this kind of refactoring is necessary for reducing the entry barrier and getting accessible patches.
