Automatically create an hparams dictionary of hyper-parameters from your global variables!
Have you ever run a machine-learning training job for hours, only to realize days later that you forgot to save some hyper-parameters? Then you have to pray your memory is good, or re-run everything.
AutoHparams is a tool that dumps every variable in the global scope of your code into a dictionary that you can save with your favorite tool. This avoids about 80% of missed-hyper-parameter situations.
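Conceptually, it boils down to filtering the global namespace down to plain, loggable values. A rough sketch of the idea (not the library's actual implementation) could look like this:

def collect_hparams(global_vars):
    # Keep simple, serializable values; skip private names and
    # everything else (modules, functions, classes, ...).
    allowed_types = (int, float, str, bool)
    return {
        name: value
        for name, value in global_vars.items()
        if not name.startswith('_') and isinstance(value, allowed_types)
    }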
Using AutoHparams is a one-liner.
Install AutoHparams with pip:
pip install autohparams
Import it in your code by adding this line:
from autohparams import get_auto_hparams
To get an hparams dictionary, just do:
hparams = get_auto_hparams(globals())
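From there you can persist the dictionary however you like, for example as JSON (the filename below is just an illustration):

import json

with open('hparams.json', 'w') as f:
    json.dump(hparams, f, indent=2)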
Advanced tip
By default, get_auto_hparams ignores variables whose names start with an underscore (_). This is a convenient way to filter which variables end up in the hparams.
For example:
lr = 0.001 # we want to report learning rate
bs = 64 # we want to report batch size
_gpu = 0 # we don't want to report the gpu used
hparams = get_auto_hparams(globals())
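Assuming the returned object is a plain dict, hparams should now contain the learning rate and batch size, but not the GPU index:

print(hparams)  # expected output: {'lr': 0.001, 'bs': 64}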
You can now use it in any framework you want, for example:
TensorBoard
import tensorflow as tf
from tensorboard.plugins.hparams import api as hp
with tf.summary.create_file_writer('logs/hparam_tuning').as_default():
    hp.hparams(hparams)
MLflow
import mlflow
with mlflow.start_run():
    mlflow.log_params(hparams)
Weights & Biases (wandb)
import wandb
wandb.init(config=hparams)
Comet.ml
from comet_ml import Experiment
experiment = Experiment()
experiment.log_parameters(hparams)
Neptune.ai
import neptune.new as neptune
run = neptune.init()
run['parameters'] = hparams
PyTorch Lightning
import pytorch_lightning as pl
trainer = pl.Trainer(logger=...)
trainer.logger.log_hyperparams(hparams)
Guild AI
import guild
guild.run(hparams=hparams)
Polyaxon
import polyaxon_sdk
api_client = polyaxon_sdk.ApiClient()
api_client.create_hyper_params(run_uuid='uuid-of-your-run', body=hparams)
ClearML
from clearml import Task
task = Task.init()
task.set_parameters(hparams)
Kubeflow
from kubeflow.metadata import metadata
store = metadata.Store()
store.log_metadata(hparams)
If you like Python wizardry, you can also import autohparams itself and call the module as a function:
import autohparams
config = autohparams(globals())
It doesn't get any easier!
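For the curious: making a module callable is usually done by swapping the module's class for a callable subclass at import time. A minimal sketch of that trick (an assumption about the internals, not necessarily what autohparams actually does):

import sys
from types import ModuleType

class _CallableModule(ModuleType):
    def __call__(self, global_vars):
        return get_auto_hparams(global_vars)

# Executed at the bottom of the package's __init__.py:
sys.modules[__name__].__class__ = _CallableModule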
Contributions are welcome!
Non-code contributions:
Task | Importance | Difficulty | Contributor on it | Description
---|---|---|---|---
Adding documentation | 4/5 | 1/5 | NOBODY | Write basic tutorials with real-life scenarios; write a wiki for other contributors to better understand how the library works.
For every todo, just click on the link to find the discussion where I describe how I would do it.
See the discussions for a full list of proposed features (and known issues).
Contributing is an awesome way to learn, inspire, and help others. Any contribution you make is greatly appreciated, even if it only concerns styling and best practices.
If you have a suggestion that would make this project better, please fork the repo and create a pull request.
Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
This library was created by Nicolas MICAUX.
You might also be interested in work from RobustPy.