
LLM Accelerator Tuning

To run accelerator tuning utilising our LLM-based approach and to generate your own data with it, execute any of the try_langchain_*.ipynb notebooks, each of which is designed to use a different one of the presented prompts. Note that you will need either an OpenAI API key or a running instance of Ollama, and that you will need to point to them in the respective locations.
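
The notebooks mark the configuration cells where the key or Ollama endpoint goes; as a rough sketch of the two backends (the package imports, model names, and prompt below are illustrative assumptions, not the repo's pinned choices), the LangChain setup looks something like:

```python
import os

from langchain_openai import ChatOpenAI
from langchain_community.chat_models import ChatOllama

# Option 1: OpenAI. Supply your API key, here via the environment.
os.environ["OPENAI_API_KEY"] = "sk-..."  # replace with your key
openai_llm = ChatOpenAI(model="gpt-4")

# Option 2: a running Ollama instance. Point at its host and port.
ollama_llm = ChatOllama(model="llama2", base_url="http://localhost:11434")

# Either model can then be queried the same way (prompt is illustrative).
print(openai_llm.invoke("Which quadrupole settings should we try next?").content)
```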

To generate the data for the baseline algorithms, run publications/paper/generate_baseline_data.ipynb.
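
If you prefer to run the notebook headlessly rather than interactively, one generic option (not specific to this repo) is nbconvert's execute preprocessor:

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

path = "publications/paper/generate_baseline_data.ipynb"
with open(path) as f:
    nb = nbformat.read(f, as_version=4)

# Execute all cells with the notebook's own directory as working directory.
ExecutePreprocessor(timeout=None).preprocess(
    nb, {"metadata": {"path": "publications/paper"}}
)

# Write the executed notebook (with outputs) back in place.
with open(path, "w") as f:
    nbformat.write(nb, f)
```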

To compute the results from our experiments as presented in the paper, run publications/paper/results_table.ipynb with our data placed in the data/paper/ directory.
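
As a quick sanity check that the data sits where the notebook expects it (the path comes from the instructions above; the listing is purely illustrative):

```python
from pathlib import Path

data_dir = Path("data/paper")
assert data_dir.is_dir(), "Place our released data in data/paper/ first."
print(sorted(p.name for p in data_dir.iterdir()))
```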

About

Code for the paper "Large Language Models for Human-Machine Collaborative Particle Accelerator Tuning through Natural Language"
