Hyperopter is a hyperparameter optimization framework designed for trading strategies. It provides a parallel, efficient, and modular architecture for finding parameters that maximize strategy performance metrics such as the Sharpe ratio, or that minimize other loss functions. Hyperparameter optimization is primarily a tool for quant researchers seeking alpha. This library can sit inside the CI/CD strategy pipeline of an algorithmic trading system, used in combination with other processes such as backtesting or Monte Carlo simulation to determine whether a particular strategy should be added to the "playbook" or whether it requires further refinement.
The only way to avoid overfitting is for the operator (you) to use hyperopter correctly. If you optimize hyperparameters over your entire sample, you are overfitting. Ideally, you run hyperopter on a small but representative subsample and apply the frozen parameters to a larger out-of-sample (OOS) backtest, which should serve only to develop conviction to begin with.
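As a rough illustration of that workflow, you could split your data along these lines (a minimal sketch; `split_sample` is not part of hyperopter's API, and the 30% in-sample fraction is just an example):

```python
import pandas as pd

def split_sample(data: pd.DataFrame, is_fraction: float = 0.3):
    """Split price data into a small in-sample slice for hyperparameter
    optimization and a larger out-of-sample slice for the frozen-parameter
    backtest. Assumes rows are in chronological order."""
    cut = int(len(data) * is_fraction)
    return data.iloc[:cut], data.iloc[cut:]
```

Optimize only on the first slice, then run the OOS backtest on the second with the parameters frozen.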
- 🚀 Parallel optimization with configurable workers and batch sizes
- 📊 Flexible strategy evaluation with customizable metrics
- ⚙️ JSON-based configuration for parameter spaces and optimization settings
- 🔄 Robust error handling and result management
- 📈 Comprehensive logging and result tracking
- 🛠 Modular architecture for easy extension
Current Version: 0.1.0 (Development Phase). As soon as I have time I will implement further loss functions, to allow optimizing for the highest Sharpe ratio, highest PnL over the period, highest Sortino, highest Calmar, lowest maximum drawdown, and so on.
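For reference, these metrics can be computed from a series of per-period strategy returns along the following lines (an illustrative sketch, not hyperopter code; the 252-periods-per-year annualization is an assumption for daily data):

```python
import numpy as np

def sharpe(returns: np.ndarray, periods_per_year: int = 252) -> float:
    # Annualized mean return over volatility of all returns
    return float(np.sqrt(periods_per_year) * returns.mean() / returns.std(ddof=1))

def sortino(returns: np.ndarray, periods_per_year: int = 252) -> float:
    # Like Sharpe, but penalizes only downside volatility
    downside = returns[returns < 0].std(ddof=1)
    return float(np.sqrt(periods_per_year) * returns.mean() / downside)

def max_drawdown(returns: np.ndarray) -> float:
    # Largest peak-to-trough decline of the compounded equity curve
    equity = np.cumprod(1 + returns)
    peak = np.maximum.accumulate(equity)
    return float(np.max(1 - equity / peak))
```

A loss function would then be, e.g., `-sharpe(returns)` or `max_drawdown(returns)`, depending on the objective.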
Python 3.12 or higher
Required packages:
- pandas
- numpy
- scikit-learn
- loguru
- jsonschema
- pytest
- psutil
```shell
# Clone the repository
git clone https://github.com/marwinsteiner/hyperopter.git
cd hyperopter

# Create and activate a virtual environment (I use Python 3.12)
conda create -n [env-name] python=3.12
conda activate [env-name]

# Install the dependencies listed in pyproject.toml
poetry install
```
- Create a configuration file (e.g., `moving_average_config.json`):
```json
{
  "parameter_space": {
    "fast_period": {
      "type": "int",
      "range": [2, 10],
      "step": 1
    },
    "slow_period": {
      "type": "int",
      "range": [5, 20],
      "step": 1
    }
  },
  "optimization_settings": {
    "max_iterations": 50,
    "convergence_threshold": 0.001,
    "parallel_trials": 4
  }
}
```
- Implement your strategy evaluation function:
```python
import pandas as pd

def evaluate_strategy(data: pd.DataFrame, params: dict) -> float:
    # Your strategy logic here
    return performance_metric
```
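As a concrete illustration, a moving-average crossover evaluator could be scored like this (a sketch only: the `close` column name and the Sharpe-style scoring are assumptions about your data and objective, not requirements of hyperopter):

```python
import numpy as np
import pandas as pd

def evaluate_strategy(data: pd.DataFrame, params: dict) -> float:
    """Score a moving-average crossover: long while the fast MA is above
    the slow MA. Returns an annualized Sharpe-style ratio."""
    fast = data["close"].rolling(params["fast_period"]).mean()
    slow = data["close"].rolling(params["slow_period"]).mean()
    # Trade the signal on the next bar to avoid lookahead bias
    position = (fast > slow).astype(int).shift(1).fillna(0)
    returns = data["close"].pct_change().fillna(0) * position
    if returns.std() == 0:
        return 0.0
    return float(np.sqrt(252) * returns.mean() / returns.std())
```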
- Run optimization:
```python
from integration import create_optimizer

optimizer = create_optimizer(
    config_path="config.json",
    data_path="data.csv",
    strategy_evaluator=evaluate_strategy,
    output_dir="results",
)
optimizer.optimize()
```
The framework consists of four core components:
- Integration Layer: Provides a clean interface for creating and running optimizations
- Parallel Optimizer: Handles parallel execution of strategy evaluations
- Results Manager: Manages optimization results and generates reports
- Configuration Manager: Handles parameter space and optimization settings
```mermaid
graph TD
    A[Integration Layer] --> B[Parallel Optimizer]
    A --> C[Results Manager]
    A --> D[Config Manager]
    B --> C
    D --> B
```
The repository includes a complete example of optimizing a moving average crossover strategy:
- `examples/optimize_moving_average.py`: Example strategy implementation
- `examples/data/sample_data.csv`: Sample price data
- `config/moving_average_config.json`: Example configuration
I also use this dummy strategy to write the tests. All system components should work the same regardless of what strategy is being hyperopted.
```shell
# Run all tests
pytest tests/

# Run with coverage
pytest --cov=src tests/
```
- Fork the repository
- Create a feature branch
- Write tests for new features
- Ensure all tests pass
- Submit a pull request
Please report issues via GitHub Issues, including:
- Clear description of the problem
- Steps to reproduce
- Expected vs actual behavior
- System information
- Project Owner: Marwin Steiner
- Email: [email protected]
- GitHub: @marwinsteiner
Marwin Steiner, London, December 2024