Revert "Merge branch 'dev' into tutorial-updates"
This reverts commit dafd4c1, reversing
changes made to 2645e9d.
sgbaird committed Sep 5, 2024
1 parent dafd4c1 commit a04e67a
Showing 807 changed files with 71,176 additions and 4,890 deletions.
3 changes: 0 additions & 3 deletions .gitignore
@@ -54,6 +54,3 @@ venv/*
.conda*/
.python-version
reports/~$honegumi-logo.pptx
-docs/curriculum/assignments/Assignment_1_SOBO.ipynb
-docs/curriculum/assignments/Assignment_2_MOBO.ipynb
-src/honegumi/core/honegumi-pyodide-refactor-backup.html.jinja
14 changes: 7 additions & 7 deletions .pre-commit-config.yaml
@@ -18,13 +18,13 @@ repos:
      - id: mixed-line-ending
        args: ['--fix=auto'] # replace 'auto' with 'lf' to enforce Linux/Mac line endings or 'crlf' for Windows

-  # # If you want to automatically "modernize" your Python code:
-  # - repo: https://github.com/asottile/pyupgrade
-  #   rev: v3.9.0
-  #   hooks:
-  #     - id: pyupgrade
-  #       args: ['--py37-plus']
-  #       exclude: ^tests/generated_scripts/
+  # If you want to automatically "modernize" your Python code:
+  - repo: https://github.com/asottile/pyupgrade
+    rev: v3.9.0
+    hooks:
+      - id: pyupgrade
+        args: ['--py37-plus']
+        exclude: ^tests/generated_scripts/

  # # If you want to avoid flake8 errors due to unused vars or imports:
  # Failing due to
41 changes: 7 additions & 34 deletions CONTRIBUTING.md
@@ -103,45 +103,19 @@ python3 -m http.server --directory 'docs/_build/html'

## Code Contributions

-For a high-level roadmap of Honegumi's development, see https://github.com/sgbaird/honegumi/discussions/2. Honegumi uses Python, Javascript, Jinja2, pytest, and GitHub actions to automate the generation, testing, and deployment of templates with a focus on Bayesian optimization packages. As of 2024-06-18, only [Meta's Ax Platform](https://ax.dev) is supported. The plumbing and logic that creates this is thorough and scalable.
+For a high-level roadmap of Honegumi's development, see https://github.com/sgbaird/honegumi/discussions/2. Honegumi uses Python, Javascript, Jinja2, pytest, and GitHub actions to automate the generation, testing, and deployment of templates with a focus on Bayesian optimization packages. As of 2023-08-21, only a single package ([Meta's Ax Platform](https://ax.dev)) is supported, for a small set of features. However, the plumbing and logic that creates this is thorough and scalable. I focused first on getting all the pieces together before scaling up to many features (and thus slowing down the development cycle).

-Here are some ways you can help with the https://github.com/sgbaird/honegumi/blob/main/
+Here are some ways you can help with the project:
1. Use the tool and let us know what you think 😉
2. [Provide feedback](https://github.com/sgbaird/honegumi/discussions/2) on the overall organization, logic, and workflow of the project
-3. Extend the Ax features to additional options (i.e., additional rows and options within rows) via direct edits to [ax/main.py.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja)
-4. Extend the [`honegumi.html.jinja`](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.html.jinja) and [`main.py.jinja`](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja) templates (make sure to run [`generate_scripts.py`](https://github.com/sgbaird/honegumi/blob/main/scripts/generate_scripts.py) after changes). See below for more information.
-5. Extend Honegumi to additional platforms such as BoFire, Atlas, or BayBE
-6. Spread the word about the tool
+4. Improve the `honegumi.html` and `honegumi.ipynb` templates (may also need to update `generate_scripts.py`). See below for more information.
+5. Extend Honegumi to additional platforms such as BoFire or Atlas
+6. Spread the word about the tool

-For those unfamiliar with Jinja2, see the Google Colab tutorial: [_A Gentle Introduction to Jinja2_](https://colab.research.google.com/github/sgbaird/honegumi/blob/main/notebooks/1.0-sgb-gentle-introduction-jinja.ipynb). The main template file for Meta's Adaptive Experimentation (Ax) Platform is [`ax/main.py.jinja`](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja). The main file that interacts with this template is at [`scripts/generate_scripts.py`](https://github.com/sgbaird/honegumi/blob/main/scripts/generate_scripts.py). The generated scripts are [available on GitHub](https://github.com/sgbaird/honegumi/blob/main/docs/generated_scripts/ax). Each script is tested [via `pytest`](https://github.com/sgbaird/honegumi/blob/main/tests/) and [GitHub Actions](https://github.com/sgbaird/honegumi/actions/workflows/ci.yml) to ensure it can run error-free. Finally, the results are passed to [core/honegumi.html.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.html.jinja) and [core/honegumi.ipynb.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.ipynb.jinja) to create the scripts and notebooks, respectively.
+For those unfamiliar with Jinja2, see the Google Colab tutorial: [_A Gentle Introduction to Jinja2_](https://colab.research.google.com/github/sgbaird/honegumi/blob/main/notebooks/1.0-sgb-gentle-introduction-jinja.ipynb). The main template file for Meta's Adaptive Experimentation (Ax) Platform is [`ax/main.py.jinja`](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja). The main file that interacts with this template is at [`scripts/generate_scripts.py`](https://github.com/sgbaird/honegumi/blob/main/scripts/generate_scripts.py). The generated scripts are [available on GitHub](https://github.com/sgbaird/honegumi/tree/main/docs/generated_scripts/ax). Each script is tested [via `pytest`](https://github.com/sgbaird/honegumi/tree/main/tests) and [GitHub Actions](https://github.com/sgbaird/honegumi/actions/workflows/ci.yml) to ensure it can run error-free. Finally, the results are passed to [core/honegumi.html.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.html.jinja) and [core/honegumi.ipynb.jinja](https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.ipynb.jinja) to create the scripts and notebooks, respectively.
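
For readers new to this pattern, here is a minimal, self-contained sketch of how a Jinja2 template can be rendered across a grid of options to generate many scripts. The template string and option names below are illustrative stand-ins, not excerpts from `ax/main.py.jinja` or `generate_scripts.py`:

```python
from itertools import product

from jinja2 import Environment

# Toy template standing in for main.py.jinja: each placeholder is filled
# in once per combination of options.
TEMPLATE = """\
objective = "{{ objective }}"
use_batch = {{ use_batch }}
"""

env = Environment()
template = env.from_string(TEMPLATE)

# Render one script per option combination (the real option grid is larger).
for objective, use_batch in product(["single", "multi"], [False, True]):
    script = template.render(objective=objective, use_batch=use_batch)
    print(script)
```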

-```{figure} _static/honegumi-mermaid.png
-Behind-the-scenes flowchart for Honegumi.
-```

-```{evalrst}
-flowchart TD
-    A[main.py.jinja Template] -->|Used by| B[generate_scripts.py]
-    B -->|Generates| C[.py Files]
-    B -->|Generates| D[_test.py Files]
-    B -->|Generates| E[.ipynb Files]
-    B -->|Generates| F[honegumi.html]
-    D -->|Tested via| G[GitHub Actions running pytest]
-    G -->|If Tests Pass| H[Documentation]
-    F -->|Included in| H
-    click A href "https://github.com/sgbaird/honegumi/blob/main/src/honegumi/ax/main.py.jinja" "main.py.jinja Template"
-    click B href "https://github.com/sgbaird/honegumi/blob/main/scripts/generate_scripts.py" "generate_scripts.py"
-    click C href "https://github.com/sgbaird/honegumi/blob/main/docs/generated_scripts/ax" ".py Files"
-    click D href "https://github.com/sgbaird/honegumi/blob/main/tests/" "_test.py Files"
-    click E href "https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.ipynb.jinja" ".ipynb Files"
-    click F href "https://github.com/sgbaird/honegumi/blob/main/src/honegumi/core/honegumi.html.jinja" "honegumi.html"
-    click G href "https://github.com/sgbaird/honegumi/actions/workflows/ci.yml" "GitHub Actions"
-    click H href "https://github.com/sgbaird/honegumi/blob/main/docs/generated_scripts/ax" "Documentation"
-```

-````{tip}
-If you are committing some of the generated scripts or notebooks on Windows, you will [likely need to run the following command](https://stackoverflow.com/questions/22575662/filename-too-long-in-git-for-windows) in a terminal (e.g., git bash) as an administrator to avoid an `lstat(...) Filename too long` error:
+NOTE: If you are committing some of the generated scripts or notebooks on Windows, you will [likely need to run this command](https://stackoverflow.com/questions/22575662/filename-too-long-in-git-for-windows) in a terminal (e.g., git bash) as an administrator to avoid an `lstat(...) Filename too long` error:

```bash
git config --system core.longpaths true
@@ -161,7 +135,6 @@ To only commit non-generated files, you can add all files and reset the generated files:
git add .
git reset docs/generated_scripts docs/generated_notebooks tests/generated_scripts
```
-````

## Project Organization

Binary file removed docs/_static/honegumi-mermaid.png
@@ -0,0 +1,53 @@
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2):
y = float(
(x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
+ 10
)

# second objective has x1 and x2 swapped
y2 = float(
(x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
+ 10
)

return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
parameters=[
{"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
{"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
],
objectives={
obj1_name: ObjectiveProperties(minimize=True),
obj2_name: ObjectiveProperties(minimize=True),
},
)


batch_size = 2


for _ in range(19):
parameterizations, optimization_complete = ax_client.get_next_trials(batch_size)
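    # evaluate each suggested trial in the batch and report results back to Ax;
    # optimization_complete is True once Ax will not generate further trials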
for trial_index, parameterization in list(parameterizations.items()):
# extract parameters
x1 = parameterization["x1"]
x2 = parameterization["x2"]

results = branin_moo(x1, x2)
ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
@@ -0,0 +1,50 @@
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2):
y = float(
(x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
+ 10
)

# second objective has x1 and x2 swapped
y2 = float(
(x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
+ 10
)

return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
parameters=[
{"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
{"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
],
objectives={
obj1_name: ObjectiveProperties(minimize=True),
obj2_name: ObjectiveProperties(minimize=True),
},
)


for _ in range(19):
parameterization, trial_index = ax_client.get_next_trial()
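    # unlike the batched variant (get_next_trials), get_next_trial suggests
    # and returns a single parameterization per iteration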

# extract parameters
x1 = parameterization["x1"]
x2 = parameterization["x2"]

results = branin_moo(x1, x2)
ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
@@ -0,0 +1,53 @@
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2):
y = float(
(x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
+ 10
)

# second objective has x1 and x2 swapped
y2 = float(
(x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
+ 10
)

return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
parameters=[
{"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
{"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
],
objectives={
obj1_name: ObjectiveProperties(minimize=True, threshold=25.0),
obj2_name: ObjectiveProperties(minimize=True, threshold=15.0),
},
)
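# the objective thresholds above define the reference point that bounds the
# Pareto frontier: candidates that do not improve on every threshold add no
# hypervolume during optimization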


batch_size = 2


for _ in range(19):
parameterizations, optimization_complete = ax_client.get_next_trials(batch_size)
for trial_index, parameterization in list(parameterizations.items()):
# extract parameters
x1 = parameterization["x1"]
x2 = parameterization["x2"]

results = branin_moo(x1, x2)
ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
@@ -0,0 +1,50 @@
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2):
y = float(
(x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
+ 10
)

# second objective has x1 and x2 swapped
y2 = float(
(x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
+ 10
)

return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
parameters=[
{"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
{"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
],
objectives={
obj1_name: ObjectiveProperties(minimize=True, threshold=25.0),
obj2_name: ObjectiveProperties(minimize=True, threshold=15.0),
},
)


for _ in range(19):
parameterization, trial_index = ax_client.get_next_trial()

# extract parameters
x1 = parameterization["x1"]
x2 = parameterization["x2"]

results = branin_moo(x1, x2)
ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()
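# pareto_results maps each Pareto-optimal trial index to its parameterization
# along with the model-predicted objective means and covariances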
@@ -0,0 +1,69 @@
import numpy as np
from ax.service.ax_client import AxClient, ObjectiveProperties

obj1_name = "branin"
obj2_name = "branin_swapped"


def branin_moo(x1, x2, c1):
y = float(
(x2 - 5.1 / (4 * np.pi**2) * x1**2 + 5.0 / np.pi * x1 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x1)
+ 10
)

# add a made-up penalty based on category
penalty_lookup = {"A": 1.0, "B": 0.0, "C": 2.0}
y += penalty_lookup[c1]

# second objective has x1 and x2 swapped
y2 = float(
(x1 - 5.1 / (4 * np.pi**2) * x2**2 + 5.0 / np.pi * x2 - 6.0) ** 2
+ 10 * (1 - 1.0 / (8 * np.pi)) * np.cos(x2)
+ 10
)

# add a made-up penalty based on category
penalty_lookup = {"A": 0.0, "B": 2.0, "C": 1.0}
y2 += penalty_lookup[c1]

return {obj1_name: y, obj2_name: y2}


ax_client = AxClient()

ax_client.create_experiment(
parameters=[
{"name": "x1", "type": "range", "bounds": [-5.0, 10.0]},
{"name": "x2", "type": "range", "bounds": [0.0, 10.0]},
{
"name": "c1",
"type": "choice",
"is_ordered": False,
"values": ["A", "B", "C"],
},
],
objectives={
obj1_name: ObjectiveProperties(minimize=True),
obj2_name: ObjectiveProperties(minimize=True),
},
)
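# because c1 is declared with is_ordered=False, Ax treats it as an unordered
# categorical rather than assuming an ordering such as A < B < C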


batch_size = 2


for _ in range(21):
parameterizations, optimization_complete = ax_client.get_next_trials(batch_size)
for trial_index, parameterization in list(parameterizations.items()):
# extract parameters
x1 = parameterization["x1"]
x2 = parameterization["x2"]

c1 = parameterization["c1"]

results = branin_moo(x1, x2, c1)
ax_client.complete_trial(trial_index=trial_index, raw_data=results)

pareto_results = ax_client.get_pareto_optimal_parameters()