Update documentation and examples
DimitriAlston committed Nov 6, 2023
1 parent 1eae9c0 commit 2788486
Showing 16 changed files with 105 additions and 96 deletions.
48 changes: 24 additions & 24 deletions README.md
@@ -22,10 +22,10 @@ EAGO is a deterministic global optimizer designed to address a wide variety of o

$$
\begin{align*}
f^{\*} = & \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} f(\mathbf y)\\
{\rm s.t.} \\;\\; & \mathbf h(\mathbf y) = \mathbf 0\\
& \mathbf g(\mathbf y) \leq \mathbf 0\\
& Y = [\mathbf y^{\mathbf L}, \mathbf y^{\mathbf U}] \in \mathbb I \mathbb R^{n}\\
f^{\*} = & \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} f(\mathbf y) \\
{\rm s.t.} \\;\\; & \mathbf h(\mathbf y) = \mathbf 0 \\
& \mathbf g(\mathbf y) \leq \mathbf 0 \\
& Y = [\mathbf y^{\mathbf L}, \mathbf y^{\mathbf U}] \in \mathbb{IR}^{n} \\
& \qquad \mathbf y^{\mathbf L}, \mathbf y^{\mathbf U} \in \mathbb R^{n}
\end{align*}
$$
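
A problem in this general form can be passed to EAGO through JuMP. The following is a minimal sketch with a hypothetical toy objective and constraint (not a problem from this repository):

```julia
using JuMP, EAGO

# Hypothetical toy instance of the general form above
model = Model(EAGO.Optimizer)
@variable(model, -2.0 <= y[1:2] <= 2.0)               # Y = [y^L, y^U]
@NLconstraint(model, y[1]^2 + y[2]^2 - 4.0 <= 0.0)    # g(y) <= 0
@NLobjective(model, Min, (y[1] - 1.0)^2 + (y[2] - 1.0)^2)
optimize!(model)
```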
@@ -52,15 +52,15 @@ EAGO makes use of the JuMP algebraic modeling language to improve the user's exp

$$
\begin{align*}
& \max_{\mathbf x \in X} 0.063 x_{4} x_{7} - 5.04 x_{1} - 0.035 x_{2} - 10 x_{3} - 3.36 x_{5}\\
{\rm s.t.} \\;\\; & x_{1} (1.12 + 0.13167 x_{8} - 0.00667 x_{8}^{2}) + x_{4} = 0\\
& -0.001 x_{4} x_{9} x_{6} / (98 - x_{6}) + x_{3} = 0\\
&-(1.098 x_{8} - 0.038 x_{8}^{2}) - 0.325 x_{6} + x_{7} = 0\\
&-(x_{2} + x_{5}) / x_{1} + x_{8} = 0\\
&-x_{1} + 1.22 x_{4} - x_{5} = 0\\
&x_{9} + 0.222 x_{10} - 35.82 = 0\\
&-3.0 x_{7} + x_{10} + 133.0 = 0\\
& X = [10, 2000] \times [0, 16000] \times [0, 120] \times [0, 5000]\\
& \max_{\mathbf x \in X} 0.063 x_{4} x_{7} - 5.04 x_{1} - 0.035 x_{2} - 10 x_{3} - 3.36 x_{5} \\
{\rm s.t.} \\;\\; & x_{1} (1.12 + 0.13167 x_{8} - 0.00667 x_{8}^{2}) + x_{4} = 0 \\
& -0.001 x_{4} x_{9} x_{6} / (98 - x_{6}) + x_{3} = 0 \\
& -(1.098 x_{8} - 0.038 x_{8}^{2}) - 0.325 x_{6} + x_{7} = 0 \\
& -(x_{2} + x_{5}) / x_{1} + x_{8} = 0 \\
& -x_{1} + 1.22 x_{4} - x_{5} = 0 \\
& x_{9} + 0.222 x_{10} - 35.82 = 0 \\
& -3.0 x_{7} + x_{10} + 133.0 = 0 \\
& X = [10, 2000] \times [0, 16000] \times [0, 120] \times [0, 5000] \\
& \qquad \times [0, 2000] \times [85, 93] \times [90, 95] \times [3, 12] \times [1.2, 4] \times [145, 162]
\end{align*}
$$
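
A hedged sketch of how the model above can be written with JuMP and EAGO (bounds transcribed from `X`; only the objective and one representative constraint are shown, and the remaining constraints follow the same pattern):

```julia
using JuMP, EAGO

# Bounds transcribed from the box X above
xL = [10.0, 0.0, 0.0, 0.0, 0.0, 85.0, 90.0, 3.0, 1.2, 145.0]
xU = [2000.0, 16000.0, 120.0, 5000.0, 2000.0, 93.0, 95.0, 12.0, 4.0, 162.0]

model = Model(EAGO.Optimizer)
@variable(model, xL[i] <= x[i = 1:10] <= xU[i])

@NLobjective(model, Max, 0.063*x[4]*x[7] - 5.04*x[1] - 0.035*x[2] - 10*x[3] - 3.36*x[5])

# One representative equality constraint; the others are added the same way
@NLconstraint(model, x[1]*(1.12 + 0.13167*x[8] - 0.00667*x[8]^2) + x[4] == 0.0)

optimize!(model)
```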
@@ -162,15 +162,15 @@ Please report bugs or feature requests by opening an issue using the GitHub [iss
Please cite the following paper when using EAGO. In plain text form this is:

```
M. E. Wilhelm & M. D. Stuber (2022) EAGO.jl: easy advanced global optimization in Julia,
Optimization Methods and Software, 37:2, 425-450, DOI: 10.1080/10556788.2020.1786566
Wilhelm, M.E. and Stuber, M.D. EAGO.jl: easy advanced global optimization in Julia.
Optimization Methods and Software. 37(2): 425-450 (2022). DOI: 10.1080/10556788.2020.1786566
```

A BibTeX entry is given below and a corresponding .bib file is given in citation.bib.

```bibtex
@article{doi:10.1080/10556788.2020.1786566,
author = {M. E. Wilhelm and M. D. Stuber},
author = {Wilhelm, M.E. and Stuber, M.D.},
title = {EAGO.jl: easy advanced global optimization in Julia},
journal = {Optimization Methods and Software},
volume = {37},
@@ -186,14 +186,14 @@ eprint = {https://doi.org/10.1080/10556788.2020.1786566}

## Related Packages

- [ValidatedNumerics.jl](https://github.com/JuliaIntervals/ValidatedNumerics.jl): A Julia library for validated interval calculations, including basic interval extensions, constraint programming, and interval contractors.
- [MAiNGO](http://swmath.org/software/27878): An open-source mixed-integer nonlinear programming package in C++ that utilizes MC++ for relaxations.
- [MC++](https://github.com/coin-or/MCpp): A mature McCormick relaxation package in C++ that also includes McCormick-Taylor, Chebyshev Polyhedral, and Ellipsoidal arithmetics.
- [ValidatedNumerics.jl](https://github.com/JuliaIntervals/ValidatedNumerics.jl): A Julia library for validated interval calculations, including basic interval extensions, constraint programming, and interval contractors
- [MAiNGO](https://avt-svt.pages.rwth-aachen.de/public/maingo/): An open-source mixed-integer nonlinear programming package in C++ that utilizes MC++ for relaxations
- [MC++](https://github.com/coin-or/MCpp): A mature McCormick relaxation package in C++ that also includes McCormick-Taylor, Chebyshev Polyhedral, and Ellipsoidal arithmetics

## References

1. A. Mitsos, B. Chachuat, and P. I. Barton. **McCormick-based relaxations of algorithms.** *SIAM Journal on Optimization*, 20(2):573–601, 2009.
2. K.A. Khan, HAJ Watson, P.I. Barton. **Differentiable McCormick relaxations.** *Journal of Global Optimization*, 67(4):687-729 (2017).
3. Stuber, M.D., Scott, J.K., Barton, P.I.: **Convex and concave relaxations of implicit functions.** *Optim. Methods Softw.* 30(3), 424–460 (2015)
4. A., Wechsung JK Scott, HAJ Watson, and PI Barton. **Reverse propagation of McCormick relaxations.** *Journal of Global Optimization* 63(1):1-36 (2015).
5. Bracken, Jerome and McCormick, Garth P. **Selected Applications of Nonlinear Programming**, John Wiley and Sons, New York, 1968.
1. Mitsos, A., Chachuat, B., and Barton, P.I. **McCormick-based relaxations of algorithms.** *SIAM Journal on Optimization*. 20(2): 573–601 (2009).
2. Khan, K.A., Watson, H.A.J., and Barton, P.I. **Differentiable McCormick relaxations.** *Journal of Global Optimization*. 67(4): 687-729 (2017).
3. Stuber, M.D., Scott, J.K., and Barton, P.I.: **Convex and concave relaxations of implicit functions.** *Optimization Methods and Software* 30(3): 424–460 (2015).
4. Wechsung, A., Scott, J.K., Watson, H.A.J., and Barton, P.I. **Reverse propagation of McCormick relaxations.** *Journal of Global Optimization* 63(1): 1-36 (2015).
5. Bracken, J., and McCormick, G.P. *Selected Applications of Nonlinear Programming.* John Wiley and Sons, New York (1968).
2 changes: 1 addition & 1 deletion docs/Project.toml
@@ -9,5 +9,5 @@ McCormick = "53c679d3-6890-5091-8386-c291e8c8aaa1"
StaticArrays = "90137ffa-7385-5640-81b9-e52037218182"

[compat]
Documenter = "0.27.9, 0.28"
JuMP = "1.12"
Documenter = "~1"
55 changes: 28 additions & 27 deletions docs/make.jl
@@ -19,11 +19,10 @@ import EAGO: ExtensionType, Evaluator, variable_dbbt!,
import EAGO.Script: dag_flattening!, register_substitution!, Template_Graph,
Template_Node, scrub, scrub!, flatten_expression!

const MOI = MathOptInterface

@info "Making documentation..."
makedocs(modules = [EAGO, McCormick],
doctest = false,
warnonly = [:docs_block, :missing_docs],
format = Documenter.HTML(
prettyurls = get(ENV, "CI", nothing) == "true",
canonical = "https://PSORLab.github.io/EAGO.jl/stable/",
@@ -33,33 +33,35 @@ makedocs(modules = [EAGO, McCormick],
authors = "Matthew Wilhelm, Robert Gottlieb, Dimitri Alston, and Matthew Stuber",
sitename = "EAGO.jl",
pages = Any["Introduction" => "index.md",
"Quick Start" => Any["quick_start/qs_landing.md",
"quick_start/guidelines.md",
"quick_start/explicit_ann.md",
"quick_start/interval_bb.md",
"quick_start/quasiconvex.md",
"quick_start/alpha_bb.md"
],
"McCormick Operator Library" => Any["mccormick/overview.md",
"mccormick/usage.md",
"mccormick/operators.md",
"mccormick/type.md",
"mccormick/implicit.md"
],
"Optimizer" => Any["optimizer/optimizer.md",
"optimizer/bnb_back.md",
"optimizer/relax_back.md",
"optimizer/domain_reduction.md",
"optimizer/high_performance.md",
"optimizer/udf_utilities.md"
],
"Semi-Infinite Programming" => "semiinfinite/semiinfinite.md",
"Contributing to EAGO" => Any["dev/contributing.md",
"dev/future.md"
"Manual" => Any["Optimizer" => Any["optimizer/optimizer.md",
"optimizer/bnb_back.md",
"optimizer/relax_back.md",
"optimizer/domain_reduction.md",
"optimizer/high_performance.md",
"optimizer/udf_utilities.md"
],
"McCormick.jl" => Any["mccormick/overview.md",
"mccormick/usage.md",
"mccormick/operators.md",
"mccormick/type.md",
"mccormick/implicit.md"
],
"Semi-Infinite Programming" => "semiinfinite/semiinfinite.md",
],
"Customization" => "custom_guidelines.md",
"Examples" => Any["examples/explicit_ann.md",
"examples/interval_bb.md",
"examples/quasiconvex.md",
"examples/alpha_bb.md"
],
"API Reference" => Any["dev/api_types.md",
"dev/api_functions.md"
],
"Contributing" => "dev/contributing.md",
"News" => "news.md",
"Citing EAGO" => "cite.md",
"News" => "news.md",
"References" => "ref.md"]
"References" => "ref.md"
]
)

@info "Deploying documentation..."
4 changes: 2 additions & 2 deletions docs/src/cite.md
@@ -4,6 +4,6 @@
Please cite the following paper when using EAGO.jl:

```
M. E. Wilhelm & M. D. Stuber (2022) EAGO.jl: easy advanced global optimization in Julia,
Optimization Methods and Software, 37:2, 425-450, DOI: 10.1080/10556788.2020.1786566
Wilhelm, M.E. and Stuber, M.D. EAGO.jl: easy advanced global optimization in Julia.
Optimization Methods and Software. 37(2): 425-450 (2022). DOI: 10.1080/10556788.2020.1786566
```
File renamed without changes.
6 changes: 6 additions & 0 deletions docs/src/dev/api_functions.md
@@ -0,0 +1,6 @@
# Functions

```@autodocs; canonical=false
Modules = [EAGO]
Order = [:function]
```
6 changes: 6 additions & 0 deletions docs/src/dev/api_types.md
@@ -0,0 +1,6 @@
# Types

```@autodocs; canonical=false
Modules = [EAGO]
Order = [:type]
```
4 changes: 2 additions & 2 deletions docs/src/dev/contributing.md
@@ -1,4 +1,4 @@
# How to Contribute
# How to Contribute to EAGO

We're always happy to welcome additional collaborators and contributors. One of the easiest ways for newcomers to contribute is by adding additional McCormick relaxations.

@@ -10,4 +10,4 @@ Please direct technical issues and/or bugs to the active developers:
- [Robert Gottlieb](https://psor.uconn.edu/person/robert-gottlieb/)
- [Dimitri Alston](https://psor.uconn.edu/person/dimitri-alston/)

All other questions should be directed to [Prof. Stuber](https://chemical-biomolecular.engr.uconn.edu/person/matthew-stuber/).
All other questions should be directed to [Prof. Stuber](https://chemical-biomolecular.engr.uconn.edu/people/faculty/stuber-matthew/).
@@ -8,9 +8,9 @@ In this example, we will demonstrate the use of a user-defined lower-bounding pr

```math
\begin{aligned}
& \min_{\mathbf x \in \mathbb R^{2}} \frac{1}{2} \mathbf x^{\rm T} \mathbf Q_{f} \mathbf x + \mathbf c_{f}^{\rm T} \mathbf x\\
{\rm s.t.} \;\; & g_{1}(\mathbf x)= \frac{1}{2} \mathbf x^{\rm T} \mathbf Q_{g_{1}}\mathbf x + \mathbf c_{g_{1}}^{\rm T} \mathbf x \leq 0\\
& g_{2}(\mathbf x) = \frac{1}{2} \mathbf x^{\rm T} \mathbf Q_{g_{2}} \mathbf x + \mathbf c_{g_{2}}^{\rm T} \mathbf x \leq 0\\
& \min_{\mathbf x \in \mathbb R^{2}} \frac{1}{2} \mathbf x^{\rm T} \mathbf Q_{f} \mathbf x + \mathbf c_{f}^{\rm T} \mathbf x \\
{\rm s.t.} \; \; & g_{1}(\mathbf x) = \frac{1}{2} \mathbf x^{\rm T} \mathbf Q_{g_{1}}\mathbf x + \mathbf c_{g_{1}}^{\rm T} \mathbf x \leq 0 \\
& g_{2}(\mathbf x) = \frac{1}{2} \mathbf x^{\rm T} \mathbf Q_{g_{2}} \mathbf x + \mathbf c_{g_{2}}^{\rm T} \mathbf x \leq 0 \\
\end{aligned}
```

@@ -117,7 +117,7 @@ end

!!! note

By default, EAGO solves the epigraph reformulation of your original problem, which increases the original problem dimensionality by +1 with the introduction of an auxiliary variable. When defining custom routines (such as the lower-bounding problem here) that are intended to work nicely with default EAGO routines (such as preprocessing), the user must account for the *new* dimensionality of the problem. In the code above, we wish to access the information of the specific B&B node and define an optimization problem based on that information. However, in this example, the node has information for 3 variables (the original 2 plus 1 for the auxiliary variable appended to the original variable vector) as ``(x_{1}, x_{2}, \eta)``. The lower-bounding problem was defined to optimize the relaxed problem with respect to the original 2 decision variables. When storing the results of this subproblem to the current B&B node, it is important to take care to store the information at the appropriate indices and not inadvertently redefine the problem dimensionality (i.e., by simply storing the optimization solution as the `lower_solution` of the current node). For problems that are defined to only branch on a subset of the original variables, the optimizer has a member `_sol_to_branch_map` that carries the mapping between the indices of the original variables to those of the variables being branched on. See the [Advanced-Use Example 1](@ref) to see how this is done.
By default, EAGO solves the epigraph reformulation of your original problem, which increases the original problem dimensionality by +1 with the introduction of an auxiliary variable. When defining custom routines (such as the lower-bounding problem here) that are intended to work nicely with default EAGO routines (such as preprocessing), the user must account for the *new* dimensionality of the problem. In the code above, we wish to access the information of the specific B&B node and define an optimization problem based on that information. However, in this example, the node has information for 3 variables (the original 2 plus 1 for the auxiliary variable appended to the original variable vector) as ``(x_{1}, x_{2}, \eta)``. The lower-bounding problem was defined to optimize the relaxed problem with respect to the original 2 decision variables. When storing the results of this subproblem to the current B&B node, it is important to take care to store the information at the appropriate indices and not inadvertently redefine the problem dimensionality (i.e., by simply storing the optimization solution as the `lower_solution` of the current node). For problems that are defined to only branch on a subset of the original variables, the optimizer has a member `_sol_to_branch_map` that carries the mapping between the indices of the original variables to those of the variables being branched on. Visit our [quasiconvex example](@ref "Advanced-Use Example 1") to see how this is done.
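
    As an illustration, a lower-bounding routine that respects the expanded dimensionality might store its results along these lines (a sketch with assumed field names, not the exact implementation from this example):

    ```julia
    # Sketch: copy a 2-variable subproblem solution into the node's 3-variable
    # storage (x1, x2, η) instead of overwriting it with a shorter vector.
    function store_lower_result!(opt, xsol::Vector{Float64}, fval::Float64)
        for i in 1:2
            opt._lower_solution[i] = xsol[i]   # original decision variables
        end
        opt._lower_solution[3] = fval          # auxiliary epigraph variable η
        opt._lower_objective_value = fval
        return nothing
    end
    ```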

## (Optional) Turn Off Processing Routines

@@ -4,11 +4,11 @@ This example is also provided [here as a Jupyter Notebook](https://github.com/PS

### Solving an ANN to Optimality in EAGO

In [[1](#References),[2](#References)], a surrogate artificial neural network (ANN) model of bioreactor productivity was constructed by fitting results from computationally expensive computational fluid dynamics (CFD) simulations. The authors then optimized this surrogate model to obtain ideal processing conditions. The optimization problem is given by:
In [[1](#References), [2](#References)], a surrogate artificial neural network (ANN) model of bioreactor productivity was constructed by fitting results from computationally expensive computational fluid dynamics (CFD) simulations. The authors then optimized this surrogate model to obtain ideal processing conditions. The optimization problem is given by:

```math
\begin{aligned}
\max_{\mathbf x \in X} B_{2} + \sum_{r = 1}^{3} W_{2,r} \frac{2}{1 + \exp (-2y_{r} + B_{1,r})} \;\; {\rm where} \;\; y_{r} = \sum_{i = 1}^{8} W_{1,ir} x_{i}
\max_{\mathbf x \in X} B_{2} + \sum_{r = 1}^{3} W_{2,r} \frac{2}{1 + \exp (-2y_{r} + B_{1,r})} \; \; {\rm where} \; \; y_{r} = \sum_{i = 1}^{8} W_{1,ir} x_{i}
\end{aligned}
```

@@ -35,12 +35,12 @@ B2 = -0.46

# Variable bounds (Used to scale variables after optimization)
xLBD = [0.623, 0.093, 0.259, 6.56, 1114.0, 0.013, 0.127, 0.004]
xUBD = [5.89, 0.5, 1.0, 90.0, 25000.0, 0.149, 0.889, 0.049];
xUBD = [5.89, 0.5, 1.0, 90.0, 25000.0, 0.149, 0.889, 0.049]
```

## Construct the JuMP Model and Optimize

We now formulate the problem using standard JuMP [[3](#References)] syntax and optimize it. Note that we are forming an NLexpression object to handle the summation term to keep the code visually simple, but this could be placed directly in the JuMP [`@NLobjective`](https://jump.dev/JuMP.jl/stable/api/JuMP/#@NLobjective) expression instead.
We now formulate the problem using standard JuMP [[3](#References)] syntax and optimize it. Note that we are using the [`@NLexpression`](https://jump.dev/JuMP.jl/stable/api/JuMP/#JuMP.@NLexpression) macro to handle the summation term to keep the code visually simple, but this could be placed directly in the [`@NLobjective`](https://jump.dev/JuMP.jl/stable/api/JuMP/#JuMP.@NLobjective) macro instead.

```julia
# Model construction
Expand All @@ -66,7 +66,7 @@ status_prim = JuMP.primal_status(model)
println("EAGO terminated with a status of $status_term and a result code of $status_prim.")
println("The optimal value is: $(round(fval, digits=5)).")
println("The solution found is $(round.(xsol, digits=3)).")
println("")
println(" ")

# Rescale values back to physical space
rescaled_fval = ((fval + 1.0)/2.0)*0.07
@@ -10,7 +10,7 @@ In this example, we'll forgo extensive integration into the [`Optimizer`](@ref)

```math
\begin{aligned}
& \min_{\mathbf x \in X} \;\; \sin(x_{1}) x_{2}^{2} - \cos(x_{3}) / x_{4}\\
& \min_{\mathbf x \in X} \; \; \sin(x_{1}) x_{2}^{2} - \cos(x_{3}) / x_{4} \\
& X = [-10, 10] \times [-1, 1] \times [-10, 10] \times [2, 20].
\end{aligned}
```
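
For reference, the problem above can be sketched directly in JuMP with EAGO's default settings (bounds transcribed from `X`):

```julia
using JuMP, EAGO

xL = [-10.0, -1.0, -10.0, 2.0]
xU = [10.0, 1.0, 10.0, 20.0]

model = Model(EAGO.Optimizer)
@variable(model, xL[i] <= x[i = 1:4] <= xU[i])
@NLobjective(model, Min, sin(x[1])*x[2]^2 - cos(x[3])/x[4])
optimize!(model)
```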
@@ -8,11 +8,11 @@ In this example, we'll adapt EAGO to implement the bisection-based algorithm use

```math
\begin{aligned}
f^{*} = & \min_{\mathbf y \in Y} f(\mathbf y)\\
{\rm s.t.} \;\; & \sum_{i = 1}^{5} i \cdot y_{i} - 5 = 0\\
& \sum_{i = 1}^{5} y_{i}^{2} - 0.5\pi \leq 0\\
& -\bigg(\frac{1}{2} y_{1}^{2} + \frac{1}{2} y_{2}^{2} + 2 y_{1} y_{2} + 4 y_{1} y_{3} + 2 y_{2} y_{3} \bigg) \leq 0\\
& -y_{1}^{2} - 6 y_{1} y_{2} - 2 y_{2}^{2} + \cos (y_{1}) + \pi \leq 0\\
f^{*} = & \min_{\mathbf y \in Y} f(\mathbf y) \\
{\rm s.t.} \; \; & \sum_{i = 1}^{5} i \cdot y_{i} - 5 = 0 \\
& \sum_{i = 1}^{5} y_{i}^{2} - 0.5 \pi \leq 0 \\
& -\bigg(\frac{1}{2} y_{1}^{2} + \frac{1}{2} y_{2}^{2} + 2 y_{1} y_{2} + 4 y_{1} y_{3} + 2 y_{2} y_{3} \bigg) \leq 0 \\
& -y_{1}^{2} - 6 y_{1} y_{2} - 2 y_{2}^{2} + \cos (y_{1}) + \pi \leq 0 \\
& Y = [0, 5]^{5}
\end{aligned}
```
@@ -29,10 +29,10 @@ Interval analysis shows that the objective value is bounded by the interval ``F``

```math
\begin{aligned}
t^{*} = & \min_{\mathbf y \in Y, t \in T} t\\
{\rm s.t.} \;\; & (24) - (27)\\
& f(\mathbf y) - t \leq 0\\
& Y = [0,5]^{2}, \;\; T = [-5,0].
t^{*} = & \min_{\mathbf y \in Y, t \in T} t \\
{\rm s.t.} \; \; & (24) - (27) \\
& f(\mathbf y) - t \leq 0 \\
& Y = [0,5]^{2}, \; \; T = [-5,0].
\end{aligned}
```
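
The bisection-based procedure on ``t`` can be sketched as follows (the hypothetical helper `is_feasible(t)` stands in for solving the level-set feasibility problem over ``Y`` with ``t`` fixed):

```julia
# Bisection on the epigraph variable t in T = [-5, 0]:
# shrink [tL, tU] until it brackets t* to within a tolerance.
function bisect(is_feasible::Function; tL = -5.0, tU = 0.0, tol = 1e-5)
    while tU - tL > tol
        t = 0.5*(tL + tU)
        if is_feasible(t)
            tU = t    # a feasible y with f(y) <= t exists, so t* <= t
        else
            tL = t    # no feasible point at this level, so t* > t
        end
    end
    return tU
end
```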
