From 33f6302731c8173dd6fba2dab316911d2c23969f Mon Sep 17 00:00:00 2001
From: Oscar Dowson
Date: Tue, 7 Nov 2023 12:47:52 +1300
Subject: [PATCH 1/4] Update docs/src/jump/README.md
---
docs/src/jump/README.md | 131 ++++++++++++++++++++++++++--------------
1 file changed, 85 insertions(+), 46 deletions(-)
diff --git a/docs/src/jump/README.md b/docs/src/jump/README.md
index e7ffbde5..9072c7af 100644
--- a/docs/src/jump/README.md
+++ b/docs/src/jump/README.md
@@ -1,18 +1,28 @@
-
+
# EAGO - Easy Advanced Global Optimization
-EAGO is an open-source development environment for **robust and global optimization** in Julia. See the full [README](https://github.com/PSORLab/EAGO.jl/blob/master/README.md) for more information.
+EAGO is an open-source development environment for **robust and global optimization**
+in Julia. See the full [README](https://github.com/PSORLab/EAGO.jl/blob/master/README.md)
+for more information.
| **PSOR Lab** | **Current Version** | **Build Status** | **Documentation** |
|:------------:|:-------------------:|:----------------:|:-----------------:|
| [![](https://img.shields.io/badge/Developed_by-PSOR_Lab-342674)](https://psor.uconn.edu/) | [![](https://docs.juliahub.com/EAGO/version.svg)](https://juliahub.com/ui/Packages/General/EAGO) | [![Build Status](https://github.com/PSORLab/EAGO.jl/workflows/CI/badge.svg?branch=master)](https://github.com/PSORLab/EAGO.jl/actions?query=workflow%3ACI) [![codecov](https://codecov.io/gh/PSORLab/EAGO.jl/branch/master/graph/badge.svg)](https://codecov.io/gh/PSORLab/EAGO.jl)| [![](https://img.shields.io/badge/docs-latest-blue.svg)](https://PSORLab.github.io/EAGO.jl/dev) |
-EAGO is a deterministic global optimizer designed to address a wide variety of optimization problems, emphasizing nonlinear programs (NLPs), by propagating McCormick relaxations along the factorable structure of each expression in the NLP. Most operators supported by modern automatic differentiation (AD) packages are supported by EAGO and a number utilities for sanitizing native Julia code and generating relaxations on a wide variety of user-defined functions have been included. Currently, EAGO supports problems that have a priori variable bounds defined and have differentiable constraints. That is, problems should be specified in the generic form below:
+EAGO is a deterministic global optimizer designed to address a wide variety of
+optimization problems, emphasizing nonlinear programs (NLPs), by propagating
+McCormick relaxations along the factorable structure of each expression in the
+NLP. Most operators supported by modern automatic differentiation (AD) packages
+are supported by EAGO, and a number of utilities for sanitizing native Julia code
+and generating relaxations on a wide variety of user-defined functions have been
+included. Currently, EAGO supports problems that have a priori variable bounds
+defined and have differentiable constraints. That is, problems should be
+specified in the generic form below:
$$
\begin{align*}
-f^{\*} = & \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} f(\mathbf y) \\
+f^{\*} = \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} & f(\mathbf y) \\
{\rm s.t.} \\;\\; & \mathbf h(\mathbf y) = \mathbf 0 \\
& \mathbf g(\mathbf y) \leq \mathbf 0 \\
& Y = [\mathbf y^{\mathbf L}, \mathbf y^{\mathbf U}] \in \mathbb{IR}^{n} \\
@@ -20,7 +30,10 @@ f^{\*} = & \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} f(\mathbf y) \\
\end{align*}
$$
-For each nonlinear term, EAGO makes use of factorable representations to construct bounds and relaxations. In the case of $f(x) = x (x - 5) \sin(x)$, a list is generated and rules for constructing McCormick relaxations are used to formulate relaxations in the original decision space, $X$ [[1](#references)]:
+For each nonlinear term, EAGO makes use of factorable representations to
+construct bounds and relaxations. In the case of $f(x) = x (x - 5) \sin(x)$, a
+list is generated and rules for constructing McCormick relaxations are used to
+formulate relaxations in the original decision space, $X$ [[1](#references)]:
- $v_{1} = x$
- $v_{2} = v_{1} - 5$
@@ -30,9 +43,15 @@ For each nonlinear term, EAGO makes use of factorable representations to constru
- $f(x) = v_{5}$
-
+
+
-Either these original relaxations, differentiable McCormick relaxations [[2](#references)], or affine relaxations thereof can be used to construct relaxations of optimization problems useful in branch and bound routines for global optimization. Utilities are included to combine these with algorithms for relaxing implicit functions [[3](#references)] and forward-reverse propagation of McCormick arithmetic [[4](#references)].
+Either these original relaxations, differentiable McCormick relaxations
+[[2](#references)], or affine relaxations thereof can be used to construct
+relaxations of optimization problems useful in branch and bound routines for
+global optimization. Utilities are included to combine these with algorithms for
+relaxing implicit functions [[3](#references)] and forward-reverse propagation
+of McCormick arithmetic [[4](#references)].
## License
@@ -40,7 +59,8 @@ EAGO is licensed under the [MIT License](https://github.com/PSORLab/EAGO.jl/blob
## Installation
-EAGO is a registered Julia package and it can be installed using the Julia package manager:
+EAGO is a registered Julia package and it can be installed using the Julia
+package manager:
```julia
import Pkg
@@ -49,11 +69,13 @@ Pkg.add("EAGO")
## Use with JuMP
-EAGO makes use of JuMP to improve the user's experience in setting up optimization models. Consider the "process" problem instance from [[5](#references)]:
+EAGO makes use of JuMP to improve the user's experience in setting up
+optimization models. Consider the "process" problem instance from
+[[5](#references)]:
$$
\begin{align*}
-& \max_{\mathbf x \in X} 0.063 x_{4} x_{7} - 5.04 x_{1} - 0.035 x_{2} - 10 x_{3} - 3.36 x_{2} \\
+\max_{\mathbf x \in X} & 0.063 x_{4} x_{7} - 5.04 x_{1} - 0.035 x_{2} - 10 x_{3} - 3.36 x_{2} \\
{\rm s.t.} \\;\\; & x_{1} (1.12 + 0.13167 x_{8} - 0.00667 x_{8}^{2}) + x_{4} = 0 \\
& -0.001 x_{4} x_{9} x_{6} / (98 - x_{6}) + x_{3} = 0 \\
& -(1.098 x_{8} - 0.038 x_{8}^{2}) - 0.325 x_{6} + x_{7} = 57.425 \\
@@ -69,41 +91,58 @@ $$
This model can be formulated in Julia as:
```julia
-using JuMP, EAGO
-
+using JuMP
+import EAGO
# Build model using EAGO's optimizer
-m = Model(EAGO.Optimizer)
-
+model = Model(EAGO.Optimizer)
# Define bounded variables
-xL = [10.0; 0.0; 0.0; 0.0; 0.0; 85.0; 90.0; 3.0; 1.2; 145.0]
-xU = [2000.0; 16000.0; 120.0; 5000.0; 2000.0; 93.0; 95.0; 12.0; 4.0; 162.0]
-@variable(m, xL[i] <= x[i=1:10] <= xU[i])
-
+xL = [10.0, 0.0, 0.0, 0.0, 0.0, 85.0, 90.0, 3.0, 1.2, 145.0]
+xU = [2000.0, 16000.0, 120.0, 5000.0, 2000.0, 93.0, 95.0, 12.0, 4.0, 162.0]
+@variable(model, xL[i] <= x[i=1:10] <= xU[i])
# Define nonlinear constraints
-@NLconstraint(m, e1, -x[1]*(1.12 + 0.13167*x[8] - 0.00667*(x[8])^2) + x[4] == 0.0)
-@NLconstraint(m, e3, -0.001*x[4]*x[9]*x[6]/(98.0 - x[6]) + x[3] == 0.0)
-@NLconstraint(m, e4, -(1.098*x[8] - 0.038*(x[8])^2) - 0.325*x[6] + x[7] == 57.425)
-@NLconstraint(m, e5, -(x[2] + x[5])/x[1] + x[8] == 0.0)
-
+@NLconstraints(model, begin
+ -x[1]*(1.12 + 0.13167*x[8] - 0.00667*(x[8])^2) + x[4] == 0.0
+ -0.001*x[4]*x[9]*x[6]/(98.0 - x[6]) + x[3] == 0.0
+ -(1.098*x[8] - 0.038*(x[8])^2) - 0.325*x[6] + x[7] == 57.425
+ -(x[2] + x[5])/x[1] + x[8] == 0.0
+end)
# Define linear constraints
-@constraint(m, e2, -x[1] + 1.22*x[4] - x[5] == 0.0)
-@constraint(m, e6, x[9] + 0.222*x[10] == 35.82)
-@constraint(m, e7, -3.0*x[7] + x[10] == -133.0)
-
+@constraints(model, begin
+ -x[1] + 1.22*x[4] - x[5] == 0.0
+ x[9] + 0.222*x[10] == 35.82
+ -3.0*x[7] + x[10] == -133.0
+end)
# Define nonlinear objective
-@NLobjective(m, Max, 0.063*x[4]*x[7] - 5.04*x[1] - 0.035*x[2] - 10*x[3] - 3.36*x[5])
-
+@NLobjective(
+ model,
+ Max,
+ 0.063*x[4]*x[7] - 5.04*x[1] - 0.035*x[2] - 10*x[3] - 3.36*x[5],
+)
# Solve the optimization problem
-JuMP.optimize!(m)
+optimize!(model)
```
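Once `optimize!` returns, the solution can be inspected with standard JuMP
accessors (a minimal sketch using JuMP's documented query functions; the numeric
results depend on the solve and are not reproduced here):

```julia
# Check that the solver proved global optimality
status = termination_status(model)

# Objective value and variable values at the incumbent solution
obj = objective_value(model)
xsol = value.(x)
```

`solution_summary(model)` prints the same information in one call.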
## Documentation
-EAGO has numerous features: a solver accessible from JuMP/MathOptInterface (MOI), domain reduction routines, McCormick relaxations, and specialized nonconvex semi-infinite program solvers. A full description of all features can be found on the [documentation website](https://psorlab.github.io/EAGO.jl/dev/). A series of example have been provided in the documentation and in the form of Jupyter Notebooks in the separate [EAGO-notebooks](https://github.com/PSORLab/EAGO-notebooks) repository.
+EAGO has numerous features: a solver accessible from JuMP/MathOptInterface (MOI),
+domain reduction routines, McCormick relaxations, and specialized nonconvex
+semi-infinite program solvers. A full description of all features can be found
+on the [documentation website](https://psorlab.github.io/EAGO.jl/dev/).
+
+A series of examples has been provided in the documentation and in the form of
+Jupyter notebooks in the separate [EAGO-notebooks](https://github.com/PSORLab/EAGO-notebooks)
+repository.
## A Cautionary Note on Global Optimization
-As a global optimization platform, EAGO's solvers can be used to find solutions of general nonconvex problems with a guaranteed certificate of optimality. However, global solvers suffer from the curse of dimensionality and therefore their performance is outstripped by convex/local solvers. For users interested in large-scale applications, be warned that problems generally larger than a few variables may prove challenging for certain types of global optimization problems.
+As a global optimization platform, EAGO's solvers can be used to find solutions
+of general nonconvex problems with a guaranteed certificate of optimality.
+However, global solvers suffer from the curse of dimensionality and therefore
+their performance is outstripped by convex/local solvers.
+
+For users interested in large-scale applications, be warned that global
+optimization problems with more than a few variables may prove challenging for
+certain problem classes.
## Citing EAGO
@@ -118,24 +157,24 @@ As a BibTeX entry:
```bibtex
@article{doi:10.1080/10556788.2020.1786566,
-author = {Wilhelm, M.E. and Stuber, M.D.},
-title = {EAGO.jl: easy advanced global optimization in Julia},
-journal = {Optimization Methods and Software},
-volume = {37},
-number = {2},
-pages = {425-450},
-year = {2022},
-publisher = {Taylor & Francis},
-doi = {10.1080/10556788.2020.1786566},
-URL = {https://doi.org/10.1080/10556788.2020.1786566},
-eprint = {https://doi.org/10.1080/10556788.2020.1786566}
+ author = {Wilhelm, M.E. and Stuber, M.D.},
+ title = {EAGO.jl: easy advanced global optimization in Julia},
+ journal = {Optimization Methods and Software},
+ volume = {37},
+ number = {2},
+ pages = {425-450},
+ year = {2022},
+ publisher = {Taylor & Francis},
+ doi = {10.1080/10556788.2020.1786566},
+ URL = {https://doi.org/10.1080/10556788.2020.1786566},
+ eprint = {https://doi.org/10.1080/10556788.2020.1786566}
}
```
## References
1. Mitsos, A., Chachuat, B., and Barton, P.I. **McCormick-based relaxations of algorithms.** *SIAM Journal on Optimization*. 20(2): 573–601 (2009).
-2. Khan, K.A., Watson, H.A.J., and Barton, P.I. **Differentiable McCormick relaxations.** *Journal of Global Optimization*. 67(4): 687-729 (2017).
+2. Khan, K.A., Watson, H.A.J., and Barton, P.I. **Differentiable McCormick relaxations.** *Journal of Global Optimization*. 67(4): 687–729 (2017).
3. Stuber, M.D., Scott, J.K., and Barton, P.I.: **Convex and concave relaxations of implicit functions.** *Optimization Methods and Software* 30(3): 424–460 (2015).
-4. Wechsung, A., Scott, J.K., Watson, H.A.J., and Barton, P.I. **Reverse propagation of McCormick relaxations.** *Journal of Global Optimization* 63(1): 1-36 (2015).
-5. Bracken, J., and McCormick, G.P. *Selected Applications of Nonlinear Programming.* John Wiley and Sons, New York (1968).
\ No newline at end of file
+4. Wechsung, A., Scott, J.K., Watson, H.A.J., and Barton, P.I. **Reverse propagation of McCormick relaxations.** *Journal of Global Optimization* 63(1): 1–36 (2015).
+5. Bracken, J., and McCormick, G.P. *Selected Applications of Nonlinear Programming.* John Wiley and Sons, New York (1968).
From 847fa3808f54d8d8285c6309bef2dbb0f6b8dba1 Mon Sep 17 00:00:00 2001
From: Oscar Dowson
Date: Tue, 7 Nov 2023 13:24:39 +1300
Subject: [PATCH 2/4] Update README.md
---
docs/src/jump/README.md | 16 +++++++---------
1 file changed, 7 insertions(+), 9 deletions(-)
diff --git a/docs/src/jump/README.md b/docs/src/jump/README.md
index 9072c7af..8aa6ab78 100644
--- a/docs/src/jump/README.md
+++ b/docs/src/jump/README.md
@@ -33,7 +33,7 @@ $$
For each nonlinear term, EAGO makes use of factorable representations to
construct bounds and relaxations. In the case of $f(x) = x (x - 5) \sin(x)$, a
list is generated and rules for constructing McCormick relaxations are used to
-formulate relaxations in the original decision space, $X$ [[1](#references)]:
+formulate relaxations in the original decision space, $X$ [1]:
- $v_{1} = x$
- $v_{2} = v_{1} - 5$
@@ -46,12 +46,11 @@ formulate relaxations in the original decision space, $X$ [[1](#references)]:
-Either these original relaxations, differentiable McCormick relaxations
-[[2](#references)], or affine relaxations thereof can be used to construct
-relaxations of optimization problems useful in branch and bound routines for
-global optimization. Utilities are included to combine these with algorithms for
-relaxing implicit functions [[3](#references)] and forward-reverse propagation
-of McCormick arithmetic [[4](#references)].
+Either these original relaxations, differentiable McCormick relaxations [2], or
+affine relaxations thereof can be used to construct relaxations of optimization
+problems useful in branch and bound routines for global optimization. Utilities
+are included to combine these with algorithms for relaxing implicit functions
+[3] and forward-reverse propagation of McCormick arithmetic [4].
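These relaxations can also be evaluated directly through EAGO's McCormick
arithmetic. The sketch below assumes EAGO's exported `MC` type and `NS`
(nonsmooth) relaxation tag together with the `Interval` constructor from
IntervalArithmetic.jl; it propagates relaxation values of
$f(x) = x (x - 5) \sin(x)$ at a point:

```julia
using EAGO, IntervalArithmetic

# McCormick object for x at the point 2.0 over the interval [1, 4];
# the first type parameter is the number of decision variables.
x = MC{1,NS}(2.0, Interval{Float64}(1.0, 4.0), 1)

# Relaxations propagate through the factorable expression
fMC = x * (x - 5.0) * sin(x)

fMC.cv    # convex relaxation value at x = 2.0
fMC.cc    # concave relaxation value at x = 2.0
fMC.Intv  # interval bounds on f over [1, 4]
```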
## License
@@ -70,8 +69,7 @@ Pkg.add("EAGO")
## Use with JuMP
EAGO makes use of JuMP to improve the user's experience in setting up
-optimization models. Consider the "process" problem instance from
-[[5](#references)]:
+optimization models. Consider the "process" problem instance from [5]:
$$
\begin{align*}
From 8b40836922a4a5688d11e4d633cfd70f7f33aecb Mon Sep 17 00:00:00 2001
From: Dimitri Alston
Date: Tue, 7 Nov 2023 14:13:11 -0500
Subject: [PATCH 3/4] Update README.md
---
docs/src/jump/README.md | 46 ++++++++++++++++++++++-------------------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/docs/src/jump/README.md b/docs/src/jump/README.md
index 8aa6ab78..6cca3343 100644
--- a/docs/src/jump/README.md
+++ b/docs/src/jump/README.md
@@ -20,27 +20,31 @@ included. Currently, EAGO supports problems that have a priori variable bounds
defined and have differentiable constraints. That is, problems should be
specified in the generic form below:
-$$
-\begin{align*}
-f^{\*} = \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} & f(\mathbf y) \\
-{\rm s.t.} \\;\\; & \mathbf h(\mathbf y) = \mathbf 0 \\
+```math
+\begin{aligned}
+f^{*} = & \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} f(\mathbf y) \\
+{\rm s.t.} \; \; & \mathbf h(\mathbf y) = \mathbf 0 \\
& \mathbf g(\mathbf y) \leq \mathbf 0 \\
-& Y = [\mathbf y^{\mathbf L}, \mathbf y^{\mathbf U}] \in \mathbb{IR}^{n} \\
-& \qquad \mathbf y^{\mathbf L}, \mathbf y^{\mathbf U} \in \mathbb R^{n}
-\end{align*}
-$$
+& Y = [\mathbf y^{L}, \mathbf y^{U}] \in \mathbb{IR}^{n} \\
+& \qquad \mathbf y^{L}, \mathbf y^{U} \in \mathbb R^{n}
+\end{aligned}
+```
For each nonlinear term, EAGO makes use of factorable representations to
construct bounds and relaxations. In the case of $f(x) = x (x - 5) \sin(x)$, a
list is generated and rules for constructing McCormick relaxations are used to
formulate relaxations in the original decision space, $X$ [1]:
-- $v_{1} = x$
-- $v_{2} = v_{1} - 5$
-- $v_{3} = \sin(v_{1})$
-- $v_{4} = v_{1} v_{2}$
-- $v_{5} = v_{4} v_{3}$
-- $f(x) = v_{5}$
+```math
+\begin{aligned}
+v_{1} & = x \\
+v_{2} & = v_{1} - 5 \\
+v_{3} & = \sin(v_{1}) \\
+v_{4} & = v_{1} v_{2} \\
+v_{5} & = v_{4} v_{3} \\
+f(x) & = v_{5} \\
+\end{aligned}
+```
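As a quick sanity check of the factored list above, the intermediate terms can
be evaluated in plain Julia and compared against the original expression (the
names `v1`–`v5` mirror the list and are purely illustrative):

```julia
# Original expression
f(x) = x * (x - 5) * sin(x)

# Factored evaluation following the list v1, ..., v5
function f_factored(x)
    v1 = x
    v2 = v1 - 5
    v3 = sin(v1)
    v4 = v1 * v2
    v5 = v4 * v3
    return v5
end

# Both evaluations agree at a sample point
f(2.0) ≈ f_factored(2.0)  # true
```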
@@ -58,7 +62,7 @@ EAGO is licensed under the [MIT License](https://github.com/PSORLab/EAGO.jl/blob
## Installation
-EAGO is a registered Julia package and it can be installed using the Julia
+EAGO is a registered Julia package that can be installed using the Julia
package manager:
```julia
@@ -71,10 +75,10 @@ Pkg.add("EAGO")
EAGO makes use of JuMP to improve the user's experience in setting up
optimization models. Consider the "process" problem instance from [5]:
-$$
-\begin{align*}
-\max_{\mathbf x \in X} & 0.063 x_{4} x_{7} - 5.04 x_{1} - 0.035 x_{2} - 10 x_{3} - 3.36 x_{2} \\
-{\rm s.t.} \\;\\; & x_{1} (1.12 + 0.13167 x_{8} - 0.00667 x_{8}^{2}) + x_{4} = 0 \\
+```math
+\begin{aligned}
+& \max_{\mathbf x \in X} 0.063 x_{4} x_{7} - 5.04 x_{1} - 0.035 x_{2} - 10 x_{3} - 3.36 x_{5} \\
+{\rm s.t.} \; \; & x_{1} (1.12 + 0.13167 x_{8} - 0.00667 x_{8}^{2}) + x_{4} = 0 \\
& -0.001 x_{4} x_{9} x_{6} / (98 - x_{6}) + x_{3} = 0 \\
& -(1.098 x_{8} - 0.038 x_{8}^{2}) - 0.325 x_{6} + x_{7} = 57.425 \\
& -(x_{2} + x_{5}) / x_{1} + x_{8} = 0 \\
@@ -83,8 +87,8 @@ $$
& -3.0 x_{7} + x_{10} + 133.0 = 0 \\
& X = [10, 2000] \times [0, 16000] \times [0, 120] \times [0, 5000] \\
& \qquad \times [0, 2000] \times [85, 93] \times [90, 95] \times [3, 12] \times [1.2, 4] \times [145, 162]
-\end{align*}
-$$
+\end{aligned}
+```
This model can be formulated in Julia as:
From 102b8e032c04da114e413975a26f0d2b9eaddb63 Mon Sep 17 00:00:00 2001
From: Dimitri Alston
Date: Tue, 7 Nov 2023 15:41:51 -0500
Subject: [PATCH 4/4] Update README.md
---
docs/src/jump/README.md | 17 ++++++++++++++---
1 file changed, 14 insertions(+), 3 deletions(-)
diff --git a/docs/src/jump/README.md b/docs/src/jump/README.md
index 6cca3343..d0a8ffb7 100644
--- a/docs/src/jump/README.md
+++ b/docs/src/jump/README.md
@@ -1,4 +1,6 @@
+```@raw html
+```
# EAGO - Easy Advanced Global Optimization
@@ -31,9 +33,16 @@ f^{*} = & \min_{\mathbf y \in Y \subset \mathbb R^{n_{y}}} f(\mathbf y) \\
```
For each nonlinear term, EAGO makes use of factorable representations to
-construct bounds and relaxations. In the case of $f(x) = x (x - 5) \sin(x)$, a
-list is generated and rules for constructing McCormick relaxations are used to
-formulate relaxations in the original decision space, $X$ [1]:
+construct bounds and relaxations.
+
+For example, given the function
+
+```math
+f(x) = x (x - 5) \sin(x),
+```
+
+a list is generated and rules for constructing McCormick relaxations are used
+to formulate relaxations in the original decision space, $X$ [1]:
```math
\begin{aligned}
@@ -46,9 +55,11 @@ f(x) & = v_{5} \\
\end{aligned}
```
+```@raw html
+```
Either these original relaxations, differentiable McCormick relaxations [2], or
affine relaxations thereof can be used to construct relaxations of optimization