All notable changes to the Julia package Manopt.jl will be documented in this file. The file was started with version 0.4.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- icons in front of external links when they link to another package or Wikipedia.
- An automated detection whether the tutorials are present; if not, no Quarto run is done and an automated `--exclude-tutorials` option is added.
- Support for ManifoldDiff 0.4
- `StopWhenChangeLess`, `StopWhenGradientChangeLess` and `StopWhenGradientLess` can now use the new idea (ManifoldsBase.jl 0.15.18) of different outer norms on manifolds with components, like power and product manifolds, and all others that support this from the `Manifolds.jl` library, like `Euclidean`
- stabilize `max_stepsize` to also work when `injectivity_radius` does not exist; it however warns new users that activate tutorial mode.
- Start a `ManoptTestSuite` subpackage to store dummy types and common test helpers in.
- three new symbols to more easily specify recording of the `:Gradient`, the `:GradientNorm`, and the `:Stepsize`.
- fix a few typos in the documentation
- improved the documentation for the initial guess of `ArmijoLinesearchStepsize`.
- slightly improves the test for the `ExponentialFamilyProjection` text on the about page.
- the `proximal_point` method.
This breaking update is mainly concerned with improving a unified experience through all solvers and some usability improvements, such that for example the different gradient update rules are easier to specify.
In general we introduce a few factories that avoid having to pass the manifold to keyword arguments.
- A `ManifoldDefaultsFactory` that postpones the creation/allocation of manifold-specific fields in, for example, direction updates, step sizes, and stopping criteria. As a rule of thumb, internal structures, like a solver state, should store the final type. Any high-level interface, like the functions to start solvers, should accept such a factory in the appropriate places and call the internal `_produce_type(factory, M)`, for example before passing something to the state.
- a `documentation_glossary.jl` file containing a glossary of often-used variables in fields, arguments, and keywords, to print them in a unified manner. The same for usual sections, TeX, and math notation that is often used within the doc strings.
- Any `Stepsize` now has a `Stepsize` struct used internally as the original `struct`s before. The newly exported terms aim to fit `stepsize=...` in naming and create a `ManifoldDefaultsFactory` instead, so that any stepsize can be created without explicitly specifying the manifold; see the sketch after this list.
  - `ConstantStepsize` is no longer exported, use `ConstantLength` instead. The length parameter is now a positional argument following the (optional) manifold. Besides that, `ConstantLength` works as before, just that omitting the manifold fills the one specified in the solver now.
  - `DecreasingStepsize` is no longer exported, use `DecreasingLength` instead. `DecreasingLength` works as before, just that omitting the manifold fills the one specified in the solver now.
  - `ArmijoLinesearch` is now called `ArmijoLinesearchStepsize`. `ArmijoLinesearch` works as before, just that omitting the manifold fills the one specified in the solver now.
  - `WolfePowellLinesearch` is now called `WolfePowellLinesearchStepsize`; its constant `c_1` is now unified with Armijo and called `sufficient_decrease`, `c_2` was renamed to `sufficient_curvature`. Besides that, `WolfePowellLinesearch` works as before, just that omitting the manifold fills the one specified in the solver now.
  - `WolfePowellBinaryLinesearch` is now called `WolfePowellBinaryLinesearchStepsize`; its constant `c_1` is now unified with Armijo and called `sufficient_decrease`, `c_2` was renamed to `sufficient_curvature`. Besides that, `WolfePowellBinaryLinesearch` works as before, just that omitting the manifold fills the one specified in the solver now.
  - `NonmonotoneLinesearch` is now called `NonmonotoneLinesearchStepsize`. `NonmonotoneLinesearch` works as before, just that omitting the manifold fills the one specified in the solver now.
  - `AdaptiveWNGradient` is now called `AdaptiveWNGradientStepsize`. Its second positional argument, the gradient function, was only evaluated once for the `gradient_bound` default, so it has been replaced by the keyword `X=` accepting a tangent vector. The last positional argument `p` has also been moved to a keyword argument. Besides that, `AdaptiveWNGradient` works as before, just that omitting the manifold fills the one specified in the solver now.
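To illustrate the factory pattern, here is a minimal sketch; the cost and gradient are illustrative placeholders. The stepsize factory is created without a manifold, and the solver fills in its own:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2                                   # illustrative cost
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0]) # its Riemannian gradient

# `ConstantLength` creates a `ManifoldDefaultsFactory`; the manifold given
# to the solver call is filled in internally, so none is needed here.
q = gradient_descent(M, f, grad_f, [0.6, 0.8, 0.0]; stepsize=ConstantLength(0.1))
```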
- Any `DirectionUpdateRule` now has the `Rule` in its name, since the original name is used to create the `ManifoldDefaultsFactory` instead. The original constructor now no longer requires the manifold as a parameter; that is later done in the factory. The `Rule` is, however, also no longer exported; see the sketch after this list.
  - `AverageGradient` is now called `AverageGradientRule`. `AverageGradient` works as before, but the manifold as its first parameter is no longer necessary and `p` is now a keyword argument.
  - The `IdentityUpdateRule` now accepts a manifold optionally for consistency, and you can use `Gradient()` for short, as well as its factory. Hence `direction=Gradient()` is now available.
  - `MomentumGradient` is now called `MomentumGradientRule`. `MomentumGradient` works as before, but the manifold as its first parameter is no longer necessary and `p` is now a keyword argument.
  - `Nesterov` is now called `NesterovRule`. `Nesterov` works as before, but the manifold as its first parameter is no longer necessary and `p` is now a keyword argument.
  - `ConjugateDescentCoefficient` is now called `ConjugateDescentCoefficientRule`. `ConjugateDescentCoefficient` works as before, but can now use the factory in between.
  - the `ConjugateGradientBealeRestart` is now called `ConjugateGradientBealeRestartRule`. For `ConjugateGradientBealeRestart` the manifold is now an optional first parameter and no longer the `manifold=` keyword.
  - `DaiYuanCoefficient` is now called `DaiYuanCoefficientRule`. For `DaiYuanCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
  - `FletcherReevesCoefficient` is now called `FletcherReevesCoefficientRule`. `FletcherReevesCoefficient` works as before, but can now use the factory in between.
  - `HagerZhangCoefficient` is now called `HagerZhangCoefficientRule`. For `HagerZhangCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
  - `HestenesStiefelCoefficient` is now called `HestenesStiefelCoefficientRule`. For `HestenesStiefelCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
  - `LiuStoreyCoefficient` is now called `LiuStoreyCoefficientRule`. For `LiuStoreyCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
  - `PolakRibiereCoefficient` is now called `PolakRibiereCoefficientRule`. For `PolakRibiereCoefficient` the manifold as its first parameter is no longer necessary and the vector transport has been unified/moved to the `vector_transport_method=` keyword.
  - the `SteepestDirectionUpdateRule` is now called `SteepestDescentCoefficientRule`. The `SteepestDescentCoefficient` is equivalent, but creates the new factory in the interim.
  - `AbstractGradientGroupProcessor` is now called `AbstractGradientGroupDirectionRule`
  - the `StochasticGradient` is now called `StochasticGradientRule`. The `StochasticGradient` is equivalent, but creates the new factory in the interim, so that the manifold is no longer necessary.
  - the `AlternatingGradient` is now called `AlternatingGradientRule`. The `AlternatingGradient` is equivalent, but creates the new factory in the interim, so that the manifold is no longer necessary.
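The same pattern applies to direction update rules and CG coefficients; a minimal sketch with illustrative cost and gradient:

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2                                   # illustrative cost
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0])

# both factories are created without a manifold; the solver provides it
q1 = conjugate_gradient_descent(M, f, grad_f; coefficient=PolakRibiereCoefficient())
q2 = gradient_descent(M, f, grad_f; direction=MomentumGradient())
```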
- `quasi_Newton` had a keyword `scale_initial_operator=` that was inconsistently declared (sometimes `Bool`, sometimes real) and was unused. It is now called `initial_scale=1.0` and scales the initial (diagonal, unit) matrix within the approximation of the Hessian, additionally to the $\frac{1}{\lVert g_k\rVert}$ scaling with the norm of the oldest gradient for the limited memory variant. For the full matrix variant the initial identity matrix is now scaled with this parameter.
- Unify doc strings and presentation of keyword arguments
  - general indexing, for example in a vector, uses `i`
  - index for inequality constraints is unified to `i`, running from `1,...,m`
  - index for equality constraints is unified to `j`, running from `1,...,n`
  - iterations now use `k`
- `get_manopt_parameter` has been renamed to `get_parameter` since it is internal; internally that is clear, and accessing it from outside hence reads `Manopt.get_parameter` anyway
- `set_manopt_parameter!` has been renamed to `set_parameter!` since it is internal; internally that is clear, and accessing it from outside hence reads `Manopt.set_parameter!`
- changed the `stabilize::Bool=` keyword in `quasi_Newton` to the more flexible `project!=` keyword; this is also more in line with the other solvers. Internally the same is done within the `QuasiNewtonLimitedMemoryDirectionUpdate`. To adapt (see the sketch after this list),
  - the previous `stabilize=true` is now set with `(project!)=embed_project!` in general, and if the manifold is represented by points in the embedding, like the sphere, `(project!)=project!` suffices
  - the new default is `(project!)=copyto!`, so by default no projection/stabilization is performed.
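A minimal sketch of the adaptation, assuming an embedded manifold like the sphere (cost, gradient, and start point are illustrative):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2                                   # illustrative cost
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0])
p0 = [0.6, 0.8, 0.0]

# previously: quasi_Newton(M, f, grad_f, p0; stabilize=true)
# on a manifold represented by points in the embedding this suffices:
q = quasi_Newton(M, f, grad_f, p0; (project!)=project!)
```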
- the positional argument `p` (usually the last, or the third to last if subsolvers existed) has been moved to a keyword argument `p=` in all state constructors
- in `NelderMeadState` the `population` moved from positional to keyword argument as well
- the way to initialise sub solvers in the solver states has been unified. In the new variant
  - the `sub_problem` is always a positional argument, namely the last one
  - if the `sub_state` is given as an optional positional argument after the problem, it has to be a Manopt solver state
  - you can provide the new `ClosedFormSubSolverState(e::AbstractEvaluationType)` for the state to indicate that the `sub_problem` is a closed form solution (function call) and how it has to be called
  - if you do not provide the `sub_state` as positional, the keyword `evaluation=` is used to generate the state `ClosedFormSubSolverState`.
- when previously `p` and eventually `X` were positional arguments, they are now moved to keyword arguments of the same name for start point and tangent vector.
- in detail
  - `AdaptiveRegularizationState(M, sub_problem [, sub_state]; kwargs...)` replaces the (anyways unused) variant to only provide the objective; both `X` and `p` moved to keyword arguments.
  - `AugmentedLagrangianMethodState(M, objective, sub_problem; evaluation=...)` was added, and `AugmentedLagrangianMethodState(M, objective, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
  - `ExactPenaltyMethodState(M, sub_problem; evaluation=...)` was added and `ExactPenaltyMethodState(M, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
  - `DifferenceOfConvexState(M, sub_problem; evaluation=...)` was added and `DifferenceOfConvexState(M, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
  - `DifferenceOfConvexProximalState(M, sub_problem; evaluation=...)` was added and `DifferenceOfConvexProximalState(M, sub_problem, sub_state; evaluation=...)` now has `p=rand(M)` as keyword argument instead of being the second positional one
- bumped `Manifolds.jl` to version 0.10; this mainly means that any algorithm working on a product manifold and requiring `ArrayPartition` now has to explicitly do `using RecursiveArrayTools`.
- the `AverageGradientRule` filled its internal vector of gradients wrongly, or mixed it up in parallel transport. This is now fixed.
- the `convex_bundle_method` and its `ConvexBundleMethodState` no longer accept the keywords `k_size`, `p_estimate` nor `ϱ`; they are superseded by just providing `k_max`.
- the `truncated_conjugate_gradient_descent(M, f, grad_f, hess_f)` now has the Hessian as a mandatory argument. To use the old variant, provide `ApproxHessianFiniteDifference(M, copy(M, p), grad_f)` to `hess_f` directly.
- all deprecated keyword arguments and a few function signatures were removed:
  - `get_equality_constraints`, `get_equality_constraints!`, `get_inequality_constraints`, `get_inequality_constraints!` are removed. Use their singular forms and set the index to `:` instead.
  - `StopWhenChangeLess(ε)` is removed; use `StopWhenChangeLess(M, ε)` instead, to fill for example the retraction properly used to determine the change
  - In the `WolfePowellLinesearch` and `WolfeBinaryLinesearch` the `linesearch_stopsize=` keyword is replaced by `stop_when_stepsize_less=`
  - `DebugChange` and `RecordChange` had a `manifold=` and an `invretr` keyword that were replaced by the first positional argument `M` and the `inverse_retraction_method=` keyword, respectively
  - in the `NonlinearLeastSquaresObjective` and `LevenbergMarquardt` the `jacB=` keyword is now called `jacobian_tangent_basis=`
  - in `particle_swarm` the `n=` keyword is replaced by `swarm_size=`.
  - `update_stopping_criterion!` has been removed and unified with `set_parameter!`. The code adaptions are (see the sketch after this list)
    - to set a parameter of a stopping criterion, just replace `update_stopping_criterion!(sc, :Val, v)` with `set_parameter!(sc, :Val, v)`
    - to update a stopping criterion in a solver state, replace the old `update_stopping_criterion!(state, :Val, v)` that passed down to the stopping criterion by the explicit pass down with `set_parameter!(state, :StoppingCriterion, :Val, v)`
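For example, a sketch (`:MaxIteration` is assumed here as the symbol for `StopAfterIteration`; other criteria use their own symbols):

```julia
using Manopt

sc = StopAfterIteration(100)
# before: update_stopping_criterion!(sc, :MaxIteration, 200)
Manopt.set_parameter!(sc, :MaxIteration, 200)
```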
- Improved performance of the Interior Point Newton Method.
- an Interior Point Newton Method, the `interior_point_newton`
- a `conjugate_residual` algorithm to solve a linear system on a tangent space.
- `ArmijoLinesearch` now allows for the additional `additional_decrease_condition` and `additional_increase_condition` keywords to add further conditions on when to accept a decrease or increase of the stepsize.
- add a `DebugFeasibility` to have a debug print about feasibility of points in constrained optimisation, employing the new `is_feasible` function
- add an `InteriorPointCentralityCondition` check that can be added for step candidates within the line search of `interior_point_newton`
- Add several new functors
  - the `LagrangianCost`, `LagrangianGradient`, `LagrangianHessian`, that, based on a constrained objective, allow to construct the Hessian objective of its Lagrangian
  - the `CondensedKKTVectorField` and its `CondensedKKTVectorFieldJacobian`, that are being used to solve a linear system within `interior_point_newton`
  - the `KKTVectorField` as well as its `KKTVectorFieldJacobian` and `KKTVectorFieldAdjointJacobian`
  - the `KKTVectorFieldNormSq` and its `KKTVectorFieldNormSqGradient` used within the Armijo line search of `interior_point_newton`
- New stopping criteria
  - A `StopWhenRelativeResidualLess` for the `conjugate_residual`
  - A `StopWhenKKTResidualLess` for the `interior_point_newton`
- A `max_stepsize` method for `Hyperrectangle`.
- a few typos in the documentation
- `WolfePowellLinesearch` no longer uses `max_stepsize` with an invalid point by default.
- Remove functions `estimate_sectional_curvature`, `ζ_1`, `ζ_2`, `close_point` from `convex_bundle_method`
- Remove some unused fields and arguments such as `p_estimate`, `ϱ`, `α` from `ConvexBundleMethodState`, in favor of just `k_max`
- Change the placement of the parameter `R` in `ProximalBundleMethodState` to the fifth position
- refactor stopping criteria to no longer store a `sc.reason` internally, but instead only generate the reason (and hence allocate a string) when actually asked for one.
- Remodel the constraints and their gradients into separate `VectorGradientFunction`s to reduce code duplication and encapsulate the inner model of these functions and their gradients
- Introduce a `ConstrainedManoptProblem` to model different ranges for the gradients in the new `VectorGradientFunction`s beyond the default `NestedPowerRepresentation`
- introduce a `VectorHessianFunction` to also model that one can provide the vector of Hessians to constraints
- introduce a more flexible indexing beyond single indexing, to also include arbitrary ranges when accessing vector functions and their gradients, and hence also for constraints and their gradients.
- Remodel `ConstrainedManifoldObjective` to store an `AbstractManifoldObjective` internally instead of directly `f` and `grad_f`, allowing also Hessian objectives therein and implementing access to this Hessian
- Fixed a bug where Lanczos produced NaNs when started exactly in a minimizer, since we divide by the gradient norm.
- deprecate `get_grad_equality_constraints(M, o, p)`; use `get_grad_equality_constraint(M, o, p, :)` from the more flexible indexing instead.
- `:reinitialize_direction_update` option for quasi-Newton behavior when the direction is not a descent one. It is now the new default for `QuasiNewtonState`.
- Quasi-Newton direction update rules are now initialized upon start of the solver with the new internal function `initialize_update!`.
- ALM and EPM no longer keep a part of the quasi-Newton subsolver state between runs.
- Quasi-Newton solvers: `:reinitialize_direction_update` is the new default behavior in case of detection of a non-descent direction, instead of `:step_towards_negative_gradient`. `:step_towards_negative_gradient` is still available when explicitly set using the `nondescent_direction_behavior` keyword argument, as in the sketch below.
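A sketch of explicitly restoring the previous behavior (cost, gradient, and start point are illustrative):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0])

# opt back in to stepping towards the negative gradient on non-descent directions
q = quasi_Newton(M, f, grad_f, [0.6, 0.8, 0.0];
    nondescent_direction_behavior=:step_towards_negative_gradient,
)
```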
- bumped the dependency on ManifoldsBase.jl to 0.15.9 and imported their numerical verify functions. This changes the `throw_error` keyword used internally to an `error=` keyword with a symbol.
- Tests use `Aqua.jl` to spot problems in the code
- introduce a feature-based list of solvers and reduce the details in the alphabetical list
- adds a `PolyakStepsize`
- added a `get_subgradient` for `AbstractManifoldGradientObjective`s, since their gradient is a special case of a subgradient.
- `get_last_stepsize` was defined in quite different ways that caused ambiguities. That is now internally a bit restructured and should work nicer. Internally this means that the interim dispatch on `get_last_stepsize(problem, state, step, vars...)` was removed. Now the only two left are `get_last_stepsize(p, s, vars...)` and the one directly checking `get_last_stepsize(::Stepsize)` for stored values.
- the accidentally exported `set_manopt_parameter!` is no longer exported
- `get_manopt_parameter` and `set_manopt_parameter!` have been revised and better documented; they now use more semantic symbols (with capital letters) instead of direct field access (lower-case letter symbols). Since these are not exported, this is considered an internal, hence non-breaking, change.
  - semantic symbols are now all nouns in upper-case letters
  - `:active` is changed to `:Activity`
- `RecordWhenActive` to allow records to be deactivated during runtime, symbol `:WhenActive`
- `RecordSubsolver` to record the result of a subsolver recording in the main solver, symbol `:Subsolver`
- `RecordStoppingReason` to record the reason a solver stopped
- made the `RecordFactory` more flexible and quite similar to `DebugFactory`, such that it is now also easy to specify recordings at the end of solver runs. This can especially be used to record final states of sub solvers.
- being a bit more strict with internal tools, and made the factories for record non-exported, so this is the same as for debug.
- The name `:Subsolver` to generate `DebugWhenActive` was misleading; it is now called `:WhenActive`, referring to “print debug only when set active, that is, by the parent (main) solver”.
- the old version of specifying `Symbol => RecordAction` for later access was ambiguous, since it could also mean to store the action in the dictionary under that symbol. Hence the order for access was switched to `RecordAction => Symbol` to resolve that ambiguity.
- A Riemannian variant of the CMA-ES (Covariance Matrix Adaptation Evolutionary Strategy) algorithm, `cma_es`.
- The constructor dispatch for `StopWhenAny` with `Vector` had an incorrect element type assertion, which was fixed.
- more advanced methods to add debug to the beginning of an algorithm, a step, or the end of the algorithm with `DebugAction` entries at `:Start`, `:BeforeIteration`, `:Iteration`, and `:Stop`, respectively.
- Introduce a Pair-based format to add elements to these hooks, while all others are now added to `:Iteration` (no longer to `:All`); see the sketch after this list.
- (planned) add an easy possibility to also record the initial stage and not only after the first iteration.
- Changed the symbol for the `:Step` dictionary to be `:Iteration`, to unify this with the symbols used in recording, and removed the `:All` symbol. On the fine granular scale, all but `:Start` debugs are now reset on init. Since these are merely internal entries in the debug dictionary, this is considered non-breaking.
- introduce a `StopWhenSwarmVelocityLess` stopping criterion for `particle_swarm`, replacing the current default of the swarm change, since this is a bit more effective to compute
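A sketch of the Pair-based debug format with the new hooks (the concrete actions, cost, and gradient are illustrative):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0])

q = gradient_descent(M, f, grad_f;
    debug=[
        :BeforeIteration => [:Iteration],  # print the iteration number first
        :Iteration => [:Cost, "\n"],       # plain entries land in :Iteration
        :Stop => [:Stop],                  # report the stopping reason
    ],
)
```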
- fixed the outdated documentation of `TruncatedConjugateGradientState`, which now correctly states that `p` is no longer stored, but the algorithm runs on `TpM`.
- implemented the missing `get_iterate` for `TruncatedConjugateGradientState`.
- `convex_bundle_method` uses the `sectional_curvature` from `ManifoldsBase.jl`.
- `convex_bundle_method` no longer has the unused `k_min` keyword argument.
- `ManifoldsBase.jl` now is running on Documenter 1.3; `Manopt.jl` documentation now uses DocumenterInterLinks to refer to sections and functions from `ManifoldsBase.jl`
- fixes a typo that, when passing `sub_kwargs` to `trust_regions`, caused an error in the decoration of the sub objective.
- The option `:step_towards_negative_gradient` for `nondescent_direction_behavior` in quasi-Newton solvers no longer emits a warning by default. This has been moved to a `message`, that can be accessed/displayed with `DebugMessages`
- `DebugMessages` now has a second positional argument, specifying whether all messages, or just the first (`:Once`), should be displayed.
- Option `nondescent_direction_behavior` for quasi-Newton solvers. By default it checks for non-descent directions, which may not be handled well by some stepsize selection algorithms.
- unified documentation, especially function signatures, further.
- fixed a few typos related to math formulae in the doc strings.
- `convex_bundle_method` optimization algorithm for non-smooth geodesically convex functions
- `proximal_bundle_method` optimization algorithm for non-smooth functions.
- `StopWhenSubgradientNormLess` and `StopWhenLagrangeMultiplierLess` stopping criteria.
- Doc strings now follow a vale.sh policy. Though this is not fully working yet, this improves a lot of the doc strings concerning wording and spelling.
- fixes two storage action defaults that accidentally still tried to initialize a `:Population` (as modified back to `:Iterate` in 0.4.49).
- fix a few typos in the documentation and add a reference for the subgradient method.
- introduce an environment-persistent way of setting global values with the `set_manopt_parameter!` function, using Preferences.jl; see the sketch below.
- introduce such a value named `:Mode` to enable a `"Tutorial"` mode that shall often provide more warnings and information for people getting started with optimisation on manifolds
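A sketch of activating the new mode, which is stored persistently via Preferences.jl:

```julia
using Manopt

# persistently switch on the tutorial mode with its additional warnings
Manopt.set_manopt_parameter!(:Mode, "Tutorial")
```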
- A `StopWhenSubgradientNormLess` stopping criterion for subgradient-based optimization.
- Allow the `message=` of the `DebugIfEntry` debug action to contain a format element to print the field in the message as well.
- Fix quasi-Newton on complex manifolds.
- A `StopWhenEntryChangeLess` to be able to stop on arbitrarily small changes of specific fields
- generalises `StopWhenGradientNormLess` to accept arbitrary `norm=` functions
- refactor the default in `particle_swarm` to no longer “misuse” the iteration change, but actually use the new `:swarm` entry
- fixes an imprecision in the interface of `get_iterate` that sometimes led to the swarm of `particle_swarm` being returned as the iterate.
- refactor `particle_swarm` in naming and access functions to avoid this also in the future. To access the whole swarm, one should now use `get_manopt_parameter(pss, :Population)`
- fixed a bug where the retraction set in `check_Hessian` was not passed on to the optional inner `check_gradient` call, which could lead to unwanted side effects, see #342.
- An error is thrown when a line search from `LineSearches.jl` reports search failure.
- Changed the default stopping criterion in the ALM algorithm to mitigate an issue occurring when the step size is very small.
- The default memory length in the default ALM subsolver is now capped at the manifold dimension.
- Replaced CI testing on Julia 1.8 with testing on Julia 1.10.
- A bug in the `LineSearches.jl` extension leading to slower convergence.
- Fixed a bug in L-BFGS related to memory storage, which caused significantly slower convergence.
- Introduce `sub_kwargs` and `sub_stopping_criterion` for `trust_regions` as noticed in #336
- `WolfePowellLineSearch` and `ArmijoLineSearch` step sizes now allocate less
- `linesearch_backtrack!` is now available
- Quasi-Newton updates can work in-place of a direction vector as well.
- Faster `safe_indices` in L-BFGS.
Formally one could consider this version breaking, since a few functions have been moved that in earlier versions (0.3.x) were used in example scripts. These examples are now available again within ManoptExamples.jl, and with their “reappearance” the corresponding costs, gradients, differentials, adjoint differentials, and proximal maps have been moved there as well. This is not considered breaking, since the functions were only used in the old, removed examples. Each and every moved function is still documented. They have been partly renamed, and their documentation and testing has been extended.

- Bumped and added dependencies on all 3 Project.toml files: the main one, the docs/ one, and the tutorials/ one.
- `artificial_S2_lemniscate` is available as `ManoptExamples.Lemniscate` and works on arbitrary manifolds now.
- `artificial_S1_signal` is available as `ManoptExamples.artificial_S1_signal`
- `artificial_S1_slope_signal` is available as `ManoptExamples.artificial_S1_slope_signal`
- `artificial_S2_composite_bezier_curve` is available as `ManoptExamples.artificial_S2_composite_Bezier_curve`
- `artificial_S2_rotation_image` is available as `ManoptExamples.artificial_S2_rotation_image`
- `artificial_S2_whirl_image` is available as `ManoptExamples.artificial_S2_whirl_image`
- `artificial_S2_whirl_patch` is available as `ManoptExamples.artificial_S2_whirl_patch`
- `artificial_SAR_image` is available as `ManoptExamples.artificial_SAR_image`
- `artificial_SPD_image` is available as `ManoptExamples.artificial_SPD_image`
- `artificial_SPD_image2` is available as `ManoptExamples.artificial_SPD_image`
- `adjoint_differential_forward_logs` is available as `ManoptExamples.adjoint_differential_forward_logs`
- `adjoint_differential_bezier_control` is available as `ManoptExamples.adjoint_differential_Bezier_control_points`
- `BezierSegment` is available as `ManoptExamples.BezierSegment`
- `cost_acceleration_bezier` is available as `ManoptExamples.acceleration_Bezier`
- `cost_L2_acceleration_bezier` is available as `ManoptExamples.L2_acceleration_Bezier`
- `costIntrICTV12` is available as `ManoptExamples.Intrinsic_infimal_convolution_TV12`
- `costL2TV` is available as `ManoptExamples.L2_Total_Variation`
- `costL2TV12` is available as `ManoptExamples.L2_Total_Variation_1_2`
- `costL2TV2` is available as `ManoptExamples.L2_second_order_Total_Variation`
- `costTV` is available as `ManoptExamples.Total_Variation`
- `costTV2` is available as `ManoptExamples.second_order_Total_Variation`
- `de_casteljau` is available as `ManoptExamples.de_Casteljau`
- `differential_forward_logs` is available as `ManoptExamples.differential_forward_logs`
- `differential_bezier_control` is available as `ManoptExamples.differential_Bezier_control_points`
- `forward_logs` is available as `ManoptExamples.forward_logs`
- `get_bezier_degree` is available as `ManoptExamples.get_Bezier_degree`
- `get_bezier_degrees` is available as `ManoptExamples.get_Bezier_degrees`
- `get_bezier_inner_points` is available as `ManoptExamples.get_Bezier_inner_points`
- `get_bezier_junction_tangent_vectors` is available as `ManoptExamples.get_Bezier_junction_tangent_vectors`
- `get_bezier_junctions` is available as `ManoptExamples.get_Bezier_junctions`
- `get_bezier_points` is available as `ManoptExamples.get_Bezier_points`
- `get_bezier_segments` is available as `ManoptExamples.get_Bezier_segments`
- `grad_acceleration_bezier` is available as `ManoptExamples.grad_acceleration_Bezier`
- `grad_L2_acceleration_bezier` is available as `ManoptExamples.grad_L2_acceleration_Bezier`
- `grad_Intrinsic_infimal_convolution_TV12` is available as `ManoptExamples.Intrinsic_infimal_convolution_TV12`
- `grad_TV` is available as `ManoptExamples.grad_Total_Variation`
- `project_collaborative_TV` is available as `ManoptExamples.project_collaborative_TV`
- `prox_parallel_TV` is available as `ManoptExamples.prox_parallel_TV`
- `grad_TV2` is available as `ManoptExamples.grad_second_order_Total_Variation`
- `prox_TV` is available as `ManoptExamples.prox_Total_Variation`
- `prox_TV2` is available as `ManoptExamples.prox_second_order_Total_Variation`
- vale.sh as a CI to keep track of a consistent documentation
- add `Manopt.JuMP_Optimizer` implementing JuMP's solver interface; see the sketch below
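A sketch of the JuMP interface (the variable and objective are illustrative):

```julia
using JuMP, Manopt, Manifolds

model = Model(Manopt.JuMP_Optimizer)
# a point on the 2-sphere as decision variable
@variable(model, p[1:3] in Sphere(2), start = 1 / sqrt(3))
@objective(model, Min, sum((p .- [1.0, 0.0, 0.0]) .^ 2))
optimize!(model)
value.(p)
```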
- `trust_regions` is now more flexible, and the sub solver (Steihaug-Toint tCG by default) can now be exchanged.
- `adaptive_regularization_with_cubics` is now more flexible as well, where it previously was tied a bit too tightly to the Lanczos solver.
- Unified documentation notation and bumped dependencies to use DocumenterCitations 1.3
- add a `--help` argument to `docs/make.jl` to document all available command line arguments
- add a `--exclude-tutorials` argument to `docs/make.jl`. This way, when Quarto is not available on a computer, the docs can still be built, with the tutorials not being added to the menu, such that Documenter does not expect them to exist.
- Bump dependencies to `ManifoldsBase.jl` 0.15 and `Manifolds.jl` 0.9
- move the ARC CG subsolver to the main package, since `TangentSpace` is now already available from `ManifoldsBase`.
- also use the pair of a retraction and the inverse retraction (see last update) to perform the relaxation within the Douglas-Rachford algorithm.
- avoid allocations when calling `get_jacobian!` within the Levenberg-Marquardt algorithm.
- Fix a lot of typos in the documentation
- add more of the Riemannian Levenberg-Marquardt algorithm's parameters as keywords, so they can be changed on call
- generalize the internal reflection of Douglas-Rachford, such that it also works with an arbitrary pair of a reflection and an inverse reflection.
- Fixed a bug that caused non-matrix points and vectors to fail when working with approximate Hessians
- The access to functions of the objective is now unified and encapsulated in proper `get_` functions.
- a `ManifoldEuclideanGradientObjective` to allow the cost, gradient, Hessian, and other first- or second-derivative based elements to be Euclidean and converted when needed.
- a keyword `objective_type=:Euclidean` for all solvers, that specifies that an objective shall be created of the new type; see the sketch below
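A minimal sketch, assuming an illustrative cost with a plain Euclidean gradient:

```julia
using Manopt, Manifolds, LinearAlgebra

M = Sphere(2)
A = Diagonal([3.0, 2.0, 1.0])
f(M, p) = p' * A * p
∇f(M, p) = 2 * A * p   # the Euclidean (embedding) gradient

# the keyword requests the conversion to the Riemannian gradient internally
q = gradient_descent(M, f, ∇f; objective_type=:Euclidean)
```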
- `ConstantStepsize` and `DecreasingStepsize` now have an additional field `type::Symbol` to assess whether the step size should be relatively (to the gradient norm) or absolutely constant.
- The adaptive regularization with cubics (ARC) solver.
- A `:Subsolver` keyword in the `debug=` keyword argument, that activates the new `DebugWhenActive` to de/activate subsolver debug from the main solver's `DebugEvery`.
- References in the documentation are now rendered using DocumenterCitations.jl
- Asymptote export now also accepts a size in pixels instead of its default `4cm` size, and `render` can be deactivated by setting it to `nothing`.
- fixed a bug where `cyclic_proximal_point` did not work with decorated objectives.
- `max_stepsize` was specialized for `FixedRankManifold` to follow Matlab Manopt.
- The `AdaptiveWNGrad` stepsize is available as a new stepsize functor.
- Levenberg-Marquardt now possesses its parameters `initial_residual_values` and `initial_jacobian_f` also as keyword arguments, such that their default initialisations can be adapted, if necessary
- simplify usage of gradient descent as sub solver in the DoC solvers.
- add a `get_state` function
- document `indicates_convergence`.
- Fixes an allocation bug in the difference of convex algorithm
- another workflow that deletes old PR renderings from the docs to keep them smaller in overall size.
- bump dependencies since the extension between Manifolds.jl and ManifoldDiff.jl has been moved to Manifolds.jl
- More details on the Count and Cache tutorial
- loosen constraints slightly
- A tutorial on how to implement a solver
- A `ManifoldCacheObjective` as a decorator for objectives to cache results of calls, using LRU caches as a weak dependency. For now this works with cost and gradient evaluations
- A `ManifoldCountObjective` as a decorator for objectives to enable counting of calls to, for example, the cost and the gradient
- adds a `return_objective` keyword that switches the return of a solver to a tuple `(o, s)`, where `o` is the (possibly decorated) objective and `s` is the “classical” solver return (state or point). This way the counted values can be accessed and the cache can be reused; see the sketch after this list.
- change solvers on the mid level (of the form `solver(M, objective, p)`) to also accept decorated objectives
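A sketch combining the decorators, following the Count and Cache tutorial (cost and gradient are illustrative; the LRU cache requires `LRUCache.jl` to be loaded):

```julia
using Manopt, Manifolds, LRUCache

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0])

obj, q = gradient_descent(M, f, grad_f, rand(M);
    count=[:Cost, :Gradient],              # wrap in a count objective
    cache=(:LRU, [:Cost, :Gradient], 25),  # and an LRU cache of size 25
    return_objective=true,                 # return (objective, result)
)
get_count(obj, :Gradient)                  # inspect the recorded counts
```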
- Switch all Requires weak dependencies to actual weak dependencies starting in Julia 1.9
- the default tolerances for the numerical `check_` functions were loosened a bit, such that `check_vector` can also be changed in its tolerances.
- the sub solver for `trust_regions` is now customizable and can be exchanged.
- slightly changed the definitions of the solver states for ALM and EPM to be type stable
- A function `check_Hessian(M, f, grad_f, Hess_f)` to numerically verify the (Riemannian) Hessian of a function `f`
- A new interface of the form `alg(M, objective, p0)` to allow reusing objectives without creating `AbstractManoptSolverState`s and calling `solve!`. This especially still allows for any decoration of the objective and/or the state using `debug=` or `record=`.
- All solvers now have the initial point `p` as an optional parameter, making it more accessible to first-time users; `gradient_descent(M, f, grad_f)` is equivalent to `gradient_descent(M, f, grad_f, rand(M))`
- Unified the framework to work on manifolds where points are represented by numbers for several solvers
- the inner products used in `truncated_conjugate_gradient_descent` now also work thoroughly on complex matrix manifolds
- `trust_regions(M, f, grad_f, hess_f, p)` now has the Hessian `hess_f` as well as the start point `p` as optional parameters, and approximates the Hessian otherwise. `trust_regions!(M, f, grad_f, hess_f, p)` has the Hessian as an optional parameter and approximates it otherwise. See the sketch below.
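A sketch of the now-optional arguments (cost and gradient are illustrative):

```julia
using Manopt, Manifolds

M = Sphere(2)
f(M, p) = p[1]^2
grad_f(M, p) = project(M, p, [2 * p[1], 0.0, 0.0])

q1 = gradient_descent(M, f, grad_f)  # start point defaults to rand(M)
q2 = trust_regions(M, f, grad_f)     # Hessian omitted: approximated internally
```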
- support for `ManifoldsBase.jl` 0.13.x, since with the definition of `copy(M, p::Number)` in 0.14.4, that one is used instead of defining it ourselves.
- `particle_swarm` now uses much more in-place operations
- `particle_swarm` still used quite a few `deepcopy(p)` commands, which were replaced by `copy(M, p)`
- `get_message` to obtain messages from sub steps of a solver
- `DebugMessages` to display the new messages in debug
- safeguards in the Armijo line search and L-BFGS against numerical over- and underflow that report in messages
- Introduce the Difference of Convex Algorithm (DCA) `difference_of_convex_algorithm(M, f, g, ∂h, p0)`
- Introduce the Difference of Convex Proximal Point Algorithm (DCPPA) `difference_of_convex_proximal_point(M, prox_g, grad_h, p0)`
- Introduce a `StopWhenGradientChangeLess` stopping criterion
- adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in `Manifolds.jl` (part II)
- adapt tolerances in tests to the speed/accuracy optimized distance on the sphere in `Manifolds.jl`
- introduce a wrapper that allows line searches from LineSearches.jl to be used within Manopt.jl; introduce the manoptjl.org/stable/extensions/ page to explain the details.
- a `status_summary` that displays the main parameters within several structures of Manopt, most prominently a solver state
- Improved storage performance by introducing separate named tuples for points and vectors
- changed the `show` methods of `AbstractManoptSolverState`s to display their `status_summary`
- Move tutorials to be rendered with Quarto into the documentation.
- Bump the `[compat]` entry of ManifoldDiff to also include 0.3
- Fixed that a few stopping criteria indicated to stop even before the algorithm started.
- the new default functions that include `p` are used where possible
- a first step towards faster storage handling
- Introduce `ConjugateGradientBealeRestart` to allow CG restarts using Beale’s rule
- fix a typo in `HestenesStiefelCoefficient`
- the CG coefficient `β` can now be complex
- fix a bug in `grad_distance`
- the usage of `inner` in line search methods, such that they work well with complex manifolds as well
- a `max_stepsize` per manifold to avoid leaving the injectivity radius, which it also defaults to
- Dependency on `ManifoldDiff.jl` and a start of moving actual derivatives, differentials, and gradients there.
- `AbstractManifoldObjective` to store the objective within the `AbstractManoptProblem`
- Introduce a `CostGrad` structure to store a function that computes the cost and gradient within one function.
- started a `changelog.md` to thoroughly keep track of changes
- `AbstractManoptProblem` replaces `Problem`
  - the problem now contains an `AbstractManifoldObjective`
- `AbstractManoptSolverState` replaces `Options`
- `random_point(M)` is replaced by `rand(M)` from `ManifoldsBase.jl`
- `random_tangent(M, p)` is replaced by `rand(M; vector_at=p)`; see the sketch below
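The randomness renames in a short sketch:

```julia
using Manifolds

M = Sphere(2)
p = rand(M)               # formerly random_point(M)
X = rand(M; vector_at=p)  # formerly random_tangent(M, p)
```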