
A Unified PyTorch Optimizer for Numerical Optimization


Why MagiOPT?

  • Unified framework for both unconstrained and constrained optimization
  • Efficient and powered by PyTorch's automatic differentiation engine
  • User-friendly and easily modifiable
  • Visualization of both curve (or surface) and contour plots

Installation

$ git clone ...
$ cd MagiOPT

How to use

  • Unconstrained

import MagiOPT as optim

def func(x):
    ...

optimizer = optim.SD(func) # Steepest Descent
x = optimizer.step(x0)     # On-the-fly
optimizer.plot()           # Visualize
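
For instance, a minimal runnable sketch: the Rosenbrock objective and the starting point below are illustrative choices, not part of the repository.

import MagiOPT as optim

# Illustrative objective: the 2D Rosenbrock function, written with
# PyTorch-compatible operations as the Reminder section requires
def func(x):
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

optimizer = optim.SD(func)       # Steepest Descent
x = optimizer.step([-1.2, 1.0])  # input may be a list, NumPy array, or tensor
optimizer.plot()                 # visualize the iterates
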
  • Constrained

import MagiOPT as optim

def objective(x):
    ...

def constr1(x):
    ...

def constr2(x):
    ...

...

optimizer = optim.Penalty(objective,
                          sigma,
                          (constr1, '<='),
                          (constr2, '>='),
                          plot=True)        # Penalty method
optimizer.BFGS()                            # Inner optimizer
x = optimizer.step(x0)                      # On-the-fly
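
As a concrete sketch, assuming a constraint pair (g, '<=') is read as g(x) <= 0 (and analogously for '>='); the objective, constraints, sigma value, and starting point are illustrative assumptions:

import MagiOPT as optim

# Illustrative problem (assumed for demonstration):
#   minimize (x0 - 2)^2 + (x1 - 1)^2
#   subject to x0 + x1 - 2 <= 0 and x0 >= 0
def objective(x):
    return (x[0] - 2)**2 + (x[1] - 1)**2

def constr1(x):
    return x[0] + x[1] - 2   # paired with '<='

def constr2(x):
    return x[0]              # paired with '>='

optimizer = optim.Penalty(objective,
                          10.0,             # sigma: penalty coefficient (assumed value)
                          (constr1, '<='),
                          (constr2, '>='),
                          plot=True)
optimizer.BFGS()                            # BFGS as the inner optimizer
x = optimizer.step([0.0, 0.0])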

Supported Optimizers

Unconstrained                Constrained
Steepest Descent             Penalty Method
Amortized Newton Method      Log-Barrier Method
SR1                          Inverse-Barrier Method
DFP                          Augmented Lagrangian Method
BFGS
Broyden
FR
PRP
CG for Quadratic Function
CG for Linear Equation
BB1
BB2
Gauss-Newton
LMF
Dogleg
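
The unconstrained optimizers appear to follow the same construct-then-step pattern as optim.SD above; a minimal sketch, assuming the other methods share that constructor signature (the quadratic objective is illustrative):

import MagiOPT as optim

def func(x):
    return (x[0] - 1)**2 + (x[1] + 1)**2  # illustrative quadratic

# Assumption: swapping methods only means changing the constructor,
# mirroring optim.SD(func) from the usage example above
optimizer = optim.BFGS(func)
x = optimizer.step([0.0, 0.0])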

Visualization

For an unconstrained optimizer, a single line of code

optimizer.plot()

visualizes the 2D curve together with the iterated sequence, or, for two-dimensional inputs, the 3D surface and its contour along with the iterated sequence.

For a constrained optimizer, constructing it with

optimizer = optim.Penalty(..., plot=True)

visualizes the function and the iterated sequence of each inner iteration, again as a 3D surface and its contour.

Reminder

  • The majority of algorithms are sensitive to the initial point; choosing an appropriate starting point can save significant effort.
  • In ill-conditioned situations, constrained optimizers may require trial and error.
  • The Barzilai-Borwein method is unstable for non-quadratic problems; however, you can still infer the optimization path through intermediate visualizations.
  • You can easily extract the optimization sequence via the optimizer's sequence attribute (see the sketch after this list):
    optimizer.sequence
  • Your function should be supported by PyTorch operations; however, the input doesn't have to be. It can be a NumPy array, PyTorch tensor, or even a list.
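
A minimal sketch of inspecting the iterates, assuming sequence is an ordered collection of visited points (the objective and starting point are illustrative):

import MagiOPT as optim

def func(x):
    return (x[0] - 1)**2 + x[1]**2  # illustrative objective

optimizer = optim.SD(func)
x = optimizer.step([0.0, 0.0])

# Assumption: optimizer.sequence holds the iterates in order
for i, point in enumerate(optimizer.sequence):
    print(i, point)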

Requirements

  • Python 3.7 or above
  • PyTorch