Merge pull request #36 from zapatacomputing/dev

Update main

mstechly authored Feb 8, 2022
2 parents 7b3c413 + d45161d commit 76a3f98
Showing 40 changed files with 1,958 additions and 1,201 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/style.yml
@@ -25,7 +25,7 @@ jobs:
- name: Install dependencies
run: |
python3 -m pip install --upgrade pip
- pip install .
+ python3 -m pip install '.[dev]'
- name: Python Code Quality and Lint
# You may pin to the exact commit or the version.
17 changes: 15 additions & 2 deletions README.md
@@ -15,6 +15,8 @@ In [this notebook](https://github.com/zapatacomputing/orqviz/blob/main/docs/exam
We recently published a paper on arXiv where we review the tools available with `orqviz`:\
[ORQVIZ: Visualizing High-Dimensional Landscapes in Variational Quantum Algorithms](https://arxiv.org/abs/2111.04695)

Find a brief overview of the visualization techniques on [YouTube](https://www.youtube.com/watch?v=_3x4NI6PcH4)!

## Installation

You can install our package using the following command:
@@ -57,6 +59,17 @@ This code results in the following plot:

![Image](docs/example_plot.png)

## FAQ

**What are the expected type and shape for the parameters?**\
Parameters should be a `numpy.ndarray` containing real numbers. In recent releases, the parameter array can have any shape that `numpy` supports, i.e., sizes must be consistent along each dimension. Up to version `0.1.1`, the parameter array had to be one-dimensional.
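
For example, both of the following are valid parameter arrays under the relaxed shape rules (a minimal sketch; the random initialization is only for illustration):

```python
import numpy as np

flat_params = np.random.uniform(-np.pi, np.pi, size=8)  # 1D, required up to version 0.1.1
shaped_params = np.random.uniform(-np.pi, np.pi, size=(2, 4))  # ND, allowed in newer releases
```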

**What is the format of the `loss_function` that most `orqviz` methods expect?**\
We define a `loss_function` as a function which receives only the parameters of the model and returns a floating-point (real) number. That value could, for example, be the cost function of an optimization problem, the prediction of a classifier, or the fidelity with respect to a fixed quantum state. All computation needed to arrive at that value must happen inside your function. Check out the above code as a minimal example.
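
A minimal sketch of a conforming function (the toy landscape is ours, chosen only for illustration):

```python
import numpy as np

def loss_function(params: np.ndarray) -> float:
    # Any real-valued function of the parameters qualifies; a quantum
    # circuit's cost function would be evaluated here instead.
    return float(np.sum(np.cos(params)) ** 2)
```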

**What can I do if my loss function requires additional arguments?**\
In that case, wrap your function in another function so that the result again receives only the parameters of the model. We built a wrapper class called `LossFunctionWrapper` that you can import from `orqviz.loss_function`. It is a thin wrapper with helpful perks, such as measuring the average evaluation time of a single loss function call and the total number of calls.
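
One way to perform such wrapping with the standard library (a sketch; we use `functools.partial` here because `LossFunctionWrapper`'s exact call signature is not shown above, and the toy loss is only for illustration):

```python
from functools import partial

import numpy as np

def loss_with_data(params: np.ndarray, data: np.ndarray) -> float:
    return float(np.mean((data - np.sin(params)) ** 2))

data = np.random.random(4)
# Fix the extra argument so that the result receives only the parameters.
loss_function = partial(loss_with_data, data=data)
```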

## Authors

The lead developer of this package is Manuel Rudolph at Zapata Computing.\
@@ -71,9 +84,9 @@ You can also contact us or ask general questions using [GitHub Discussions](http

For more specific code issues, bug fixes, etc. please open a [GitHub issue](https://github.com/zapatacomputing/orqviz/issues) in the `orqviz` repository.

- If you are doing research using `orqviz`, please cite our paper:
+ If you are doing research using `orqviz`, please cite [our `orqviz` paper](https://arxiv.org/abs/2111.04695):

- [ORQVIZ: Visualizing High-Dimensional Landscapes in Variational Quantum Algorithms](https://arxiv.org/abs/2111.04695)
+ > Manuel S. Rudolph, Sukin Sim, Asad Raza, Michał Stęchły, Jarrod R. McClean, Eric R. Anschuetz, Luis Serrano, and Alejandro Perdomo-Ortiz. ORQVIZ: Visualizing High-Dimensional Landscapes in Variational Quantum Algorithms. 2021. arXiv:2111.04695

## How to contribute

5 changes: 5 additions & 0 deletions docs/CONTRIBUTING.md
@@ -16,6 +16,11 @@ Pull requests are a great way to get your ideas into this repository.

When deciding if we merge in a pull request we look at the following things:

### Base branch

For development we use the `dev` branch. When we make a release, we merge `dev` into `main`.
Therefore, if you want to contribute, please branch off `dev` and create your PR with `dev` as the base branch.

### Automatic checks

Keep in mind that we have automatic checks configured for this project. We won't merge your PR unless it:
1,293 changes: 690 additions & 603 deletions docs/examples/advanced_example_notebook.ipynb

Large diffs are not rendered by default.

19 changes: 14 additions & 5 deletions docs/examples/gradient_descent_optimizer.py
@@ -2,22 +2,31 @@

import numpy as np

from orqviz.aliases import (
ArrayOfParameterVectors,
FullGradientFunction,
LossFunction,
ParameterVector,
)
from orqviz.gradients import calculate_full_gradient


def gradient_descent_optimizer(
- init_params: np.ndarray,
- loss_function: Callable,
+ init_params: ParameterVector,
+ loss_function: LossFunction,
n_iters: int,
learning_rate: float = 0.1,
- full_gradient_function: Optional[Callable] = None,
+ full_gradient_function: FullGradientFunction = None,
eval_loss_during_training: bool = True,
- ) -> Tuple[np.ndarray, np.ndarray]:
+ ) -> Tuple[ArrayOfParameterVectors, np.ndarray]:
"""Function perform gradient descent optimization on a loss function.
Args:
init_params: Initial parameter vector from which to start the optimization.
- loss_function: Loss function with respect to which the gradient is calculated.
+ loss_function: Function with respect to which the gradient is calculated.
+ It must receive only a numpy.ndarray of parameters, and return
+ a real number. If your function requires more arguments, consider using the
+ 'LossFunctionWrapper' class from 'orqviz.loss_function'.
n_iters: Number of iterations to optimize.
learning_rate: Learning rate for gradient descent. The calculated gradient
is multiplied with this value and then updates the parameter vector.
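
For context, here is a usage sketch of the updated signature. The toy loss is ours, and we assume the two return values are the parameter trajectory and the per-iteration losses, matching the annotated return type:

```python
import numpy as np

# Assumes docs/examples is on the import path.
from gradient_descent_optimizer import gradient_descent_optimizer

def loss_function(params: np.ndarray) -> float:
    return float(np.sum(np.sin(params)))

init_params = np.random.uniform(-np.pi, np.pi, size=4)
trajectory, losses = gradient_descent_optimizer(
    init_params, loss_function, n_iters=50, learning_rate=0.1
)
```
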
278 changes: 166 additions & 112 deletions docs/examples/orqviz_tutorial_cirq.ipynb
100644 → 100755

Large diffs are not rendered by default.

273 changes: 168 additions & 105 deletions docs/examples/orqviz_tutorial_orquestra.ipynb

Large diffs are not rendered by default.

128 changes: 64 additions & 64 deletions docs/examples/orqviz_tutorial_pennylane.ipynb

Large diffs are not rendered by default.

344 changes: 204 additions & 140 deletions docs/examples/orqviz_tutorial_qiskit.ipynb
100644 → 100755

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion setup.cfg
@@ -1,6 +1,6 @@
[metadata]
name = orqviz
- version = 0.1.1
+ version = 0.2.0
description = Python package for visualizing the loss landscapes of Variational Quantum Algorithms
long_description = file: README.md
long_description_content_type = text/markdown; charset=UTF-8
2 changes: 1 addition & 1 deletion src/orqviz/__init__.py
@@ -4,9 +4,9 @@
geometric,
gradients,
hessians,
io,
pca,
plot_utils,
plots,
scans,
utils,
)
18 changes: 15 additions & 3 deletions src/orqviz/aliases.py
@@ -1,3 +1,5 @@
from typing import Callable

import numpy as np

"""
@@ -8,7 +10,17 @@
the only dimension, is always of size number_of_parameters, while the other dimensions
indicate how many of them there are.
"""
- ParameterVector = np.ndarray # 1D array
- ArrayOfParameterVectors = np.ndarray # 2D array
- GridOfParameterVectors = np.ndarray # 3D array
+ ParameterVector = np.ndarray # ND array
+ ArrayOfParameterVectors = np.ndarray # Array of ND arrays
+ GridOfParameterVectors = np.ndarray # Grid of ND arrays
Weights = np.ndarray # 1D vector of floats from 0-1
+ DirectionVector = np.ndarray # ND array with same shape as ParameterVector
+ LossFunction = Callable[
+ [ParameterVector], float
+ ] # Function that can be scanned with orqviz
+ GradientFunction = Callable[
+ [ParameterVector, DirectionVector], float
+ ] # Returns partial derivative of LossFunction wrt DirectionVector
+ FullGradientFunction = Callable[
+ [ParameterVector], np.ndarray
+ ] # Returns all partial derivatives of LossFunction wrt each parameter
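
For context, a sketch of user code that satisfies these aliases (the toy loss and the finite-difference gradient are our own illustrations, not part of the package):

```python
import numpy as np

from orqviz.aliases import LossFunction, ParameterVector

def my_loss(params: ParameterVector) -> float:
    # Conforms to LossFunction: parameters in, real number out.
    return float(np.linalg.norm(params))

def my_full_gradient(params: ParameterVector) -> np.ndarray:
    # Conforms to FullGradientFunction: one partial derivative per
    # parameter, computed here with central finite differences.
    grad = np.zeros_like(params)
    eps = 1e-3
    for idx in np.ndindex(params.shape):
        shift = np.zeros_like(params)
        shift[idx] = eps
        grad[idx] = (my_loss(params + shift) - my_loss(params - shift)) / (2 * eps)
    return grad

loss: LossFunction = my_loss  # type-checks against the alias
```
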
18 changes: 12 additions & 6 deletions src/orqviz/elastic_band/auto_neb.py
@@ -3,16 +3,16 @@
import numpy as np
from scipy.interpolate import interp1d

- from ..aliases import ParameterVector
+ from ..aliases import FullGradientFunction, LossFunction, ParameterVector
from .data_structures import Chain
from .neb import run_NEB


# Nudged-Elastic-Band
def run_AutoNEB(
init_chain: Chain,
- loss_function: Callable[[ParameterVector], float],
- full_gradient_function: Optional[Callable[[ParameterVector], np.ndarray]] = None,
+ loss_function: LossFunction,
+ full_gradient_function: FullGradientFunction = None,
n_cycles: int = 4,
n_iters_per_cycle: int = 10,
max_new_pivots: int = 1,
@@ -39,7 +39,10 @@ def run_AutoNEB(
Args:
init_chain: Initial chain that is optimized with the algorithm.
- loss_function: Loss function that is used to optimize the chain.
+ loss_function: Function that is used to optimize the chain. It must receive
+ only a numpy.ndarray of parameters, and return a real number.
+ If your function requires more arguments, consider using the
+ 'LossFunctionWrapper' class from 'orqviz.loss_function'.
full_gradient_function: Function to calculate the gradient w.r.t.
the loss function for all parameters. Defaults to None.
n_cycles: Number of cycles between which new pivots can be inserted.
@@ -118,7 +121,7 @@ def run_AutoNEB(

def _insert_pivots_to_improve_approximation(
chain: Chain,
- loss_function: Callable[[ParameterVector], float],
+ loss_function: LossFunction,
max_new_pivots: int = 1,
percentage_tol: float = 0.2,
absolute_tol: float = 0.0,
@@ -129,7 +132,10 @@ def _insert_pivots_to_improve_approximation(
Args:
chain: Current Chain
- loss_function: Loss function for the NEB training
+ loss_function: Function for NEB training. It must receive only a
+ numpy.ndarray of parameters, and return a real number.
+ If your function requires more arguments, consider using the
+ 'LossFunctionWrapper' class from 'orqviz.loss_function'.
max_new_pivots: Maximum number of pivots inserted to Chain. Defaults to 1.
percentage_tol: Percentage error threshold to insert new pivots.
Be mindful of the magnitude and sign of typical loss values.
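
For context, a usage sketch of `run_AutoNEB` (the toy loss and the straight-line initial chain are our own; we also assume the function returns the optimized chain):

```python
import numpy as np

from orqviz.elastic_band.auto_neb import run_AutoNEB
from orqviz.elastic_band.data_structures import Chain

def loss_function(params: np.ndarray) -> float:
    return float(np.sum(np.cos(params)))

# A straight-line initial chain of 10 pivots between two parameter vectors.
pivots = np.linspace(np.zeros(4), np.full(4, np.pi), num=10)
init_chain = Chain(pivots)

trained_chain = run_AutoNEB(
    init_chain, loss_function, n_cycles=4, n_iters_per_cycle=10, max_new_pivots=1
)
```
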
21 changes: 14 additions & 7 deletions src/orqviz/elastic_band/data_structures.py
@@ -1,11 +1,12 @@
from __future__ import annotations

- from typing import Callable, NamedTuple
+ from typing import Callable, NamedTuple, Tuple

import numpy as np
from scipy.interpolate import interp1d

- from ..aliases import ArrayOfParameterVectors, Weights
+ from ..aliases import ArrayOfParameterVectors, LossFunction, ParameterVector, Weights
+ from ..geometric import _norm_of_arrayofparametervectors
from ..scans import eval_points_on_path


@@ -21,14 +22,14 @@ class Chain(NamedTuple):
pivots: ArrayOfParameterVectors

def get_weights(self) -> Weights:
- chain_weights = np.linalg.norm(np.diff(self.pivots, axis=0), axis=1)
+ chain_weights = _norm_of_arrayofparametervectors(np.diff(self.pivots, axis=0))
chain_weights /= np.sum(chain_weights)
cum_weights = np.cumsum(chain_weights)
matching_cum_weights = np.insert(cum_weights, 0, 0)
matching_cum_weights[-1] = 1
return matching_cum_weights

- def evaluate_on_pivots(self, loss_function: Callable) -> np.ndarray:
+ def evaluate_on_pivots(self, loss_function: LossFunction) -> np.ndarray:
return eval_points_on_path(self.pivots, loss_function)

@property
@@ -37,7 +38,11 @@ def n_pivots(self) -> int:

@property
def n_params(self) -> int:
- return len(self.pivots[0])
+ return int(np.prod(self.param_shape))
+
+ @property
+ def param_shape(self) -> Tuple[int, ...]:
+ return np.atleast_1d(self.pivots[0]).shape


class ChainPath(NamedTuple):
@@ -65,7 +70,7 @@ def generate_uniform_chain(self, n_points: int) -> Chain:
return self._get_chain_from_weights(weights)

def evaluate_points_on_path(
self, n_points: int, loss_function: Callable, weighted: bool = False
self, n_points: int, loss_function: LossFunction, weighted: bool = False
) -> np.ndarray:
if weighted:
chain = self.generate_chain(n_points)
@@ -74,8 +79,10 @@ def evaluate_points_on_path(
return chain.evaluate_on_pivots(loss_function)

def _get_chain_from_weights(self, weights: Weights) -> Chain:
+ distance_between_pivots = np.diff(self.primary_chain.pivots, axis=0)
+
chain_diff = np.cumsum(
- np.linalg.norm(np.diff(self.primary_chain.pivots, axis=0), axis=1)
+ _norm_of_arrayofparametervectors(distance_between_pivots)
)
chain_diff /= max(chain_diff)
chain_diff = np.insert(chain_diff, 0, 0)
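
For context, a sketch of how the new `param_shape` property interacts with ND parameter arrays (the random pivots are only for illustration):

```python
import numpy as np

from orqviz.elastic_band.data_structures import Chain

# 5 pivots, each a parameter array of shape (2, 3).
chain = Chain(np.random.random((5, 2, 3)))

print(chain.n_pivots)     # 5
print(chain.param_shape)  # (2, 3)
print(chain.n_params)     # 6, the product of param_shape
```
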
38 changes: 27 additions & 11 deletions src/orqviz/elastic_band/neb.py
@@ -2,17 +2,23 @@

import numpy as np

- from ..aliases import ParameterVector, Weights
+ from ..aliases import (
+ DirectionVector,
+ FullGradientFunction,
+ LossFunction,
+ ParameterVector,
+ Weights,
+ )
from ..gradients import calculate_full_gradient
from .data_structures import Chain, ChainPath


def run_NEB(
init_chain: Chain,
- loss_function: Callable[[ParameterVector], float],
- full_gradient_function: Optional[Callable[[ParameterVector], np.ndarray]] = None,
+ loss_function: LossFunction,
+ full_gradient_function: FullGradientFunction = None,
n_iters: int = 10,
- eps: float = 0.1,
+ eps: float = 1e-3,
learning_rate: float = 0.1,
stochastic: bool = False,
calibrate_tangential: bool = False,
@@ -29,12 +35,16 @@
Args:
init_chain: Initial chain that is optimized with the algorithm.
- loss_function: Loss function that is used to optimize the chain.
+ loss_function: Function that is used to optimize the chain. It must
+ receive only a numpy.ndarray of parameters, and return a real number.
+ If your function requires more arguments, consider using the
+ 'LossFunctionWrapper' class from 'orqviz.loss_function'.
full_gradient_function: Function to calculate the gradient w.r.t.
the loss function for all parameters. Defaults to None.
n_iters: Number of optimization iterations. Defaults to 10.
eps: Stencil for finite difference gradient if full_gradient_function
- is not provided. Defaults to 0.1.
+ is not provided. For noisy loss functions,
+ we recommend increasing this value. Defaults to 1e-3.
learning_rate: Learning rate (step size) for the gradient descent optimization.
Defaults to 0.1.
stochastic: Flag to indicate whether to perform stochastic gradient descent
@@ -86,16 +96,19 @@ def _full_gradient_function(pars: ParameterVector) -> ParameterVector:

def _get_gradients_on_pivots(
chain: Chain,
- loss_function: Callable[[ParameterVector], float],
- full_gradient_function: Callable[[ParameterVector], np.ndarray],
+ loss_function: LossFunction,
+ full_gradient_function: FullGradientFunction,
calibrate_tangential: bool = False,
) -> np.ndarray:
"""Calculates gradient for every pivot on the chain w.r.t. the loss function
using the gradient function.
Args:
chain: Chain to calculate the gradients on.
- loss_function: Loss function for which to calculate the gradient.
+ loss_function: Function that is used to optimize the chain. It must receive
+ only a numpy.ndarray of parameters, and return a real number.
+ If your function requires more arguments, consider using the
+ 'LossFunctionWrapper' class from 'orqviz.loss_function'.
full_gradient_function: Function to calculate the gradient w.r.t.
the loss function for all parameters.
calibrate_tangential: Flag to indicate whether next neighbor for finding
@@ -105,7 +118,7 @@ def _get_gradients_on_pivots(

# We initialize with zeros, as we always want first and last gradient
# to be equal to 0.
- gradients_on_pivots = np.zeros(shape=(chain.n_pivots, chain.n_params))
+ gradients_on_pivots = np.zeros(shape=(chain.n_pivots, *chain.param_shape))

for ii in range(1, chain.n_pivots - 1):
before = chain.pivots[ii - 1]
@@ -118,7 +131,10 @@ def _get_gradients_on_pivots(
if calibrate_tangential and loss_function(after) > loss_function(before):
tan = after - this
tan /= np.linalg.norm(tan)
- tangential_grad = np.dot(full_grad, tan) * tan
+ ax_indices = tuple(range(len(full_grad.shape)))
+ tangential_grad = (
+ np.tensordot(full_grad, tan, axes=(ax_indices, ax_indices)) * tan
+ )
# save update
gradients_on_pivots[ii] = full_grad - tangential_grad

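For context, a minimal sketch of why the `np.tensordot` call generalizes the previous `np.dot`: contracting over all axes equals the dot product of the flattened arrays, so the tangential projection now also works for ND parameter arrays (the random arrays are only for illustration):

```python
import numpy as np

full_grad = np.random.random((2, 3))
tan = np.random.random((2, 3))
tan /= np.linalg.norm(tan)  # normalize the tangent direction

ax_indices = tuple(range(full_grad.ndim))
overlap = np.tensordot(full_grad, tan, axes=(ax_indices, ax_indices))

# Full contraction over all axes == dot product of the flattened arrays.
assert np.isclose(overlap, np.dot(full_grad.ravel(), tan.ravel()))

tangential_grad = overlap * tan  # gradient component along the tangent
```
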
9 changes: 6 additions & 3 deletions src/orqviz/elastic_band/plots.py
@@ -3,23 +3,26 @@
import matplotlib
import numpy as np

- from ..aliases import ParameterVector
+ from ..aliases import LossFunction, ParameterVector
from ..plot_utils import _check_and_create_fig_ax
from ..scans import eval_points_on_path
from .neb import Chain


def plot_all_chains_losses(
all_chains: List[Chain],
- loss_function: Callable[[ParameterVector], float],
+ loss_function: LossFunction,
ax: Optional[matplotlib.axes.Axes] = None,
**plot_kwargs,
) -> None:
"""Function to plot
Args:
all_chains: List of Chains to evaluate the loss on.
- loss_function: Loss function to evaluate the Chains
+ loss_function: Function to evaluate the chain pivots on. It must receive only a
+ numpy.ndarray of parameters, and return a real number.
+ If your function requires more arguments, consider using the
+ 'LossFunctionWrapper' class from 'orqviz.loss_function'.
ax: Matplotlib axis to plot on. If None, a new axis is created
from the current figure. Defaults to None.
plot_kwargs: kwargs for plotting with matplotlib.pyplot.plot (plt.plot)
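
For context, a usage sketch of `plot_all_chains_losses` (the toy loss and random chains are only for illustration):

```python
import matplotlib.pyplot as plt
import numpy as np

from orqviz.elastic_band.data_structures import Chain
from orqviz.elastic_band.plots import plot_all_chains_losses

def loss_function(params: np.ndarray) -> float:
    return float(np.sum(np.sin(params)))

# Three chains of 8 pivots each, with 4 parameters per pivot.
chains = [Chain(np.random.random((8, 4))) for _ in range(3)]

fig, ax = plt.subplots()
plot_all_chains_losses(chains, loss_function, ax=ax)
plt.show()
```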