
Commit

Add installation instructions
calcmogul committed May 9, 2024
1 parent b4dad9e commit 90a8bb0
Showing 1 changed file with 54 additions and 34 deletions.
88 changes: 54 additions & 34 deletions README.md
@@ -88,45 +88,27 @@ Generated by [tools/generate-scalability-results.sh](https://github.com/Sleipnir

Ipopt uses MUMPS by default because MUMPS has a free license. Commercial linear solvers may be much faster.

### How we improved performance

#### Make more decisions at compile time

During problem setup, equality and inequality constraints are encoded as different types, so the appropriate setup behavior can be selected at compile time via operator overloads.
See [benchmark details](https://github.com/SleipnirGroup/Sleipnir/?tab=readme-ov-file#benchmark-details) for more.

#### Reuse autodiff computation results that are still valid (aka caching)

The autodiff library automatically records the linearity of every node in the computational graph. Linear functions have constant first derivatives, and quadratic functions have constant second derivatives. The constant derivatives are computed in the initialization phase and reused for all solver iterations. Only nonlinear parts of the computational graph are recomputed during each solver iteration.

For quadratic problems, we compute the Lagrangian Hessian and constraint Jacobians once with no problem structure hints from the user.
## Install

#### Use a performant linear algebra library with fast sparse solvers

[Eigen](https://gitlab.com/libeigen/eigen/) provides these. It also has no required dependencies, which makes cross compilation much easier.

#### Use a pool allocator for autodiff expression nodes

This promotes fast allocation/deallocation and good memory locality.
### C++ library

We could further reduce the solver's high last-level-cache miss rate (~42% on the machine above) by splitting the expression nodes into fields that are commonly iterated together. We used to use a tape, which gave computational graph updates linear access patterns, but tapes are monotonic buffers with no way to reclaim storage.
See the [build instructions](https://github.com/SleipnirGroup/Sleipnir/?tab=readme-ov-file#c-library-1).

### Running the benchmarks
### Python library

Benchmark projects are in the [benchmarks folder](https://github.com/SleipnirGroup/Sleipnir/tree/main/benchmarks). To compile and run them, run the following in the repository root:
```bash
# Install CasADi and the matplotlib, numpy, and scipy pip packages first
cmake -B build -S .
cmake --build build
./tools/generate-scalability-results.sh
pip install sleipnirgroup-jormungandr
```

See the contents of `./tools/generate-scalability-results.sh` for how to run specific benchmarks.

## Examples

See the [examples](https://github.com/SleipnirGroup/Sleipnir/tree/main/examples) and [optimization unit tests](https://github.com/SleipnirGroup/Sleipnir/tree/main/test/optimization).

## Dependencies
## Build

### Dependencies

* C++20 compiler
* On Windows, install [Visual Studio Community 2022](https://visualstudio.microsoft.com/vs/community/) and select the C++ programming language during installation
@@ -147,21 +147,17 @@ See the [examples](https://github.com/SleipnirGroup/Sleipnir/tree/main/examples)

Library dependencies which aren't installed locally will be automatically downloaded and built by CMake.

If [CasADi](https://github.com/casadi/casadi) is installed locally, the benchmark executables will be built.
The benchmark executables require [CasADi](https://github.com/casadi/casadi) to be installed locally.

## Build instructions
### C++ library

On Windows, open a [Developer PowerShell](https://learn.microsoft.com/en-us/visualstudio/ide/reference/command-prompt-powershell?view=vs-2022). On Linux or macOS, open a Bash shell.

Clone the repository.
```bash
# Clone the repository
git clone git@github.com:SleipnirGroup/Sleipnir
cd Sleipnir
```

### C++ library

```bash
# Configure; automatically downloads library dependencies
cmake -B build -S .

@@ -198,7 +176,13 @@ The following build types can be specified via `-DCMAKE_BUILD_TYPE` during CMake

### Python library

On Windows, open a [Developer PowerShell](https://learn.microsoft.com/en-us/visualstudio/ide/reference/command-prompt-powershell?view=vs-2022). On Linux or macOS, open a Bash shell.

```bash
# Clone the repository
git clone git@github.com:SleipnirGroup/Sleipnir
cd Sleipnir

# Setup
pip install --user build

@@ -218,6 +202,42 @@ Passing the `--enable-diagnostics` flag to the test executable enables solver di

Some test problems generate CSV files containing their solutions. These can be plotted with [tools/plot_test_problem_solutions.py](https://github.com/SleipnirGroup/Sleipnir/blob/main/tools/plot_test_problem_solutions.py).

## Benchmark details

### Running the benchmarks

Benchmark projects are in the [benchmarks folder](https://github.com/SleipnirGroup/Sleipnir/tree/main/benchmarks). To compile and run them, run the following in the repository root:
```bash
# Install CasADi and the matplotlib, numpy, and scipy pip packages first
cmake -B build -S .
cmake --build build
./tools/generate-scalability-results.sh
```

See the contents of `./tools/generate-scalability-results.sh` for how to run specific benchmarks.

### How we improved performance

#### Make more decisions at compile time

During problem setup, equality and inequality constraints are encoded as different types, so the appropriate setup behavior can be selected at compile time via operator overloads.
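
As a rough sketch of this technique with hypothetical type names (not necessarily Sleipnir's actual API), comparison operator overloads can return distinct constraint types so that overload resolution picks the right setup path at compile time:

```cpp
#include <utility>
#include <vector>

// Illustrative types only; names don't necessarily match Sleipnir's API.
struct Variable {};

struct EqualityConstraint {
  Variable lhs, rhs;
};

struct InequalityConstraint {
  Variable lhs, rhs;
};

EqualityConstraint operator==(Variable lhs, Variable rhs) {
  return {std::move(lhs), std::move(rhs)};
}

InequalityConstraint operator<=(Variable lhs, Variable rhs) {
  return {std::move(lhs), std::move(rhs)};
}

class Problem {
 public:
  // Overload resolution selects the storage and setup path at compile time;
  // no runtime branching on a "constraint kind" tag is required.
  void SubjectTo(EqualityConstraint constraint) {
    equalityConstraints.push_back(std::move(constraint));
  }
  void SubjectTo(InequalityConstraint constraint) {
    inequalityConstraints.push_back(std::move(constraint));
  }

 private:
  std::vector<EqualityConstraint> equalityConstraints;
  std::vector<InequalityConstraint> inequalityConstraints;
};

int main() {
  Problem problem;
  Variable x, y;
  problem.SubjectTo(x == y);  // dispatches to the equality overload
  problem.SubjectTo(x <= y);  // dispatches to the inequality overload
}
```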

#### Reuse autodiff computation results that are still valid (aka caching)

The autodiff library automatically records the linearity of every node in the computational graph. Linear functions have constant first derivatives, and quadratic functions have constant second derivatives. The constant derivatives are computed in the initialization phase and reused for all solver iterations. Only nonlinear parts of the computational graph are recomputed during each solver iteration.

For quadratic problems, we compute the Lagrangian Hessian and constraint Jacobians once with no problem structure hints from the user.
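
A minimal sketch of the caching idea (not Sleipnir's internals): each node records its linearity, derivatives that are constant under that classification are computed once during initialization, and only nonlinear nodes are re-evaluated each iteration.

```cpp
#include <functional>

// Each autodiff node records its linearity; constant and linear nodes have
// constant first derivatives, so those values are cached after the first
// evaluation and reused on every subsequent solver iteration.
enum class Linearity { kConstant, kLinear, kQuadratic, kNonlinear };

struct Node {
  Linearity linearity = Linearity::kNonlinear;
  std::function<double()> computeFirstDerivative;  // set up by the autodiff graph

  double FirstDerivative() {
    if (linearity == Linearity::kConstant || linearity == Linearity::kLinear) {
      // Computed once during initialization, valid for every iteration.
      if (!firstDerivativeCached) {
        cachedFirstDerivative = computeFirstDerivative();
        firstDerivativeCached = true;
      }
      return cachedFirstDerivative;
    }
    // Nonlinear nodes are recomputed on every call.
    return computeFirstDerivative();
  }

 private:
  double cachedFirstDerivative = 0.0;
  bool firstDerivativeCached = false;
};

int main() {
  Node node;
  node.linearity = Linearity::kLinear;
  node.computeFirstDerivative = [] { return 2.0; };
  double first = node.FirstDerivative();   // computed and cached
  double second = node.FirstDerivative();  // served from the cache
  (void)first;
  (void)second;
}
```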

#### Use a performant linear algebra library with fast sparse solvers

[Eigen](https://gitlab.com/libeigen/eigen/) provides these. It also has no required dependencies, which makes cross compilation much easier.
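
For example, an interior-point iteration repeatedly solves sparse symmetric linear systems. With Eigen, that kind of solve can look like the following; this is a generic illustration, not necessarily the factorization Sleipnir uses internally.

```cpp
#include <iostream>
#include <vector>

#include <Eigen/Dense>
#include <Eigen/SparseCholesky>
#include <Eigen/SparseCore>

int main() {
  // Build a small sparse symmetric positive definite matrix from triplets.
  std::vector<Eigen::Triplet<double>> triplets{
      {0, 0, 4.0}, {1, 1, 3.0}, {2, 2, 2.0}, {0, 1, 1.0}, {1, 0, 1.0}};

  Eigen::SparseMatrix<double> A(3, 3);
  A.setFromTriplets(triplets.begin(), triplets.end());

  Eigen::VectorXd b(3);
  b << 1.0, 2.0, 3.0;

  // Factorize once, then solve Ax = b with Eigen's built-in sparse LDLᵀ.
  Eigen::SimplicialLDLT<Eigen::SparseMatrix<double>> solver;
  solver.compute(A);
  if (solver.info() != Eigen::Success) {
    return 1;  // factorization failed
  }

  Eigen::VectorXd x = solver.solve(b);
  std::cout << x.transpose() << "\n";
}
```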

#### Use a pool allocator for autodiff expression nodes

This promotes fast allocation/deallocation and good memory locality.
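
A bare-bones sketch of the idea (Sleipnir's actual allocator differs): nodes are carved out of large contiguous chunks, so allocation is a free-list pop or a bump within the current chunk, and neighboring nodes tend to share cache lines.

```cpp
#include <cstddef>
#include <memory>
#include <new>
#include <utility>
#include <vector>

// Fixed-size pool with a free list, for illustration only.
template <typename T, std::size_t ChunkSize = 1024>
class Pool {
 public:
  template <typename... Args>
  T* Allocate(Args&&... args) {
    if (freeList_ != nullptr) {
      // Reuse a previously freed slot.
      Slot* slot = freeList_;
      freeList_ = slot->next;
      return ::new (slot->storage) T(std::forward<Args>(args)...);
    }
    if (used_ == ChunkSize) {
      // Current chunk is full; grab another contiguous block of slots.
      chunks_.push_back(std::make_unique<Slot[]>(ChunkSize));
      used_ = 0;
    }
    return ::new (chunks_.back()[used_++].storage) T(std::forward<Args>(args)...);
  }

  void Deallocate(T* ptr) {
    ptr->~T();
    Slot* slot = reinterpret_cast<Slot*>(ptr);
    slot->next = freeList_;
    freeList_ = slot;
  }

 private:
  union Slot {
    Slot* next;
    alignas(T) unsigned char storage[sizeof(T)];
  };

  std::vector<std::unique_ptr<Slot[]>> chunks_;
  std::size_t used_ = ChunkSize;
  Slot* freeList_ = nullptr;
};

// Stand-in for an autodiff expression node.
struct Expression {
  double value = 0.0;
  double adjoint = 0.0;
};

int main() {
  Pool<Expression> pool;
  Expression* a = pool.Allocate();
  Expression* b = pool.Allocate();
  pool.Deallocate(a);
  Expression* c = pool.Allocate();  // reuses a's slot
  pool.Deallocate(b);
  pool.Deallocate(c);
}
```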

We could further reduce the solver's high last-level-cache miss rate (~42% on the machine above) by splitting the expression nodes into fields that are commonly iterated together. We used to use a tape, which gave computational graph updates linear access patterns, but tapes are monotonic buffers with no way to reclaim storage.
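
The struct-of-arrays layout alluded to above could look roughly like this (illustrative field names, not Sleipnir code): fields that a pass iterates together live in parallel arrays, so a sweep over, say, adjoints streams through one dense array instead of striding over unrelated fields.

```cpp
#include <cstddef>
#include <vector>

// Array-of-structs layout: each node is one wide record, so iterating a single
// field still pulls every other field through the cache.
struct ExpressionAoS {
  double value;
  double adjoint;
  int type;
};

// Struct-of-arrays layout: commonly co-iterated fields are stored contiguously.
struct ExpressionGraphSoA {
  std::vector<double> value;
  std::vector<double> adjoint;
  std::vector<int> type;

  std::size_t size() const { return value.size(); }
};

// A pass that only touches adjoints streams through one dense array.
void ZeroAdjoints(ExpressionGraphSoA& graph) {
  for (std::size_t i = 0; i < graph.size(); ++i) {
    graph.adjoint[i] = 0.0;
  }
}

int main() {
  ExpressionGraphSoA graph;
  graph.value.assign(1000, 1.0);
  graph.adjoint.assign(1000, 0.5);
  graph.type.assign(1000, 0);
  ZeroAdjoints(graph);
}
```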

## Branding

### Logo
