diff --git a/Makefile b/Makefile
index 4383e66f8f..180fd2c55b 100644
--- a/Makefile
+++ b/Makefile
@@ -1,6 +1,7 @@
.PHONY: build docs test
BUILDDIR := $(PWD)
+BUILD_ARGS := # set to nightly to build a nightly release
CHECKDIRS := examples tests src utils notebooks setup.py
PYCHECKGLOBS := 'examples/**/*.py' 'scripts/**/*.py' 'src/**/*.py' 'tests/**/*.py' 'utils/**/*.py' setup.py
DOCDIR := docs
@@ -43,7 +44,7 @@ docs:
# creates wheel file
build:
- python3 setup.py sdist bdist_wheel
+ python3 setup.py sdist bdist_wheel $(BUILD_ARGS)
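+# e.g., run "make build BUILD_ARGS=nightly" to pass nightly through to setup.py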
# clean package
clean:
diff --git a/README.md b/README.md
index f841f4827c..a63c937031 100644
--- a/README.md
+++ b/README.md
@@ -16,14 +16,13 @@ limitations under the License.
# ![icon for DeepSparse](https://raw.githubusercontent.com/neuralmagic/deepsparse/main/docs/source/icon-deepsparse.png) DeepSparse Engine
-### CPU inference engine that delivers unprecedented performance for sparse models
+### Neural network inference engine that delivers GPU-class performance for sparsified models on CPUs
-
-
+
@@ -47,20 +46,29 @@ limitations under the License.
## Overview
-The DeepSparse Engine is a CPU runtime that delivers unprecedented performance by taking advantage of natural sparsity within neural networks to reduce compute required as well as accelerate memory bound workloads. It is focused on model deployment and scaling machine learning pipelines, fitting seamlessly into your existing deployments as an inference backend.
+The DeepSparse Engine is a CPU runtime that delivers GPU-class performance by taking advantage of sparsity within neural networks to reduce the compute required and accelerate memory-bound workloads.
+It is focused on model deployment and scaling machine learning pipelines, fitting seamlessly into your existing deployments as an inference backend.
-This repository includes package APIs along with examples to quickly get started learning about and actually running sparse models.
+This repository includes package APIs along with examples to quickly get started benchmarking and running inference on sparse models.
-### Related Products
+## Sparsification
-- [SparseZoo](https://github.com/neuralmagic/sparsezoo):
- Neural network model repository for highly sparse models and optimization recipes
-- [SparseML](https://github.com/neuralmagic/sparseml):
- Libraries for state-of-the-art deep neural network optimization algorithms,
- enabling simple pipelines integration with a few lines of code
-- [Sparsify](https://github.com/neuralmagic/sparsify):
- Easy-to-use autoML interface to optimize deep neural networks for
- better inference performance and a smaller footprint
+Sparsification is the process of taking a trained deep learning model and removing redundant information from the over-precise and over-parameterized network, resulting in a faster and smaller model.
+Techniques for sparsification are all-encompassing, including everything from inducing sparsity using [pruning](https://neuralmagic.com/blog/pruning-overview/) and [quantization](https://arxiv.org/abs/1609.07061) to enabling naturally occurring sparsity using [activation sparsity](http://proceedings.mlr.press/v119/kurtz20a.html) or [winograd/FFT](https://arxiv.org/abs/1509.09308).
+When implemented correctly, these techniques result in significantly more performant and smaller models with little to no effect on the baseline metrics.
+For example, pruning plus quantization can give noticeable improvements in performance while recovering to nearly the same baseline accuracy.
+
+The Deep Sparse product suite builds on top of sparsification, enabling you to easily apply the techniques to your datasets and models using recipe-driven approaches.
+Recipes encode the directions for how to sparsify a model into a simple, easily editable format.
+- Download a sparsification recipe and sparsified model from the [SparseZoo](https://github.com/neuralmagic/sparsezoo).
+- Alternatively, create a recipe for your model using [Sparsify](https://github.com/neuralmagic/sparsify).
+- Apply your recipe with only a few lines of code using [SparseML](https://github.com/neuralmagic/sparseml).
+- Finally, for GPU-level performance on CPUs, deploy your sparse-quantized model with the [DeepSparse Engine](https://github.com/neuralmagic/deepsparse), as sketched below.
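+
+As a minimal sketch of that final step (reusing the sparse-quantized ResNet-50 stub from the Quick Tour below), deployment amounts to compiling the model into an optimized executable:
+
+```python
+from deepsparse import compile_model
+
+# compile a sparsified SparseZoo model into an optimized executable for this machine
+engine = compile_model(
+    model="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate",
+    batch_size=1,
+)
+```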
+
+
+**Full Deep Sparse product flow:**
+
+
## Compatibility
@@ -68,21 +76,22 @@ The DeepSparse Engine ingests models in the [ONNX](https://onnx.ai/) format, all
## Quick Tour
-To expedite inference and benchmarking on real models, we include the `sparsezoo` package. [SparseZoo](https://github.com/neuralmagic/sparsezoo) hosts inference optimized models, trained on repeatable optimization recipes using state-of-the-art techniques from [SparseML](https://github.com/neuralmagic/sparseml).
+To expedite inference and benchmarking on real models, we include the `sparsezoo` package. [SparseZoo](https://github.com/neuralmagic/sparsezoo) hosts inference-optimized models, trained on repeatable sparsification recipes using state-of-the-art techniques from [SparseML](https://github.com/neuralmagic/sparseml).
### Quickstart with SparseZoo ONNX Models
-**MobileNetV1 Dense**
+**ResNet-50 Dense**
-Here is how to quickly perform inference with DeepSparse Engine on a pre-trained dense MobileNetV1 from SparseZoo.
+Here is how to quickly perform inference with DeepSparse Engine on a pre-trained dense ResNet-50 from SparseZoo.
```python
from deepsparse import compile_model
from sparsezoo.models import classification
+
batch_size = 64
# Download model and compile as optimized executable for your machine
-model = classification.mobilenet_v1()
+model = classification.resnet_50()
engine = compile_model(model, batch_size=batch_size)
# Fetch sample input and predict output using engine
@@ -90,44 +99,68 @@ inputs = model.data_inputs.sample_batch(batch_size=batch_size)
outputs, inference_time = engine.timed_run(inputs)
```
-**MobileNetV1 Optimized**
+**ResNet-50 Sparsified**
When exploring available optimized models, you can use the `Zoo.search_optimized_models` utility to find models that share a base.
-Let us try this on the dense MobileNetV1 to see what is available.
+Try this on the dense ResNet-50 to see what is available:
```python
from sparsezoo import Zoo
from sparsezoo.models import classification
-print(Zoo.search_optimized_models(classification.mobilenet_v1()))
+
+model = classification.resnet_50()
+print(Zoo.search_optimized_models(model))
```
Output:
```shell
-[Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/base-none),
- Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/pruned-conservative),
- Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/pruned-moderate),
- Model(stub=cv/classification/mobilenet_v1-1.0/pytorch/sparseml/imagenet/pruned_quant-moderate)]
+[
+ Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none),
+ Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-conservative),
+ Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned-moderate),
+ Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate),
+ Model(stub=cv/classification/resnet_v1-50/pytorch/sparseml/imagenet-augmented/pruned_quant-aggressive)
+]
```
-Great. We can see there are two pruned versions targeting FP32, `conservative` at 100% and `moderate` at >= 99% of baseline accuracy. There is also a `pruned_quant` variant targetting INT8.
+We can see there are two pruned versions targeting FP32 and two pruned, quantized versions targeting INT8.
+The `conservative`, `moderate`, and `aggressive` tags recover to 100%, >=99%, and <99% of baseline accuracy, respectively.
-Let's say you want to evaluate best performance on FP32 and are okay with a small drop in accuracy, so we can choose `pruned-moderate` over `pruned-conservative`.
+For a version of ResNet-50 that recovers close to the baseline and is very performant, choose the `pruned_quant-moderate` model.
+This model will run [nearly 7x faster](https://neuralmagic.com/blog/benchmark-resnet50-with-deepsparse) than the baseline model on a compatible CPU (with the VNNI instruction set enabled).
+For hardware compatibility, see the Hardware Support section.
```python
from deepsparse import compile_model
-from sparsezoo.models import classification
-batch_size = 64
-
-model = classification.mobilenet_v1(optim_name="pruned", optim_category="moderate")
-engine = compile_model(model, batch_size=batch_size)
+import numpy
-inputs = model.data_inputs.sample_batch(batch_size=batch_size)
-outputs, inference_time = engine.timed_run(inputs)
+batch_size = 64
+sample_inputs = [numpy.random.randn(batch_size, 3, 224, 224).astype(numpy.float32)]
+
+# run baseline benchmarking
+engine_base = compile_model(
+ model="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/base-none",
+ batch_size=batch_size,
+)
+benchmarks_base = engine_base.benchmark(sample_inputs)
+print(benchmarks_base)
+
+# run sparse benchmarking
+engine_sparse = compile_model(
+ model="zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned_quant-moderate",
+ batch_size=batch_size,
+)
+if not engine_sparse.cpu_vnni:
+ print("WARNING: VNNI instructions not detected, quantization speedup not well supported")
+benchmarks_sparse = engine_sparse.benchmark(sample_inputs)
+print(benchmarks_sparse)
+
+print(f"Speedup: {benchmarks_sparse.items_per_second / benchmarks_base.items_per_second:.2f}x")
```
-### Quickstart with custom ONNX models
+### Quickstart with Custom ONNX Models
We accept ONNX files for custom models, too. Simply plug in your model to compare performance with other solutions.
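+
+For example, here is a minimal sketch of this path. The `model.onnx` file name and the torchvision export are illustrative assumptions, not part of this repository:
+
+```python
+import numpy
+import torch
+import torchvision
+
+from deepsparse import compile_model
+
+batch_size = 1
+
+# export a standard PyTorch model to ONNX (any exportable model works)
+torch_model = torchvision.models.resnet50(pretrained=True).eval()
+dummy_input = torch.randn(batch_size, 3, 224, 224)
+torch.onnx.export(torch_model, dummy_input, "model.onnx")
+
+# compile the ONNX file for this machine and run inference
+engine = compile_model("model.onnx", batch_size=batch_size)
+inputs = [numpy.random.randn(batch_size, 3, 224, 224).astype(numpy.float32)]
+outputs = engine.run(inputs)
+```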
diff --git a/docs/source/conf.py b/docs/source/conf.py
index 7ae82eb6cc..ad5a4bb2d6 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -84,7 +84,7 @@
# a list of builtin themes.
#
html_theme = "sphinx_rtd_theme"
-html_logo = "icon-engine.png"
+html_logo = "icon-deepsparse.png"
html_theme_options = {
'analytics_id': 'UA-128364174-1', # Provided by Google in your dashboard
diff --git a/docs/source/icon-engine.png b/docs/source/icon-deepsparse.png
similarity index 100%
rename from docs/source/icon-engine.png
rename to docs/source/icon-deepsparse.png
diff --git a/docs/source/index.rst b/docs/source/index.rst
index d140b6f814..ef78b48a2a 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -17,13 +17,16 @@
DeepSparse |version|
====================
-CPU inference engine that delivers unprecedented performance for sparse models.
+Neural network inference engine that delivers GPU-class performance for sparsified models on CPUs.
.. raw:: html