Create linkcheck.yml (#1005)
* Create linkcheck.yml

* Update linkcheck.yml

* Update linkcheck.yml

* Fix dead links

* Missed one

* sparseml -> deepsparse

* Change on flag for all push or prs

* Update linkcheck.yml

* Fixup new links

* Ignore server links
mgoin authored Apr 26, 2023
1 parent 8423c8d commit 2b39a73
Showing 10 changed files with 33 additions and 6 deletions.
21 changes: 21 additions & 0 deletions .github/workflows/linkcheck.yml
@@ -0,0 +1,21 @@
```yaml
name: Check Markdown links

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  markdown-link-check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: gaurav-nelson/github-action-markdown-link-check@v1
        with:
          use-quiet-mode: 'yes'
```
2 changes: 1 addition & 1 deletion docs/old/source/scheduler.md
@@ -28,7 +28,7 @@ However, there are circumstances in which more cores does not imply better performance.

An alternative, "multi-stream" scheduler is provided with the software. In cases where parallelism is low, sending multiple requests simultaneously can more adequately saturate the available cores. In other words, if speedup can't be achieved by adding more cores, then perhaps speedup can be achieved by adding more work.

- If increasing core count doesn't decrease latency, that's a strong indicator that parallelism is low in your particular model/batch-size combination. It may be that total throughput can be increased by making more requests simultaneously. Using the [deepsparse.engine.Scheduler API,](https://docs.neuralmagic.com/deepsparse/api/deepsparse.html) the multi-stream scheduler can be selected, and requests made by multiple Python threads will be handled concurrently.
+ If increasing core count doesn't decrease latency, that's a strong indicator that parallelism is low in your particular model/batch-size combination. It may be that total throughput can be increased by making more requests simultaneously. Using the [deepsparse.engine.Scheduler API,](https://docs.neuralmagic.com/archive/deepsparse/api/deepsparse.html) the multi-stream scheduler can be selected, and requests made by multiple Python threads will be handled concurrently.

<img src="https://raw.githubusercontent.com/neuralmagic/deepsparse/main/docs/source/multi-stream.png" alt="multi stream diagram" />
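
As a rough orientation only (this code is not part of the diff), selecting the multi-stream scheduler from Python might look like the sketch below. The `Scheduler.multi_stream` member, the `scheduler` argument to `compile_model`, and calling the compiled engine directly on a list of arrays are assumptions about the API rather than details confirmed by this page.

```python
# Sketch only: the API names below are assumptions, not taken from this diff.
from concurrent.futures import ThreadPoolExecutor

import numpy
from deepsparse import compile_model
from deepsparse.engine import Scheduler

# Compile the model with the multi-stream scheduler (assumed enum member).
engine = compile_model(
    "/path/to/model.onnx",  # hypothetical model path
    batch_size=1,
    scheduler=Scheduler.multi_stream,
)

sample = [numpy.random.rand(1, 3, 224, 224).astype(numpy.float32)]

# Requests issued from several Python threads are handled concurrently
# by the multi-stream scheduler.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(engine, [sample] * 8))
```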

2 changes: 2 additions & 0 deletions docs/use-cases/cv/image-classification.md
@@ -212,7 +212,9 @@ deepsparse.server \
--task image_classification \
--model_path zoo:cv/classification/resnet_v1-50/pytorch/sparseml/imagenet/pruned95_quant-none
```
+ <!-- markdown-link-check-disable -->
You should see Uvicorn report that it is running on http://0.0.0.0:5543. Once launched, a /docs path is created with full endpoint descriptions and support for making sample requests.
+ <!-- markdown-link-check-enable -->

Here is an example client request, using the Python requests library for formatting the HTTP:
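
(The snippet itself is collapsed in this diff view.) As a hedged sketch only, a file-based request could look roughly like this; the `/predict/from_files` route and the `request` multipart field name are assumptions, not taken from this page.

```python
import requests

# Assumed route for file-based image requests; verify against the server's /docs page.
url = "http://0.0.0.0:5543/predict/from_files"

# "goldfish.jpeg" is a placeholder path for a local image.
with open("goldfish.jpeg", "rb") as f:
    response = requests.post(url, files=[("request", f)])

print(response.text)
```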

2 changes: 2 additions & 0 deletions docs/use-cases/nlp/question-answering.md
@@ -221,7 +221,9 @@ deepsparse.server \
--task question-answering \
--model_path zoo:nlp/question_answering/obert-base/pytorch/huggingface/squad/pruned90_quant-none # or path/to/onnx
```
+ <!-- markdown-link-check-disable -->
You should see Uvicorn report that it is running on http://0.0.0.0:5543. Once launched, a /docs path is created with full endpoint descriptions and support for making sample requests.
+ <!-- markdown-link-check-enable -->

Here is an example client request, using the Python requests library for formatting the HTTP:
```python
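# The file's own snippet is collapsed in this diff view; the lines below are a
# minimal sketch, assuming the server's default /predict route and a JSON payload
# with "question" and "context" fields.
import requests

url = "http://0.0.0.0:5543/predict"
obj = {"question": "Who is Mark?", "context": "Mark is batman."}

response = requests.post(url, json=obj)
print(response.text)
```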
2 changes: 2 additions & 0 deletions docs/use-cases/nlp/token-classification.md
@@ -219,7 +219,9 @@ deepsparse.server \
--task token_classification \
--model_path "zoo:nlp/token_classification/obert-base/pytorch/huggingface/conll2003/pruned90_quant-none" # or path/to/onnx
```
+ <!-- markdown-link-check-disable -->
You should see Uvicorn report that it is running on http://0.0.0.0:5543. Once launched, a /docs path is created with full endpoint descriptions and support for making sample requests.
+ <!-- markdown-link-check-enable -->

Here is an example client request, using the Python requests library for formatting the HTTP:
```python
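# Collapsed in this diff view; a minimal sketch mirroring the question-answering
# request above, assuming a /predict route and an "inputs" JSON field.
import requests

url = "http://0.0.0.0:5543/predict"
obj = {"inputs": "Mary flew from Nairobi to New York"}

response = requests.post(url, json=obj)
print(response.text)
```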
2 changes: 1 addition & 1 deletion examples/README.md
@@ -32,7 +32,7 @@ Open a Pull Request to [contribute](https://github.com/neuralmagic/deepsparse/bl
| [Image Classification](https://github.com/neuralmagic/deepsparse/tree/main/examples/classification/) | How to use image classification models from SparseZoo to perform inference and benchmarking with the DeepSparse Engine. |
| [Object Detection](https://github.com/neuralmagic/deepsparse/tree/main/examples/detection/) | How to use object detection models from SparseZoo to perform inference and benchmarking with the DeepSparse Engine. |
| [Instance Segmentation](https://github.com/neuralmagic/deepsparse/tree/main/examples/dbolya-yolact/) | How to use an optimized YOLACT model and the DeepSparse Engine to perform real-time instance segmentation. |
- | [AWS Lambda Integration](https://github.com/neuralmagic/deepsparse/tree/main/examples/aws-lambda/) | How to deploy a DeepSparse pipeline on AWS Lambda. |
+ | [AWS Lambda Integration](https://github.com/neuralmagic/deepsparse/tree/main/examples/aws-serverless/) | How to deploy a DeepSparse pipeline on AWS Lambda. |
| [AWS Sagemaker Integration](https://github.com/neuralmagic/deepsparse/tree/main/examples/aws-sagemaker/) | How to deploy a DeepSparse inference server on SageMaker. |
| [Google Cloud Run](https://github.com/neuralmagic/deepsparse/tree/main/examples/google-cloud-run) | How to deploy a DeepSparse inference server on Cloud Run. |
| [Google Kubernetes Engine](https://github.com/neuralmagic/deepsparse/tree/main/examples/google-kubernetes-engine/) | How to deploy a DeepSparse inference server on GKE. |
2 changes: 1 addition & 1 deletion examples/aws-serverless/README.md
@@ -76,7 +76,7 @@ An example `sentiment-inputs.csv` file in the `sample` directory is available to

#### Fargate Compute Configuration

- To edit the hardware configuration of the Fargate container, you can edit the default values in the [template.yaml](https://github.com/neuralmagic/deepsparse/examples/aws-serverless/batch/template.yaml) file in the `batch` directory.
+ To edit the hardware configuration of the Fargate container, you can edit the default values in the [template.yaml](https://github.com/neuralmagic/deepsparse/tree/main/examples/aws-serverless/batch/template.yaml) file in the `batch` directory.

Fargate is currently configured to deploy with 4 vCPUs and 8GB of RAM.

2 changes: 1 addition & 1 deletion integrations/haystack/README.md
@@ -1,5 +1,5 @@
# Haystack: Information Retrieval #
- The relevant features added as a part of the Haystack information retrieval integration are a [Haystack pipeline](/src/deepsparse/transformers/haystack/pipeline.py), an [embedding extraction pipeline](/src/deepsparse/transformers/pipelines/embedding_extraction.py), and two classes, [DeepSparseEmbeddingRetriever](/src/deepsparse/transformers/haystack/nodes.py) and [DeepSparseDensePassageRetriever](/src/deepsparse/transformers/haystack/nodes.py).
+ The relevant features added as a part of the Haystack information retrieval integration are a [Haystack pipeline](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/transformers/haystack/pipeline.py), an [embedding extraction pipeline](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/transformers/pipelines/embedding_extraction.py), and two classes, [DeepSparseEmbeddingRetriever](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/transformers/haystack/nodes.py) and [DeepSparseDensePassageRetriever](https://github.com/neuralmagic/deepsparse/tree/main/src/deepsparse/transformers/haystack/nodes.py).

These features allow a user to perform information retrieval tasks using the Haystack library as well as substitute in sparse retrieval nodes into their existing Haystack systems.
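
As a rough, assumption-laden sketch of how these pieces could slot into a Haystack document-search setup (the `DeepSparseEmbeddingRetriever` constructor arguments and the local model path are guesses, not taken from this README):

```python
from haystack import Document
from haystack.document_stores import InMemoryDocumentStore
from haystack.pipelines import DocumentSearchPipeline

from deepsparse.transformers.haystack import DeepSparseEmbeddingRetriever

# Store a few documents in memory and embed them with the sparse retriever.
document_store = InMemoryDocumentStore(similarity="cosine", embedding_dim=768)
document_store.write_documents(
    [Document(content="DeepSparse runs sparse transformer models efficiently on CPUs.")]
)

retriever = DeepSparseEmbeddingRetriever(
    document_store=document_store,
    model_path="./my-sparse-embedding-model",  # hypothetical local model directory
)
document_store.update_embeddings(retriever)

# Query the store through a standard Haystack document-search pipeline.
pipeline = DocumentSearchPipeline(retriever)
results = pipeline.run(query="What does DeepSparse do?", params={"Retriever": {"top_k": 1}})
print(results["documents"])
```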

2 changes: 1 addition & 1 deletion src/deepsparse/transformers/haystack/README.md
@@ -1,2 +1,2 @@
# Information Retrieval with Haystack #
- For more information about setup, usage, and examples see [integrations/haystack/README.md](/integrations/haystack/README.md)
+ For more information about setup, usage, and examples see [integrations/haystack/README.md](https://github.com/neuralmagic/deepsparse/tree/main/integrations/haystack/README.md)
2 changes: 1 addition & 1 deletion src/deepsparse/yolact/README.md
@@ -25,7 +25,7 @@ Below we describe two possibilities to obtain the required ONNX model.

### Exporting the ONNX File From the Contents of a Local Directory
This pathway is relevant if you intend to deploy a model created using the [SparseML](https://github.com/neuralmagic/sparseml) library.
- For more information refer to the [appropriate YOLACT integration documentation in SparseML](https://github.com/neuralmagic/sparseml/tree/main/integrations/dbolya-yolact)
+ For more information refer to the [appropriate YOLACT integration documentation in SparseML](https://github.com/neuralmagic/sparseml/tree/main/integrations/old-examples/dbolya-yolact)

After training your model with `SparseML`, locate the `.pth` file for the model you'd like to export and run the `SparseML` integrated YOLACT ONNX export script below.

