Commit

Minor fixes to quick-tour.mdx (#139)
* Update quick-tour.mdx

* Update src/content/index/quick-tour.mdx

Co-authored-by: Jeannie Finks <74554921+jeanniefinks@users.noreply.github.com>

robertgshaw2-redhat and jeanniefinks authored Dec 13, 2022
1 parent b91452a commit 22b0653
Showing 1 changed file with 8 additions and 9 deletions.
17 changes: 8 additions & 9 deletions src/content/index/quick-tour.mdx
@@ -163,7 +163,7 @@ from deepsparse.pipelines.custom_pipeline import CustomTaskPipeline

def preprocess(inputs):
pass # define your function
-def postprocess(outputs)
+def postprocess(outputs):
pass # define your function

custom_pipeline = CustomTaskPipeline(
@@ -182,7 +182,7 @@ pipeline_outputs = custom_pipeline(pipeline_inputs)
**Additional Resources**

- Get Started and [Use A Model](/get-started/use-a-model)
-- Get Started and [Use A Model in a Custom Use Case)](/get-started/use-a-model/custom-use-case)
+- Get Started and [Use A Model in a Custom Use Case](/get-started/use-a-model/custom-use-case)
- Refer to [Use Cases](/use-cases) for details on usage of supported use cases
- List of Supported Use Cases [Docs Coming Soon]
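The two hunks above complete the custom-pipeline snippet. As a hedged illustration of the wrap-around pattern only (pure Python, deliberately not importing DeepSparse — `fake_engine` is a stand-in for the compiled model, and the hook bodies are hypothetical):

```python
# Minimal mock of the pre/post-processing pipeline pattern shown above.
# NOTE: this does not use deepsparse; "fake_engine" stands in for the
# compiled ONNX model so the control flow is runnable anywhere.

def preprocess(inputs):
    # e.g., normalize/tokenize raw inputs into model-ready tensors
    return [float(x) for x in inputs]

def postprocess(outputs):
    # e.g., map raw model outputs back to labels or scores
    return {"scores": outputs}

def fake_engine(tensors):
    # stand-in for the model-execution step
    return [t * 2 for t in tensors]

def custom_pipeline(raw_inputs):
    # pipeline = preprocess -> engine -> postprocess
    return postprocess(fake_engine(preprocess(raw_inputs)))

print(custom_pipeline([1, 2, 3]))  # {'scores': [2.0, 4.0, 6.0]}
```

In the real API, the engine step is supplied by `CustomTaskPipeline` from the model you pass it; only the two hook functions are user code.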

@@ -207,20 +207,19 @@ predictions.

DeepSparse Server is launched from the CLI, with configuration via either command line arguments or a configuration file.

-With the command line argument path, users specify a use case via the `task` argument (e.g. `image_classification` or `question_answering`) as
+With the command line argument path, users specify a use case via the `task` argument (e.g., `image_classification` or `question_answering`) as
well as a model (either a local ONNX file or a SparseZoo stub) via the `model_path` argument:
```bash
-deepsparse.server task [use_case_name] --model_path [model_path]
+deepsparse.server --task [use_case_name] --model_path [model_path]
```

With the config file path, users create a YAML file that specifies the server configuration. A YAML file looks like the following:

```yaml
num_workers: 4 # specify multi-stream (more than one worker)
endpoints:
-- task: [task_name] # specifiy use case (e.g. image_classification, question_answering)
+- task: task_name # specify use case (e.g., image_classification, question_answering)
route: /predict # specify the route of the endpoint
-model: [model_path] # specify sparsezoo stub or path to local onnx file
+model: model_path # specify sparsezoo stub or path to local onnx file
name: any_name_you_want

# - ... add as many endpoints as needed
```
@@ -229,7 +228,7 @@ endpoints:
The Server is then launched with the following:

```bash
-deepsparse.server config_file config.yaml
+deepsparse.server --config_file config.yaml
```
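Once launched, each endpoint accepts JSON over HTTP at its configured route. A hedged client sketch using only the standard library — the port, route, and payload field names here are assumptions to be matched against your own server configuration:

```python
import json
from urllib import request

# Assumed host/port and the /predict route from the example config above.
SERVER_URL = "http://localhost:5543/predict"

def query_server(payload: dict) -> dict:
    """POST a JSON payload to the Server and return the decoded response."""
    req = request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# Hypothetical payload for a question_answering endpoint:
payload = {
    "question": "What is DeepSparse?",
    "context": "DeepSparse is an inference runtime.",
}
print(json.dumps(payload))
# query_server(payload) would be invoked against a running server.
```

The request body mirrors the inputs the underlying Pipeline expects for that task, since the Server wraps Pipelines directly.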

Clients interact with the Server via HTTP. Because the Server uses Pipelines internally,
@@ -284,7 +283,7 @@ onnx_filepath = "path/to/onnx/model.onnx"
batch_size = 64

# Generate random sample input
-inputs = generate_random_inputs(model=onnx_filepath, batch_size=batch_size)
+inputs = generate_random_inputs(onnx_filepath, batch_size)

# Compile and run
engine = Engine(onnx_filepath, batch_size)
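Because the Engine in the snippet above is compiled for a fixed `batch_size`, real inputs must be fed to it in chunks of exactly that size. A small pure-Python helper (hypothetical, not part of DeepSparse) sketches one common approach, padding the final partial batch:

```python
def batches(items, batch_size):
    """Yield successive fixed-size chunks of `items`; a trailing partial
    chunk is padded by repeating its last element so every yielded batch
    has exactly `batch_size` elements."""
    for start in range(0, len(items), batch_size):
        chunk = items[start:start + batch_size]
        if len(chunk) < batch_size:
            chunk = chunk + [chunk[-1]] * (batch_size - len(chunk))
        yield chunk

data = list(range(10))
print([len(b) for b in batches(data, 4)])  # [4, 4, 4]
```

When padding is used, the padded predictions are simply discarded after inference; alternatively, a second Engine compiled with a smaller batch size can handle the remainder.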
