---
layout: default
title: Quick start
nav_order: 1
description: "fastDeploy enables you to build production ready APIs for Deep Learning models."
permalink: /
---
# fastDeploy
{: .fs-9 }
fastDeploy provides a convenient way to serve DL/ML models with minimal extra code.
{: .fs-6 .fw-300 }
Download CLI{: .btn .btn-primary .fs-5 .mb-4 .mb-md-0 .mr-2 } View it on GitHub{: .btn .fs-5 .mb-4 .mb-md-0 }
- Optimal batch size is estimated and inputs are automatically batched.
- Prediction can be run in sync or async mode using the `/sync` and `/async` endpoints.
- Supports everything supported by Python: TensorFlow, Keras, PyTorch, MXNet, Kaldi ..
- Minimal extra code: You don't need to write any fastDeploy specific code.
Docker is the only dependency.
fastDeploy's CLI is a simple, helpful wrapper over Docker.
```bash
# See all the arguments supported.
./fastDeploy.py --help

# Print the list of available recipes with descriptions.
./fastDeploy.py --list_recipes

# Run a recipe (eg: craft_text_detection).
./fastDeploy.py --run craft_text_detection --name craft_text_detection_test_run
```
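Once a recipe container is running, predictions can be requested over HTTP via the `/sync` (or `/async`) endpoint. The sketch below shows one way such a call could look from Python; the host/port (`localhost:8080`) and the `{"data": [...]}` payload shape are illustrative assumptions, not documented fastDeploy defaults — check the recipe's own docs for the actual request format.

```python
import json
import urllib.request

def predict_sync(inputs, host="http://localhost:8080"):
    """POST a batch of inputs to the /sync endpoint of a running recipe.

    NOTE: the host/port and payload shape here are assumptions for
    illustration; fastDeploy batches the submitted inputs internally.
    """
    payload = json.dumps({"data": inputs}).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/sync",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Build an example payload (without sending it) to show its shape:
example_payload = json.dumps({"data": ["image_1.jpg", "image_2.jpg"]})
```

An `/async` call would follow the same pattern but return immediately with a job identifier to poll, which is useful when predictions are slow and the client should not block.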
fastDeploy is built by notAI.tech.
fastDeploy is distributed under the MIT license.
When contributing to this repository, please first discuss the change you wish to make via a GitHub issue.
notAI.tech is committed to fostering a welcoming community.
View our Code of Conduct on our GitHub repository.