Enhancement: Support Mozilla Deepspeech #340
Besides the API call, the library needs a public API endpoint for different languages.
DeepSpeech is an offline library (like Sphinx), not a web-based service.
There are "as a service" API endpoints for pretrained models and voice detection, for example mycroft.ai.
They have a WebSocket.
FWIW, DeepSpeech's focus is on client-side, offline recognition. If this package works offline with PocketSphinx, there's no reason it shouldn't be able to work with DeepSpeech. Similar constraints apply, namely needing to download models for the languages you want to transcribe.
Hi all,
I've given up on DeepSpeech. It isn't accurate enough. I'm using Julius for a wake word and Google for everything else. Thank you all.
I'm surprised I'm the first person to raise this one. :)
Mozilla have been building datasets and software for privacy-respecting speech recognition over at DeepSpeech. It's cool that speech_recognition allows abstraction over so many APIs, so that software can be built to treat the SR backend as a utility. It would be especially cool if this let users of all those programs written for Google Voice or whatever switch over to DeepSpeech, and remove another surveillance device from the world.
DeepSpeech now ships models in its releases channel, and Mycroft are switching their backend to DeepSpeech shortly, so it's getting traction and the kind of support that should rapidly improve it.
Thanks!
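The "backend as a utility" idea above can be sketched as a small dispatch registry. This is purely illustrative: the names below (`register_backend`, `transcribe`, the stub functions) are hypothetical and are not the real `speech_recognition` API, which instead exposes `recognize_*` methods on a `Recognizer` object. The point is only that swapping Google for DeepSpeech should be a one-string change for the caller.

```python
# Illustrative sketch (NOT the real speech_recognition API): treating
# speech-recognition backends as interchangeable, registered by name.
from typing import Callable, Dict

# Registry mapping a backend name to a transcription function.
_BACKENDS: Dict[str, Callable[[bytes], str]] = {}

def register_backend(name: str):
    """Decorator registering a transcription function under `name`."""
    def wrap(fn: Callable[[bytes], str]) -> Callable[[bytes], str]:
        _BACKENDS[name] = fn
        return fn
    return wrap

def transcribe(audio: bytes, backend: str = "deepspeech") -> str:
    """Dispatch raw audio to the chosen backend by name."""
    try:
        return _BACKENDS[backend](audio)
    except KeyError:
        raise ValueError(f"unknown backend {backend!r}") from None

@register_backend("deepspeech")
def _deepspeech_stub(audio: bytes) -> str:
    # A real implementation would load a local DeepSpeech model and run
    # it on 16 kHz PCM samples; stubbed here to keep the sketch offline.
    return "<deepspeech transcript>"

@register_backend("sphinx")
def _sphinx_stub(audio: bytes) -> str:
    # Likewise a stub standing in for PocketSphinx.
    return "<sphinx transcript>"
```

With this shape, a program written against `transcribe(audio, backend="google")` could move to DeepSpeech without touching any other code, which is the enhancement this issue is asking the library to enable.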