
Is InferenceSession.Run thread-safe when using DirectML provider? #9441

I found this document: https://onnxruntime.ai/docs/execution-providers/DirectML-ExecutionProvider.html. It says:

Additionally, as the DirectML execution provider does not support parallel execution, it does not support multi-threaded calls to Run on the same inference session. That is, if an inference session is created using the DirectML execution provider, only one thread may call Run at a time. Multiple threads are permitted to call Run simultaneously if they operate on different inference session objects.

Performance Tuning
The DirectML execution provider works most efficiently when tensor shapes are known at the time a session is created. This provides a few performance benefits: 1) Because constant folding…
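
Given that constraint, a straightforward workaround is to serialize Run calls with a per-session lock. Below is a minimal Python sketch, not a definitive pattern; the model path "model.onnx" and input name "input" are placeholders, and "DmlExecutionProvider" is the provider name the Python API uses for DirectML.

```python
import threading

import numpy as np
import onnxruntime as ort

# Placeholder model path and input name; substitute your own.
session = ort.InferenceSession(
    "model.onnx", providers=["DmlExecutionProvider"]
)
run_lock = threading.Lock()  # one lock per DirectML session

def infer(batch: np.ndarray):
    # Per the docs quoted above, only one thread may call Run on a
    # DirectML session at a time, so hold the lock around run().
    with run_lock:
        return session.run(None, {"input": batch})
```

Alternatively, since the docs permit concurrent Run calls on different sessions, you can create one session per worker thread and skip the lock, trading extra memory for parallelism.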
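
On the performance-tuning point: if your model has free (dynamic) dimensions, SessionOptions lets you pin them before the session is created, so tensor shapes are known up front as the docs recommend. A hedged sketch; the dimension name "batch" is hypothetical and depends on your model:

```python
import onnxruntime as ort

opts = ort.SessionOptions()
# Pin a free dimension so its size is known at session-creation time.
# "batch" is a hypothetical dimension name; inspect your model's
# inputs to find the real ones.
opts.add_free_dimension_override_by_name("batch", 1)

session = ort.InferenceSession(
    "model.onnx",  # placeholder path
    sess_options=opts,
    providers=["DmlExecutionProvider"],
)
```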

Replies: 1 comment

Answer selected by wanglvhang