GPU gets used and BLAS = 1 when I install the CUDA-enabled llama-cpp-python wheel from this URL (https://jllllll.github.io/llama-cpp-python-cuBLAS-wheels/AVX2/cu117/llama-cpp-python/)
#1712
Why doesn't the GPU get recognized when I install from here instead? (https://abetlen.github.io/llama-cpp-python/whl/cu121/llama-cpp-python/). The CUDA version on both my cluster driver and workers is 12.2. Any suggestions as to why this is happening? I want to use the CUDA 12.1 wheel with llama-cpp-python version > 0.2.72 so that I'm running the CVE-fixed version in prod.
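For context, this is roughly how I'm checking whether an installed wheel was actually built with CUDA support. A minimal sketch, assuming a llama-cpp-python version recent enough to expose `llama_supports_gpu_offload` in its low-level bindings (the model path below is just a placeholder):

```python
# Installed from the CUDA 12.1 index, per the llama-cpp-python docs:
#   pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cu121
import llama_cpp

# True only if the wheel was compiled with a GPU backend (e.g. CUDA/cuBLAS);
# False means you got a CPU-only build.
print("GPU offload supported:", llama_cpp.llama_supports_gpu_offload())

# Loading a model with verbose=True prints the backend/build info at startup;
# on a CUDA build you should see the GPU backend and offloaded layers in the log.
llm = llama_cpp.Llama(
    model_path="./model.gguf",  # placeholder path
    n_gpu_layers=-1,            # offload all layers to the GPU
    verbose=True,
)
```

With the cu117 wheel from the first URL this reports GPU support as expected; with the cu121 wheel it behaves like a CPU-only build.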