GPU cuda implementation #160
Comments
Just set an environment variable, like:
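The example after "like" did not survive the page scrape, but on Windows cmd (the shell used later in this thread) setting the two variables discussed here would look something like this — a sketch, assuming `TORCH_DEVICE` and `INFERENCE_RAM` are the variables meant:

```bat
:: set for the current cmd session only
set TORCH_DEVICE=cuda
set INFERENCE_RAM=16

:: or persist them for future sessions (takes effect in newly opened shells)
setx TORCH_DEVICE cuda
setx INFERENCE_RAM 16
```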
So... I didn't follow the exact suggested order of installation and installed marker-pdf first.

```
(marker) d:\marker>echo %TORCH_DEVICE%
(marker) d:\marker>echo %INFERENCE_RAM%
(marker) d:\marker>marker 1-input 2-output
```
The log continues: `Loaded recognition model vikp/surya_rec on device cuda with dtype torch.float16`. When I discussed it with the brand-new ChatGPT, it suggested tampering with the code, which seems a bit strange and, to be honest, foreign to me. Would you mind pushing me one more time?

P.S.: I also noticed that after completely reinstalling the project [git clone, new env], on the first run the models were not downloaded again. If they aren't in the project repo or in the conda env, where are they?
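On the P.S.: assuming marker fetches its models via the Hugging Face Hub (the `vikp/surya_rec` model name suggests so), downloads are cached per user rather than per project, which would explain why a fresh clone found them already present. A minimal sketch of where to look (`hf_cache_dir` is a hypothetical helper; the real `huggingface_hub` resolution also considers variables like `HF_HUB_CACHE`):

```python
import os

def hf_cache_dir() -> str:
    """Approximate the default huggingface_hub cache location.

    Honors HF_HOME if set; otherwise falls back to ~/.cache/huggingface
    (on Windows, "~" expands under the user profile directory).
    """
    return os.environ.get(
        "HF_HOME",
        os.path.expanduser(os.path.join("~", ".cache", "huggingface")),
    )
```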
I'm very sorry for my ignorance, but apparently I'm not doing well with a very basic matter: I can't set up GPU/CUDA. I tried different ways of writing it in marker/settings.py:

or

```python
TORCH_DEVICE: Optional[str] = cuda
```

together with

or

```python
INFERENCE_RAM: int = 16
```

all 4 combinations, but I still get:
CPU at 100%, and nothing on the GPU side, neither usage nor memory. I do have a 4060 Ti / 16 GB and use torch with the GPU in other applications...
Would someone be kind enough to explain to this noob where he is making a mistake?
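The suggestion above to use an environment variable hints at the usual resolution order: the env var, when present, overrides the default baked into settings.py. A hedged sketch of that pattern (`effective_torch_device` is a hypothetical helper; marker's actual resolution logic may differ):

```python
import os

def effective_torch_device(settings_default: str = "cpu") -> str:
    """Resolve the device the way many tools do: an environment
    variable, if set, wins over the settings.py default.
    (Hypothetical helper; marker's real logic may differ.)
    """
    return os.environ.get("TORCH_DEVICE", settings_default)
```

One quick check this enables: in cmd, `echo %TORCH_DEVICE%` printing the literal `%TORCH_DEVICE%` means the variable was never set in that shell, so the settings default would win.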