
Commit

When device is Auto (the default), we will only consider discrete GPUs; otherwise fall back to CPU.
manyoso committed Sep 13, 2023
1 parent 8f99dca commit 891ddaf
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion gpt4all-chat/chatllm.cpp
@@ -275,7 +275,7 @@ bool ChatLLM::loadModel(const ModelInfo &modelInfo)
     if (requestedDevice != "CPU") {
         const size_t requiredMemory = m_llModelInfo.model->requiredMem(filePath.toStdString());
         std::vector<LLModel::GPUDevice> availableDevices = m_llModelInfo.model->availableGPUDevices(requiredMemory);
-        if (!availableDevices.empty() && requestedDevice == "Auto") {
+        if (!availableDevices.empty() && requestedDevice == "Auto" && devices.front().type == 2 /*a discrete gpu*/) {
             m_llModelInfo.model->initializeGPUDevice(devices.front());
         } else {
             for (LLModel::GPUDevice &d : availableDevices) {
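The policy this commit introduces can be sketched as a standalone function. This is a minimal illustration, not the real ChatLLM code: the `GPUDevice` struct and `pickDevice` helper below are hypothetical stand-ins for `LLModel::GPUDevice` and the selection logic in `loadModel`, and the assumption that device type 2 means "discrete GPU" comes from the comment in the diff.

```cpp
#include <string>
#include <vector>

// Hypothetical stand-in for LLModel::GPUDevice. Per the commit's
// comment, type == 2 marks a discrete GPU.
struct GPUDevice {
    int type;          // 2 = discrete GPU (assumed: other values = integrated/other)
    std::string name;
};

// Returns the name of the device to initialize, or "CPU" when the
// Auto policy finds no discrete GPU -- the fallback this commit adds.
std::string pickDevice(const std::string &requested,
                       const std::vector<GPUDevice> &available) {
    if (requested == "CPU")
        return "CPU";
    if (requested == "Auto") {
        // Auto only accepts the first reported device when it is discrete;
        // integrated GPUs (or an empty list) fall back to CPU.
        if (!available.empty() && available.front().type == 2)
            return available.front().name;
        return "CPU";
    }
    // An explicitly requested device is matched against the list by name.
    for (const GPUDevice &d : available)
        if (d.name == requested)
            return d.name;
    return "CPU";
}
```

With this shape, `pickDevice("Auto", {{2, "SomeDiscreteGPU"}})` selects the GPU, while `pickDevice("Auto", {{1, "SomeIntegratedGPU"}})` yields `"CPU"`, matching the intent described in the commit message.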

0 comments on commit 891ddaf
