
Fix a bug where we're not properly falling back to CPU.
manyoso committed Sep 13, 2023
1 parent 0458c9b commit 21a3244
Showing 1 changed file with 2 additions and 2 deletions.
4 changes: 2 additions & 2 deletions gpt4all-chat/chatllm.cpp
@@ -275,8 +275,8 @@ bool ChatLLM::loadModel(const ModelInfo &modelInfo)
     if (requestedDevice != "CPU") {
         const size_t requiredMemory = m_llModelInfo.model->requiredMem(filePath.toStdString());
         std::vector<LLModel::GPUDevice> availableDevices = m_llModelInfo.model->availableGPUDevices(requiredMemory);
-        if (!availableDevices.empty() && requestedDevice == "Auto" && devices.front().type == 2 /*a discrete gpu*/) {
-            m_llModelInfo.model->initializeGPUDevice(devices.front());
+        if (!availableDevices.empty() && requestedDevice == "Auto" && availableDevices.front().type == 2 /*a discrete gpu*/) {
+            m_llModelInfo.model->initializeGPUDevice(availableDevices.front());
         } else {
             for (LLModel::GPUDevice &d : availableDevices) {
                 if (QString::fromStdString(d.name) == requestedDevice) {
