-
Hey there, @getzelus! I'm here to help you with your technical questions and bugs. Let's squash those bugs together! 🐞 To resolve the 404 Not Found error when using the ChatOllama class, make sure the client points at the address and port your local server is actually listening on. Here's how you can set it up:

import { ChatOllama } from "@langchain/ollama";
const llm = new ChatOllama({
  model: "llama3.2-1b-instruct",
  baseUrl: "http://127.0.0.1:1337", // Ensure this matches your server's address and port
  temperature: 0,
});
try {
  const res = await llm.invoke("What's the capital of Japan?");
  console.log(res);
} catch (error) {
  console.error("Error invoking LLM:", error);
}

Make sure the server is actually running and reachable at the configured baseUrl, and that the model name matches a model installed in your server.
-
Hello guys,
I'm trying to use LangChain.js with a local LLM (llama3.2-1b-instruct) that I installed in Jan AI, and I started its server.
I can make a simple call without LangChain.js and it works.
But when I try with LangChain.js, I get an error.
It's difficult to find the solution in the documentation, forums, or ChatGPT.
Here is the code that works without LangChain.js:
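(The original snippet wasn't preserved in this post. As a hedged reconstruction, a dependency-free direct call to Jan's OpenAI-compatible chat-completions endpoint might look like the sketch below; the port 1337 and the /v1/chat/completions route are assumptions based on Jan's defaults, so match them to your setup.)

```javascript
// Hypothetical sketch of a direct call to Jan's local server.
// Assumptions: Jan listens on http://127.0.0.1:1337 and exposes the
// OpenAI-compatible /v1/chat/completions route. Requires Node 18+ (global fetch).
const baseUrl = "http://127.0.0.1:1337";

// Request body in the OpenAI chat-completions format.
const payload = {
  model: "llama3.2-1b-instruct",
  messages: [{ role: "user", content: "What's the capital of Japan?" }],
  temperature: 0,
};

async function askJan() {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status} from ${res.url}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Calling `askJan().then(console.log)` with the server up should print the model's answer; a wrong port or route is what typically produces the 404 described above.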
Here is the code for LangChain.js that doesn't work:
I tried many options, like changing the URL, the name of the model, using the ChatOpenAI package, etc.
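(Since Jan serves an OpenAI-compatible API rather than the native Ollama API, one avenue worth checking is pointing ChatOpenAI at Jan's /v1 base URL. This is a sketch, not a confirmed fix: it assumes @langchain/openai ^0.3 as listed in the system info below, Jan on port 1337, and that the model name matches one installed in Jan.)

```javascript
import { ChatOpenAI } from "@langchain/openai";

// Sketch: route ChatOpenAI to Jan's OpenAI-compatible endpoint.
// The port, /v1 path, and model name are assumptions; match them to your Jan setup.
const llm = new ChatOpenAI({
  model: "llama3.2-1b-instruct",
  apiKey: "not-needed", // local servers typically ignore the key, but the client requires one
  temperature: 0,
  configuration: {
    baseURL: "http://127.0.0.1:1337/v1", // note the /v1 suffix
  },
});

const res = await llm.invoke("What's the capital of Japan?");
console.log(res.content);
```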
The error I got:
Thank you for your attention.
System Info
Windows 11
Node 20.14
"@langchain/core": "^0.3.30",
"@langchain/ollama": "^0.1.4",
"@langchain/openai": "^0.3.17",
"axios": "^1.7.9",
"dotenv": "^16.4.7",
"langchain": "^0.3.11"