Replies: 1 comment · 2 replies
Resolution from the original poster: ohh, silly me, the env var was set to https but the server is plain http. All OK now!
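For anyone who lands here with the same symptom, a minimal sketch of what that fix looks like, assuming fabric reads its OpenAI-compatible endpoint from an `OPENAI_BASE_URL` entry in `~/.config/fabric/.env` (both the variable name and the file location are assumptions to check against your own fabric install):

```sh
# ~/.config/fabric/.env  (assumed location; adjust to your setup)

# Broken: the jan.ai server speaks plain HTTP, so an https:// scheme
# makes the connection fail before any request is sent.
# OPENAI_BASE_URL=https://10.0.1.12:1337/v1

# Working: match the scheme the server actually serves.
OPENAI_BASE_URL=http://10.0.1.12:1337/v1
OPENAI_API_KEY=sk-anything   # local servers usually accept a placeholder key
```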
I'm kindly asking for guidance on using fabric with a local, private LLM. I installed fabric on my home Linux PC, ran the --setup option, and added my OpenAI key; fabric --listmodels shows the GPT models.
On my home Windows PC, I have jan.ai (which is compatible with the OpenAI API) running as a server with Meta's llama3-8b-instruct.
Connectivity from the Linux web browser is good: the URL (e.g. http://10.0.1.12:1337/) renders an interface with models, chat, messages, threads, and assistants.
In my Linux shell, I need to understand how to use the fabric environment variables and/or the --remoteOllamaServer option. So far I get either a Connection Error or the list of GPT models, but never my local llama3. I'm hoping to see llama3 listed and to pass a simple question to the model, getting output from the ai pattern (see the sketches after this message).
Thanks,
Chris
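Since jan.ai exposes an OpenAI-compatible API, one way to separate networking problems from fabric configuration problems is to hit the API directly from the Linux shell. This sketch assumes jan.ai serves the standard OpenAI routes under /v1 on the same port as its web UI; check jan.ai's docs if the paths differ:

```sh
# List the models the server exposes; llama3-8b-instruct should appear here.
curl -s http://10.0.1.12:1337/v1/models

# Send a minimal chat completion to the local model.
curl -s http://10.0.1.12:1337/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3-8b-instruct",
        "messages": [{"role": "user", "content": "Say hello in one sentence."}]
      }'
```

If the first command lists the model but fabric still cannot see it, the problem is on the fabric side rather than the network.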
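On the fabric side, a hedged sketch of the two approaches mentioned in the question. Note that --remoteOllamaServer targets an Ollama server, whose API differs from the OpenAI-style API jan.ai speaks, so the environment-variable route is the more likely fit; the exact variable and flag behavior are assumptions to verify against fabric --help and its docs:

```sh
# Route 1 (assumed): point fabric's OpenAI client at the jan.ai server
# instead of api.openai.com, then confirm the local model is listed.
export OPENAI_BASE_URL=http://10.0.1.12:1337/v1
fabric --listmodels

# Pass a simple question through the ai pattern; the model name must
# match whatever /v1/models reported.
echo "What is the capital of France?" | fabric --model llama3-8b-instruct --pattern ai

# Route 2: --remoteOllamaServer expects an actual Ollama server
# (default port 11434), not an OpenAI-compatible one like jan.ai,
# so it is probably not the right flag for this setup.
# fabric --remoteOllamaServer http://10.0.1.12:11434 --model llama3 --pattern ai
```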