Which local LLM models can be used for kernel function calls? #9942
-
I tried running cpu-int4-rtn-block-32 (Phi-3) via ONNX, but it does not appear to support kernel functions. I would like to know which LLMs that can be run locally do support them.
Replies: 1 comment 1 reply
Hi,
Non-expert doing exploration here. I had a similar problem and came across this overview: https://learn.microsoft.com/en-us/semantic-kernel/concepts/ai-services/chat-completion/function-calling/function-choice-behaviors?pivots=programming-language-csharp#supported-ai-connectors
Not sure if it solves your issue, though.
Also, I got function calling working with Ollama even though the table in the link above says it's not supported yet. I'm using the NuGet package Microsoft.SemanticKernel.Connectors.Ollama 1.32.0-alpha.
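
For reference, here is a minimal sketch of what worked for me. Assumptions: the prerelease Microsoft.SemanticKernel.Connectors.Ollama package is installed alongside Microsoft.SemanticKernel, an Ollama server is running locally on the default port, and the model pulled into Ollama (I use `llama3.1` here as a placeholder; substitute whatever you have) actually supports tool calling. The plugin name and prompt are made up for illustration.

```csharp
using System;
using System.ComponentModel;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.Ollama;

// A trivial plugin the model can choose to invoke.
public class TimePlugin
{
    [KernelFunction, Description("Returns the current UTC time.")]
    public string GetUtcTime() => DateTime.UtcNow.ToString("o");
}

public static class Program
{
    public static async Task Main()
    {
        var builder = Kernel.CreateBuilder();
        // Assumes Ollama is serving a tool-capable model at the default endpoint.
        builder.AddOllamaChatCompletion(
            modelId: "llama3.1",
            endpoint: new Uri("http://localhost:11434"));
        builder.Plugins.AddFromType<TimePlugin>();
        var kernel = builder.Build();

        // Auto() lets the model decide whether to call registered functions.
        var settings = new OllamaPromptExecutionSettings
        {
            FunctionChoiceBehavior = FunctionChoiceBehavior.Auto()
        };

        var result = await kernel.InvokePromptAsync(
            "What time is it in UTC?",
            new KernelArguments(settings));
        Console.WriteLine(result);
    }
}
```

The key piece is `FunctionChoiceBehavior.Auto()` in the execution settings; without it the model never sees the registered functions. Whether the call actually happens still depends on the underlying model supporting tools, which is why results vary per model even on the same connector.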