[Question]: When calling the Create Knowledge Base API and selecting the vllm-deployed bge-m3 embedding model, I received {'code': 102, 'message': "embedding_model bge-m3@Xinference doesn't exist"}
#6489
Labels
🙋‍♀️ question
Further information is requested
Describe your problem
When calling the Create Knowledge Base API and selecting the vllm-deployed bge-m3 embedding model, I received {'code': 102, 'message': "embedding_model bge-m3 doesn't exist"}.

Code:
```python
create_data = {
    "name": DATASET_NAME,
    "chunk_method": "naive",
    "embedding_id": "bge-m3",
}
create_response = requests.post(
    datasets_url,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    },
    json=create_data,
)
```
Replacing "bge-m3" with "BAAI/bge-large-zh-v1.5@BAAI" works. Since creating a knowledge base by selecting "bge-m3" in the web version succeeds, how should I adjust the code?
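The working value "BAAI/bge-large-zh-v1.5@BAAI" suggests the API resolves embedding models by a fully qualified "model@factory" name rather than the bare name the web UI shows. A minimal sketch of building the request payload under that assumption follows; the factory label "VLLM" and the `qualify_model` helper are hypothetical, so substitute whatever provider name your RAGFlow deployment lists for the vllm-served model.

```python
def qualify_model(model: str, factory: str) -> str:
    """Return a fully qualified "model@factory" name.

    Assumption: the server matches embedding models on this qualified
    form, as the working "BAAI/bge-large-zh-v1.5@BAAI" example implies.
    """
    # Leave already-qualified names untouched.
    if "@" in model:
        return model
    return f"{model}@{factory}"

# "VLLM" is an assumed factory label; check the provider name shown in
# your deployment's model settings and use that string instead.
create_data = {
    "name": "my_dataset",
    "chunk_method": "naive",
    "embedding_id": qualify_model("bge-m3", "VLLM"),
}

print(create_data["embedding_id"])  # bge-m3@VLLM
```

If the qualified name still returns code 102, comparing against a model name string taken verbatim from the web UI's knowledge-base settings is a quick way to confirm the exact form the server expects.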