Support the project by giving it a star! Your feedback and contributions are greatly appreciated.
This repository contains a sample .NET project demonstrating the use of Kernel Memory for semantic search and Retrieval-Augmented Generation (RAG) on a small commercial products dataset. It mimics an e-shop environment where users can search for products, and the application retrieves the most relevant results.
The project features a serverless setup of Kernel Memory, with services embedded directly in the .NET application. You can run this sample using either Postgres (with pgVector) or Qdrant as the vector database.
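For orientation, here is a minimal sketch of how a serverless Kernel Memory instance with a pluggable vector database could be wired up. This is not the sample's actual code: the connector extensions (`WithQdrantMemoryDb`, `WithPostgresMemoryDb`) come from the Kernel Memory connector packages and their exact overloads may differ between versions, and the endpoint/connection values are placeholders.

```csharp
using Microsoft.KernelMemory;

// Build Kernel Memory in serverless mode: the pipeline runs in-process,
// no separate Kernel Memory web service is required.
var builder = new KernelMemoryBuilder()
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

// Pick one vector database; the endpoint and connection string are placeholders.
bool useQdrant = true;
if (useQdrant)
{
    builder.WithQdrantMemoryDb("http://localhost:6333");
}
else
{
    builder.WithPostgresMemoryDb(
        "Host=localhost;Port=5432;Username=postgres;Password=postgres;Database=memory");
}

var memory = builder.Build<MemoryServerless>();
```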
This sample uses OpenAI's `gpt-4o-mini` as the language model and `text-embedding-ada-002` as the embedding model. Other models are also supported; check the Kernel Memory repository for the full list of supported models.
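The same models can also be selected explicitly in code instead of `appsettings.json`. The sketch below assumes the `OpenAIConfig` properties `TextModel` and `EmbeddingModel` from the Kernel Memory library (property names may differ slightly between versions) and uses a placeholder API key.

```csharp
using Microsoft.KernelMemory;

// Explicitly select the LLM and the embedding model used by Kernel Memory.
var memory = new KernelMemoryBuilder()
    .WithOpenAI(new OpenAIConfig
    {
        APIKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
        TextModel = "gpt-4o-mini",                 // answer generation (RAG)
        EmbeddingModel = "text-embedding-ada-002"  // vectorization for semantic search
    })
    .Build<MemoryServerless>();
```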
- **Configure API Key:**
  Open the `appsettings.json` file in the project root and insert your API token under `KernelMemory:Services:OpenAI:APIKey`. This key is required to authenticate with the OpenAI services used in the sample.

  ```json
  {
    "KernelMemory": {
      "Services": {
        "OpenAI": {
          "APIKey": "your-api-key-here"
        }
      }
    }
  }
  ```
- **Run the Application:**
  To start the services, run `docker-compose up -d` from the repository root.
  Alternatively, you can run the `docker-compose` startup project directly from your IDE (Visual Studio/Rider/VS Code).
- **Ingest Sample Dataset:**
  After the application is running, open your browser and navigate to http://localhost:9000. From there, you can ingest the sample dataset located at `/utils/dataset/products.csv` ([link](/utils/dataset/products.csv)).
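For reference, the same ingestion and retrieval can also be driven from code with the serverless Kernel Memory client. This is only an illustrative sketch, not the sample's own ingestion path; the document id, query, and question are made up.

```csharp
using Microsoft.KernelMemory;

var memory = new KernelMemoryBuilder()
    .WithOpenAIDefaults(Environment.GetEnvironmentVariable("OPENAI_API_KEY")!)
    .Build<MemoryServerless>();

// Ingest the sample dataset (path relative to the repository root).
await memory.ImportDocumentAsync("utils/dataset/products.csv", documentId: "products");

// Semantic search: returns the most relevant product chunks.
SearchResult hits = await memory.SearchAsync("waterproof hiking backpack");
Console.WriteLine($"Found {hits.Results.Count} matching documents");

// RAG: the answer is generated by the LLM and grounded in the retrieved products.
MemoryAnswer answer = await memory.AskAsync("Which products are suitable for hiking in the rain?");
Console.WriteLine(answer.Result);
```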
- http://localhost:9000 - Application UI
- http://localhost:9000/swagger - Swagger API Documentation
- http://localhost:5341 - Seq (structured logs, traces). Default login: `admin`, default password: `password`
- http://localhost:6333/dashboard - Qdrant Dashboard
Feel free to open discussions, submit pull requests, or share suggestions to help improve the project! The authors are very friendly and open to feedback and contributions.