
[Feat] KnowledgeBase/Vector Store - Log StandardLoggingVectorStoreRequest for requests made when a vector store is used #10509

Merged
4 commits merged into main on May 2, 2025

Conversation

@ishaan-jaff (Contributor) commented May 2, 2025

An example of what LiteLLM logs for users in the StandardLoggingPayload (SLP):

[
    {
        "vector_store_id": "T37J8R4WTM",
        "query": "what is litellm?",
        "vector_store_search_response": {
            "search_query": "what is litellm?",
            "data": [
                {
                    "score": 0.47120637,
                    "content": [
                        {
                            "text": "Try LiteLLM Enterprise Try LiteLLM Enterprise Deploy LiteLLM Open Source Deploy LiteLLM Open Source #### \ud83d\ude85 LiteLLM Cost Tracking Cost Tracking Batches API Guardrails Model Access Model Access Budgets Budgets LLM Observability Rate Limiting Rate Limiting Prompt Management Prompt Management s3 Logging Pass-Through Endpoints User User User * * * * * * * 0M+ 0M+ docker pulls 1B+ 1B+ requests served 80% 80% uptime 425\\+ 425\\+ contributors ## What is LiteLLM? LiteLLM simplifies **model access**, **spend tracking** and **fallbacks** across 100+ LLMs. Watch this demo, to learn more. ## Features LiteLLM makes it easy for Platform teams to give developers LLM access Spend Tracking Budgets & Rate Limits OpenAI-Compatible LLM Fallbacks Accurately charge teams for their usage. * Attribute cost to key/user/team/org -> * Automatic spend tracking across OpenAI/Azure/Bedrock/GCP/etc. -> * Tag-based spend tracking -> * Log spend to s3/gcs/etc. -> * Prompt formatting support for HF models -> LiteLLM makes it easy for Platform teams to give developers LLM access Spend Tracking Budgets & Rate Limits OpenAI-Compatible LLM Fallbacks Accurately charge teams for their usage. * Attribute cost to key/user/team/org -> * Automatic spend",
                            "type": "text"
                        }
                    ]
                }
            ]
        }
    }
]
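
For reference, these entries can be consumed from a custom callback. The sketch below reads the StandardLoggingPayload off the callback kwargs and assumes the vector store requests land under a vector_store_request_metadata key (the name used in this PR's commit messages); treat that field name and the entry shape as assumptions, not a confirmed API.

# Minimal sketch of a LiteLLM custom logger that surfaces the vector store
# requests recorded in the StandardLoggingPayload (SLP).
import litellm
from litellm.integrations.custom_logger import CustomLogger

class VectorStoreAuditLogger(CustomLogger):
    async def async_log_success_event(self, kwargs, response_obj, start_time, end_time):
        slp = kwargs.get("standard_logging_object") or {}
        # Assumed key name, taken from this PR's commit messages; verify it
        # against your installed LiteLLM version.
        for req in slp.get("vector_store_request_metadata") or []:
            response = req.get("vector_store_search_response") or {}
            print(
                f"vector_store_id={req.get('vector_store_id')} "
                f"query={req.get('query')!r} "
                f"results={len(response.get('data', []))}"
            )

litellm.callbacks = [VectorStoreAuditLogger()]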

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added tests in the tests/litellm/ directory; adding at least 1 test is a hard requirement (see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature
✅ Test

Changes


@ishaan-jaff merged commit 28cb7cc into main on May 2, 2025
38 of 44 checks passed
S1LV3RJ1NX pushed a commit that referenced this pull request on May 6, 2025
[Feat] KnowledgeBase/Vector Store - Log `StandardLoggingVectorStoreRequest` for requests made when a vector store is used (#10509)

* ensure vector store results are logged in SLP

* fix tests

* fix tests with vector_store_request_metadata

* fix linting
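
For illustration, here is a hypothetical shape check against the example payload above; this test is mine, not one of the tests added in this PR's tests/litellm/ changes.

# Hypothetical pytest-style shape check mirroring the example SLP entry above;
# not a test from this PR.
def test_vector_store_request_entry_shape():
    entry = {
        "vector_store_id": "T37J8R4WTM",
        "query": "what is litellm?",
        "vector_store_search_response": {
            "search_query": "what is litellm?",
            "data": [{"score": 0.47120637, "content": [{"text": "...", "type": "text"}]}],
        },
    }
    assert entry["vector_store_id"]
    # The query issued should match the search_query echoed in the response.
    assert entry["query"] == entry["vector_store_search_response"]["search_query"]
    for result in entry["vector_store_search_response"]["data"]:
        assert isinstance(result["score"], float)
        assert all(chunk["type"] == "text" for chunk in result["content"])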