
BedrockChat does not send amazon-bedrock-guardrailConfig when using Anthropic models #7541

Closed
5 tasks done
madmed88 opened this issue Jan 17, 2025 · 2 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@madmed88
Contributor

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain.js documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain.js rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

import { BedrockChat } from "@langchain/community/chat_models/bedrock";

const model = new BedrockChat({
  model: "anthropic.claude-3-5-sonnet-20240620-v1:0",
  region: process.env.BEDROCK_AWS_REGION,
  credentials: {
    accessKeyId: process.env.BEDROCK_AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.BEDROCK_AWS_SECRET_ACCESS_KEY!,
  },
  trace: "ENABLED",
  guardrailIdentifier: "your-guardrail-id",
  guardrailVersion: "1.0",
  guardrailConfig: {
    tagSuffix: "example",
    streamProcessingMode: "SYNCHRONOUS",
  },
});

Error Message and Stack Trace (if applicable)

No response

Description

The issue is caused by the early return at https://github.com/madmed88/langchainjs/blame/main/libs/langchain-community/src/utils/bedrock/index.ts#L287, which prevents the guardrail config from being added to the input at https://github.com/madmed88/langchainjs/blame/main/libs/langchain-community/src/utils/bedrock/index.ts#L314.
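A simplified sketch of the pattern described above (hypothetical names and values; the real adapter in `libs/langchain-community/src/utils/bedrock/index.ts` builds a much richer input body): the provider-specific branch returns before the shared guardrail block is reached.

```typescript
// Hypothetical, stripped-down version of the adapter's input preparation to
// illustrate the bug: the "anthropic" branch returns early, so the shared
// guardrail block at the bottom never runs for Anthropic models.
interface GuardrailConfig {
  tagSuffix: string;
  streamProcessingMode: string;
}

function prepareInput(
  provider: string,
  prompt: string,
  guardrailConfig?: GuardrailConfig
): Record<string, unknown> {
  const inputBody: Record<string, unknown> = { prompt };

  if (provider === "anthropic") {
    inputBody.max_tokens_to_sample = 2048;
    return inputBody; // early return: skips the guardrail logic below
  }

  // Shared guardrail logic, unreachable for the "anthropic" provider.
  if (guardrailConfig?.tagSuffix && guardrailConfig?.streamProcessingMode) {
    inputBody["amazon-bedrock-guardrailConfig"] = guardrailConfig;
  }
  return inputBody;
}
```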

System Info

This started with @langchain/community 0.2.9

@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Jan 17, 2025

dosubot bot commented Jan 17, 2025

Hey @madmed88! 👋 I'm here to help you with bug fixes, questions, and becoming a contributor.

The issue you're encountering with the BedrockChat model not sending the amazon-bedrock-guardrailConfig when using Anthropic models is due to the way the prepareInput and prepareMessagesInput methods are implemented in the BedrockLLMInputOutputAdapter class. Specifically, the code checks for the provider type and excludes the guardrailConfig for the "anthropic" provider. This behavior is intentional, as seen in the following code snippet:

if (provider === "anthropic") {
  inputBody.prompt = prompt;
  inputBody.max_tokens_to_sample = maxTokens;
  inputBody.temperature = temperature;
  inputBody.stop_sequences = stopSequences;
  // Guardrail config is not included for Anthropic
} else if (provider === "ai21") {
  // Other provider logic
}

// Guardrail config inclusion logic
if (
  guardrailConfig &&
  guardrailConfig.tagSuffix &&
  guardrailConfig.streamProcessingMode
) {
  inputBody["amazon-bedrock-guardrailConfig"] = guardrailConfig;
}

This means that the amazon-bedrock-guardrailConfig is not applied to Anthropic models, which is why it is not sent when using these models starting from version 0.2.9 of the @langchain/community package [1].
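One hedged way to address this (a sketch only, not necessarily the approach taken in the actual patch) is to attach the guardrail config before any provider-specific early return, so every provider's body receives it. All names here are hypothetical simplifications of the adapter code.

```typescript
// Hypothetical fix sketch: hoist the guardrail attachment into a helper that
// runs for every provider, ahead of the provider-specific early returns.
interface GuardrailConfig {
  tagSuffix: string;
  streamProcessingMode: string;
}

function withGuardrails(
  inputBody: Record<string, unknown>,
  guardrailConfig?: GuardrailConfig
): Record<string, unknown> {
  if (guardrailConfig?.tagSuffix && guardrailConfig?.streamProcessingMode) {
    inputBody["amazon-bedrock-guardrailConfig"] = guardrailConfig;
  }
  return inputBody;
}

function prepareInput(
  provider: string,
  prompt: string,
  guardrailConfig?: GuardrailConfig
): Record<string, unknown> {
  // Apply the shared guardrail logic up front, so the early return for
  // "anthropic" no longer drops it.
  const inputBody = withGuardrails({ prompt }, guardrailConfig);

  if (provider === "anthropic") {
    inputBody.max_tokens_to_sample = 2048;
    return inputBody; // guardrail config already attached above
  }
  return inputBody;
}
```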


@jacoblee93
Collaborator

Thanks for the PR! Fixed in #7542
