
[BUG] In version 0.3.1, when using an online large language model via one-api, the prompt template has no effect #5191

Open
xiliu opened this issue Jan 15, 2025 · 1 comment
Labels
bug Something isn't working

Comments


xiliu commented Jan 15, 2025

        # (excerpt from the webui chat handler; `openai`, `chat_box`, and
        # `api_address` are defined elsewhere in the project)
        chat_box.ai_say("Thinking...")
        text = ""
        started = False
        client = openai.Client(base_url=f"{api_address()}/chat", api_key="NONE")
        extra_body = dict(
            temperature=temperature,
            prompt_name=prompt_name,
        )
        stream = True
        params = dict(
            messages=messages,
            model=llm_model,
            stream=stream,  # TODO: xinference qwen-vl-chat breaks on streaming output; revisit after upstream updates
            extra_body=extra_body,
        )
        if stream:
            try:
                for d in client.chat.completions.create(**params):
                    message_id = d.message_id
                    metadata = {
                        "message_id": message_id,
                    }
                    # clear initial message
                    if not started:
                        chat_box.update_msg("", streaming=False)

The prompt_name here comes from prompt_settings.yaml, yet the response returned by the model bears no relation to that template. What is causing this?
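One way to rule out a client-side problem is to inspect the request body before it is sent. This sketch mirrors the snippet above; the `build_params` helper and the `"default"` template name are illustrative assumptions, not part of the project's actual code:

```python
def build_params(messages, llm_model, temperature, prompt_name, stream=True):
    """Assemble the request body the same way the snippet above does.

    extra_body carries server-specific fields (temperature, prompt_name)
    that the OpenAI client forwards verbatim in the JSON payload.
    """
    return dict(
        messages=messages,
        model=llm_model,
        stream=stream,
        extra_body=dict(
            temperature=temperature,
            prompt_name=prompt_name,
        ),
    )

# Inspect the payload before sending it; if prompt_name is present here,
# the problem is on the server side rather than in the webui code.
params = build_params(
    messages=[{"role": "user", "content": "hello"}],
    llm_model="qwen",
    temperature=0.7,
    prompt_name="default",
)
print(params["extra_body"]["prompt_name"])  # should print the template name
```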

@xiliu xiliu added the bug Something isn't working label Jan 15, 2025

dosubot bot commented Jan 15, 2025

Hey @xiliu! I'm here to help you with any bugs, questions, or contributions you have. Let's tackle this together!

The issue you're encountering seems to be related to the prompt_name parameter not functioning as expected when using the one-api online large language model. The prompt_name is intended to specify different prompt templates configured in the application, allowing you to select and use various prompt templates for generating responses. This parameter is typically used in the extra_body dictionary when making API calls to the completions endpoint.

If the prompt_name is not affecting the output as expected, it could be due to several reasons:

  1. Configuration Issue: Ensure that the prompt_name you are using is correctly defined in the prompt_settings.yaml or the relevant configuration file. The prompt template should be properly set up and accessible by the application.

  2. API Handling: Verify that the API endpoint you are using correctly interprets the prompt_name parameter. There might be an issue with how the API processes this parameter, especially if it's not directly related to the template configuration.

  3. Code Logic: Check the logic in the code where the prompt_name is used. Ensure that the parameter is being passed and utilized correctly in the API call. The function responsible for fetching and applying the prompt template should be correctly implemented.

  4. Version Compatibility: Since you mentioned version 0.3.1, there might be changes or bugs in this version affecting the functionality. Consider checking the release notes or any related issues in the repository for known bugs or updates.

If these steps do not resolve the issue, further investigation into the specific implementation details and configuration might be necessary.
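If the client is sending the name correctly, the usual culprit is the server-side lookup: many implementations fall back to a default template when the requested name is missing, so a typo or unregistered prompt_name fails silently. The following is a hypothetical sketch of such a lookup (the `PROMPT_SETTINGS` dict stands in for a parsed prompt_settings.yaml; the names and structure are assumptions, not Langchain-Chatchat's actual code):

```python
# Stand-in for the parsed contents of prompt_settings.yaml.
PROMPT_SETTINGS = {
    "llm_model": {
        "default": "{{input}}",
        "with_history": "History:\n{{history}}\n\nQuestion: {{input}}",
    }
}

def get_prompt_template(model_type: str, name: str) -> str:
    """Look up a template, silently falling back to 'default' if name is unknown.

    This silent fallback is why a misspelled or unconfigured prompt_name can
    appear to 'do nothing': the request succeeds but uses the default template.
    """
    templates = PROMPT_SETTINGS.get(model_type, {})
    if name not in templates:
        # Logging a warning here would make the failure visible instead of silent.
        return templates.get("default", "{{input}}")
    return templates[name]

print(get_prompt_template("llm_model", "with_history"))  # the named template
print(get_prompt_template("llm_model", "typo_name"))     # falls back to default
```

Checking the server logs (or adding a log line at the lookup point) for which template name is actually resolved is often the fastest way to confirm this.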

To continue talking to Dosu, mention @dosu.


