This repository has been archived by the owner on Oct 15, 2024. It is now read-only.
Upgrades the Tongyi LLM and ChatTongyi Model (langchain-ai#14793)
- **Description:** Fixes and upgrades for the Tongyi LLM and ChatTongyi model:
  - Fixed typos: it should be `Tongyi`, not `OpenAI`.
  - Fixed a bug in `stream_generate_with_retry`; it is now a real stream generator.
  - Fixed a bug in `validate_environment`: `dashscope_api_key` is now handled properly whether it is set via an environment variable or an initialization parameter.
  - Switched the `dashscope` response to incremental output by setting the `incremental_output` parameter, which eliminates the need for the prefix-removal trick.
  - Removed unused parameters such as `n` and `prefix_messages`.
  - Added a `_stream` method.
  - Added async method support: `_astream`, `_agenerate`, and `_abatch`.
- **Dependencies:** No new dependencies.
- **Tag maintainer:** @hwchase17

> PS: Some may be confused by the terms `dashscope`, `tongyi`, and `Qwen`:
> - `dashscope`: A platform for deploying LLMs and providing APIs to invoke them.
> - `tongyi`: A brand name, or overall term, for Alibaba Cloud's LLM/AI offerings.
> - `Qwen`: An open-sourced LLM deployed on `dashscope`.
>
> We use the `dashscope` SDK to interact with the `tongyi` `Qwen` LLM.

---------

Co-authored-by: Harrison Chase <hw.chase.17@gmail.com>
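The `incremental_output` change can be illustrated with a small sketch. This is not the actual `dashscope` SDK code; the generator names and sample chunks below are hypothetical, chosen only to show why the prefix-removal trick becomes unnecessary once the server sends only the new tokens in each chunk:

```python
def cumulative_stream():
    """Hypothetical stand-in for the old behavior: each chunk
    repeats the entire text generated so far."""
    for text in ["Hello", "Hello, wor", "Hello, world!"]:
        yield text


def strip_prefixes(chunks):
    """The prefix-removal trick: recover per-chunk deltas by
    stripping the previously seen text from each cumulative chunk."""
    seen = ""
    for chunk in chunks:
        yield chunk[len(seen):]
        seen = chunk


def incremental_stream():
    """Hypothetical stand-in for incremental_output=True: each
    chunk already contains only the newly generated text."""
    for text in ["Hello", ", wor", "ld!"]:
        yield text


# Both paths yield the same sequence of deltas, but the incremental
# stream needs no client-side bookkeeping.
deltas_old = list(strip_prefixes(cumulative_stream()))
deltas_new = list(incremental_stream())
assert deltas_old == deltas_new == ["Hello", ", wor", "ld!"]
```

With incremental output enabled, each delta can be wrapped directly into a streamed chunk, which is what makes a straightforward `_stream` generator implementation possible.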