-
I've currently got the following kind of custom LLM agent executor set up, and I'm wondering whether your streaming support can be used with this type of agent. Would greatly appreciate any pointers on how to implement it!

```python
# AGENT🤖
tool_names = [tool.name for tool in tools]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names,
)

# AGENT EXECUTOR🤖▶
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=True,
    memory=memBuffer,  # 🧠
    max_iterations=3,  # 🔄
)

# API STUFF
from pydantic import BaseModel

class HumanMsg(BaseModel):
    message: str
    chat_history: str | None = None

class AIMsg(BaseModel):
    status: str | None = "🤔"
    message: str
    # description: str | None = None
    # price: float
    # tax: float | None = None

# TOKEN COUNT🎫
from langchain.callbacks import get_openai_callback

# EXPORT FUNCTION FOR API
def bettyAI(input: HumanMsg):
    with get_openai_callback() as cb:
        result = agent_executor.run(input.message)
        print("$", cb.total_cost)
    return {
        "message": result,
    }
```
-
Hi! I will check this out. The way streaming works is that we use callback handlers: the async callback handler has a method that fires on every new token. In the case of agents, there are 2 scenarios: streaming everything the agent produces (including intermediate steps), or streaming only the final answer.

The first case is easy: we can simply reuse the available callback handler to stream everything. The second one is a bit trickier because we need to detect when the agent has reached the final answer. It will take some time, but if you already figure something out, you're more than welcome to contribute with a pull request!
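To make the first scenario concrete, here is a minimal standalone sketch of the callback-handler pattern. The names `StreamingHandler`, `consume`, and `demo` are hypothetical, and it deliberately avoids importing LangChain so it runs on its own; in practice you would subclass LangChain's `AsyncCallbackHandler` and register the handler on a streaming-enabled LLM.

```python
import asyncio

# Sketch only: in LangChain this class would subclass AsyncCallbackHandler
# (import path varies by version) instead of being a plain class.
class StreamingHandler:
    def __init__(self):
        self.queue = asyncio.Queue()

    async def on_llm_new_token(self, token: str, **kwargs) -> None:
        # LangChain invokes this hook once per generated token.
        await self.queue.put(token)

    async def on_llm_end(self, *args, **kwargs) -> None:
        # Sentinel so consumers know the stream has finished.
        await self.queue.put(None)

async def consume(handler: StreamingHandler) -> str:
    # An API layer would yield each token to the client (e.g. as
    # server-sent events); joining here keeps the demo self-contained.
    chunks = []
    while (token := await handler.queue.get()) is not None:
        chunks.append(token)
    return "".join(chunks)

async def demo() -> str:
    handler = StreamingHandler()
    for tok in ["Hello", ", ", "world"]:  # simulate the LLM emitting tokens
        await handler.on_llm_new_token(tok)
    await handler.on_llm_end()
    return await consume(handler)
```

The queue decouples the producer (the LLM callback) from the consumer (the API response), which is the usual shape for streaming endpoints.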
-
@ajndkr Really? Can I use it in the setup above, where the `agent_executor` runs inside a callback for token counts?

```python
def bettyAI(input: HumanMsg):
    with get_openai_callback() as cb:
        result = agent_executor.run(input.message)
        print("$", cb.total_cost)
    return {
        "message": result,
    }
```

I rewrote everything earlier but it failed, so I changed it back to the above. Is it possible to integrate streaming into this, or do I need to abandon `get_openai_callback` to achieve streaming?
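For what it's worth, callback handlers in LangChain compose: the cost tracker registered by `get_openai_callback` and a streaming handler both just observe LLM events, so they shouldn't conflict. A standalone sketch of that fan-out, with hypothetical `CostTracker`, `TokenStreamer`, and `fake_llm_run` names and no LangChain dependency:

```python
import asyncio

# Two handlers observing the same event stream: one counts tokens (the
# role get_openai_callback's cost tracker plays), one streams them.
class CostTracker:
    def __init__(self):
        self.total_tokens = 0

    async def on_llm_new_token(self, token, **kwargs):
        self.total_tokens += 1

class TokenStreamer:
    def __init__(self):
        self.streamed = []

    async def on_llm_new_token(self, token, **kwargs):
        self.streamed.append(token)

async def fake_llm_run(tokens, callbacks):
    # The real executor fans each event out to every registered handler,
    # so cost tracking and streaming can run side by side.
    for tok in tokens:
        for cb in callbacks:
            await cb.on_llm_new_token(tok)
    return "".join(tokens)

async def demo():
    cost, stream = CostTracker(), TokenStreamer()
    result = await fake_llm_run(["The", " answer"], callbacks=[cost, stream])
    return result, cost.total_tokens, stream.streamed
```

The takeaway is that each handler receives every event independently, so adding a streaming handler does not require dropping the cost-tracking one.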
-
@hgoona I've recently added support for Langchain Agents. You can check it out and let me know if it solves your use case.
Demo example: https://github.com/ajndkr/lanarky/tree/main/examples#zero-shot-agent
Code: https://github.com/ajndkr/lanarky/blob/main/examples/app/zero_shot_agent.py
Make sure you install the correct version!