Is your feature request related to a problem? Please describe.
Currently you can define three LLM options: FAST_LLM, SMART_LLM, and STRATEGIC_LLM.
It would be exceptionally useful to be able to change these at query time.
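For context, gpt-researcher reads these three options from configuration at startup, so every query served by a process shares the same models. A minimal sketch of the status quo (the specific model names are illustrative):

```python
import os

# Set once per process before the researcher starts; every query then
# uses the same three models until the process is restarted.
os.environ["FAST_LLM"] = "openai:gpt-4o-mini"
os.environ["SMART_LLM"] = "openai:gpt-4o"
os.environ["STRATEGIC_LLM"] = "openai:o1"
```

The feature request is essentially to move this choice from process-level environment configuration to a per-query parameter.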
Describe the solution you'd like
Open-webui has probably got the functionality hierarchy right: you define connections (i.e. endpoints), grant group rights against models, and users in those groups select a model at query time.
Once endpoints are defined, models are retrieved in real time via the API list functionality, and new models then become available for administrators to assign to rights groups.
For gpt-researcher, a further layer could mimic the rights groups as research groups - e.g. groups of fast_llm, smart_llm and strategic_llm models, from which the user selects the model to accomplish the task.
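To make the proposed layering concrete, here is a hypothetical sketch of what a research-group permission model might look like. None of this exists in gpt-researcher today; the group names, model names, and `allowed_models` helper are all illustrative:

```python
# Hypothetical mapping: rights groups -> the models their members may
# select for each research role (fast / smart / strategic).
RESEARCH_GROUPS = {
    "researchers": {
        "fast_llm": ["ollama/llama3.1", "openai/gpt-4o-mini"],
        "smart_llm": ["ollama/llama3.1"],
        "strategic_llm": ["ollama/llama3.1"],
    },
    "power-users": {
        "fast_llm": ["openai/gpt-4o-mini"],
        "smart_llm": ["openai/gpt-4o", "anthropic/claude-3-5-sonnet"],
        "strategic_llm": ["openai/o1"],
    },
}

def allowed_models(group: str, role: str) -> list[str]:
    """Models a member of `group` may pick for the given research role."""
    return RESEARCH_GROUPS.get(group, {}).get(role, [])
```

At query time the UI would only offer `allowed_models(user_group, "smart_llm")` and so on, which keeps paid-API access permissioned while hosted models stay open to everyone.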
I think the layering is important. We host LiteLLM, behind which we have models hosted on Ollama and paid API access to most providers.
Mostly our hosted models are sufficient, but it would be useful to include access to paid APIs on a permissioned basis.
The layering really helps, and it would demand a user management layer in gpt-researcher - I will put in a separate feature request for that.
Describe alternatives you've considered
It is possible to alias models in LiteLLM - aliasing three models for gpt-researcher there and then changing the underlying model in LiteLLM - but that does not give sufficient real-time flexibility.
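For readers unfamiliar with the alias workaround: LiteLLM's `model_list` maps a stable alias name to a concrete backend, so gpt-researcher can always target `fast_llm` etc. while the backend is swapped in LiteLLM. A sketch of such a list as a plain Python structure (backend model names are illustrative):

```python
# Alias -> backend mapping in the shape LiteLLM's model_list uses.
# Changing a backend means editing this config and reloading the proxy,
# which is why it cannot offer per-query flexibility.
model_list = [
    {"model_name": "fast_llm",
     "litellm_params": {"model": "ollama/llama3.1"}},
    {"model_name": "smart_llm",
     "litellm_params": {"model": "openai/gpt-4o"}},
    {"model_name": "strategic_llm",
     "litellm_params": {"model": "openai/o1"}},
]
```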
a) We're currently at the stage of considering these user stories:
Search Providers and Model Selection
For example: users can choose from 10 different models, 10 different retrievers, and hybrid retrieval
Implementation option: we can pass these parameters via the Headers object of the WebSocket request, and store multiple API keys on the server so that the relevant keys are used depending on the Search Provider or LLM the user passes to the Research Agent
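A minimal sketch of that implementation option, with no network code: the client sends its choices as request headers, and the server resolves the matching API key from its own store. The header names, key store, and `resolve_request` helper are all assumptions for illustration:

```python
# Server-side key store; the client never sends keys, only selections.
# Key values here are placeholders.
SERVER_KEYS = {"openai": "sk-placeholder", "anthropic": "sk-ant-placeholder"}

def resolve_request(headers: dict) -> dict:
    """Turn per-query headers into a research config with the right key."""
    provider, _, model = headers.get("x-smart-llm", "openai:gpt-4o").partition(":")
    return {
        "provider": provider,
        "model": model,
        "api_key": SERVER_KEYS[provider],          # key chosen server-side
        "retriever": headers.get("x-retriever", "tavily"),
    }

cfg = resolve_request({"x-smart-llm": "anthropic:claude-3-5-sonnet"})
```

In a real deployment these headers would arrive on the WebSocket upgrade request, and unknown providers or disallowed models would be rejected against the user's rights group.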