LLM backend signature not unified #44


Open
allenanie opened this issue Feb 25, 2025 · 3 comments

Comments

@allenanie
Collaborator

Currently the LLM backend __init__ signatures are not unified (this is mostly a legacy backward-compatibility issue with AutoGen).

For three backends, the signatures are:

class AutoGenLLM(AbstractModel):
    def __init__(self, config_list: List = None, filter_dict: Dict = None, reset_freq: Union[int, None] = None) -> None

class LiteLLM(AbstractModel):
    def __init__(self, model: Union[str, None] = None, reset_freq: Union[int, None] = None,
                 cache=True) -> None:

class CustomLLM(AbstractModel):
    def __init__(self, model: Union[str, None] = None, reset_freq: Union[int, None] = None,
                 cache=True) -> None:

Essentially, two of them allow the user to set model, but one doesn't. It is true that with AutoGen, the model setup/configuration comes from the config_list side -- but we could consider unifying the AutoGen interface by dynamically creating a config_list that grabs API keys from environment variables; see this convenience function:

import os
from typing import List

def auto_construct_oai_config_list_from_env() -> List:
    """
    Collect various API keys saved in the environment and return a format like:
    [{"model": "gpt-4", "api_key": xxx}, {"model": "claude-3.5-sonnet", "api_key": xxx}]

    Note this is a lazy function that defaults to gpt-4o and claude-3.5-sonnet.
    If you want to specify your own model, please provide an OAI_CONFIG_LIST in the environment or as a file.
    """
    config_list = []
    if os.environ.get("OPENAI_API_KEY") is not None:
        config_list.append(
            {"model": "gpt-4o", "api_key": os.environ.get("OPENAI_API_KEY")}
        )
    if os.environ.get("ANTHROPIC_API_KEY") is not None:
        config_list.append(
            {
                "model": "claude-3-5-sonnet-latest",
                "api_key": os.environ.get("ANTHROPIC_API_KEY"),
            }
        )
    return config_list

If you approve this solution, I can implement this! Otherwise, we can stay with the current design.
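To make the proposal concrete, here is a minimal sketch of what a unified AutoGenLLM __init__ could look like. AbstractModel is stubbed out and the attribute names are hypothetical; the real class has more machinery, so treat this as an illustration of the idea, not the actual implementation:

```python
import os
from typing import Dict, List, Union


class AbstractModel:
    # Stub standing in for the real base class, for illustration only.
    pass


def auto_construct_oai_config_list_from_env() -> List:
    # Same convenience function as above, reproduced so this sketch is self-contained.
    config_list = []
    if os.environ.get("OPENAI_API_KEY") is not None:
        config_list.append(
            {"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}
        )
    if os.environ.get("ANTHROPIC_API_KEY") is not None:
        config_list.append(
            {
                "model": "claude-3-5-sonnet-latest",
                "api_key": os.environ["ANTHROPIC_API_KEY"],
            }
        )
    return config_list


class AutoGenLLM(AbstractModel):
    def __init__(self, model: Union[str, None] = None,
                 config_list: List = None,
                 filter_dict: Dict = None,
                 reset_freq: Union[int, None] = None) -> None:
        # If no config_list is supplied, derive one from environment variables.
        if config_list is None:
            config_list = auto_construct_oai_config_list_from_env()
        # Translate the unified `model` argument into AutoGen's filter_dict form.
        if model is not None and filter_dict is None:
            filter_dict = {"model": [model]}
        self.config_list = config_list
        self.filter_dict = filter_dict
        self.reset_freq = reset_freq
```

With this shape, AutoGenLLM(model="gpt-4o") would match the LiteLLM and CustomLLM call style while keeping the existing config_list/filter_dict parameters for backward compatibility.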

@allenanie allenanie changed the title LLM signature not unified LLM backend signature not unified Feb 25, 2025
@allenanie
Collaborator Author

Ohhh I believe the current AutoGen model specification is through:

AutoGenLLM(filter_dict={"model": [model]})

Then if this is the case, a simple modification would work -- just rewrite the logic for filter_dict to allow it to accept model instead.

@chinganc
Collaborator

chinganc commented Feb 26, 2025

Sounds good. Let's make this change to make the API consistent.

@chinganc
Collaborator

@allenanie Let's resolve this in 0.1.3.7 update. Can you push a fix there?
