
Add model config for llama model so that users can override max_seq_len #1605

Open
tripokey wants to merge 1 commit into main from llama_model_config

Conversation

tripokey (Contributor) commented:

Resolves #1566
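The PR body is not shown here, but the title and the linked issue suggest the change replaces a hardcoded `MAX_SEQ_LEN` constant with a user-overridable model config field. A minimal sketch of that pattern follows; the names `ModelConfig` and `build_model` are illustrative assumptions, not the actual APIs touched by this PR.

```python
from dataclasses import dataclass

# Previously a hardcoded module-level constant (per issue #1566).
DEFAULT_MAX_SEQ_LEN = 2048


@dataclass
class ModelConfig:
    """Hypothetical model config exposing max_seq_len as an override point."""
    vocab_size: int = 32000
    # Users can now override the sequence length instead of being stuck
    # with the hardcoded default.
    max_seq_len: int = DEFAULT_MAX_SEQ_LEN


def build_model(config: ModelConfig) -> dict:
    # The model reads the limit from the config rather than a constant.
    return {"vocab_size": config.vocab_size, "max_seq_len": config.max_seq_len}


# Override at construction time:
cfg = ModelConfig(max_seq_len=4096)
model = build_model(cfg)
```

The design choice here is standard: lifting a constant into a config dataclass keeps the old default for existing callers while letting quantized-model users pass a larger or smaller limit explicitly.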

tripokey force-pushed the llama_model_config branch from c94581e to 20be5f4 on March 6, 2024 at 13:02
tripokey force-pushed the llama_model_config branch from 20be5f4 to b6a2eae on March 11, 2024 at 13:56
Development

Successfully merging this pull request may close this issue:

Hardcoded MAX_SEQ_LEN for quantized llama models (#1566)
1 participant