
Add character limits to edit prediction prompt generation #23814

Merged
mgsloan merged 2 commits into main from prompt-character-limits on Jan 29, 2025

Conversation

@mgsloan (Contributor) commented on Jan 29, 2025

Limits the size of the buffer excerpt and the size of the change history included in the edit prediction prompt (see the sketch after the release notes).

Release Notes:

  • N/A
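
As a rough illustration of what capping prompt sections by character count could look like, here is a minimal Rust sketch. The budget constants, the `truncate_chars` helper, and the prompt layout are assumptions for illustration, not the actual implementation from this PR:

```rust
// Minimal sketch, not Zed's actual code. Budgets and names are assumed.

/// Assumed character budget for the buffer excerpt.
const MAX_EXCERPT_CHARS: usize = 8192;
/// Assumed character budget for the change history.
const MAX_HISTORY_CHARS: usize = 4096;

/// Truncate `text` to at most `limit` characters, slicing on a char
/// boundary so multi-byte UTF-8 characters are never split.
fn truncate_chars(text: &str, limit: usize) -> &str {
    match text.char_indices().nth(limit) {
        Some((byte_ix, _)) => &text[..byte_ix],
        None => text, // already within the limit
    }
}

/// Assemble the prompt from the capped sections (layout is hypothetical).
fn build_prompt(excerpt: &str, history: &str) -> String {
    let excerpt = truncate_chars(excerpt, MAX_EXCERPT_CHARS);
    let history = truncate_chars(history, MAX_HISTORY_CHARS);
    format!("<events>\n{history}\n</events>\n\n<excerpt>\n{excerpt}\n</excerpt>")
}
```

Truncating on character boundaries rather than byte offsets avoids panics when slicing UTF-8 strings, which is why the sketch goes through `char_indices` instead of indexing directly.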

@cla-bot added the cla-signed label (The user has signed the Contributor License Agreement) on Jan 29, 2025
@mgsloan marked this pull request as draft on January 29, 2025 at 00:28
@mgsloan force-pushed the prompt-character-limits branch 2 times, most recently from b59e21e to bad3fa5, on January 29, 2025 at 19:29
mgsloan and others added 2 commits on January 29, 2025 at 14:46:
Co-authored-by: Richard <richard@zed.dev>
Co-authored-by: Joao <joao@zed.dev>
@mgsloan force-pushed the prompt-character-limits branch from bad3fa5 to e7b9d70 on January 29, 2025 at 21:46
@mgsloan marked this pull request as ready for review on January 29, 2025 at 21:48
@mgsloan enabled auto-merge (squash) on January 29, 2025 at 21:49
@mgsloan merged commit ade3e45 into main on Jan 29, 2025
11 checks passed
@mgsloan deleted the prompt-character-limits branch on January 29, 2025 at 21:56
mgsloan added a commit that referenced this pull request on Jan 30, 2025
mgsloan added a commit that referenced this pull request on Jan 30, 2025:

    Happily, this could be done by copy-modifying some of the code from #23814.
mgsloan added a commit that referenced this pull request on Jan 30, 2025:

    Realized that the logic in #23814 was more than was needed, and harder to
    maintain. Something like that could make sense when using the tokenizer
    and aiming to hit a token limit precisely. For edit predictions, however,
    it's more of a latency-and-expense vs. capability tradeoff, so such
    precision is unnecessary.

    Happily, this change didn't require much extra work; copy-modifying parts
    of that change was sufficient.

    Release Notes:

    - N/A
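
The simpler scheme that follow-up commit describes might look something like the sketch below: drop the oldest history entries until the total character count fits a budget, with no tokenizer in the loop. The function and names here are hypothetical:

```rust
// Hedged sketch of the simpler approach: trim by character count rather
// than by precise token count. All names are illustrative assumptions.

/// Keep the most recent events whose combined length fits within `budget`
/// characters; the oldest events are discarded first.
fn trim_history(mut events: Vec<String>, budget: usize) -> Vec<String> {
    let mut total: usize = events.iter().map(|e| e.chars().count()).sum();
    while total > budget && !events.is_empty() {
        let oldest = events.remove(0); // drop the oldest event first
        total -= oldest.chars().count();
    }
    events
}
```

A character budget is only a proxy for the model's true token limit, but since edit prediction already trades capability for latency and cost, an approximate cap suffices. A `VecDeque` would make the front removals O(1), though for a short history a `Vec` keeps the sketch simple.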