Error during code context and indexing (#32)
* fix: comment out unused token length calculation

* chore: updated changelog and removed notes
magesh-presidio authored Jan 30, 2025
1 parent 92f505c commit 7dcb828
Showing 3 changed files with 10 additions and 22 deletions.
8 changes: 7 additions & 1 deletion CHANGELOG.md
@@ -1,17 +1,23 @@
# Changelog

- ## [3.0.2] - YYYY-MM-DD
+ ## [3.0.2] - 2025-01-30

### Added

- Merged changes from Cline 3.2.0 (see [changelog](https://github.com/cline/cline/blob/main/CHANGELOG.md#320)). 
- Added copy to clipboard for HAI tasks
- Added ability to add custom instruction markdown files to the workspace
- Added ability to dynamically choose custom instructions while conversing
- Added inline editing (Ability to select a piece of code and edit it with HAI)

### Fixed

- Fixed AWS Bedrock session token preserved in the global state
- Fixed unnecessary LLM and embedding validation occurring on every indexing update
- Fixed issue causing the extension host to terminate unexpectedly
- Fixed LLM and embedding validation errors appearing on the welcome page post-installation
- Fixed embedding configuration incorrectly validating when an LLM model name is provided
- Fixed errors encountered during code context processing and indexing operations

## [3.0.1] - 2024-12-20

5 changes: 3 additions & 2 deletions src/integrations/code-prep/CodeContextAddition.ts
@@ -140,9 +140,10 @@ export class CodeContextAdditionAgent extends EventEmitter {

// TODO: Figure out the way to calculate the token based on the selected model
// currently `tiktoken` doesn't support models other than GPT.
+ // commented out the code since tokenLength is not in use

- const encoding = encodingForModel("gpt-4o")
- const tokenLength = encoding.encode(fileContent).length
+ // const encoding = encodingForModel("gpt-4o")
+ // const tokenLength = encoding.encode(fileContent).length

// TODO: `4096` is arbitrary, we need to figure out the optimal value for this, in case `getModel` returns `null`
const maxToken = llmApi.getModel().info.maxTokens ?? 4096 * 4 // 1 token ~= 4 char
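The TODO above notes that `tiktoken` only ships encodings for GPT models, which is why the token count was commented out rather than computed. As a minimal sketch (not part of this commit; it simply reuses the "1 token ~= 4 char" heuristic the file already applies to `maxToken`), a model-agnostic fallback estimate could look like:

```typescript
// Hypothetical helper, not in the repository: estimates token length without
// tiktoken by assuming roughly 4 characters per token, matching the heuristic
// used for the maxToken fallback above.
function estimateTokenLength(fileContent: string): number {
	// Round up so any non-empty content counts as at least one token.
	return Math.ceil(fileContent.length / 4)
}

// Example: an 11-character string estimates to 3 tokens.
const tokens = estimateTokenLength("hello world")
```

This trades accuracy for portability; a real implementation would dispatch to a per-model tokenizer when one is available and fall back to the heuristic otherwise.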
19 changes: 0 additions & 19 deletions webview-ui/src/components/settings/ApiOptions.tsx
@@ -650,17 +650,6 @@ const ApiOptions = ({
placeholder={`Default: ${azureOpenAiDefaultApiVersion}`}
/>
)}
- <p
- 	style={{
- 		fontSize: "12px",
- 		marginTop: 3,
- 		color: "var(--vscode-descriptionForeground)",
- 	}}>
- 	<span style={{ color: "var(--vscode-errorForeground)" }}>
- 		(<span style={{ fontWeight: 500 }}>Note:</span> HAI uses complex prompts and works best with Claude
- 		models. Less capable models may not work as expected.)
- 	</span>
- </p>
</div>
)}

@@ -784,10 +773,6 @@ const ApiOptions = ({
local server
</VSCodeLink>{" "}
feature to use it with this extension.{" "}
- <span style={{ color: "var(--vscode-errorForeground)" }}>
- 	(<span style={{ fontWeight: 500 }}>Note:</span> HAI uses complex prompts and works best with Claude
- 	models. Less capable models may not work as expected.)
- </span>
</p>
</div>
)}
@@ -845,10 +830,6 @@
style={{ display: "inline", fontSize: "inherit" }}>
quickstart guide.
</VSCodeLink>
- <span style={{ color: "var(--vscode-errorForeground)" }}>
- 	(<span style={{ fontWeight: 500 }}>Note:</span> HAI uses complex prompts and works best with Claude
- 	models. Less capable models may not work as expected.)
- </span>
</p>
</div>
)}