diff --git a/DocGpt.Vsix/source.extension.vsixmanifest b/DocGpt.Vsix/source.extension.vsixmanifest
index 3fdb084..23d9cb7 100644
--- a/DocGpt.Vsix/source.extension.vsixmanifest
+++ b/DocGpt.Vsix/source.extension.vsixmanifest
@@ -1,18 +1,19 @@
 
-
+
 Doc GPT
 Adds XML Documentation derived from an OpenAI GPT endpoint to .NET members. Can be used with either Azure OpenAI service or OpenAI.com account.
 https://github.com/bc3tech/docgpt
 LICENSE
 README.md
 icon.jpg
+demo.gif
 dotnet, gpt, llm, ai, xmldoc, xml, documentation, openai, azure
 true
-
+
 amd64
@@ -21,7 +22,7 @@
 amd64
-
+
 arm64
diff --git a/README.md b/README.md
index c9e4895..f8de1aa 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@ The deployment name is found in your Deployment list in Azure AI Studio:
 
 ## Usage
 
-The extension ships with two main components:
+The extension ships with the following components:
 
 1. Analyzer which finds undocumented members
 1. Code fix which generates documentation for the member
 
@@ -40,15 +40,15 @@ The analyzer details can be found in the [Shipped](DocGpt.CodeFixes/AnalyzerRele
 
 The code fix also reacts to the built-in XML Documentation diagnostic ([CS1591](https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-messages/cs1591))
 
-> Note: If you'd rather not have the diagnostic fire, you can disable it in your editorconfig, global suppression file, or inline. Then, you can use the refactor (will only fix a single member at a time) to generate documentation.
+> Note: If you'd rather not have the diagnostic fire, you can disable it in your .editorconfig, global suppression file, or inline. Then, you can use the refactor (will only fix a single member at a time) to generate documentation.
 
 ![DEMO](docs/img/demo.gif) ([demo video](docs/img/demo.mp4))
 
 ## Notes
 
-Sending code to GPT can *very* quickly run into token throttling based on endpoint/account configurations. Additionally, please be conscious of the fact that you are very often charged **per token** sent to the API. Sending large code files to the API can quickly run up a large bill.
+Sending code to GPT can *very* quickly run into token throttling based on endpoint/account configurations. Additionally, please be conscious of the fact that you are charged **per token** sent to the API. Sending large code files to the API can quickly run up a large bill.
 
-Using the code fixer in a "fix all" scenario results in numerous back-to-back calls to the GPT endpoint. This can request-based throttling. If you encounter this, please try again in a few minutes.
+Using the code fixer in a "fix all" scenario results in numerous back-to-back calls to the defined endpoint. This can result in request-based throttling. If you encounter this, please try again in a few minutes.
 
 ## FAQ