Home
Welcome to the HandyLLM wiki! Click on the pages panel on the right to jump to what you need!
Install the package from PyPI:
pip3 install handyllm
or install from the GitHub repo to get the latest updates:
pip3 install git+https://github.com/atomiechen/handyllm.git
Install the HandyLLM VSCode extension from the marketplace. Please check the HandyLLM VSCode extension page for details.
Create a text file named try.hprompt with the following content (replace <YOUR_OPENAI_API_KEY> with your own API key):
Caution
This is only a minimal working example, and we do NOT recommend storing your API key in the hprompt file. Save it to a separate credential file instead (see below).
---
model: gpt-4o
temperature: 0.4
api_key: <YOUR_OPENAI_API_KEY>
---
$user$
How to speed up my prompt engineering iteration?
Now run it with the CLI:
handyllm hprompt try.hprompt
The result will be dumped to stderr, and you will see it in the same hprompt format.
You can also run it programmatically:
from handyllm import hprompt

# load the prompt file, run it, and print the result in hprompt format
my_prompt = hprompt.load_from('try.hprompt')
result_prompt = my_prompt.run()
print(result_prompt.dumps())
You can specify more arguments in the frontmatter, and add variables in the content, like this:
---
# frontmatter data
model: gpt-3.5-turbo
temperature: 0.5
meta:
  credential_path: .env
  var_map_path: substitute.txt
  output_path: out/%Y-%m-%d/result.%H-%M-%S.hprompt
---
$system$
You are a helpful assistant.
$user$
Your current context:
%context%
Please follow my instructions:
%instructions%
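At run time, the %context% and %instructions% placeholders are filled in from the var_map file. A minimal sketch of the substitution idea in plain Python (the variable names and content here are hypothetical, and this is not HandyLLM's internal implementation):

```python
# Hypothetical sketch of %variable% substitution; not HandyLLM internals.
content = (
    "Your current context:\n"
    "%context%\n"
    "Please follow my instructions:\n"
    "%instructions%"
)
var_map = {
    "%context%": "A Python project that calls the OpenAI API.",
    "%instructions%": "Summarize the project structure.",
}
for placeholder, value in var_map.items():
    content = content.replace(placeholder, value)
print(content)
```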
Check this page for details.
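Among the frontmatter arguments, the placeholders in output_path (%Y, %H, etc.) follow strftime conventions, which suggests each run gets saved to a timestamped file (an assumption based on the pattern shown). You can preview how such a pattern expands with plain Python:

```python
from datetime import datetime

# Expand the strftime-style placeholders used in the output_path pattern.
pattern = "out/%Y-%m-%d/result.%H-%M-%S.hprompt"
path = datetime(2024, 7, 1, 9, 30, 5).strftime(pattern)
print(path)  # out/2024-07-01/result.09-30-05.hprompt
```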