Reviewed the article. In general, good job, but there are a few things I would add:
make the article more specific and show how each approach affected our test results
keep references to the sources where some of the statements were validated (other articles, the LLMs' documentation, etc.)
Regarding the first option, I'm not sure the current size of the data set is enough to demo the difference, taking into account the randomness we observed, but at least it would help to show the results for one generic prompt vs. the results for a separate prompt per LLM.
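To make the comparison concrete, a minimal sketch of what "one generic prompt vs. a separate prompt per LLM" could look like in the test harness. All prompt text, model names, and the `prompt_for` helper are hypothetical illustrations, not taken from the AutoScan code:

```python
# Hypothetical sketch: selecting one generic prompt vs. a model-specific
# prompt per LLM for a test run. Names and prompt text are illustrative.

GENERIC_PROMPT = "Summarize the scan findings and list any issues."

MODEL_PROMPTS = {
    # Claude's docs suggest XML-style structure often works well.
    "claude": (
        "<task>Summarize the scan findings.</task>\n"
        "<format>Return a bulleted list of issues.</format>"
    ),
    # GPT models are commonly steered with an explicit role instruction.
    "gpt": (
        "You are a code-scan analyst. Summarize the findings "
        "and return a numbered list of issues."
    ),
}


def prompt_for(model: str, use_generic: bool = False) -> str:
    """Pick the prompt variant for a given test run."""
    if use_generic:
        return GENERIC_PROMPT
    # Fall back to the generic prompt for models without a tailored one.
    return MODEL_PROMPTS.get(model, GENERIC_PROMPT)
```

The same test set would then be run twice per model, once with `use_generic=True` and once with the tailored prompt, and the results compared side by side.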
Read the prompt engineering overviews in the Claude and GPT documentation:
Try the different techniques on our prompt for the AutoScan project and document the results.
Reference this in your previous article for #4.