From 76eeeda26f7ea74ba49fe07f0af884d29bb62210 Mon Sep 17 00:00:00 2001
From: eLQeR
Date: Sat, 12 Oct 2024 14:26:13 +0300
Subject: [PATCH] add tag into second post

---
 blog/content/posts/my-second-post/my-second-post.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/blog/content/posts/my-second-post/my-second-post.md b/blog/content/posts/my-second-post/my-second-post.md
index 4314bad..b80e53c 100644
--- a/blog/content/posts/my-second-post/my-second-post.md
+++ b/blog/content/posts/my-second-post/my-second-post.md
@@ -55,7 +55,8 @@ Meta-prompting is an advanced technique in prompt engineering that goes beyond m
 
 When applied in Claude LLM, meta-prompting proved to be more effective than in GPT models. It significantly improved test case outcomes by making instructions simpler and clearer for the model to understand. Claude's advanced processing capabilities allowed it to better interpret and act on the meta-prompts, leading to more accurate and consistent results.
 
 **Example how meta-prompting optimized Claude outputs:**
-![img_4.png](https://ibb.co/YkcwBwS)
+![img_4.png](https://i.ibb.co/7WnLtLD/img-4.png)
+img-4
 
 However, in our specific case, meta-prompting did not lead to the exceptional results we had hoped for. While it is a valuable technique, its effectiveness can vary depending on the complexity of the task and the model's inherent capabilities.
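
For readers who want a concrete picture of the meta-prompting flow the patched post describes, below is a minimal sketch using the Anthropic Python SDK: a first call asks Claude to rewrite a rough task description into a clearer prompt, and a second call runs that optimized prompt. The helper names, the model string, and the exact rewriting instruction are illustrative assumptions, not the setup used in the post.

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment.
client = anthropic.Anthropic()

MODEL = "claude-3-5-sonnet-20240620"  # assumed model; swap in whichever Claude model you use


def meta_prompt(task_description: str) -> str:
    """Stage 1: ask Claude to turn a rough task description into a clearer prompt."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=500,
        messages=[{
            "role": "user",
            "content": (
                "Rewrite the following task description as a clear, step-by-step prompt "
                "for a language model. Return only the improved prompt.\n\n"
                f"Task: {task_description}"
            ),
        }],
    )
    return response.content[0].text


def run_with_meta_prompt(task_description: str) -> str:
    """Stage 2: feed the optimized prompt back to Claude to produce the final answer."""
    optimized = meta_prompt(task_description)
    response = client.messages.create(
        model=MODEL,
        max_tokens=1000,
        messages=[{"role": "user", "content": optimized}],
    )
    return response.content[0].text


if __name__ == "__main__":
    print(run_with_meta_prompt("Generate test cases for a login form"))
```

The design choice illustrated here is simply that the model first restructures the instructions before answering; whether this extra round trip pays off depends, as the post notes, on the complexity of the task and the model's own capabilities.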