Inquiry on Performance Drop Using MedicalPrompt #57

Open
KimmDD opened this issue Feb 6, 2025 · 0 comments
KimmDD commented Feb 6, 2025

Hi there,

I’m Mengdi, currently working on my thesis. My dataset consists of 29 clinical cases from the Merck Manual, including patient information, diagnosis questions, and correct answers. Here are my Essential Differential Diagnosis results using GPT-4o:

- Baseline evaluation (temperature 0, 3 runs): mean F1 = 0.6920
- Baseline evaluation (default temperature, 3 runs): mean F1 = 0.6802
- Prompt engineering (no embeddings): mean F1 = 0.6080
- Prompt engineering (with embeddings): mean F1 = 0.6061
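For clarity on the metric: F1 is computed per case between the predicted and reference diagnoses, then averaged over the 29 cases and over runs. A minimal set-overlap version, simplified relative to my actual scoring, looks like this:

```python
# Sketch of the per-case metric (simplified relative to my actual
# scoring: string normalization/matching of diagnoses is omitted).
def diagnosis_f1(predicted: set[str], reference: set[str]) -> float:
    tp = len(predicted & reference)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)

# Mean F1 = average of diagnosis_f1 over the 29 cases, then over runs.
```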

For the baseline evaluation, I provided only the patient information and asked GPT-4o to generate responses. I tested this at both temperature 0 and the default temperature.
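Concretely, the baseline call looks roughly like this (a simplified sketch; the system prompt and case formatting are placeholders rather than my exact prompts):

```python
# Simplified sketch of one baseline call (the system prompt and case
# formatting here are placeholders, not my exact prompts).
from openai import OpenAI

client = OpenAI()

def baseline_answer(case_text: str, temperature: float | None = None) -> str:
    # Pass temperature=0 for the deterministic runs; leave it unset
    # to use the API default.
    kwargs = {} if temperature is None else {"temperature": temperature}
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a clinical diagnosis assistant."},
            {"role": "user", "content": f"Patient information:\n{case_text}\n\n"
                                        "List the essential differential diagnoses."},
        ],
        **kwargs,
    )
    return response.choices[0].message.content
```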

For prompt engineering, I introduced few-shot examples and used chain-of-thought (CoT) reasoning to guide the model’s responses (F1 = 0.6080). I then incorporated embeddings, retrieving the top 3 most similar cases via cosine similarity before prompting the model with CoT (F1 = 0.6061).
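The retrieval step is essentially the following (again a sketch: the embedding model name is a placeholder, and `candidates` stands for my pool of example cases):

```python
# Sketch of the retrieval step (the embedding model name is a
# placeholder; `candidates` stands for my pool of example cases).
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.asarray(resp.data[0].embedding)

def top_k_similar(query: str, candidates: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    sims = []
    for text in candidates:
        v = embed(text)
        # Cosine similarity between the query case and each candidate.
        sims.append(float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))))
    top = np.argsort(sims)[::-1][:k]
    return [candidates[i] for i in top]
```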

However, despite adding structured reasoning and dynamic few-shot prompting, the performance decreased. Do you have any insights on why this might be happening?

Looking forward to your thoughts!
Best,
Mengdi
