Replies: 1 comment
Interesting idea :) I will look into it once I have this version stable 👍
Describe the solution you'd like
Include a function that tags data returned by the AI with low confidence on the document, allowing manual review of any document the AI was not confident about. Or, once it is possible to have data added to custom fields, add the score value to a custom field.
This prompt seems to return reasonably interesting data: "Provide a score of 1-100 indicating the confidence level in the accuracy of the analyzed data from the document"
Describe alternatives you've considered
An alternative or addition could be to use two AIs. The primary is Ollama, but if the score is low, re-run with an advanced OpenAI model. Or use a simpler OpenAI model as the primary and a more advanced OpenAI model as the secondary. This keeps cost low for most documents while using a stronger AI for more complex items.
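To illustrate, the two-tier idea above could be sketched roughly like this. This is a hypothetical outline, not an actual implementation: `query_primary` and `query_secondary` are placeholder callables standing in for real model calls (e.g. a local Ollama model and a stronger OpenAI model), and the threshold value is an assumption.

```python
# Hypothetical sketch of a confidence-based two-tier pipeline.
# query_primary / query_secondary are placeholders for real model calls.

CONFIDENCE_PROMPT = (
    "Provide a score of 1-100 indicating the confidence level "
    "in the accuracy of the analyzed data from the document"
)

def analyze(document, query_primary, query_secondary, threshold=70):
    """Run the cheap model first; escalate only when confidence is low."""
    data, score = query_primary(document)
    if score < threshold:
        # Low confidence: re-run with the stronger (more expensive) model.
        data, score = query_secondary(document)
    # Tag the result so low-confidence documents can be reviewed manually.
    return {"data": data, "score": score, "needs_review": score < threshold}

# Example with stubbed models: the cheap model is unsure, so the
# pipeline escalates to the stronger model.
cheap = lambda doc: ({"title": "Invoice"}, 55)
strong = lambda doc: ({"title": "Invoice 2024-03"}, 92)
result = analyze("doc.pdf", cheap, strong)
```

Most documents would stop after the first call; only the uncertain ones pay for the second model.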