Use a fine-tuned BERT for identification of tricky introductions #72
Labels: enhancement
Currently, lengthier introductions can be missed by the existing algorithms. One approach would be to fine-tune a BERT model on a small training set of positive samples (complicated introductions), then use it to predict new candidate introductions. The identified complicated introductions would then be added to a new training set, the model re-trained/fine-tuned, and the process repeated until a couple of thousand introductions have been found. A sketch of this loop is included below.
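As a minimal sketch of such a bootstrapping loop, assuming a binary sentence classifier built with the Hugging Face `transformers` library. The helper functions (`load_seed_set`, `load_unlabeled_sentences`, `manually_review`), the model name, and the confidence threshold are hypothetical placeholders, not existing code in this repository:

```python
import torch
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "bert-base-uncased"  # placeholder; any BERT checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)


class TextDataset(torch.utils.data.Dataset):
    """Wraps (text, label) pairs for the Trainer."""

    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


def fine_tune(train_texts, train_labels):
    """Fine-tune a fresh binary classifier on the current training set."""
    model = AutoModelForSequenceClassification.from_pretrained(
        MODEL_NAME, num_labels=2
    )
    args = TrainingArguments(
        output_dir="bert-introductions",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    )
    Trainer(
        model=model, args=args, train_dataset=TextDataset(train_texts, train_labels)
    ).train()
    return model


def predict_candidates(model, texts, threshold=0.9):
    """Return texts the model flags as introductions with high confidence."""
    model.eval()
    flagged = []
    with torch.no_grad():
        for text in texts:
            inputs = tokenizer(text, truncation=True, return_tensors="pt")
            probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
            if probs[1] > threshold:  # class 1 = "complicated introduction"
                flagged.append(text)
    return flagged


# Bootstrapping: after each round, manually verify the flagged candidates,
# add them to the training set, and re-train until a few thousand are found.
train_texts, train_labels = load_seed_set()   # hypothetical helper
unlabeled = load_unlabeled_sentences()        # hypothetical helper
for _ in range(5):
    model = fine_tune(train_texts, train_labels)
    candidates = predict_candidates(model, unlabeled)
    verified = manually_review(candidates)    # hypothetical human-in-the-loop step
    train_texts += verified
    train_labels += [1] * len(verified)
```

The manual review step matters here: without it, self-training on the model's own high-confidence predictions risks reinforcing its early mistakes.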