
Use a fine-tuned BERT for identification of tricky introductions #72

Closed
MansMeg opened this issue Aug 27, 2021 · 2 comments
Labels
enhancement New feature or request

Comments


MansMeg commented Aug 27, 2021

Right now, longer introductions can be missed by the current algorithms. One way to address this would be to fine-tune a BERT model on a small training set of positive samples (complicated introductions), then use it to predict new candidate introductions. Based on these predictions, we would add the identified complicated introductions to the training set, re-train/fine-tune the model, and continue iterating like that until we have found a couple of thousand introductions.
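
A minimal sketch of that bootstrapping loop could look like the following (this is illustrative, not this repository's code). It assumes a binary HuggingFace sequence classifier; the `KB/bert-base-swedish-cased` checkpoint, the 0.9 confidence threshold, and the `load_seed_set`, `load_unlabelled`, and `manually_verify` helpers are all hypothetical placeholders.

```python
# Sketch of the iterative fine-tune -> predict -> verify -> re-train loop.
import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

CHECKPOINT = "KB/bert-base-swedish-cased"  # assumption: any suitable BERT works

class IntroDataset(Dataset):
    """Tokenised (text, label) pairs; label 1 = introduction."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

def fine_tune(texts, labels):
    """Fine-tune a fresh classifier on the current training set."""
    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    model = AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINT, num_labels=2
    )
    args = TrainingArguments(output_dir="intro-clf", num_train_epochs=3,
                             per_device_train_batch_size=16)
    Trainer(model=model, args=args,
            train_dataset=IntroDataset(texts, labels, tokenizer)).train()
    return tokenizer, model

def predict_intros(tokenizer, model, candidates, threshold=0.9):
    """Keep only candidates the model is confident are introductions."""
    model.eval()
    enc = tokenizer(candidates, truncation=True, padding=True,
                    return_tensors="pt").to(model.device)
    with torch.no_grad():
        probs = model(**enc).logits.softmax(dim=-1)[:, 1]
    return [c for c, p in zip(candidates, probs.tolist()) if p >= threshold]

# Bootstrapping: seed set -> predict -> verify by hand -> grow -> re-train.
texts, labels = load_seed_set()                  # hypothetical helper
for _ in range(5):
    tokenizer, model = fine_tune(texts, labels)
    found = predict_intros(tokenizer, model, load_unlabelled())  # hypothetical
    found = manually_verify(found)               # hypothetical: human check
    texts += found
    labels += [1] * len(found)
```

Keeping a manual verification step in each round is what stops the loop from reinforcing the model's own high-confidence mistakes.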

ninpnin added the enhancement (New feature or request) label Nov 3, 2021

MansMeg commented May 4, 2022

This is now being done as a master's thesis.

MansMeg closed this as completed May 4, 2022

MansMeg commented May 5, 2022

It is a little too far from my own work, but the thesis authors might find the same technique useful.
