Successfully fine-tuned a DistilBERT transformer model that predicts the overall sentiment of a piece of financial news with roughly 81.5% accuracy.
Updated May 6, 2024 - Jupyter Notebook
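The entries in this listing all follow the same basic recipe: load a pretrained DistilBERT checkpoint, attach a classification head, and fine-tune on the labelled dataset. A minimal sketch of that workflow using the Hugging Face `transformers` TensorFlow API might look like the following; the three-way sentiment label set, hyperparameters, and checkpoint name are illustrative assumptions, not details taken from any of these repos:

```python
def finetune_distilbert(train_texts, train_labels, num_labels=3, epochs=2):
    """Fine-tune DistilBERT for sequence classification (illustrative sketch).

    Assumes `tensorflow` and `transformers` are installed; the checkpoint,
    batch size, and learning rate below are typical defaults, not values
    reported by the repos above.
    """
    import tensorflow as tf
    from transformers import (DistilBertTokenizerFast,
                              TFDistilBertForSequenceClassification)

    tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
    model = TFDistilBertForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=num_labels)

    # Tokenize to fixed-length tensors and wrap in a batched tf.data pipeline.
    enc = tokenizer(train_texts, padding=True, truncation=True,
                    max_length=128, return_tensors="tf")
    ds = tf.data.Dataset.from_tensor_slices((dict(enc), train_labels)).batch(16)

    # The model outputs raw logits, so the loss must be told from_logits=True.
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"])
    model.fit(ds, epochs=epochs)
    return tokenizer, model


# Hypothetical label order for a three-way financial-sentiment task.
SENTIMENT_LABELS = ["negative", "neutral", "positive"]

def logits_to_label(logits, labels=SENTIMENT_LABELS):
    """Map one row of output logits to its highest-scoring label name."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return labels[best]
```

At inference time, a text is tokenized the same way, passed through the model, and its logits row is mapped to a label with `logits_to_label`.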
Successfully fine-tuned a pretrained DistilBERT transformer model that classifies social media text into one of four cyberbullying categories (ethnicity/race, gender/sexual, religion, or not cyberbullying) with 99% accuracy.
This repo contains code for toxic comment classification using deep learning models based on recurrent neural networks and transformers like BERT. The goal is to detect and classify toxic comments in online conversations using Jigsaw's Toxic Comment Classification dataset.
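Unlike the single-label tasks above, Jigsaw's Toxic Comment Classification dataset is multi-label: each comment can carry any subset of six toxicity labels, so the output head uses an independent sigmoid per label rather than a softmax. A small sketch of the resulting thresholding step (the 0.5 cutoff is an assumed default, not taken from this repo):

```python
# The six label columns defined by Jigsaw's Toxic Comment Classification dataset.
JIGSAW_LABELS = ["toxic", "severe_toxic", "obscene",
                 "threat", "insult", "identity_hate"]

def multilabel_predict(probabilities, threshold=0.5, labels=JIGSAW_LABELS):
    """Return every label whose sigmoid probability meets the threshold.

    `probabilities` is one row of per-label sigmoid outputs, in the same
    order as `labels`; an empty result means "not toxic".
    """
    return [name for name, p in zip(labels, probabilities) if p >= threshold]
```

A comment scored `[0.9, 0.1, 0.6, 0.0, 0.7, 0.2]` would therefore be tagged `toxic`, `obscene`, and `insult`.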
Successfully developed a fine-tuned BERT transformer model that classifies symptoms to their corresponding diseases with an accuracy of 89%.