This demo shows the functionality of the Voximplant instant messaging SDK, including silent supervision by a bot.
Updated May 19, 2022 · TypeScript
AntiToxicBot is a bot that detects toxic users in a chat using data science and machine learning techniques. The bot warns admins about toxic users, and admins can also allow the bot to ban them.
A simple Python program that uses a machine learning model to detect toxicity in tweets, built with Flask.
A supervised learning based tool to identify toxic code review comments
A simple Python program that uses a machine learning model to detect toxicity in tweets, with a GUI in Tkinter.
NLP deep learning model for multilingual toxicity detection in text 📚
Telegram bot that detects toxic comments based on Perspective API
This library detects toxicity in a text string and returns the toxicity percentage along with the toxic words found.
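A minimal sketch of what such an interface could look like, assuming a simple word-list approach: the function name, word list, and return shape here are illustrative, not the library's actual API.

```python
import re

# Illustrative word list, not the library's real vocabulary.
TOXIC_WORDS = {"idiot", "stupid", "trash"}

def analyze(text: str) -> dict:
    """Return the toxicity percentage and the toxic words found in text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    found = [t for t in tokens if t in TOXIC_WORDS]
    percent = 100 * len(found) / len(tokens) if tokens else 0.0
    return {"toxicity_percent": round(percent, 1), "toxic_words": found}

print(analyze("You are a stupid idiot"))
# {'toxicity_percent': 40.0, 'toxic_words': ['stupid', 'idiot']}
```

Real libraries typically go beyond exact matching (handling obfuscation like "st*pid" and inflected forms), but the percentage-plus-matches return shape is the core idea.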
A trained deep learning model that predicts different levels of comment toxicity, such as threats, obscenity, insults, and identity-based hate.
This work focuses on developing machine learning models, in particular neural networks and SVMs, that can detect toxicity in comments. Topics covered: a) cost-sensitive learning, b) class imbalance.
The Toxic Comment Detector is a tool powered by Hugging Face’s unitary/toxic-bert model, designed to identify harmful, offensive, or abusive language in real time. Built with a ReactJS frontend and a Flask backend, it provides detailed insights into toxicity levels, enabling safer online environments.
This repository features an LLM-based moderation system designed for game audio and text chats. By implementing toxicity moderation, it enhances the online interaction experience for gamers, improving player retention by minimizing negative experiences in games such as Valorant and Overwatch, and ultimately reducing manual moderation costs.
An application that analyses toxicity in social media using BERT and context analysis, with the aim of reducing toxicity.
An Explainable Toxicity detector for code review comments. Published in ESEM'2023
Comparing Toxic Texts with Transformers
BadFilter.js to the rescue! We’ve crafted a supercharged, customizable solution that helps developers filter out inappropriate words like a pro. Let's make the internet a friendlier place one word at a time!
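A customizable word filter of this kind can be sketched in a few lines. The class and method names below are hypothetical stand-ins, not BadFilter.js's actual API; the sketch shows the two essentials such filters share: a user-extendable word list and a masking pass over the text.

```python
import re

class WordFilter:
    """Hypothetical sketch of a customizable profanity filter."""

    def __init__(self, words=None):
        self.words = set(w.lower() for w in (words or []))

    def add(self, *words):
        """Extend the filter with custom words."""
        self.words.update(w.lower() for w in words)

    def clean(self, text: str) -> str:
        """Mask every filtered word, keeping its first letter."""
        def mask(m):
            w = m.group(0)
            if w.lower() in self.words:
                return w[0] + "*" * (len(w) - 1)
            return w
        return re.sub(r"[A-Za-z']+", mask, text)

f = WordFilter()
f.add("darn", "heck")
print(f.clean("What the heck, darn it"))
# What the h***, d*** it
```

Production filters add leetspeak normalization, substring and fuzzy matching, and allow-lists to avoid false positives (the classic "Scunthorpe problem"), but the add-then-clean flow is the same.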
Toxicity detection in a conversation or in phrases.
An anti-toxicity Discord bot to ease moderation.
Detecting Toxic comments using machine learning
Measure and mitigate gender bias in Danish toxicity classifiers and sentiment analysis models.