A social media platform that you can trust.
Explore the docs » · Report Bug · Request Feature
Key Contributors: Gaurav Agrawal, Shrid Pant and Tarun Dhankhar.
Cyberbullying has risen sharply over the years, especially among teens. And while the traumatic experiences of its victims are well known, social media giants have done little to act preemptively. In a large-scale application, merely acting on reported posts is not nearly sufficient. It is absolutely necessary to participate proactively in the prevention of cyberbullying.
Mindhunters is a state-of-the-art LSTM-based NLP algorithm around which this social media platform is built. It provides sophisticated detection of text that is violent, offensive, sexist, racist, discriminatory, or derogatory in nature. Mindhunters generates a score for each post, which affects the reputation of its author. The generated scores are used to alert the social media platform, which may take appropriate action against the post and/or user.
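The model described above can be sketched as a minimal Keras binary classifier: an embedding layer feeding an LSTM, with a single sigmoid output producing the per-post score. The vocabulary size, sequence length, and layer widths below are illustrative assumptions, not the project's actual hyperparameters.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

VOCAB_SIZE = 10_000  # assumed vocabulary size
SEQ_LEN = 50         # assumed padded sequence length

# Tokenized posts go in; a single sigmoid score in [0, 1] comes out.
model = Sequential([
    Embedding(input_dim=VOCAB_SIZE, output_dim=64),
    LSTM(64),
    Dense(1, activation="sigmoid"),  # the score used for moderation
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Untrained demo: score a batch of two dummy (already tokenized) posts.
dummy_batch = np.zeros((2, SEQ_LEN), dtype="int32")
scores = model.predict(dummy_batch, verbose=0)
```

In practice the model would be trained on labeled posts before its scores are meaningful; here it only demonstrates the input/output shape of the classifier.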
This project was originally created during our participation in UB Hacking 2020.
Note: Due to faulty datasets, we have been unable to train an accurate model. While the social media site and its accompanying Mindhunters algorithm still run, they have produced erroneous results in many circumstances. Until we obtain a satisfactory dataset and can push promising updates to the Mindhunters algorithm, please refrain from relying on this project. The project has not been deprecated, in the interest of open-source collaboration. Pull requests that resolve this (or any!) issue are welcome!
The server-side application was built with Flask, Keras, and NLTK. SQLite3 was employed for database management, and HTML, CSS, and JavaScript for the client-side application. Mindhunters was made possible by many open-source libraries and frameworks.
A score is associated with each post that a user makes. The score is assigned by the sigmoid function in the output layer of Mindhunters. If the score is greater than the threshold (θ), the post is considered inappropriate. θ may be tuned between 0 and 1, according to the required sensitivity. Each user's reputation is determined by f(x), where f is a function incorporating the individual scores of all of that user's posts. Posts that are marked inappropriate decrease the user's reputation, while the other posts increase it.
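The thresholding and reputation logic can be illustrated with a small sketch. The particular form of f below (flagged posts subtract their score, clean posts add 1 − score) is a hypothetical choice for illustration only; the repository's actual f may differ.

```python
THETA = 0.5  # tunable sensitivity threshold, 0 < θ < 1

def is_inappropriate(score: float, theta: float = THETA) -> bool:
    """Flag a post when its sigmoid score exceeds the threshold θ."""
    return score > theta

def reputation(post_scores) -> float:
    """Hypothetical f over a user's post scores."""
    total = 0.0
    for s in post_scores:
        if is_inappropriate(s):
            total -= s          # inappropriate post: reputation drops
        else:
            total += 1.0 - s    # clean post: reputation grows
    return total
```

Under this toy f, a user with one clearly abusive post (score 0.9) and one clean post (score 0.1) ends up with a neutral reputation of 0.0.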
While the reputation system was created to work in the backend of the application, the current version of the social media site publicly displays the score assigned to each post and the reputation associated with each user. In practice, however, these parameters may be used in the backend to evaluate reasonable actions, including reporting users to law enforcement.
The social media platform is a web application monitored by Mindhunters to provide safety from cyberbullying. To execute, simply:
- Clone this repository with `git clone https://github.com/shridpant/mindhunters`.
- Navigate to the root folder of the project and execute `pip install -r requirements.txt` to install all dependencies.
- Start your server with `python app.py`.
- Open the address from your terminal in your browser. And you're all set!
The web application contains a number of desirable features. Examples:
- Looking up other users
- Public profile
This project welcomes contributions and suggestions. Feel free to fork this repository or submit your ideas through issues. Please carefully read and follow the Contributor Covenant Code of Conduct while participating in this project.
Distributed under the MIT License. See LICENSE for more information.
The entire Mindhunters application was built by Gaurav Agrawal, Shrid Pant, and Tarun Dhankhar. Please feel free to contact us regarding the project!
We plan to extend the Mindhunters algorithm to identify misinformation (fake news). Further, we plan to add support for images, audio, and video.
Mindhunters wouldn't be possible without the following resources: