Automatic Detection and Mitigation of Cyberbullying
on Social Media Using Deep Learning Techniques
With the rapid advancement of technology, the use of social and digital media has grown exponentially, driven by the widespread availability of the Internet. Although digital media offers countless opportunities and serves as a powerful means of communication and interaction between individuals of all ages, it is often misused by some to inflict harm on others. This misuse has contributed to the spread of harmful behaviors such as cyberbullying, hate speech, sexual harassment and other forms of online trolling.
Cyberbullying is a serious issue in online environments and a growing societal concern that cannot be ignored. It poses significant risks to the mental and physical well-being of victims, leading to depression, anxiety, and suicidal thoughts. Addressing this issue therefore requires active measures to make platforms safer and mitigate the impact of such acts.
Given the complexity of online interactions, human monitoring alone is insufficient to identify instances of cyberbullying effectively. Automatic detection systems and mitigation strategies for cyberbullying on social media platforms are therefore a critical challenge, and technological solutions are essential for addressing this harmful trend and creating a safer online environment for all users. This research focuses on the automatic detection of cyberbullying using a simple Bi-directional Long Short-Term Memory (Bi-LSTM) architecture. Unlike traditional methods, Bi-LSTM networks capture context in both directions, enhancing detection by effectively modeling dependencies between words. For example, some words used alone do not indicate bullying behavior, but when combined with other words they can signal harmful intent. Because Twitter posts are short, traditional models that treat words independently can miss important contextual information, whereas a Bi-LSTM captures the relationships between words and their meaning in context. Natural Language Processing (NLP) plays a crucial role as an initial step in preventing such harmful behavior by processing, analyzing and understanding the text data. The model is trained on a dataset containing labeled instances of both bullying and non-bullying tweets. The performance of the system is validated through experiments on data from popular platforms such as Twitter, demonstrating its high precision in detecting cyberbullying.
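To make the architecture concrete, the following is a minimal sketch of the kind of Bi-LSTM tweet classifier described above, written with TensorFlow/Keras. The vocabulary size, sequence length, embedding dimension, layer width and dropout rate are illustrative assumptions, not the exact settings used in this work.

    import tensorflow as tf
    from tensorflow.keras import layers, models

    MAX_WORDS = 20000   # assumed vocabulary size
    MAX_LEN = 50        # assumed maximum tweet length in tokens
    EMBED_DIM = 100     # assumed embedding dimension

    model = models.Sequential([
        tf.keras.Input(shape=(MAX_LEN,)),
        layers.Embedding(input_dim=MAX_WORDS, output_dim=EMBED_DIM),
        # The Bidirectional wrapper runs one LSTM left-to-right and another
        # right-to-left, so each tweet is read with context from both directions.
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # bullying vs. non-bullying
    ])

    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    model.summary()

Training would then call model.fit on padded, integer-encoded tweet sequences and their binary bullying/non-bullying labels.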
This work also investigates the preprocessing of Twitter data, feature extraction, and model evaluation metrics such as accuracy, recall, precision and F1-score. The results demonstrate that the Bi-LSTM surpasses traditional methods previously used for this task, contributing significantly to the development of solutions for identifying online harassment.
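As an illustration of the preprocessing and evaluation steps mentioned above, the sketch below shows one plausible cleaning routine for raw tweets and the computation of the four reported metrics with scikit-learn. The specific cleaning rules (URL, mention and punctuation removal) are assumptions for demonstration, not necessarily the exact pipeline used in this work.

    import re
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    def clean_tweet(text: str) -> str:
        """Basic normalization: lowercase, strip URLs, mentions and non-letters."""
        text = text.lower()
        text = re.sub(r"http\S+|www\.\S+", " ", text)   # remove URLs
        text = re.sub(r"@\w+", " ", text)               # remove user mentions
        text = re.sub(r"#", "", text)                   # keep hashtag words, drop '#'
        text = re.sub(r"[^a-z\s]", " ", text)           # drop punctuation and digits
        return re.sub(r"\s+", " ", text).strip()

    def evaluate(y_true, y_pred):
        """Compute the four metrics used to assess the classifier."""
        return {
            "accuracy":  accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred),
            "recall":    recall_score(y_true, y_pred),
            "f1":        f1_score(y_true, y_pred),
        }

    print(clean_tweet("@user you are pathetic!! http://t.co/x #loser"))
    print(evaluate([1, 0, 1, 1], [1, 0, 0, 1]))

The cleaned, tokenized tweets would then be integer-encoded and padded before being fed to the Bi-LSTM model shown earlier.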
Keywords: Cyberbullying Detection, Social Media, Natural Language Processing, Deep Learning Model, Bi-LSTM.