Does BERT Pay Attention To Cyberbullying?

Fatma Elsafoury, Stamos Katsigiannis, Steven R. Wilson, and Naeem Ramzan

BibTeX

Published in Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)

Link to paper · Link to poster · Link to code

Abstract: Social media have brought threats like cyberbullying, which can lead to stress, anxiety, depression, and in some severe cases, suicide attempts. Detecting cyberbullying can help to warn or block bullies and provide support to victims. However, very few studies have used self-attention-based language models like BERT for cyberbullying detection, and they typically only report BERT’s performance without examining in depth the reasons behind it. In this work, we examine the use of BERT for cyberbullying detection on various datasets and attempt to explain its performance by analysing its attention weights and gradient-based feature importance scores for textual and linguistic features. Our results show that attention weights do not correlate with feature importance scores and thus do not explain the model’s performance. Additionally, they suggest that BERT relies on syntactic biases in the datasets to assign feature importance scores to class-related words rather than to cyberbullying-related linguistic features.
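The core comparison in the abstract, between attention weights and gradient-based feature importance, can be illustrated with a minimal sketch. The toy single-layer self-attention classifier below is a stand-in for BERT (the model, sizes, and data are arbitrary assumptions, not from the paper or its code): it extracts per-token attention scores and gradient×input saliency scores, then computes their Spearman rank correlation, the kind of comparison that can reveal the two rankings disagree.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy self-attention classifier (illustrative stand-in for a fine-tuned BERT;
# all hyperparameters here are arbitrary, not the paper's setup).
class ToyAttnClassifier(nn.Module):
    def __init__(self, vocab=50, dim=16, classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.fc = nn.Linear(dim, classes)

    def forward(self, ids):
        x = self.emb(ids)
        x.retain_grad()  # keep the embedding gradient for saliency
        out, weights = self.attn(x, x, x, need_weights=True)
        logits = self.fc(out.mean(dim=1))
        return logits, weights, x

model = ToyAttnClassifier()
ids = torch.randint(0, 50, (1, 8))  # one random 8-token "sentence"
logits, attn_w, emb = model(ids)

# Attention importance: how much each token is attended to,
# averaged over all query positions (heads are already averaged).
attn_score = attn_w[0].mean(dim=0)  # shape: (seq_len,)

# Gradient-based importance: gradient x input saliency per token,
# taken w.r.t. the logit of the predicted class.
logits[0, logits.argmax()].backward()
grad_score = (emb.grad * emb).abs().sum(-1)[0]  # shape: (seq_len,)

# Spearman rank correlation between the two token rankings (no scipy needed).
def spearman(a, b):
    ra = a.argsort().argsort().float()
    rb = b.argsort().argsort().float()
    ra, rb = ra - ra.mean(), rb - rb.mean()
    return (ra * rb).sum() / (ra.norm() * rb.norm())

rho = spearman(attn_score, grad_score)
print(f"Spearman correlation: {rho:.3f}")
```

With a real model, the same recipe applies: request attention matrices from the forward pass, backpropagate a class logit to the input embeddings, and correlate the two per-token scores; a low correlation is one way to conclude, as the paper does, that attention weights alone do not explain the model's decisions.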