ePrints@IISc

Detecting offensive speech in conversational code-mixed dialogue on social media: A contextual dataset and benchmark experiments

Madhu, H and Satapara, S and Modha, S and Mandl, T and Majumder, P (2023) Detecting offensive speech in conversational code-mixed dialogue on social media: A contextual dataset and benchmark experiments. In: Expert Systems with Applications, 215.

PDF: Exp_Sys_App_215_2023.pdf - Published Version (Restricted to Registered users only; 1MB)
Official URL: https://doi.org/10.1016/j.eswa.2022.119342

Abstract

The spread of Hate Speech on online platforms is a severe issue for societies and requires the identification of offensive content by platforms. Research has modeled Hate Speech recognition as a text classification problem that predicts the class of a message based on the text of the message only. However, context plays a major role in communication. In particular, for short messages, the text of the preceding tweets can completely change the interpretation of a message within a discourse. This work extends previous efforts to classify Hate Speech by considering the current and previous tweets jointly. In particular, we introduce a clearly defined way of extracting context. We present the development of the first dataset for conversation-based Hate Speech classification, with an approach for collecting context from long conversations in code-mixed Hindi (ICHCL dataset). Overall, our benchmark experiments show that the inclusion of context can improve classification performance over a baseline. Furthermore, we develop a novel processing pipeline for processing the context. The best-performing pipeline uses a fine-tuned SentBERT paired with an LSTM as a classifier. This pipeline achieves a macro F1 score of 0.892 on the ICHCL test dataset. Another pipeline based on KNN, SentBERT, and ABC weighting yields a macro F1 of 0.807, the best result among traditional classifiers. Thus even a KNN model gives better results with an optimized BERT than a vanilla BERT model. © 2022 Elsevier Ltd
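To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch, not the authors' exact method: each tweet and its preceding tweets in a conversation thread are embedded with a Sentence-BERT model, and the resulting sequence of embeddings is fed to an LSTM classifier. The model name, dimensions, and example texts are illustrative assumptions; the paper fine-tunes SentBERT on the ICHCL data, which is omitted here.

```python
# Hedged sketch of context-aware offensive speech classification:
# embed the conversation thread with Sentence-BERT, classify with an LSTM.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

# Pretrained multilingual Sentence-BERT (illustrative choice; the paper uses a
# fine-tuned SentBERT, not shown here).
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

class ContextLSTMClassifier(nn.Module):
    """LSTM over the embedding sequence [context tweets ..., target tweet]."""
    def __init__(self, emb_dim: int = 384, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, emb_dim) -- one embedding per tweet in the thread
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1])  # logits over {offensive, not offensive}

# Example: classify a target tweet together with the two tweets preceding it.
thread = [
    "parent tweet text",      # earliest context
    "intermediate reply",     # further context
    "target tweet to label",  # tweet being classified
]
with torch.no_grad():
    emb = torch.tensor(encoder.encode(thread)).unsqueeze(0)  # (1, 3, 384)
    model = ContextLSTMClassifier(emb_dim=emb.shape[-1])
    print(model(emb).softmax(-1))  # class probabilities (untrained; shapes only)
```

The key design point mirrored here is that the classifier sees the whole thread as an ordered sequence rather than the target tweet in isolation, which is how the contextual signal from preceding tweets enters the prediction.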

Item Type: Journal Article
Publication: Expert Systems with Applications
Publisher: Elsevier Ltd
Additional Information: The copyright for this article belongs to Elsevier Ltd
Keywords: Benchmarking; Character recognition; Classification (of information); Codes (symbols); Long short-term memory; Natural language processing systems; Pipeline processing systems; Social networking (online); Speech recognition; Statistical tests; Text processing; Benchmark; Benchmark experiments; Conversational analysis; Evaluation; Hate speech; Language processing; Natural language processing; Natural languages; Social media; Transformer; Pipelines
Department/Centre: Division of Electrical Sciences > Electrical Engineering
Date Deposited: 31 Jan 2023 06:55
Last Modified: 31 Jan 2023 06:55
URI: https://eprints.iisc.ac.in/id/eprint/79610
