Title: BERT pre-processed deep learning model for sarcasm detection

Abstract

Sarcasm has become a pervasive mode of expression, often used as an indirect way to convey a negative sentiment. With the growth of artificial intelligence and machine learning in the field of natural language processing (NLP), detecting sarcasm efficiently and accurately remains a challenge. As a contribution to this growing field of interest, this paper proposes a novel approach for sarcasm detection that combines machine learning and deep learning. The approach uses bidirectional encoder representations from transformers (BERT) to pre-process each sentence and feeds the result to a hybrid deep learning model for training and classification. The hybrid model combines convolutional neural networks (CNN) and long short-term memory (LSTM). The proposed model was evaluated on its ability to distinguish sarcastic statements from non-sarcastic ones on two datasets. An accuracy of 99.63%, a precision of 99.33%, a recall of 99.83% and an F1-score of 99.56% were achieved with the trained model. These results were obtained after performing tenfold cross-validation on the proposed model using the news headline dataset.
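The hybrid CNN + LSTM architecture described above can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the layer sizes, kernel width, and class names (`CNNLSTMClassifier`) are assumptions, and random tensors stand in for the BERT token embeddings (hidden size 768) that the pre-processing step would produce.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Hypothetical sketch of a hybrid CNN + LSTM sarcasm classifier
    operating on BERT-style token embeddings of shape (batch, seq_len, 768)."""

    def __init__(self, embed_dim=768, conv_channels=128, lstm_hidden=64):
        super().__init__()
        # 1-D convolution over the token axis extracts local n-gram features
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
        # LSTM models longer-range dependencies over the convolved features
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)
        # Single logit: sarcastic vs. non-sarcastic
        self.fc = nn.Linear(lstm_hidden, 1)

    def forward(self, x):             # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)         # -> (batch, embed_dim, seq_len) for Conv1d
        x = self.relu(self.conv(x))   # -> (batch, conv_channels, seq_len)
        x = x.transpose(1, 2)         # -> (batch, seq_len, conv_channels)
        _, (h_n, _) = self.lstm(x)    # h_n: (1, batch, lstm_hidden)
        return self.fc(h_n[-1])       # -> (batch, 1) classification logits

# Dummy batch standing in for BERT output: 4 sentences, 32 tokens each
embeddings = torch.randn(4, 32, 768)
logits = CNNLSTMClassifier()(embeddings)
print(logits.shape)  # torch.Size([4, 1])
```

In this arrangement the convolution captures short local patterns (e.g. incongruous word pairs), while the LSTM aggregates them across the whole sentence before the final binary decision.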
