RoBERTa-BiLSTM-Conv1D Deep Learning Model for Detecting Persuasive Content in News
DOI:
https://doi.org/10.31849/digitalzone.v16i2.27674

Keywords:
deep learning, persuasive content, text summarization, word embedding

Abstract
The use of persuasive language is one of the defining features of native advertisements. Because native ads often appear disguised as legitimate news articles, identifying and filtering persuasive content in news is essential to maintain objectivity and improve the user experience. This study aims to detect news containing persuasive content, i.e., persuasive news, in English using a natural language processing (NLP) approach. The proposed method combines text summarization, pre-trained word embeddings, and deep learning models, with an additional Conv1D layer added to improve performance. The model was trained on an Indonesian news dataset translated into English using the Google Translate API. Experimental results show that the proposed RoBERTa-BiLSTM-Conv1D model outperformed the other models, achieving 92% accuracy in identifying persuasive English news. These findings indicate that the persuasive content detection model can be applied in mainstream media environments to detect English-language native ads. In the future, the model could be trained on both Indonesian and English news to develop a cross-lingual native ads detection model.
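To illustrate the architecture named in the abstract, the sketch below shows one plausible way to stack pre-trained RoBERTa embeddings, a BiLSTM, and a Conv1D layer for binary persuasive/non-persuasive classification. This is a minimal sketch assuming PyTorch and the Hugging Face transformers library, not the authors' published implementation; the layer sizes, dropout rate, and pooling choice are illustrative assumptions.

```python
# Minimal sketch of a RoBERTa-BiLSTM-Conv1D classifier (illustrative, not the authors' code).
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer


class RobertaBiLstmConv1d(nn.Module):
    def __init__(self, num_classes: int = 2, lstm_hidden: int = 128,
                 conv_filters: int = 64, kernel_size: int = 3):
        super().__init__()
        # Pre-trained RoBERTa supplies contextual word embeddings.
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        # BiLSTM reads the embedding sequence in both directions.
        self.bilstm = nn.LSTM(input_size=self.roberta.config.hidden_size,
                              hidden_size=lstm_hidden,
                              batch_first=True, bidirectional=True)
        # Conv1D over the BiLSTM outputs extracts local n-gram-like features.
        self.conv = nn.Conv1d(in_channels=2 * lstm_hidden,
                              out_channels=conv_filters,
                              kernel_size=kernel_size, padding=1)
        self.dropout = nn.Dropout(0.3)  # assumed rate
        self.classifier = nn.Linear(conv_filters, num_classes)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) contextual embeddings from RoBERTa
        embeddings = self.roberta(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(embeddings)        # (batch, seq_len, 2*lstm_hidden)
        conv_in = lstm_out.permute(0, 2, 1)          # Conv1d expects (batch, channels, seq_len)
        conv_out = torch.relu(self.conv(conv_in))    # (batch, conv_filters, seq_len)
        pooled = torch.max(conv_out, dim=2).values   # global max pooling over time
        return self.classifier(self.dropout(pooled)) # logits: persuasive vs. non-persuasive


if __name__ == "__main__":
    tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
    model = RobertaBiLstmConv1d()
    batch = tokenizer(["This limited-time offer will change your life!"],
                      return_tensors="pt", truncation=True, padding=True)
    logits = model(batch["input_ids"], batch["attention_mask"])
    print(logits.shape)  # torch.Size([1, 2])
```

In this sketch the Conv1D layer sits on top of the BiLSTM outputs before pooling, which is one common way to combine recurrent and convolutional features; the paper's exact ordering, hyperparameters, and summarization preprocessing may differ.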
License
Copyright (c) 2025 Digital Zone: Jurnal Teknologi Informasi dan Komunikasi

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.