Dimension-Expanding MLP in Transformer: Inappropriate Sentences and Paragraph Digital Content Filtering
Abstract
The creation of digital content is now a pivotal element of today’s digital environment, driven by the need for both individuals and organizations to engage audiences effectively. As digital platforms grow in scope and impact, ensuring the security, professionalism, and appropriateness of user-generated content has become crucial. This study introduces a new approach for filtering inappropriate digital content by integrating dimension-expanding multi-layer perceptrons (MLPs) into transformer architectures. The dimension-expanding MLP projects token representations into a higher-dimensional feature space within the Transformer network, enabling the model to capture more specific contextual cues. Experimental findings reveal that the proposed model, reaching 0.744 accuracy, outperforms Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and baseline Transformer models in accuracy, computational efficiency, and scalability. The research highlights the model’s practical applications in areas such as social media content moderation, legal document compliance monitoring, and filtering harmful content on e-learning and gaming platforms.
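To make the core idea concrete, the following is a minimal NumPy sketch of a dimension-expanding MLP (feed-forward) block of the kind used inside transformer layers: the hidden width is expanded (here by a factor of 4, a common convention) before being projected back to the model dimension. The expansion factor, GELU activation, and class name `ExpandingMLP` are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def gelu(x):
    # Tanh approximation of the GELU activation.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

class ExpandingMLP:
    """Sketch of a transformer feed-forward block that expands the feature
    dimension (d_model -> d_model * expansion) and projects back."""

    def __init__(self, d_model, expansion=4, seed=0):
        rng = np.random.default_rng(seed)
        d_hidden = d_model * expansion
        # Scaled random initialization; a trained model would learn these.
        self.w1 = rng.standard_normal((d_model, d_hidden)) / np.sqrt(d_model)
        self.b1 = np.zeros(d_hidden)
        self.w2 = rng.standard_normal((d_hidden, d_model)) / np.sqrt(d_hidden)
        self.b2 = np.zeros(d_model)

    def __call__(self, x):
        # x: (seq_len, d_model) -> expand -> nonlinearity -> project back.
        return gelu(x @ self.w1 + self.b1) @ self.w2 + self.b2

mlp = ExpandingMLP(d_model=64)
tokens = np.random.default_rng(1).standard_normal((10, 64))
out = mlp(tokens)
print(out.shape)  # (10, 64): the block preserves the model dimension
```

The key property shown here is that the intermediate representation is higher-dimensional than the input, which is what gives the block extra capacity to separate subtle contextual features before projecting back.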
Journal of Applied Data Sciences
ISSN: 2723-6471 (Online)
Organized by: Computer Science and Systems Information Technology, King Abdulaziz University, Kingdom of Saudi Arabia
Website: http://bright-journal.org/JADS
Contact: taqwa@amikompurwokerto.ac.id (principal contact); support@bright-journal.org (technical issues)
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.