User Opinion Mining on the Maxim Application Reviews Using BERT-Base Multilingual Uncased

Authors

  • Sindy Eka Safitri, Department of Information Technology, Faculty of Science and Technology
  • Wenty Dwi Yuniarti, Department of Information Technology, Faculty of Science and Technology
  • Maya Rini Handayani, Department of Information Technology, Faculty of Science and Technology
  • Khothibul Umam, Department of Information Technology, Faculty of Science and Technology

DOI:

https://doi.org/10.32736/sisfokom.v14i3.2391

Keywords:

sentiment analysis, app reviews, BERT, Maxim, text classification

Abstract

Online transportation applications such as Maxim are increasingly popular because of the convenience they offer in ordering services. As usage grows, so does the volume of user reviews, which serve as a valuable source of information for evaluating customer satisfaction and service quality. Sentiment analysis of these reviews can help companies understand user perceptions and improve their services. This study analyzes the sentiment of user reviews of the Maxim application using the BERT-Base Multilingual Uncased model. BERT was chosen for its ability to capture sentence context bidirectionally, and in previous studies it outperformed traditional models such as MultinomialNB and SVM, achieving an accuracy of 75.6%. The dataset consists of 10,000 user reviews with an imbalanced class distribution: 4,000 negative, 2,000 neutral, and 4,000 positive. The data was split into 90% training data (9,000 reviews) and 10% test data (1,000 reviews); 15% of the training portion (1,350 reviews) was then set aside for validation, leaving a final training set of 7,650 reviews. Evaluation results show that BERT classifies sentiment into three categories (positive, neutral, and negative) with an accuracy of 94.7%. The highest F1-score was achieved on the positive class (0.9621), followed by the neutral class (0.9412) and the negative class (0.9246). The confusion matrix shows that most predictions match the actual labels. These findings indicate that BERT is an effective and reliable model for sentiment analysis of user reviews of online transportation applications such as Maxim.
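To make the pipeline described in the abstract concrete, the following minimal Python sketch fine-tunes bert-base-multilingual-uncased for three-class sentiment classification using the Hugging Face transformers and scikit-learn libraries. The splits mirror the numbers above (9,000/1,000 train-test, then 1,350 of the 9,000 as validation, leaving 7,650 for training); the load_maxim_reviews() loader, the stratified splitting, and the hyperparameters (3 epochs, batch size 16, learning rate 2e-5, max length 128) are illustrative assumptions, not details reported in the paper.

    # Hypothetical sketch: fine-tuning bert-base-multilingual-uncased for
    # 3-class review sentiment (0 = negative, 1 = neutral, 2 = positive).
    import torch
    from torch.utils.data import Dataset
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import classification_report, confusion_matrix
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    class ReviewDataset(Dataset):
        """Wraps tokenized reviews and integer labels for the Trainer API."""
        def __init__(self, texts, labels, tokenizer):
            self.encodings = tokenizer(texts, truncation=True, padding=True,
                                       max_length=128)
            self.labels = labels
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, idx):
            item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
            item["labels"] = torch.tensor(self.labels[idx])
            return item

    # Hypothetical loader (not from the paper): returns 10,000 review
    # strings and their labels in {0, 1, 2}.
    texts, labels = load_maxim_reviews()

    # 90/10 train-test split (9,000 / 1,000), then 15% of the training
    # portion as validation (1,350), leaving 7,650 reviews for training.
    train_texts, test_texts, train_labels, test_labels = train_test_split(
        texts, labels, test_size=0.10, stratify=labels, random_state=42)
    train_texts, val_texts, train_labels, val_labels = train_test_split(
        train_texts, train_labels, test_size=0.15, stratify=train_labels,
        random_state=42)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-multilingual-uncased", num_labels=3)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="maxim-bert",  # illustrative settings
                               num_train_epochs=3,
                               per_device_train_batch_size=16,
                               learning_rate=2e-5),
        train_dataset=ReviewDataset(train_texts, train_labels, tokenizer),
        eval_dataset=ReviewDataset(val_texts, val_labels, tokenizer),
    )
    trainer.train()

    # Evaluate on the held-out 1,000 test reviews: accuracy, per-class F1,
    # and the confusion matrix, as reported in the abstract.
    preds = trainer.predict(ReviewDataset(test_texts, test_labels, tokenizer))
    pred_labels = preds.predictions.argmax(axis=-1)
    print(classification_report(test_labels, pred_labels,
                                target_names=["negative", "neutral", "positive"],
                                digits=4))
    print(confusion_matrix(test_labels, pred_labels))

Under these assumptions, classification_report yields the overall accuracy and the per-class F1-scores, and confusion_matrix produces the prediction-versus-label table summarized in the abstract.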

References

F. Greco, Sentiment analysis and opinion mining. Morgan & Claypool Publishers, 2022. doi: 10.4337/9781800374263.sentiment.analysis.

A. R. Gunawan and R. F. Alfa Aziza, “Sentiment Analysis Using LSTM Algorithm Regarding Grab Application Services in Indonesia,” J. Appl. Informatics Comput., vol. 9, no. 2, pp. 322–332, Mar. 2025, doi: 10.30871/jaic.v9i2.8696.

A. N. Hasanah and B. N. Sari, “Sentiment Analysis of User Reviews of the Maxim Online Motorcycle Taxi Service App on Google Play Using the Naïve Bayes Classifier Method,” J. Inform. dan Tek. Elektro Terap., vol. 12, no. 1, Jan. 2024, doi: 10.23960/jitet.v12i1.3628.

K. Adib, M. R. Handayani, W. D. Yuniarti, and K. Umam, “Public Opinion After the Presidential Election: Exploring Sentiment Analysis of Social Media X Using SVM,” SINTECH (Science Inf. Technol.) J., vol. 7, no. 2, pp. 80–91, Aug. 2024, doi: 10.31598/sintechjournal.v7i2.1581.

J. U. S. Lazuardi and A. Juarna, “Sentiment Analysis of JOOX App User Reviews on Android Using the Bidirectional Encoder Representations from Transformers (BERT) Method,” J. Ilm. Inform. Komput., vol. 28, no. 3, pp. 251–260, Dec. 2023, doi: 10.35760/ik.2023.v28i3.10090.

W. Y. R. M. R. W. P. K. Atmaja, “Sentiment Analysis of Ruang Guru App Customer Reviews Using the BERT (Bidirectional Encoder Representations from Transformers) Method,” J. Emerg. Inf. Syst. Bus. Intell., vol. 2, pp. 55–62, 2021.

T. B. B. Wicaksono and R. D. Syah, “Implementation of the Bidirectional Encoder Representations from Transformers Method for Sentiment Analysis of ACCESS App Reviews,” J. Ilm. Inform. Komput., vol. 29, no. 3, pp. 254–265, Dec. 2024, doi: 10.35760/ik.2024.v29i3.12514.

B. Prasetyo, Ahmad Yusuf Al-Majid, and Suharjito, “A Comparative Analysis of MultinomialNB, SVM, and BERT on Garuda Indonesia Twitter Sentiment,” PIKSEL Penelit. Ilmu Komput. Sist. Embed. Log., vol. 12, no. 2, pp. 445–454, Sep. 2024, doi: 10.33558/piksel.v12i2.9966.

Y. Wu, Z. Jin, C. Shi, P. Liang, and T. Zhan, “Research on the application of deep learning-based BERT model in sentiment analysis,” Appl. Comput. Eng., vol. 71, no. 1, pp. 14–20, May 2024, doi: 10.54254/2755-2721/71/2024MA.

E. Alzahrani and L. Jololian, “How Different Text-Preprocessing Techniques Using the BERT Model Affect the Gender Profiling of Authors,” in Advances in Machine Learning, Academy and Industry Research Collaboration Center (AIRCC), Sep. 2021, pp. 1–8, doi: 10.5121/csit.2021.111501.

J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding,” in Proceedings of NAACL-HLT 2019, Minneapolis, Minnesota, USA: Association for Computational Linguistics, Jun. 2019, pp. 4171–4186.

G. Letarte, F. Paradis, P. Giguère, and F. Laviolette, “Importance of Self-Attention for Sentiment Analysis,” in Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Stroudsburg, PA, USA: Association for Computational Linguistics, 2018, pp. 267–275. doi: 10.18653/v1/W18-5429.

D. E. Birba, “A Comparative Study of Data Splitting Algorithms for Machine Learning Model Selection,” M.S. thesis, KTH Royal Institute of Technology, Stockholm, Sweden, 2020.

T. Pires, E. Schlinger, and D. Garrette, “How Multilingual is Multilingual BERT?,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Stroudsburg, PA, USA: Association for Computational Linguistics, 2019, pp. 4996–5001. doi: 10.18653/v1/P19-1493.

X. Liu and C. Wang, “An Empirical Study on Hyperparameter Optimization for Fine-Tuning Pre-trained Language Models,” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), Stroudsburg, PA, USA: Association for Computational Linguistics, 2021, pp. 2286–2300. doi: 10.18653/v1/2021.acl-long.178.

C. Sun, X. Qiu, Y. Xu, and X. Huang, “How to Fine-Tune BERT for Text Classification?,” May 2019, [Online]. Available: http://arxiv.org/abs/1905.05583

Published

2025-07-28

Section

Articles