HYBRID INTEGRATION OF BERT AND BILSTM MODELS FOR SENTIMENT ANALYSIS

  • Nicolas Ray Amarco Tambunan Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Sebelas Maret, Indonesia https://orcid.org/0009-0009-6327-159X
  • Dewi Retno Sari Saputro Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Sebelas Maret, Indonesia https://orcid.org/0000-0002-6569-394X
  • Purnami Widyaningsih Department of Mathematics, Faculty of Mathematics and Natural Sciences, Universitas Sebelas Maret, Indonesia https://orcid.org/0000-0002-4737-6502
Keywords: BERT, BiLSTM, NLP, RNN, Sentiment analysis

Abstract

The rapid growth of sentiment analysis research has driven increasing interest in deep learning models, particularly transformer-based architectures such as BERT and recurrent neural networks like BiLSTM. While both approaches have shown substantial success in text classification tasks, each presents distinct strengths and limitations. This study analyzes the integration of BERT and BiLSTM models to enhance sentiment classification performance by combining contextual and sequential learning. A bibliometric analysis was conducted using VOSviewer on Scopus-indexed publications from 2020 to 2025, identifying four major thematic clusters related to transformer modeling, recurrent architectures, hybrid integration, and methodological advancements. Comparative findings on benchmark datasets, including SST-2, IMDb, and Yelp Reviews, indicate that hybrid BERT–BiLSTM models achieve higher accuracy than either model alone, reaching up to 97.67% on the IMDb dataset. However, this improvement comes at the cost of increased computational complexity. The proposed framework combines BERT's contextual embeddings with BiLSTM's sequential modeling, offering a foundation for developing adaptive and multilingual sentiment analysis systems. The results highlight future directions in optimizing hybrid architectures for efficiency, cross-lingual adaptability, and domain-specific sentiment understanding.
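To illustrate the hybrid idea described above, the sketch below runs a bidirectional LSTM over a sequence of token embeddings and mean-pools the result into a two-class sentiment prediction. It is a minimal numpy illustration, not the authors' implementation: a random matrix stands in for BERT's contextual token embeddings (in practice these would be the last hidden states of a pretrained BERT encoder), and all dimensions, weights, and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates i, f, o, g are stacked along the first axis of z."""
    z = W @ x + U @ h + b
    H = h.size
    i, f, o, g = z[0:H], z[H:2*H], z[2*H:3*H], z[3*H:4*H]
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c_new = sig(f) * c + sig(i) * np.tanh(g)   # update cell state
    h_new = sig(o) * np.tanh(c_new)            # emit hidden state
    return h_new, c_new

def lstm_pass(seq, W, U, b, H):
    """Run one LSTM direction over the sequence; returns (T, H) hidden states."""
    h, c = np.zeros(H), np.zeros(H)
    outs = []
    for x in seq:
        h, c = lstm_step(x, h, c, W, U, b)
        outs.append(h)
    return np.stack(outs)

D, H, T = 8, 4, 5  # embedding dim, LSTM hidden size, sequence length

# Stand-in for BERT's contextual token embeddings (one row per token).
embeddings = rng.normal(size=(T, D))

def make_params():
    return (rng.normal(scale=0.1, size=(4*H, D)),   # input weights
            rng.normal(scale=0.1, size=(4*H, H)),   # recurrent weights
            np.zeros(4*H))                          # bias

Wf, Uf, bf = make_params()  # forward direction
Wb, Ub, bb = make_params()  # backward direction

fwd = lstm_pass(embeddings, Wf, Uf, bf, H)
bwd = lstm_pass(embeddings[::-1], Wb, Ub, bb, H)[::-1]  # reverse in, reverse out
bilstm_out = np.concatenate([fwd, bwd], axis=1)         # (T, 2H)

# Classification head: mean-pool over time, linear layer, softmax.
Wc = rng.normal(scale=0.1, size=(2, 2*H))
logits = Wc @ bilstm_out.mean(axis=0)
probs = np.exp(logits) / np.exp(logits).sum()  # P(negative), P(positive)
print(bilstm_out.shape, probs.round(3))
```

Reading the two directions' hidden states side by side is what gives the BiLSTM its sequential, both-ways view of the sentence; feeding it BERT embeddings instead of static word vectors is the contextual half of the hybrid.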



Published
2026-01-26
How to Cite
[1]
N. R. A. Tambunan, D. R. S. Saputro, and P. Widyaningsih, “HYBRID INTEGRATION OF BERT AND BILSTM MODELS FOR SENTIMENT ANALYSIS”, BAREKENG: J. Math. & App., vol. 20, no. 2, pp. 1719–1730, Jan. 2026.
