• Muhammad Nazhif Abda Putera Khano Department of Mathematics, Faculty of Mathematics and Natural Sciences, Sebelas Maret University, Indonesia
  • Dewi Retno Sari Saputro Department of Mathematics, Faculty of Mathematics and Natural Sciences, Sebelas Maret University, Indonesia
  • Sutanto Sutanto Department of Mathematics, Faculty of Mathematics and Natural Sciences, Sebelas Maret University, Indonesia
  • Antoni Wibowo Master of Information Technology, Bina Nusantara University, Indonesia
Keywords: Sentiment Analysis, RNN, GRU, LSTM


Sentiment analysis is a machine-learning task that extracts the emotional polarity, or tendency, of text data. It is needed to analyze opinions, sentiments, reviews, and criticism about products, services, organizations, topics, and more. The Recurrent Neural Network (RNN) is one of the Natural Language Processing (NLP) algorithms used in sentiment analysis: a neural network that uses internal memory to process sequential input. However, the RNN is weak at retaining Long-Term Memory (LTM). This article therefore examines a combination of the Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) algorithms. The GRU enables each recurrent unit to adaptively capture dependencies at different time scales, while the LSTM is a network architecture whose strength is learning long-term dependencies in data: it can retain long-term information, learn long sequential data, and relate information stored in LTM. Combining LSTM and GRU aims to overcome the RNN's weakness in LTM. The LSTM-GRU model is built by feeding the output generated by the LSTM into a GRU layer. This combination yields an algorithm with better performance at addressing the LTM problem.
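The stacked combination described in the abstract, in which the GRU is applied to the sequence of outputs produced by the LSTM, can be sketched in plain NumPy. Everything below is illustrative: the dimensions, random weight initialization, and final sigmoid read-out are assumptions for the sketch, not the authors' exact trained architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
d_in, d_h = 4, 8  # toy embedding size and hidden size (assumed)

# LSTM cell parameters: one weight matrix and bias per gate (i, f, o, g).
Wl = {g: rng.normal(0, 0.1, (d_h, d_in + d_h)) for g in "ifog"}
bl = {g: np.zeros(d_h) for g in "ifog"}

def lstm_step(x, h, c):
    """One LSTM step: update the cell state c (long-term memory) and h."""
    z = np.concatenate([x, h])
    i = sigmoid(Wl["i"] @ z + bl["i"])   # input gate
    f = sigmoid(Wl["f"] @ z + bl["f"])   # forget gate
    o = sigmoid(Wl["o"] @ z + bl["o"])   # output gate
    g = np.tanh(Wl["g"] @ z + bl["g"])   # candidate cell state
    c = f * c + i * g                    # long-term memory update
    h = o * np.tanh(c)                   # emitted hidden state
    return h, c

# GRU cell parameters: update gate z, reset gate r, candidate n.
Wg = {g: rng.normal(0, 0.1, (d_h, d_h + d_h)) for g in "zrn"}
bg = {g: np.zeros(d_h) for g in "zrn"}

def gru_step(x, h):
    """One GRU step: gates let the unit record at adaptive time scales."""
    z = sigmoid(Wg["z"] @ np.concatenate([x, h]) + bg["z"])       # update gate
    r = sigmoid(Wg["r"] @ np.concatenate([x, h]) + bg["r"])       # reset gate
    n = np.tanh(Wg["n"] @ np.concatenate([x, r * h]) + bg["n"])   # candidate
    return (1 - z) * n + z * h

w_out = rng.normal(0, 0.1, d_h)  # read-out weights (assumed)

def lstm_gru_forward(seq):
    """Run the LSTM over the token sequence, then the GRU over the
    LSTM's output sequence: the stacked LSTM-GRU combination."""
    h, c = np.zeros(d_h), np.zeros(d_h)
    lstm_out = []
    for x in seq:                 # stage 1: LSTM
        h, c = lstm_step(x, h, c)
        lstm_out.append(h)
    g = np.zeros(d_h)
    for x in lstm_out:            # stage 2: GRU on the LSTM outputs
        g = gru_step(x, g)
    return sigmoid(w_out @ g)     # sentiment polarity in (0, 1)

seq = rng.normal(size=(5, d_in))  # a 5-token sequence of toy embeddings
p = lstm_gru_forward(seq)
print(p)
```

In a framework such as Keras this would correspond to an `LSTM(..., return_sequences=True)` layer followed by a `GRU` layer and a sigmoid `Dense` output; the NumPy version only makes the gating arithmetic of the two cells explicit.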




B. Liu, Sentiment Analysis: A Fascinating Problem. 2012. doi: 10.1007/978-3-031-02145-9_1.

R. Ni and H. Cao, “Sentiment Analysis based on GloVe and LSTM-GRU,” Chinese Control Conf. CCC, vol. 2020-July, pp. 7492–7497, 2020, doi: 10.23919/CCC50068.2020.9188578.

Muljono, D. Putri Artanti, A. Syukur, A. Prihandono, and D. R. I. M. Setiadi, “Analisis Sentimen Untuk Penilaian Pelayanan Situs Belanja Online Menggunakan Algoritma Naïve Bayes,” 2018. [Online]. Available:

Y. A. Singgalen, “Pemilihan Metode dan Algoritma dalam Analisis Sentimen di Media Sosial: Sistematic Literature Review,” J. Inf. Syst. Informatics, vol. 3, no. 2, pp. 278–302, 2021, doi: 10.33557/journalisi.v3i2.125.

M. N. Alim, “Pemodelan Time Series Data Saham LQ45 dengan Algoritma LSTM, RNN, dan Arima,” Pros. Semin. Nas. Mat., vol. 6, pp. 694–701, 2023, [Online]. Available:

L. Zhang, S. Wang, and B. Liu, “Deep learning for sentiment analysis: A survey,” Wiley Interdiscip. Rev. Data Min. Knowl. Discov., vol. 8, no. 4, pp. 1–25, 2018, doi: 10.1002/widm.1253.

A. Graves, M. Liwicki, S. Fernández, R. Bertolami, H. Bunke, and J. Schmidhuber, “A novel connectionist system for unconstrained handwriting recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 31, no. 5, pp. 855–868, 2009, doi: 10.1109/TPAMI.2008.137.

A. Graves, A. Mohamed, and G. Hinton, “Speech Recognition with Deep Recurrent Neural Networks,” Dep. Comput. Sci. Univ. Toronto, 2013, [Online]. Available:

S. Sachin, A. Tripathi, N. Mahajan, S. Aggarwal, and P. Nagrath, “Sentiment Analysis Using Gated Recurrent Neural Networks,” SN Comput. Sci., vol. 1, no. 2, 2020, doi: 10.1007/s42979-020-0076-y.

N. Aslam, F. Rustam, E. Lee, P. B. Washington, and I. Ashraf, “Sentiment Analysis and Emotion Detection on Cryptocurrency Related Tweets Using Ensemble LSTM-GRU Model,” IEEE Access, vol. 10, no. April, pp. 39313–39324, 2022, doi: 10.1109/ACCESS.2022.3165621.

D. M. Kotambkar and P. M. Wankhede, “Hybrid LSTM/GRU-based Domain Adaptation Model for Correlation Analysis to Detect Glaucoma,” Int. J. Electr. Electron. Eng., vol. 10, no. 1, pp. 168–175, 2023, doi: 10.14445/23488379/ijeee-v10i1p116.

R. W. Yatscoff and J. Hayter, “Bibliometric evaluations of modern Clinical Chemistry are needed,” vol. 29, no. 10, pp. 1982–1983, 1983.


L. Waltman, N. J. van Eck, and E. C. M. Noyons, “A unified approach to mapping and clustering of bibliometric networks,” J. Informetr., vol. 4, no. 4, pp. 629–635, 2010, doi: 10.1016/j.joi.2010.07.002.

N. J. van Eck and L. Waltman, “Software survey: VOSviewer, a computer program for bibliometric mapping,” Scientometrics, vol. 84, no. 2, pp. 523–538, 2010, doi: 10.1007/s11192-009-0146-3.

V. P. Diodato and P. Gellatly, Dictionary of Bibliometrics. 1994. doi: 10.4324/9780203714133.

M.-A. De Looze and J. Lemarie, “Corpus Relevance Through Co-Word Analysis,” Scientometrics, vol. 39, no. 3, pp. 267–280, 1997.

N. Coulter, I. Monarch, and S. Konda, “Software Engineering as Seen through Its Research Literature: A Study in Co-Word Analysis,” J. Am. Soc. Inf. Sci., vol. 49, no. 13, 1998, doi: 10.1002/(SICI)1097-4571(1998)49.

J. Du and Y. Wu, “A bibliometric framework for identifying ‘Princes’ who wake up the ‘sleeping beauty’ in challenge-type scientific discoveries,” J. Data Inf. Sci., vol. 1, no. 1, pp. 50–68, 2016, doi: 10.20309/jdis.201605.

S. Hochreiter and J. Schmidhuber, “Long Short-Term Memory,” Neural Comput., vol. 9, no. 8, pp. 1735–1780, 1997, doi: 10.1162/neco.1997.9.8.1735.

J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling,” pp. 1–9, 2014, [Online]. Available:

A. Graves, “Generating Sequences With Recurrent Neural Networks,” pp. 1–43, 2013, [Online]. Available:

R. Fu, Z. Zhang, and L. Li, “Using LSTM and GRU neural network methods for traffic flow prediction,” Proc. - 2016 31st Youth Acad. Annu. Conf. Chinese Assoc. Autom. YAC 2016, pp. 324–328, 2017, doi: 10.1109/YAC.2016.7804912.

G. Peng and Z. Yili, “Research on Forest Phenology Prediction Based on LSTM and GRU Model,” J. Resour. Ecol., vol. 14, no. 1, pp. 25–34, 2022, doi: 10.5814/j.issn.1674-764x.2023.01.003.

N. Habbat, H. Anoun, and L. Hassouni, “A Novel Hybrid Network for Arabic Sentiment Analysis using fine-tuned AraBERT model,” Int. J. Electr. Eng. Informatics, vol. 13, no. 4, pp. 801–812, 2021, doi: 10.15676/ijeei.2021.13.4.3.

F. Shahid, A. Zameer, and M. Muneeb, “Predictions for COVID-19 with deep learning models of LSTM, GRU and Bi-LSTM,” Chaos, Solitons and Fractals, vol. 140, p. 110212, 2020, doi: 10.1016/j.chaos.2020.110212.

J. Heaton, Applications of Deep Neural Networks with Keras. 2020. [Online]. Available:

S. Yang, X. Yu, and Y. Zhou, “LSTM and GRU Neural Network Performance Comparison Study: Taking Yelp Review Dataset as an Example,” Proc. - 2020 Int. Work. Electron. Commun. Artif. Intell. IWECAI 2020, no. April 2021, pp. 98–101, 2020, doi: 10.1109/IWECAI50956.2020.00027.

How to Cite
M. Putera Khano, D. Saputro, S. Sutanto, and A. Wibowo, “SENTIMENT ANALYSIS WITH LONG-SHORT TERM MEMORY (LSTM) AND GATED RECURRENT UNIT (GRU) ALGORITHMS”, BAREKENG: J. Math. & App., vol. 17, no. 4, pp. 2235-2242, Dec. 2023.