OPTIMIZATION OF LIE DETECTION WITH DEEP LEARNING APPROACH USING FUSION METHOD
Abstract
In lie detection, early fusion methods that combine information from multiple modalities, such as images and sounds, are widely used. To improve performance, this work designs a lie detection system based on a mean fusion technique. The visual branch takes image data as input and extracts features using Optical Flow (OF) and Gaussian blur. This process converts changes in facial features into numeric values, enabling more efficient processing and allowing the model to be trained quickly and effectively. A Convolutional Neural Network (CNN) is used to learn features associated with lying in the visual content, and the model is evaluated with accuracy, precision, recall, and F1-score under 10-fold cross-validation. At the same time, voice signals are processed with Mel Frequency Cepstral Coefficients (MFCC) feature extraction and classified with a Long Short-Term Memory (LSTM) network; the purpose of this branch is to discover lying patterns in the audio modality. The mean fusion model combines the decisions of the per-modality lie detection models, allowing the system to leverage the strengths of each modality and build a broader feature representation. The dataset used for performance evaluation contains both images and voice recordings and covers a variety of lying situations and contexts. The experimental results show that the mean fusion method achieves a lie detection accuracy of 99% and an F1-score of 0.99. This research helps develop a more comprehensive and reliable lie-detection system. The main contribution of this work is a measurable multimodal fusion strategy that integrates pupil-based facial landmarks and temporal voice features, yielding an accuracy improvement of over 14% compared to unimodal baselines.
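The mean (late) fusion described above averages the per-modality decision scores before taking the final prediction. A minimal sketch of this idea, assuming each branch (CNN on visual features, LSTM on MFCC features) outputs a class-probability vector over [lie, truth] — the probability values below are hypothetical placeholders, not results from the paper:

```python
import numpy as np

def mean_fusion(prob_vectors):
    """Late fusion: element-wise mean of per-modality probability vectors."""
    stacked = np.stack(prob_vectors)          # shape: (n_modalities, n_classes)
    return stacked.mean(axis=0)               # shape: (n_classes,)

# Hypothetical softmax outputs over [lie, truth] from each branch
visual_probs = np.array([0.90, 0.10])         # CNN on optical-flow features
audio_probs  = np.array([0.70, 0.30])         # LSTM on MFCC features

fused = mean_fusion([visual_probs, audio_probs])
prediction = int(np.argmax(fused))            # 0 = lie, 1 = truth
```

Because the fused vector weights both modalities equally, a confident visual branch can compensate for an uncertain audio branch (and vice versa), which is the intuition behind the reported gain over unimodal baselines.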
Copyright (c) 2026 Dewi Kusumawati, Fitriyanti Andi Masse, Wulan Wulan

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this Journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work’s authorship and its initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal’s published version of the work, with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g. in institutional repositories or on their websites) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published works.






