Please use this identifier to cite or link to this item: https://rda.sliit.lk/handle/123456789/1732
Full metadata record
DC Field | Value | Language
dc.contributor.author | Wijethunga, R. L. M. A. P. C. | -
dc.contributor.author | Matheesha, D. M. K. | -
dc.contributor.author | Noman, A. A. | -
dc.contributor.author | De Silva, K. H. V. T. A. | -
dc.contributor.author | Tissera, M. | -
dc.contributor.author | Rupasinghe, L. | -
dc.date.accessioned | 2022-03-22T04:05:38Z | -
dc.date.available | 2022-03-22T04:05:38Z | -
dc.date.issued | 2020-12-10 | -
dc.identifier.citation | R. L. M. A. P. C. Wijethunga, D. M. K. Matheesha, A. A. Noman, K. H. V. T. A. De Silva, M. Tissera and L. Rupasinghe, "Deepfake Audio Detection: A Deep Learning Based Solution for Group Conversations," 2020 2nd International Conference on Advancements in Computing (ICAC), 2020, pp. 192-197, doi: 10.1109/ICAC51239.2020.9357161. | en_US
dc.identifier.isbn | 978-1-7281-8412-8 | -
dc.identifier.uri | http://rda.sliit.lk/handle/123456789/1732 | -
dc.description.abstract | Recent advancements in deep learning and related technologies have led to improvements in areas such as computer vision, bioinformatics, and speech recognition. This research focuses on the problems of synthetic speech and speaker diarization. Developments in audio modeling have produced deep learning systems capable of replicating natural-sounding voices, known as text-to-speech (TTS) systems. This technology can be manipulated for malicious purposes such as deepfakes, impersonation, or spoofing attacks. We propose a system capable of distinguishing between real and synthetic speech in group conversations. We built Deep Neural Network models and integrated them into a single solution using several datasets, including UrbanSound8K (5.6 GB), Conversational (12.2 GB), AMI Corpus (5 GB), and FakeOrReal (4 GB). Our proposed approach consists of four main components. The speech-denoising component cleans and preprocesses the audio using Multilayer Perceptron and Convolutional Neural Network architectures, with 93% and 94% accuracy respectively. Speaker diarization was implemented using two approaches: Natural Language Processing for text conversion with 93% accuracy, and a Recurrent Neural Network model for speaker labeling with 80% accuracy and a 0.52 Diarization Error Rate. The final component distinguishes between real and fake audio using a CNN architecture with 94% accuracy. These findings contribute to the domain of speech analysis. | en_US
dc.language.iso | en | en_US
dc.publisher | IEEE | en_US
dc.relation.ispartofseries | 2020 2nd International Conference on Advancements in Computing (ICAC); Vol. 1, pp. 192-197 | -
dc.subject | Deepfake Audio Detection | en_US
dc.subject | Deep Learning Based Solution | en_US
dc.subject | Group Conversations | en_US
dc.title | Deepfake Audio Detection: A Deep Learning Based Solution for Group Conversations | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1109/ICAC51239.2020.9357161 | en_US
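The abstract above describes the system's final component, a CNN that separates real from synthetic speech, but the record carries no implementation detail. The following is a minimal sketch of that kind of spectrogram-based binary classifier, not the authors' published design: the 16 kHz sample rate, 4-second window, mel front end, layer sizes, and the helper names clip_to_melspec and build_detector are all illustrative assumptions.

    # Illustrative sketch only: a small spectrogram CNN for real-vs-fake audio
    # classification. The architecture, framing, and feature choices below are
    # assumptions; the paper does not publish its exact configuration.
    import numpy as np
    import librosa
    import tensorflow as tf
    from tensorflow.keras import layers, models

    SR = 16000          # assumed sample rate
    CLIP_SECONDS = 4    # assumed fixed analysis window per clip
    N_MELS = 64         # assumed mel-filterbank size

    def clip_to_melspec(path):
        """Load one clip, pad/trim to a fixed length, return a log-mel 'image'."""
        y, _ = librosa.load(path, sr=SR, duration=CLIP_SECONDS)
        y = librosa.util.fix_length(y, size=SR * CLIP_SECONDS)
        mel = librosa.feature.melspectrogram(y=y, sr=SR, n_mels=N_MELS)
        logmel = librosa.power_to_db(mel, ref=np.max)
        return logmel[..., np.newaxis]          # shape (n_mels, frames, 1)

    def build_detector(input_shape):
        """Small binary CNN: log-mel spectrogram in, P(synthetic) out."""
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(16, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.GlobalAveragePooling2D(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),  # 1 = synthetic, 0 = real
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

    # Hypothetical usage with labeled clips (e.g., from a corpus like FakeOrReal):
    #   x = np.stack([clip_to_melspec(p) for p in wav_paths])
    #   model = build_detector(x.shape[1:])
    #   model.fit(x, labels, epochs=10, validation_split=0.2)

For context on the diarization figure quoted in the abstract, Diarization Error Rate (DER) is the standard metric for speaker labeling: the fraction of scored audio time that is missed, falsely detected as speech, or attributed to the wrong speaker. Lower values are better, so a DER of 0.52 means roughly half of the scored time was labeled incorrectly.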
Appears in Collections: Research Papers - IEEE
Research Papers - SLIIT Staff Publications

Files in This Item:
File | Description | Size | Format
Deepfake_Audio_Detection_A_Deep_Learning_Based_Solution_for_Group_Conversations.pdf | Embargoed until 2050-12-31 | 215.77 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.