Please use this identifier to cite or link to this item:
https://rda.sliit.lk/handle/123456789/1373
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Wijethunga, R.L.M.A.P.C. | - |
dc.contributor.author | Matheesha, D.M.K. | - |
dc.contributor.author | Al Noman, A. | - |
dc.contributor.author | De Silva, K.H.V.T.A. | - |
dc.contributor.author | Tissera, M. | - |
dc.contributor.author | Rupasinghe, L. | - |
dc.date.accessioned | 2022-02-23T08:44:44Z | - |
dc.date.available | 2022-02-23T08:44:44Z | - |
dc.date.issued | 2020-12-10 | - |
dc.identifier.isbn | 978-1-7281-8412-8 | - |
dc.identifier.uri | http://rda.sliit.lk/handle/123456789/1373 | - |
dc.description.abstract | Recent advancements in deep learning and related technologies have led to improvements in areas such as computer vision, bioinformatics, and speech recognition. This research focuses on the problem of synthetic speech and speaker diarization. Developments in audio deep learning have produced models capable of replicating natural-sounding voices, known as text-to-speech (TTS) systems. This technology can be manipulated for malicious purposes such as deepfakes, impersonation, or spoofing attacks. We propose a system capable of distinguishing between real and synthetic speech in group conversations. We built Deep Neural Network models and integrated them into a single solution using different datasets, including but not limited to UrbanSound8K (5.6 GB), Conversational (12.2 GB), AMI-Corpus (5 GB), and FakeOrReal (4 GB). Our proposed approach consists of four main components. The speech-denoising component cleans and preprocesses the audio using Multilayer Perceptron and Convolutional Neural Network architectures, with 93% and 94% accuracy respectively. Speaker diarization was implemented using two different approaches: Natural Language Processing for text conversion with 93% accuracy, and a Recurrent Neural Network model for speaker labeling with 80% accuracy and a Diarization Error Rate of 0.52. The final component distinguishes between real and fake audio using a CNN architecture with 94% accuracy. With these findings, this research will contribute to the domain of speech analysis. | en_US |
dc.language.iso | en | en_US |
dc.publisher | 2020 2nd International Conference on Advancements in Computing (ICAC), SLIIT | en_US |
dc.relation.ispartofseries | Vol.1; | - |
dc.subject | Deep Neural Networks | en_US |
dc.subject | Natural Language Processing | en_US |
dc.subject | Speaker Diarization | en_US |
dc.subject | Deepfake | en_US |
dc.subject | Deep Learning | en_US |
dc.title | Deepfake Audio Detection: A Deep Learning Based Solution for Group Conversations | en_US |
dc.type | Article | en_US |
dc.identifier.doi | 10.1109/ICAC51239.2020.9357161 | en_US |
Appears in Collections: | 2nd International Conference on Advancements in Computing (ICAC) 2020 | Department of Computer Systems Engineering-Scopes |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Deepfake_Audio_Detection_A_Deep_Learning_Based_Solution_for_Group_Conversations.pdf | Restricted until 2050-12-31 | 215.78 kB | Adobe PDF | |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
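The abstract above reports a Diarization Error Rate (DER) of 0.52 for the speaker-labeling component. As a rough illustration of how DER is typically computed, here is a minimal frame-based sketch in Python; the segment format, function name, and frame step are illustrative assumptions, and the sketch omits the optimal speaker-label mapping and forgiveness collar used in standard DER scoring (it is not the paper's evaluation code).

```python
def diarization_error_rate(reference, hypothesis, step=0.01):
    """Frame-based approximation of DER:
    (missed speech + false alarms + speaker confusion) / total reference speech.

    Segments are (start_sec, end_sec, speaker_label) tuples -- an assumed
    format for this sketch, not one defined in the paper.
    """
    end = max(seg[1] for seg in reference + hypothesis)
    n_frames = round(end / step)

    def speaker_at(segments, t):
        # Return the speaker active at time t, or None if silent.
        for start, stop, spk in segments:
            if start <= t < stop:
                return spk
        return None

    errors = 0
    ref_speech = 0
    for i in range(n_frames):
        t = i * step
        r = speaker_at(reference, t)
        h = speaker_at(hypothesis, t)
        if r is not None:
            ref_speech += 1
            if h != r:
                errors += 1  # missed speech or speaker confusion
        elif h is not None:
            errors += 1      # false alarm: hypothesis speech where reference is silent
    return errors / ref_speech

# Example: half a second of speaker "B" mislabeled as "A" over 2 s of speech.
ref = [(0.0, 1.0, "A"), (1.0, 2.0, "B")]
hyp = [(0.0, 1.5, "A"), (1.5, 2.0, "B")]
print(diarization_error_rate(ref, hyp))  # → 0.25
```

A DER of 0.52, as reported in the abstract, means roughly half of the reference speech time is attributed to the wrong speaker, missed, or matched by spurious speech.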