Exploring Green AI for Audio Deepfake Detection

Document Type

Conference Article

Publication Title

European Signal Processing Conference

Abstract

State-of-the-art audio deepfake detectors leveraging deep neural networks exhibit impressive recognition performance. Nonetheless, this advantage comes with a significant carbon footprint, mainly due to the use of high-performance computing with accelerators and long training times. Studies show that training an average deep NLP model produces around 626k lbs of CO2, roughly five times the lifetime emissions of an average US car. This poses a serious threat to the environment. To tackle this challenge, this study presents a novel framework for audio deepfake detection that can be seamlessly trained using standard CPU resources. Our proposed framework utilizes off-the-shelf self-supervised learning (SSL) models that are pre-trained and available in public repositories. In contrast to existing methods that fine-tune SSL models and employ additional deep neural networks for downstream tasks, we apply classical machine learning algorithms, such as logistic regression and shallow neural networks, to the SSL embeddings extracted with the pre-trained models. Our approach shows competitive results compared to commonly used high-carbon-footprint approaches. In experiments with the ASVspoof 2019 LA dataset, we achieve a 0.90% equal error rate (EER) with fewer than 1k trained model parameters. To encourage further research in this direction and support reproducible results, the Python code will be made publicly accessible following acceptance.
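
The pipeline described in the abstract can be illustrated with a minimal sketch: a frozen, publicly available SSL model produces utterance-level embeddings, and a lightweight classical classifier is trained on them using CPU only. The choice of backbone (wav2vec 2.0 via torchaudio), mean-pooling, and the example file names below are assumptions for illustration; the abstract does not specify these details.

```python
# Minimal sketch (assumed details): frozen SSL embeddings + logistic regression on CPU.
import torch
import torchaudio
from sklearn.linear_model import LogisticRegression

# Load a publicly available pre-trained SSL model; weights stay frozen (no fine-tuning).
bundle = torchaudio.pipelines.WAV2VEC2_BASE
ssl_model = bundle.get_model().eval()

def extract_embedding(wav_path: str) -> torch.Tensor:
    """Return one utterance-level embedding by mean-pooling SSL features over time."""
    waveform, sr = torchaudio.load(wav_path)
    if sr != bundle.sample_rate:
        waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)
    with torch.inference_mode():
        features, _ = ssl_model.extract_features(waveform)
    # Mean-pool the last transformer layer -> fixed-size vector per utterance.
    return features[-1].mean(dim=1).squeeze(0)

# Hypothetical file list; in practice these come from the ASVspoof 2019 LA protocol.
train_files = ["bonafide_0001.flac", "spoof_0001.flac"]
train_labels = [0, 1]  # 0 = bona fide, 1 = spoof

X_train = torch.stack([extract_embedding(f) for f in train_files]).numpy()
clf = LogisticRegression(max_iter=1000)  # lightweight, CPU-only downstream model
clf.fit(X_train, train_labels)
```

Because only the downstream classifier is trained, the number of learned parameters stays in the order of the embedding dimension, consistent with the "fewer than 1k trained model parameters" figure reported in the abstract.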

First Page

186

Last Page

190

DOI

10.23919/eusipco63174.2024.10715424

Publication Date

1-1-2024

Comments

Open Access; Green Open Access
