Friday January 31, 2025 12:15pm - 2:15pm IST

Authors - Shradha Jain, Sneha Suman, Insha Khan, Ashwani Kumar, Surbhi Sharma
Abstract - With continual advances in deep learning, the potential for misuse of deepfakes is growing, making their detection a major area of research. A model is trained to recognize patterns in input data; a deepfake reproduces those patterns in a fabricated way. Sometimes small, intentional changes are added to the data points; these changes are imperceptible to humans yet confuse the learning model. Such changes are called adversarial perturbations. Compressive adversarial perturbations aim to make these changes even smaller and harder to detect. The authors explore a sophisticated framework, ComPAD (Compressive Adversarial Perturbations and Detection), which is used to detect adversarial attacks. This paper surveys the strategies and provides a comparative analysis of methods used by different researchers. Various datasets, including UADFV, DeepfakeTIMIT, LFW, FF++, and DeeperForensics, are evaluated to achieve the highest metrics. Methods based on convolutional neural networks, particle swarm optimization, genetic algorithms, and D4 (Disjoint Diffusion Deepfake Detection) are used for detection. The authors also discuss challenges such as generalizing models to new data, the continuous evolution of adversarial perturbations that leads to persistent attacks, and scalability issues for real-time deepfake detection, concluding that such models can significantly improve accuracy, robustness, and generalization.
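To make the notion of an adversarial perturbation concrete, the following is a minimal FGSM-style sketch on a toy linear classifier. This is an illustration only, not the paper's ComPAD method; the model, its weights `w`, and the budget `epsilon` are all hypothetical.

```python
import numpy as np

# Illustrative sketch (not ComPAD): an FGSM-style adversarial perturbation
# on a toy logistic-regression "model". All names here are hypothetical.

rng = np.random.default_rng(0)

w = rng.normal(size=16)  # toy model weights
b = 0.1                  # toy model bias

def score(x):
    # Logistic score: sigmoid(w . x + b)
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=16)  # clean input
epsilon = 0.01           # perturbation budget; small changes are hard to detect

# Gradient of the score w.r.t. the input; for this model it is
# proportional to w (sigmoid derivative is always positive).
grad = w * score(x) * (1.0 - score(x))

# FGSM step: move each feature by epsilon against the gradient direction,
# pushing the score toward the opposite class.
x_adv = x - epsilon * np.sign(grad)

# Each feature changes by at most epsilon (bounded in max-norm),
# yet the model's score shifts toward the wrong decision.
print(np.max(np.abs(x_adv - x)))   # bounded by epsilon
print(score(x), score(x_adv))      # score decreases under the attack
```

The key property, which compressive variants push further, is that the per-feature change is capped at `epsilon` while the model's output still moves measurably.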
Paper Presenter
Virtual Room B, Pune, India
