Friday January 31, 2025 3:00pm - 5:00pm IST

Authors - Ashwini Bhosale, Laxmi Patil, Gitanjali Netake, Sakshi Surwase, Rutuja Gade, Prema Sahane
Abstract - This paper discusses a project that aims to create a system for translating sign language into spoken words while also recognizing the emotions of the signer. The goal is to make communication easier for Deaf and hard-of-hearing individuals by converting hand gestures into speech and reflecting the signer’s emotional tone in the voice output. This would make conversations feel more natural and expressive, enhancing interactions in both social and work environments. The project uses computer vision and Convolutional Neural Networks (CNNs) to accurately recognize various sign language gestures. To identify emotions, it uses deep learning models such as VGG-16 and ResNet, which focus on facial expressions. It also uses Long Short-Term Memory (LSTM) networks to analyze audio input and detect emotional tones in speech. For turning sign language into spoken words, the system employs Text-to-Speech (TTS) technologies such as Tacotron 2 and WaveGlow. These tools create natural-sounding speech, and the detected emotions are added to the voice by adjusting tone, pitch, and speed to match the signer’s feelings. With real-time processing and an easy-to-use interface, this system aims to provide quick translation and emotion detection. The expected result is a fully functional system that not only translates sign language into speech but also effectively conveys emotions, making communication more inclusive for Deaf and hard-of-hearing individuals.
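
The abstract describes a pipeline of a gesture-recognition CNN, a ResNet-style facial-emotion classifier, and an emotion-conditioned TTS stage. The following is a minimal PyTorch sketch of how the first two stages might be wired together for a single video frame; the class counts, label set, and network shapes are assumptions for illustration, not the authors' implementation, and the TTS step (Tacotron 2 + WaveGlow) is only indicated in a comment.

```python
# Hypothetical sketch of the described pipeline: a small CNN for gesture
# classification plus a ResNet-18-based facial-emotion classifier.
# NUM_GESTURES and EMOTIONS are assumed values, not from the paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_GESTURES = 26                                  # assumed: one class per fingerspelled letter
EMOTIONS = ["neutral", "happy", "sad", "angry"]    # assumed emotion label set

class GestureCNN(nn.Module):
    """Small CNN over 64x64 grayscale hand crops (illustrative only)."""
    def __init__(self, num_classes=NUM_GESTURES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(64 * 16 * 16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def build_emotion_model(num_emotions=len(EMOTIONS)):
    """ResNet-18 backbone with its final layer replaced for emotion classes."""
    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, num_emotions)
    return net

def translate_frame(hand_crop, face_crop, gesture_net, emotion_net):
    """Classify one frame's hand crop and face crop; return (gesture_id, emotion)."""
    with torch.no_grad():
        gesture_id = gesture_net(hand_crop).argmax(1).item()
        emotion = EMOTIONS[emotion_net(face_crop).argmax(1).item()]
    # Downstream, the recognized text and emotion label would condition a TTS
    # stage (e.g. Tacotron 2 + WaveGlow with tone/pitch/speed adjustment),
    # as described in the abstract.
    return gesture_id, emotion
```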
Paper Presenter
Virtual Room B, Pune, India
