Authors - T. Sridevi, Chidhrapu Harini, Kurella Sai Veena

Abstract - Sign language is the primary means of communication for 1.8 million deaf people across India. While sign language recognition has received considerable attention in global research, technology-based solutions for translating Indian Sign Language (ISL) remain very limited, and generating text from ISL video is still an open challenge. This project fills that gap by developing a deep-learning model capable of generating subtitles for ISL videos. Using a pre-trained Convolutional Neural Network (CNN) for spatial feature extraction and a Recurrent Neural Network (RNN) for encoding temporal patterns, the model is trained on the Indian Sign Language Videos dataset. It is designed to achieve high-accuracy captioning of ISL for reliable communication with the Indian deaf community. Beyond giving millions of ISL users access to a means of communication, it offers a critical communication tool to support education, social life, and professional settings in India.
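
As a rough sketch of the CNN-to-RNN pipeline the abstract describes, the following PyTorch code shows a pre-trained CNN extracting per-frame spatial features that an LSTM then encodes over time to produce caption-token logits. All specifics here (the ResNet-18 backbone, hidden size, and vocabulary size) are illustrative assumptions, not the paper's actual configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    class ISLCaptioner(nn.Module):
        """Sketch of a CNN -> RNN captioning model for sign-language video.

        A pre-trained CNN extracts spatial features frame by frame; an LSTM
        encodes the temporal pattern; a linear head predicts caption tokens.
        Sizes and backbone choice are assumptions for illustration only.
        """

        def __init__(self, vocab_size=5000, hidden_dim=512):
            super().__init__()
            backbone = models.resnet18(weights="IMAGENET1K_V1")
            # Drop the classification head, keep the convolutional feature extractor.
            self.cnn = nn.Sequential(*list(backbone.children())[:-1])
            for p in self.cnn.parameters():      # freeze pre-trained weights
                p.requires_grad = False
            self.rnn = nn.LSTM(input_size=512, hidden_size=hidden_dim,
                               batch_first=True)
            self.head = nn.Linear(hidden_dim, vocab_size)

        def forward(self, frames):
            # frames: (batch, time, 3, 224, 224)
            b, t = frames.shape[:2]
            feats = self.cnn(frames.flatten(0, 1))    # (b*t, 512, 1, 1)
            feats = feats.flatten(1).view(b, t, -1)   # (b, t, 512)
            out, _ = self.rnn(feats)                  # temporal encoding
            return self.head(out)                     # per-step token logits

    # Usage: a batch of two 16-frame clips.
    model = ISLCaptioner()
    logits = model(torch.randn(2, 16, 3, 224, 224))   # shape (2, 16, 5000)

Freezing the backbone and training only the LSTM and output head is one common way to exploit a pre-trained CNN on a modest-sized video dataset; fine-tuning the CNN end to end is an alternative when more data is available.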