Authors - Kusuma B S, Meghana Murthy B V, Preksha R, Srushti M P, C Balarengadurai

Abstract - Communication barriers between the Deaf community and hearing people remain a major challenge in modern society. This paper proposes a system that translates Indian Sign Language (ISL) gestures into audio and video outputs so that non-signers can interact easily with ISL users. Machine learning techniques such as Support Vector Machines (SVM) and Convolutional Neural Networks (CNN) enable the tool to recognize ISL gestures in real time and convert them into the corresponding audio and video formats. In this respect, the paper aims to "make communication more accessible and bridge the gap in communication in which gestures are recognized and translated." Real-time recognition algorithms address the challenges of hand gesture detection to provide an intuitive and seamless interaction experience. The approach offers an effective strategy for enhancing communication in government and industry, with a special focus on smart writing. Results confirm the method's promise for broader social interaction by significantly improving the speed and accuracy of communication for deaf individuals.
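To make the described pipeline concrete, the sketch below illustrates one plausible shape of such a system: a camera frame is passed through a small CNN gesture classifier, and the predicted label is spoken aloud. The library choices (TensorFlow/Keras, OpenCV, pyttsx3), the input resolution, and the placeholder label set are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
# Minimal sketch of the pipeline the abstract describes:
# camera frame -> CNN gesture classifier -> text label -> speech output.
# Libraries, input size, and label set are assumptions, not the paper's setup.
import numpy as np
import cv2                      # frame capture and preprocessing
import pyttsx3                  # offline text-to-speech
from tensorflow.keras import layers, models

ISL_LABELS = ["hello", "thank_you", "yes", "no"]   # placeholder gesture classes
IMG_SIZE = 64                                      # assumed input resolution

def build_cnn(num_classes: int) -> models.Model:
    """Small CNN classifier for single-hand gesture images (illustrative)."""
    model = models.Sequential([
        layers.Input(shape=(IMG_SIZE, IMG_SIZE, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def classify_frame(model: models.Model, frame: np.ndarray) -> str:
    """Preprocess one BGR frame and return the predicted gesture label."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (IMG_SIZE, IMG_SIZE)).astype("float32") / 255.0
    probs = model.predict(resized[np.newaxis, ..., np.newaxis], verbose=0)
    return ISL_LABELS[int(np.argmax(probs))]

def speak(text: str) -> None:
    """Convert the recognized label to audio output."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

if __name__ == "__main__":
    model = build_cnn(len(ISL_LABELS))   # in practice, load trained weights here
    cap = cv2.VideoCapture(0)            # webcam as the gesture source
    ok, frame = cap.read()
    cap.release()
    if ok:
        label = classify_frame(model, frame)
        print("Recognized gesture:", label)
        speak(label)
```

In a full system, the single-frame classification above would run in a continuous loop on the video stream, and an SVM could be substituted for or combined with the CNN as the abstract suggests; only the classifier stage would change.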