Authors - A.Kousar Nikhath, J.Ananya Reddy, P.Aishwarya, S.Maadhurya Sri, P.Gowthami

Abstract - Sign language serves as the medium of communication between people who can hear and speak and those who cannot. This project contributes to the development of technologies that improve the lives of individuals with disabilities, studying in depth the use of computer vision and deep learning. The system takes sign language gestures as input and produces an accurate translation in a regional language. The work is motivated by the broader potential of Artificial Intelligence in assistive applications, from communication across the world to timely health diagnosis. A Convolutional Neural Network (CNN) is employed to extract characteristics from the hand movements that make up sign language. These attributes serve as training features for gesture classification, and the trained model recognizes gestures in real time. Computer vision techniques are additionally applied for preprocessing and to improve the accuracy of recognition. The functionality of the sign language recognition system is assessed through a variety of experiments measuring accuracy and speed. Overall, the developed Sign Language Recognition System, integrating deep learning and computer vision techniques, enables precise and fast recognition of sign language gestures. Integration with a translator further provides multi-language support and ensures correct translation into regional languages.
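The abstract describes a CNN extracting features from hand-gesture images. As a minimal sketch of what one such feature-extraction stage does (convolution, ReLU activation, then max-pooling), the following NumPy example runs a single hand-rolled convolutional layer over a toy frame; the frame size, kernel values, and pooling size are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling with a size x size window."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A toy 8x8 "preprocessed frame" and a simple vertical-edge kernel
# (stand-ins for a real camera frame and a learned CNN filter).
frame = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)

# One feature-extraction stage: the pooled map would feed the classifier.
features = max_pool(relu(conv2d(frame, kernel)))
print(features.shape)  # (3, 3)
```

In a real system, many such kernels are learned from labeled gesture data rather than fixed by hand, and the resulting feature maps are passed to fully connected layers that classify the gesture.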