Authors - Nadiya Hoque Shudha, Fatema Tuj Johura, Anupam Singha, Kingkar Prosad Ghosh

Abstract - Sign language is vital for effective communication among deaf individuals, helping them connect with others and break down communication barriers, which improves their overall well-being. For those with hearing impairments, knowing sign language is key to overcoming these barriers. Although sign language is not universally used, sign language recognition has gained significant attention in computer vision and deep learning, with the aim of making this mode of communication more broadly accessible. This paper introduces a convolutional neural network model designed to classify images of Bengali sign language gestures for native speakers. The model was trained on two publicly available datasets: one for one-handed Bengali sign language with 30 distinct sign alphabets and another for two-handed Bengali sign language with 36 distinct sign alphabets. Various image preprocessing methods, including gamma correction, grayscale conversion, CLAHE, resizing, and normalization, were applied to enhance the datasets and make the model more robust. The model achieved 98.58% accuracy on the one-handed dataset and 94.86% on the two-handed dataset. These results demonstrate the model's effectiveness in classifying Bengali sign alphabets, which could improve communication access for the hearing-impaired community.
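
As a rough illustration of the preprocessing pipeline named in the abstract, the sketch below chains gamma correction, grayscale conversion, CLAHE, resizing, and normalization using OpenCV. The specific parameter values (gamma of 1.5, CLAHE clip limit of 2.0, a 64x64 target size) are assumptions chosen for illustration, not values reported by the paper.

```python
import cv2
import numpy as np

def preprocess(image_bgr, gamma=1.5, target_size=(64, 64)):
    """Apply the preprocessing steps named in the abstract.
    All parameter values here are illustrative assumptions."""
    # Gamma correction via a per-pixel lookup table
    inv_gamma = 1.0 / gamma
    table = np.array([((i / 255.0) ** inv_gamma) * 255
                      for i in range(256)]).astype(np.uint8)
    corrected = cv2.LUT(image_bgr, table)

    # Grayscale conversion
    gray = cv2.cvtColor(corrected, cv2.COLOR_BGR2GRAY)

    # Contrast Limited Adaptive Histogram Equalization (CLAHE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    equalized = clahe.apply(gray)

    # Resize to the network's assumed input size
    resized = cv2.resize(equalized, target_size)

    # Normalize pixel values to the [0, 1] range
    return resized.astype(np.float32) / 255.0
```

A pipeline of this shape would typically be applied to every image in both datasets before training, so the network sees contrast-enhanced, uniformly sized, normalized inputs.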