Abstract: Voice and language are the primary means by which humans communicate with one another, and it is our ability to listen that lets us understand one another's ideas; speech recognition even allows us to issue spoken commands to machines. But what of those who cannot hear and, as a consequence, cannot speak? Because sign language is the primary means of communication for hearing-impaired and mute persons, automatic interpretation of sign language has become a major research topic aimed at preserving their independence. Many strategies and algorithms based on image processing and artificial intelligence have been developed in this area, and every sign language recognition system is designed to recognise signs and convert them into the required output. The proposed method aims to bring speech to the speechless. In this article, double-handed Indian Sign Language gestures are captured as a series of images, processed in Python, and then converted to speech and text.
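To make the described capture-to-output pipeline concrete, the following is a minimal sketch in Python, assuming OpenCV for frame capture and pyttsx3 for text-to-speech; the classify_sign function is a hypothetical placeholder for the article's image-processing and recognition stage, not its actual implementation.

```python
import cv2
import pyttsx3


def classify_sign(frame):
    """Hypothetical placeholder: map a captured frame to a recognised sign label.

    The actual system would apply its image-processing / AI model here.
    """
    raise NotImplementedError("Plug in the trained sign classifier")


def main():
    cap = cv2.VideoCapture(0)          # open the default camera
    engine = pyttsx3.init()            # initialise the text-to-speech engine

    try:
        ok, frame = cap.read()         # grab one frame of the signed gesture
        if not ok:
            return
        label = classify_sign(frame)   # recognised sign as text
        print(label)                   # text output
        engine.say(label)              # speech output
        engine.runAndWait()
    finally:
        cap.release()


if __name__ == "__main__":
    main()
```

In practice the capture loop would run continuously and buffer a sequence of frames per gesture, but the single-frame sketch above reflects the capture, recognition, and text/speech output stages named in the abstract.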