Sign language is the primary medium of communication for the speech- and hearing-impaired community, while the rest of the population communicates verbally. This project aims to bridge this communication gap by proposing an approach to interpret static and dynamic signs in Indian Sign Language and convert them to speech. A sensor glove, fitted with flex sensors to detect the bending of each finger and an inertial measurement unit (IMU) to read the orientation of the hand, is used to collect gesture data. This data is transmitted wirelessly and classified into the corresponding speech output. Long short-term memory (LSTM) networks were studied and implemented for classifying the gesture data because of their ability to learn long-term temporal dependencies. The designed model classified 26 gestures with an accuracy of 98%, demonstrating the feasibility of LSTM-based neural networks for sign language translation.
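
To illustrate the kind of classifier the abstract describes, the following is a minimal sketch of an LSTM gesture classifier in Keras. The window length, layer sizes, and channel count (5 flex sensors plus 6 IMU axes) are assumptions for illustration; the abstract does not specify the actual architecture or hyperparameters.

```python
# Minimal sketch of an LSTM gesture classifier (Keras/TensorFlow).
# Assumed, not from the abstract: 50-timestep windows and 11 sensor
# channels per timestep (5 flex sensors + 3 accelerometer + 3 gyro axes).
import numpy as np
from tensorflow.keras import layers, models

TIMESTEPS = 50    # samples per gesture window (assumed)
CHANNELS = 11     # flex + IMU readings per sample (assumed)
NUM_CLASSES = 26  # number of gestures reported in the abstract

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, CHANNELS)),
    layers.LSTM(64, return_sequences=True),   # per-step temporal features
    layers.LSTM(32),                          # summary of the whole gesture
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Usage example: train on dummy data shaped like windowed glove readings.
X = np.random.randn(256, TIMESTEPS, CHANNELS).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(X, y, epochs=2, batch_size=32)
```

Stacking two LSTM layers, with the first returning full sequences, lets the model capture both short-range finger dynamics and the overall hand trajectory before the softmax layer maps the gesture to one of the 26 classes.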