This paper proposes a method for testing the sobriety of an individual using infrared images of the person's eyes, face, hand, and facial profile. The database we used consisted of images of forty different individuals. The process is broken down into two main stages. In the first stage, the data set was divided according to body part, and each subset was used to train its own Convolutional Neural Network (CNN). We then tested each resulting network against a validation data set. The results indicated which body parts were better suited for distinguishing a drunken state from sobriety. In the second stage, we took the weights of the CNN that achieved the best validation accuracy in the first stage. We then grouped the body-part images according to the person they belong to, and fed them together into a CNN initialized with those weights. The output for each body part was passed to a simple back-propagation neural network (BPNN) to produce the final result. We also sought the optimal configuration of neural networks for each stage of the process. The results showed that facial-profile images tend to give very good indications of sobriety, and that combining the outputs for multiple body parts with a simple BPNN yields higher accuracy than any individual body part alone.
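The two-stage pipeline described above can be sketched in code. The abstract does not specify the network architectures, layer sizes, or image resolution, so everything below (PyTorch, 64×64 single-channel infrared inputs, the small convolutional stacks, the 8-unit hidden layer in the fusion network) is an illustrative assumption, not the authors' implementation:

```python
import torch
import torch.nn as nn

class BodyPartCNN(nn.Module):
    """Stage one: a small CNN scoring one body part.

    Hypothetical architecture; the paper does not give layer details.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # 64x64 input -> 16x16 feature maps; two logits: sober vs. drunk
        self.head = nn.Linear(16 * 16 * 16, 2)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class FusionBPNN(nn.Module):
    """Stage two: run each body part through its CNN (which would be
    initialized with the best stage-one weights), then fuse the per-part
    outputs with a small fully connected back-propagation network."""
    def __init__(self, parts=("eyes", "face", "hand", "profile")):
        super().__init__()
        self.cnns = nn.ModuleDict({p: BodyPartCNN() for p in parts})
        self.bpnn = nn.Sequential(
            nn.Linear(2 * len(parts), 8), nn.ReLU(), nn.Linear(8, 2)
        )

    def forward(self, images):
        # images: dict mapping part name -> (N, 1, 64, 64) infrared batch
        outs = [self.cnns[p](images[p]) for p in self.cnns]
        return self.bpnn(torch.cat(outs, dim=1))

model = FusionBPNN()
batch = {p: torch.randn(4, 1, 64, 64) for p in model.cnns}
logits = model(batch)   # shape (4, 2): one sober/drunk score pair per person
```

In this sketch the per-part CNNs stand in for the stage-one networks, and the concatenation plus small fully connected network stands in for the BPNN that combines the per-part results into a single sobriety decision.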