TY - GEN
T1 - Sobriety Testing Based on Thermal Infrared Images Using Convolutional Neural Networks
AU - Kamath, Aditya K.
AU - Karthik, A. Tarun
AU - Monis, Leslie
AU - Mulimani, Manjunath
AU - Koolagudi, Shashidhar G.
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2019/2/22
Y1 - 2019/2/22
N2 - This paper proposes a method to test the sobriety of an individual using infrared images of the person's eyes, face, hand, and facial profile. The database we used consisted of images of forty different individuals. The process is broken down into two main stages. In the first stage, the data set was divided by body part and each subset was run through its own Convolutional Neural Network (CNN). We then tested the resulting networks against a validation data set. The results indicated which body parts were better suited for distinguishing a drunken state from sobriety. In the second stage, we took the weights of the CNN that achieved the best validation accuracy in the first stage. We then grouped the body parts according to the person they belonged to. The body parts were fed together into a CNN initialized with the weights obtained in the first stage. The result for each body part was passed to a simple back-propagation neural network (BPNN) to produce the final classification. We tried to identify the optimal configuration of neural networks for each stage of the process. Our results showed that facial profile images tend to give very good indications of sobriety, and that combining the results of multiple body parts using a simple BPNN yields higher accuracy than any individual body part alone.
AB - This paper proposes a method to test the sobriety of an individual using infrared images of the person's eyes, face, hand, and facial profile. The database we used consisted of images of forty different individuals. The process is broken down into two main stages. In the first stage, the data set was divided by body part and each subset was run through its own Convolutional Neural Network (CNN). We then tested the resulting networks against a validation data set. The results indicated which body parts were better suited for distinguishing a drunken state from sobriety. In the second stage, we took the weights of the CNN that achieved the best validation accuracy in the first stage. We then grouped the body parts according to the person they belonged to. The body parts were fed together into a CNN initialized with the weights obtained in the first stage. The result for each body part was passed to a simple back-propagation neural network (BPNN) to produce the final classification. We tried to identify the optimal configuration of neural networks for each stage of the process. Our results showed that facial profile images tend to give very good indications of sobriety, and that combining the results of multiple body parts using a simple BPNN yields higher accuracy than any individual body part alone.
UR - http://www.scopus.com/inward/record.url?scp=85063217342&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063217342&partnerID=8YFLogxK
U2 - 10.1109/TENCON.2018.8650463
DO - 10.1109/TENCON.2018.8650463
M3 - Conference contribution
AN - SCOPUS:85063217342
T3 - IEEE Region 10 Annual International Conference, Proceedings/TENCON
SP - 2170
EP - 2174
BT - Proceedings of TENCON 2018 - 2018 IEEE Region 10 Conference
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE Region 10 Conference, TENCON 2018
Y2 - 28 October 2018 through 31 October 2018
ER -