Human emotions are expressed through body gestures, voice variations, and facial expressions. Research in facial expression recognition has been active for the last 20 years, with sustained effort to improve system performance. This work proposes a novel feature extraction technique for emotion recognition based on geometrical modeling of facial regions. Most facial-landmark-based approaches use a single common reference point for detecting facial variations; in such approaches, even a slight shift or displacement of the reference point may introduce errors that lead to erroneous expression recognition. To reduce such errors, a new method is proposed in which three reference points on the face's axis of symmetry are fixed, and the angle variations associated with these reference points are used to detect the upper and lower Action Units (AUs). Facial expressions of emotion are then recognised as combinations of Facial Action Coding System (FACS) AUs. To improve speed, the segmentation algorithm required for facial feature extraction is parallelised in Compute Unified Device Architecture (CUDA). The system is implemented on a Graphics Processing Unit (GPU) based High Performance Computing (HPC) server with an NVIDIA Tesla K20 and analysed as a massively parallel data-processing tool. The results show that the multithreaded GPU version of the face segmentation algorithm is substantially faster than the single-threaded CPU version.
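The angle-variation idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the landmark coordinates, function names, and the use of a single reference point (the paper fixes three on the axis of symmetry) are all assumptions made for the example.

```python
import math

def angle_to_reference(p_ref, p_landmark):
    """Angle (degrees) of the vector from a fixed reference point to a landmark."""
    dx = p_landmark[0] - p_ref[0]
    dy = p_landmark[1] - p_ref[1]
    return math.degrees(math.atan2(dy, dx))

def angle_variation(p_ref, neutral, expressive):
    """Change in that angle between a neutral frame and an expressive frame.

    A nonzero variation signals landmark movement relative to the fixed
    reference, which could be mapped to an Action Unit's activation.
    """
    return angle_to_reference(p_ref, expressive) - angle_to_reference(p_ref, neutral)

# Hypothetical example: a mouth-corner landmark rising during a smile,
# measured against a fixed reference point on the axis of symmetry.
ref = (100.0, 200.0)         # assumed reference point (image coordinates)
neutral_pt = (140.0, 180.0)  # landmark position in the neutral frame
smile_pt = (145.0, 170.0)    # same landmark during the expression
print(angle_variation(ref, neutral_pt, smile_pt))
```

Because each angle is measured against a fixed point rather than a single shared origin for all landmarks, a detection error at one landmark perturbs only its own angle, which is the robustness argument the abstract makes for using multiple fixed reference points.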
Number of pages: 13
Journal: International Journal of Applied Engineering Research
Publication status: Published - 01-01-2016