Fuzzy Inference System based Analysis of Facial Expressions for Emotion Recognition

Anju Das 1, Sumit Mohanty 2

1 Dept. of EEE, CMR Institute of Technology, Bengaluru, India
2 Dept. of EEE, CMR Institute of Technology, Bengaluru, India

Abstract: This paper provides a logical framework for the feature extraction of the face and for the use of a fuzzy inference system to recognize facial emotions from these features. The system consists of an image processing stage followed by an emotion recognition stage. In the image processing stage, the subject's face, mouth, and eyes are extracted. Next, suitable feature points are identified for each facial feature. These feature points are fuzzified to determine the intensities of the various facial actions, and these intensities are used to infer the emotion expressed by the subject. The Japanese Female Facial Expression dataset was used to evaluate the system's performance, which resulted in an overall recognition rate of 78.8%.

Keywords - Emotion recognition, Fuzzy system, Human facial features, Fuzzy classifier

1. INTRODUCTION

Emotion recognition is a major problem in many fields, and it has been the focus of numerous psychological studies over the past decade. As one would expect, there are many applications for this work, including automated security systems, automated assistance programs, and the clinical diagnosis of patients. For example, in security devices, accurate detection can alert us to potential hazards. In addition, it would allow robots to work together with, and assist, their human counterparts far more effectively. In each case, fast and accurate emotion recognition plays a key role in improving the system's performance.

Today, there is increasing enthusiasm for enhanced communication between people and computers. Some suggest that, to obtain good input, computers need to communicate with the user in the same way people do. For example, people convey emotional reactions through touch and speech. However, even though speech is an incredible form of communication, facial expressions can convey many important details about the current situation or emotion.

Psychological research suggests that human emotions can be divided into the archetypal feelings of happiness, sadness, fear, disgust, anger, and surprise. To display these archetypes, a person's facial muscles may move, the way a person speaks may vary, and the pitch of the voice may rise or fall. The size and shape of the lips can even help an observer to understand speech silently. All of these play a vital role in the transmission of the various emotions. Although some of these cues are subtle, people can detect them by processing information from both the eyes and the ears [5, 6]. People rely more on facial expressions, their own and other people's, than on other nonverbal channels, often more than on verbal communication.

When emotions must be recognized rapidly, a variety of methods can be used. Strategies include physiological measurements, voice samples, and facial expressions. For facial analysis, Ekman and Friesen introduced a facial model in 1978 that is now known as the Facial Action Coding System (FACS). FACS divides the face into forty-six separate action units (AUs). Each AU is observed as the contraction or relaxation of at least one muscle group. Some, but not all, of these AUs contribute to recognizing the six basic emotions that Ekman and Friesen distinguish: happiness, sadness, surprise, fear, anger, and disgust. In particular, the AUs of the eyes, eyebrows, and mouth are the ones most closely related to emotional expression.

The fuzzy system presented in this study depends on the use of these AUs to detect emotions. Using the Japanese Female Facial Expression (JAFFE) dataset [12], the system compares an image of interest with a neutral-expression image of the same subject. The JAFFE dataset contains 213 images of seven expressions posed by ten women. Because the image of interest is compared against a neutral image, the need for a training phase is avoided. Initially, the two images are processed to extract the relevant facial features. The differences in the positions of the feature points are then fuzzified to obtain the intensities of the AUs exhibited. These intensities are fed into a set of fuzzy rules to obtain the degree to which each of the six basic emotions is expressed. The recognized emotion is chosen by a winner-takes-all process in which the emotion with the highest degree is selected.
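As a roadmap for the rest of the paper, the following minimal Python sketch outlines this pipeline. The rule bases shown are toy stand-ins for the fuzzy systems described in Section 4, and every name (the AU labels and helper functions) is illustrative rather than taken from the authors' MATLAB implementation.

from typing import Callable, Dict

def recognize_emotion(au_ratios: Dict[str, float],
                      rule_bases: Dict[str, Callable[[Dict[str, float]], float]]) -> str:
    # Run every emotion's rule base on the AU intensities, then pick
    # the winner (the emotion expressed with the highest degree).
    scores = {emotion: rules(au_ratios) for emotion, rules in rule_bases.items()}
    return max(scores, key=scores.get)

# Toy rule bases keyed on hypothetical AU names; each maps AU ratios
# (image of interest / neutral image) to a degree of expression.
toy_rules = {
    "happiness": lambda au: au.get("mouth_corner_pull", 1.0),
    "surprise":  lambda au: au.get("brow_raise", 1.0),
}
print(recognize_emotion({"mouth_corner_pull": 1.5, "brow_raise": 1.1}, toy_rules))
# -> happiness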

2. RELATED WORK

Recently, a good deal of research has been done on the subject of emotion recognition, and a wide range of techniques use the JAFFE dataset to measure performance. A few projects employ notable strategies. Research on Alzheimer's disease indicates that the disease damages key emotion-related brain systems in its early stages; the three most affected areas are the posterior cortical regions, the amygdala, and the hippocampus. Similarly, individuals with autism, a disorder characterized by serious social difficulties, also have trouble reading facial expressions. This is due to atypical brain activity. Whether the fundamental cause of autism is hereditary, or whether individuals are simply born with it, is still debated. The amygdala is also frequently damaged in individuals with temporal lobe epilepsy; this brain area is associated with perceiving emotion in the face.

Figure 1 – Seven different emotion images for one subject

Since the beginning of human history, facial expressions have been an important means of communication. For example, these facial features can inform an observer about a person's mood or emotional state while talking [15]. From the studies above, it can be seen that the three brain areas mentioned play a significant part in an individual's ability to perceive emotions. As this paper uses the JAFFE dataset to evaluate performance, it is useful to note the results of other projects that use the same dataset. For instance, one system using Gabor filters at various scales and orientations, followed by Linear Discriminant Analysis (LDA) and Principal Component Analysis (PCA), was able to identify the six basic emotions, as well as neutrality, with almost 90% success [16]. Another system, which uses Gabor wavelet coefficients and geometric features, recognized the basic and neutral emotions with a success rate of 90.1% [17]. Finally, using PCA for feature extraction and LDA as a classifier, one system recognized emotions with a success rate of 87.6% [18].

As well as looking at programs that use the JAFFE dataset, it is also worthwhile to look at the results of other projects that use fuzzy reasoning. For instance, a fuzzy rule-based framework that uses a Genetic Algorithm (GA) to improve performance was able to distinguish four emotions (joy, sadness, surprise, and anger) with 88% accuracy on the Cohn-Kanade database [19]. On the other hand, a fuzzy rule system that operates on the angles between various facial points achieved an overall success rate of 72% in recognizing four emotions (joy, sadness, fear, and anger) in real time. Notably, the rates for joy and sadness (63% and 58%) were substantially lower than those for anger and fear (72% and 90%) in this framework.

Other recognition strategies are used in numerous settings, from the Xbox Kinect to Asperger's therapy to affective computing. This is done with continuous video monitoring: features are extracted from the current frame, and classification is performed in a single pass. To decide whether facial features were of a particular type, Gaussian variation and histogram partitioning were used. Although this seems promising, there was roughly a 60% misclassification rate. Some studies have developed point-based frameworks to recognize emotional expressions. Ratliff and Patterson used FACS to analyze facial expressions [21]. Their investigation showed that the results vary from person to person, ranging from 60% to 100% accuracy, with average accuracy between 80% and 90%. The investigation raised another interesting question: why are some people's expressions so much harder to recognize than others'? The fact is that every individual has a different appearance. Consequently, a large collection of detailed training data would be a sensible way to help recognize these facial features accurately. Other ideas that have been used include hidden Markov models [22], which are well suited to modeling sequential signals; these, however, are time-consuming.

A significant factor to consider when reviewing these results is that many of these projects use elaborate schemes to acquire intricate and robust features while using the most direct strategies for emotion classification. The exception is the real-time system, which uses a simple feature-detection scheme.

3. OVERVIEW OF DATASET

The image dataset used to measure the program's performance is the JAFFE dataset [12]. Regrettably, it includes just ten Japanese women, identified by the codes YM, UY, TM, NM, NA, MK, KR, KM, KL, and KA. Each subject, however, posed every expression, sometimes despite considerable difficulty. These expressions comprise neutrality, happiness, sadness, anger, disgust, fear, and surprise. Moreover, for each subject there are two to four images of every expression.


Figure 2 – Different expressions of happiness for a single subject

Altogether, there are 213 images in the dataset. The images are 256 x 256 pixels in TIFF format. Figure 1 shows the seven expressions posed by one of the subjects, and Figure 2 shows several examples of the happy expression for one subject.
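For readers reproducing the setup, the short Python sketch below loads a local copy of the dataset. The directory path and the SUBJECT.EXPRESSION.INDEX.tiff naming pattern are assumptions about a typical JAFFE download, not something specified in this paper.

from pathlib import Path
from PIL import Image  # pip install pillow

JAFFE_DIR = Path("jaffe")  # assumed location of a local copy
groups = {}
for path in sorted(JAFFE_DIR.glob("*.tiff")):
    # Assumed naming, e.g. "KA.HA1.29.tiff": subject KA, happiness, pose 1.
    subject, expression = path.name.split(".")[:2]
    img = Image.open(path).convert("L")  # 256 x 256 grayscale
    groups.setdefault((subject, expression[:2]), []).append(img)

print(f"{len(groups)} subject/expression groups loaded")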

Figure 3 – Main facial features to be considered

4. EXTRACTION OF FACIAL FEATURES AND FUZZY CLASSIFICATION

A. Extraction of Facial Features

To recognize the basic emotions identified by Ekman and Friesen, suitable facial features must first be extracted from the images. Since this task centers on the subtle differences between emotions, the simplest viable approach is taken: a basic detection technique that relies on the large contrast in tone between the skin of the face and the mouth, eyebrows, and eyes. This is accomplished using the MATLAB Computer Vision and Image Processing toolboxes [23, 24].

Figure 4 – Traced boundaries
Figure 5 – Facial feature points overlaid on the image

The first phase of feature extraction is detecting the subject's facial features (mouth, eyebrows, and eyes):

1. Identify the subject's face.
2. After cropping the original image, locate the subject's eyes, eyebrows, and mouth.

3. As demonstrated in Figure 3, MATLAB marks each detected component with a bounding box (a rough equivalent of these three steps is sketched below).
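Purely as an illustration of this detection step, here is an approximate OpenCV equivalent in Python. The Haar cascades and the file name are stand-ins, not the authors' MATLAB pipeline; stock cascades exist for faces and eyes, while eyebrow and mouth regions would need custom detectors or can be inferred from the detected boxes.

import cv2

# Haar cascades bundled with opencv-python (paths via cv2.data.haarcascades).
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("subject.tiff", cv2.IMREAD_GRAYSCALE)  # placeholder file name
for (x, y, w, h) in face_cascade.detectMultiScale(img, scaleFactor=1.1, minNeighbors=5):
    face = img[y:y + h, x:x + w]                                   # step 1: crop the face
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face):    # step 2: locate eyes
        cv2.rectangle(face, (ex, ey), (ex + ew, ey + eh), 255, 1)  # step 3: draw boxes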

Next, the actual facial feature must be located within each region identified by MATLAB. To do this, the cropped region is binarized. The actual feature is then identified by tracing the boundaries of the objects found within the restricted region. For instance, Figure 4 shows the traced boundaries of a subject's mouth, eyebrows, and eyes.
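A minimal sketch of this binarize-and-trace step, again in Python with OpenCV: Otsu thresholding is an assumed stand-in for whatever threshold the authors used, and cv2.findContours plays the role of MATLAB's boundary tracing.

import cv2

roi = cv2.imread("mouth_region.png", cv2.IMREAD_GRAYSCALE)  # a cropped feature region
# Binarize: Otsu's method picks a threshold separating dark features from skin.
_, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Trace the boundaries of the objects found in the restricted region.
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
feature = max(contours, key=cv2.contourArea)  # keep the dominant object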

(4)

277

Finally, the feature points are obtained from the boundaries of the features. For the mouth and eyes, the upper, lower, left, and right extrema are used; for the eyebrows, the left, right, and center points are used. Figure 5 depicts the identified feature points overlaid on the original image.
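Continuing the previous sketch, these extrema reduce to simple argmin/argmax operations over the traced boundary (taking the eyebrow center as the contour mean is an assumption):

import numpy as np

pts = feature.reshape(-1, 2)                 # (x, y) pairs from the traced boundary
left   = tuple(pts[pts[:, 0].argmin()])      # leftmost point
right  = tuple(pts[pts[:, 0].argmax()])      # rightmost point
upper  = tuple(pts[pts[:, 1].argmin()])      # topmost (image y grows downward)
lower  = tuple(pts[pts[:, 1].argmax()])      # bottommost
center = tuple(pts.mean(axis=0).round().astype(int))  # used for the eyebrows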

B. Fuzzy Classification

Table 1 lists the descriptions of the facial muscles involved in the emotions Darwin considered universal [26]. The extracted information must now be fuzzified [27]. Using the detected feature points, the relevant AU parameters are calculated. The input variables are the semantic facial features (FFs) of four facial components: eyebrows, eyes, nose, and mouth. These values are obtained from the previous stage, semantic facial feature extraction [28]. The data include parameters such as eyelid height, inner height, mouth width, and so on. The AU parameters are computed both for a reference image with a neutral expression and for the image of interest. The values for each AU are then compared and recorded. Comparisons are made as the ratio of the value in the image of interest to the value in the reference image. For instance, if the inner height increases from eight pixels in the neutral image to twelve pixels in the image of interest, the recorded value is 12:8, or 1.5.
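In code, this comparison is a one-line ratio; the tiny Python helper below simply restates the worked example.

def au_ratio(interest_px: float, neutral_px: float) -> float:
    # AU intensity = measurement in the image of interest divided by
    # the same measurement in the neutral reference image.
    return interest_px / neutral_px

print(au_ratio(12, 8))  # the example from the text -> 1.5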

Table 1. Descriptions of facial muscles involved in the emotions Darwin considered universal [26]

Fear:      eyes open; mouth open; lips retracted; eyebrows raised
Anger:     eyes wide open; mouth compressed; nostrils raised
Disgust:   mouth open; lower lip down; upper lip raised
Happiness: eyes sparkle; mouth drawn back at corners; skin under eyes wrinkled
Surprise:  eyes open; mouth open; eyebrows raised; lips protruded
Sadness:   corner of mouth depressed; inner corner of eyebrows raised

Figure 6 – Fuzzy emotion recognition framework

Once each AU ratio has been recorded, the data are fuzzified. For each AU, the intensity exhibited is mapped onto the linguistic values non-existent, weak, normal, and strong. The membership functions used to map the inputs consist of triangular and trapezoidal curves. For AUs that depend on several facial parameters, each parameter is represented by its own membership function.

The fuzzy emotion inference system involves the semantic facial feature inputs, an emotion rule base, and the fuzzy emotion output. The process inside the fuzzy emotion inference system is described in Fig. 7. Initially, the semantic facial feature inputs x and y are fuzzified by the fuzzifier module, resulting in fuzzy membership degrees μ(x) and μ(y). The fuzzy inference module consists of an inference engine and a fuzzy rule base, which process the fuzzy inputs μ(x) and μ(y) to produce the aggregate fuzzy membership degree μ(z). The last step is the defuzzifier module, which converts the fuzzy value μ(z) into the real output value z. Examples of fuzzy emotion rules [30] are as follows:

1. IF eyebrow is lower AND eye is normal AND mouth is widely open THEN angry is high.
2. IF eyebrow is raised AND nose is wrinkled AND mouth is narrow THEN disgust is medium.
3. IF eyebrow is normal AND eye is narrow AND mouth is narrow THEN fear is low.
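The Python sketch below shows these mechanics end to end: triangular and trapezoidal membership functions, fuzzification of an AU ratio into the four linguistic values, a min-based (Mamdani-style) AND for rule antecedents, and a discrete centroid defuzzifier. All breakpoints are illustrative assumptions; the paper does not publish the tuned curves.

def tri(x, a, b, c):
    # Triangular membership: 0 outside (a, c), rising to 1 at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trap(x, a, b, c, d):
    # Trapezoidal membership: 0 outside (a, d), flat at 1 on [b, c].
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def fuzzify_au(ratio):
    # Map an AU ratio onto the four linguistic values used in the text.
    # The breakpoints are assumed for illustration, not the authors' tuning.
    return {
        "non-existent": trap(ratio, 0.0, 0.01, 0.9, 1.1),
        "weak":         tri(ratio, 0.9, 1.2, 1.5),
        "normal":       tri(ratio, 1.2, 1.5, 1.8),
        "strong":       trap(ratio, 1.5, 1.8, 3.0, 3.01),
    }

def centroid(values, degrees):
    # Discrete centroid defuzzifier: weighted mean of the output points.
    total = sum(degrees)
    return sum(v * d for v, d in zip(values, degrees)) / total if total else 0.0

# Mamdani-style AND = min over the antecedent memberships.
# Example: a surprise-like rule "IF mouth-open is strong AND brow-raise is normal".
mouth, brow = fuzzify_au(1.9), fuzzify_au(1.5)
print(min(mouth["strong"], brow["normal"]))  # rule firing strength -> 1.0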

Figure 7 – Fuzzy emotion inference system process

For each emotion, there is a separate fuzzy rule system. Using a distinct set of rules for every emotion offers a twofold advantage [31]. First, since each emotion does not depend on all of the AUs, separate rule systems substantially reduce the total number of rules required in the system. Second, by reducing the number of rules needed, the framework's processing time is likewise decreased.

Table 2. Precision and recall of predicted images for the JAFFE dataset

Emotions       Fear    Anger   Disgust  Happiness  Surprise  Sadness
Precision (%)  89.93   91.33   88.24    93.27      88.51     87.61
Recall (%)     81.47   95.21   82.39    92.67      84.77     83.23

Table 3. Emotion recognition accuracy in percent (approximate)

Emotions    Debuisson [18]  Esau [20]  Austin [31]  Proposed
Happiness   88              63         80           92
Sadness     89              58         78           91
Anger       91              72         90           93
Fear        83              90         75           92
Surprise    90              NA         87           89
Disgust     85              NA         63           88

Each rule's output is a value between 0 (not expressed) and 6 (fully expressed). This number represents how strongly the image of interest reflects the emotion in question. Each emotion class has its own output dimension as well as its own linguistic values; e.g., happiness has three linguistic values: slight, medium, and extreme [29]. The fuzzified AU information is fed into each fuzzy rule system, and the outputs are compared. A winner-takes-all strategy then determines the emotion exhibited by the image of interest.
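Building on the membership helpers sketched in Section 4, the per-emotion rule bases and the winner-takes-all step might look as follows. The two rule bases, the AU names, and the 2/4/6 output anchors on the 0-6 scale are illustrative assumptions standing in for the full rule set.

def happiness_rules(au):
    m, e = fuzzify_au(au["mouth_corner_pull"]), fuzzify_au(au["cheek_raise"])
    # Slight / medium / extreme happiness fire at increasing AU strength,
    # defuzzified onto the 0 (not expressed) .. 6 (fully expressed) scale.
    degrees = [m["weak"], min(m["normal"], e["weak"]), min(m["strong"], e["normal"])]
    return centroid([2, 4, 6], degrees)

def surprise_rules(au):
    b, m = fuzzify_au(au["brow_raise"]), fuzzify_au(au["mouth_open"])
    degrees = [b["weak"], min(b["normal"], m["normal"]), min(b["strong"], m["strong"])]
    return centroid([2, 4, 6], degrees)

rule_bases = {"happiness": happiness_rules, "surprise": surprise_rules}
au = {"mouth_corner_pull": 1.6, "cheek_raise": 1.3, "brow_raise": 1.0, "mouth_open": 1.05}
scores = {emotion: rules(au) for emotion, rules in rule_bases.items()}
print(max(scores, key=scores.get))  # winner-takes-all -> happiness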

5. RESULTS & DISCUSSION

⮚ Table 2 presents the precision and recall values of the predicted images for each emotion. The two measures track each other closely, with nearly equal values for every emotion. The highest precision is achieved for happiness (approximately 93%), whereas the highest recall occurs for anger (approximately 95%).

⮚ All neutral-expression images in the JAFFE dataset were used for comparison: one neutral image was used for each subject. The results are presented in Table 3. The average recognition rate was approximately 91%. Anger, happiness, and fear were recognized with the highest accuracy; sadness and surprise were also recognized well, and disgust was recognized with the least accuracy.

⮚ Disgust was the hardest emotion to recognize. This is likely because the detection of disgust depends heavily on the nose and the surrounding region, and the program presented here does not consider the nose.

⮚ In some cases, visual assessment of expressions can be very difficult because of how an individual poses them. For instance, a visual inspection of images in the JAFFE dataset revealed that a few subjects posed certain expressions incorrectly according to Ekman and Friesen's descriptions; indeed, extra AUs can occasionally be expressed in such images. Thus, expanding the dataset or removing the incorrectly posed images would help achieve better results.

⮚ In some cases, the image processing performed poorly in locating feature points. This was most noticeable at the left/right corners of the mouth and at the upper/lower eyelids. It is attributable to the simplicity of the feature extraction method used. Even with this basic approach, the fuzzy system performs well; using a more precise feature extraction method would improve system performance.

⮚ Another issue is the resolution of the test images. At 256 x 256, certain parameters (eyelid height, inner height, and so on) span only a handful of pixels, so the smallest change (1 pixel) can represent an increase or decrease of 10% or more. This granularity limits how finely the fuzzification membership functions can measure the AUs.

6. CONCLUSION

Compared with systems that perform a similar function using more elaborate feature extraction, the presented system achieves comparable recognition rates for some emotions while performing worse for others, and its success rates are slightly higher than those of the real-time system. Differences in the image processing stages of these systems may explain the differences in success rates. Increasing the precision of the inputs to the fuzzy system should therefore improve the results achieved by programs of this kind.


References

[1] R. Adolphs, "Recognizing emotion from facial expressions: psychological and neurological mechanisms," Behavioral and Cognitive Neuroscience Reviews, vol. 1, pp. 21-62, March 2002.
[2] C. Gagliardi, E. Frigerio, D. Buro, I. Cazzaniga, D. Pret and R. Borgatti, "Facial expression recognition in Williams syndrome," Neuropsicologia 41, 2003, pp. 733-738.
[3] A. Azcarate, F. Hageloh, K. Sande, and R. Valenti, "Automatic facial emotion recognition," Universiteit van Amsterdam, pp. 1-6, June 2005.
[4] A. Metallinou, C. Busso, S. Lee, and S. Narayanan, "Visual emotion recognition using compact facial representations and viseme information," ICASSP 2010, pp. 2474-2477, 2010.
[5] C. Busso, Z. Deng, S. Yildirim, M. Bulut, C. Lee, A. Kazemzadeh, S. Lee, U. Neumann, and S. Narayanan, "Analysis of emotion recognition using facial expressions, speech, and multimodal information," Proceedings of the 6th International Conference on Multimodal Interfaces, pp. 205-211, 2004.
[6] S. Ioannou, A. Raouzaiou, V. Tzouvaras, T. Mailis, K. Karpouzis, and S. Kollias, "Emotion recognition through facial expression analysis based on a neuro-fuzzy network," Neural Networks, pp. 423-435, 2005.
[7] H. Elfenbein, A. Marsh, and N. Ambady, "Emotional intelligence and the recognition of emotion from facial expressions," The Wisdom of Feelings: Processes Underlying Emotional Intelligence, pp. 1-19, 1998.
[8] P. Ekman, "Emotions in the human face," New York, NY: Cambridge University Press, 1982.
[9] P. Ekman and R. J. Davidson, "Nature of emotion," New York, NY: Oxford University Press, 1994.
[10] P. Ekman and W. V. Friesen, "The Facial Action Coding System," Palo Alto, CA: Consulting Psychologists Press, 1978.
[11] P. Ekman and K. R. Scherer, "Approaches to emotion," Hillsdale, NJ: Erlbaum Associates, 1984.
[12] M. J. Lyons, M. Kamachi, and J. Gyoba, "Japanese Female Facial Expression (JAFFE) Dataset of Digital Images," 1997.
[13] J. Drapeau, N. Gosselin, L. Gagon, I. Peretz, and D. Lorrain, "Emotional recognition from the face, voice, and music in dementia of the Alzheimer type," The Neurosciences and Music III, vol. 1169, pp. 342-345, 2009; M. Harms, A. Martin, and G. Wallace, "Facial emotion recognition in autism spectrum disorders: a review of behavioural and neuroimaging studies," vol. 20, pp. 290-322, 2010.
[14] C. Cristinzio, D. Sander, and P. Vuilleumier, "Recognition of emotional face expressions and amygdala pathology," Epileptologie 2007, vol. 24, pp. 130-138, 2007.
[15] M. Lyons, J. Budynek and S. Akamatsu, "Automatic classification of single facial images," IEEE Trans. Patt. Anal. Mach. Intell. 21, 1999, pp. 27-38.
[16] Z. Zhang, M. Lyons, M. Schuster, and S. Akamatsu, "Comparison between geometry-based and Gabor-wavelets-based facial expression recognition using multi-layer perceptron," Proc. IEEE Int. Conf. Automatic Face and Gesture Recognition, 1998, pp. 454-459.
[17] S. Dubuisson, F. Davoine and M. Masson, "A solution for facial expression representation and recognition," Sign. Process.: Imag. Commun. 17, 2002, pp. 657-673.
[18] A. Jamshidnezhad, "A fuzzy learning model for emotion recognition," European Journal of Scientific Research 57(2), 2011, pp. 206-211.
[19] N. Esau, E. Wetzel, L. Kleinjohann and B. Kleinjohann, "Real-time facial expression recognition using a fuzzy emotion model," IEEE Fuzzy Systems Conf., 2007, pp. 1-6.
[20] A. Cruz, B. Bhanu, and N. Thakoor, "Facial emotion recognition in continuous video," 21st International Conference on Pattern Recognition, pp. 1880-1883, November 2012.
[21] M. Ratliff and E. Patterson, "Emotion recognition using facial expressions with active appearance models," Proceedings of the Third IASTED International Conference on Human-Computer Interaction, pp. 138-143, 2008.
[22] I. Cohen, A. Garg, and T. Huang, "Emotion recognition from facial expressions using multilevel HMM," pp. 1-7, 2000.
[23] "Computer Vision System Toolbox," MathWorks, 2012, Web.
[24] "Image Processing Toolbox," MathWorks, 2012, Web.
[25] C. Darwin, "The Expression of Emotions in Man and Animals," Murray: London, UK, 1872, pp. 30 & 180.
[26] "Fuzzy Logic Toolbox," MathWorks, 2012, Web.
[27] D. Y. Liliana, T. Basaruddin, and M. Rahmat, "Fuzzy emotion recognition using semantic facial features and knowledge-based fuzzy," International Journal of Engineering and Technology, vol. 11, no. 2, 2019, pp. 177-186, doi: 10.21817/ijet/2019/v11i2/191102014.
[28] P. Ekman, "Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life," 1st ed., New York: Times Books, 2003.
[29] D. Y. Liliana, M. R. Widyanto, and T. Basaruddin, "Geometric facial components feature extraction for facial expression recognition," 2018 International Conference on Advanced Computer Science and Information Systems (ICACSIS), 2018, pp. 391-396.
[30] A. Nicolai and A. Choi, "Facial emotion recognition using fuzzy systems," 2015 IEEE International Conference on Systems, Man, and Cybernetics, 2015.
