Generating Affective Characters for Assistive Applications

Diana Arellano, Isaac Lera, Javier Varona, Francisco J. Perales

Universitat de les Illes Balears. Valldemossa Road. Km. 7.5. Palma de Mallorca. Spain

diana.arellano, isaac.lera, xavi.varona, [email protected]

Abstract

We propose a semantic knowledge model that allows the definition of a generic and interactive virtual environment with believable characters. We also introduce an affective model that determines the emotional state of a character according to its personality traits and the emotions it experiences. To make emotional states visible, we assign them automatically generated facial expressions based on their associated emotions. The results show that the computational model integrating the knowledge base with the affective model generates emotions coherent with the eliciting events, and that facial expressions associated with emotional states can be recognized, demonstrating that mixing expressions of universal emotions to obtain intermediate expressions is successful. Finally, two applications on which the research currently focuses are described.

Keywords: Affective Computing, Virtual Humans, Virtual Worlds, Facial Animation, Assistive Technologies

1 Introduction

In this paper we explore the role of new technologies and theories that address human affect, and how they can be used by people in everyday life. In this sense, our research focuses on the creation of virtual characters for specific applications developed for physically or mentally disabled people. To achieve this, we first designed a semantic knowledge model that allows the representation of the environment that surrounds a character, as well as their internal state (goals, preferences, admiration for other agents), so that the user can interact with the avatar in a realistic way. Realism is provided by an affective model which combines the emotions felt by the character due to certain events with their personality, generating emotional states that make the character more empathic and believable to the user. We then need to visualize the emotional states through the generation of facial expressions for them. Expressions can be of two types: for universal emotions and for intermediate emotions. The universal emotions are the six proposed by Ekman [1]. Intermediate emotions are other known emotions, in this particular work the ones proposed by Ortony et al. [2]. Expressions for intermediate emotions are obtained as the mixture of the expressions of two universal emotions, or as the categorization of the expression of one universal emotion. The result is an affective character whose face is capable of reacting in a believable way. To this end, we are working on two possible applications. In the first, the affective agent is framed in a tangible interface intended for assistive domotics, especially for elderly or disabled people. In the second, the agent can be thought of as a virtual tutor or trainer that helps people to enhance their social abilities.

This paper is organized as follows. First we give a brief, non-technical overview of the proposed model and the method for visualizing emotional states (a more extended and technical explanation can be found in Arellano et al. [3] and Lera et al. [4]). Then we explain the evaluation of the computational model and of the recognition of the generated facial expressions. Thereafter, we describe two possible applications where affective characters can be used to help disabled and/or elderly people. Lastly, conclusions and future work are presented.

2 Overview of our system

I. Semantic Knowledge Model. We represent the context of the character using ontologies because they allow new knowledge to be defined and easily reused. An ontology is a specification of a representational vocabulary for a shared domain of discourse (definitions of classes, relations, functions, and other objects) [5]. In this way we obtain a formalization of the character's personality, goals, preferences, and relations with other agents, as well as a clear specification of the events that affect the character.

Figure 1: Personality-Emotion Ontology Diagram

With this model it is easy to create new environments and situations by adding new information or reusing knowledge already defined. Simulating what affects the character, internally and externally, yields a set of emotions which is used as input to the affective model. Figure 1 shows one of the designed ontologies.
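To give a flavor of how such knowledge can be encoded (the class and property names below are invented for illustration and do not correspond to our actual ontology), a small RDF fragment built with rdflib might look as follows:

from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

AFF = Namespace("http://example.org/affective#")  # hypothetical namespace

g = Graph()
g.bind("aff", AFF)

# A character with a personality trait, a preference, and a relation to another agent.
g.add((AFF.Laura, RDF.type, AFF.Character))
g.add((AFF.Laura, AFF.hasPersonalityTrait, AFF.Neuroticism))
g.add((AFF.Laura, AFF.likes, AFF.Coffee))
g.add((AFF.Laura, AFF.admires, AFF.John))

# An event that affects the character, with an illustrative desirability value.
g.add((AFF.SpilledCoffee, RDF.type, AFF.Event))
g.add((AFF.SpilledCoffee, AFF.affects, AFF.Laura))
g.add((AFF.SpilledCoffee, AFF.blocksGoal, AFF.DrinkCoffee))
g.add((AFF.SpilledCoffee, AFF.desirability, Literal(-0.6)))

# Query all events that affect Laura.
for event in g.subjects(AFF.affects, AFF.Laura):
    print(event)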

II. Affective Model. We present a computational model for the generation of emotional states. It is based on the model proposed by Mehrabian, which allows personality and emotion to be related in the same space, the PAD space [6]. The emotional states considered in this space are shown in Table 1. Personality traits are mapped to this space through a set of equations, also proposed by Mehrabian. These traits are the ones specified by the Five Factor Model [7], and they are defined in the semantic model. As a result, personality is seen as a point in the 3D PAD space and is considered the default emotional state. This means that the state of the character when no emotions are elicited depends on their personality. When emotions are triggered by the occurrence of events, they are also mapped to specific octants of the PAD space. Computing the center of mass of all of them, we obtain an emotional center, which represents the effect that the emotions felt at a certain time have on the character. Having personality and emotions in the same space, we proceed to generate the actual emotional state at time t, which is the state after the default emotional state has been influenced by the emotional center. At instant t + 1, when the character experiences new emotions, these are affected by the actual emotional state, generating a new emotional state. Another aspect that has been considered is decay, which allows the character to return to her default emotional state after a certain period of time.
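As a rough sketch of these mechanics (the PAD coordinates, pull factor, and decay rate below are illustrative assumptions, not the values of the actual model), the following Python fragment computes the emotional center as the intensity-weighted center of mass of the active emotions, moves the current state toward it, and decays back toward the personality-defined default state:

from dataclasses import dataclass

@dataclass
class PADPoint:
    p: float  # pleasure
    a: float  # arousal
    d: float  # dominance

def emotional_center(emotions):
    """Center of mass of active emotions; each entry is (PADPoint, intensity)."""
    total = sum(w for _, w in emotions)
    if total == 0:
        return None
    return PADPoint(
        sum(pt.p * w for pt, w in emotions) / total,
        sum(pt.a * w for pt, w in emotions) / total,
        sum(pt.d * w for pt, w in emotions) / total,
    )

def update_state(current, center, pull=0.5):
    """Move the current emotional state toward the emotional center."""
    if center is None:
        return current
    return PADPoint(
        current.p + pull * (center.p - current.p),
        current.a + pull * (center.a - current.a),
        current.d + pull * (center.d - current.d),
    )

def decay(current, default, rate=0.1):
    """Drift back toward the default (personality) state when no emotions are elicited."""
    return PADPoint(
        current.p + rate * (default.p - current.p),
        current.a + rate * (default.a - current.a),
        current.d + rate * (default.d - current.d),
    )

# Illustrative use: a disdainful default state experiencing two joy-like emotions.
default_state = PADPoint(-0.4, -0.3, 0.4)           # assumed personality mapping
active = [(PADPoint(0.4, 0.2, 0.1), 1.0),           # assumed PAD position of "joy"
          (PADPoint(0.3, 0.1, 0.2), 0.6)]           # assumed PAD position of "satisfaction"
state = update_state(default_state, emotional_center(active))
state = decay(state, default_state)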

(E)xuberant (+P, +A, +D) | (B)ored (-P, -A, -D)
(De)pendent (+P, +A, -D) | (Di)sdainful (-P, -A, +D)
(R)elaxed (+P, -A, +D) | (A)nxious (-P, +A, -D)
(Do)cile (+P, -A, -D) | (H)ostile (-P, +A, +D)

Table 1: Emotional States (ES) in PAD space [8]

III. Visualization of the Emotional States. In this phase the objective is to visualize the generated emotional states. We use the facial expression of the emotion associated with an emotional state as its visual representation. To generate these expressions we implemented an algorithm that uses the MPEG-4 standard and the theory proposed by Whissell [9], in which emotions are represented in a two-dimensional space according to values of activation and evaluation.

Figure 2: Facial Expressions for intermediate emotions: love, disappointment, hate, pity.

A facial expression for an emotion is defined by a set of Facial Animation Parameters (FAPs), which indicate how certain parts of the face are moved (e.g. “raise bottom middle lip”, “open jaw”) according to a normalized value. To obtain intermediate emotions we used one of two methods: combining two universal emotions, or categorizing one universal emotion. If only one universal emotion is used, the range of values for the intermediate emotion is a sub-range of that universal emotion. If two universal emotions are used, we rely on the Whissell wheel and consider the two universal emotions with the minimal distance to the intermediate emotion; their FAPs are then combined depending on their values. As a result, we obtain a set of FAPs with a range of values that covers all the possible intensities of the emotion.

Figure 2 shows some facial expressions obtained for intermediate emotions with the highest intensity.
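One plausible way to carry out such a blend (a sketch only; the Whissell-wheel coordinates, FAP names, and inverse-distance weighting below are assumptions for illustration, not the exact procedure of our algorithm) is:

import math

# Hypothetical (activation, evaluation) coordinates on the Whissell wheel.
WHEEL = {
    "joy":      (0.6, 0.8),
    "surprise": (0.9, 0.2),
    "love":     (0.5, 0.9),   # intermediate emotion to synthesize
}

# Hypothetical FAP dictionaries at maximum intensity for two universal emotions.
FAPS = {
    "joy":      {"open_jaw": 0.2, "stretch_l_cornerlip": 0.8, "raise_l_i_eyebrow": 0.1},
    "surprise": {"open_jaw": 0.9, "stretch_l_cornerlip": 0.0, "raise_l_i_eyebrow": 0.9},
}

def distance(a, b):
    return math.dist(WHEEL[a], WHEEL[b])

def blend_faps(intermediate, universal_a, universal_b):
    """Blend two universal-emotion FAP sets, weighted by closeness on the wheel."""
    da, db = distance(intermediate, universal_a), distance(intermediate, universal_b)
    wa, wb = 1.0 / (da + 1e-6), 1.0 / (db + 1e-6)
    wa, wb = wa / (wa + wb), wb / (wa + wb)
    keys = set(FAPS[universal_a]) | set(FAPS[universal_b])
    return {k: wa * FAPS[universal_a].get(k, 0.0) + wb * FAPS[universal_b].get(k, 0.0)
            for k in keys}

print(blend_faps("love", "joy", "surprise"))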

3 Evaluation

Two types of evaluation have been performed: one of the computational model resulting from the integration of the semantic and affective models, and the other of the correct generation of facial expressions for emotional states.

3.1 Evaluation of the Computational Model: Semantic and Affective Model

The evaluation was done through an experiment that consisted of using the proposed computational model to simulate a series of events of a normal day of our character, together with her emotional responses as facial expressions. The idea was to show the coherence of the computational model in generating emotional states given the characteristics of the character and the event, and also to check whether subjects are able to associate a facial expression with an emotional state, given the personality of the character and the event that produced that state.

3.1.1 Setup of the experiment

First, a story was defined in which five relevant events were pointed out. We evaluated the process by assigning two different personalities to our character: Personality 1 (P1), neurotic and disagreeable, and Personality 2 (P2), extroverted and agreeable. Personality affects only the affective model level and does not directly influence the generation of emotions in the knowledge model. Goals, preferences, action categorization, and agent admiration were defined in two different ways for the same set of events, Configuration 1 (C1) and Configuration 2 (C2). Thus, we ended up with four different situations (with five events each), and therefore four different sets of emotional states for the five events (P1-C1, P2-C1, P1-C2, P2-C2).

Each event evokes a set of emotions according to the degree of satisfaction that it produces in the agent. The intensities of the emotions are given by the values of the preferences, the accomplishment of goals, and the admiration for the other agents that participate in the event. Table 2 shows the emotions produced by each event in configurations C1 and C2.

The final sets of emotions for C1 and C2 are used by the computational affective model, together with personalities P1 and P2, to generate emotional states for each situation P1-C1, P2-C1, P1-C2, and P2-C2. The resulting emotional states and their intensity degrees are displayed in Table 3.

Facial expressions for the emotional states of each situation of event 5 (Ev.5), generated using the MPEG-4 standard, are shown in Figure 3.

Columns: Event 1 | Event 2 | Event 3 | Event 4 | Event 5

C1:
Liking = 0.06 | Love = 0.6 | Distress = 1.0 | Liking = 0.45 | Liking = 0.6
Love = 0.11 | Liking = 0.2 | Liking = 0.26 | Pride = 1.0 | Pride = 0.4
Hate = 0.56 | Anger = 1.0 | Shame = 1.0 | Gratification = 1.0 | Gratification = 0.4
Gratification = 0.06 | Distress = 1.0 | Remorse = 1.0 | Admiration = 1.0 | Admiration = 0.64
Disappointment = 1.0 | Pity = 1.0 | Satisfaction = 1.0 | Surprise = 0.64
Reproach = 1.0 | Joy = 1.0 | Joy = 1.0
Satisfaction = 1.0

C2:
Disgust = 0.3 | Love = 0.58 | Disgust = 0.26 | Liking = 0.45 | Disgust = 0.6
Disgust = 0.6 | Liking = 0.2 | Liking = 0.22 | Pride = 1.0 | Shame = 0.4
Love = 1.0 | Hate = 0.6 | Shame = 0.01 | Gratification = 1.0 | Remorse = 0.4
Gratification = 0.56 | Anger = 0.01 | Remorse = 0.01 | Admiration = 1.0 | Anger = 0.64
Disappointment = 0.01 | Pity = 0.01 | Satisfaction = 1.0 | Disappointment = 0.64
Reproach = 0.66 | Joy = 1.0 | Distress = 0.64
Distress = 0.66

Table 2: Elicited emotions

           | P1-C1 | P2-C1 | P1-C2 | P2-C2
Default ES | Moderate Disdainful | Moderate Exuberant | Moderate Disdainful | Moderate Exuberant
Ev.1       | Moderate Hostile | Moderate Hostile | Slightly Disdainful | Moderate Exuberant
Decay      | Moderate Disdainful | Moderate Exuberant | Moderate Disdainful | Fully Exuberant
Ev.2       | Slightly Bored | Slightly Anxious | Slightly Hostile | Moderate Exuberant
Ev.3       | Fully Bored | Moderate Bored | Slightly Disdainful | Slightly Hostile
Decay      | Moderate Bored | Slightly Docile | Moderate Disdainful | Moderate Exuberant
Ev.4       | Slightly Relaxed | Moderate Exuberant | Slightly Relaxed | Fully Exuberant
Ev.5       | Moderate Exuberant | Fully Exuberant | Slightly Disdainful | Slightly Exuberant

Table 3: Results of Emotional States

3.1.2 Experimentation

The evaluation set consisted of 20 animations generated using the MPEG-4 standard. Each of them corresponds to the process of changing from the previous emotional state to the actual emotional state produced by one of the five events in one of the four situations (P1-C1, P2-C1, P1-C2, P2-C2).

The participants in the experiment were 21 persons (4 women and 17 men) between 20 and 41 years old, from different academic backgrounds. The procedure was to show them three animations per event: the correct one and two incorrect ones, presented in random order. Events were shown in order of occurrence. First, participants read the event and the personality, and then the emotional state of the character after the occurrence of the event. Afterwards, they watched the three animations twice. Finally, they marked on the questionnaire the animation (A1, A2, A3) they considered most appropriate to the situation.

The results of the experiment are shown in Table 4. For each event with configuration C1 or C2, and personality P1 or P2, we counted the percentage of persons that correctly associated “situation - emotional state - facial expression”.

3.1.3 Discussion

The aim of this experiment was to demonstrate the coherence of the affective results given a definition of the character and their environment. This coherence had to be manifested through facial expressions of emotional states capable of being recognized by the users. The results are presented in Table 4. For the neurotic and disagreeable personality P1, all events with configuration C1 were correctly recognized in more than 71% of the cases. For the same configuration C1 with the extroverted and agreeable personality P2, only two events were correctly recognized in more than 80% of the cases; events 1, 2 and 5 obtained low recognition rates. A possible reason for this outcome is the lack of expressiveness of the face, because people attributed extraversion and agreeableness to more exaggerated expressions. Events with configuration C2 and personality P1 were correctly recognized with a rate over 62%, except for event 3. Finally, events with configuration C2 and personality P2 were correctly recognized with a rate over 62%.

Figure 3: Event 5. Upper row: Moderate Exuberant, Slightly Disdainful. Lower row: Fully Exuberant, Slightly Exuberant.

3.2 Evaluation of the Generated Facial Expressions

3.2.1 Subjective evaluation

In this experiment we evaluated the realistic generation of facial expressions by the mixing or categorization of the expressions for universal emotions. This was achieved through the subjective evaluation of 16 facial expressions of universal and intermediate emotions on a synthetic female face. The evaluated emotions are presented in Table 5. The evaluation was done through a paper survey. We chose a group of 75 second-year Computer Science students at the Universitat de les Illes Balears, between 18 and 42 years old with a mean of 22 years, with no previous knowledge of emotion recognition. Images and videos were projected on a 1.20 x 1.20 m screen. The evaluation time was 40 minutes. The items evaluated were:

I. Recognition of universal emotions in images corresponding to 16 facial expressions. With this experiment we verified that the algorithm used for the generation of intermediate emotions worked; this was considered proved when one of the universal emotions used in the generation was recognized. Table 6 shows the results obtained in the recognition of universal emotions for each of the evaluated expressions.

From these results we could see that intermediate emotions were recognized in 93% of the cases. We could also infer that the facial expression for fear must be improved, because it was identified as surprise in most of the cases.

II. Recognition of emotional states grouped by the Dominance (D) dimension in images corresponding to 16 facial expressions. From previous work we concluded that it is difficult for people to identify the 8 emotional states in an image of a facial expression. As former research [10] supports the theory that it is possible to work only with the Arousal (A) and Pleasure (P) dimensions of the PAD space, we reduced the eight emotional states to four. We grouped them by the Dominance (D) dimension, obtaining the following groups: Exuberant - Dependent (ED), Relaxed - Docile (RD), Bored - Disdainful (BD), and Anxious - Hostile (AH).

          | P1-C1 | P2-C1 | P1-C2 | P2-C2
Ev. 1 (%) |  72   |  29   |  62   |  95
Ev. 2 (%) |  71   |  52   |  71   |  76
Ev. 3 (%) |  81   |  81   |  48   |  67
Ev. 4 (%) |  76   |  85   |  76   |  85
Ev. 5 (%) |  85   |  29   |  95   |  62

Table 4: Results of the evaluation

Nr. | Emotion  | Nr. | Emotion
1   | Joy      | 9   | Love
2   | Sadness  | 10  | Disappointment
3   | Disgust  | 11  | Satisfaction
4   | Anger    | 12  | Pity
5   | Surprise | 13  | Admiration
6   | Fear     | 14  | Reproach
7   | Gloating | 15  | Gratitude
8   | Hate     | N   | Neutral

Table 5: Evaluated emotions

Table 7 shows the results obtained when associating each facial expression with a combined emotional state. The first column has the emotion associated with the evaluated expression at high intensity. The second column has the emotional state associated with the emotion. The following columns present the recognition rate (%) of each emotional state, without considering the intensity.

It was observed that 73% of the emotions were correctly associated, confirming the theory that emotional states grouped by Dominance can be recognized in facial expressions. Surprise, which has no correspondence in the PAD space, was identified with the state Anxious-Hostile in 55% of the cases. Although this result is not conclusive, it gives an idea of the location of the emotion surprise in the PAD space, according to the facial expression used in this experiment.

III. Recognition of emotional states grouped by the Dominance (D) dimension in videos. The evaluation of animated facial expressions for universal emotions, going from the neutral state to the highest intensity, and their association with an emotional state allowed us to validate the results obtained in (II). Results are shown in Table 8.

Table 8 shows that the recognition rate is high in most cases. The emotions anger, sadness, joy, and fear were correctly recognized, and the results are similar to those obtained in (I). Surprise, which has no associated emotional state, was associated with the state Exuberant-Dependent; nonetheless, it was associated with the state Anxious-Hostile when evaluating static images of surprise. This leads to the conclusion that, although surprise can have a very identifiable facial expression, it is not easily related to an emotional state because it can be generated by events of very different types.

IV. Evaluation of the visual features of the synthetic face. Results of closed questions about the visual features of the synthetic face showed that it was considered credible and that people felt comfortable watching it, although it was also evaluated as cold and rigid. However, percentage rates were under 60%, which indicates that we have to work on the realism of the face. From open questions we inferred that the mouth and eyebrows, as well as their movement, were considered very realistic, while the hair was the least realistic feature. The eyes are a very important feature, and although their textures and size were credible, the lack of eye movement reduced the expressiveness of the emotion.

3.3 Objective evaluation by automatic recognizer

The images used in the subjective evaluation, plus the expressions for gratification, hope, liking, pride, relief, remorse, resentment, and shame, were validated using an automatic recognizer developed by Cerezo et al. [11]. The method studies the variation of a number of face parameters (distances and angles between feature points of the face) with respect to the neutral expression. The characteristic points are based on the points defined in the MPEG-4 standard. The objective is to assign a score to each emotion according to the state acquired by each of the parameters in the image; the emotion (or emotions, in case of a draw) chosen is the one that obtains the highest score. The reason for using this method was its recognition reliability, which is 90.93% on average.
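As a loose illustration of this kind of parameter-based scoring (the parameters, expected ranges, and scoring rule below are invented for the example and are not the actual recognizer of Cerezo et al. [11]):

# Hypothetical per-emotion expected deviations (parameter -> (low, high)),
# expressed relative to the neutral face.
EXPECTED = {
    "joy":      {"mouth_width": (0.2, 0.6), "eyebrow_raise": (-0.1, 0.1)},
    "surprise": {"mouth_width": (-0.1, 0.1), "eyebrow_raise": (0.4, 0.9)},
    "sadness":  {"mouth_width": (-0.4, -0.1), "eyebrow_raise": (-0.3, 0.0)},
}

def score_emotions(neutral, observed):
    """Score each emotion by how many parameter deviations fall in its expected range."""
    deviations = {k: observed[k] - neutral[k] for k in neutral}
    scores = {}
    for emotion, ranges in EXPECTED.items():
        scores[emotion] = sum(
            1 for param, (lo, hi) in ranges.items()
            if lo <= deviations.get(param, 0.0) <= hi
        )
    best = max(scores.values())
    return [e for e, s in scores.items() if s == best]  # ties return several emotions

# Illustrative measurement: widened mouth, slightly raised eyebrows.
print(score_emotions({"mouth_width": 1.0, "eyebrow_raise": 0.5},
                     {"mouth_width": 1.4, "eyebrow_raise": 0.55}))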

Nr. | Inputs | Neutral | Joy | Sadness | Disgust | Anger | Surprise | Fear
N   | –      | 80 | 9  | 1  | 2  | 0  | 0  | 0
1   | J      | 1  | 93 | 0  | 1  | 0  | 2  | 0
2   | Sa     | 0  | 0  | 87 | 8  | 2  | 0  | 0
3   | D      | 1  | 0  | 2  | 60 | 28 | 2  | 0
4   | A      | 0  | 0  | 3  | 3  | 84 | 2  | 0
5   | Su     | 0  | 1  | 0  | 2  | 5  | 70 | 15
6   | F      | 2  | 0  | 3  | 6  | 1  | 52 | 33
7   | J/Su   | 24 | 66 | 4  | 0  | 0  | 2  | 0
8   | A      | 0  | 0  | 2  | 18 | 68 | 0  | 1
9   | J/Su   | 2  | 59 | 0  | 0  | 0  | 27 | 1
10  | Sa     | 18 | 1  | 38 | 17 | 0  | 11 | 7
11  | J      | 4  | 84 | 1  | 0  | 2  | 2  | 0
12  | Sa/F   | 1  | 0  | 24 | 21 | 1  | 1  | 43
13  | J/Su   | 4  | 86 | 0  | 1  | 0  | 5  | 0
14  | D/A    | 4  | 1  | 0  | 54 | 33 | 1  | 0
15  | J      | 5  | 83 | 0  | 2  | 1  | 1  | 1

Table 6: % of recognition of universal emotions

Emotion        | In | ED | RD | BD | AH
Neutral        | RD | 12 | 77 | 5  | 4
Joy            | ED | 86 | 3  | 4  | 2
Sadness        | BD | 0  | 2  | 86 | 10
Disgust        | BD | 0  | 2  | 46 | 48
Anger          | AH | 6  | 0  | 17 | 75
Surprise       | –  | 23 | 5  | 5  | 55
Fear           | AH | 7  | 0  | 19 | 71
Gloating       | RD | 56 | 23 | 6  | 2
Hate           | AH | 1  | 0  | 30 | 68
Love           | ED | 92 | 6  | 0  | 0
Disappointment | BD | 2  | 24 | 50 | 18
Satisfaction   | RD | 82 | 8  | 5  | 2
Pity           | BD | 0  | 1  | 62 | 34
Admiration     | ED | 79 | 11 | 6  | 0
Reproach       | BD | 0  | 2  | 56 | 38
Gratitude      | ED | 82 | 7  | 5  | 3

Table 7: % of recognition of the combined emotional states

Table 9 shows the obtained results.

The results show that the universal emotions were all recognized as such, except fear, which was confused with surprise, and disgust, which was evaluated as sadness by the automatic recognizer.

In conclusion, the automatic recognizer correctly evaluated 82% of the total number of expressions, which can be considered satisfactory. The failed emotions were hope, pity, and reproach, which were classified as aversion. Relief was identified as the neutral face, which makes sense considering that this emotion is close to calm.

4 Computational Applications

4.1 Tangible interfaces for disabled and elderly users

Virtual characters have become a key aspect of assistive applications: the more believable they are, the more successful and helpful they become. This is the case of tangible interfaces for domotic environments intended to help users with daily tasks. That is why we propose the development of a tangible avatar in charge of assisting the elderly, with tele-assistance functionalities in chronic cases. For this specific implementation, we need a virtual character capable of:

I. Reaction capability, facing events and environment changes.

Emotion  | Emotional State Recognized
Anger    | AH (85%)
Sadness  | BD (93%)
Disgust  | BD (61%)
Surprise | ED (53%)
Joy      | ED (88%)
Fear     | AH (57%)

Table 8: Results of animated facial expressions

Expression     | IN            | Automatic Recognizer
Joy            | Joy           | Joy
Disgust        | Disgust       | Sadness
Anger          | Anger         | Anger/Aversion
Fear           | Fear          | Surprise
Surprise       | Surprise      | Surprise
Sadness        | Sadness       | Sadness
Admiration     | Joy/Surprise  | Surprise
Disappointment | Sadness       | Sadness
Gloating       | Joy/Surprise  | Surprise
Gratification  | Joy           | Joy/Neutral
Gratitude      | Joy           | Joy
Hate           | Anger         | Aversion/Anger
Hope           | Joy           | Aversion
Liking         | Joy/Surprise  | Joy
Love           | Joy/Surprise  | Surprise/Fear
Pity           | Sadness/Fear  | Aversion
Pride          | Joy           | Joy
Relief         | Joy           | Neutral
Remorse        | Sadness       | Sadness
Reproach       | Disgust/Anger | Sadness/Aversion
Resentment     | Disgust/Anger | Aversion/Anger
Satisfaction   | Joy           | Joy
Shame          | Sadness/Fear  | Sadness/Aversion

Table 9: Results of the objective evaluation

II. Planning capability and decision making, to carry out the tasks according to one or more objectives.

III. Efficiency in decision making and in carrying out tasks.

IV. Interaction capability and communication with other agents.

V. Capability to adapt to other environments.

Using the described computational model, which integrates a knowledge base of the environment and internal state of the avatar with an affective model that makes it more human-like, we can obtain a virtual assistant that responds coherently to the events of the world through non-verbal behaviors such as facial expressions.

The importance of introducing an affective avatar lies in the possibility of reducing the rejection that a domotic system can cause in a user not accustomed to new technologies, especially an elderly one. On one hand, having a human figure would facilitate the use of the application, because the elderly person would feel that another human being is in charge of the whole system and, besides, is empathic with the situation and with his/her sensations. On the other hand, this human figure has to be real enough to be accepted by the user as another “human being”, but visually attractive enough not to cause repulsion (uncanny valley [12]). This is the main reason why an exhaustive evaluation of the visual appearance and event-driven behavior of the character must be performed before taking the avatar to a real application.

4.2 Virtual trainer for developing social abilities

Another application is the development of a virtual character that facilitates the task of helping people to express, or suppress, the expression of their emotions. This could be of use to psychologists who need a base model, in this case a virtual one, to show how the face should appropriately express certain emotions and emotional states. By supplying a framework with a list of possible events that the patient may experience in daily life (using the semantic knowledge model), the avatar produces a suitable facial expression for the emotional state resulting from a given event. Thus, the user can see the correct expression and, with the help of the expert in charge, notice which facial zones are involved and emulate the expression.

As this is a novel application, further research is needed to determine how a correct implementation should be carried out, and how patients would react to an intelligent virtual character that teaches them how to behave in real life.

5 Conclusions and Future Work

Designing and implementing an affective virtual character that can be empathic, intelligent, and autonomous is not an easy task because of its lack of objectivity. Nevertheless, we aimed to implement a knowledge base through a semantic model whose outputs (emotions as responses to events) were used as inputs of an affective model that leads the character to emotional states expressed by means of facial expressions.

From the evaluation we concluded that the obtained emotional states were coherent with the elicited emotions that produced them. Also, these states were recognized by users not familiar with the context. Nonetheless, the results indicated that expressions for the character with the extraversion personality trait needed to be more distinct. In our experiment on the recognition of emotional states, subjects can be biased by one of these three parameters, which is why a refinement of the evaluation is needed in order to obtain more accurate results. However, the fact that users recognized emotional states in a face in 80% of the cases (16 out of 20 situations) is a novelty and a step forward in the generation of more realistic and humanized virtual characters.

Recognition of emotions in images of facial expressions is not an easy task. Diversity in psychological theories leads to a lack of universality in the description of a facial expression for an intermediate emotion. In addition, the lack of context, voice, or movement makes the recognition of emotions harder in images than in animations.

The work presented makes several important contributions: a versatile semantic model able to represent a number of different environments and situations in which the character plays a leading role; an affective model for the generation of emotional states and associated emotions under the influence of personality traits; the generation and visualization of facial expressions corresponding to emotions associated with an emotional state; and the evaluation of the integrated model containing a knowledge base and an affective model, as well as the subjective and objective evaluation of the generated facial expressions.

In the near future we will implement and evaluate the behavior of the characters inside a tangible interface, as well as in an application to enhance social abilities.

Acknowledgements

This work is subsidized by the national projects TIN2007-67993 and TIN2007-67896 from the MCYT, Spanish Government. In addition, J. Varona's contract is funded by the Ministry of Education of Spain within the framework of the Ramon y Cajal Program, and co-funded by the European Social Fund.

References

[1] P. Ekman. Facial expression and emotion. American Psychologist, 48(4):384–392, 1993.

[2] A. Ortony, G. Clore, and A. Collins. The Cognitive Structure of Emotions. Cambridge University Press, 1988.

[3] D. Arellano, J. Varona, and F. J. Perales. Generation and visualization of emotional states in virtual characters. Computer Animation and Virtual Worlds (CAVW), 19(3-4):259–270, 2008.

[4] I. Lera, D. Arellano, J. Varona, C. Juiz, and R. Puitganer. Semantic model for facial emotion to improve the human-computer interaction in AmI. In 3rd Symposium of Ubiquitous Computing and Ambient Intelligence 2008, pages 139–148, 2009.

[5] T. R. Gruber. A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2):199–220, 1993.

[6] A. Mehrabian. Pleasure-arousal-dominance: A general framework for describing and measuring individual differences in temperament. Current Psychology, 14:261–292, 1996.

[7] R. R. McCrae and P. T. Costa. Validation of a five-factor model of personality across instruments and observers. Journal of Personality and Social Psychology, 52:81–90, 1987.

[8] P. Gebhard. ALMA: A layered model of affect. In AAMAS '05: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, pages 29–36, USA, 2005. ACM.

[9] C.M. Whissell. The dictionary of affect in language. New York: Academic Press, 1989.

[10] V. Vinayagamoorthy, M. Gillies, A. Steed, E. Tanguy, X. Pan, C. Loscos, and M. Slater. Building expression into virtual characters. Eurographics, 2006.

[11] E. Cerezo, I. Hupont, C. Manresa-Yee, J. Varona, S. Baldassarri, F. J. Perales, and F. J. Seron. Real-time facial expression recognition for natural interaction. In IbPRIA 2007, pages 40–47, 2007.

[12] T. Geller. Overcoming the uncanny valley. Computer Graphics and Applications, IEEE, 28(4):11–17, 2008.