A pre-print version; final version published in D. Dietrich et al. (Eds.): Simulating the Mind. Springer 2009. http://www.springer.com/springerwiennewyork/computer+science/book/978-3-211-09450-1

A computational model of affects

Mika Turkia
University of Helsinki
turkia@cs.helsinki.fi

Abstract—Emotions and feelings (i.e. affects) are a central feature of human behavior. Due to the complexity and interdisciplinarity of affective phenomena, attempts to define them have often been unsatisfactory. This article provides a simple logical structure in which affective concepts can be defined. The set of affects defined is similar to the set of emotions covered in the OCC model [1], but the model presented in this article is fully computationally defined, whereas the OCC model depends on undefined concepts.

Following Matthis [2], affects are seen as unconscious, emotions as preconscious and feelings as conscious. Affects are thus a superclass of emotions and feelings with regard to consciousness. A set of affective states and related affect-specific behaviors and strategies can be defined with unconscious affects only. In addition, affects are defined as processes of change in the body state that have specific triggers. For example, an affect of hope is defined as a specific body state that is triggered when the agent becomes informed about a future event that is positive with regard to the agent's needs.

Affects are differentiated from each other by the types of causing events. Affects caused by unexpected positive, neutral and negative events are delight, surprise and fright, respectively. Affects caused by expected positive and negative future events are hope and fear.
Affects caused by expected past events are as follows: satisfaction results from a positive expectation being fulfilled, disappointment results from a positive expectation not being fulfilled, fears-confirmed results from a negative expectation being fulfilled, and relief results from a negative expectation not being fulfilled.

Pride is targeted towards a self-originated positive event, and shame towards a self-originated negative event. Remorse is targeted towards a self-originated action causing a negative event. Pity is targeted towards a liked agent experiencing a negative event, and happy-for towards a liked agent experiencing a positive event. Resentment is targeted towards a disliked agent experiencing a positive event, and gloating towards a disliked agent experiencing a negative event. An agent is liked/loved if it has produced a net utility greater than zero, and disliked/hated if the net utility is lower than zero. An agent is desired if it is expected to produce a positive net utility in the future, and disliked if the expected net utility is negative.

The above model of unconscious affects is easily implementable computationally, and may be used as a starting point in building believable simulation models of human behavior. The models can be used as a starting point in the development of computational psychological, psychiatric, sociological and criminological theories, or in e.g. computer games.

I. INTRODUCTION

In this article, computationally trivial differentiation criteria for the most common affects for simple agents are presented (for an introduction to the agent-based approach see e.g. [3]). The focus is on providing a simple logical or computational structure in which affective concepts can be defined.

Manuscript received January 31, 2008; revised March 15, 2008.
The set of affects defined is similar to the set of emotions presented in the OCC model [1], which has been a popular emotion model in computer science. However, as e.g. Petta points out, it is only partially computationally defined [4]. For example, the definitions of many emotions are based on the concepts of standards and norms, but these concepts are undefined. These limitations have often not been taken into account. The OCC model may be closer to a requirements specification than to a directly implementable model. Ortony himself has later described the model as too complicated and proposed a simpler model [5], which may however be somewhat limited.

The missing concepts are however definable. In this article, the necessary definitions and a restructured model similar to the OCC model are presented. A simple implementation of the structural classification model is also presented.

The primary concept is that of a computational agent, which represents the affective subject. An agent is defined as possessing a predefined set of goals, e.g. self-survival and reproduction. These goals form the basis of subjectively experienced utility. An event fulfilling a goal has a positive utility; correspondingly, an event reducing the fulfillment of a goal has a negative utility. All other goals may be seen as subgoals derived from these primary, evolutionarily formed goals. Utility is thus seen as a measure of evolutionary fitness.

An agent is defined as logically consisting of a controlling part (the nervous system) and a controlled part (the rest of the body). To be able to control its environment (through controlling its own body, which performs actions that affect the environment), the agent forms a model of the environment. This object model consists of representations of previously perceived objects associated with the utilities they have produced.
All future predictions are thus based solely on past experiences.

An affect is defined as a process in which the controlling part, on perceiving a utility-changing event in the context of its current object representations (object model), produces a corresponding evolutionarily determined bodily change, i.e. transfers the agent to another body state.

Specific behaviors and strategies can be associated with specific affective states. The set of possible affective states and associated actions may be predefined (i.e. innate) or learned. Innate associations may include e.g. aggression towards the causing object of a frustrating event (i.e. aggression as an action associated with the frustrated state). Learned actions are acquired by associating previously experienced states with the results of experimented actions in these states.

Emotions and feelings are defined as subclasses of affects [2]. Emotions are defined as preconscious affects and feelings as conscious affects. Being conscious of some object is preliminarily defined as the object being a target of attention (see e.g. [6]). Correspondingly, being preconscious is being capable of attending to the object when needed. In contrast, unconscious affects are processes that cannot be perceived at all due to a lack of sensory mechanisms, or otherwise cannot be attended to, due to e.g. limitations or mechanisms of the controlling part. Thus, emotions and feelings are conceptualized as requiring the agent to be capable of being conscious of changes in its body states [7].
As an affect was defined as a physiological state change triggered by a perception of a predefined event or object constellation, we can also define a system where agents are not conscious of their affects but still have them. These unconscious affects suffice to produce a set of states to which affect-specific behaviors and strategies may be bound. In effect, such agents are affective but not emotional.

Relations between the concepts of affect, emotion, feeling and consciousness were defined above. Another question is the differentiation of affects from each other. This is achieved by classifying the triggering object constellations. The constellations include the state of the agent, which in turn includes the complete history of the agent. In other words, the idea is the following: affects are differentiated from each other by both the event type and the contents of the current object model, i.e. by the structure of the subjective social situation. This subjective social situation is formed by the agent's life history, i.e. the series of all perceived events.

To preliminarily bind this conceptual framework to psychoanalytic object relations theory (e.g. [8], [9]), we note that objects and their utilities in relation to the self form a network of object relations.

II. SIMULATION ENVIRONMENT

A. Ontological definitions

Let us assume a world that produces a series of events. The world contains objects, some of which are alive. Living objects that are able to act on the world are called agents. Agents' actions are a subset of events. An event consists of a type indicator and references to causing object(s) and target object(s).

B. An agent as a control system

An agent is seen as a control system, which consists of two parts: a controlled system and a controlling system (in computer science, this idea has been presented by at least [10]).
This division can be made on a functional or logical level only. Let us thus define that an agent's controlling system is the brain and the associated neural system, and the controlled system is the body. Physically the controlling system is part of the controlled system, but on a functional level they are separate, although there may be feedback loops, so that the actions of the controlling system change its own physical basis, which in turn results in modifications of the rules of control.

An agent usually experiences only a part of the series of events in the world; that part is the environment of the agent. Experiencing happens through perceptions that contain events. These events are evaluated with regard to the target agent's needs. The value of an event for an agent's needs can be called its utility (the utility concept used here is similar to the utility concept used in reinforcement learning [11]). The utility of an event or action is associated with the causing object. In a simplified model, fixed utilities can be assigned to event types; in this case the evaluation phase is omitted.

The basis of the utilities represented in the controlling system (i.e. the mind) are the needs of the controlled system (i.e. the body). Utilities direct actions to attempt the fulfillment of the needs of the body. Utility is thus to be understood as a measure of the change in evolutionary fitness caused by an event. An agent attempts to experience as much value as possible (maximize its utility) during the rest of its lifetime. Maximizing utility maximizes evolutionary fitness, i.e. self-survival and reproduction, according to a utility function preset by the evolution of the species in question.

In order to attempt this utility maximization, the agent has to be able to affect the environment (to act), so that it can pursue highly valued events and try to avoid less valued events.
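The event evaluation and utility bookkeeping described above can be sketched in a few lines of code. This is an illustrative sketch, not the paper's implementation: the class and method names (`Event`, `Agent`, `perceive`) are assumptions, and it uses the simplified scheme in which a fixed utility accompanies each event and is recorded against the causing object.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Event:
    """An event: a type indicator plus references to causing and target objects."""
    kind: str
    cause: str
    target: str
    utility: float  # simplified model: a fixed utility accompanies the event

@dataclass
class Agent:
    """The controlling part: an object model mapping each known object
    to the utilities of the events it has caused."""
    name: str
    model: dict = field(default_factory=dict)

    def perceive(self, event: Event) -> None:
        # Associate the event's utility with its causing object.
        self.model.setdefault(event.cause, []).append(event.utility)

    def expected_utility(self, obj: str) -> float:
        # Expectation: the average utility the object has produced so far.
        return mean(self.model[obj]) if obj in self.model else 0.0

    def expected_future_utility(self) -> float:
        # Sum of expected utilities over all known objects.
        return sum(self.expected_utility(o) for o in self.model)
```

For example, after perceiving two events of utilities 1 and 2 caused by the same object, the agent expects a utility of 1.5 from that object.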
It also has to be able to predict which actions would lead it to experience highly valued events and avoid low-valued (meaningless or harmful) experiences. To be able to predict, the agent has to have a model of the environment, which contains models of perceived objects associated with their utilities.

As the agent can never know whether it has seen all possible objects and event types of the world, all information is necessarily uncertain, i.e. probability-based in its nature. Therefore, when an agent performs an action, it expects a certain utility. The actual resulting utility may differ from the expected one, due to the necessarily limited predictive capability of the internal model of the environment.

At any moment, an agent selects and performs the action with the highest expected utility. Thus, every action maximizes subjective utility. Also, any goal is derived from the primary goals (needs), i.e. self-survival and reproduction.

Lifetime utility means the sum of all value inputs that the agent experiences during its lifetime. Past utility means the sum of already experienced value inputs. Future utility means the sum of value inputs to be experienced during the rest of the lifetime. Since this is unknown, it can only be estimated based on past utility. This estimate is called expected future utility. Expected lifetime utility is then the sum of past utility and expected future utility. An agent maximizes expected future utility. If it maximized expected lifetime utility instead, it would die when the expected future utility falls below zero.

C. Temperament and personality

Personality is the consequence of learned differences expressed in behavior. Thus, personality is determined by the learned contents of the controlling system. Temperament is the consequence of physiological differences expressed in behavior.

D. Norms and motivation

Norms are defined as learned utilities of actions, i.e. expected utilities of actions. Fundamentally, norms are based on physiological needs, as this is the only way to bootstrap (get starting values for) the values of actions. Utilities can be learned from feedback from the agent's own body only. However, the utilities determined by internal rewards may be modified by social interaction: an action with a high internal reward may cause harm to other agents, who then threaten or harm the agent, lowering the utility of the action to take into account the needs of the other agents. Thus, the norms of an individual usually partly express the utilities of other agents. In a simplified model there is no need to represent the two components separately. They may however be separated when modeling of internal motivational conflicts is required.

A standard is defined as a synonym for norm, though as a term it has a more personal connotation, i.e. internal rewards may dominate over external rewards. Motivation equals the expected utility of an action. Motivation and norm are thus synonymous.

E. The processing loop

The processing loop of the agent is the following: perceive new events, determine their utilities, update the object model, and perform the action maximizing utility in the current situation. As a new event is perceived, the representation of the causing object is updated to include the utility of the current event. The object representation currently being retrieved and updated is defined as being the target of attention. After evaluating all new objects, the object with the highest absolute utility (of all objects in the model) is taken as a target of attention.

F. Object contexts

If an agent's expected future utility, which it attempts to maximize, is calculated as a sum of the utilities of all known objects, it can change only when new events are perceived. However, if it is calculated from conscious objects only, or taking the conscious objects as a starting node and expanding the context from there, keeping low-valued objects unconscious becomes motivated. Now e.g. the idea of repression becomes definable.

Thus, the introduction of an internal object context enables internal dynamics of the expected future utility. Two kinds of dynamics emerge: the first related to new objects, and the second related to context switches, which happen during the processing of new events.

It can also be defined that agents can expand the context. This expansion is conceptualized as an action, which is selected according to the same principle as other actions, i.e. when it is expected to maximize future utility. This may happen e.g. during idle times, i.e. when there are no new events and all pending actions have been performed, or when several actions cannot be prioritized over each other. Expansion or contraction of the context causes context switches and thus potentially changes in expected future utility.

An especially interesting consequence is that the idea of context expansion during idle times leads to the amount of stress being related to the size of the context (an "emergent" feature). When the agent is overloaded, context expansion may not take first priority. It "does not have time" to expand the context, i.e. to think things through. Therefore, consciousness of objects' features diminishes; consciousness "becomes shallow". This shallowness includes all object representations, including the self-representation.

Overloading also has another consequence.
New percepts must be evaluated and appropriate actions selected, but there may be no time to perform these actions, which are then queued. The priorities of the queued actions may change when new events are evaluated. Therefore, at each time point a different action has first priority. Actions taking more time than one time unit are started but not finished, since at the next time point some other action is more important. Therefore, the agent perceives that it is "too busy to do anything", a common feature of burnout.

In practice, expansion is done by traversing the object network from the currently prioritized object towards higher utility. For example, an agent has perceived a threatening object and thus expects a negative event in the near future. It targets an affect of fear towards the object. As a result, its body state changes to the "fear" state. One way of conceptualizing action selection would be to think that a list of actions is browsed to see if there is an action that would cancel the threat. Another way is to think of the action as a node in the object network. Taking the feared object as a starting point, the network is traversed to find a suitable action represented by a node linked with the feared object, the link representing the expected utility of the node. If the node has the highest utility of all the nodes starting from this object, it is traversed to. If the node is an action, it is performed. If it is another object, the expansion continues.

As the expected future utility is calculated from the objects in the context, the threat is cancelled when an action with a high enough utility is found, although it may not yet be performed (the utility should be weighted by the probability of succeeding in performing the action). This in effect corresponds to a discounting of expected utility.
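The greedy traversal just described can be sketched as follows, under the assumptions that utilities are attached to nodes rather than to explicit link weights and that a flag marks action nodes; all names here are hypothetical illustrations, not the paper's code.

```python
def select_action(start, utility, is_action, links):
    """Traverse the object network from a threatening object towards
    higher utility until an action node is reached.

    start: the currently prioritized (e.g. feared) object
    utility: dict mapping node -> expected utility
    is_action: dict mapping node -> whether the node is an action
    links: dict mapping node -> linked nodes
    """
    current, visited = start, {start}
    while True:
        neighbours = [n for n in links.get(current, ()) if n not in visited]
        if not neighbours:
            return None          # no suitable action found; the threat stands
        best = max(neighbours, key=lambda n: utility[n])
        if is_action[best]:
            return best          # an action that may cancel the threat
        current = best           # another object: the expansion continues
        visited.add(best)
```

Starting from a feared object linked to another object which in turn links to a high-utility action, the traversal returns that action.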
Another, probably better, option would be to take the affective state as a starting node. If the agent has previously experienced a state of fear, it has a representation of this state (an object), and actions associated with the state.

Personality was previously defined as the learned contents of the controlling part of the control system. Personality is therefore formed by adding new objects and their associated utilities to the object network. In the psychoanalytic tradition this is called internalization [9]. The continuing process of internalizing new, more satisfying functions of the self may be called progression. In progression, an agent's focus shifts to the new objects, since the old objects turn out less satisfying in comparison. Correspondingly, if the new functions later turn out to be useless and better ones cannot be found, the agent turns back to the old objects; this may be called regression.

III. AFFECTS, EMOTIONS AND FEELINGS

A. Affect as a bodily process

Affects are defined here as predefined bodily processes that have certain triggers. When a specific trigger is perceived, a corresponding change in body state follows.

Fig. 1. Relations between event-related concepts.

This change may then be perceived or not. If it is perceived, the content of the perception is the process of change. In other words, an affect is perceived when the content of the perception is a representation of the body state in transition, associated with the perception of the trigger. This is essentially the idea of Damasio [7].

The triggers are not simple objects, but specific constellations of object relations. A certain constellation triggers a certain emotion.
For example, fear is triggered when a negative event is expected to happen in the future. There is thus an object relation between the agent and the feared object, in which the object has a negative expected utility. This relation may be seen as an object constellation. In principle the current affect is determined by the whole history of interactions between the agent and the objects, not just the current event, since if e.g. the expected utility was very high in the beginning, a small negative change would not suffice to change the object relation from hope to fear. Alternatively, if an agent knows how to avoid the threat (has an appropriate action), then fear is removed when a representation of the suitable action is retrieved from memory. In such a case the agent was expecting to be able to cancel the effects of the expected negative event, and expected utility rises back to a level corresponding to the sum of the utilities of the event and the reparative action.

These differences are however related to triggers only. What makes an experience of fear different from an experience of e.g. hope are the perceived differences in the bodily reactions associated with these emotions, i.e. the representation of the body state associated with one emotion is different from the representation of another emotion. This is essentially the 'qualia' problem, which in this context would be equal to asking why e.g. fear feels like fear, or what gives fear the quality of fearness. The solution is that the 'quality' of the feeling of e.g. fear is just the specific, unique representation of the body state. There cannot be any additional aspects in the experience; what is experienced (i.e. the target of attention) is simply the representation.

B. Differentiating by levels of consciousness

Relations between affects, emotions and feelings are defined according to Matthis, who defines affects as a superclass of emotions and feelings [2]. Differentiation is made with respect to levels of consciousness. Emotions are preconscious affects, whereas feelings are conscious affects. There may also be affects that cannot be preconscious or conscious (i.e. cannot be perceived); these are labeled unconscious.

Now we seem to face the problem of defining consciousness. However, the agent only has to be conscious of some objects. E.g. Baars has suggested that being conscious of an object corresponds to that object being the target of attention [6]. Let us thus define that conscious contents are the contents that are the target of attention at a given moment. Correspondingly, preconscious are the contents that can be taken as the target of attention if they become the most important at a given moment.

When an object is perceived (as a part of an event), an agent searches its internal object model to see if the object is known or unknown. It then attempts to estimate the utility of the event (good or useful, meaningless, bad or harmful) by using the known utility of the object. The internal model of this object is then updated with the utility of the current event. If there is no need to search and no unchecked objects are present, attention is targeted towards the object or action which has the highest absolute value of expected utility. The idea behind this is that utility is maximized by pursuing the highest positive opportunity or dodging the worst threat. If only one goal is present, a higher positive event cancels out a lesser negative event. Multiple goals create more complicated situations, which are not discussed in this article.

C. Multilayered controlling systems

For body states in transition, in association with a perceived triggering object constellation, to be taken as targets of attention, the controlling system needs an ability to inspect its own structural configurations and their changes in time. Therefore another layer is needed, which records the states of a lower layer of the controlling system. These records of state change sequences can then be handled as objects, and attention can be targeted at them, thus making them preconscious or conscious.

Unconscious affects are then first-layer affects that cannot be perceived by the second layer. This may be due to e.g. fixed structural limitations in the introspection mechanism. Defined this way, we can also say that there may be affective agents that are not emotional. In particular, all agents with a one-layer controlling system would be affective only. An affective agent can thus be fully unconscious. However, an emotional or a feeling agent needs consciousness.

D. Differentiating by object constellations

The classification presented here contains mostly the same affects as the OCC model [1], but the classification criteria differ. The classification is presented in figure 2, which may be compared with the classification proposed in the OCC model [1, p. 19].

The differentiation criteria are:
- nature of the target: whether the target of the affect is an event, or an object or agent;
- time: whether the event has happened in the past or is expected in the future;
- expectedness: whether the object was known or unknown, or whether a past event was expected or unexpected;
- goal correspondence: whether the event contributed positively or negatively to the agent's goals;
- self-inflictedness: whether the event was self-inflicted or caused by others;
- relation to the target: whether the target object or agent of the event was liked or disliked.

A simplified implementation of these criteria can be constructed as follows: agents do not form memories of events as a whole, but only record the utilities of causing objects. Future expectations are thus implicit and consist of object utilities only. In other words, agents do not expect specific events, but expect a specific object to have a utility that is the average of the previous events created by it. An object is expected if a model of it exists, i.e. it has been perceived before as a causing object. Goal correspondence is implicit in the utilities, as agents only have one goal: maximization of utility. Goal structure and goal derivatives are thus abstracted away in this simplification.

E. Affects related to events

The first differentiation criteria for event-related affects are: whether the event was targeted towards self or towards another; and whether the originator of the event was self or another.

1) Events targeted towards self:

a) Unexpected past events: Fright is an affect caused by a negative unexpected event. Correspondingly, delight is an affect caused by a positive unexpected event. Surprise is caused by a neutral unexpected event. Whether or not it is an affect is often disputed.
If it is associated with e.g. memory-related physiological changes, it would be an affect. Another criterion is that it is associated with a typical facial expression; in this sense it should be classified as an affect.

b) Expected future events: An expected positive future event causes hope. Correspondingly, an expected negative future event causes fear.

c) Expected past events: Relief is an affect caused by an expected negative event not being realized. Disappointment is an affect caused by an expected positive event not being realized. Satisfaction is an affect caused by an expected positive event being realized as expected. Fears-confirmed is an affect caused by an expected negative event being realized as expected.

2) Events targeted towards others:

a) Disliked objects: Envy is targeted towards a disliked agent that experienced a positive event. Gloating is targeted towards a disliked agent that experienced a negative event.

b) Liked objects: Pity is targeted towards a liked agent that experienced a negative event. Happy-for is targeted towards a liked agent that experienced a positive event.

3) Self-caused events: Remorse is targeted towards a self-originated action that caused a negative event to self or someone liked; events positive for disliked objects are considered negative for self. Pride is targeted towards a self-originated action that caused a positive event to self or a liked object; events negative for disliked objects are considered positive for self. Shame is targeted towards self when a self-originated action caused a negative event.

4) Events caused by others: Gratitude is targeted towards an agent that caused a positive event towards self or someone whom the self depends on (i.e. likes). Correspondingly, anger is targeted towards an agent that caused a negative event.

F. Affects related to agents and objects

In addition to event-related affects, the originators and targets of events are also targets of affects.

1) Affects related to past consequences: Consequences of events cause the originators of the events to be liked or disliked. Like and dislike can be thought of as aggregate terms, taking into account all events caused by an agent. Dislike or hate is targeted towards an agent who has on average produced more harm than good. Accordingly, like or love is targeted towards an agent who has produced more good than harm. The difference between e.g. like and love is one of magnitude, not of quality; i.e. love is "stronger" liking. A possibly more appropriate interpretation of love as altruism, i.e. as prioritizing the needs of others instead of one's own needs, is currently out of the scope of this model.

2) Affects related to future prospects: Future prospects are estimated on the basis of past experiences; therefore they are determined by the past. However, if we set the point of view on the future only, we can differentiate disgust from dislike, and like from desire. Desire is an affect caused by a positive future expectation associated with an object. Accordingly, disgust is an affect caused by a negative future expectation.

3) Identification-related affects: Identification-related affects are currently out of the scope of the computational implementation, as the concept of identification has not been implemented. An agent wants to identify with an object that has capabilities that would fulfill its needs; in other words, if the object can perform actions that the agent would like to learn. Admiration is defined as an affect targeted towards an agent or object that the agent wants to identify with. Accordingly, reproach is targeted towards an object that the agent does not want to identify with.
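The event-related criteria of Section III-E above can be condensed into a small classifier. This is an illustrative sketch rather than the paper's implementation: goal correspondence is encoded as a sign (-1, 0, 1), and the function names and signatures are assumptions.

```python
def self_event_affect(sign, known, future=False, expected_sign=0):
    """Affects for events targeted towards self, differentiated by
    expectedness, time and goal correspondence (Section III-E.1)."""
    if future:
        # expected future events
        return "hope" if sign > 0 else "fear"
    if not known:
        # unexpected past events, classified by sign
        return {1: "delight", 0: "surprise", -1: "fright"}[sign]
    # expected past events: compare the outcome with the expectation
    if expected_sign > 0:
        return "satisfaction" if sign > 0 else "disappointment"
    return "fears-confirmed" if sign < 0 else "relief"

def other_event_affect(sign, liked):
    """Affects about events experienced by another agent (Section III-E.2)."""
    if liked:
        return "happy-for" if sign > 0 else "pity"
    return "envy" if sign > 0 else "gloating"
```

For instance, a negative expected future event yields fear, and a positive event experienced by a disliked agent yields envy, matching the classification above.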
4) Self-referencing concepts: The above concepts referred to external objects or to relations between objects. Affective concepts can also refer to the agent itself: e.g. mood refers to the state of the agent itself. Examples of mood are happiness, sadness, depression and mania. A simple definition of happiness could be that the average utility of all events (or of the events in the current context) is above zero (below zero for sadness, respectively). Depression could be defined as a condition where no known objects have a positive utility.

G. Affects and time

Mood is often thought of as being somehow qualitatively different from emotions. In this paper, the longer duration of mood is thought to be simply a consequence of the stability of the contents of the object model, which in turn depends on the environment. If the environment does not affect the relevant needs, the affective state does not change.

Fig. 2. Affects in relation to each other.

IV. DEMONSTRATION

A simple browser-based implementation is available at http://www.cs.helsinki.fi/u/turkia/emotion/emotioneditor. In this simulation the user provides events that change the affective states of three agents. An example run is as follows. Agent 1 gives agent 2 a utility of 1. Since in the beginning the agents do not have models of each other, this positive event is unexpected, and agent 2 is thus delighted. Also, it now begins to expect a positive utility from agent 1, i.e. begins to like agent 1.
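The update step in this first exchange, an unexpected event (no model of the source exists yet) followed by recording an expectation, can be sketched as follows. The `Agent` class and the last-value expectation rule are illustrative assumptions, not the paper's code.

```python
# Hedged sketch of one expectation-update step. Assumes each agent stores
# the expected utility from every other agent; storing the last observed
# utility is a simplification of the paper's model.

class Agent:
    def __init__(self, name):
        self.name = name
        self.expectations = {}  # source agent -> expected utility

    def receive(self, source, utility):
        """React to an event from `source` and update the expectation."""
        if source not in self.expectations:
            # No model of the source yet: the event is unexpected.
            if utility > 0:
                affect = "delight"
            elif utility < 0:
                affect = "fright"
            else:
                affect = "surprise"
        else:
            # Expected event: compare the outcome with the expectation.
            expected = self.expectations[source]
            if expected > 0:
                affect = "satisfaction" if utility >= expected else "disappointment"
            else:
                affect = "fears-confirmed" if utility <= expected else "relief"
        self.expectations[source] = utility
        return affect

agent1, agent2 = Agent("1"), Agent("2")
print(agent2.receive(agent1, 1))  # unexpected positive event -> delight
```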
In turn, agent 2 gives agent 1 a utility of 1; agent 1 is similarly delighted. Now, agent 3 gives agent 1 a utility of zero. Agent 1 is surprised, and its attitude towards agent 3 is set to neutral. Agent 3 then gives a utility of -1 to agent 2, who is frightened and begins to dislike agent 3. Although agent 1 is an outsider in this event, it reacts to it, since it likes agent 2. Thus, agent 1 targets an affect of pity/compassion towards agent 2 and anger towards agent 3. Agent 2 now gives a utility of -2 to agent 3, who is frightened and begins to dislike agent 2. Agent 2 gloats over the misfortune of agent 3 and feels pride in its own action. Agent 1 feels pity towards 3 and anger at 2 (due to a neutral attitude being defined as equal to liking). Finally, agent 1 gives a utility of 2 to agent 3, who is delighted. Agent 1 feels happy for agent 3 and pride in its own action. Agent 2 feels envy towards the disliked agent 3 and anger towards agent 1.

At this point, all agents have expectations of each other. Agent 2 now accidentally gives a utility of 2 to the disliked agent 3, after which it feels remorse and anger towards itself, and envy towards 3. Agent 1 feels happy for 3 and gratitude towards 2.

At this point, the agents do not have utilities for themselves. To demonstrate affects related to expected past events, agent 2 gives itself a utility of 2. It is now delighted, likes itself, and expects a similar result in the future. When performing the same event again, agent 2 feels satisfaction and joy. However, now giving itself a utility of 1, it is disappointed and feels remorse. As a result of the previous event history, agent 3 expects a utility of 2 and is in a good mood. It does not have expectations of itself. When giving itself a utility of -4, its average expectations change to -2, its expectations towards itself to -4, and its mood to bad.
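The mood bookkeeping in this run follows the earlier definitions: a positive average expectation gives a good mood, a negative one a bad mood, and the absence of any positively valued object corresponds to depression. A sketch of this rule (the function name and aggregation over a dictionary of expectations are assumptions):

```python
# Hedged sketch of mood as a function of the agent's utility expectations
# towards all known objects; the aggregation rule is an assumption.

def mood(expectations):
    """expectations: mapping from known objects to expected utilities."""
    if not expectations:
        return "neutral"
    if all(u <= 0 for u in expectations.values()):
        # No known object has a positive utility: depression.
        return "depressed"
    avg = sum(expectations.values()) / len(expectations)
    if avg > 0:
        return "good"
    if avg < 0:
        return "bad"
    return "neutral"

print(mood({"agent1": 2}))               # positive expectation -> good
print(mood({"agent1": 2, "self": -4}))   # negative average -> bad
print(mood({"self": -4}))                # no positive objects -> depressed
```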
When giving itself a utility of -4 again, its fears are confirmed. When giving itself a utility of -2, it feels relief. The event sequence was thus (1,2,1), (2,1,1), (3,1,0), (3,2,-1), (2,3,-2), (1,3,2), (2,3,2), (2,2,2), (2,2,2), (2,2,1), (3,3,-4), (3,3,-4), (3,3,-2), where the first element of each triple is the causing agent, the second is the target agent, and the third is the utility of the event. As mentioned before, the resulting intersubjective utility expectations form a network of object relations.

V. CONCLUSION

This article presented definitions of affects that remove the limitations of the OCC model and are easily computationally implementable. They can be used as a starting point in the development of computational psychological, psychiatric, sociological and criminological theories, or e.g. in computer games.

ACKNOWLEDGMENT

The author would like to thank Krista Lagus (Helsinki University of Technology), Matti Nykänen (University of Kuopio), Ari Rantanen (University of Helsinki) and Timo Honkela (Helsinki University of Technology) for support.

REFERENCES

[1] A. Ortony, G. L. Clore, and A. Collins, The Cognitive Structure of Emotions. Cambridge: Cambridge University Press, 1988.
[2] I. Matthis, "Sketch for a metapsychology of affect," The International Journal of Psychoanalysis, vol. 81, pp. 215–227, 2000.
[3] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. London: Prentice Hall, 1995.
[4] P. Petta, "Identifying theoretical problems," in Proceedings of the First HUMAINE Workshop on Theories and Models of Emotion, June 2004. [Online]. Available: http://emotion-research.net/ws/wp3/
[5] A. Ortony, "On making believable emotional agents believable," in Emotions in Humans and Artifacts, R. Trappl, P. Petta, and S. Payr, Eds. Cambridge: MIT Press, 2003, pp. 189–212.
[6] B. J. Baars, "Some essential differences between consciousness and attention, perception, and working memory," Consciousness and Cognition, vol. 6, pp. 363–371, 1997.
[7] A. R. Damasio, Looking for Spinoza: Joy, Sorrow and the Feeling Brain. Orlando: Harcourt, 2003.
[8] P. Fonagy and M. Target, Eds., Psychoanalytic Theories: Perspectives from Developmental Psychopathology. London: Whurr, 2003.
[9] V. Tähkä, Mind and Its Treatment: A Psychoanalytic Approach. International Universities Press, 1993.
[10] A. Sloman, "The mind as a control system," in Philosophy and the Cognitive Sciences, C. Hookway and D. Peterson, Eds. Cambridge University Press, 1993, pp. 69–110.
[11] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. London: MIT Press, 1998.
