Cooperative Interface for a Swarm of UAVs

Sylvie Saget 1,2, François Legras 1,2, and Gilles Coppin 1,2

1 Institut TELECOM; TELECOM Bretagne; UMR CNRS 3192 Lab-STICC, France
2 Université Européenne de Bretagne, France

Abstract. After presenting the broad context of authority sharing, we outline how introducing more natural interaction in the design of the ground operator interface of UV systems should help in allowing a single operator to manage the complexity of his/her task. Introducing new modalities is one of the means in the realization of our vision of next-generation GOI. A more fundamental aspect resides in the interaction manager, which should help balance the workload of the operator between mission and interaction, notably by applying a multi-strategy approach to generation and interpretation. We intend to apply these principles to the context of the Smaart prototype, and in this perspective, we illustrate how to characterize the workload associated with a particular operational situation.

1 Introduction

Unmanned Vehicle (UV) Systems will evolve considerably within the next two decades. In the current generation of UV Systems, several ground operators operate a single vehicle with limited autonomous capabilities, whereas in the next generation of UV Systems, a ground operator will have to supervise a system of several cooperating vehicles performing a joint mission, i.e. a Multi-Agent System (MAS) [10, 12]. In order to enable mission control, vehicle autonomy will increase [7] and will require new forms of human-system interaction.

In this context, we have developed a prototype multi-UV ground control station (Smaart) that allows an operator to supervise the surveillance of a simulated strategic airbase by a swarm of rotary-wing UVs [13].
In this system, the autonomous behavior of the UVs is generated by means of a dual digital pheromone algorithm (a bio-inspired approach), and although we obtained interesting results, the operator-system interaction is rather basic: place beacons in the environment, dispatch UVs towards these beacons, select the mode of information display, etc. Despite being technically an authority sharing system, Smaart is only the first step in the development of a full-fledged authority sharing control system for swarms of UVs. Two main challenges have to be faced in order to design such an efficient and realistic UV System:

1. obviating the negative side effects of automation: workload mitigation, loss of situation awareness, complacency, skill degradation, etc. [17];
2. decreasing the cognitive load induced for the ground operator. Operating UV systems is highly complex. Obviously, shifting to UV Systems with several vehicles will make mission and vehicle control more complex [6]. In addition, even though increasing vehicles' autonomy aims at decreasing the cognitive load induced by mission control for ground operators, workload mitigation may lead to even higher workload [17, 6].

While using a UV System's interface, the ground operator is engaged in at least two activities: mission control and interaction. However, the great majority of studies focus on mission control. In this paper, we claim that interaction design must be considered as a field of research in itself. In this perspective, like [25, 9], we claim that there is a need to enhance the naturalness of the ground operator interface rather than only improving mission realization and control. But as soon as an interface provides "natural" input and/or output devices, non-understandings and misunderstandings may occur.
Then, in order to design efficient UV Systems, the problem of the robustness of the interaction has to be handled. Based on recent advances in Pragmatics and in Human-Computer Interface research [4, 22, 18, 19], we present a collaborative view of interaction dedicated to UV Systems. Considering interaction as a collaborative activity while designing an interface enhances its robustness [20] and opens the door to managing the global workload of the operator through a balance effect between mission load and interaction load [16].

2 Perspective on Authority Sharing

This section presents the backdrop of our research on authority sharing for unmanned systems control. The basic requirement for an authority sharing system is to provide several distinct operating modes to accomplish a given task or function. Fig. 1 illustrates how we represent the different operating modes for three tasks, decomposed along John Boyd's OODA loop [23]. In this example, the system has only one operating mode for the Observe stage of T1, but two alternative modes for Orient and Decide, and three modes for Act.

Fig. 1. Representation of the operating modes for three tasks (T1, T2, T3).

Each operating mode gives specific authority and responsibilities to the operator and the system. For example, the Act stage of the landing task of a UV could provide three modes: (a) full manual, i.e. the operator tele-operates the UV; (b) full auto, i.e. the vehicle lands automatically with no possible intervention of the operator; and (c) auto with veto, where the operator can disconnect the auto-pilot and handle the landing.

Fig. 2. Authority sharing concepts in single system–single operator interactions.

As we see in Fig. 2, in order to control the system at time t, an operating mode must be activated for each task and stage in OODA (filled cells on Fig. 1).
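The task/stage/mode structure of Figs. 1 and 2 can be sketched as a small data structure in which each OODA stage of a task offers one or more operating modes, exactly one of which is activated at a time. This is an illustrative sketch, not code from Smaart; the task and mode names come from the landing example above.

```python
from dataclasses import dataclass, field

OODA_STAGES = ("Observe", "Orient", "Decide", "Act")

@dataclass
class Task:
    """A task whose OODA stages each offer one or more operating modes."""
    name: str
    modes: dict = field(default_factory=dict)    # stage -> available modes
    active: dict = field(default_factory=dict)   # stage -> activated mode

    def activate(self, stage: str, mode: str) -> None:
        """Activate one operating mode for a stage (a filled cell of Fig. 1)."""
        if stage not in OODA_STAGES:
            raise ValueError(f"unknown OODA stage: {stage!r}")
        if mode not in self.modes.get(stage, ()):
            raise ValueError(f"{mode!r} is not an operating mode of {stage!r}")
        self.active[stage] = mode

# The landing example: three alternative modes for the Act stage.
landing = Task("landing",
               modes={"Act": ["full manual", "full auto", "auto with veto"]})
landing.activate("Act", "auto with veto")
print(landing.active)  # {'Act': 'auto with veto'}
```

To control the system at time t, such an activation would be performed for every task and every stage; selecting which mode to activate is itself the second-level authority-sharing decision.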
Therefore, the main question is: "which operating mode has to be selected at time t?" This corresponds to a second level of authority sharing, i.e. the authority to assign responsibilities (select operating modes), and must be implemented as some sort of decision process. Indeed, in a full-fledged authority-sharing system, the operators and the machines could have different preferences concerning which operating modes should be selected.

We can consider that the operator's and the system's internal representations can be broadly decomposed along the same three categories:

– models and representations of the situation: this corresponds to the representation of objects in the system's environment, knowledge about its laws, properties, etc.; this can be seen as a "world model";
– models and representations of the system: this corresponds to the current state of the system, its known capabilities, predictions about its evolution, etc.;
– models and representations of the operator: similarly, this represents the state of the operator, his/her abilities, performances, etc.

If one combines these categories with the two kinds of actors (machine and human), we obtain six distinct fields of research relevant to the development of authority sharing systems. Without being exhaustive, one can identify:

– work on human situational awareness (representation of the situation on the operator's side);
– work on trust in automation (representation of the system on the operator's side);
– cognitive and physiological modeling (representation of the operator on the system's side).

The system's and the operator's representations are not only fed by observation of the situation (as illustrated on Fig. 2), but also through interaction between human and machine.
Man-machine interaction can happen through several media (from classic mouse & keyboard or joystick to advanced haptic interfaces or dialogue), but whatever the chosen medium, it should tend to facilitate the convergence between the respective representations of the humans and machines involved. Again, we consider this another field of research in itself. One can note that efficient interaction will decrease the difficulty of the final decision-making process, as converging representations on the human's and machine's sides lead to converging preferences on which operating modes to select.

3 Naturalness

In the current generation of UV Systems, the Ground Operator Interface (GOI) is a traditional Graphical User Interface, such as [11]. These are based on input/output modalities such as drop-down menus or push buttons, with a constrained interaction language providing quantitative spatial information and interaction, etc. This interaction language is similar to the low-level command language for vehicles, with quantitative spatial information for example. Thus, GOI naturalness is poor. However, current work aims at enhancing the naturalness of interfaces [3]: integrating less-constrained interaction, at least a single natural modality as input [16, 9] (such as gesture, or spoken or written language) or output [9] (such as speech or haptic display), multi-modality, flexible interaction [12], providing qualitative spatial information and interaction, etc.

As soon as naturalness is introduced within the GOI, non-understandings – due to vagueness, ambiguity or underspecification – have to be managed. Interactive management of non-understandings follows from the collaborative nature of interaction (grounding) and requires a new GOI component: an interaction manager [20].
Such an approach has been used within the WITAS project [14] as well as within the GeoDialogue project [2], which relates to Geographical Information Systems.

Enhancing GOI naturalness has various benefits for UV Systems. First, a more natural interaction between an interface and its user generally enhances the efficiency of interaction, i.e. it reduces the cognitive load induced by the interaction for the user, as well as interaction time. For example, a data entry function based on vocal keyword recognition may require a single vocal utterance in the next generation of GOI, while it may require over twenty separate manual actions in the current generation. Moreover, natural display modalities (typically, haptic display) also aim at making up for the "sensory isolation" of the ground operator [9]. The operator's sensory isolation is due to the fact that he is generally not collocated in the same physical space as the vehicles [8, 21]. This leads to a lack of situation awareness, among other negative effects.

Supervisory control of UV systems mainly involves spatial cognition and reference to vehicles, landmarks, waypoints, etc. To the extent that human beings perform these tasks using qualitative information and interact through gesture and verbal communication, GOI naturalness should focus on spatial information and interaction. Nonetheless, our goal is not to transform the GOI into a fully natural interface. But as soon as naturalness is introduced within an interface, side effects have to be carefully considered. In particular, the GOI must provide new functionalities: being a semantic bridge between operators and vehicles, and handling non-understandings.

Fig. 3. Ground operator interface: a semantic bridge.

First, considering a "natural" input device (i.e.
corresponding to a control command from the ground operator to a vehicle), there is a mismatch between the "natural" command provided by the operator and the "operational" command that a vehicle can accept. The ground operator interface must therefore be a semantic bridge that converts the perceived message into a representation suitable for the addressee. That is to say that, following the perception of an input on a control input device and following its interpretation, the GOI also has to convert the understood control command before transmitting it to the proper vehicle(s). As shown on Fig. 3:

– the front part of the GOI has to detect input on a control input device and transmit the raw message to the interpretation module;
– the interpretation module identifies the intended meaning of the operator: the kind of control command and the intended objects – here, the two vehicles which are the recipients of the message and the intended pre-defined zone;
– the conversion module has to:
  1. translate the operator's command into the language which is proper for the addressee;
  2. process the underspecified elements, such as path planning. The conversion module may lack necessary information, such as the first waypoint. In this case, a completion request has to be sent to the ground operator in order to complete the command conversion.

Second, as soon as an interface provides semi-constrained interaction, qualitative spatial interaction [2] or natural (multi-)modality [22], non-understandings may occur. Non-understanding is commonly set apart from misunderstanding: in a non-understanding, the addressee fails to interpret the communicative act, whereas in a misunderstanding he succeeds, but his interpretation is incorrect. For example, mishearing may lead to misunderstanding.
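The interpretation/conversion pipeline described above can be sketched as follows. This is an illustrative sketch, not the Smaart implementation: the class, command and message names are invented. It shows the flow raw input → interpretation → conversion, with a completion request issued when the command is underspecified.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Command:
    """An interpreted operator command (intended meaning, not yet operational)."""
    kind: str                              # e.g. "goto_zone"
    recipients: list                       # intended vehicles
    zone: str                              # intended pre-defined zone
    first_waypoint: Optional[tuple] = None # may be underspecified

def interpret(raw: str) -> Command:
    """Interpretation module: identify the intended meaning (toy parser).
    Hypothetical phrasing: 'send uav1 uav2 to zone-A'."""
    words = raw.split()
    return Command(kind="goto_zone",
                   recipients=[w for w in words if w.startswith("uav")],
                   zone=words[-1])

def convert(cmd: Command):
    """Conversion module: translate into the vehicle's language, or ask back."""
    if cmd.first_waypoint is None:
        # Underspecified element: send a completion request to the operator.
        return ("completion_request", "first waypoint for " + cmd.zone)
    # One operational command per recipient vehicle.
    return [(uav, "GOTO", cmd.first_waypoint, cmd.zone)
            for uav in cmd.recipients]

cmd = interpret("send uav1 uav2 to zone-A")
print(convert(cmd))            # completion request: the waypoint is missing
cmd.first_waypoint = (48.4, -4.5)
print(convert(cmd))            # two operational GOTO commands
```

The design point is that the operator-facing representation and the vehicle-facing representation never coincide; the bridge owns the translation and the dialogue needed to complete it.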
Handling non-understandings is necessary for the GOI as soon as an input control command cannot be transmitted to vehicles. Human beings handle non-understandings by interactively refining their understanding until a point of intelligibility is reached. This process is called "grounding" [4]. In order to design grounding, interaction has to be viewed as a collaborative process and an interaction manager has to be integrated within the GOI. For more details, the interested reader may refer to [20].

There is a common misconception that non-understandings are "communicative errors" one should avoid, as is understanding refinement. Yet, both for Human-Human interaction [5] and for Human-Computer interaction [15], non-understandings and their management process present many advantages. For example, feedback provides cues that enable interaction partners to be aware of each other's level of understanding. Through interactive refinement, each interaction partner maintains an accurate representation of the other, and this enhances the efficiency of future interactions.

More generally, enhancing interaction design may lead to positive side effects enhancing mission control. If one takes the example of gesture, as explained in the previous section, enabling the ground operator to interact using gesture aims at making references efficient. Indeed, gestures facilitate the maintenance of spatial representations in working memory [24]. Therefore, gestures may contribute to maintaining the situation awareness that enables efficient supervisory control by the operator.

4 Balancing Mission & Interaction

While using a UV System's interface, the ground operator is engaged in at least two activities: mission control and interaction.
This is the general case of all goal-oriented interaction (or dialogue):

"Dialogues, therefore, divide into two planes of activity [4]. On one plane, people create dialogue in service of the basic joint activities they are engaged in – making dinner, dealing with the emergency, operating the ship. On a second plane, they manage the dialogue itself – deciding who speaks when, establishing that an utterance has been understood, etc. These two planes are not independent, for problems in the dialogue may have their source in the joint activity the dialogue is in service of, and vice versa. Still, in this view, basic joint activities are primary, and dialogue is created to manage them." [1]

Interaction is defined by each dialogue partner's goal to understand the other, i.e. to reach a certain degree of intelligibility, sufficient for the current purpose. The crucial points here are that:

1. perfect understanding is not required; the level of understanding required is directed by the basic activity (i.e. the mission) and the situational context (e.g. time pressure);
2. as the ground operator's cognitive load is "divided" between the cognitive loads induced by each activity, the interaction's complexity must vary depending on the complexity involved by the mission, as defined by Mouloua et al. [16]. For example, as time pressure rises, the cognitive load induced by the mission increases; the cognitive load required by the interaction should then decrease in order to carry through the mission.

In the perspective of adapting the grounding process to the specific case of the supervision of multiple UVs, one can note that a GOI is also an interaction support for a team, and is therefore similar to interfaces dedicated to Computer Supported Cooperative Work. However, the team includes humans (ground operators) and machines (vehicles).
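The balancing principle of point 2 can be sketched as a simple strategy selector: when mission workload is low, grounding effort is pushed towards the operator; when it is high, the interface absorbs it. This is an illustrative sketch under assumed names – the strategy labels are invented, and the discrete workload levels (1 = routine to 4 = coordinated intrusion) anticipate the four mission states characterized in Sect. 5.

```python
def interaction_strategy(mission_workload: int) -> dict:
    """Pick generation/interpretation strategies from a discrete
    mission-workload level (1 = routine ... 4 = coordinated intrusion)."""
    if mission_workload <= 2:
        # Low mission load: keep the operator engaged and build up grounding.
        return {"confirmations": "explicit",      # operator acknowledges
                "clarification": "ask_operator",  # GOI asks rather than guesses
                "verbosity": "high"}
    # High mission load: the GOI takes on the interaction burden.
    return {"confirmations": "implicit",          # GOI assumes and monitors
            "clarification": "best_guess",        # resolve ambiguity internally
            "verbosity": "minimal"}

print(interaction_strategy(1)["clarification"])  # ask_operator
print(interaction_strategy(4)["verbosity"])      # minimal
```
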
The GOI is also an interaction partner of the ground operators: they interact for decision support tasks, non-understanding management and interface manipulation.

In addition, our study aims at designing interaction in order to take advantage of the mutual dependency between mission control and interaction. More precisely, we propose adaptive interaction design in order to obtain the properties defined by Mouloua et al. [16, 20]. The point is to obviate the negative side effects of automation through balanced workload. We claim that adapting interaction design with regard to human factors such as the operator's workload, trust or performance must have positive side effects on these human factors, cf. Fig. 4.

Fig. 4. Interaction as a collaborative and adaptive process.

The GOI we are developing will integrate an Interaction Manager component, responsible for choosing adequate strategies for (1) generation and (2) interpretation:

1. for a given piece of information or interaction to be potentially sent to the operator, deciding whether to send it or not, and choosing a modality and formulation;
2. trying to understand interactions emanating from the operator, given the current context and grounding information.

Several redundant strategies will have to be developed for the various possible interactions permitted by the GOI. For a given interaction, some strategies will put the burden of interaction on the GOI (disambiguation, acknowledgement, etc.) while others will rest more "on the shoulders" of the operator. The rationale is that in situations of low "mission workload", one is better off giving more work to the operator, as it builds up the grounding while keeping him/her busy (fending off boredom and loss of attention).
Conversely, in mission-critical episodes, the GOI can assume a more active role in interaction and let the operator focus on the mission (and still be robust thanks to the grounding constructed earlier).

5 Situation Cueing for the GOI of a Swarm of UVs

In addition to designing different strategies for generation and interpretation, we have to give the interaction manager some means to evaluate the current state of the mission (and therefore make an estimation of the associated workload) in order to choose strategies. In this section, we illustrate how – in the perspective of the extension of the Smaart system (see [13] in this volume) – we intend to discriminate four categories of mission states and their associated workloads.

5.1 Description

The Smaart system allows patrol and intercept operations for a dozen rotary-wing UAVs supervised by a single operator.

Fig. 5. Illustration of levels of workload. Distinct symbols correspond to patrolling UVs, pursuing UVs and alarms. Thick dotted lines link alarms that are supposed to have been triggered by the same intruder, while greyed-out regions are search zones with adjustable parameters (center, direction and breadth). Recent alarms are indicated by an exclamation mark (!).

Subfigure 5a illustrates the lowest level of workload: Routine Patrol. The UAVs perform their surveillance with a stable performance; every point on the airbase is scanned at an acceptable frequency. In this context, the task of the operator is purely of a supervisory nature. The parameters of the display are set to values known to be adequate for the current setup (number of UAVs, area, threat level, etc.; see [13] in this volume). The operator has a good appreciation of the reliability of the algorithm and knows that anomalies are rare.
Due to its local nature, the algorithm that the UAVs use to perform their coordinated patrol can misbehave in some configurations.³ In such cases, the operator can detect the anomaly and take action by dispatching some UAVs manually to compensate for it. In this context, he/she has to closely supervise the execution of these actions and judge their effectiveness, all the while continuing his/her global supervision of the patrol on the whole airbase. One can detect such a workload level (Patrol with Anomaly) by the action of the operator on a UAV (Subfigure 5b).

The two next workload levels are characterized by the presence of alarms. The number of alarms in recent time allows to distinguish a low-threat Alarm (possible false alarm, Subfigure 5c) from an emergency situation (multiple alarms, coordinated Intrusion, Subfigure 5d). In this last situation, the general surveillance of the airbase is largely jeopardized, as (1) many UAVs are used to pursue the intruders in specific regions, therefore depleting the patrolling vehicles, and (2) the attention of the operator is largely focused on the intrusions.

³ For example, if for some reason an "islet" appears in the digital pheromone space, the gradient-following UAVs will never reach it, and consequently a dark spot will appear and worsen. The local processing on the part of the UAVs has some interesting properties, but also has the consequence that a modification in a part of the airbase (e.g. creation of a no-fly zone) will take some time to be propagated to the rest of the environment through the digital pheromone.

5.2 Characterization

Table 1 sums up the characterization of the operational situations we intend to implement in Smaart.
Table 1. Characterization of the four levels of mission workload in Smaart.

Level 1: No activity on the part of the operator (no command sent to the UAVs)
Level 2: At least one command sent to a UAV in the last few minutes
Level 3: One alarm in the last few minutes
Level 4: Several alarms in the last few minutes

Based on these criteria, the interaction manager is able to compute a discrete mission workload level at every moment: either (1) by storing every event (operator actions toward UAVs, alarms) and matching against the criteria of Table 1, or (2) by updating a continuous workload level as the combination of fixed additive values associated to alarms and orders with a temporal discount factor (see Fig. 6). With the latter option, the continuous level is compared to pre-defined thresholds to obtain discrete levels.

Fig. 6. Illustration of the computation of the four levels of mission workload.

6 Conclusion & Perspectives

In the broad context of authority sharing, we have outlined how introducing more natural interaction in the design of the ground operator interface of UV systems should help in allowing a single operator to manage the complexity of his/her task. Introducing new modalities is one of the means in the realization of our vision of a next-generation GOI. A more fundamental aspect resides in the interaction manager, which should help balance the workload of the operator between mission and interaction, notably by applying a multi-strategy approach to generation and interpretation. We intend to apply these principles to the context of the Smaart prototype, and in this perspective, we have illustrated how to characterize the workload associated with a particular operational situation.

References

1. A. Bangerter and H.H. Clark. Navigating joint projects with dialogue. Cognitive Science, 27:195–225, 2003.
2. G. Cai, H. Wang, and A.M. MacEachren.
Communicating vague spatial concepts in human-GIS interactions: A collaborative dialogue approach. Spatial Information Theory, pages 287–300, 2003.
3. J.Y.C. Chen, E.C. Haas, K. Pillalamarri, and C.N. Jacobson. Human robot interface: Issues in operator performance, interface design, and technologies. Technical Report ARL-TR-3834, Army Research Laboratory (ARL), Aberdeen, July 2006.
4. H.H. Clark. Using Language. Cambridge University Press, Cambridge, UK, 1996.
5. P. Cohen, H. Levesque, J. Nunes, and S. Oviatt. Task-oriented dialogue as a consequence of joint activity. In Proceedings of PRICAI-90, pages 203–208, 1990.
6. M.L. Cummings, S. Bruni, S. Mercier, and P.J. Mitchell. Automation architecture for single operator, multiple UAV command and control. The International Command and Control Journal, 1(2):1–24, 2007.
7. S. Dixon and C. Wickens. Control of multiple-UAVs: A workload analysis. In Proceedings of the 12th International Symposium on Aviation Psychology, 2001.
8. M.R. Endsley. Toward a theory of situation awareness in dynamic systems. Human Factors, 37(1):32–64, 1995.
9. D.V. Gunn, W.T. Nelson, R.S. Bolia, J.S. Warm, D.A. Schumsky, and K.J. Corcoran. Target acquisition with UAVs: Vigilance displays and advanced cueing interfaces. In Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting, pages 1541–1545, 2002.
10. C. Johnson. Inverting the control ratio: Human control of large, autonomous teams. In Proceedings of AAMAS'03 Workshop on Humans and Multi-Agent Systems, 2003.
11. W.S. Kim. Graphical operator interface for space telerobotics. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 761–768, 1993.
12. F. Legras and G. Coppin. Autonomy spectrum for a multiple UAVs system. In COGIS'07 – COgnitive systems with Interactive Sensors, 2007.
13. François Legras.
Experiments in human operation of a swarm of UAVs. In Proceedings of the First Conference on Humans Operating Unmanned Systems (HUMOUS'08), Brest, France, 3–4 Sept. 2008.
14. O. Lemon, A. Gruenstein, L. Cavedon, and S. Peters. Collaborative dialogue for controlling autonomous systems. In Proceedings of AAAI Fall Symposium, 2002.
15. B. Martinovski and D. Traum. Breakdown in human-machine interaction: the error is the clue. In Proceedings of the ISCA Tutorial and Research Workshop on Error Handling in Dialogue Systems, pages 11–16, 2003.
16. M. Mouloua, R. Gilson, J. Kring, and P.A. Hancock. Workload, situation awareness, and teaming issues for UAV/UCAV operations. In Proceedings of the Human Factors and Ergonomics Society, volume 45, pages 162–165, 2001.
17. R. Parasuraman, T.B. Sheridan, and C.D. Wickens. A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 30:286–297, 2000.
18. M.J. Pickering and S. Garrod. Toward a mechanistic psychology of dialogue. Behavioral and Brain Sciences, 27:169–225, 2004.
19. S. Saget and M. Guyomard. Goal-oriented dialog as a collaborative subordinated activity involving collaborative acceptance. In Proceedings of Brandial'06, pages 131–138, University of Potsdam, Germany, 2006.
20. S. Saget, F. Legras, and G. Coppin. Collaborative model of interaction and unmanned vehicle systems' interface. In HCP Workshop on "Supervisory Control in Critical Systems Management", 3rd International Conference on Human Centered Processes (HCP-2008), Delft, The Netherlands, 2008.
21. D.H. Sonnenwald, K.L. Maglaughlin, and M.C. Whitton. Designing to support situation awareness across distances: an example from a scientific collaboratory. Information Processing & Management, 8(6):989–1011, 2004.
22. D. Traum.
A Computational Theory of Grounding in Natural Language Conversation. PhD thesis, Computer Science Department, University of Rochester, 1994.
23. David G. Ullman. "OO-OO-OO!" The sound of a broken OODA loop. CrossTalk, April 2007.
24. R. Wesp, J. Hesse, D. Keutmann, and K. Wheaton. Gestures maintain spatial imagery. American Journal of Psychology, 114(1):591–600, 2001.
25. D.T. Williamson, M.H. Draper, G.L. Calhoun, and T.P. Barry. Commercial speech recognition technology in the military domain: Results of two recent research efforts. International Journal of Speech Technology, 8(1):9–16, 2005.