A Systematic Approach to Artificial Agents
Mark Burgin¹ and Gordana Dodig-Crnkovic²

¹ Department of Mathematics, University of California, Los Angeles, California, USA
² School of Innovation, Design and Engineering, Mälardalen University, Sweden

Abstract. Agents and agent systems are becoming more and more important in the development of a variety of fields such as ubiquitous computing, ambient intelligence, autonomous computing, intelligent systems and intelligent robotics. Improving our basic knowledge of agents is essential. We take a systematic approach and present an extended classification of artificial agents, which can be useful for understanding what artificial agents are and what they can become in the future. The aim of this classification is to give insight into what kinds of agents can be created and what types of problems demand a specific kind of agent for their solution.

Keywords: Artificial agents, classification, perception, reasoning, evaluation, mobility

1 Introduction

Agents are advanced tools people use to achieve different goals and to solve various problems. The main difference between ordinary tools and agents is that agents can function more or less independently from those who delegated agency to them. For a long time, people used only other people and sometimes animals as their agents. Developments in information processing technology, computers and their networks have made it possible to build and use artificial agents. Now the most popular approach in artificial intelligence is based on agents. Intelligent agents form a basis for many kinds of advanced software systems that incorporate varying methodologies, diverse sources of domain knowledge, and a variety of data types. The intelligent agent approach has been applied extensively in business applications, and more recently in medical decision support systems [12, 13] and ecology [15].
In this general paradigm, the human decision maker is considered to be an agent and is incorporated into the decision process. The overall decision is facilitated by a task manager that assigns subtasks to the appropriate agents and combines the conclusions reached by the agents to form the final decision.

2 The Concept of an Agent

There are several definitions of intelligent and software agents. However, they describe rather than define agents in terms of their tasks, autonomy, and communication capabilities. Some of the major definitions and descriptions of agents are given by Jansen [7]:

1. Agents are semi-autonomous computer programs that intelligently assist the user with computer applications by employing artificial intelligence techniques to assist users with daily computer tasks, such as reading electronic mail, maintaining a calendar, and filing information. Agents learn through example-based reasoning and are able to improve their performance over time.
2. Agents are computational systems that inhabit some complex, dynamic environment, and sense and act autonomously to realize a set of goals or tasks.
3. Agents are software robots that think and act on behalf of a user to carry out tasks. Agents will help meet the growing need for more functional, flexible, and personal computing and telecommunications systems. Uses for intelligent agents include self-contained tasks, operating semi-autonomously, and communication between the user and system resources.
4. Agents are software programs that implement user delegation. Agents manage complexity, support user mobility, and lower the entry level for new users. Agents are a design model similar to client-server computing, rather than strictly a technology, program, or product.
Franklin and Graesser [4] have collected and analyzed a more extended list of definitions:

- An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through effectors (Russell and Norvig [10]).
- Autonomous agents are computational systems that inhabit some complex dynamic environment, and sense and act autonomously in this environment. By doing so, they realize a set of goals or tasks for which they are designed (Maes [8]).
- Let us define an agent as a persistent software entity dedicated to a specific purpose. 'Persistent' distinguishes agents from subroutines; agents have their own ideas about how to accomplish tasks, their own agendas. 'Special purpose' distinguishes them from entire multifunction applications; agents are typically much smaller (Smith, Cypher and Spohrer [11]).
- Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions (Hayes-Roth [5]).
- Intelligent agents are software entities that carry out some set of operations on behalf of a user or another program, with some degree of independence or autonomy, and in so doing, employ some knowledge or representation of the user's goals or desires [6].
- An autonomous agent is a system situated within and a part of an environment that senses that environment and acts on it, over time, in pursuit of its own agenda and so as to effect what it senses in the future (Franklin and Graesser [4]).

However, as we mentioned in the Introduction, there are also natural and social agents. For instance, the term "agent" in the context of business or economic modeling refers to natural real-world objects, such as organizations, companies or people.
These real-world objects are capable of displaying autonomous behavior. They react to external events and are capable of initiating activities and interaction with other objects (agency). Thus, it is reasonable to assume that an agent is anything (or anybody) that can be viewed as perceiving its environment through sensors and acting upon this environment through effectors. A human agent has eyes, ears, and other organs for sensors, and hands, legs, mouth, and other body parts for effectors. A robotic agent uses cameras, infrared range finders and other sensing devices as sensors, and various body parts as effectors. A software agent has communication channels both for sensors and effectors. This gives us the following informational structure of an agent, reflecting the agent's information flows:

Raw information → Receptor(s) → Descriptional information → Processor(s) → Prescriptional information → Effector(s)

Fig. 1. The triadic informational structure of an agent.

3 Agent Typology

There are different types of intelligent agents. For instance, Russell and Norvig [10] consider four types:

1. Simple reflex (or tropistic, or behavioristic) agents.
2. Agents that keep track of the world.
3. Goal-based agents.
4. Utility-based agents.

The general structure of the world in the form of the Existential Triad gives us three classes of agents:

- Physical agents.
- Mental agents.
- Structural or information agents.

People, animals, and robots are examples of physical agents. Software agents and the Ego in the sense of psychoanalysis are examples of mental agents. The head of a Turing machine (cf., for example, Burgin [3]) is an example of a structural agent. Physical agents belong to three classes:

- Biological agents.
- Artificial agents.
- Hybrid agents.

People, animals, and microorganisms are examples of biological agents. Robots are examples of artificial agents.
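The triadic structure of Fig. 1 can be sketched in code. The following is a minimal illustration under our own assumptions (the class and component names are invented, not from the paper): a receptor turns raw information into descriptional information, a processor turns it into prescriptional information, and an effector turns that into an action.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class TriadicAgent:
    """Receptor -> Processor -> Effector pipeline, as in Fig. 1."""
    receptor: Callable[[str], Dict]    # raw information -> descriptional information
    processor: Callable[[Dict], Dict]  # descriptional -> prescriptional information
    effector: Callable[[Dict], str]    # prescriptional information -> action
    log: List[str] = field(default_factory=list)

    def step(self, raw: str) -> str:
        description = self.receptor(raw)        # sense
        prescription = self.processor(description)  # decide
        action = self.effector(prescription)    # act
        self.log.append(action)
        return action

# A toy thermostat-like agent assembled from the three components.
agent = TriadicAgent(
    receptor=lambda raw: {"temperature": float(raw)},
    processor=lambda d: {"command": "heat" if d["temperature"] < 18.0 else "idle"},
    effector=lambda p: p["command"],
)
print(agent.step("15.5"))  # -> heat
print(agent.step("21.0"))  # -> idle
```

The point of the sketch is that the three stages are independent components communicating only through the two intermediate kinds of information, which is exactly the separation the figure expresses.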
Hybrid agents consist of biological and artificial parts (cf., for example, Venda [14]).

Mizzaro [9] classifies agents according to three parameters. With respect to perception, he identifies perceiving agents with various perception levels. Complete perceiving agents, which have complete perception of the world, constitute the highest level of perceiving agents. Opposite to perceiving agents, there are no-perception agents, which are completely isolated from their environment. This does not correlate with the definition of Russell and Norvig [10], but it is consistent with a general definition of an agent. With respect to reasoning, there are reasoning agents with various reasoning capabilities. Reasoning agents derive new knowledge items from their existing knowledge state. On the highest level of reasoning agents, we have omniscient agents, which are capable of making actual all their potential knowledge by logical reasoning. Opposite to reasoning agents, Mizzaro [9] identifies non-reasoning agents, which are unable to derive new knowledge items from the existing knowledge they possess. With respect to memory, there are permanent memory agents, which never lose any portion of their knowledge. No-memory agents are opposite to permanent memory agents because they cannot keep their knowledge state. Volatile memory agents lie between these two categories. People are volatile memory agents.

Here we suggest seven classifications of agents based on attributive dimensions of agents. According to the cognitive/intelligence criterion, there are:

1. Reflex (or tropistic, or behavioristic) agents, which realize the simple schema action → reaction.
2. Model-based agents, which have a model of their environment.
3. Inference-based agents, which use inference in their activity.
4. Predictive (prognostic, anticipative) agents, which use prediction in their activity.
5. Evaluation-based agents, which use evaluation in their activity.

Some of these classes are also considered by Russell and Norvig [10]. Note that prediction and/or evaluation do not necessarily involve inference.

In addition, according to the dynamic criterion, there are:

1. static agents, which do not move (at least, by themselves), e.g., a desktop computer;
2. mobile agents, which can move with some degree of freedom;
3. effector mobile agents, which have effectors that can move;
4. receptor (sensor) mobile agents, which have receptors that can move.

Note that mobility can be realized on different levels and to different degrees.

According to the interaction criterion, there are:

1. deliberative (proactive) agents, which try to anticipate what is going to happen in their environment and organize their activity taking these predictions into account;
2. reactive agents, which react to changes in the environment;
3. inactive agents, which do the same thing independently of what happens in the environment.

According to the autonomy criterion, there are:

1. autonomous agents;
2. dependent agents;
3. controlled agents.

There are many kinds and levels of dependence. Control is considered the highest level of dependence.

According to the learning criterion, there are:

1. learning agents, which are able to learn;
2. remembering agents, which realize the lowest level of learning, namely remembering or memorizing;
3. conservative agents, which do not learn at all.

According to the cooperation criterion, there are:

1. competitive agents, which do not collaborate but only compete;
2. individualistic agents, which do not interact with other agents;
3. collaborative agents.

There are many kinds and levels of competition. For instance, it can be competition at any cost or competition according to definite (e.g., moral) rules and/or principles. Collaboration can also take different forms.
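The first two classes of the cognitive/intelligence criterion can be contrasted with a short sketch (our own illustration, with invented percepts and rules, not from the paper): a reflex agent maps each percept directly to a reaction, while a model-based agent also maintains an internal model of its environment and lets that model influence the reaction.

```python
class ReflexAgent:
    """Realizes the simple schema: action -> reaction."""
    def __init__(self, rules):
        self.rules = rules  # percept -> action lookup table

    def act(self, percept):
        return self.rules.get(percept, "wait")

class ModelBasedAgent(ReflexAgent):
    """Additionally keeps a model of the environment seen so far."""
    def __init__(self, rules):
        super().__init__(rules)
        self.model = {}  # environment model: percept -> occurrence count

    def act(self, percept):
        self.model[percept] = self.model.get(percept, 0) + 1
        # Use the model: react only to percepts that persist.
        if self.model[percept] >= 2:
            return super().act(percept)
        return "observe"

rules = {"obstacle": "turn", "goal": "stop"}
reflex = ReflexAgent(rules)
model_based = ModelBasedAgent(rules)

print(reflex.act("obstacle"))       # reacts immediately: turn
print(model_based.act("obstacle"))  # first sighting: observe
print(model_based.act("obstacle"))  # persists in the model: turn
```

The same percept sequence produces different behavior only in the model-based agent, because its reaction depends on the state of its environment model, not on the percept alone.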
There is one more dimension (criterion for classification), which underlies all the others: the algorithmic dimension. Indeed, an agent can perform operations (e.g., building a model of the environment or making evaluations) and actions (e.g., moving from one place to another) in different modes and using various types of algorithms. According to the algorithmic criterion, there are:

1. subrecursive agents, which use only subrecursive algorithms, e.g., finite automata;
2. recursive agents, which can use any recursive algorithms, such as Turing machines, random access machines, Kolmogorov algorithms or Minsky machines;
3. super-recursive agents, which can use super-recursive algorithms, such as inductive Turing machines or trial-and-error machines.

The difference between recursive and super-recursive agents is that, at some moment after receiving or formulating a task and starting to fulfill it, a recursive agent will stop and report that the task is fulfilled. In a similar situation, a super-recursive agent can fulfill tasks that do not demand stopping. For instance, a program of a satellite computer that observes changes in the atmosphere for weather prediction has to make observations all the time, because there is no moment at which all these observations are completed. As a result, super-recursive agents can perform many more tasks and solve many more problems than recursive agents (cf., for example, [2]).

A cognitive agent has a system of knowledge K. Such an agent perceives information from the world, and this information changes its initial knowledge state, i.e., the state of the system K.
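The contrast between recursive and super-recursive agents can be illustrated with a sketch (a hypothetical example of our own, in the spirit of trial-and-error computation, not the paper's formalism): a recursive agent consumes a finite task, halts, and reports a final answer, while an inductive-style agent emits a tentative answer after every observation and never declares completion; its result is whatever its outputs stabilize on.

```python
def recursive_agent(data):
    """A recursive agent: finite task in, halt, final result out."""
    return sum(data) / len(data)

def inductive_agent(stream):
    """An inductive-style agent: after every observation it yields its
    current (tentative) estimate of the mean.  It never announces that
    the task is finished; a real sensor stream never ends."""
    total, count = 0.0, 0
    for observation in stream:
        total += observation
        count += 1
        yield total / count  # tentative result, may be revised later

# On a finite task the two agree ...
readings = [18.0, 20.0, 22.0]
print(recursive_agent(readings))          # -> 20.0

# ... but only the inductive agent makes sense on an open-ended stream:
print(list(inductive_agent(readings)))    # -> [18.0, 19.0, 20.0]
```

The generator models the satellite example above: there is no point at which it could halt and say "all observations are done"; it simply keeps refining its output.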
In general, agents may be usefully classified according to the subset of these properties that they enjoy (Franklin and Graesser [4]). When properties are organized into definite classes, it is possible to use sub-classification schemes via control structures, via environments (database, file system, network, Internet), via languages or via applications. For instance, the distinction between data-based and knowledge-based agents is based on a part of the agent environment, namely the source of information. Generalizing the approach of Franklin and Graesser [4], we can classify agents by their internal and external components. For instance, a control structure is an internal component, while a source of information is an external component. A slightly different approach to the taxonomy of agent properties is based on aspects of an agent, for example, on agent functions. Thus, the separation of signal and image analysis agents is related to agent functions. Brustoloni [1] offers another classification by functions: regulation, planning and adaptive agents.

Different types of automata can be associated with the types of agents. Reflex agents may be modeled by automata without memory, such as those represented by decision tables. All other types of agents demand memory. The third and higher levels, in addition, need a sufficiently powerful processor, varying with the level of the agent. Agents that perform simple tasks may use a finite automaton processor. More sophisticated agents demand processors that perform inference and have the computational power of Turing machines. Processors and program systems for intelligent agents have to utilize super-recursive automata and algorithms (Burgin [3]).
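The correspondence between agent types and automata can be made concrete with a sketch (our own assumptions; the states, percepts and actions are invented for illustration): a reflex agent is a memoryless decision table, while an agent that keeps track of the world corresponds to a finite automaton whose state is its memory.

```python
# Reflex agent as a memoryless decision table: the output depends
# only on the current input, never on history.
DECISION_TABLE = {"dark": "light_on", "bright": "light_off"}

def reflex(percept):
    return DECISION_TABLE[percept]

# An agent that keeps track of the world, modeled as a finite automaton:
# the transition table maps (state, percept) -> (next_state, action).
TRANSITIONS = {
    ("idle", "motion"):  ("alert", "start_camera"),
    ("alert", "motion"): ("alert", "keep_recording"),
    ("alert", "quiet"):  ("idle", "stop_camera"),
    ("idle", "quiet"):   ("idle", "none"),
}

class FiniteAutomatonAgent:
    def __init__(self, state="idle"):
        self.state = state  # the automaton's memory of the world

    def act(self, percept):
        self.state, action = TRANSITIONS[(self.state, percept)]
        return action

fsm = FiniteAutomatonAgent()
print(reflex("dark"))     # same answer every time: light_on
print(fsm.act("motion"))  # -> start_camera
print(fsm.act("motion"))  # -> keep_recording (the state changed the response)
print(fsm.act("quiet"))   # -> stop_camera
```

Here the same percept ("motion") yields different actions depending on the automaton's state, which is exactly the memory that separates these agents from pure reflex agents.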
Conclusions

Agency and agent-based solutions for a wide variety of classes of problems are becoming more and more interesting and important as we face situations of ubiquitous computing, ambient intelligence, autonomous computing, intelligent systems and intelligent robotics, to name but a few. In this paper, we presented an extended classification of artificial agents as a contribution to a systematic approach. The aim of this classification is to better understand what kinds of agents can be created and what types of problems demand a specific kind of agent for their solution. Improving our basic understanding of what agents are, what they can do, and what they can be made to be and to do is essential for our civilization.

References

1. Brustoloni, J.C.: Autonomous Agents: Characterization and Requirements, Carnegie Mellon Technical Report CMU-CS-91-204, Pittsburgh: Carnegie Mellon University (1991)
2. Burgin, M.: Nonlinear Phenomena in Spaces of Algorithms, International Journal of Computer Mathematics, v. 80, No. 12, pp. 1449-1476 (2003)
3. Burgin, M.: Super-recursive Algorithms, Springer, New York/Berlin/Heidelberg (2005)
4. Franklin, S., Graesser, A.: Is it an Agent, or just a Program?: A Taxonomy for Autonomous Agents, In: Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, Springer-Verlag (1996)
5. Hayes-Roth, B.: An Architecture for Adaptive Intelligent Systems, Artificial Intelligence: Special Issue on Agents and Interactivity, 72, 329-365 (1995)
6. IBM's Intelligent Agent Strategy white paper, http://activist.gpl.ibm.com:81/WhitePaper/ptc2.htm
7. Jansen, J.: Using Intelligent Agents to Enhance Search Engine Performance, Firstmonday, No. 2/3, http://www.firstmonday.dk (1996)
8. Maes, P.: Artificial Life Meets Entertainment: Lifelike Autonomous Agents, Communications of the ACM, 38, 11, 108-114 (1995)
9. Mizzaro, S.: Towards a Theory of Epistemic Information, Information Modelling and Knowledge Bases, IOS Press, v. 12, Amsterdam, 1-20 (2001)
10. Russell, S.J., Norvig, P.: Artificial Intelligence: A Modern Approach, Prentice-Hall, Englewood Cliffs, N.J. (1995)
11. Smith, D.C., Cypher, A., Spohrer, J.: KidSim: Programming Agents without a Programming Language, Communications of the ACM, 37, 7, 55-6 (1994)
12. Hsu, C., Goldberg, H.S.: Knowledge-mediated Retrieval of Laboratory Observations, Proc. JAMIA, v. 23, 809-813 (1999)
13. Lanzola, G., Gatti, L., Falasconi, S., Stefanelli, M.: A Framework for Building Cooperative Software Agents in Medical Applications, Artificial Intelligence in Medicine, 16, 223-249 (1999)
14. Venda, V.F.: Hybrid Intelligence Systems: Evolution, Psychology, Informatics, Mashinostroenie, Moscow (In Russian) (1990)
15. Judson, O.P.: The Rise of the Individual-based Model in Ecology, Trends in Ecology and Evolution, 9, 9-14 (1994)