Artificial Immune Systems Tutorial
Authors:
Uwe Aickelin, Dipankar Dasgupta
Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques
Edmund K. Burke (Editor), Graham Kendall (Editor)

Chapter 13
ARTIFICIAL IMMUNE SYSTEMS

U. Aickelin# and D. Dasgupta*
# University of Nottingham, Nottingham NG8 1BB, UK
* University of Memphis, Memphis, TN 38152, USA

1. INTRODUCTION

The biological immune system is a robust, complex, adaptive system that defends the body from foreign pathogens. It is able to categorize all cells (or molecules) within the body as self-cells or non-self cells. It does this with the help of a distributed task force that has the intelligence to take action from a local and also a global perspective using its network of chemical messengers for communication. There are two major branches of the immune system. The innate immune system is an unchanging mechanism that detects and destroys certain invading organisms, whilst the adaptive immune system responds to previously unknown foreign cells and builds a response to them that can remain in the body over a long period of time. This remarkable information-processing biological system has caught the attention of computer science in recent years.

A novel computational intelligence technique, inspired by immunology, has emerged, called Artificial Immune Systems. Several concepts from the immune system have been extracted and applied to solve real-world science and engineering problems. In this tutorial, we briefly describe the immune system metaphors that are relevant to existing Artificial Immune Systems methods. We then present illustrative real-world problems suitable for Artificial Immune Systems and give a step-by-step algorithm walkthrough for one such problem. A comparison of Artificial Immune Systems to other well-known algorithms, areas for future work, tips and tricks, and a list of resources round this tutorial off.
It should be noted that as Artificial Immune Systems is still a young and evolving field, there is not yet a fixed algorithm template, and hence actual implementations might differ somewhat from time to time and from the examples given here.

2. OVERVIEW OF THE BIOLOGICAL IMMUNE SYSTEM

The biological immune system is an elaborate defense system which has evolved over millions of years. While many details of the immune mechanisms (innate and adaptive) and processes (humoral and cellular) are yet unknown (even to immunologists), it is well known that the immune system uses multilevel (and overlapping) defense, both in parallel and in sequential fashion. Depending on the type of the pathogen, and the way it gets into the body, the immune system uses different response mechanisms (differential pathways) either to neutralize the pathogenic effect or to destroy the infected cells. A detailed overview of the immune system can be found in many textbooks, for instance Kuby (2002).

The immune features that are particularly relevant to our tutorial are matching, diversity and distributed control. Matching refers to the binding between antibodies and antigens. Diversity refers to the fact that, in order to achieve optimal antigen space coverage, antibody diversity must be encouraged according to Hightower et al (1995). Distributed control means that there is no central controller; rather, the immune system is governed by local interactions among immune cells and antigens.

Two of the most important cells in this process are white blood cells, called T-cells and B-cells. Both of these originate in the bone marrow, but T-cells pass on to the thymus to mature, before they circulate the body in the blood and lymphatic vessels.
The T-cells are of three types: T helper cells, which are essential to the activation of B-cells; killer T-cells, which bind to foreign invaders and inject poisonous chemicals into them causing their destruction; and suppressor T-cells, which inhibit the action of other immune cells, thus preventing allergic reactions and autoimmune diseases. B-cells are responsible for the production and secretion of antibodies, which are specific proteins that bind to the antigen. Each B-cell can only produce one particular antibody. The antigen is found on the surface of the invading organism, and the binding of an antibody to the antigen is a signal to destroy the invading cell, as shown in Figure 1.

[Figure 1 omitted: diagram showing APC, MHC protein, antigen peptide, T-cell, activated T-cell, B-cell, lymphokines and activated B-cell (plasma cell), stages I-VII.]

Figure 1. Pictorial representation of the essence of the acquired immune system mechanism (taken from de Castro and Von Zuben (1999)): I-II show the invader entering the body and activating T-cells, which then in IV activate the B-cells, V is the antigen matching, VI the antibody production and VII the antigen's destruction.

As mentioned above, the human body is protected against foreign invaders by a multi-layered system. This is composed of physical barriers such as the skin and respiratory system; physiological barriers such as destructive enzymes and stomach acids; and the immune system, which can be broadly divided under two heads: Innate (non-specific) Immunity and Adaptive (specific) Immunity, which are inter-linked and influence each other. Adaptive Immunity is again subdivided under two heads: Humoral Immunity and Cell-Mediated Immunity.

Innate Immunity: Innate Immunity is present at birth.
Physiological conditions such as pH, temperature and chemical mediators provide inappropriate living conditions for foreign organisms. Also, microorganisms are coated with antibodies and/or complement products (opsonization) so that they are easily recognized. Extracellular material is then ingested by macrophages by a process called phagocytosis. T_DH cells also influence the phagocytosis of macrophages by secreting certain chemical messengers called lymphokines. The low levels of sialic acid on foreign antigenic surfaces make C3b bind to these surfaces for a long time and thus activate alternative pathways. Thus the MAC (membrane attack complex) is formed, which punctures the cell surfaces and kills the foreign invader.

Adaptive Immunity: Adaptive Immunity is the main focus of interest here, as learning, adaptability and memory are important characteristics of Adaptive Immunity. It is subdivided under two heads: Humoral Immunity and Cell-Mediated Immunity.

Humoral Immunity: Humoral immunity is mediated by antibodies contained in body fluids (known as humors). The humoral branch of the immune system involves the interaction of B-cells with antigen and their subsequent proliferation and differentiation into antibody-secreting plasma cells. Antibody functions as the effector of the humoral response by binding to antigen and facilitating its elimination. When an antigen is coated with antibody, it can be eliminated in several ways. For example, antibody can cross-link the antigen, forming clusters that are more readily ingested by phagocytic cells. Binding of antibody to antigen on a microorganism can also activate the complement system, resulting in lysis of the foreign organism.

Cellular Immunity: Cellular immunity is cell-mediated; effector T-cells generated in response to antigen are responsible for cell-mediated immunity.
Cytotoxic T lymphocytes (CTLs) participate in cell-mediated immune reactions by killing altered self-cells; they play an important role in the killing of virus-infected cells and tumor cells. Cytokines secreted by T_DH cells can mediate cellular immunity and activate various phagocytic cells, enabling them to phagocytose and kill microorganisms more effectively. This type of cell-mediated immune response is especially important in host defense against intracellular bacteria and protozoa.

Whilst there is more than one mechanism at work (see Farmer et al (1986), Kuby (2002) or Jerne (1973) for more details), the essential process is the matching of antigen and antibody, which leads to increased concentrations (proliferation) of more closely matched antibodies. In particular, idiotypic network theory, the negative selection mechanism, and the 'clonal selection' and 'somatic hypermutation' theories are primarily used in Artificial Immune Systems models.

2.1 Immune Network Theory

The Immune Network theory was proposed in the mid-seventies (Jerne 1974). The hypothesis was that the immune system maintains an idiotypic network of interconnected B-cells for antigen recognition. These cells both stimulate and suppress each other in certain ways that lead to the stabilization of the network. Two B-cells are connected if the affinities they share exceed a certain threshold, and the strength of the connection is directly proportional to the affinity they share.

2.2 Negative Selection Mechanism

The purpose of negative selection is to provide tolerance for self cells. It deals with the immune system's ability to detect unknown antigens while not reacting to self cells. During the generation of T-cells, receptors are made through a pseudo-random genetic rearrangement process. Then, they undergo a censoring process in the thymus, called negative selection.
There, T-cells that react against self-proteins are destroyed; thus, only those that do not bind to self-proteins are allowed to leave the thymus. These matured T-cells then circulate throughout the body to perform immunological functions and protect the body against foreign antigens.

2.3 Clonal Selection Principle

The clonal selection principle describes the basic features of an immune response to an antigenic stimulus. It establishes the idea that only those cells that recognize the antigen proliferate, and are thus selected over those that do not. The main features of the clonal selection theory are:

• The new cells are copies of their parents (clones), subjected to a mutation mechanism with high rates (somatic hypermutation);
• Elimination of newly differentiated lymphocytes carrying self-reactive receptors;
• Proliferation and differentiation on contact of mature cells with antigens.

When an antibody strongly matches an antigen, the corresponding B-cell is stimulated to produce clones of itself that then produce more antibodies. This (hyper)mutation is quite rapid, often as much as "one mutation per cell division" (de Castro and Von Zuben, 1999). This allows a very quick response to the antigens. It should be noted here that in the Artificial Immune Systems literature, often no distinction is made between B-cells and the antibodies they produce. Both are subsumed under the word 'antibody', and statements such as mutation of antibodies (rather than mutation of B-cells) are common.

There are many more features of the immune system, including adaptation, immunological memory and protection against auto-immune attacks, not discussed here. In the following sections, we will revisit some important aspects of these concepts and show how they can be modelled in 'artificial' immune systems and then used to solve real-world problems.
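The clonal selection loop of Section 2.3 can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the population size, the number of generations and the exact mutation-rate formula (inversely proportional to affinity) are all assumptions made for the sketch.

```python
import random


def affinity(antibody: str, antigen: str) -> int:
    """Number of matching bits (the opposite of the Hamming distance)."""
    return sum(a == b for a, b in zip(antibody, antigen))


def mutate(antibody: str, rate: float) -> str:
    """Flip each bit with the given probability (somatic hypermutation)."""
    return "".join(b if random.random() > rate else "10"[int(b)] for b in antibody)


def clonal_selection(antigen: str, pop_size: int = 20, generations: int = 50) -> str:
    """Toy clonal selection: select the best matchers, clone them, and
    mutate the clones inversely proportionally to their affinity."""
    length = len(antigen)
    population = ["".join(random.choice("01") for _ in range(length))
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the half of the population that matches best.
        population.sort(key=lambda ab: affinity(ab, antigen), reverse=True)
        best = population[: pop_size // 2]
        # Cloning with hypermutation: better matches mutate less.
        clones = [mutate(ab, max(1.0 - affinity(ab, antigen) / length, 0.01))
                  for ab in best]
        population = best + clones
    return max(population, key=lambda ab: affinity(ab, antigen))
```

Keeping the unmutated parents alongside their mutated clones gives the sketch a simple elitism, so the best match found never degrades between generations.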
First, let us give an overview of typical problems that we believe are amenable to being solved by Artificial Immune Systems.

3. ILLUSTRATIVE PROBLEMS

3.1 Intrusion Detection Systems

Anyone keeping up to date with current affairs in computing can confirm numerous cases of attacks made on the computer servers of well-known companies. These attacks range from denial-of-service attacks to the extraction of credit-card details, and sometimes we find ourselves thinking "haven't they installed a firewall?" The fact is they often have a firewall. A firewall is useful, often essential, but current firewall technology is insufficient to detect and block all kinds of attacks. On ports that need to be open to the internet, a firewall can do little to prevent attacks. Moreover, even if a port is blocked from internet access, this does not stop an attack from inside the organisation.

This is where Intrusion Detection Systems come in. As the name suggests, Intrusion Detection Systems are installed to identify (potential) attacks and to react, usually by generating an alert or blocking the suspect data. The main goal of Intrusion Detection Systems is to detect unauthorised use, misuse and abuse of computer systems by both system insiders and external intruders. Most current Intrusion Detection Systems define suspicious signatures based on known intrusions and probes. The obvious limit of this type of Intrusion Detection System is its failure to detect previously unknown intrusions. In contrast, the human immune system adaptively generates new immune cells so that it is able to detect previously unknown and rapidly evolving harmful antigens (Forrest et al 1994). The challenge is thus to emulate the success of the natural system.

3.2 Data Mining - Collaborative Filtering and Clustering

Collaborative Filtering is the term for a broad range of algorithms that use similarity measures to obtain recommendations.
The best-known example is probably the "people who bought this also bought" feature of the internet company Amazon (2003). However, any problem domain where users are required to rate items is amenable to Collaborative Filtering techniques. Commercial applications are usually called recommender systems (Resnick and Varian 1997). A canonical example is movie recommendation.

In traditional Collaborative Filtering, the items to be recommended are treated as 'black boxes'. That is, your recommendations are based purely on the votes of other users, and not on the content of the item. The preferences of a user, usually a set of votes on items, comprise a user profile, and these profiles are compared in order to build a neighbourhood. The key decision is what similarity measure is used. The most common method to compare two users is a correlation-based measure like Pearson or Spearman, which gives two neighbours a matching score between -1 and 1. The canonical algorithm is k-Nearest-Neighbour, which uses a matching method to select k reviewers with high similarity measures. The votes from these reviewers, suitably weighted, are used to make predictions and recommendations.

The evaluation of a Collaborative Filtering algorithm usually centres on its accuracy. There is a difference between prediction (given a movie, predict a given user's rating of that movie) and recommendation (given a user, suggest movies that are likely to attract a high rating). Prediction is easier to assess quantitatively, but recommendation is a more natural fit to the movie domain.

A related problem to Collaborative Filtering is that of clustering data or users in a database. This is particularly useful in very large databases, which have become too large to handle otherwise. Clustering works by dividing the entries of the database into groups which contain people with similar preferences or, in general, data of a similar type.
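The k-Nearest-Neighbour scheme described above can be sketched as follows. The dictionary-based profiles, the toy agreement measure and the choice of k are illustrative assumptions; in practice a correlation measure such as Pearson would replace `overlap_agreement`.

```python
def overlap_agreement(u: dict, v: dict) -> float:
    """Toy similarity: fraction of overlapping items voted identically.
    A stand-in for a proper correlation-based measure."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    return sum(u[i] == v[i] for i in common) / len(common)


def knn_predict(target: dict, others: list, movie: int, k: int = 3):
    """k-Nearest-Neighbour prediction: pick the k most similar users who
    have rated the movie and average their votes, weighted by similarity."""
    voters = sorted(((overlap_agreement(target, u), u)
                     for u in others if movie in u),
                    key=lambda t: t[0], reverse=True)
    top = [(s, u) for s, u in voters[:k] if s > 0]
    if not top:
        return None  # no positively correlated neighbour has seen the movie
    return sum(s * u[movie] for s, u in top) / sum(s for s, _ in top)
```

For example, a target user `{1: 5, 2: 1}` whose two perfectly agreeing neighbours rated movie 3 as 4 and 5 would receive the similarity-weighted prediction 4.5.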
4. ARTIFICIAL IMMUNE SYSTEMS: BASIC CONCEPTS

4.1 Initialisation / Encoding

To implement a basic Artificial Immune System, four decisions have to be made: encoding, similarity measure, selection and mutation. Once an encoding has been fixed and a suitable similarity measure chosen, the algorithm performs selection and mutation, both based on the similarity measure, until its stopping criteria are met. In this section, we will describe each of these components in turn.

As with other heuristics, choosing a suitable encoding is very important for the algorithm's success. Similar to Genetic Algorithms, there is close interplay between the encoding and the fitness function (the latter is in Artificial Immune Systems referred to as the 'matching' or 'affinity' function). Hence both ought to be thought about at the same time. For the current discussion, let us start with the encoding.

First, let us define what we mean by 'antigen' and 'antibody' in the context of an application domain. Typically, an antigen is the target or solution, e.g. the data item we need to check to see if it is an intrusion, or the user that we need to cluster or make a recommendation for. The antibodies are the remainder of the data, e.g. other users in the database, a set of network traffic that has already been identified, etc. Sometimes there can be more than one antigen at a time, and there are usually a large number of antibodies present simultaneously.

Antigens and antibodies are represented or encoded in the same way. For most problems the most obvious representation is a string of numbers or features, where the length is the number of variables, the position is the variable identifier and the value (binary or real) is the value of the variable. For instance, in a five-variable binary problem, an encoding could look like this: (10010). As mentioned previously, data mining and intrusion detection are typical application areas.
What would an encoding look like in these cases? For data mining, let us consider the problem of recommending movies. Here the encoding has to represent a user's profile with regard to the movies he has seen and how much he has (dis)liked them. A possible encoding for this could be a list of numbers, where each number represents the 'vote' for an item. Votes could be binary (e.g. did you visit this web page?), but can also be integers in a range (say [0, 5], i.e. 0 - did not like the movie at all, 5 - did like the movie very much). Hence for the movie recommendation, a possible encoding is:

User = \{\{id_1, score_1\}, \{id_2, score_2\}, \ldots, \{id_n, score_n\}\}

where id corresponds to the unique identifier of the movie being rated and score to this user's score for that movie. This captures the essential features of the data available (Cayzer and Aickelin 2002).

For intrusion detection, the encoding may encapsulate the essence of each data packet transferred, for example:

[<113.112.255.254> <108.200.111.12> <25>]

which represents an incoming data packet sent to port 25. In these scenarios, wildcards like 'any port' are also often used.

4.2 Similarity or Affinity Measure

As mentioned in the previous section, the similarity measure or matching rule is one of the most important design choices in developing an Artificial Immune Systems algorithm, and is closely coupled to the encoding scheme. Two of the simplest matching algorithms are best explained using binary encoding. Consider the strings (00000) and (00011). If one does a bit-by-bit comparison, the first three bits are identical and hence we could give this pair a matching score of 3. In other words, we compute the opposite of the Hamming distance (which is defined as the number of bits that have to be changed in order to make the two strings identical). Now consider this pair: (00000) and (01010).
Again, simple bit matching gives us a similarity score of 3. However, the matching is quite different, as the three matching bits are not connected. Depending on the problem and encoding, this might be better or worse. Thus, another simple matching algorithm is to count the number of continuous bits that match and return the length of the longest matching run as the similarity measure. For the first example above this would still be 3; for the second example it would be 1. If the encoding is non-binary, e.g. real variables, there are even more possibilities to compute the 'distance' between the two strings; for instance, we could compute the geometrical (Euclidean) distance.

For data mining problems, like the movie recommendation system, similarity often means 'correlation'. Take the movie recommendation problem as an example and assume that we are trying to find users in a database that are similar to the key user whose profile we are trying to match in order to make recommendations. In this case, what we are trying to measure is how similar the two users' tastes are. One of the easiest ways of doing this is to compute the Pearson correlation coefficient between two users u and v:

r = \frac{\sum_{i=1}^{n} (u_i - \bar{u})(v_i - \bar{v})}{\sqrt{\sum_{i=1}^{n} (u_i - \bar{u})^2 \sum_{i=1}^{n} (v_i - \bar{v})^2}} \qquad (1)

where u and v are users, n is the number of overlapping votes (i.e. movies for which both u and v have voted), u_i is the vote of user u for movie i and \bar{u} is the average vote of user u over all films (not just the overlapping votes). The measure is amended to default to a value of 0 if the two users have no films in common. During our research reported in Cayzer and Aickelin (2002a, 2002b), we also found it useful to introduce a penalty parameter (c.f. penalties in genetic algorithms) for users who only have very few films in common, which in essence reduces their correlation.
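The matching rules discussed in this section — the opposite-of-Hamming score, the longest contiguous run, and the amended Pearson measure — can be sketched as follows. The penalty threshold of 5 films and the linear penalty form are our own illustrative assumptions, not values from the text.

```python
from math import sqrt


def hamming_match(a: str, b: str) -> int:
    """Matching score as the opposite of the Hamming distance:
    the number of positions where the two strings agree."""
    return sum(x == y for x, y in zip(a, b))


def longest_run_match(a: str, b: str) -> int:
    """Length of the longest contiguous run of matching bits."""
    best = run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        best = max(best, run)
    return best


def pearson(u: dict, v: dict, penalty_threshold: int = 5) -> float:
    """Amended Pearson correlation between two users' vote dictionaries:
    defaults to 0 with no overlap, and penalises small overlaps."""
    overlap = set(u) & set(v)
    n = len(overlap)
    if n == 0:
        return 0.0
    u_bar = sum(u.values()) / len(u)  # average over ALL of u's votes
    v_bar = sum(v.values()) / len(v)
    num = sum((u[i] - u_bar) * (v[i] - v_bar) for i in overlap)
    den = sqrt(sum((u[i] - u_bar) ** 2 for i in overlap)
               * sum((v[i] - v_bar) ** 2 for i in overlap))
    if den == 0:
        return 0.0
    r = num / den
    if n < penalty_threshold:  # few films in common: reduce the correlation
        r *= n / penalty_threshold
    return r
```

Note that the averages are taken over all of each user's votes, while the sums run only over the overlap, exactly as in equation (1).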
The outcome of this measure is a value between -1 and 1, where values close to 1 mean strong agreement, values near -1 mean strong disagreement and values around 0 mean no correlation. From a data mining point of view, the users who score close to either 1 or -1 are the most useful and hence will be selected for further treatment by the algorithm.

For other applications, 'matching' might not actually be beneficial, and hence those items that match might be eliminated. This approach is known as 'negative selection' and mirrors what is believed to happen during the maturation of B-cells, which have to learn not to 'match' our own tissues, as otherwise we would be subject to auto-immune diseases.

Under what circumstances would a negative selection algorithm be suitable for an Artificial Immune Systems implementation? Consider the case of intrusion detection as solved by Hofmeyr and Forrest (2000). One way of solving this problem is by defining a set of 'self', i.e. a trusted network, our company's computers, known partners, etc. During the initialisation of the algorithm, we would then randomly create a large number of so-called 'detectors', i.e. strings that look similar to the sample Intrusion Detection Systems encoding given above. We would then subject these detectors to a matching algorithm that compares them to our 'self'. Any matching detector is eliminated, and hence we select those that do not match (negative selection). All non-matching detectors then form our final detector set. This detector set is then used in the second phase of the algorithm to continuously monitor all network traffic. Should a match be found now, the algorithm reports it as a possible alert or 'non-self'. There are a number of problems with this approach, which we shall discuss further in the Enhancements and Future Applications section.
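A compact sketch of this two-phase scheme, using binary strings and an r-contiguous-bits matching rule. The rule, the string length, the detector count and the value of r are illustrative assumptions; Hofmeyr and Forrest's actual system is considerably more elaborate.

```python
import random


def matches(detector: str, sample: str, r: int = 3) -> bool:
    """r-contiguous-bits rule: the detector matches the sample if they
    agree on at least r contiguous positions."""
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False


def generate_detectors(self_set, n_detectors: int = 10, length: int = 8,
                       r: int = 3) -> list:
    """Censoring phase: random candidate detectors that match any 'self'
    string are discarded (negative selection)."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = "".join(random.choice("01") for _ in range(length))
        if not any(matches(candidate, s, r) for s in self_set):
            detectors.append(candidate)
    return detectors


def monitor(detectors, traffic, r: int = 3) -> list:
    """Monitoring phase: any traffic string matched by a detector is
    flagged as possible 'non-self'."""
    return [t for t in traffic if any(matches(d, t, r) for d in detectors)]
```

By construction, the surviving detectors never flag anything in the self set, so every alert raised in the monitoring phase concerns previously unseen patterns.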
4.3 Negative, Clonal or Neighbourhood Selection

The meaning of this step differs somewhat depending on the exact problem the Artificial Immune System is applied to. We have already described the concept of negative selection above. For the film recommender, choosing a suitable neighbourhood means choosing good correlation scores, and hence we will perform 'positive' selection.

How would the algorithm use this? Consider the Artificial Immune System to be empty at the beginning. The target user is encoded as the antigen, and all other users in the database are possible antibodies. We add the antigen to the Artificial Immune System and then add one candidate antibody at a time. Antibodies start with a certain concentration value. This value decreases over time (death rate), similar to evaporation in Ant Systems. Antibodies with a sufficiently low concentration are removed from the system, whereas antibodies with a high concentration may saturate. However, an antibody can increase its concentration by matching the antigen: the better the match, the higher the increase (a process called 'stimulation'). The process of stimulation, or increasing concentration, can also be regarded as 'cloning' if one thinks in a discrete setting. Once enough antibodies have been added to the system, it starts to iterate a loop of reducing concentration and stimulation until at least one antibody drops out. A new antibody is added and the process repeated until the Artificial Immune System is stabilised, i.e. there are no more drop-outs for a certain period of time.

Mathematically, at each step (iteration) an antibody's concentration is increased by an amount dependent on its matching to each antigen. In the absence of matching, an antibody's concentration will slowly decrease over time.
Hence an Artificial Immune Systems iteration is governed by the following equation, based on Farmer et al (1986):

\frac{dx_i}{dt} = \overbrace{k_2 \sum_{j=1}^{N} m_{ji} x_i y_j}^{\text{antigens recognised}} - \overbrace{k_3 x_i}^{\text{death rate}}

where:
N is the number of antigens,
x_i is the concentration of antibody i,
y_j is the concentration of antigen j,
k_2 is the stimulation effect and k_3 is the death rate,
m_{ji} is the matching function between antibody i and antibody (or antigen) j.

The following pseudo code summarises the Artificial Immune System of the movie recommender:

Initialise Artificial Immune Systems
Encode user for whom to make predictions as antigen Ag
WHILE (Artificial Immune Systems not Full) & (More Antibodies) DO
   Add next user as an antibody Ab
   Calculate matching scores between Ab and Ag
   WHILE (Artificial Immune Systems at full size) & (Artificial Immune Systems not Stabilised) DO
      Reduce Concentration of all Abs by a fixed amount
      Match each Ab against Ag and stimulate as necessary
   OD
OD
Use final set of Antibodies to produce recommendation.

In this example, the Artificial Immune System is considered stable after iterating for ten iterations without changing in size. Stabilisation thus means that a sufficient number of 'good' neighbours have been identified and therefore a prediction can be made. 'Poor' neighbours would be expected to drop out of the Artificial Immune System after a few iterations. Once the Artificial Immune System has stabilised using the above algorithm, we use the antibody concentrations to weight the neighbours and then perform a weighted-average type recommendation.

4.4 Somatic Hypermutation

The mutation most commonly used in Artificial Immune Systems is very similar to that found in Genetic Algorithms, e.g. for binary strings bits are flipped, for real-value strings one value is changed at random, or for others the order of elements is swapped.
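These three basic operators can be sketched as follows. The helper names, the number of flipped bits, the value range and the choice of swapped positions are illustrative assumptions.

```python
import random


def mutate_binary(bits: str, n_flips: int = 1) -> str:
    """Flip a fixed number of randomly chosen bits of a binary string."""
    chars = list(bits)
    for i in random.sample(range(len(chars)), n_flips):
        chars[i] = "1" if chars[i] == "0" else "0"
    return "".join(chars)


def mutate_real(values: list, low: float, high: float) -> list:
    """Replace one randomly chosen value with a new random value."""
    out = list(values)
    out[random.randrange(len(out))] = random.uniform(low, high)
    return out


def mutate_order(items: list) -> list:
    """Swap the order of two randomly chosen elements."""
    out = list(items)
    i, j = random.sample(range(len(out)), 2)
    out[i], out[j] = out[j], out[i]
    return out
```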
In addition, the mechanism is often enhanced by the 'somatic' idea, i.e. the closer the match (or the less close the match, depending on what we are trying to achieve), the more (or less) disruptive the mutation.

However, mutating the data might not make sense for all problems considered. For instance, it would not be suitable for the movie recommender. Certainly, mutation could be used to make users more similar to the target; however, the validity of recommendations based on these artificial users is questionable, and if overdone, we would end up with the target user itself. Hence for some problems somatic hypermutation is not used, since it is not immediately obvious how to mutate the data sensibly such that the artificial entities still represent plausible data.

Nevertheless, for other problem domains mutation might be very useful. For instance, taking the negative selection approach to intrusion detection, rather than throwing away matching detectors in the first phase of the algorithm, these could be mutated to save time and effort. Also, depending on the degree of matching, the mutation could be more or less strong. This was in fact one extension implemented by Hofmeyr and Forrest (2000). For data mining problems, mutation might also be useful if, for instance, the aim is to cluster users. Then the centre of each cluster (the antibody) could be an artificial pseudo-user that is mutated at will until the desired degree of matching between the centre and the antigens in its cluster is reached. This is an approach implemented by de Castro and Von Zuben (2001).

5. COMPARISON OF ARTIFICIAL IMMUNE SYSTEMS TO GENETIC ALGORITHMS AND NEURAL NETWORKS

Going through the tutorial so far, you might already have noticed that both Genetic Algorithms and Neural Networks have been mentioned a number of times.
In fact, they both have a number of ideas in common with Artificial Immune Systems, and the purpose of the following, self-explanatory table is to put their similarities and differences next to each other (see Dasgupta 1999). Artificial Immune Systems share many elements with evolutionary computation: concepts like population, genotype-phenotype mapping, and proliferation of the fittest are present in different Artificial Immune Systems methods. Artificial Immune Systems models based on immune networks resemble the structures and interactions of connectionist models. Some works have pointed out the similarities and the differences between Artificial Immune Systems and artificial neural networks (Dasgupta 1999; de Castro and Von Zuben 2001). De Castro has also used Artificial Immune Systems to initialise the centres of radial basis function neural networks and to produce a good initial set of weights for feed-forward neural networks.

It should be noted that some of the items in Table 1 are gross simplifications, both to benefit the design of the table and not to overwhelm the reader. Some of these points are debatable; however, we believe that this comparison is nevertheless valuable to show exactly where Artificial Immune Systems fit in. The comparisons are based on a Genetic Algorithm (GA) used for optimisation and a Neural Network (NN) used for classification.
                               | GA (Optimisation)          | NN (Classification)         | Artificial Immune Systems
Components                     | Chromosome Strings         | Artificial Neurons          | Attribute Strings
Location of Components         | Dynamic                    | Pre-Defined                 | Dynamic
Structure                      | Discrete Components        | Networked Components        | Discrete / Networked Components
Knowledge Storage              | Chromosome Strings         | Connection Strengths        | Component Concentration / Network Connections
Dynamics                       | Evolution                  | Learning                    | Evolution / Learning
Meta-Dynamics                  | Recruitment / Elimination  | Construction / Pruning      | Recruitment / Elimination
                               | of Components              | of Connections              | of Components
Interaction between Components | Crossover                  | Network Connections         | Recognition / Network Connections
Interaction with Environment   | Fitness Function           | External Stimuli            | Recognition / Objective Function
Threshold Activity             | Crowding / Sharing         | Neuron Activation           | Component Affinity

Table 1: Comparison of Artificial Immune Systems to Genetic Algorithms and Neural Networks.

6. EXTENSIONS OF ARTIFICIAL IMMUNE SYSTEMS

6.1 Idiotypic Networks - Network Interactions (Suppression)

The idiotypic effect builds on the premise that antibodies can match other antibodies as well as antigens. It was first proposed by Jerne (1973) and formalised into a model by Farmer et al (1986). The theory is currently debated by immunologists, with no clear consensus yet on its effects in the humoral immune system (Kuby 2002). Since an antibody may be matched by other antibodies, which in turn may be matched by yet other antibodies, this activation can continue to spread through the population and potentially has much explanatory power. It could, for example, help explain how the memory of past infections is maintained. Furthermore, it could result in the suppression of similar antibodies, thus encouraging diversity in the antibody pool.
The idiotypic network has been formalised by a number of theoretical immunologists (Perelson and Weisbuch 1997):

\frac{dx_i}{dt} = c\left[\sum_{j=1}^{N} m_{ji} x_i x_j \;-\; k_1 \sum_{j=1}^{N} m_{ij} x_i x_j \;+\; \sum_{j=1}^{n} m_{ji} x_i y_j\right] \;-\; k_2 x_i \qquad (1)

where the three bracketed terms represent, in order, 'antibodies recognised', 'I am recognised' and 'antigens recognised', and the final term is the death rate. Here:

N is the number of antibodies and n is the number of antigens
x_i (or x_j) is the concentration of antibody i (or j)
y_j is the concentration of antigen j
c is a rate constant
k_1 is a suppressive effect and k_2 is the death rate
m_{ji} is the matching function between antibody i and antibody (or antigen) j

As can be seen from the above equation, the nature of an idiotypic interaction can be either positive or negative. Moreover, if the matching function is symmetric, then the balance between 'I am recognised' and 'antibodies recognised' (parameters c and k_1 in the equation) wholly determines whether the idiotypic effect is positive or negative, and we can simplify the equation. We can simplify equation (1) still further if we only allow one antigen in the Artificial Immune System. In the new equation (2), the first term is simplified as we only have one antigen, and the suppression term is normalised to allow a 'like for like' comparison between the different rate constants. The simplified equation looks like this:

\frac{dx_i}{dt} = k_1 m_i x_i y \;-\; \frac{k_2}{n}\sum_{j=1}^{n} m_{ij} x_i x_j \;-\; k_3 x_i \qquad (2)

where:

k_1 is stimulation, k_2 suppression and k_3 the death rate
m_i is the correlation between antibody i and the (sole) antigen
x_i (or x_j) is the concentration of antibody i (or j)
y is the concentration of the (sole) antigen
m_{ij} is the correlation between antibodies i and j
n is the number of antibodies.

Why would we want to use the idiotypic effect? Because it might provide us with a way of achieving 'diversity', similar to 'crowding' or 'fitness sharing' in a genetic algorithm.
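To make equation (2) concrete, the following sketch iterates the antibody concentrations with simple Euler steps. This is an illustrative toy, not code from any published Artificial Immune System; the rate constants, matching values and step size are arbitrary assumptions.

```python
import numpy as np

def idiotypic_step(x, y, m_ag, m_ab, k1, k2, k3, dt=0.01):
    """One Euler step of equation (2).

    x    : antibody concentrations, shape (n,)
    y    : concentration of the single antigen (scalar)
    m_ag : match m_i between each antibody and the antigen, shape (n,)
    m_ab : antibody-antibody match matrix m_ij, shape (n, n)
    """
    n = len(x)
    stimulation = k1 * m_ag * x * y           # k1 * m_i * x_i * y
    suppression = (k2 / n) * (m_ab @ x) * x   # (k2/n) * sum_j m_ij * x_i * x_j
    death = k3 * x                            # k3 * x_i
    return x + dt * (stimulation - suppression - death)

# Toy run with three antibodies (all numbers are illustrative).
rng = np.random.default_rng(0)
x = np.full(3, 10.0)                   # initial concentrations
m_ag = np.array([0.9, 0.8, 0.2])       # affinity of each antibody to the antigen
m_ab = rng.uniform(0.0, 1.0, (3, 3))   # pairwise antibody similarity
for _ in range(100):
    x = idiotypic_step(x, y=5.0, m_ag=m_ag, m_ab=m_ab, k1=0.2, k2=0.1, k3=0.05)
```

With settings in this range, antibodies with higher antigen affinity tend to maintain higher concentrations, while the suppression term penalises antibodies that are too similar to the rest of the pool.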
For instance, in the movie recommender, we want to ensure that the final neighbourhood population is diverse, so that we get more interesting recommendations. Hence, to use the idiotypic effect in the movie recommender system mentioned previously, the pseudo code would be amended by adding the antibody-antibody matching steps:

    Initialise Artificial Immune System
    Encode user for whom to make predictions as antigen Ag
    WHILE (Artificial Immune System not full) & (more antibodies) DO
        Add next user as an antibody Ab
        Calculate matching scores between Ab and Ag, and between Ab and other Abs
        WHILE (Artificial Immune System at full size) & (Artificial Immune System not stabilised) DO
            Reduce concentration of all Abs by a fixed amount
            Match each Ab against Ag and stimulate as necessary
            Match each Ab against each other Ab and execute idiotypic effect
        OD
    OD
    Use final set of antibodies to produce recommendation.

The diagrams in figure 3 below show the idiotypic effect using dotted arrows, whereas standard stimulation is shown using black arrows. In the left diagram, antibodies Ab1 and Ab3 are very similar and would have their concentrations reduced in the 'Iterate Artificial Immune System' stage of the algorithm above. However, in the right diagram, the four antibodies are well separated from each other as well as being close to the antigen, and so would have their concentrations increased. At each iteration of the film recommendation Artificial Immune System, the concentration of the antibodies is changed according to equation (2) above. This will increase the concentration of antibodies that are similar to the antigen, and can allow the stimulation, suppression, or both, of antibody-antibody interactions to have an effect on the antibody concentration. More detailed discussion of these effects on recommendation problems is contained within Cayzer and Aickelin (2002a and 2002b).

Figure 3:
Illustration of the idiotypic effect

6.2 Danger Theory

Over the last decade, a new theory has become popular amongst immunologists. It is called the Danger Theory, and its chief advocate is Matzinger (1994, 2001 and 2003). A number of advantages are claimed for this theory; not least that it provides a method of 'grounding' the immune response. The theory is not complete, and there are some doubts about how much it actually changes behaviour and/or structure. Nevertheless, the theory contains enough potentially interesting ideas to make it worth assessing its relevance to Artificial Immune Systems.

However, it is not simply a question of matching in the humoral immune system. It is fundamental that only the 'correct' cells are matched, as otherwise this could lead to a self-destructive autoimmune reaction. Classical immunology (Kuby 2002) stipulates that an immune response is triggered when the body encounters something non-self or foreign. It is not yet fully understood how this self-non-self discrimination is achieved, but many immunologists believe that the difference between them is learnt early in life. In particular, it is thought that the maturation process plays an important role in achieving self-tolerance by eliminating those T- and B-cells that react to self. In addition, a 'confirmation' signal is required: for either B-cell or T (killer) cell activation, a T (helper) lymphocyte must also be activated. This dual activation is further protection against the chance of accidentally reacting to self.

Matzinger's Danger Theory debates this point of view (for a good introduction, see Matzinger 2003; technical overviews can be found in Matzinger 1994 and Matzinger 2001). She points out that there must be discrimination happening that goes beyond the self-non-self distinction described above.
For instance:

• There is no immune reaction to foreign bacteria in the gut or to the food we eat, although both are foreign entities.
• Conversely, some auto-reactive processes are useful, for example against self molecules expressed by stressed cells.
• The definition of self is problematic: realistically, self is confined to the subset actually seen by the lymphocytes during maturation.
• The human body changes over its lifetime, and thus self changes as well. Therefore, the question arises whether defences against non-self learned early in life might be auto-reactive later.

Other aspects that seem to be at odds with the traditional viewpoint are autoimmune diseases and certain types of tumours that are fought by the immune system (both attacks against self), and successful transplants (no attack against non-self). Matzinger concludes that the immune system actually discriminates "some self from some non-self". She asserts that the Danger Theory introduces not just new labels, but a way of escaping the semantic difficulties with self and non-self, and thus provides grounding for the immune response. If we accept the Danger Theory as valid, we can take care of 'non-self but harmless' and of 'self but harmful' invaders into our system. To see how this is possible, we will have to examine the theory in more detail.

The central idea in the Danger Theory is that the immune system does not respond to non-self but to danger. Thus, just like the self-non-self theories, it fundamentally supports the need for discrimination. However, it differs in the answer to what should be responded to. Instead of responding to foreignness, the immune system reacts to danger. This theory is borne out of the observation that there is no need to attack everything that is foreign, something that seems to be supported by the counter-examples above.
In this theory, danger is measured by damage to cells, indicated by distress signals that are sent out when cells die an unnatural death (cell stress or lytic cell death, as opposed to programmed cell death, or apoptosis). Figure 4 depicts how we might picture an immune response according to the Danger Theory (Aickelin and Cayzer 2002). A cell that is in distress sends out an alarm signal, whereupon antigens in the neighbourhood are captured by antigen-presenting cells such as macrophages, which then travel to the local lymph node and present the antigens to lymphocytes. Essentially, the danger signal establishes a danger zone around itself. Thus B-cells producing antibodies that match antigens within the danger zone get stimulated and undergo the clonal expansion process. Those that do not match, or are too far away, do not get stimulated.

Figure 4: Danger Theory illustration. (The figure shows a damaged cell emitting a danger signal that defines a danger zone; antibodies matching antigens inside the zone are stimulated, while non-matching antibodies and matches outside the zone are not.)

Matzinger admits that the exact nature of the danger signal is unclear. It may be a 'positive' signal (for example heat shock protein release) or a 'negative' signal (for example lack of synaptic contact with a dendritic antigen-presenting cell). This is where the Danger Theory shares some of the problems associated with traditional self-non-self discrimination (i.e. how to discriminate danger from non-danger). However, in this case, the signal is grounded rather than being some abstract representation of danger.

How could we use the Danger Theory in Artificial Immune Systems? The Danger Theory is not about the way Artificial Immune Systems represent data (Aickelin and Cayzer 2002). Instead, it provides ideas about which data the Artificial Immune Systems should represent and deal with. They should focus on dangerous, i.e. interesting, data.
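The danger-zone mechanism lends itself to a compact sketch: only antibodies that match an antigen lying within a fixed radius of the distress signal are stimulated. The coordinates, radius and stimulation increment below are invented for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Antibody:
    antigen_pos: tuple    # location of the antigen this antibody matches
    matches: bool         # whether the antibody matches that antigen at all
    stimulation: float = 0.0

def apply_danger_signal(antibodies, danger_source, zone_radius=1.0, boost=1.0):
    """Stimulate only antibodies whose matched antigen lies inside the danger zone."""
    for ab in antibodies:
        inside = math.dist(ab.antigen_pos, danger_source) <= zone_radius
        if ab.matches and inside:
            ab.stimulation += boost   # match inside the zone: clonal expansion
        # matches outside the zone, and non-matchers, receive no stimulation

population = [
    Antibody((0.2, 0.1), matches=True),    # match, inside the zone  -> stimulated
    Antibody((5.0, 5.0), matches=True),    # match, but too far away -> ignored
    Antibody((0.3, 0.0), matches=False),   # no match                -> ignored
]
apply_danger_signal(population, danger_source=(0.0, 0.0))
```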
It could be argued that the shift from non-self to danger is merely a symbolic label change that achieves nothing. We do not believe this to be the case, since danger is a grounded signal, and non-self is (typically) a set of feature vectors with no further information about whether all or some of these features are required over time. The danger signal helps us to identify which subset of feature vectors is of interest. A suitably defined danger signal thus overcomes many of the limitations of self-non-self selection. It restricts the domain of non-self to a manageable size, removes the need to screen against all self, and deals adaptively with scenarios where self (or non-self) changes over time.

The challenge is clearly to define a suitable danger signal, a choice that might prove as critical as the choice of fitness function for an evolutionary algorithm. In addition, the physical distance in the biological system should be translated into a suitable proxy measure for similarity or causality in Artificial Immune Systems. This process is not likely to be trivial. Nevertheless, if these challenges are met, then future Artificial Immune Systems applications might derive considerable benefit, and new insights, from the Danger Theory, in particular Intrusion Detection Systems.

7. SOME PROMISING AREAS FOR FUTURE APPLICATION

It seems intuitively obvious that Artificial Immune Systems should be most suitable for computer security problems. If the human immune system keeps our body alive and well, why can we not do the same for computers using Artificial Immune Systems? Earlier, we outlined the traditional approach to doing this. However, in order to provide viable Intrusion Detection Systems, Artificial Immune Systems must build a set of detectors that accurately match antigens.
In current Artificial Immune Systems based Intrusion Detection Systems (Dasgupta and Gonzalez 2002, Esponda et al 2002, Hofmeyr and Forrest 2000), both network connections and detectors are modelled as strings. Detectors are randomly created and then undergo a maturation phase where they are presented with good, i.e. self, connections. If the detectors match any of these, they are eliminated; otherwise they become mature. These mature detectors start to monitor new connections during their lifetime. If these mature detectors match anything else, exceeding a certain threshold value, they become activated. This is then reported to a human operator who decides whether there is a true anomaly. If so, the detectors are promoted to memory detectors with an indefinite life span and minimum activation threshold (immunisation) (Kim and Bentley 2002).

An approach such as the above is known as negative selection, as only those detectors (antibodies) that do not match live on (Forrest et al 1994). Earlier versions of the negative selection algorithm used a binary representation scheme; however, this scheme shows scaling problems when it is applied to real network traffic (Kim and Bentley 2001). As the systems to be protected grow larger and larger, so do self and non-self. Hence, it becomes more and more problematic to find a set of detectors that provides adequate coverage whilst being computationally efficient. It is inefficient to map the entire self or non-self universe, particularly as they will be changing over time and only a minority of non-self is harmful, whilst some self might cause damage (e.g. an internal attack). This situation is further aggravated by the fact that the labels self and non-self are often ambiguous, and even with expert knowledge they are not always applied correctly (Kim and Bentley 2002). How could this problem be overcome?
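Before turning to that question, the censoring and monitoring stages just described can be sketched as follows. The bit-string encoding and the r-contiguous-bits matching rule follow the spirit of Forrest et al (1994), but the string length, r value and self set below are toy assumptions.

```python
import random

def r_contiguous_match(a, b, r):
    """True if bit strings a and b agree in at least r contiguous positions."""
    run = 0
    for ca, cb in zip(a, b):
        run = run + 1 if ca == cb else 0
        if run >= r:
            return True
    return False

def generate_detectors(self_set, n_detectors, length=12, r=6, seed=1):
    """Censoring: keep only random detectors that match no self string."""
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        candidate = ''.join(rng.choice('01') for _ in range(length))
        if not any(r_contiguous_match(candidate, s, r) for s in self_set):
            detectors.append(candidate)   # survived censoring: a mature detector
    return detectors

def is_anomalous(sample, detectors, r=6):
    """Monitoring: a new connection is flagged if any mature detector matches it."""
    return any(r_contiguous_match(sample, d, r) for d in detectors)

self_set = ['000000000000', '000000111111']   # toy 'self' connections
detectors = generate_detectors(self_set, n_detectors=5)
```

By construction, no detector flags a self string; the open question above is whether such coverage can remain adequate and efficient as self grows.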
One way could be to borrow ideas from the Danger Theory to provide a way of grounding the response and hence removing the necessity to map self or non-self. In our system, the correlation of low-level alerts (danger signals) will trigger a reaction. An important and recent research issue for Intrusion Detection Systems is how to find true intrusion alerts among the thousands of false alerts generated (Hofmeyr and Forrest 2000). Existing Intrusion Detection Systems employ various types of sensors that monitor low-level system events. Those sensors report anomalies of network traffic patterns, unusual terminations of UNIX processes, memory usage, attempts to access unauthorised files, etc. (Kim and Bentley 2001). Although these reports are useful signals of real intrusions, they are often mixed with false alerts, and their unmanageable volume forces a security officer to ignore most alerts (Hoagland and Staniford 2002). Moreover, the low level of the alerts makes it very hard for a security officer to identify advancing intrusions that usually consist of different stages of attack sequences. For instance, it is well known that computer hackers use a number of preparatory stages (raising low-level alerts) before actual hacking, according to Hoagland and Staniford. Hence, the correlations between intrusion alerts from different attack stages provide more convincing attack scenarios than detecting an intrusion scenario based on low-level alerts from individual stages. Furthermore, such scenarios allow the Intrusion Detection Systems to detect intrusions early, before damage becomes serious.

To correlate Intrusion Detection Systems alerts for detection of an intrusion scenario, recent studies have employed two different approaches: a probabilistic approach (Valdes and Skinner 2001) and an expert system approach (Ning et al 2002).
The probabilistic approach represents known intrusion scenarios as Bayesian networks. The nodes of the Bayesian networks are Intrusion Detection Systems alerts, and the posterior likelihood between nodes is updated as new alerts are collected. The updated likelihood can lead to conclusions about a specific intrusion scenario occurring or not. The expert system approach initially builds possible intrusion scenarios by identifying low-level alerts. These alerts consist of prerequisites and consequences, and they are represented as hypergraphs. Known intrusion scenarios are detected by observing the low-level alerts at each stage, but these approaches have the following problems according to Cuppens et al (2002):

• Handling unobserved low-level alerts that comprise an intrusion scenario.
• Handling optional prerequisite actions.
• Handling intrusion scenario variations.

The common trait of these problems is that the Intrusion Detection Systems can fail to detect an intrusion when an incomplete set of alerts comprising an intrusion scenario is reported. In handling this problem, the probabilistic approach is somewhat more advantageous than the expert system approach, because in theory it allows the Intrusion Detection Systems to correlate missing or mutated alerts. The current probabilistic approach builds Bayesian networks based on the similarities between selected alert features. However, these similarities alone can fail to identify a causal relationship between prerequisite actions and actual attacks if pairs of prerequisite actions and actual attacks do not appear frequently enough to be reported. Attackers often do not repeat the same actions in order to disguise their attempts. Thus, the current probabilistic approach fails to detect intrusions that do not show strong similarities between alert features but have causal relationships leading to final attacks.
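As a toy illustration of why feature similarity alone can miss causally related alerts, consider a naive correlator that groups alerts by the fraction of shared field values. The alert format and threshold are invented for illustration; real probabilistic correlators such as Valdes and Skinner's are far richer than this.

```python
def alert_similarity(a, b):
    """Fraction of shared alert fields that have equal values."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(a[f] == b[f] for f in shared) / len(shared)

def correlate(alerts, threshold=0.5):
    """Greedily group alerts whose feature similarity exceeds the threshold."""
    groups = []
    for alert in alerts:
        for group in groups:
            if alert_similarity(alert, group[0]) >= threshold:
                group.append(alert)
                break
        else:
            groups.append([alert])
    return groups

alerts = [
    {'src': '10.0.0.5',   'dst': '10.0.0.9', 'type': 'portscan'},
    {'src': '10.0.0.5',   'dst': '10.0.0.9', 'type': 'bufferoverflow'},
    {'src': '172.16.0.2', 'dst': '10.0.0.9', 'type': 'login-failure'},
]
groups = correlate(alerts)
# The first two alerts share source and destination and are grouped together.
# The third, a causally related preparatory step from a different source,
# lands in its own group: exactly the weakness described above.
```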
Such a limitation means that these Intrusion Detection Systems fail to detect sophisticated intrusion scenarios. We propose Artificial Immune Systems based on Danger Theory ideas that can handle the above Intrusion Detection Systems alert correlation problems. The Danger Theory explains the immune response of the human body by the interaction between Antigen Presenting Cells and various signals. The immune response of each Antigen Presenting Cell is determined by the generation of danger signals through cellular stress or cell death. In particular, the balance and correlation between different danger signals, depending on the different causes of cell death, would appear to be critical to the immunological outcome. The investigation of this hypothesis is the main research goal of the immunologists for this project. The wet experiments of this project focus on understanding how the Antigen Presenting Cells react to the balance of different types of signals, and how this reaction leads to an overall immune response. Similarly, our Intrusion Detection Systems investigation will centre on understanding how intrusion scenarios would be detected by reacting to the balance of various types of alerts. In the Human Immune System, Antigen Presenting Cells activate according to the balance of apoptotic and necrotic cells, and this activation leads to protective immune responses. Similarly, the sensors in Intrusion Detection Systems report various low-level alerts, and the correlation of these alerts will lead to the construction of an intrusion scenario.

8. TRICKS OF THE TRADE

Are Artificial Immune Systems suitable for pure optimisation? Depending on what is meant by optimisation, the answer is probably no, in the same sense as 'pure' genetic algorithms are not 'function optimizers'.
One has to keep in mind that although the immune system is about matching and survival, it is really a team effort where multiple solutions are produced all the time that together provide the answer. Hence, in our opinion, Artificial Immune Systems are probably more suited as an optimiser where multiple solutions are of benefit, either directly, e.g. because the problem has multiple objectives, or indirectly, e.g. when a neighbourhood of solutions is produced that is then used to generate the desired outcome. However, Artificial Immune Systems can be made into more focused optimisers by adding hill-climbing or other functions that exploit local or problem-specific knowledge, similar to the idea of augmenting genetic algorithms into memetic algorithms.

What problems are Artificial Immune Systems most suitable for? As mentioned in the previous paragraph, we believe that although using Artificial Immune Systems for pure optimisation, e.g. the Travelling Salesman Problem or Job Shop Scheduling, can be made to work, this is probably missing the point. Artificial Immune Systems are powerful when a population of solutions is essential, either during the search or as an outcome. Furthermore, the problem has to have some concept of 'matching'. Finally, because at their heart Artificial Immune Systems are evolutionary algorithms, they are more suitable for problems that change over time and need to be solved again and again, rather than for one-off optimisations. Hence, the evidence seems to point to Data Mining in its wider meaning as the best area for Artificial Immune Systems.

How do I set the parameters? Unfortunately, there is no short answer to this question. As with the majority of other heuristics that require parameters to operate, their setting is individual to the problem solved, and universal values are not available.
However, it is fair to say that, along with other evolutionary algorithms, Artificial Immune Systems are robust with respect to parameter values as long as they are chosen from a sensible range.

Why not use a Genetic Algorithm instead? Because you may miss out on the benefits of the idiotypic network effects.

Why not use a Neural Network instead? Because you may miss out on the benefits of a population of solutions and the evolutionary selection pressure and mutation.

Are Artificial Immune Systems Learning Classifier Systems under a different name? No, not quite. However, to our knowledge, Learning Classifier Systems are probably the most similar of the better-known meta-heuristics, as they also combine some features of Evolutionary Algorithms and Neural Networks. However, these features are different. Someone who is interested in implementing an Artificial Immune System or a Learning Classifier System is likely to be well advised to read about both approaches to see which one is most suited for the problem at hand.

9. CONCLUSIONS

The immune system is highly distributed, highly adaptive, and self-organising in nature; it maintains a memory of past encounters and has the ability to continually learn about new encounters. Artificial Immune Systems are an example of systems developed around the current understanding of the immune system. They illustrate how an Artificial Immune System can capture the basic elements of the immune system and exhibit some of its chief characteristics. Artificial Immune Systems can incorporate many properties of natural immune systems, including diversity, distributed computation, error tolerance, dynamic learning and adaptation, and self-monitoring. The human immune system has motivated scientists and engineers to find powerful information processing algorithms that have solved complex engineering tasks.
The Artificial Immune System is a general framework for a distributed adaptive system and could, in principle, be applied to many domains. Artificial Immune Systems can be applied to classification problems, optimisation tasks and other domains. Like many biologically inspired systems, they are adaptive, distributed and autonomous. The primary advantages of Artificial Immune Systems are that they only require positive examples, and that the patterns they have learnt can be explicitly examined. In addition, because they are self-organizing, they do not require effort to optimize any system parameters.

To us, the attraction of the immune system is this: if an adaptive pool of antibodies can produce 'intelligent' behaviour, can we harness the power of this computation to tackle the problems of preference matching, recommendation and intrusion detection? Our conjecture is that if the concentrations of those antibodies that provide a better match are allowed to increase over time, we should end up with a subset of good matches. However, we are not interested in optimising, i.e. in finding the one best match. Instead, we require a set of antibodies that are a close match but which are at the same time distinct from each other for successful recommendation. This is where we propose to harness the idiotypic effects of antibodies binding to similar antibodies to encourage diversity.

10. SOURCES OF ADDITIONAL INFORMATION

The following websites, books and proceedings should be an excellent starting point for those readers wishing to learn more about Artificial Immune Systems.

• Artificial Immune Systems and Their Applications by D Dasgupta (Editor), Springer Verlag, 1999.
• Artificial Immune Systems: A New Computational Intelligence Approach by L de Castro and J Timmis, Springer Verlag, 2002.
• Immunocomputing: Principles and Applications by A Tarakanov et al, Springer Verlag, 2003.
• Proceedings of the International Conference on Artificial Immune Systems (ICARIS), Springer Verlag, 2003.
• Artificial Immune Systems Forum Webpage: http://www.artificial-immune-systems.org/artist.htm
• Artificial Immune Systems Bibliography: http://issrl.cs.memphis.edu/AIS/AIS_bibliography.pdf

REFERENCES

Aickelin U, Bentley P, Cayzer S, Kim J and McLeod J (2003), Danger Theory: The Link between Artificial Immune Systems and Intrusion Detection Systems?, Proceedings 2nd International Conference on Artificial Immune Systems, pp 147-155, Springer, Edinburgh, UK.
Aickelin U and Cayzer S (2002), The Danger Theory and Its Application to Artificial Immune Systems, Proceedings 1st International Conference on Artificial Immune Systems, pp 141-148, Canterbury, UK.
Amazon (2003), Amazon.com Recommendations, http://www.amazon.com/.
Cayzer S and Aickelin U (2002a), A Recommender System based on the Immune Network, Proceedings CEC2002, pp 807-813, Honolulu, USA.
Cayzer S and Aickelin U (2002b), On the Effects of Idiotypic Interactions for Recommendation Communities in Artificial Immune Systems, Proceedings 1st International Conference on Artificial Immune Systems, pp 154-160, Canterbury, UK.
Cuppens F et al (2002), Correlation in an Intrusion Process, Internet Security Communication Workshop (SECI'02).
Dasgupta D (Editor) (1999), Artificial Immune Systems and Their Applications, Springer Verlag.
Dasgupta D and Gonzalez F (2002), An Immunity-Based Technique to Characterize Intrusions in Computer Networks, IEEE Transactions on Evolutionary Computation, Vol 6, No 3, pp 1081-1088.
De Castro L N and Von Zuben F J (2001), Learning and Optimization Using the Clonal Selection Principle, IEEE Transactions on Evolutionary Computation, Special Issue on Artificial Immune Systems.
Delgado J and Ishii N (2001), Multi-agent Learning in Recommender Systems for Information Filtering on the Internet, Journal of Co-operative Information Systems, Vol 10, pp 81-100.
Esponda F, Forrest S and Helman P (2002), Positive and Negative Detection, IEEE Transactions on Systems, Man and Cybernetics.
Farmer J D, Packard N H and Perelson A S (1986), The immune system, adaptation, and machine learning, Physica, Vol 22, pp 187-204.
Forrest S, Perelson A S, Allen L and Cherukuri R (1994), Self-Nonself Discrimination in a Computer, Proceedings of the IEEE Symposium on Research in Security and Privacy, pp 202-212, Oakland, May 16-18.
Goldsby R, Kindt T and Osborne B (2000), Kuby Immunology, Fourth Edition, W H Freeman.
Hightower R R, Forrest S and Perelson A S (1995), The evolution of emergent organization in immune system gene libraries, Proceedings of the 6th Conference on Genetic Algorithms, pp 344-350.
Hoagland J and Staniford S (2002), Viewing Intrusion Detection Systems alerts: Lessons from SnortSnarf, http://www.silicondefense.com/software/snortsnarf
Hofmeyr S and Forrest S (2000), Architecture for an Artificial Immune System, Evolutionary Computation, Vol 7, No 1, pp 1289-1296.
Jerne N K (1973), Towards a network theory of the immune system, Annals of Immunology, Vol 125, No C, pp 373-389.
Kim J and Bentley P (1999), The Artificial Immune Model for Network Intrusion Detection, 7th European Congress on Intelligent Techniques and Soft Computing (EUFIT'99).
Kim J and Bentley P (2001), Evaluating Negative Selection in an Artificial Immune System for Network Intrusion Detection, Genetic and Evolutionary Computation Conference 2001, pp 1330-1337.
Kim J and Bentley P (2002), Towards an Artificial Immune System for Network Intrusion Detection: An Investigation of Dynamic Clonal Selection, Congress on Evolutionary Computation 2002, pp 1015-1020.
Kuby J (2002), Immunology, Fifth Edition, by Goldsby R A, Kindt T J and Osborne B A, W H Freeman.
Matzinger P (2003), http://cmmg.biosci.wayne.edu/asg/polly.html
Matzinger P (2001), The Danger Model in Its Historical Context, Scandinavian Journal of Immunology, Vol 54, pp 4-9.
Matzinger P (1994), Tolerance, Danger and the Extended Family, Annual Review of Immunology, Vol 12, pp 991-1045.
Morrison T and Aickelin U (2002), An Artificial Immune System as a Recommender System for Web Sites, Proceedings of the 1st International Conference on Artificial Immune Systems (ICARIS-2002), pp 161-169, Canterbury, UK.
Ning P, Cui Y and Reeves S (2002), Constructing Attack Scenarios through Correlation of Intrusion Alerts, Proceedings of the 9th ACM Conference on Computer & Communications Security, pp 245-254.
Perelson A S and Weisbuch G (1997), Immunology for physicists, Reviews of Modern Physics, Vol 69, pp 1219-1267.
Resnick P and Varian H R (1997), Recommender systems, Communications of the ACM, Vol 40, pp 56-58.
Valdes A and Skinner K (2001), Probabilistic Alert Correlation, RAID'2001, pp 54-68.