Stream Computing
Stream computing is the use of multiple autonomic and parallel modules together with integrative processors at a higher level of abstraction to embody "intelligent" processing. The biological basis of this computing is sketched and the matter of learning is discussed.
Authors: Subhash Kak
Introduction

Stream computing is often seen as embodied in compute-intensive kernel functions that are applied to each element in the data stream, one at a time. These kernel functions operate in sequence in a pipelined fashion, in what is essentially a SIMD (single instruction, multiple data) architecture. The advantages of doing this are simplified interconnects, a large increase in performance, and simplified programming. This processing extends the style of computing found in DSP applications such as voice, images, and video to a much larger range of applications.

Here we speak of stream computing in a much larger philosophical setting, one that is sometimes expressed in the idea of the stream of consciousness. In literature, this idea refers to the internal monologue that goes on in the mind. Such a stream has no single focus and may shift from one object to another, suggesting that the processing behind it corresponds to a variety of processing centers that randomly project, in sequence, onto the theater of the mind.

But the motivation for this technical paradigm is the imperative of neuroscience more than the literary allusion. The brain is composed of several modules, each of which is essentially an autonomous neural network. Thus the visual network responds to visual stimulation and is also active during visual imagery, which is when one sees with the mind's eye. Likewise, the motor network produces movement and is active during imagined movements. However, although the brain is modular, a part of it, located for most people in the left hemisphere, monitors the modules and interprets their individual actions in order to create a unified idea of the self. In other words, there is a higher integrative or interpretive module that synthesizes the actions of the lower modules [1].
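A rough computational sketch of this organization, autonomous modules (each itself a small pipeline of per-element kernels) whose outputs a higher-level interpreter integrates, might look as follows. All module names and thresholds here are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: several autonomous modules each process a full copy of
# the data (each module a small pipeline of per-element kernels), and a
# higher-level interpreter unifies their verdicts.

def pipeline(kernels):
    """Build a module that applies kernels in sequence to each element."""
    def module(stream):
        out = []
        for x in stream:
            for k in kernels:
                x = k(x)
            out.append(x)
        return out
    return module

# Two autonomous "centers": one sensitive to magnitude, one to change.
magnitude_module = pipeline([abs, lambda x: 1 if x > 0.5 else 0])

def change_module(stream):
    return [1 if abs(b - a) > 0.3 else 0 for a, b in zip(stream, stream[1:])]

def interpreter(verdicts):
    # Higher integrative module: did a majority of modules fire at all?
    votes = [max(v) for v in verdicts]
    return sum(votes) >= len(votes) / 2

data = [0.1, 0.2, 0.9, -0.8]
verdicts = [magnitude_module(data), change_module(data)]
unified = interpreter(verdicts)
```

The point of the sketch is only the shape of the computation: every module sees the whole stream, and unification happens at a higher level rather than inside any one module.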
Thus, if one were to construct a machine similar in organization to the brain, we would need a system of autonomic and parallel centers together with integrative processors that work at a higher level of abstraction to implement high-level "intelligent" processing. It would be wise to use this framework to create new architectures that deal with high-volume data involving high-level reasoning. As a caveat, it must be said that this, in itself, will not endow the system with a biological type of intelligence, since another hallmark of biological intelligence, one that we are not in a position to simulate effectively in our implementations, is reorganization with respect to a changing environment [2-4].

From a practical implementation point of view, one can view stream computing as an overarching paradigm of which the current SIMD implementations of uniform stream processing are an elementary embodiment. The general streaming paradigm would have several SIMD machines operate in parallel on many copies of the data, followed by further processing on another machine that operates at a higher abstraction. The new computing paradigm should make it possible to perform rapid processing of fast, high-volume data streams.

The Path to Streaming

Classical computers are based on ideas that developed in the 1930s and 1940s to give shape to the intuition of how the rational mind performs computation. The general-purpose computing machine was visualized to consist of four main parts: the arithmetic logic unit, memory, control, and the interface with the human operator.

Figure 1: The stored-program computer architecture (memory, control unit, arithmetic unit or CPU, input, and output)

The conception of the computer instrumented the formal notion of an algorithm.
The innovative leap in building a general-purpose device was the storing not only of data and the intermediate results of computation, but also of the instructions that brought about the computation. In the classical computer's memory there is no fundamental distinction between data and instruction, which is considered a shortcoming by some. Other claimed shortcomings are: the memory is monolithic and must be sequentially addressed; it is single-dimensional, whereas in nature patterns of memory are multidimensional; and the attributes of data are not stored together with it, in contrast to what obtains in a higher-level language, where we expect a generic operation to take on a meaning determined by the meaning of its operands.

The reason why the architecture of the classical computer came to have this form is clear enough when we see it as an embodiment of the serial computation carried out by the rational mind in arithmetic and other numerical tasks. However, whereas some computations carried out by humans (especially those dealing with numbers) do fall within the category well captured by serial computation, a vast number of other computations do not. In particular, tasks associated with "intelligence," which typically involve processing enormous amounts of data, do not involve deliberate computation. In such tasks, autonomous centers appear to carry out computations independently, reducing the dimensions of the data and mapping it into an abstract space where further computations are done.
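A toy sketch (not from the paper) of such an autonomous center: a long sample window is mapped to a few summary features in an abstract space, and the further computation, here a simple decision, operates on the reduced representation only. The feature names and limits are illustrative assumptions.

```python
# Illustrative dimension reduction: an n-dimensional window collapses to
# a 2-dimensional point (mean, spread) in an abstract feature space.

def to_feature_space(window):
    """Map an n-dimensional window to a 2-dimensional abstract point."""
    n = len(window)
    mean = sum(window) / n
    spread = sum(abs(x - mean) for x in window) / n
    return (mean, spread)

def decide(point, mean_limit=0.5, spread_limit=0.25):
    # Further computation happens on the reduced representation only.
    mean, spread = point
    return mean > mean_limit or spread > spread_limit

window = [0.4, 0.6, 0.5, 0.9, 0.8, 0.7]
point = to_feature_space(window)    # 6 dimensions reduced to 2
flag = decide(point)
```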
The Stream Computing Paradigm

Figure 2: The stream computing framework (N processors working in parallel on the input, each followed by a decision module, with a higher-order processor producing the output)

Although much of the computation is done in parallel, this is not the parceling out of computational tasks to different processors by taking advantage of the parallel components of the algorithm, which is what happens in what is technically called "parallel computing" [5]. Rather, here the entire data is seemingly pushed into a variety of autonomous processors, much as a stream of water is pushed into various channels with different functions, justifying the term stream computing. The higher-order processor cannot be generic; specific application knowledge must be used to design it.

The Biological Context

There is a wealth of experimental evidence from neuroscience suggesting that the conscious mind "creates" its reality in order to have a narrative that is "consistent" with the information reaching it from various specialized modules. This is seen most clearly in subjects who have suffered brain injury, where the effect becomes most pronounced.

In the 1960s and 1970s, Kornhuber and Deecke performed a series of experiments to measure the correlation between electrical activity in the brain (EEG) and a voluntary act. They found that the EEG from the area of the motor cortex corresponding to the finger, for a subject who is about to move a finger, starts to build up several hundred milliseconds before the conscious decision to act is made [6]. The conscious mind appears to label such an act its own free decision, although one might dispute this. Libet et al., in a variation of this experiment, showed that the EEG potential appeared to increase about 0.3 seconds before the subject made his "conscious choice" to flex his finger.
These results are in agreement with the idea that the cortex constructs a model consistent with the mediating experience [7]. Likewise, blindsight subjects can see the completed Kanizsa triangle in their supposedly blind field. This indicates that the space of our visual experience is located outside the visual cortex.

Figure 3: A Kanizsa triangle

Other examples of the conscious brain harmonizing the activity of various modules of the brain come from split-brain patients. In the words of Gazzaniga [8]:

In a complication of stroke called anosognosia with hemiplegia, patients cannot recognize that their left arm is theirs because the stroke damaged the right parietal cortex, which manages our body's integrity, position, and movement... When patients with this disorder are asked about their arm and why they can't move it, they will say "It's not mine" or "I just don't feel like moving it"—reasonable conclusions, given the input that the left-hemisphere interpreter is receiving.

The left-hemisphere interpreter is not only a master of belief creation, but it will stick to its belief system no matter what. Patients with "reduplicative paramnesia," because of damage to the brain, believe that there are copies of people or places. In short, they will remember another time and mix it with the present. As a result, they will create seemingly ridiculous, but masterful, stories to uphold what they know to be true due to the erroneous messages their damaged brain is sending their intact interpreter.

These findings do not fit into any simple model of computation, reminding us that there remains a big gulf between the reality of biological computing and our current explanations.

Other perspectives from which to look at information processing by the brain are those of learning and structure. Learning of memories is related to two distinct types: short-term and long-term.
It is also believed that much of learning is based on recursive primitives. Recursion may be seen as the nesting of stories inside stories, dolls inside dolls, or the flowering of leaves and branches in a self-similar manner in a tree. The motor-sensory cortex maps the world as experienced by the senses and is connected in a nested form to deeper layers.

Recursive representations are used implicitly or explicitly in scientific thought, literature, art, and music. Recursion is the basis of mental images not only of physical systems but also of behavior. In music, recursion helps in the creation of structure, as in the employment of a delayed version of a tune as its own accompaniment. In Indian music, rhythms and melodies are created out of set notes constituting the raga, which is then syncopated, translated, and rhythmically shrunk and enlarged to communicate emotion. Baroque composers used recursion in their canons and fugues. A single piece of music may begin as a melody. The rules by which the piece expands are those of recursive incorporation, replication, cleaving, and displacement, one transformation of the basic pattern nesting inside another. The notes of the basic melody are replicated several times throughout the piece, and incorporated within this replication may be its play at twice or half the speed, assigned to different instrument voices. The core melody may at other times be played in retrograde and assigned to a different instrument to increase the beauty of the piece. The texture may change further with an inversion of everything done so far, as in Johann Sebastian Bach's Musical Offering. Other behavior, whether routine or creative, is also constructed of similar primitives.

Learning

Learning requires networks that have the capacity to generalize very quickly, since the decision time is short.
It was the motivation to model short-term memory that led to the development of the corner classification (CC) family of instantaneously trained neural networks (ITNN) [9-11], which learn instantaneously and have very good generalization performance. ITNN networks are an example of prescriptive learning; they train the network by isolating, in the n-dimensional cube of the inputs, the corner represented by the input vector being learnt. In specific modules, the processors may have strong feedback connections [12-15].

Another approach to learning is that of using recursive structures that are implicitly mapped to the recursive primitives of the information process. This approach, called the Network of Networks (NoN) approach, is based on a model of cortical processing. It rests on two fundamental ideas in neurobiology: (1) groups of interacting nerve cells, or neurons, encode functional information, and (2) processing occurs simultaneously across different levels of neural organization. Although the NoN model makes simplifying assumptions about neural activity, systems using this approach have performed very well. Nested distributed systems denote the core architecture of the model. In the context of describing computational features of the cerebral cortex, this organizing principle was proposed by Mountcastle [16] and applied by Sutton and Anderson [17].

The formation of clusters and levels among neurons is based on their interconnections. The NoN model suggests that despite enormous diversity in the connection patterns associated with individual neurons, many neural circuits can be subdivided into essentially similar sub-circuits, where each sub-circuit contains many types of neurons. This hierarchy is evident in the cerebral cortex, which has long been considered the most complex and elusive of neural circuits. (See Figure 4, adapted from [17].)
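The corner-isolating idea mentioned earlier in this section can be sketched in the following hedged form: each training vector is stored, after a single presentation, as one hidden unit that fires for inputs within a small Hamming radius of its corner of the n-cube. This is only in the spirit of the CC/ITNN family; the actual CC weight and bias rules of [9-11] are not reproduced here.

```python
# Hedged sketch of prescriptive, corner-isolating learning: one hidden
# unit per training sample, firing within Hamming radius r of its corner.

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

class CornerClassifier:
    def __init__(self, radius=1):
        self.radius = radius
        self.corners = []          # (vector, label) pairs, one per sample

    def train(self, vector, label):
        # "Instantaneous" training: a single presentation stores the corner.
        self.corners.append((vector, label))

    def classify(self, vector):
        fired = [lbl for c, lbl in self.corners
                 if hamming(c, vector) <= self.radius]
        return fired[0] if fired else None

net = CornerClassifier(radius=1)
net.train((1, 0, 1, 1), "A")
net.train((0, 1, 0, 0), "B")
label = net.classify((1, 0, 1, 0))   # within radius 1 of the first corner
```

The generalization radius plays the role of the corner's neighborhood: inputs close to a learnt vector inherit its label, while inputs far from every corner produce no response.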
This nesting arrangement serves to link different and often widely separated regions of the cortex in a precise but distributed manner. Several physiological responses, such as those occurring in the visual cortex in response to optical stimuli, may be associated with each first-level sub-circuit.

Regarding the question of universals underlying aesthetics in art and music [18], it has been argued that although there are no specific universals that have a quantitative form, there are qualitative ones that are ultimately related to the neurophysiological basis of the cognitive system. These qualitative universals also have a recursive nature.

Figure 4: Schematic representation of the cerebral cortex. Three networks of intermediate-level organization are displayed, showing recursive structure.

Applications

Clearly, stream computing is a paradigm whose implementations embody elements appropriate to the specific application. In practice, it would consist of several layers of SIMD processors working to extract different application-specific abstractions from several copies of the data. This should make possible faster processing, analysis, and decision making in business and science. It should also allow one to deal with data in a variety of forms, including transaction and internet data, video, voice, and data from sensors.

The processing of data will begin as it streams in and, therefore, must use models and abstractions that can deal with incomplete information. This is to be contrasted with conventional approaches, where the data is first collected and stored and then analyzed. Clearly, the conventional approach takes time that might not be available in applications such as automated trading or surveillance.
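A minimal sketch of processing data as it streams in, rather than store-then-analyze: a running mean and variance (Welford's method, a standard incremental technique, not specified in the paper) are updated per element, so a decision can be made on partial data at any time.

```python
# Incremental statistics over a stream: state is updated per element,
# so no raw data needs to be stored before analysis.

class RunningStats:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0      # sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # Population variance of the elements seen so far.
        return self.m2 / self.n if self.n else 0.0

stats = RunningStats()
for x in [2.0, 4.0, 6.0]:
    stats.update(x)
# mean is 4.0; population variance is 8/3
```

An anomaly detector for surveillance or trading could, for instance, flag any new element that lies several running standard deviations from the running mean, without ever collecting the full stream first.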
One can visualize applications of stream computing to include life sciences (as in computational biology, computational immunology, and visualization), automated trading (as done by hedge funds), and various types of graphics-based consumer applications. Stream computing should also be of importance in security applications, data mining, and intrusion detection.

References

1. M.S. Gazzaniga, The Cognitive Neurosciences, MIT Press, Cambridge, Mass., 1995.
2. S. Kak, "The three languages of the brain: quantum, reorganizational, and associative," in Learning as Self-Organization, K. Pribram and J. King, eds., Lawrence Erlbaum, Mahwah, N.J., 1996, pp. 185-219.
3. S. Kak, "Artificial and biological intelligence," ACM Ubiquity, vol. 4, no. 42, 2005; arXiv:cs/0601052.
4. S. Kak, "Active agents, intelligence, and quantum computing," Information Sciences, vol. 128, pp. 1-17, 2000.
5. S. Kak, "Multilayered array computing," Information Sciences, vol. 45, pp. 347-365, 1988.
6. L. Deecke and H.H. Kornhuber, "An electrical sign of participation of the mesial 'supplementary' motor cortex in human voluntary finger movement," Brain Research, vol. 159, pp. 473-476, 1978.
7. B. Libet et al., "Subjective referral of the timing for a conscious sensory experience," Brain, vol. 102, pp. 193-224, 1979.
8. M.S. Gazzaniga, The Ethical Brain, Dana Press, 2005.
9. S. Kak, "Faster web search and prediction using instantaneously trained neural networks," IEEE Intelligent Systems, vol. 14, pp. 79-82, November/December 1999.
10. S. Kak, "A class of instantaneously trained neural networks," Information Sciences, vol. 148, pp. 97-102, 2002.
11. K.W. Tang and S. Kak, "Fast classification networks for signal processing," Circuits, Systems, and Signal Processing, vol. 21, pp. 207-224, 2002.
12. S. Kak, "Feedback neural networks: new characteristics and a generalization," Circuits, Systems, and Signal Processing, vol. 12, pp. 263-278, 1993.
13. S. Kak, "Self-indexing of neural memories," Physics Letters A, vol. 143, pp. 293-296, 1990.
14. S. Kak and M.C. Stinson, "A bicameral neural network where information can be indexed," Electronics Letters, vol. 25, pp. 203-205, 1989.
15. D.L. Prados and S. Kak, "Neural network capacity using the delta rule," Electronics Letters, vol. 25, pp. 197-199, 1989.
16. V.B. Mountcastle, "An organizing principle for cerebral function: the unit module and the distributed system," in The Mindful Brain, G. Edelman and V.B. Mountcastle, eds., pp. 7-50, 1978.
17. J.P. Sutton and J.A. Anderson, "Computational and neurobiological features of a network of networks," in Neurobiology of Computation, J.M. Bower, ed., Boston: Kluwer Academic, pp. 317-322, 1995.
18. S. Kak, "The golden mean and the physics of aesthetics," Foarm Magazine, vol. 5, pp. 73-81, 2006; arXiv:physics/0411195.

Department of Computer Science, Oklahoma State University, Stillwater, OK 74078, U.S.A.