Integrated Information as a Metric for Group Interaction: Analyzing Human and Computer Groups Using a Technique Developed to Measure Consciousness
David Engel 1,2,3 and Thomas W. Malone 1,2,*

1 Massachusetts Institute of Technology, Center for Collective Intelligence, Cambridge, MA 02142, USA
2 Massachusetts Institute of Technology, Sloan School of Management, Cambridge, MA 02142, USA
3 Now at Google, Inc., Zurich, Switzerland
* Corresponding author. Email: malone@mit.edu
Abstract. Researchers in many disciplines have previously used a variety of mathematical techniques for analyzing group interactions. Here we use a new metric for this purpose, called “integrated information” or “phi.” Phi was originally developed by neuroscientists as a measure of consciousness in brains, but it captures, in a single mathematical quantity, two properties that are important in many other kinds of groups as well: differentiated information and integration. Here we apply this metric to the activity of three types of groups that involve people and computers. First, we find that 4-person work groups with higher measured phi perform a wide range of tasks more effectively, as measured by their collective intelligence. Next, we find that groups of Wikipedia editors with higher measured phi create higher quality articles. Last, we find that the measured phi of the collection of people and computers communicating on the Internet increased over a recent six-year period. Together, these results suggest that integrated information can be a useful way of characterizing a certain kind of interactional complexity that, at least sometimes, predicts group performance. In this sense, phi can be viewed as a potential metric of effective group collaboration. Since the metric was originally developed as a measure of consciousness, the results also raise intriguing questions about the conditions under which it might be useful to regard groups as having a kind of consciousness.
Introduction

A vast number of phenomena in the world arise out of the interactions of individuals in groups, from the emotional tone of a family [1,2] to the productivity of an economy [3] to the spread of disease in a community [4], and researchers in a variety of disciplines have used many different mathematical tools to analyze these phenomena. For instance, psychologists have used Markov models to analyze the sequences of actions in small groups of people [5–7], economists have used general equilibrium theory to analyze the interactions among buyers and sellers in a market [8], and sociologists have used graph theory to analyze various kinds of social networks [4,9].
In this paper, we examine another mathematical technique that has not previously been used for analyzing group interactions. This technique, based on information theory, is intriguing because it was developed as a physical measure that would correlate with the consciousness of a brain [10–14]. We will see, however, that the metric is general enough to apply to many other kinds of systems, and we focus here on using it to analyze groups of people and computers.
What is integrated information?
The metric we use is called “integrated information” or “phi” and was proposed by Tononi and colleagues [10–14]. There have been several successively refined versions of phi (summarized in [12]), but all the versions aim to quantify the integrated information in a system. Loosely speaking, this means the amount of information generated by the system as a whole that is more than just the sum of its parts. The phi metric does this by splitting the system into subsystems and then calculating how much information can be explained by looking at the system as a whole but not by looking at the subsystems separately.
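To make this computation concrete, the following is a minimal sketch of one simplified, time-series variant of phi (our illustration, not the specific algorithm used in the studies reported here): phi is estimated as the information the whole system's past carries about its present, minus the most that any split of the parts into two halves can account for separately. The helper names and the unnormalized treatment of bipartitions are our simplifying assumptions; published formulations of phi normalize the partition term and differ in detail across versions [12].

```python
import itertools
import math
from collections import Counter

def entropy(states):
    """Shannon entropy (in bits) of a sequence of hashable states."""
    counts = Counter(states)
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def lagged_mutual_info(series, lag=1):
    """I(X_{t-lag}; X_t): how much one snapshot of the system reduces
    uncertainty about the snapshot `lag` steps later."""
    past, present = series[:-lag], series[lag:]
    return entropy(past) + entropy(present) - entropy(list(zip(past, present)))

def phi(series, lag=1):
    """Simplified integrated information of a multivariate time series.

    `series` is a list of tuples; series[t][i] is the state of part i at
    time t. Phi here is the lagged mutual information of the whole minus
    the minimum, over all bipartitions of the parts, of the summed lagged
    mutual information of the two halves taken separately.

    NOTE: published versions of phi normalize the partition term and
    differ in detail (see [12]); this unnormalized form is only a sketch.
    """
    n_parts = len(series[0])
    whole = lagged_mutual_info(series, lag)
    best_split = float("inf")
    for r in range(1, n_parts // 2 + 1):
        for subset in itertools.combinations(range(n_parts), r):
            rest = [i for i in range(n_parts) if i not in subset]
            half_a = [tuple(s[i] for i in subset) for s in series]
            half_b = [tuple(s[i] for i in rest) for s in series]
            best_split = min(best_split,
                             lagged_mutual_info(half_a, lag)
                             + lagged_mutual_info(half_b, lag))
    return whole - best_split
```

On toy data this sketch behaves as the definition suggests: parts that drive one another yield positive phi, while independent parts yield phi near zero (up to estimation noise):

```python
import random
random.seed(0)

def noisy(bit, p=0.1):
    """Flip a bit with probability p."""
    return 1 - bit if random.random() < p else bit

# Two units that each copy the other's previous state, with a little noise.
coupled = [(0, 0)]
for _ in range(5000):
    a, b = coupled[-1]
    coupled.append((noisy(b), noisy(a)))

# Two units that flip independently of everything.
independent = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(5000)]

print(phi(coupled))      # clearly positive: the whole predicts its future better than its parts do
print(phi(independent))  # approximately zero
```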
In other words, for a system to have a high value of phi, it must, first of all, generate a large amount of information. Information can be defined as the reduction of uncertainty produced when one event occurs out of many possible events that might have occurred [15]. Thus a system can produce more information when it can produce more possible events. This, in turn, is possible when it has more different parts that can be in more different combinations of states. That is, a system needs a certain kind of differentiated complexity in its structure in order to generate a large amount of information.
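As a toy numerical illustration (ours, not drawn from the paper's studies): four independent binary parts can occupy 2^4 = 16 joint states, while the same four parts locked together in lockstep can occupy only 2, so the maximum information per observation drops from 4 bits to 1.

```python
import math

# With N equally likely states, observing the current state resolves
# log2(N) bits of uncertainty.
print(math.log2(2 ** 4))  # 4.0 bits: four free binary parts, 16 joint states
print(math.log2(2))       # 1.0 bit: the same parts in lockstep, 2 joint states
```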
But phi requires more than just information; it also requires the information to be integrated at the level of the system as a whole. A system with many different parts could produce a great deal of information, but if the different parts were completely independent of each other, then the information would not be integrated at all, and the value of phi would be 0. For a system to be integrated, the events in some parts of the system need to depend on events in other parts of the system. And the stronger a