Designing Interaction for Multi-agent Cooperative System in an Office Environment

Reading time: 5 minutes

📝 Original Info

  • Title: Designing Interaction for Multi-agent Cooperative System in an Office Environment
  • ArXiv ID: 2002.06417
  • Date: 2020-10-29
  • Authors: Author information was not provided in the original source.

📝 Abstract

Future intelligent systems will involve a wide variety of artificial agents, such as mobile robots, smart home infrastructure, or personal devices, which share data and collaborate with each other to execute certain tasks. Designing an efficient human-machine interface that supports users in expressing needs to the system, supervising the collaboration progress of the different entities, and evaluating the result will be challenging. This paper presents the design and implementation of the human-machine interface of the Intelligent Cyber-Physical System (ICPS), a multi-entity coordination system of robots and other smart devices in a working environment. ICPS gathers sensory data from its entities, receives users' commands, and then optimizes plans that exploit the capabilities of the different entities to serve people. Using multi-modal interaction methods, e.g. graphical interfaces, speech, gestures, and facial expressions, ICPS is able to receive inputs from users through different entities, keep users aware of the progress, and accomplish the task efficiently.
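The abstract outlines a gather-sensor-data → receive-command → plan → execute loop across heterogeneous entities. As a rough illustration only (the paper publishes no code, so every entity name and capability string below is a hypothetical stand-in), such capability-based task allocation could be sketched in Python as follows:

```python
# Hypothetical sketch of the coordination loop described in the abstract:
# register entity capabilities, accept a task, allocate entities to it,
# and report progress. All names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    capabilities: frozenset  # e.g. {"move", "grasp", "speak"}

@dataclass
class Task:
    goal: str
    required: frozenset      # capabilities the task needs

def allocate(task, entities):
    """Greedily pick entities until the task's required capabilities are covered."""
    chosen, covered = [], set()
    for entity in entities:
        gain = (entity.capabilities & task.required) - covered
        if gain:
            chosen.append(entity)
            covered |= gain
    if covered != set(task.required):
        raise RuntimeError(f"uncovered capabilities: {set(task.required) - covered}")
    return chosen

entities = [
    Entity("SmartLobby", frozenset({"sense", "display"})),
    Entity("Johnny", frozenset({"move", "grasp", "speak"})),
    Entity("Receptionist", frozenset({"speak", "display"})),
]
task = Task("fetch coffee for a visitor", frozenset({"move", "grasp", "speak"}))
for entity in allocate(task, entities):
    print(f"{entity.name} assigned to: {task.goal}")  # user-facing progress feedback
```

A greedy cover like this is only one possible allocation strategy; the abstract speaks of "optimizing plans", which likely implies a more elaborate planner than this sketch.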

💡 Deep Analysis

[Figure 1]

📄 Full Content

The multi-robot concept was introduced in the early 2000s to improve system robustness and capabilities [2]. After twenty years of development, current multi-robot systems have become more complex and consist of multiple artificial agents. Those agents can differ greatly in form and functionality, ranging from mobile robots to static smart home infrastructure or smartphones. One of the challenges for such an intelligent system is seamless interaction between artificial agents and humans, which requires the system to share concepts about existing objects and ongoing events in its environment [11][12].

Our work presents a human-machine interface aimed at tackling this challenge. The interface design is based on a multi-robot system called ICPS (Intelligent Cyber-Physical System), which is deployed in a typical office workspace. It consists of three kinds of entities: SmartLobby, a lobby equipped with cameras, other sensors, and touch-screen tables; Johnny, Ira, and Walker, three mobile robots that can move around the office; and Receptionist, a stationary booth at the reception of the office equipped with a camera, a microphone, and a touch screen. These entities are coordinated by ICPS to perform tasks such as fetching objects, searching for persons, or guiding guests to specific locations. Through multi-modal interaction methods, e.g. graphical interfaces, speech, gestures, and facial expressions, ICPS is able to receive requests from users, provide feedback about the progress, and execute the task efficiently.
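Since requests can enter the system through any entity and any modality, one plausible design (purely illustrative; the paper does not specify this API, and all function and field names below are assumptions) is to normalize every input into a common request representation before planning:

```python
# Illustrative sketch of normalizing multi-modal inputs (speech at the
# Receptionist, touch at the SmartLobby tables, ...) into one common
# request type. Names and the keyword matching are stand-ins only.
from dataclasses import dataclass

@dataclass
class Request:
    intent: str    # e.g. "fetch", "guide", "find_person"
    argument: str  # object, location, or person name
    source: str    # entity that captured the input

def from_speech(utterance: str, source: str) -> Request:
    # A real system would use a speech/NLU pipeline; keyword
    # matching stands in here to keep the sketch self-contained.
    words = utterance.lower().split()
    if "fetch" in words or "bring" in words:
        return Request("fetch", words[-1], source)
    if "guide" in words or "show" in words:
        return Request("guide", words[-1], source)
    return Request("find_person", words[-1], source)

def from_touch(button: str, value: str, source: str) -> Request:
    return Request(button, value, source)

# The same kind of task can enter through different entities:
r1 = from_speech("please fetch coffee", source="Receptionist")
r2 = from_touch("guide", "meeting-room", source="SmartLobby")
print(r1, r2, sep="\n")
```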

There are multiple related publications in the domain of human-robot interaction in an office environment. For example, the CMU Snackbot project [8] presented the industrial design of an autonomous robot for snack delivery in an office space, targeted at long-term operation. Other researchers [13] showed a service robot system that focused on long-term operation and on asking humans for help. In the STRANDS project [6], researchers deployed a robot to monitor an indoor office environment and generate alerts when it observed prohibited or unusual events; the robot has a head, eyes, and LED lights, which can deliver non-verbal communication cues. Leonardi et al. [9] suggested an interface that enables a non-expert user to trigger certain actions based on personalized rules, where the triggers include various IoT devices such as wearables, lights, or a smart TV.

The single-robot architecture has been proven to be stable over long periods, pursuing service tasks in interaction with people. However, there is little research on multi-robot collaboration serving people in long-term operation. Since we target operation in a real office environment, the humans present there need to be considered by the system. A recent overview of the field of human-robot teaming and the associated challenges can be found in [3]. There is also earlier work considering humans during robots' actions [1], where the state of the human (e.g., standing or sitting) is considered for appropriate motion planning. We, however, try to incorporate humans more tightly into the system's behaviour. As will be explained later in this work, humans are not seen as uncontrollable constraints; instead, their capabilities are taken into account, and the system might opt to ask a human for help.

One part of a multi-entity system is the organization and distribution of knowledge among the components. This requires the right abstraction level due to the different sensors and capabilities of the system entities. Semantic representations are an efficient method to achieve this and a way to make knowledge gathered by single robots available to other robots [16]. As presented in [14], this seems feasible at a larger scope by using a representation that includes knowledge about the environment as well as the robot's past actions. Semantic representations also provide different ways of extending knowledge under an open-world assumption. On the one hand, they allow reasoning over unknowns, for example by incorporating the concept of hypotheses [7]. On the other hand, a semantic representation can be connected to external world knowledge; this has been shown, for example, in the KnowRob project [15], where information from sensory data is associated with predefined ontological information. Furthermore, in our earlier work we showed that such a representation is well suited for interacting with humans and for generating human-understandable explanations of a reasoning process [4]. In that previous work, the focus was more on how to represent knowledge (in particular, relations between tools, actions, and objects) rather than on symbolic planning.

For planning, we build upon traditional AI methods similar to the work presented in [5], where the planning domain and problem (e.g., a search task in an unknown environment) are defined.
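To make the semantic-representation idea concrete, here is a minimal, assumed sketch of a shared store of subject-predicate-object triples with explicit hypotheses for open-world reasoning, in the spirit of [7] and [16]. It is not the authors' implementation; every identifier is illustrative:

```python
# Sketch of a shared semantic store: confirmed triples contributed by any
# entity, plus "hypothesis" triples for reasoning over unknowns under an
# open-world assumption. Illustrative only.
class SemanticStore:
    def __init__(self):
        self.facts = set()        # confirmed knowledge from any entity
        self.hypotheses = set()   # unconfirmed, still reasoned over

    def assert_fact(self, s, p, o):
        self.facts.add((s, p, o))
        self.hypotheses.discard((s, p, o))  # confirmation retires a hypothesis

    def hypothesize(self, s, p, o):
        if (s, p, o) not in self.facts:
            self.hypotheses.add((s, p, o))

    def query(self, s=None, p=None, o=None, include_hypotheses=True):
        """Match triples against a pattern; None acts as a wildcard."""
        pool = self.facts | (self.hypotheses if include_hypotheses else set())
        return [t for t in pool
                if all(q is None or q == v for q, v in zip((s, p, o), t))]

store = SemanticStore()
store.assert_fact("Johnny", "located_in", "kitchen")      # observed by Johnny
store.hypothesize("coffee_mug", "located_in", "kitchen")  # not yet verified
print(store.query(p="located_in"))
```

A store like this could then feed a symbolic planner: confirmed facts become the initial state, while hypotheses mark places worth exploring or questions the system might pose to a human.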

📸 Image Gallery

cover.png

Reference

This content is AI-processed based on open access ArXiv data.
