OpenSocInt: A Multi-modal Training Environment for Human-Aware Social Navigation
Reading time: 2 minutes
Original Info
- Title: OpenSocInt: A Multi-modal Training Environment for Human-Aware Social Navigation
- ArXiv ID: 2601.01939
- Date: 2026-01-05
- Authors: Victor Sanchez, Chris Reinke, Ahamed Mohamed, Xavier Alameda-Pineda
Abstract
In this paper, we introduce OpenSocInt, an open-source software package providing a simulator for multi-modal social interactions and a modular architecture for training social agents. We describe the software package and showcase its utility via an experimental protocol based on the task of social navigation. Our framework allows exploring the use of different perceptual features, their encoding and fusion, as well as the use of different agents. The software is publicly available under the GPL at https://gitlab.inria.fr/robotlearn/OpenSocInt/.
Full Content
The computational study of social interactions is an important field of research with many applications, such as video games, virtual reality, and social robotics. The field is inherently multi-modal because, in practice, several sensors are available to describe the scene around the agent(s); examples include cameras, LiDARs, ultrasound sensors, and microphones. OpenSocInt provides a framework for simulating social situations and various sensor percepts, with the main aim of training agents for social interaction. The prominent example is human-aware social navigation, and two examples of this environment (with and without obstacles/furniture) are provided in Figure 1.
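The modular pipeline described above (per-sensor perception, feature encoding, fusion, and an interchangeable agent) maps naturally onto a Gym-style training loop. The sketch below is illustrative only and does not use the actual OpenSocInt API; all names (`MultiModalEnv`, `encode`, `fuse`, `RandomAgent`) are hypothetical stand-ins for the components the paper describes.

```python
import numpy as np

# Hypothetical stand-ins for OpenSocInt-style components; the real API may differ.

class MultiModalEnv:
    """Toy simulator emitting one observation per sensor modality."""
    def reset(self):
        return self._observe()

    def step(self, action):
        # Dummy reward/termination just to make the loop runnable.
        return self._observe(), -float(np.linalg.norm(action)), False

    def _observe(self):
        return {
            "camera": np.random.rand(64, 64, 3),  # RGB frame
            "lidar": np.random.rand(360),         # range scan
            "audio": np.random.rand(1024),        # microphone buffer
        }

def encode(raw):
    """Per-modality feature encoder: here, a tiny summary-statistics vector."""
    flat = raw.reshape(-1)
    return np.array([flat.mean(), flat.std(), flat.min(), flat.max()])

def fuse(features):
    """Late fusion: concatenate per-modality feature vectors into one state."""
    return np.concatenate([features[k] for k in sorted(features)])

class RandomAgent:
    """Placeholder policy emitting a 2-D velocity command for navigation."""
    def act(self, state):
        return np.random.uniform(-1.0, 1.0, size=2)

env, agent = MultiModalEnv(), RandomAgent()
obs = env.reset()
for t in range(5):
    features = {name: encode(raw) for name, raw in obs.items()}
    state = fuse(features)                     # fused multi-modal state
    obs, reward, done = env.step(agent.act(state))
    print(f"step {t}: reward={reward:.3f}")
```

Under this decomposition, swapping encoders, fusion strategies, or agents touches only `encode`, `fuse`, or the agent class, which mirrors the kind of modularity the framework is said to enable for experimenting with perceptual features and their fusion.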
…(part of the full text has been omitted for length.)
Reference
This content was AI-processed from open-access ArXiv data.