In this paper, we present a tool to assess users' ability to switch between tasks. To do this, we use a variation of the Box and Blocks Test. In this version, a humanoid robot instructs a user to perform a task involving the movement of blocks of certain colors, and the robot randomly changes the color of the blocks that the user is supposed to move. Canny Edge Detection and the Hough Transform are used to assess user performance through the robot's built-in camera, allowing the robot to inform the user and keep a log of their progress. We present this method for monitoring user progress by describing how the moved blocks are detected. We also present the results of a pilot study in which users performed the task with this system. Preliminary results show that users do not perform differently when the task is changed in this scenario.
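The detection step named above, Canny edges followed by a Hough transform on frames from the robot's camera, can be illustrated with a minimal OpenCV sketch. This is not the authors' exact pipeline; the color thresholds, Hough parameters, and the helper name count_block_edges are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's exact pipeline): mask a block
# color, run Canny edge detection, then a probabilistic Hough line transform
# to recover straight segments along candidate block edges.
import cv2
import numpy as np

def count_block_edges(frame_bgr, color_lower, color_upper):
    """Return straight edge segments of a given color range in a camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(color_lower), np.array(color_upper))
    masked = cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)

    gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)          # Canny edge detection

    # Probabilistic Hough transform: straight segments along block edges
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=20, maxLineGap=5)
    return [] if lines is None else [l[0] for l in lines]

# Example (hypothetical HSV bounds for red blocks):
# frame = cv2.imread("camera_frame.png")
# red_edges = count_block_edges(frame, (0, 120, 70), (10, 255, 255))
# print(f"{len(red_edges)} red edge segments detected")
```

A counting or region rule over such segments could then be used to decide whether a block of the instructed color has been moved; the rule actually used is described in the block detection section of the paper.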
As low-cost social robots continue to be developed, their adoption for educational purposes has increased, especially for children with disabilities and users in the aged population. These humanoid robots have been especially useful in helping users with Autism Spectrum Disorder to connect and establish relationships [11]. This raises the question of how everyday users would interact with robots: how would users with no special needs respond to robots issuing instructions and evaluating them?
In this paper, we present a prototype system to assess user performance on tasks carried out under the instruction of a humanoid robot. The system detects how many blocks a user moves during a modified version of the Box and Blocks Test and monitors the speed at which the user performs the tasks. We will also present the results of a pilot study, which shows the accuracy of our block detection technique and compares the times users took to complete the tasks. We will first cover related work. Then we will describe our system, including the equipment used, the block detection technique, and our experimental setup and procedure. We will then present and discuss our results.
Lastly, we will present our conclusions and outline future directions for development and testing.
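Since the system both counts moved blocks and monitors how quickly the user moves them, a simple time-stamped log is enough to derive a movement rate. The following is a minimal sketch under that assumption; the class name ProgressLog and its fields are illustrative, not the paper's implementation.

```python
# Minimal sketch (illustrative) of a progress log: each detected block move is
# time-stamped so the rate at which the user moves blocks can be computed and
# reported back by the robot.
import time
from dataclasses import dataclass, field

@dataclass
class ProgressLog:
    start: float = field(default_factory=time.monotonic)
    moves: list = field(default_factory=list)   # (elapsed_seconds, color)

    def record_move(self, color: str) -> None:
        self.moves.append((time.monotonic() - self.start, color))

    def moves_per_minute(self) -> float:
        elapsed = time.monotonic() - self.start
        return 60.0 * len(self.moves) / elapsed if elapsed > 0 else 0.0

# Example: record two detected moves, then report the rate
# log = ProgressLog()
# log.record_move("red"); log.record_move("blue")
# print(f"rate: {log.moves_per_minute():.1f} blocks/min")
```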
Socially assistive robots are becoming increasingly popular, especially for interacting with humans and assisting them with skill-based tasks [3]. These robots aim to help humans through social interaction by instructing, observing, and giving feedback. In the following, several approaches that have used socially assistive robots are described.
Litoiu and Scassellati [4] proposed a method to instruct physical activities using a humanoid robot. The authors concentrated on identifying problems with movements and prioritizing them. A supervised learning method was used to classify the problems to work on, and the authors recommended the design of several user studies to determine the effectiveness of the algorithm.
Similarly, Yamada and Miura [14] proposed an approach for robot-to-human teaching, in which a robot advisor instructs a human learner in performing a block assembly task. The authors argue that the key to understanding an interaction is to properly determine basic probability assignments, which are set manually based on the degree of knowledge transfer for each task and instruction.
Likewise, Shen and Wu [13] evaluated the user experience of robots for elderly people. Their results show that a robot was more effective than a human instructor for delivering information and instructing physical exercise, and was preferred by the subjects.
Sauppe and Mutlu [10] present an autonomous instructional robot and compare different instructional strategies that can affect user performance and experience. Their analysis of human instructor interactions identified two key instructional strategies: grouping instructions and summarizing the result of subsequent instructions. They implemented these strategies on a humanoid robot that is able to correct its mistakes and misunderstandings, and they argue that their findings offer valuable guidance for the design of instructional robots.
Schneider et al. [12] proposed a framework for coordinating motivational interaction scenarios with robots in the instruction of physical activities. They tested their framework with three different physical-activity models using the same motivational interaction scenario, and concluded that their model can be applied to systematically test different aspects of motivation when a robot coaches physical activities.
In the same manner, Ding and Chang [2] developed a Kinect-sensor-based sport-instructor robot system for rehabilitation and physical-activity training of the aged population. They used a Kinect camera to detect and recognize people's gestures and to check the correctness of the active gesture. The authors claim that the combination of a humanoid robot and the Kinect sensor performed gesture recognition successfully, and they tested their system with three different exercise activities.
Park and Kwon [8] recently published a study on the feasibility of robot instructors teaching children with cognitive disabilities. Their results showed that children trained with a robot instructor notably improved their skills and functional knowledge, especially when the children repeated the training session.
As can be inferred from the related work, robot instruction methods show outstanding results in comparison with human instructors', due to several advantages that robots have over humans, including: 1. Robots do not get tired, 2. Robots a