ARS Journal - Abstracts.

  • Volume 4 Number 1 March 2007

    Special Issue on Human - Robot Interaction

    Guest Editorial
    Kerstin Dautenhahn & Chrystopher L. Nehaniv

Methodology & Themes of Human-Robot Interaction: A Growing Research Field, Page 103-108
    Kerstin Dautenhahn

Abstract: This article discusses challenges of Human-Robot Interaction, a highly inter- and multidisciplinary area. Themes that are important in current research in this lively and growing field are identified, and selected work relevant to these themes is discussed.
    Keywords: Human-Robot Interaction, Methodologies, Interaction Studies

    Self-imitation and Environmental Scaffolding for Robot Teaching, Page 109-124
    Joe Saunders, Chrystopher L. Nehaniv, Kerstin Dautenhahn and Aris Alissandrakis

Abstract: Imitative learning and learning by observation are social mechanisms that allow a robot to acquire knowledge from a human or another robot. However, to obtain skills in this way the robot faces many complex issues, one of which is finding solutions to the correspondence problem. An evolutionary predecessor to observational imitation may have been self-imitation, where an agent avoids the complexities of the correspondence problem by learning and replicating actions it has experienced through the manipulation of its own body. We investigate how a robotic control and teaching system using self-imitation can be constructed with reference to psychological models of motor control and ideas from social scaffolding seen in animals. Within these scaffolded environments, sets of competencies can be built by constructing hierarchical state/action memory maps of the robot's interactions within that environment. The scaffolding process provides a mechanism that enables learning to be scaled up. The resulting system allows a human trainer to teach a robot new skills and to modify skills that the robot may already possess. Additionally, the system allows the robot to notify the trainer when it is being taught skills already in its repertoire, and to direct and focus its attention and sensor resources on relevant parts of the skill being executed. We argue that these mechanisms may be a first step towards the transformation from self-imitation to observational imitation. The system is validated on a physical Pioneer robot that is taught, using self-imitation, to track, follow and point to a patterned object.
    Keywords: Social Robotics, Imitation, Teaching, Memory-based learning, Scaffolding

    Situated Dialogue and Spatial Organization: What, Where… and Why?, Page 125-138
    Geert-Jan M. Kruijff, Hendrik Zender, Patric Jensfelt and Henrik I. Christensen

    Abstract: The paper presents an HRI architecture for human-augmented mapping, which has been implemented and tested on an autonomous mobile robotic platform. Through interaction with a human, the robot can augment its autonomously acquired metric map with qualitative information about locations and objects in the environment. The system implements various interaction strategies observed in independently performed Wizard-of-Oz studies. The paper discusses an ontology-based approach to multi-layered conceptual spatial mapping that provides a common ground for human-robot dialogue. This is achieved by combining acquired knowledge with innate conceptual commonsense knowledge in order to infer new knowledge. The architecture bridges the gap between the rich semantic representations of the meaning expressed by verbal utterances on the one hand and the robot’s internal sensor-based world representation on the other. It is thus possible to establish references to spatial areas in a situated dialogue between a human and a robot about their environment. The resulting conceptual descriptions represent qualitative knowledge about locations in the environment that can serve as a basis for achieving a notion of situational awareness.
    Keywords: Human-Robot Interaction, Conceptual Spatial Mapping, Situated Dialogue

    A Monocular Pointing Pose Estimator for Gestural Instruction of a Mobile Robot, Page 139-150
    Jan Richarz, Andrea Scheidig, Christian Martin, Steffen Müller and Horst-Michael Gross

Abstract: We present an important aspect of our human-robot communication interface, which is being developed in the context of our long-term research framework PERSES, dealing with highly interactive mobile companion robots. Building on a multi-modal people detection and tracking system, we present a hierarchical neural architecture that estimates a target point on the floor indicated by a pointing pose, thus enabling a user to navigate a mobile robot to a specific target position in their local surroundings by means of pointing. In this context, we were especially interested in determining whether such a target point estimator can be realized using only monocular images from low-cost cameras. The estimator has been implemented and experimentally investigated on our mobile robotic assistant HOROS. Although only monocular image data of relatively poor quality were utilized, the estimator achieves good estimation performance, with an accuracy better than that of a human viewer on the same data. The achieved recognition results demonstrate that it is in fact possible to realize user-independent pointing direction estimation using monocular images only, but further efforts are necessary to improve the robustness of this approach for everyday application.
    Keywords: Human-Robot Interaction, Man-Machine-Interfaces, Gesture Recognition, Robotics