Transfer Learning of Sensorimotor Knowledge for Embodied Agents (2019-present)

Prior work has shown that learning haptic, tactile, and auditory models of objects can allow robots to perceive object properties that may be undetectable from visual input alone. However, learning such models is costly, as it requires extensive physical exploration of the robot's environment. Furthermore, such models are specific to the individual robot that learns them and cannot directly be used by other, morphologically different robots that use different actions and sensors. To address these limitations, our current work focuses on enabling robots to share sensorimotor knowledge with other robots so as to speed up learning. We have proposed two main approaches: (1) an encoder-decoder approach that learns to map sensory data from one robot to another, and (2) a kernel manifold learning framework that takes sensorimotor observations from multiple robots and embeds them in a shared space, to be used by all robots for perception and manipulation tasks.
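As a rough illustration of the first approach, the sketch below trains a paired encoder-decoder to translate one robot's sensory features into another's. This is a minimal conceptual sketch, not our published model: the feature dimensions are made up, and source_feats/target_feats stand in for time-aligned observations collected while both robots perform the same exploratory behavior on the same object.

    import torch
    import torch.nn as nn

    # Hypothetical feature sizes, e.g., audio/haptic descriptors per robot.
    SRC_DIM, TGT_DIM, LATENT = 128, 96, 32

    # The encoder compresses robot A's features into a latent code;
    # the decoder reconstructs robot B's corresponding features from it.
    encoder = nn.Sequential(nn.Linear(SRC_DIM, 64), nn.ReLU(), nn.Linear(64, LATENT))
    decoder = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, TGT_DIM))

    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
    loss_fn = nn.MSELoss()

    def train_step(source_feats, target_feats):
        """One gradient step on a batch of paired observations (both robots
        performed the same behavior on the same object)."""
        opt.zero_grad()
        loss = loss_fn(decoder(encoder(source_feats)), target_feats)
        loss.backward()
        opt.step()
        return loss.item()

Once trained, the decoder's output can serve as a proxy for the target robot's own sensory readings, letting that robot reuse perceptual models without re-exploring every object itself.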

Selected Publications:

Augmented Reality for Human-Robot Interaction (2018-present)

Establishing common ground between an intelligent robot and a human requires communicating the robot's intention, behavior, and knowledge to the human so as to build trust and assure safety in a shared environment. Many types of robot information (e.g., motion plans) are difficult to convey to human users using modalities such as language. To address this need, we have developed an Augmented Reality (AR) system that projects a robot's sensory and cognitive data, in context, to a human user. By leveraging AR, robot data that would typically be “hidden” can be visualized by rendering graphic images over the real world using AR-supported devices. We have evaluated our system in the context of K-12 robotics education, where students leveraged this hidden data to program their robots to solve a maze navigation task. We are also conducting ongoing studies to examine how AR visualization can enhance human-robot collaboration in a shared task environment.
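For intuition, the sketch below shows the basic mechanic behind such an overlay: projecting a robot's planned 3D waypoints into a camera image so a motion plan can be drawn over the user's view. This is generic pinhole-camera math given for illustration only, not the rendering pipeline of our AR system; all names and matrices are hypothetical.

    import numpy as np

    def project_waypoints(points_world, T_cam_world, K):
        """Map (N, 3) world-frame waypoints to (N, 2) pixel coordinates.

        T_cam_world: (4, 4) world-to-camera extrinsic transform
        K:           (3, 3) camera intrinsic matrix
        """
        pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
        pts_cam = (T_cam_world @ pts_h.T)[:3]  # world frame -> camera frame
        pix = K @ pts_cam                      # camera frame -> image plane
        return (pix[:2] / pix[2]).T            # perspective divide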

Selected Publications:

Grounded Language Learning (2016-present)

Grounded language learning bridges words like ‘red’ and ‘square’ with robot perception. The vast majority of existing work in this space limits robot perception to vision. In this project, we go beyond vision by enabling a robot to build perceptual models that use haptic, auditory, and proprioceptive data acquired through exploratory behaviors. We investigate how a robot can learn words through natural human-robot interaction (e.g., gameplay), as well as how a robot should act on an object when trying to identify it given a natural language query (e.g., "the empty red bottle"). Datasets from this research are available upon request.
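As a minimal sketch of one way to ground words in multimodal percepts (illustrative only, not our published models): train one binary classifier per word over feature vectors that concatenate haptic, auditory, and proprioceptive descriptors, then score a query by combining per-word match probabilities. The inputs below are hypothetical, and the product rule assumes word independence.

    import numpy as np
    from sklearn.svm import SVC

    def train_word_models(features, word_labels):
        """Fit one 'grounded meaning' classifier per word (e.g., 'red', 'empty').

        features:    (n_objects, n_dims) multimodal descriptors per object
        word_labels: dict mapping word -> binary labels over the objects
        """
        return {word: SVC(probability=True).fit(features, y)
                for word, y in word_labels.items()}

    def score_object(models, query_words, obj_feats):
        """Score how well one object matches a query like 'the empty red bottle'
        by multiplying the per-word match probabilities."""
        probs = [models[w].predict_proba(obj_feats.reshape(1, -1))[0, 1]
                 for w in query_words if w in models]
        return float(np.prod(probs)) if probs else 0.0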

Selected Publications:

Autonomous Service Robots (2014-present)

The aim of this project is the development of general-purpose mobile robot platforms capable of executing a wide variety of complex, temporally extended service tasks in dynamic, open-ended environments such as our homes and workplaces. Such robots must possess a variety of skills, ranging from human-aware navigation in dynamic and unstructured environments to effective interaction with humans via natural language. They need to not only perform useful tasks for us but also learn about the world in an open-ended fashion, so as to acquire new knowledge that is grounded in their own perception and action. To that end, results from a number of experiments conducted as part of the Building-Wide Intelligence Project show that learning ‘in the wild’ with everyday users has the potential to greatly extend the cognitive abilities of autonomous service robots that operate in human environments.

Selected Publications:

Curriculum Learning for RL Agents (2014-present)

In transfer learning, training on a source task is leveraged to speed up or otherwise improve learning on a difficult target task. The goal of this project is to develop methods that can automatically construct a sequence of source tasks -- i.e., a curriculum -- such that performance and/or training time on a difficult target task is improved.
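As a toy illustration of the problem setting (not the algorithms in the papers below), the sketch greedily grows a curriculum by repeatedly adding whichever remaining source task most improves evaluated return on the target task. The train and evaluate callables are user-supplied stand-ins for "train an agent on this task sequence" and "measure mean return".

    def greedy_curriculum(source_tasks, target_task, train, evaluate, max_len=4):
        """Greedy curriculum construction: add source tasks one at a time as
        long as doing so improves target-task performance."""
        curriculum, remaining = [], list(source_tasks)
        best = evaluate(train(curriculum + [target_task]), target_task)
        for _ in range(max_len):
            scored = [(evaluate(train(curriculum + [t, target_task]), target_task), t)
                      for t in remaining]
            score, task = max(scored, key=lambda st: st[0])
            if score <= best:
                break  # no remaining source task helps; stop early
            curriculum.append(task)
            remaining.remove(task)
            best = score
        return curriculum

This brute-force search is costly in practice (it retrains the agent for every candidate extension), which is exactly why the papers below study more scalable curriculum-generation methods.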

Selected Publications:

  • Narvekar, S., Sinapov, J., and Stone, P. (2017)
    Autonomous Task Sequencing for Customized Curriculum Design in Reinforcement Learning
    In proceedings of the 2017 International Joint Conference on Artificial Intelligence (IJCAI), Melbourne, Australia, August 19-25, 2017. [PDF]

  • Svetlik, M., Leonetti, M., Sinapov, J., Shah, R., Walker, N., and Stone, P. (2017)
    Automatic Curriculum Graph Generation for Reinforcement Learning Agents
    In proceedings of the 31st Conference of the Association for the Advancement of Artificial Intelligence (AAAI), San Francisco, CA, Feb. 4-9, 2017. [PDF]

Developmental Learning of Object Properties (2009-present)

This research introduced a framework for object perception and exploration in which the robot's representation of objects is grounded in its own sensorimotor experience with them. In this framework, an object is represented by sensorimotor contingencies that span a diverse set of exploratory behaviors and sensory modalities. The results from several large-scale experimental studies show that the behavior-grounded object representation enables a robot to solve a wide variety of tasks including recognition of objects based on the stimuli that they produce, object grouping and sorting, and learning category labels that describe objects and their properties. Large-scale datasets of robot object exploration from this research are available upon request.
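As a simplified illustration of how the recognition piece can work (a reduction for exposition, not the exact published method): each (behavior, modality) context has its own classifier, and their per-object label distributions are combined, weighted by each context's cross-validated reliability. All context names and numbers below are hypothetical.

    import numpy as np

    def combine_contexts(preds, reliability):
        """Reliability-weighted combination of per-context predictions.

        preds:       dict mapping context -> label-probability array
        reliability: dict mapping context -> cross-validated accuracy in [0, 1]
        """
        total = sum(reliability[c] * preds[c] for c in preds)
        return total / sum(reliability[c] for c in preds)

    # Example: shaking + audio is more informative than poking + haptics here.
    combined = combine_contexts(
        {"shake-audio": np.array([0.7, 0.2, 0.1]),
         "poke-haptics": np.array([0.4, 0.4, 0.2])},
        {"shake-audio": 0.9, "poke-haptics": 0.5})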

Selected Publications:

  • Sinapov et al. (2016). Learning to Order Objects using Haptic and Proprioceptive Exploratory Behaviors. In proceedings of the 2016 International Joint Conference on Artificial Intelligence (IJCAI). [PDF]

  • Sinapov et al. (2014). Grounding Semantic Categories in Behavioral Interactions: Experiments with 100 Objects. Robotics and Autonomous Systems, Vol. 62, No. 5, pp. 632-645, May 2014. [Link]

  • Sinapov et al. (2010). The Odd-One-Out Task: Towards an Intelligence Test for Robots. In proceedings of the 9th IEEE International Conference on Development and Learning (ICDL), Ann Arbor, Michigan, August 18-21, pp. 126-131, 2010. (Best Student Paper Award) [PDF] [Link]


Autonomous Robotic Manipulation of Hand-Held Tools (2011)

During the Summer and Fall of 2011, I worked on the DARPA-funded Autonomous Robotics Manipulation (ARM-S) project. The goal was to equip a two-armed, upper-torso humanoid robot with the ability to grasp objects and use tools (e.g., a hand-held drill, a flashlight, a stapler). My focus was on developing methods that allow a robot to discover the functional components of objects (e.g., the button on the hand-held drill) using exploratory behaviors and multi-modal perception.

Selected Publications:

  • Hoffmann, H., Chen, Z., Earl, D., Mitchell, D., Salemi, B., and Sinapov, J. (2014)
    Adaptive Robotic Tool Use Under Variable Grasps.
    Robotics and Autonomous Systems, Vol. 62, No. 6, pp. 833-846, June 2014. [Link]


  • Sinapov, J., Earl, D., Mitchell, D., and Hoffmann, H. (2013).
    Interactive Audio-Tactile Annotation of 3D Point Clouds for Robotic Manipulation
    Presented at the 2013 ICRA Workshop on Mobile Manipulation: Interactive Perception, Karlsruhe, Germany, May 6, 2013. [Abstract]

Learning the Acoustic Properties of Objects (2008-2010)

Natural sound provides important cues about objects -- we can typically identify an object's material, size, and various other properties from the sounds the object generates during contact with other objects. The goal of this project was to enable a robot to use natural sound as a source of information about the objects it interacts with.
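A minimal sketch of the general recipe, under simplifying assumptions: summarize each contact sound with spectral statistics and classify it with a nearest-neighbor model. The MFCC-based descriptor is a common shortcut used here for brevity; the published work modeled sound sequences differently (see the papers below), and the file paths and labels are hypothetical.

    import numpy as np
    import librosa
    from sklearn.neighbors import KNeighborsClassifier

    def sound_descriptor(wav_path, n_mfcc=13):
        """Summarize one contact sound as the mean and std of its MFCCs."""
        y, sr = librosa.load(wav_path, sr=None)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
        return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

    # Hypothetical data: sounds recorded while the robot performs a behavior
    # (e.g., shake, drop, push) on each of the known objects.
    # X = np.stack([sound_descriptor(p) for p in train_wavs])
    # clf = KNeighborsClassifier(n_neighbors=3).fit(X, train_object_ids)
    # predicted = clf.predict(sound_descriptor(test_wav).reshape(1, -1))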

Selected Publications:

  • Sinapov, J., and Stoytchev, A. (2009).
    From Acoustic Object Recognition to Object Categorization by a Humanoid Robot
In proceedings of the "Mobile Manipulation" workshop held at the Robotics: Science and Systems (RSS) Conference, 2009. [PDF]

  • Sinapov, J., Wiemer, M., and Stoytchev, A. (2009).
    Interactive Learning of the Acoustic Properties of Household Objects
    In proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA). [PDF] [Link]

Autonomous Learning of Tool Affordances (2006-2009)

Our ability to use external objects as tools is one of the hallmarks of human intelligence. The goal of this project was to enable robots to learn the affordances of tools in the context of reaching tasks.

Publications:

  • Sinapov, J., Stoytchev, A. (2008).
    Detecting the Functional Similarities Between Tools Using a Hierarchical Representation of Outcomes
    In proceedings of the IEEE International Conference on Development and Learning (ICDL 2008) [PDF] [Link]

  • Sinapov, J., Stoytchev, A. (2007).
    Learning and Generalization of Behavior-Grounded Tool Affordances
    In proceedings of the IEEE International Conference on Development and Learning (ICDL 2007) [PDF] [Link]

Prediction of Protein Binding and Post-Translational Modification Sites (2007-2009)

The goal of this project was to develop novel machine learning methods for detecting individual protein sites that have a target characteristic. More specifically, we focused on two problems: (1) predicting glycosylation sites and (2) predicting protein-protein interaction sites.
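For intuition, here is a minimal sketch of the standard setup for residue-level site prediction, with illustrative details (window size, one-hot encoding, balanced bootstrap ensemble) rather than the exact configurations from the papers below: each residue becomes one training example, featurized from its local sequence window, and an ensemble of SVMs trained on balanced samples copes with the heavy class imbalance.

    import numpy as np
    from sklearn.svm import SVC

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

    def window_features(seq, pos, k=10):
        """One-hot encode the 2k+1 residue window centered on position `pos`,
        leaving zeros past the ends of the sequence."""
        feats = np.zeros((2 * k + 1, len(AMINO_ACIDS)))
        for row, p in enumerate(range(pos - k, pos + k + 1)):
            if 0 <= p < len(seq) and seq[p] in AA_INDEX:
                feats[row, AA_INDEX[seq[p]]] = 1.0
        return feats.ravel()

    def train_ensemble(X, y, n_models=5, seed=0):
        """Train SVMs on balanced subsamples; prediction averages their votes."""
        rng = np.random.default_rng(seed)
        pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
        models = []
        for _ in range(n_models):
            idx = np.concatenate([pos, rng.choice(neg, len(pos), replace=False)])
            models.append(SVC(kernel="rbf").fit(X[idx], y[idx]))
        return models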

Selected Publications:

  • Caragea, C., Sinapov, J., Dobbs, D., Honavar, V. (2009).
Mixture of Experts Models to Exploit Global Sequence Similarity on Biomolecular Sequence Labeling
    BMC Bioinformatics, 2009 10:S4 [Link]

  • Caragea, C., Sinapov, J., Silvescu, A., Dobbs, D., Honavar, V. (2007).
    Glycosylation Site Prediction Using Ensembles of Support Vector Machine Classifiers
    BMC Bioinformatics, 2007 8:438 [Link]