
An Imitation from Observation Approach to Transfer Learning with Dynamics Mismatch

Siddharth Desai*

Ishan Durugkar*

Haresh Karnan*

Garrett Warnell*

Josiah Hanna*

Peter Stone

* External authors

NeurIPS, 2020

Abstract

We examine the problem of transferring a policy learned in a source environment to a target environment with different dynamics, particularly in the case where it is critical to reduce the amount of interaction with the target environment during learning. This problem is especially important in sim-to-real transfer because simulators inevitably model real-world dynamics imperfectly. In this paper, we show that one existing solution to this transfer problem, grounded action transformation, is closely related to the problem of imitation from observation (IfO): learning behaviors that mimic the observations of behavior demonstrations. After establishing this relationship, we hypothesize that recent state-of-the-art approaches from the IfO literature can be effectively repurposed for grounded transfer learning. To validate our hypothesis, we derive a new algorithm, generative adversarial reinforced action transformation (GARAT), based on adversarial imitation from observation techniques. We run experiments in several domains with mismatched dynamics, and find that agents trained with GARAT achieve higher returns in the target environment compared to existing black-box transfer methods.
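The abstract is the full extent of the method description on this page. For readers who want the shape of the idea in code, below is a minimal, self-contained sketch of a GARAT-style adversarial training loop on toy one-dimensional dynamics. It is not the authors' implementation: the dynamics functions (real_step, sim_step), the network sizes, and the REINFORCE-style transformer update are all illustrative assumptions standing in for the paper's actual environments and RL machinery.

```python
# Minimal sketch of a GARAT-style loop (assumed toy setup, not the paper's code).
# An action transformer g(s, a) -> a~ reshapes actions before the simulator so
# that grounded-sim transitions become indistinguishable from real transitions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_step(s, a):           # hypothetical target-environment dynamics
    return s + 0.05 * a

def sim_step(s, a):            # hypothetical mismatched simulator dynamics
    return s + 0.10 * a

class ActionTransformer(nn.Module):
    """Stochastic map from (s, a) to a transformed action for the simulator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))
    def forward(self, s, a):
        mu, log_std = self.net(torch.stack([s, a], dim=-1)).unbind(-1)
        return torch.distributions.Normal(mu, log_std.clamp(-3.0, 0.0).exp())

class Discriminator(nn.Module):
    """Scores (s, a, s') transitions: target environment vs. grounded sim."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
    def forward(self, s, a, s_next):
        return self.net(torch.stack([s, a, s_next], dim=-1)).squeeze(-1)

g, d = ActionTransformer(), Discriminator()
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(d.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    s = torch.rand(256) * 2 - 1    # sampled states
    a = torch.rand(256) * 2 - 1    # actions from the agent's policy (here: random)

    dist = g(s, a)
    a_tilde = dist.sample()        # transformed action (sampled, so detached)
    s_sim = sim_step(s, a_tilde)   # grounded-simulator next state
    s_real = real_step(s, a)       # target-environment next state

    # Discriminator update: label target transitions 1, grounded-sim ones 0.
    loss_d = (bce(d(s, a, s_real), torch.ones(256))
              + bce(d(s, a, s_sim), torch.zeros(256)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Transformer update: REINFORCE with the discriminator's logit as reward,
    # so g is rewarded for making sim transitions look like target ones.
    with torch.no_grad():
        reward = d(s, a, s_sim)
        reward = reward - reward.mean()    # simple baseline for variance reduction
    loss_g = -(dist.log_prob(a_tilde) * reward).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

The discriminator plays the role of the adversarial IfO critic: it tries to tell target-environment transitions from grounded-simulator transitions, and its score serves as the reward that teaches the action transformer to close the dynamics gap.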

Related Publications

Metric Residual Networks for Sample Efficient Goal-Conditioned Reinforcement Learning

AAAI, 2023
Bo Liu*, Yihao Feng*, Qiang Liu*, Peter Stone

Goal-conditioned reinforcement learning (GCRL) has a wide range of potential real-world applications, including manipulation and navigation problems in robotics. Especially in such robotics tasks, sample efficiency is of the utmost importance for GCRL since, by default, the …

The Perils of Trial-and-Error Reward Design: Misdesign through Overfitting and Invalid Task Specifications

AAAI, 2023
Serena Booth*, W. Bradley Knox*, Julie Shah*, Scott Niekum*, Peter Stone, Alessandro Allievi*

In reinforcement learning (RL), a reward function that aligns exactly with a task's true performance metric is often sparse. For example, a true task metric might encode a reward of 1 upon success and 0 otherwise. These sparse task metrics can be hard to learn from, so in pr…

DM2: Distributed Multi-Agent Reinforcement Learning via Distribution Matching

AAAI, 2023
Caroline Wang*, Ishan Durugkar*, Elad Liebman*, Peter Stone

Current approaches to multi-agent cooperation rely heavily on centralized mechanisms or explicit communication protocols to ensure convergence. This paper studies the problem of distributed multi-agent learning without resorting to centralized components or explicit communic…

