
Towards Principled Representation Learning from Videos for Reinforcement Learning

Dipendra Misra*, Akanksha Saran, Tengyang Xie*, Alex Lamb*, John Langford*

* External authors

ICLR 2024

Abstract

We study pre-training representations for decision-making using video data, which is abundantly available for tasks such as game agents and software testing. Even though significant empirical advances have been made on this problem, a theoretical understanding remains absent. We initiate the theoretical investigation into principled approaches for representation learning and focus on learning the latent state representations of the underlying MDP using video data. We study two types of settings: one where there is iid noise in the observation, and a more challenging setting where exogenous noise is also present: non-iid noise that is temporally correlated, such as the motion of people or cars in the background. We study three commonly used approaches: autoencoding, temporal contrastive learning, and forward modeling. We prove upper bounds for temporal contrastive learning and forward modeling in the presence of only iid noise. We show that these approaches can learn the latent state and use it to perform efficient downstream RL with polynomial sample complexity. When exogenous noise is also present, we establish a lower bound result showing that learning from video data can be exponentially worse than learning from action-labeled trajectory data. This partially explains why reinforcement learning with video pre-training is hard. We evaluate these representation learning methods in two visual domains, yielding results that are consistent with our theoretical findings.
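For concreteness, below is a minimal, hypothetical sketch (not the paper's implementation) of how the three pre-training objectives named in the abstract are typically formulated on pairs of consecutive video frames. The encoder architecture, dimensions, and function names are all illustrative assumptions.

```python
# Hypothetical sketch: minimal PyTorch formulations of the three objectives
# (autoencoding, temporal contrastive learning, forward modeling) on pairs of
# consecutive video frames (x_t, x_next). Names, shapes, and architectures are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a flattened observation to a latent state representation."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def autoencoding_loss(encoder, decoder, x_t):
    # Reconstruct the observation from its latent code.
    return F.mse_loss(decoder(encoder(x_t)), x_t)

def temporal_contrastive_loss(encoder, x_t, x_next):
    # InfoNCE-style: score true consecutive pairs (the diagonal) against
    # in-batch negatives formed by mismatched pairs.
    z_t, z_next = encoder(x_t), encoder(x_next)
    logits = z_t @ z_next.T                      # (B, B) similarity matrix
    labels = torch.arange(x_t.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)

def forward_modeling_loss(encoder, predictor, x_t, x_next):
    # Predict the next latent from the current one; video data has no action
    # labels, so the prediction is conditioned on the frame alone.
    return F.mse_loss(predictor(encoder(x_t)), encoder(x_next).detach())
```

Note that video data carries no action labels, which is why the forward model above predicts the next latent from the current frame alone; the abstract's upper bounds concern the temporal contrastive and forward-modeling objectives under iid observation noise.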
