Event Tables for Efficient Experience Replay

Varun Kompella

Thomas Walsh

Samuel Barrett

Peter R. Wurman

Peter Stone

CoLLAs 2023

Abstract

Experience replay (ER) is a crucial component of many deep reinforcement learning (RL) systems. However, uniform sampling from an ER buffer can lead to slow convergence and unstable asymptotic behaviors. This paper introduces Stratified Sampling from Event Tables (SSET), which partitions an ER buffer into Event Tables, each capturing important subsequences of optimal behavior. We prove a theoretical advantage over the traditional monolithic buffer approach and combine SSET with an existing prioritized sampling strategy to further improve learning speed and stability. Empirical results in challenging MiniGrid domains, benchmark RL environments, and a high-fidelity car racing simulator demonstrate the advantages and versatility of SSET over existing ER buffer sampling approaches.
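To make the mechanism concrete, here is a minimal sketch of the core idea described in the abstract: an ER buffer partitioned into event tables plus a default table, with minibatches drawn as a fixed mix across tables. This is an illustrative simplification, not the paper's implementation — the class name, capacities, and `event_fraction` parameter are assumptions, and SSET as described in the paper also captures subsequences of experience leading up to each event, which this sketch omits.

```python
import random
from collections import deque

class StratifiedReplayBuffer:
    """Hypothetical sketch of stratified sampling from event tables.

    Transitions satisfying a user-defined event condition are stored in a
    dedicated event table; all others go to a default table. Minibatches
    draw a fixed fraction from the event tables and the rest uniformly
    from the default table.
    """

    def __init__(self, event_conditions, capacity=10000, event_capacity=1000):
        # One table per event condition, plus a default table for the rest.
        self.event_conditions = event_conditions
        self.default_table = deque(maxlen=capacity)
        self.event_tables = [deque(maxlen=event_capacity) for _ in event_conditions]

    def add(self, transition):
        # Route the transition to every event table whose condition fires.
        stored = False
        for cond, table in zip(self.event_conditions, self.event_tables):
            if cond(transition):
                table.append(transition)
                stored = True
        if not stored:
            self.default_table.append(transition)

    def sample(self, batch_size, event_fraction=0.5):
        # Stratified minibatch: a fixed share from non-empty event tables,
        # with the remainder sampled uniformly from the default table.
        n_event = int(batch_size * event_fraction)
        nonempty = [t for t in self.event_tables if t]
        batch = []
        if nonempty:
            per_table = max(1, n_event // len(nonempty))
            for t in nonempty:
                batch.extend(random.choices(list(t), k=per_table))
        remaining = batch_size - len(batch)
        if self.default_table and remaining > 0:
            batch.extend(random.choices(list(self.default_table), k=remaining))
        return batch
```

Because event transitions (e.g., rare positive rewards) occupy their own tables, they appear in every minibatch at a controlled rate rather than at their raw frequency in the stream, which is the intuition behind the speed and stability gains the abstract claims.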

Related Publications

VaryNote: A Method to Automatically Vary the Number of Notes in Symbolic Music

CMMR, 2023
Juan M. Huerta*, Bo Liu*, Peter Stone

Automatically varying the number of notes in symbolic music has various applications in assisting music creators to embellish simple tunes or to reduce complex music to its core idea. In this paper, we formulate the problem of varying the number of notes while preserving the…

LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning

NeurIPS, 2023
Bo Liu*, Yifeng Zhu*, Chongkai Gao*, Yihao Feng*, Qiang Liu*, Yuke Zhu*, Peter Stone

Lifelong learning offers a promising paradigm of building a generalist agent that learns and adapts over its lifespan. Unlike traditional lifelong learning problems in image and text domains, which primarily involve the transfer of declarative knowledge of entities and conce…

FAMO: Fast Adaptive Multitask Optimization

NeurIPS, 2023
Bo Liu*, Yihao Feng*, Peter Stone, Qiang Liu*

One of the grand enduring goals of AI is to create generalist agents that can learn multiple different tasks from diverse data via multitask learning (MTL). However, gradient descent (GD) on the average loss across all tasks may yield poor multitask performance due to severe…
