
Dynamic Sparse Training for Deep Reinforcement Learning

Ghada Sokar

Elena Mocanu

Decebal Constantin Mocanu

Mykola Pechenizkiy

Peter Stone

IJCAI 2022

Abstract

Deep reinforcement learning (DRL) agents are trained through trial-and-error interactions with the environment, so dense neural networks require long training times to achieve good performance and consume prohibitive computation and memory resources. Learning efficient DRL agents has recently received increasing attention, yet current methods focus on accelerating inference time. In this paper, we introduce for the first time a dynamic sparse training approach for deep reinforcement learning that accelerates the training process. The proposed approach trains a sparse neural network from scratch and dynamically adapts its topology to the changing data distribution during training. Experiments on continuous control tasks show that our dynamic sparse agents achieve higher performance than equivalent dense methods, reduce the parameter count and floating-point operations (FLOPs) by 50%, and learn faster, reaching the performance of dense agents with a 40-50% reduction in training steps.
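Dynamic sparse training methods of this kind typically alternate gradient updates on the currently active weights with periodic prune-and-grow steps that rewire the sparse topology while keeping the sparsity level fixed. The sketch below illustrates one such topology-update step in NumPy (magnitude-based pruning plus random regrowth, in the style of Sparse Evolutionary Training). It is a minimal illustration under assumed names (`topology_update`, `drop_fraction`), not the authors' exact procedure, which additionally adapts the topology to the agent's changing data distribution.

```python
import numpy as np

def topology_update(weights, mask, drop_fraction=0.2, rng=None):
    """One prune-and-grow step on a sparse layer's weight matrix.

    Drops the weakest fraction of active connections by magnitude and
    regrows the same number at random inactive positions, so the layer's
    sparsity level stays constant throughout training.
    """
    rng = np.random.default_rng() if rng is None else rng
    active = np.flatnonzero(mask)            # flat indices of live connections
    n_drop = int(drop_fraction * active.size)
    if n_drop == 0:
        return weights, mask

    # Prune: zero out the n_drop smallest-magnitude active weights.
    magnitudes = np.abs(weights.flat[active])
    drop_idx = active[np.argsort(magnitudes)[:n_drop]]
    mask.flat[drop_idx] = 0
    weights.flat[drop_idx] = 0.0

    # Grow: enable n_drop connections at random inactive positions
    # (for simplicity, just-pruned positions may be regrown immediately).
    inactive = np.flatnonzero(mask == 0)
    grow_idx = rng.choice(inactive, size=n_drop, replace=False)
    mask.flat[grow_idx] = 1
    weights.flat[grow_idx] = 0.0             # new connections start from zero

    return weights, mask

# Usage: a 256x128 layer kept at 50% sparsity; the topology update would
# run every few thousand training steps, with gradient-based training of
# the active weights continuing in between.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 128))
m = (rng.random((256, 128)) < 0.5).astype(float)
w = w * m                                    # enforce the initial sparsity
w, m = topology_update(w, m, drop_fraction=0.2, rng=rng)
```

Regrown connections are zero-initialized in this sketch so the optimizer decides their usefulness without perturbing the current policy; implementations in this literature vary between zero and small random initialization.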

Related Publications

Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning

AAAI, 2024
Zizhao Wang*, Caroline Wang*, Xuesu Xiao*, Yuke Zhu*, Peter Stone

Two desiderata of reinforcement learning (RL) algorithms are the ability to learn from relatively little experience and the ability to learn policies that generalize to a range of problem specifications. In factored state spaces, one approach towards achieving both goals is …

Minimum Coverage Sets for Training Robust Ad Hoc Teamwork Agents

AAAI, 2024
Arrasy Rahman*, Jiaxun Cui*, Peter Stone

Robustly cooperating with unseen agents and human partners presents significant challenges due to the diverse cooperative conventions these partners may adopt. Existing Ad Hoc Teamwork (AHT) methods address this challenge by training an agent with a population of diverse tea…

Learning Optimal Advantage from Preferences and Mistaking it for Reward

AAAI, 2024
W. Bradley Knox*, Stephane Hatgis-Kessell*, Sigurdur Orn Adalgeirsson*, Serena Booth*, Anca Dragan*, Peter Stone, Scott Niekum*

We consider algorithms for learning reward functions from human preferences over pairs of trajectory segments---as used in reinforcement learning from human feedback (RLHF)---including those used to fine tune ChatGPT and other contemporary language models. Most recent work o…
