
Exploration-Intensive Distractors: Two Environment Proposals and a Benchmarking

Jim Martin Catacora Ocaña*

Roberto Capobianco

Daniele Nardi*

* External authors

AIxIA 2021

2021

Abstract

Sparse-reward environments are famously challenging for deep reinforcement learning (DRL) algorithms. Yet, the prospect of solving intrinsically sparse tasks end-to-end, without any extra reward engineering, is highly appealing. This aspiration has recently driven the development of numerous DRL algorithms able to handle sparse-reward environments to some extent. Some methods have gone one step further and tackled sparse tasks involving various kinds of distractors (e.g., a broken TV, self-moving phantom objects, and many more). In this work, we put forward two motivating new sparse-reward environments containing the so-far largely overlooked class of exploration-intensive distractors. Furthermore, we conduct a benchmarking study which reveals that state-of-the-art algorithms are not yet all-around suitable for solving the proposed environments.
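
For readers unfamiliar with the setting, the sketch below illustrates what an exploration-intensive distractor can look like in a sparse-reward task. It is a hypothetical minimal example, not one of the environments proposed in the paper: a one-dimensional corridor pays a reward only at the far end, while a "toggle" action produces ever-novel but task-irrelevant observations that can absorb curiosity-driven exploration bonuses. The class name, action set, and parameters are illustrative assumptions.

import random

class SparseDistractorCorridor:
    """Hypothetical sparse-reward corridor with a distractor.

    NOT one of the paper's proposed environments; only a minimal
    sketch of the concept the abstract describes.
    """

    def __init__(self, length=20, max_steps=100):
        self.length = length
        self.max_steps = max_steps

    def reset(self):
        self.pos = 0
        self.steps = 0
        self.distractor_state = 0
        return self._obs()

    def _obs(self):
        # Observation: agent position plus the distractor's internal state.
        return (self.pos, self.distractor_state)

    def step(self, action):
        # Actions: 0 = move left, 1 = move right, 2 = toggle distractor.
        self.steps += 1
        if action == 0:
            self.pos = max(0, self.pos - 1)
        elif action == 1:
            self.pos = min(self.length - 1, self.pos + 1)
        else:
            # The distractor changes unpredictably, generating "novelty"
            # that is irrelevant to the task but attractive to
            # novelty-seeking exploration methods.
            self.distractor_state = random.randrange(1000)
        # Sparse reward: only reaching the far end of the corridor pays off.
        reached_goal = self.pos == self.length - 1
        reward = 1.0 if reached_goal else 0.0
        done = reached_goal or self.steps >= self.max_steps
        return self._obs(), reward, done, {}


if __name__ == "__main__":
    # Quick check with a random policy: returns are almost always zero,
    # which is what makes such environments hard for standard DRL.
    env = SparseDistractorCorridor()
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, r, done, _ = env.step(random.choice([0, 1, 2]))
        total += r
    print("random-policy return:", total)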

Related Publications

Outracing Champion Gran Turismo Drivers with Deep Reinforcement Learning

Nature, 2022
Pete Wurman, Samuel Barrett, Kenta Kawamoto, James MacGlashan, Kaushik Subramanian, Thomas J. Walsh, Roberto Capobianco, Alisa Devlic, Franziska Eckert, Florian Fuchs, Leilani Gilpin, Piyush Khandelwal, Varun Kompella, Hao Chih Lin, Patrick MacAlpine, Declan Oller, Takuma Seno, Craig Sherstan, Michael D. Thomure, Houmehr Aghabozorgi, Leon Barrett, Rory Douglas, Dion Whitehead Amago, Peter Dürr, Peter Stone, Michael Spranger, Hiroaki Kitano

Many potential applications of artificial intelligence involve making real-time decisions in physical systems while interacting with humans. Automobile racing represents an extreme example of these conditions; drivers must execute complex tactical manoeuvres to pass or block…

Planetary Environment Prediction Using Generative Modeling

AIAA SciTech Forum, 2022
Shrijit Singh*, Shreyansh Daftry*, Roberto Capobianco

Planetary rovers have a limited sensory horizon and operate in environments where only limited information about the surrounding terrain is available. The rough and unknown nature of the terrain in planetary environments potentially leads to scenarios where the rover gets stuck and…

Tafl-ES: Exploring Evolution Strategies for Asymmetrical Board Games

AIxIA, 2021
Roberto Gallotta*, Roberto Capobianco

NeuroEvolution Strategies (NES) are a subclass of Evolution Strategies (ES). While their application to games and board games has been studied in the past [11], the current state of the art in most games is still held by classic RL models, such as AlphaGo Zero [16]. This…

