
Visual Reward Machines

Elena Umili*

Francesco Argenziano*

Aymeric Barbin*

Roberto Capobianco

* External authors

NeSy 2023

2023

Abstract

Non-Markovian Reinforcement Learning (RL) tasks are extremely hard to solve because intelligent agents must consider the entire history of state-action pairs to act rationally in the environment. Most works use Linear Temporal Logic (LTL) to specify temporally extended tasks. This approach applies only to finite and discrete state environments, or to continuous problems for which a symbol grounding function, i.e., a mapping between the continuous state and a symbolic interpretation, is known. In this work, we define Visual Reward Machines (VRM), an automata-based neurosymbolic framework that can be used for both reasoning and learning in non-symbolic, non-Markovian RL domains. VRM is a fully neural yet interpretable system based on the probabilistic relaxation of Moore Machines. Results show that VRMs can exploit ungrounded symbolic temporal knowledge to outperform baseline methods based on RNNs in non-Markovian RL tasks.
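The core idea named in the abstract is a probabilistic relaxation of a Moore machine: the hard automaton state is replaced by a belief over states, updated by soft, differentiable transitions driven by predicted symbol probabilities, with the output (e.g., a reward signal) depending only on the state belief. The sketch below illustrates that idea under our own assumptions; the class name, shapes, and random initialization are illustrative, not the paper's implementation.

```python
# Minimal sketch of a probabilistic (soft) Moore machine.
# Illustrative only: names and initialization are assumptions, not the VRM code.
import numpy as np

class SoftMooreMachine:
    def __init__(self, n_states, n_symbols, n_outputs, rng=None):
        rng = rng or np.random.default_rng(0)
        # transition[s, a, s'] = P(next state s' | current state s, symbol a)
        self.transition = rng.dirichlet(np.ones(n_states), size=(n_states, n_symbols))
        # output[s, o] = P(output o | state s); Moore: output depends on the state only
        self.output = rng.dirichlet(np.ones(n_outputs), size=n_states)
        # start with all probability mass on state 0
        self.belief = np.eye(n_states)[0]

    def step(self, symbol_probs):
        """Advance the belief over automaton states given a distribution over
        symbols (e.g., produced by a neural symbol-grounding network from an
        image), and return the expected output distribution."""
        # Expected transition matrix under the predicted symbol distribution
        t = np.einsum("a,saq->sq", symbol_probs, self.transition)
        self.belief = self.belief @ t
        # Expected output, e.g., a non-Markovian reward signal
        return self.belief @ self.output
```

In a VRM-style setup, `symbol_probs` would come from a neural network grounding raw observations (such as images) into symbols, so the whole pipeline stays differentiable while the automaton structure remains inspectable.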

Related Publications

Real-time Trajectory Generation via Dynamic Movement Primitives for Autonomous Racing

ACC, 2024
Catherine Weaver*, Roberto Capobianco, Peter R. Wurman, Peter Stone, Masayoshi Tomizuka*

We employ sequences of high-order motion primitives for efficient online trajectory planning, enabling competitive racecar control even when the car deviates from an offline demonstration. Dynamic Movement Primitives (DMPs) utilize a target-driven non-linear differential equ…

Towards a fuller understanding of neurons with Clustered Compositional Explanations

NeurIPS, 2023
Biagio La Rosa*, Leilani H. Gilpin*, Roberto Capobianco

Compositional Explanations is a method for identifying logical formulas of concepts that approximate the neurons' behavior. However, these explanations are linked to the small spectrum of neuron activations used to check the alignment (i.e., the highest ones), thus lacking c…

Memory Replay For Continual Learning With Spiking Neural Networks

IEEE MLSP, 2023
Michela Proietti*, Alessio Ragno*, Roberto Capobianco

Two of the most impressive features of biological neural networks are their high energy efficiency and their ability to continuously adapt to varying inputs. On the contrary, the amount of power required to train top-performing deep learning models rises as they become more …
