
Efficient Real-Time Inference in Temporal Convolution Networks

Piyush Khandelwal

James MacGlashan

Pete Wurman

Peter Stone

ICRA-2021

Abstract

It has recently been demonstrated that Temporal Convolution Networks (TCNs) provide state-of-the-art results in many problem domains where the input data is a time-series. TCNs typically incorporate information from a long history of inputs (the receptive field) into a single output using many convolution layers. Real-time inference with a trained TCN can be challenging on devices with limited compute and memory, especially if the receptive field is large. This paper introduces the RT-TCN algorithm, which reuses the outputs of prior convolution operations to minimize the computational requirements and persistent memory footprint of a TCN during real-time inference. We also show that when a TCN is trained using time slices of the input time-series, it can be executed continually in real time using RT-TCN. In addition, we provide TCN architecture guidelines that ensure real-time inference can be performed within memory and computational constraints.
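The core idea the abstract describes, reusing prior convolution outputs so each new timestep is processed incrementally rather than recomputing the full receptive field, can be illustrated with a minimal sketch. This is not the paper's implementation; it is a hypothetical single dilated causal convolution layer that keeps a small ring buffer of recent inputs, so a `step` call costs O(kernel_size) work regardless of how long the stream runs:

```python
import numpy as np

class StreamingConv1D:
    """Sketch of streaming inference for one dilated causal conv layer.

    A ring buffer holds only the (kernel_size - 1) * dilation + 1 most
    recent input frames, so each new timestep reuses past inputs instead
    of re-reading the whole receptive field. Stacking such layers (each
    consuming the previous layer's per-step output) gives incremental
    inference for a full TCN.
    """

    def __init__(self, weights, bias, dilation=1):
        # weights: (kernel_size, in_channels, out_channels)
        self.weights = weights
        self.bias = bias
        self.dilation = dilation
        k, in_ch, _ = weights.shape
        self.buf_len = (k - 1) * dilation + 1
        self.buffer = np.zeros((self.buf_len, in_ch))

    def step(self, x):
        # Shift the buffer and append the newest frame x: (in_channels,).
        self.buffer = np.roll(self.buffer, -1, axis=0)
        self.buffer[-1] = x
        # Select the dilated taps and apply the kernel once.
        taps = self.buffer[:: self.dilation]  # (kernel_size, in_channels)
        return np.einsum("ki,kio->o", taps, self.weights) + self.bias


# Example: kernel_size=2, dilation=2, single channel, all-ones kernel.
w = np.ones((2, 1, 1))
layer = StreamingConv1D(w, bias=np.zeros(1), dilation=2)
outputs = [float(layer.step(np.array([t]))[0]) for t in [1.0, 2.0, 3.0, 4.0]]
# Each output is x[t] + x[t-2] (zero-padded history).
```

The per-layer buffers are the persistent state; their combined size, summed over layers, is what the paper's architecture guidelines would bound against the device's memory budget.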

Related Publications

VaryNote: A Method to Automatically Vary the Number of Notes in Symbolic Music

CMMR, 2023
Juan M. Huerta*, Bo Liu*, Peter Stone

Automatically varying the number of notes in symbolic music has various applications in assisting music creators to embellish simple tunes or to reduce complex music to its core idea. In this paper, we formulate the problem of varying the number of notes while preserving the…

LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning

NeurIPS, 2023
Bo Liu*, Yifeng Zhu*, Chongkai Gao*, Yihao Feng*, Qiang Liu*, Yuke Zhu*, Peter Stone

Lifelong learning offers a promising paradigm of building a generalist agent that learns and adapts over its lifespan. Unlike traditional lifelong learning problems in image and text domains, which primarily involve the transfer of declarative knowledge of entities and conce…

FAMO: Fast Adaptive Multitask Optimization

NeurIPS, 2023
Bo Liu*, Yihao Feng*, Peter Stone, Qiang Liu*

One of the grand enduring goals of AI is to create generalist agents that can learn multiple different tasks from diverse data via multitask learning (MTL). However, gradient descent (GD) on the average loss across all tasks may yield poor multitask performance due to severe…

