
RemembeRL: what can past experience tell us about our current action?

Workshop
CoRL 2025
COEX Convention & Exhibition Center
Seoul, South Korea
Saturday, September 27th, 2025
Start: 9:30 AM · End: 4:40 PM

Robot learning has traditionally relied on machine learning methods under the induction paradigm, i.e., the agent extrapolates a policy in the form of a condensed, unified model from a set of training data. Under this formulation, the training data is discarded at test time, and the agent can no longer explicitly reason over past experience to inform its current action. In other words, the agent is not allowed to remember previous situations.
However, we observe that humans readily recall past experiences when solving decision-making problems, and we notice a close connection with recent trends toward in-context learning, meta-learning, and retrieval-augmented inference in the Reinforcement Learning (RL) and Imitation Learning (IL) literature.

In this workshop, we investigate how robot learning algorithms can be designed so that robotic agents explicitly retrieve, reason over, attend to, or otherwise leverage past experience to quickly adapt to previously unseen situations or to bootstrap their learning.
Recent advances in the field allow agents to remember by framing the problem as in-context learning or meta-learning, e.g., by incorporating memory as explicit, contextual input to a policy. Similarly, recent works propose retrieval mechanisms (retrieval-augmented policies) that let agents reason about the current action based on previous experience.
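As a concrete illustration of the retrieval-augmented view described above, the sketch below implements a minimal non-parametric policy that acts by looking up similar past experience rather than relying only on a trained model. The class name, the Euclidean distance metric, and the action-averaging rule are all illustrative assumptions, not any specific method from the literature.

```python
import numpy as np

class RetrievalAugmentedPolicy:
    """Minimal sketch of a non-parametric, retrieval-based policy:
    the agent 'remembers' how to act by retrieving the k most similar
    past states and reusing their actions. Illustrative only."""

    def __init__(self, k=3):
        self.k = k
        self.states = []   # episodic memory: states seen so far
        self.actions = []  # actions taken in those states

    def remember(self, state, action):
        """Store one (state, action) pair in episodic memory."""
        self.states.append(np.asarray(state, dtype=float))
        self.actions.append(np.asarray(action, dtype=float))

    def act(self, state):
        """Retrieve the k nearest stored states (Euclidean distance)
        and average their actions to produce the current action."""
        query = np.asarray(state, dtype=float)
        dists = np.linalg.norm(np.stack(self.states) - query, axis=1)
        nearest = np.argsort(dists)[: self.k]
        return np.stack(self.actions)[nearest].mean(axis=0)
```

Because memory is kept at test time, new experience can be added with `remember` at any point, without retraining a model, which is exactly the contrast with the induction paradigm discussed above.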

We then raise the question: "Should robots know, or remember how to act?"

Our objective is to understand the interconnections between the fields of in-context learning, meta-learning, and transductive and retrieval-augmented inference for robot learning applications. In this context, we aim to discuss recent contributions, challenges, opportunities, and novel directions for future research.

Open questions for the community

  • Should robotic agents know or remember how to act?
  • How can past experience be integrated implicitly or explicitly in RL and IL algorithms?
  • Does reasoning over past experience improve agent performance even on tasks that do not require memory?
  • Can agents generalize their memory over unseen data and tasks? (e.g. in-context learning with OOD contexts)
  • What are the interconnections between in-context learning, meta-learning, and retrieval-based approaches?
In-context learning · Meta-learning · Transductive inference · Retrieval-augmented policies · Memory-based policies · Non-parametric policies · Episodic memory & control

Invited Speakers

TBD

Stephen James

Imperial College London
In person
Sim-to-Real Transfer · Generative models
TBD

Chelsea Finn

Stanford University
In person
In-context Learning · Meta-Learning
TBD

Abhishek Gupta

University of Washington
In person
Retrieval-augmented policies · Robot learning
TBD

Gunhee Kim

Seoul National University
In person
In-context RL · Web/GUI Agent Control
Edward Johns

Imperial College London
In person · Tentative
In-context Learning · Retrieval-augmented policies
Amanda Prorok

University of Cambridge
In person · Tentative
Memory-based policies · Recurrent Neural Networks
Hung Le

Deakin University
Remote
Memory-based policies · Recurrent Neural Networks

Program


Coming Soon

We are currently finalizing the workshop program. Please check back soon for the detailed schedule, speaker presentations, and interactive sessions.

Organizers

Gabriele Tiboni

University of Würzburg
gabrieletiboni.com · Main organizer
Federico Ceola

Italian Institute of Technology (IIT)
Yat Long Lo

Amazon
Wenyan Yang

Aalto University
Vivienne Wang

Aalto University
Carlo D'Eramo

University of Würzburg

Advisory Board

Georgia Chalvatzaki

TU Darmstadt
Tatiana Tommasi

Politecnico di Torino

Call for Papers

We invite contributions that explore memory-based, in-context, retrieval-augmented, or transductive approaches to robot learning from diverse perspectives, including but not limited to the following topics. We particularly encourage early-stage ideas and preliminary results that can spark discussion and inspire new directions in the field.

  • In-context Learning for Robotics
  • Meta-Learning for Robotics
  • Episodic Memory and Control
  • Retrieval-augmented Policies
  • Non-parametric Policies
  • Memory-based Policies
  • Adaptive Learning and Planning for Control
  • Reinforcement & Imitation Learning
  • Generalization in Robot Learning
  • Transductive Inference
  • Instance-based Learning
  • Context-aware Decision Making
  • Meta-Reasoning and Meta-Cognition
  • Explicit vs. Implicit Memory
  • Memory Architectures
  • Memory Efficiency and Scalability
  • Evaluation Metrics for Memory
  • Interpretability of Memory Systems

Submission Format

  • Papers should be submitted through OpenReview (Link).
  • Papers may be up to 8 pages.
  • They should be formatted using the CoRL 2025 LaTeX template (link).
  • Acknowledgments, References, and Appendix (optional) will not count towards the page limit.
  • Submissions must be anonymized.
  • Authors are encouraged to submit a supplementary file with further details for reviewers, uploaded through OpenReview as a single zip file.

Reviewing Process

  • The reviewing process will be double-blind, single-phase (i.e., no rebuttal).

Publication

  • Accepted papers will be non-archival.
  • There will be no formal proceedings.
  • At least one author of each accepted paper must attend the workshop in person.

Important Dates

Submission Deadline: Aug 22nd, 2025, AoE
Author Notification: Sep 12th, 2025, AoE
Camera Ready Deadline: Sep 19th, 2025, AoE
Workshop Date: Sep 27th, 2025