RemembeRL: what can past experience tell us about our current action?
CoRL 2025
Seoul, South Korea
September 27th, 2025
Robot learning traditionally relies on machine learning methods under the induction paradigm, i.e., the agent induces a policy, in the form of a single condensed model, from a set of training data.
Under this formulation, the training data is discarded at test time, and the agent can no longer explicitly reason about past experience to inform its current action. In other words, the agent is not allowed to remember previous situations.
However, we observe that humans readily recall past experiences when solving decision-making problems, and we notice a close connection between this ability and recent trends toward in-context learning, meta-learning, and retrieval-augmented inference in the Reinforcement Learning (RL) and Imitation Learning (IL) literature.
In this workshop, we investigate how robot learning algorithms can be designed so that robotic agents explicitly retrieve, reason about, attend to, or otherwise leverage past experience, in order to quickly adapt to previously unseen situations or to bootstrap their learning.
Recent advances in the field allow agents to remember by framing the problem as in-context learning or meta-learning, e.g., by incorporating memory as explicit contextual input to a policy.
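To make this concrete, the sketch below shows one possible way to condition a policy on explicit memory via cross-attention over past (observation, action) pairs. It is an illustrative toy example under our own assumptions (all module names, dimensions, and the attention-based design are our choices), not a reference implementation of any specific method.

```python
import torch
import torch.nn as nn

class InContextPolicy(nn.Module):
    """Toy policy that conditions on a memory of past (observation, action)
    pairs via cross-attention, rather than relying on its weights alone."""

    def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.obs_embed = nn.Linear(obs_dim, hidden_dim)
        self.mem_embed = nn.Linear(obs_dim + act_dim, hidden_dim)
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(hidden_dim, act_dim)

    def forward(self, obs, mem_obs, mem_act):
        # obs:     (B, obs_dim)      current observation
        # mem_obs: (B, K, obs_dim)   K remembered observations
        # mem_act: (B, K, act_dim)   actions taken in those situations
        query = self.obs_embed(obs).unsqueeze(1)                        # (B, 1, H)
        memory = self.mem_embed(torch.cat([mem_obs, mem_act], dim=-1))  # (B, K, H)
        attended, _ = self.attn(query, memory, memory)                  # (B, 1, H)
        return self.head(attended.squeeze(1))                           # (B, act_dim)

# Example usage with random data
policy = InContextPolicy(obs_dim=8, act_dim=2)
obs = torch.randn(4, 8)
mem_obs, mem_act = torch.randn(4, 16, 8), torch.randn(4, 16, 2)
action = policy(obs, mem_obs, mem_act)  # (4, 2)
```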
Similarly, recent work proposes retrieval mechanisms (retrieval-augmented policies) that let agents condition their current action on relevant previous experience.
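A minimal sketch of the retrieval side is given below, again with hypothetical names and with a brute-force nearest-neighbor search standing in for the learned or approximate retrievers typically used in practice. The retrieved pairs could then serve as the memory input of a context-conditioned policy like the one sketched above.

```python
import numpy as np

class ExperienceRetriever:
    """Toy nearest-neighbor retrieval over a buffer of past transitions."""

    def __init__(self, obs_buffer: np.ndarray, act_buffer: np.ndarray):
        self.obs_buffer = obs_buffer  # (N, obs_dim) past observations
        self.act_buffer = act_buffer  # (N, act_dim) actions taken in them

    def retrieve(self, obs: np.ndarray, k: int = 5):
        # Rank stored observations by L2 distance to the current one
        # and return the k most similar (observation, action) pairs.
        dists = np.linalg.norm(self.obs_buffer - obs, axis=1)
        idx = np.argsort(dists)[:k]
        return self.obs_buffer[idx], self.act_buffer[idx]

# Example usage with random data
retriever = ExperienceRetriever(np.random.randn(1000, 8), np.random.randn(1000, 2))
mem_obs, mem_act = retriever.retrieve(np.random.randn(8), k=16)
```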
We then raise the question:
"Should robots know, or remember how to act?"
Our objective is to understand the interconnections among in-context learning, meta-learning, and transductive and retrieval-augmented inference for robot learning applications.
In this context, we aim to discuss recent contributions, challenges, opportunities, and novel directions for future research.
Open questions for the community
- Should robotic agents know or remember how to act?
- How can past experience be integrated, implicitly or explicitly, into RL and IL algorithms?
- Does reasoning about past experience improve agent performance even on tasks that do not require memory?
- Can agents generalize their memory to unseen data and tasks? (e.g., in-context learning with out-of-distribution contexts)
- What are the interconnections between in-context learning, meta-learning, and retrieval-based approaches?