
Temporal Alignment for History Representation in Reinforcement Learning / Ermolov, A.; Sangineto, E.; Sebe, N. - (2022), pp. 2172-2178. (Paper presented at the 26th International Conference on Pattern Recognition, ICPR 2022, held at the Palais des Congres de Montreal, Canada, in 2022) [10.1109/ICPR56361.2022.9956553].

Temporal Alignment for History Representation in Reinforcement Learning

Sangineto E.;Sebe N.
2022

Abstract

Environments in Reinforcement Learning are usually only partially observable. A possible solution to this problem is to provide the agent with information about the past. However, providing complete observations of numerous past steps can be excessive. Inspired by human memory, we propose to represent history with only the important changes in the environment and, in our approach, to obtain this representation automatically using self-supervision. Our method (TempAl) aligns temporally close frames, revealing a general, slowly varying state of the environment. This procedure is based on a contrastive loss, which pulls embeddings of nearby observations toward each other while pushing away other samples from the batch. It can be interpreted as a metric that captures the temporal relations of observations. We propose to combine the common instantaneous representation with our history representation, and we evaluate TempAl on all available Atari games from the Arcade Learning Environment. TempAl surpasses the instantaneous-only baseline in 35 of 49 environments. The source code of the method and of all experiments is available at https://github.com/htdt/tempal.
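The contrastive alignment the abstract describes can be sketched as an InfoNCE-style loss over pairs of temporally close frame embeddings. The function below is an illustrative reconstruction, not the paper's exact implementation (see the linked repository); the function name, temperature value, and toy data are assumptions.

```python
import numpy as np

def temporal_alignment_loss(z_a, z_b, temperature=0.1):
    """InfoNCE-style contrastive loss over temporally close frame pairs.

    z_a[i] and z_b[i] are embeddings of two nearby observations from the
    same trajectory (a positive pair); every other row of the batch acts
    as a negative that is pushed away.
    """
    # L2-normalize so the dot product is a cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positives lie on the diagonal: row i should match column i.
    return -np.mean(np.diag(log_probs))

# Toy batch: 4 pairs of 8-dimensional embeddings.
rng = np.random.default_rng(0)
z_t = rng.normal(size=(4, 8))
z_tk = z_t + 0.01 * rng.normal(size=(4, 8))  # nearby frames: similar embeddings
loss_aligned = temporal_alignment_loss(z_t, z_tk)
loss_random = temporal_alignment_loss(z_t, rng.normal(size=(4, 8)))
```

Minimizing this loss makes embeddings of nearby observations agree, so the representation varies slowly along a trajectory; on the toy data, the aligned pairs yield a much lower loss than random pairs.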
Year: 2022
Conference: 26th International Conference on Pattern Recognition, ICPR 2022
Venue: Palais des Congres de Montreal, Canada
Pages: 2172-2178
Authors: Ermolov, A.; Sangineto, E.; Sebe, N.
Files for this item:
No files are associated with this item.

Creative Commons License
Metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while publication files are released under the Attribution 4.0 International (CC BY 4.0) license, unless otherwise indicated.
In case of copyright infringement, contact Iris Support.

Use this identifier to cite or link to this item: https://hdl.handle.net/11380/1343946
Citations
  • Scopus: 0
  • Web of Science: 0