Improving Reinforcement Learning-Based Autonomous Agents with Causal Models / Briglia, G.; Lippi, M.; Mariani, S.; Zambonelli, F. - Vol. 15395 (2025), pp. 267-283. (Paper presented at the 25th International Conference on Principles and Practice of Multi-Agent Systems, PRIMA 2024, held in Kyoto, Japan, 2024) [10.1007/978-3-031-77367-9_20].
Improving Reinforcement Learning-Based Autonomous Agents with Causal Models
Briglia G.; Lippi M.; Mariani S.; Zambonelli F.
2025
Abstract
Autonomous Agents trained with Reinforcement Learning (RL) must explore the effects of their actions in different environment states to learn optimal control policies or build a model of that environment. Exploration may be impractical in complex environments, hence ways to prune the exploration space must be found. In this paper, we propose to augment an autonomous agent with a causal model of the core dynamics of its environment, learnt on a simplified version of it and then used as a “driving assistant” for larger or more complex environments. Experiments with different RL algorithms, in increasingly complex environments, and with different exploration strategies, show that learning such a model improves the agent's behaviour.
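As an illustration of the general idea only, and not of the authors' actual implementation (which is described in the paper), the following Python sketch shows one way a causal model learnt on a small environment could act as a "driving assistant" for exploration: a hypothetical causally_useful(state, action) query prunes actions with no causal effect from epsilon-greedy exploration in tabular Q-learning on a toy grid world. The grid environment, the pruning rule, and all function names here are assumptions made for illustration.

```python
import random
from collections import defaultdict

# Illustrative sketch only: the paper's causal-model interface and environments
# are not specified here. We assume a hypothetical predicate causally_useful()
# standing in for a causal model learnt on a simplified environment.

ACTIONS = ["up", "down", "left", "right"]
GRID = 5  # toy 5x5 grid world (assumption, not from the paper)


def causally_useful(state, action):
    """Hypothetical query to the learnt causal model: does this action have any
    causal effect in this state? Here, moves into a wall are pruned, mimicking
    a 'driving assistant' that narrows the exploration space."""
    x, y = state
    if action == "up":
        return y < GRID - 1
    if action == "down":
        return y > 0
    if action == "left":
        return x > 0
    return x < GRID - 1


def step(state, action):
    """Toy deterministic transition; reward 1 at the top-right corner."""
    x, y = state
    dx, dy = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}[action]
    nx = min(max(x + dx, 0), GRID - 1)
    ny = min(max(y + dy, 0), GRID - 1)
    reward = 1.0 if (nx, ny) == (GRID - 1, GRID - 1) else 0.0
    return (nx, ny), reward, reward > 0


def epsilon_greedy(Q, state, epsilon=0.2):
    """Epsilon-greedy exploration restricted to causally useful actions."""
    candidates = [a for a in ACTIONS if causally_useful(state, a)] or ACTIONS
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda a: Q[(state, a)])


def train(episodes=200, alpha=0.5, gamma=0.95):
    """Tabular Q-learning whose exploration is pruned by the causal model."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state = (0, 0)
        for _ in range(200):  # cap episode length
            action = epsilon_greedy(Q, state)
            next_state, reward, done = step(state, action)
            best_next = max(Q[(next_state, a)] for a in ACTIONS)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
            if done:
                break
    return Q


if __name__ == "__main__":
    Q = train()
    print("Greedy action at (0, 0):", max(ACTIONS, key=lambda a: Q[((0, 0), a)]))
```

In the paper the causal model is learnt from data on a simplified version of the environment rather than hand-coded as above; the sketch only shows where such a model can plug into the exploration loop of an RL agent.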