Capturing a Recursive Pattern in Neural-Symbolic Reinforcement Learning / Beretta, D.; Monica, S.; Bergenti, F. - 3579:(2023), pp. 17-31. (Paper presented at the 24th Workshop "From Objects to Agents", WOA 2023, held in Rome, Italy, 6th-8th November 2023).

Capturing a Recursive Pattern in Neural-Symbolic Reinforcement Learning

Beretta D.; Monica S.; Bergenti F.
2023

Abstract

Neural-symbolic methods have attracted considerable attention in recent years because they are promising approaches to achieving a synergistic integration of deep reinforcement learning and symbolic reinforcement learning. Along this line of research, this paper presents an extension of a recent neural-symbolic method for reinforcement learning. The original method, called State-Driven Neural Logic Reinforcement Learning, generates sets of candidate logic rules from the states of the environment, and it uses a differentiable architecture to select the best subsets of the generated rules to solve the considered training tasks. The proposed extension modifies the rule generation procedure of the original method to effectively capture a recursive pattern among the states of the environment. The experimental results presented in the last part of the paper provide empirical evidence that the proposed approach is beneficial to the learning process. In particular, the extended method is able to tackle diverse tasks while ensuring good generalization capabilities, even on tasks that are problematic for the original method because they exhibit recursive patterns.
Year: 2023
Conference: 24th Workshop "From Objects to Agents", WOA 2023
Location: Rome, Italy
Dates: 6th-8th November 2023
Volume: 3579
Pages: 17-31
Authors: Beretta, D.; Monica, S.; Bergenti, F.
Files in this item:
There are no files associated with this item.

Creative Commons License
The metadata in IRIS UNIMORE are released under the Creative Commons CC0 1.0 Universal license, while the publication files are released under the Attribution 4.0 International license (CC BY 4.0), unless otherwise indicated.
In case of copyright violation, contact Supporto Iris.

Use this identifier to cite or link to this item: https://hdl.handle.net/11380/1367070
Citations
  • Scopus: 0