Active Inference in Hebbian Learning Networks. (arXiv:2306.05053v2 [cs.NE] UPDATED)


By: Ali Safa, Tim Verbelen, Lars Keuninckx, Ilja Ocket, André Bourdoux, Francky Catthoor, Georges Gielen, Gert Cauwenberghs. Posted: June 23, 2023

This work studies how brain-inspired neural ensembles equipped with local
Hebbian plasticity can perform active inference (AIF) in order to control
dynamical agents. A generative model capturing the environment dynamics is
learned by a network composed of two distinct Hebbian ensembles: a posterior
network, which infers latent states given the observations, and a state
transition network, which predicts the next expected latent state given current
state-action pairs. Experimental studies are conducted using the Mountain Car
environment from the OpenAI Gym suite to study the effect of the various
Hebbian network parameters on task performance. The proposed Hebbian AIF
approach is shown to outperform Q-learning while requiring no replay buffer,
unlike typical reinforcement learning systems.
These results motivate further investigations of Hebbian learning for the
design of AIF networks that can learn environment dynamics without the need for
revisiting past buffered experiences.
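
The abstract only names the two Hebbian ensembles; the sketch below is a minimal illustration of that structure, not the paper's implementation. It assumes NumPy, toy dimensions matching Mountain Car (2-D observations, 3 discrete actions), and an Oja-style local rule as a stand-in for the paper's unspecified plasticity; all identifiers and constants (W_post, W_trans, ETA, LATENT_DIM) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, LATENT_DIM, ACTION_DIM = 2, 32, 3  # Mountain Car: (position, velocity); 3 actions
ETA = 0.01  # Hebbian learning rate (hypothetical value)

# Posterior ensemble: infers latent states from observations.
W_post = rng.normal(scale=0.1, size=(LATENT_DIM, OBS_DIM))
# Transition ensemble: predicts the next latent state from (state, action) pairs.
W_trans = rng.normal(scale=0.1, size=(LATENT_DIM, LATENT_DIM + ACTION_DIM))

def infer_state(obs):
    """Posterior network: map an observation to a latent state."""
    return np.tanh(W_post @ obs)

def predict_next(state, action):
    """Transition network: map a (latent state, one-hot action) pair to the next latent state."""
    a = np.zeros(ACTION_DIM)
    a[action] = 1.0
    return np.tanh(W_trans @ np.concatenate([state, a]))

def hebbian_update(W, pre, post):
    """Local Hebbian rule with Oja-style decay to keep weights bounded (assumed, not from the paper)."""
    return W + ETA * (np.outer(post, pre) - (post ** 2)[:, None] * W)

# One online learning step: only the current transition is used, no replay buffer.
obs_t = np.array([-0.5, 0.0])    # example Mountain Car observation
obs_t1 = np.array([-0.48, 0.01]) # next observation after acting
action = 2

s_t = infer_state(obs_t)
s_t1 = infer_state(obs_t1)

# The posterior ensemble associates each observation with its latent code ...
W_post = hebbian_update(W_post, obs_t, s_t)
# ... while the transition ensemble associates (state, action) with the next latent state.
a_vec = np.zeros(ACTION_DIM)
a_vec[action] = 1.0
W_trans = hebbian_update(W_trans, np.concatenate([s_t, a_vec]), s_t1)
```

Because both updates depend only on locally available pre- and post-synaptic activity from the current transition, the model can be trained online, which is what removes the need for buffered past experiences.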

