Description
The ability to store and recall information based on associations between objects is considered a characteristic trait of intelligent systems. In biological neural networks, learning is believed to take place at the synaptic level, through the modification of connection strengths. In this project, we study the statistical physics of memory and learning through the Hopfield Model of Associative Memory. We computationally simulate the model in Python and devise an algorithm to find the critical memory capacity of a Hopfield Network.
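As an illustration, the sketch below shows the standard ingredients of such a simulation: Hebbian (outer-product) storage of binary patterns, asynchronous sign updates, and the overlap with a stored pattern as the recall measure. The function names and parameters are ours, chosen for illustration; they are not taken from the project's codebase.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian (outer-product) learning rule; self-connections are set to zero."""
    n_patterns, n_units = patterns.shape
    W = patterns.T @ patterns / n_units
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, n_sweeps=10, rng=None):
    """Asynchronous updates in random order for a fixed number of sweeps."""
    rng = np.random.default_rng(rng)
    state = state.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

def overlap(state, pattern):
    """Normalized overlap m = (1/N) sum_i s_i * xi_i; m = 1 means perfect recall."""
    return float(state @ pattern) / len(pattern)
```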
However, unlike magnetic systems, the wiring of the brain is far from homogeneous: synaptic connectivity is neither random, as in a spin glass, nor regular, as in an Ising lattice, but instead follows evolutionarily favorable organizing principles. To study how the collective function of the brain and other neural systems depends on this structure, we use tools from graph theory and generative network models to simulate the Hopfield Model on a Watts-Strogatz (WS) small-world network, which interpolates between regular and random network structures.
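One plausible way to place a Hopfield network on a WS graph, sketched below, is to keep Hebbian weights only on the edges of the graph (a "diluted" weight matrix). This construction, and the use of networkx to generate the graph, are assumptions made for illustration rather than the project's exact implementation.

```python
import networkx as nx
import numpy as np

def ws_adjacency(n_units, k_neighbors, p_rewire, seed=None):
    """Adjacency matrix of a Watts-Strogatz graph with rewiring probability p_rewire."""
    G = nx.watts_strogatz_graph(n_units, k_neighbors, p_rewire, seed=seed)
    return nx.to_numpy_array(G)

def diluted_weights(patterns, adjacency):
    """Hebbian weights restricted to the edges of the given network (zero elsewhere)."""
    n_units = patterns.shape[1]
    W = patterns.T @ patterns / n_units
    np.fill_diagonal(W, 0.0)
    return W * adjacency
```

Varying p_rewire from 0 to 1 sweeps the topology from a regular ring lattice to an essentially random graph, with the small-world regime in between.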
We devise a set of open-source Python codes that simulate the Hopfield Network on any given network structure and numerically estimate its memory capacity as a function of various parameters. Finally, to understand how changes in network structure affect function, we vary the rewiring probability of the WS network and study the overlap with the desired state, using an ensemble average over multiple initial states corrupted with random and sequential noise to characterize the recall quality of the network.
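A minimal sketch of one such ensemble-averaging scheme is given below, reusing the helper functions from the sketches above: initial states are corrupted with random bit-flip noise, the overlap after recall is averaged over trials, and the critical load is taken as the point where the mean overlap drops below a threshold. The flip fraction, trial count, and 0.95 threshold are illustrative assumptions, not the values used in the project.

```python
import numpy as np
# Assumes hebbian_weights, diluted_weights, recall, and overlap from the sketches above.

def mean_recall_overlap(W, patterns, flip_fraction=0.1, n_trials=20, rng=None):
    """Average overlap with the target pattern over noisy (randomly flipped) initial states."""
    rng = np.random.default_rng(rng)
    n_units = patterns.shape[1]
    overlaps = []
    for _ in range(n_trials):
        target = patterns[rng.integers(len(patterns))]
        noisy = target.copy()
        flip = rng.choice(n_units, size=int(flip_fraction * n_units), replace=False)
        noisy[flip] *= -1
        overlaps.append(overlap(recall(W, noisy, rng=rng), target))
    return float(np.mean(overlaps))

def estimate_capacity(n_units, max_patterns, adjacency=None, threshold=0.95, rng=None):
    """Store more and more random patterns until the mean overlap drops below threshold."""
    rng = np.random.default_rng(rng)
    for p in range(1, max_patterns + 1):
        patterns = rng.choice([-1, 1], size=(p, n_units))
        W = hebbian_weights(patterns) if adjacency is None else diluted_weights(patterns, adjacency)
        if mean_recall_overlap(W, patterns, rng=rng) < threshold:
            return (p - 1) / n_units  # critical load: maximal patterns stored per unit
    return max_patterns / n_units
```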
We find that small-world networks achieve performance comparable to that of random networks at a fraction of the total wiring length, and are thus favorable. Our findings support the experimental evidence in the literature for small-world characteristics in biological neural networks.