Plasticity of Neural Circuits and Neurobiology of Learning
A fundamental capability of a biological nervous system is its ability to learn from experience, improving the behavior of the organism and adapting it to the demands of the environment so as to gain maximal benefit and avoid harm. Strong evidence supports the hypothesis that learning is implemented in the brain via modification of different properties of its distributed neural networks. This ability of the brain to modify its own organization is termed plasticity. To achieve plasticity, the brain employs different mechanisms that change the organization of its neural networks, e.g. synaptic plasticity, which changes the strength of the connections between neurons; somatic plasticity, which tunes the neurophysiological response properties of single neurons; or neurogenesis, which adds or removes cells in specific brain areas. Different plasticity processes also possess different time scales, ranging from milliseconds to hours and days. A fundamental question is how all this richness in forms of plasticity leads to the functional implementation of learning observed at the system level of the whole organism.
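To make the notion of synaptic plasticity concrete, the following is a minimal sketch of a pair-based spike-timing-dependent plasticity (STDP) rule, one standard model of how spike timing changes connection strength. All parameter names and values here are illustrative assumptions, not a specific rule from the lectures.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP sketch: dt = t_post - t_pre in ms.
    Pre-before-post pairings (dt > 0) potentiate the synapse,
    post-before-pre pairings (dt < 0) depress it; the effect
    decays exponentially with the timing difference."""
    if dt > 0:
        w += a_plus * np.exp(-dt / tau_plus)    # potentiation (LTP)
    elif dt < 0:
        w -= a_minus * np.exp(dt / tau_minus)   # depression (LTD)
    return float(np.clip(w, w_min, w_max))      # keep weight in bounds

# pre spike 5 ms before post spike -> weight increases
w1 = stdp_update(0.5, 5.0)
# post spike 5 ms before pre spike -> weight decreases
w2 = stdp_update(0.5, -5.0)
```

The exponential time windows give the rule its millisecond time scale, while the accumulated weight changes persist, linking the fast and slow time scales mentioned above.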
In our series of lectures, followed by the project session, we will treat both the biological phenomena of plasticity and the computations and objectives of learning as parts of an optimization procedure that seeks to improve the organism's welfare. We elaborate on the link between plasticity mechanisms and the types of computations required for the learning process to be successful. Understanding this link, which is an active topic of ongoing research, will ultimately provide detailed insight into learning as an information processing routine that utilizes certain kinds of generic computations, implemented by certain kinds of plasticity mechanisms in the neural substrate of the brain.
We put special focus on learning from the rewarding or punishing consequences of self-generated behavior: reinforcement learning. In reinforcement learning, the organism receives only sparse outcome feedback about the success or failure of its own actions, which constitutes a much harder learning problem than the usual supervised learning setting often employed in machine learning. In the project, we will work on implementing different neural circuits that are able to perform reinforcement learning with spiking neurons. As a demonstration, we target a spiking neural network that learns the classic arcade game Pong just by experiencing ball hits and misses, without any further prior knowledge of how to control the game.
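One candidate mechanism for bridging sparse, delayed reward and spike-timing plasticity is reward-modulated STDP: spike pairings deposit into a slowly decaying eligibility trace, and a later scalar reward gates whether the trace becomes an actual weight change. The sketch below is a deliberately simplified, hypothetical illustration of that idea; the reward delay, time constants, and learning rate are all assumed values, not the project's actual network.

```python
import numpy as np

def stdp_trace(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Contribution of one pre/post spike pairing (dt = t_post - t_pre, ms)."""
    return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

def reward_modulated_update(w, pairings, reward, tau_e=1000.0, lr=1.0):
    """Reward-modulated STDP sketch. `pairings` is a list of (t, dt)
    events: event time t (ms) and pre/post timing difference dt.
    Each pairing deposits into an eligibility trace that decays with
    time constant tau_e; a scalar reward delivered later converts the
    remaining trace into a weight change (no reward -> no change)."""
    t_reward = max(t for t, _ in pairings) + 500.0  # assumed reward delay
    eligibility = sum(stdp_trace(dt) * np.exp(-(t_reward - t) / tau_e)
                      for t, dt in pairings)
    return w + lr * reward * eligibility

w0 = 0.5
# causal pairings followed by a positive reward -> weight grows
w_rewarded = reward_modulated_update(w0, [(0.0, 5.0), (100.0, 8.0)], reward=1.0)
# identical activity but no reward -> weight stays unchanged
w_unrewarded = reward_modulated_update(w0, [(0.0, 5.0), (100.0, 8.0)], reward=0.0)
```

The key design point this illustrates is the separation of time scales: the eligibility trace outlives the millisecond spike pairings long enough for a delayed outcome signal (e.g. a ball hit or miss) to credit the synapses that contributed to the action.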
Project prerequisites: Basic Linear Algebra, Basic Probability Theory, Basics in Differential Equations, Basic programming skills in Python. Installation of an open source (GPL) spiking neural network simulator is required for the project part.
Associated topics: Biological Neural Networks, Spiking Neural Networks, Plasticity, Unsupervised Learning, Reinforcement Learning, Neural Network Modeling, Self-Organization.
Dr. Jenia Jitsev, Computation in Neural Circuits Lab, Institute for Neuroscience and Medicine (INM-6), Research Center Jülich, Jülich, Germany