THE PROJECT


Background

Neurons show an impressive variety of dendritic structures, as shown below. Most of the incoming signals converge at these dendrites, where information is transmitted via synapses that contact the dendritic branches.


various dendritic structures

Recompiled from Gerstner et al. (2014)


Evidence exists that dendritic structures perform different types of computations (see figure below), for example (left) spatio-temporal filtering, e.g. a low-pass operation, (middle) logical gating for information selection (here an AND-gate operation), which relies on the coincidence of the incoming signals, and (right) routing operations that let signals pass via a “switch-like” mechanism; a toy code sketch of these three operations is given below the figure. More such operations exist and, remarkably, all of them are subject to adaptation mechanisms based on changes of the synaptic transmission strength (“synaptic plasticity”).


schematic drawing of dendritic calculation types

Recompiled from Payeur et al. (2019)
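
Purely as an illustration (not project code; all function names and parameter values are arbitrary assumptions), the three operation types sketched in the figure above can be caricatured in a few lines of Python:

    import numpy as np

    # Toy illustrations of the three dendritic operations in the figure above.

    def low_pass(signal, tau=10.0, dt=1.0):
        """Spatio-temporal filtering: a leaky integrator acts as a low-pass filter."""
        out, v = np.zeros(len(signal)), 0.0
        for t, s in enumerate(signal):
            v += dt / tau * (s - v)          # leaky integration
            out[t] = v
        return out

    def and_gate(branch_a, branch_b, threshold=1.5):
        """Logical gating: output only when both branch inputs coincide."""
        return (branch_a + branch_b) > threshold

    def route(signal, gate):
        """Routing: a gating signal lets the main signal pass ('switch-like')."""
        return signal * (gate > 0)

    spikes = (np.random.rand(200) < 0.05).astype(float)
    print(low_pass(spikes)[:5])
    print(and_gate(np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])))   # [ True False False]
    print(route(spikes, gate=np.ones(200))[:5])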


ADOPD addresses the question of how neuronal dendrites perform adaptive (learnable) computations
and how this could be translated into optical (photonic) hardware.



Work Packages

The ADOPD project divides into four work packages (see figure below).


project structure broken into work packages

WP1 and WP4 provide theoretical contributions on dendritic processing, with a focus on synaptic plasticity at dendrites. We concentrate mainly on heterosynaptic plasticity, where neighboring synapses directly influence each other (without any signal flow from the cell body).


WP1 offers novel biophysical insights into heterosynaptic dendritic plasticity based on calcium diffusion. Here we have also introduced a novel dendritic learning rule with learning-rate annealing to achieve stability at the network level. These investigations cannot be used directly for a photonic implementation. Hence, part of the work in WP1 is dedicated to translating detailed biophysical theories into more abstract models suitable for transfer to hardware. Here we focus on the “Input Correlation Only” (ICO) learning rule (Porr & Wörgötter, 2006).
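
In essence, the ICO rule changes each predictive weight in proportion to the correlation between its own input and the temporal derivative of the reflex input. The following discrete-time Python sketch illustrates this together with a simple learning-rate annealing; the band-pass pre-filtering of the inputs, the annealing schedule and all parameter values are assumptions made for illustration, not the project's actual implementation.

    import numpy as np

    # Minimal discrete-time sketch of the "Input Correlation Only" (ICO) rule
    # (Porr & Woergoetter, 2006): a predictive weight w_j changes in proportion
    # to the correlation between its (filtered) input u_j and the temporal
    # derivative of the reflex input u_0.

    def ico_step(w, u, u0_prev, u0_curr, mu):
        """One ICO update: dw_j = mu * u_j * d(u_0)/dt (derivative as a difference)."""
        return w + mu * u * (u0_curr - u0_prev)

    def run_ico(u_pred, u0, mu0=0.01, anneal=1e-4):
        """Run ICO over a sequence, with a simple (assumed) learning-rate annealing."""
        w = np.zeros(u_pred.shape[1])
        for t in range(1, len(u0)):
            mu = mu0 / (1.0 + anneal * t)    # annealed learning rate for stability
            w = ico_step(w, u_pred[t], u0[t - 1], u0[t], mu)
        return w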


schematic of a photonic circuit emulating four dendritic branches

WP2 addresses the question of how to implement the ICO rule using single-mode fiber technology. The figure above depicts a photonic circuit that allows for the adaptation of synaptic weights in four independent channels, emulating four dendritic branches.


single- vs multi-mode fibers

WP3 is performing significant groundwork for abstracting dendritic functions into parallel optical computation via the use of the different speckle patterns that arise in multi-mode or few-mode fibers (MMF, FMF; see figure above). This is an important conceptual step, as it may allow subsuming the computations of different dendritic branches, investigated in WPs 1 and 4, in a holographic manner into the same MMF or FMF. ADOPD focuses on FMFs because of the lower complexity of the resulting speckle patterns and the correspondingly smaller sensitivity to noise in the read-out process. Accordingly, a new FMF, tailored to our purpose, was fabricated.
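
To make the idea of subsuming several branch computations into one fiber more concrete, the following hypothetical sketch uses a strongly simplified linear model: each branch signal rides on its own (assumed orthonormal) mode pattern, all patterns superpose in the same fiber, and the branch signals are recovered by least squares. Real speckle read-out is intensity-based and considerably more involved; every name and number here is an illustrative assumption.

    import numpy as np

    # Hypothetical, highly simplified linear model of mode multiplexing in a
    # few-mode fiber: one mode pattern per emulated dendritic branch.

    rng = np.random.default_rng(0)

    n_pixels, n_modes = 256, 4                    # detector pixels, fiber modes
    modes, _ = np.linalg.qr(rng.normal(size=(n_pixels, n_modes)))  # orthonormal patterns

    branch_signals = rng.normal(size=n_modes)     # one value per emulated branch
    field = modes @ branch_signals                # superposition in the same fiber

    recovered, *_ = np.linalg.lstsq(modes, field, rcond=None)
    print(np.allclose(recovered, branch_signals))  # True: the branch signals remain separable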


different dendritic roles and their photonic counterparts

WP4 provides deeper insights into context-dependent dendritic processing on multiple branches using different structures and learning rules, especially the ICO rule, which allows context to be processed easily on different dendritic structures. The figure above suggests different roles for apical and basal dendrites, where the context signal is provided apically and the main signal basally (left). On the right we show how this can be implemented in an adaptive manner using the ICO rule, which also offers an avenue for a photonic implementation of context-dependent dendritic learning.
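
As a hypothetical sketch of how such context gating could be combined with the ICO rule (the multiplicative gating and all parameters are our own illustrative assumptions, not the model used in the project):

    import numpy as np

    # Caricature of context-dependent ICO learning: the apical context signal
    # gates the basal (predictive) inputs before they enter the ICO correlation.

    def context_ico_step(w, u_basal, context, u0_prev, u0_curr, mu=0.01):
        gated = u_basal * context                # apical context gates basal inputs
        return w + mu * gated * (u0_curr - u0_prev)

    def branch_output(w, u_basal, context):
        """Branch response: context-gated weighted sum of the basal inputs."""
        return np.dot(w, u_basal * context)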



Impact

The novel methods of ADOPD are already gradually creating impact on technology, with potential outreach to society. Neuromorphic computing technologies are powerful because they reduce energy consumption while still supporting very high-speed computation. This, combined with emulated neuronal plasticity in a spatially distributed structure (dendritic learning), is not only novel but may gradually lead to systems in which speed and low power consumption are paired with adaptation.



References

[1]
Gerstner, W., Kistler, W. M., Naud, R. and Paninski, L. (2014). Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition. Cambridge University Press.
[2]
Payeur, A., Béïque, J.-C. and Naud, R. (2019). Classes of dendritic information processing. Current Opinion in Neurobiology, 58:78–85.
[3]
Porr, B. and Wörgötter, F. (2006). Strongly improved stability and faster convergence of temporal sequence learning by using input correlations only. Neural Computation, 18(6):1380–1412.