Deep Neural Networks and Bayesian Brain Theories

We combine mathematics, computer science, and cognitive neuroscience to build and test new theories about how the brain works.

We build theories about the biophysics of brain sources (connectivity, synaptic transmission and neuromodulation), the information processing these sources perform (error and state learning, hierarchical Bayesian inference), and individual variability in humans (differences in brain responses across subjects). We are particularly interested in the cortical connectivity and representations of sensory stimuli and memories in animals, healthy subjects and patients. In collaboration with colleagues from MIT, we use deep neural networks and predictive coding to answer questions about attention, memory and decision-making. We are also fascinated by the possibility of using insights from brain function to build better artificial intelligence (AI) algorithms.

Selected papers

D.A. Pinotsis and E.K. Miller, Beyond dimension reduction: Stable electric fields emerge from and allow representational drift, bioRxiv (2021)

D.A. Pinotsis, M. Siegel and E.K. Miller, Sensory processing and categorization in deep and cortical networks, NeuroImage, 202, 116118 (2019)

D.A. Pinotsis, T.J. Buschman and E.K. Miller, Working memory load modulates neuronal coupling, Cerebral Cortex (2018)

D.A. Pinotsis and E.K. Miller, New approaches for studying cortical representations, AAAI Spring Symposium Series Technical Report (2017)

D.A. Pinotsis, S.L. Brincat and E.K. Miller, On memories, neural ensembles and mental flexibility, NeuroImage, 157, 297-313 (2017)

D.A. Pinotsis, N. Brunet, A. Bastos, C.A. Bosman, V. Litvak, P. Fries and K.J. Friston, Contrast gain-control and horizontal interactions in V1: a DCM study, NeuroImage, 92:143-155 (2014)