Lab Meetings

We meet regularly to host external speakers, discuss the progress of current lab projects, and rehearse presentations by lab members in London and elsewhere. Past and upcoming external talks include:

April 12, 2021:

Compositional problem solving: Investigating a task general strategy to problem solving in brain data and network models

Jascha Achterberg (Cambridge)

It is widely argued that the power of human cognition rests heavily on compositional problem solving: the ability to break a complex problem down into its simple subcomponents, solve these in separate attentional episodes, and then reintegrate the results into an overall solution to the complex problem. Results from patient studies and neuroimaging point to the conclusion that handling task compositionality is a key component of human general intelligence and is located in the brain's Multiple Demand (MD) Network. As recent modelling work has made progress towards formalising compositionality and creating model-based agents able to recognise a task's compositional structure, there is now an exciting opportunity to investigate and model the neuronal basis of compositional problem solving.

This talk will review the basic theory behind compositional problem solving and its link to the MD Network before discussing new modelling approaches and experimental ideas to further our understanding of the brain's ability to handle task compositionality.

March 29, 2021:

Predictive coding as a consequence of energy efficiency in recurrent neural networks

Nasir Ahmad (Donders)

Predictive coding is a promising framework for understanding sensory processing. It proposes the inhibition of predictable sensory inputs and preferential processing of surprising inputs. Hierarchical predictive coding architectures, composed of separate prediction and error units, have therefore been constructed to explore this principle. Here we show that, rather than architecturally hard-wiring prediction and error nodes into a recurrent neural network, these nodes emerge when the network is trained to minimise energy. We also demonstrate multiple timescales of prediction and show, through lesioning, that this network integrates evidence over time.

March 22, 2021:

Dysconnection and the immunological basis of schizophrenia

Anjali Bhat (UCL)

Schizophrenia has been cast, from different neuroscientific perspectives, as a sensory processing disorder, a highly heritable genetic disorder, and a neurodevelopmental disorder. The dysconnection hypothesis attempts to draw together several of these disparate strands of research to create a coherent picture of how schizophrenia arises, implicating neuromodulatory processes governing synaptic gain control as the aetiological core of psychotic symptoms – i.e., a functional (or perhaps Bayesian) synaptopathy. It calls for studies that empirically ‘close the explanatory gap between pathophysiology at the molecular (synaptic) level and the psychopathology experienced by patients’ (Friston et al., 2016). One strand that has not yet been woven into this tapestry is immunology, which has been overwhelmingly linked with schizophrenia in recent years – for reasons that are not well understood. In this talk I will give an overview of my recent work, which has used a variety of methods with the aim of ‘closing the explanatory gap’. I begin with an exploration of the genetics of mismatch negativity, a key biomarker for schizophrenia. Next, I present experiments on prenatal immunity in neuronal networks grown from hair samples taken from patients with schizophrenia and healthy controls. Finally, I introduce inference from the perspective of the immune system – immunoceptive inference – as a new way of understanding interactions between the immune system and the brain.

February 22, 2021:
A neurally plausible computational model of WM
Shekoofeh Hedayati (The Pennsylvania State University)

Theories of working memory (WM) as a multicomponent store (Baddeley & Hitch, 1974; Baddeley, 2000) and as activated long-term memory (Ericsson & Kintsch, 1995) have considered visual knowledge to be essential to WM function. Similarly, behavioral data have shown the impact of visual knowledge on memory encoding by comparing performance for familiar vs. unfamiliar items (Zimmer & Fischer, 2020). Yet no computational model has explained the mechanism underlying the interaction between these two constructs and its implications for our understanding of WM.
We propose a neurally plausible computational model of WM called Memory for Latent Representations (MLR) that represents visual knowledge using a modified variational autoencoder and then builds memories out of the latent representations in the model. MLR encodes visual information by flexibly allocating shared neural resources and retrieves it through pixel-wise reconstructions. Consistent with human behavior, the model shows how familiar items can be encoded more efficiently than unfamiliar items. MLR also captures the behavioral capabilities of humans in WM tasks: 1) representing specific attributes of an item (e.g. shape, color) with varying degrees of precision according to task demands (Swan, Collins & Wyble, 2016); 2) representing both categorical and visual attributes of an item; 3) representing novel shape configurations on which someone has not been trained; and 4) rapidly tuning encoding parameters to provide flexibility of encoding in an uncertain task.

February 8, 2021:
Modelling image complexity with deep learning
Fintan Nagle (Imperial)

It is commonplace to describe images as "complex" or "simple", but what underlies this judgement? To investigate, we collected image complexity ratings using a custom online platform and modelled them using a deep CNN. We first asked whether observers agree on the complexity of an image. Two-alternative forced-choice (2AFC) relative complexity judgements showed a good level of agreement and allowed us to obtain consistent complexity ratings. A CNN pretrained on an object recognition task predicted complexity ratings with r = .83. This model offers a promising way to evaluate perceived complexity. Finally, I discuss the role of low-level visual features, object perception, and textures in complexity judgements.
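As a toy illustration of how such model-to-observer agreement can be quantified, the Pearson correlation r between predicted and rated complexity can be computed directly. The ratings below are invented for illustration; the study's actual data and CNN are not reproduced here.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two rating vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical data: mean observer complexity ratings for six images
# and the corresponding CNN predictions (both on an arbitrary scale).
human_ratings = [2.1, 4.5, 3.3, 5.0, 1.2, 3.9]
cnn_predictions = [2.4, 4.1, 3.6, 4.7, 1.5, 3.5]

print(round(pearson_r(human_ratings, cnn_predictions), 2))
```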

November 16, 2020:
A Comparison between Different Social Interactions in Multi-User Brain-Computer Interface (BCI) Gaming: Competitive vs. Collaborative
Finda Putri (Glasgow)

The growing interest in BCI gaming has opened up the possibility of multi-user BCI games, which involve two or more players whose brain signals are integrated into a single BCI application. Multi-user BCI is a complex process, and challenges surrounding both its technical and behavioural aspects should be carefully considered. Several studies have tested multi-user BCI games based on different types of control paradigms. However, information about the neurological changes caused by BCI gaming interaction is still relatively limited. The aim of this thesis is to investigate the electrophysiological changes that occur in the brain due to interactive multi-user BCI gaming, using a BCI paradigm based on non-verbalised operant conditioning of the alpha band.

Forty able-bodied healthy participants took part in multi-user BCI gaming experiments divided into two main conditions, i.e., competitive and collaborative gaming. The BCI game was presented as two bars on either side of the screen with a seesaw placed in between. The bars are controlled by changes in the relative alpha power (RA) of each player, recorded from Pz. When one bar is higher than the other, the seesaw tilts down towards the side with the higher bar. A pair of players were asked to control their assigned bar to achieve a different goal in each gaming interaction. In competitive gaming, they were asked to upregulate their RA (which raises their bar) so that their bar exceeded their opponent's by ≥ 10% for ≥ 1 s to score. In collaborative gaming, they were asked to balance the seesaw (i.e., balance their bars) by keeping the bars within ±5% of each other for ≥ 0.5 s to score. Offline analyses performed on the EEG data included power spectral density, source localisation, and brain connectivity analyses.
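The scoring rules described above can be sketched in code. This is only an illustrative reconstruction, not the authors' implementation: the handling of the sampling rate and the reading of "±5%" as relative to the larger bar are assumptions.

```python
import numpy as np

def held_for(condition, fs, min_seconds):
    """True if a boolean condition holds continuously for >= min_seconds."""
    needed = int(round(min_seconds * fs))
    run = 0
    for ok in condition:
        run = run + 1 if ok else 0
        if run >= needed:
            return True
    return False

def competitive_score(bar_a, bar_b, fs):
    """Player A scores when their bar exceeds B's by >= 10% for >= 1 s."""
    a, b = np.asarray(bar_a, float), np.asarray(bar_b, float)
    return held_for(a >= 1.10 * b, fs, 1.0)

def collaborative_score(bar_a, bar_b, fs):
    """Both score when the bars stay within +/-5% of each other for >= 0.5 s."""
    a, b = np.asarray(bar_a, float), np.asarray(bar_b, float)
    return held_for(np.abs(a - b) <= 0.05 * np.maximum(a, b), fs, 0.5)
```

With `fs` samples per second of bar positions, `competitive_score` and `collaborative_score` simply test the two win conditions on the two bar traces.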

The results revealed that different types of interaction produce different responses in the players' brain activity, including different spatial distributions of alpha power, changing source localisation patterns, and changes in intra- and inter-brain connectivity. The level of dominance between players was also found to produce different brain activity responses in both gaming interactions. Additionally, the level of posterior alpha power during the resting state was shown to predict gaming performance. Overall, the study has provided information about changes in the brain's cortical activity during different kinds of multi-user BCI gaming interaction, which should benefit the development of multi-user BCI games and inform other methodological considerations in the design of multi-user BCI applications.

October 12, 2020:
Distinguishing vigilance decrement and low task demands from mind-wandering: A machine learning analysis of EEG
Christina Jin (Groningen)

Mind-wandering is a ubiquitous mental phenomenon defined as self-generated thought irrelevant to the ongoing task. Mind-wandering tends to occur when people are in a low-vigilance state or performing a very easy task. In the current study, we investigated whether mind-wandering is completely dependent on vigilance and current task demands, or whether it is an independent phenomenon. To this end, we trained support vector machine (SVM) classifiers on EEG data in conditions of low and high vigilance, as well as under conditions of low and high task demands, and subsequently tested those classifiers on participants' self-reported mind-wandering. Participants' momentary mental state was measured by means of intermittent thought probes in which they reported on their current mental state. The results showed that neither the vigilance classifier nor the task-demands classifier could predict mind-wandering above chance level, while a classifier trained on self-reports of mind-wandering could. This suggests that mind-wandering is a mental state distinct from low vigilance and from performing tasks with low demands, both of which could be discriminated from the EEG above chance. Furthermore, we used dipole fitting to source-localise the neural correlates of the most important features in each of the three classifiers, and indeed found a few distinct neural structures among the three phenomena. Our study demonstrates the value of machine-learning classifiers in unveiling patterns in neural data and, combined with an EEG source-analysis technique, in uncovering the associated neural structures.
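The core logic of this transfer test, training a classifier on one labelling of the data and evaluating it against another, can be sketched as follows. This is a simplified illustration on synthetic features, with a least-squares linear classifier standing in for the study's SVMs; the feature construction and labels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    """Least-squares linear classifier (a stand-in for the study's SVMs)."""
    Xb = np.c_[X, np.ones(len(X))]  # add a bias column
    w, *_ = np.linalg.lstsq(Xb, np.where(y == 1, 1.0, -1.0), rcond=None)
    return w

def predict(w, X):
    Xb = np.c_[X, np.ones(len(X))]
    return (Xb @ w > 0).astype(int)

# Synthetic "EEG features": vigilance labels separate along dimension 0,
# mind-wandering labels along an unrelated dimension (dimension 1).
n = 200
X = rng.normal(size=(n, 2))
vigilance = (X[:, 0] > 0).astype(int)
mind_wandering = (X[:, 1] > 0).astype(int)

w_vig = fit_linear(X, vigilance)
w_mw = fit_linear(X, mind_wandering)

# Cross-condition test: a vigilance classifier applied to mind-wandering
# labels performs near chance, while a classifier trained on the
# mind-wandering labels themselves does not.
acc_cross = (predict(w_vig, X) == mind_wandering).mean()
acc_direct = (predict(w_mw, X) == mind_wandering).mean()
print(f"vigilance->MW: {acc_cross:.2f}, MW->MW: {acc_direct:.2f}")
```

In this toy setup the transfer accuracy hovers around chance because the two labellings depend on independent feature dimensions, mirroring the paper's logic that a shared neural signature would have allowed above-chance transfer.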

July 13, 2020:
A computational study of the role of Renshaw cells in the mammalian locomotor circuit
Priscilla Corsi (EPFL)

In this study we considered the role of the inhibitory interneurons known as Renshaw cells (RCs) in the activity of a simulated locomotor neural network. We used an integrate-and-fire model to reproduce the experimentally observed three-phase response of RCs, consisting of a fast activation, a relaxation period and a slow activation. Simulations of RCs within a model of the muscle spindle reflex neural network highlighted the multiple roles Renshaw cells play in locomotion. We show that RCs synchronize the pool of motor neurons they act on and regulate the relative duration of antagonist muscle bursts during the gait cycle. This refined model can be used to simulate the interaction between electrodes and spinal circuits to improve the efficacy of spinal cord stimulation protocols.
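A minimal sketch of the kind of leaky integrate-and-fire simulation such a model builds on is shown below. All parameters are generic textbook values, not those of the Renshaw cell model described in the talk.

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, r_m=1e7,
                 v_rest=-0.07, v_thresh=-0.05, v_reset=-0.07):
    """Euler simulation of a leaky integrate-and-fire neuron.

    i_input : input current (A) at each time step of length dt (s).
    Returns the membrane potential trace and the spike-time indices.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_t in enumerate(i_input):
        dv = (-(v - v_rest) + r_m * i_t) / tau  # leak plus driving current
        v += dv * dt
        if v >= v_thresh:                       # threshold crossing: spike
            spikes.append(t)
            v = v_reset                         # reset after the spike
        trace.append(v)
    return np.array(trace), spikes

# A 100 ms current pulse drives repetitive firing.
current = np.zeros(2000)      # 200 ms at dt = 0.1 ms
current[500:1500] = 3e-9      # 3 nA pulse from 50 ms to 150 ms
trace, spikes = simulate_lif(current)
```

With these values the steady-state depolarisation (r_m x I = 30 mV above rest) exceeds the 20 mV threshold, so the neuron fires repeatedly for the duration of the pulse.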

May 18, 2020:
Oscillatory Dynamics in Deep Brain Stimulation for Depression.

Vineet Tiruvadi (Emory)

Deep brain stimulation (DBS) has shown promise as a therapy for psychiatric depression. The subcallosal cingulate cortex (SCC) is the best-studied DBS target for depression, but results have been inconsistent between open-label and clinical studies. Objective signatures of disease and therapy are needed to tune and improve SCC-DBS more systematically. These signatures could then be used to study (a) the neural dynamics underlying the disease and (b) the influence of DBS on those dynamics.

Recent refinements of therapy have narrowed the therapeutic target to specific white matter tracts in the SCC (SCCwm), leading to improvements in treatment response when targeted with per-patient tractography. Additionally, advances in DBS hardware and machine-learning analyses enable chronic intracranial recordings and meaningful inference with sparse, noisy data. Together, these advances enable unprecedented study of antidepressant DBS directly in patients over months of recovery.

Today, I'll be presenting my dissertation work on identifying neural oscillations in the SCC that are predictive of depression state, and on characterizing the direct effects of SCCwm-DBS on these oscillations. Using a prototype DBS device capable of simultaneous stimulation and recording (Activa PC+S; Medtronic PLC), our group collected (a) chronic SCC-LFP recordings multiple times a day over seven months, and (b) combined SCC-LFP and dense-array EEG at therapy onset under various DBS parameters, in a set of six treatment-resistant depression (TRD) patients treated with SCCwm-DBS. First, we characterized and corrected for mismatch compression in the differentially recorded SCC-LFP. We then developed a linear decoding model of depression state from SCC-LFP oscillations and identified a candidate readout that achieved significant correlation with empirical depression measures and significant classifier performance. Finally, we show that SCCwm-DBS evokes specific changes, primarily in the EEG recordings, in oscillatory patterns consistent with chronic SCC-LFP changes.

The results of this work enable reliable measurement of oscillations over chronic time periods, including compound oscillations in the SCC that are predictive of depression state and can be used to inform DBS parameter management. These oscillations are not directly modulated by SCCwm-DBS, suggesting that antidepressant DBS in the SCC requires precise targeting of patient-specific SCCwm through individualized tractography. Future work will focus on developing control-theoretic models for the systematic engineering of adaptive antidepressant DBS and on reverse-engineering the neural dynamics underlying emotion.

May 11, 2020:
Coupled place cell – grid cell system for flexible navigation.

Dmitri Laptev (UCL)

Place cells and grid cells in the hippocampal-entorhinal system are thought to integrate sensory and self-motion information into a representation of estimated spatial location, but the precise mechanism is unknown. We developed and simulated a parallel attractor system in which place cells form a recurrent attractor network driven by environmental inputs and grid cells form a continuous attractor network performing path integration driven by self-motion, with interconnections between them allowing both types of input to influence firing in both ensembles. We show that such a system is needed to explain the temporal dynamics and spatial patterns of place and grid cell firing in experiments involving changes to a familiar correspondence between environmental and self-motion inputs. Our results support the hypothesis that place and grid cells provide two different but complementary representations, based on the integration of environmental sensory inputs and of self-motion inputs, respectively. More generally, the study supports the emerging notion that grid cells provide a universal coding mechanism, independent of specific sensory inputs, that can be adjusted to a wide range of tasks in different physical (e.g. spatial, auditory) and non-physical (e.g. conceptual knowledge) domains, where it enables flexible behaviour.

April 27, 2020:
Electrical Stimulation of Alpha Oscillations Stabilizes Performance on Visual Attention Tasks.

Michael Clayton (University of Oxford)

Neural oscillations in the alpha band (7–13 Hz) have long been associated with reductions in attention. However, recent studies have suggested a more nuanced perspective in which alpha oscillations also facilitate processes of cognitive control and perceptual stability. Transcranial alternating current stimulation (tACS) over occipitoparietal cortex at 10 Hz (alpha-tACS) can selectively enhance EEG alpha power. To assess the contribution of alpha oscillations to attention, we delivered alpha-tACS across 4 experiments while 178 participants performed sustained attention tasks. Poor performance on all visual tasks had previously been associated with increased EEG alpha power. We therefore initially predicted that alpha-tACS would consistently impair visual task performance. However, alpha-tACS instead prevented the deteriorations in visual performance that otherwise occurred during sham- and 50 Hz-tACS. This finding was observed in 2 experiments using different sustained attention tasks. In a separate experiment, we also found that alpha-tACS limited improvements on a visual task on which learning was otherwise observed. Consequently, alpha-tACS appeared to exert a consistently stabilizing effect on visual attention. Such effects were not seen in an auditory control task, indicating specificity to the visual domain. We suggest that these results are most consistent with the view that alpha oscillations facilitate processes of top-down control and attentional stability.

January 22, 2020 (together with Quirks): 
Using graphical models to study effective connectivity in social and brain networks.
Jan-Philip Franken (University of Edinburgh)

The present talk will discuss two recent applications of graphical models. First, it will focus on modelling social networks through simple graphs to investigate how people deal with statistical dependencies while integrating their direct observations with the communicated beliefs of their social environment (see e.g., Whalen, Griffiths, & Buchsbaum, 2018; Madsen, Bailey, & Pilditch, 2018, for related work). Here, we will discuss results of ongoing work exploring to what extent people can be characterised as Bayesian reasoners when confronted with statistical dependencies in their social environment. Second, the talk will explore recent applications of graphical modelling to brain networks in rodents to investigate effective connectivity between regions of interest (see e.g., Zeidman et al., 2019, for a tutorial). Preliminary results from a collaborative project between City and MIT will be presented where we will explore the potential benefits of using graphical modelling and Bayesian inference for understanding changes in resting state activity in the somatosensory system of rats.

January 6, 2020: 
Zapping states and maps: Exploring neural representations in attention and working memory using combinations of TMS and fMRI

Eva Feredoes (Reading)

Current models of attention and working memory suggest many shared cognitive processes and neural mechanisms, and I will present causal evidence using TMS and concurrent TMS-fMRI that contributes to this view. Specifically, across a series of behavioural TMS studies, we have shown that visual working memory items are in a flexible state determined by the allocation of attention, and which requires the involvement of visual brain areas. I will also present results from several concurrent TMS-fMRI studies suggesting that enhancement of neural representations is a general mechanism by which attention might protect relevant information in the face of competing irrelevant information. This work contributes to evidence showing that short-term information representation and retention is neurally more complex and dynamic than previously thought.

November 6, 2019: 
Modelling the Brain: From Dynamical Complexity to Neural Synchronisation, Chimera-like States, Information Flow Capacity and Dynamic Range

Chris G. Antonopoulos (Essex)

In this talk, I will present a review of my recent work on the study of the brain, aiming to reveal relations between neural synchronisation patterns and information flow capacity, namely the largest amount of information per time unit that can be transmitted between the different parts of the brain networks considered. I will start with the working hypothesis that brains might evolve based on the principle of the maximisation of their internal information flow capacity. In this regard, we have found that the synchronous behaviour and information flow capacity of the evolved networks reproduce well the same behaviours observed in the brain dynamical networks of the Caenorhabditis elegans (C. elegans) soil worm and of humans. Then, I will talk about the verification of our hypothesis by showing that Hindmarsh-Rose (HR) neural networks evolved with coupling strengths that maximise the information flow capacity are those with the closest graph distance to the brain networks of C. elegans and humans. I will then present results from a recently published paper on a spectacular neural synchronisation phenomenon observed in modular neural networks such as the C. elegans brain network, called chimera-like states. I will show that, under some assumptions, neurons of different communities of the brain network of the C. elegans soil worm equipped with HR dynamics are able to synchronise among themselves whereas others, belonging to other communities, remain essentially desynchronised, a situation that alternates dynamically in time. Finally, I will discuss results on the dynamic range in the C. elegans brain network that corroborate the above findings from our earlier studies.

October 21, 2019: 

A computational study of Major Depressive Disorder biomarkers

Anna Anissimova (City, University of London)

Electroencephalogram (EEG) recordings have been used in multiple studies of psychiatric disorders. In this study, a set of biomarkers extracted from patients' EEG data is shown to be able to distinguish between patients and healthy controls. We used dynamic causal modelling (DCM) to obtain a set of interpretable features for a supervised learning algorithm. We analysed the EEG signal recorded from 15 depressed and 35 healthy participants during a multi-source interference task. The best interpretable results (using a linear SVM) were achieved after feature selection and dataset balancing steps. Testing the final model on unseen data demonstrated a balanced accuracy of approximately 64%, a recall of 50% and a specificity of 78%. These findings suggest that, using DCM, it is possible to build a reliable and interpretable classifier and distinguish between MDD patients and healthy controls. This could have important applications in clinical practice.
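For a two-class problem, balanced accuracy is simply the mean of recall (sensitivity) and specificity, which is consistent with the figures reported above: (50% + 78%) / 2 = 64%. A minimal check, with hypothetical confusion counts chosen only to match those rates:

```python
def classification_rates(tp, fn, tn, fp):
    """Recall, specificity and balanced accuracy from confusion counts."""
    recall = tp / (tp + fn)            # sensitivity on patients
    specificity = tn / (tn + fp)       # correct rejection of controls
    balanced_acc = (recall + specificity) / 2
    return recall, specificity, balanced_acc

# Hypothetical held-out counts consistent with the reported rates:
# 50% recall and 78% specificity give a balanced accuracy of 64%.
recall, specificity, balanced_acc = classification_rates(tp=2, fn=2, tn=39, fp=11)
```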

October 8, 2019: 

Neural couplings and the time-on-task effect

Katharina Wagner (Ghent University)

Frontal beta band power has been shown to increase both as a consequence of elevated task demands and with time on task. Beta band power may thus be a neural correlate of cognitive effort. The present study reports effective connections between the prefrontal and premotor cortex, areas known to be involved in cognitive control, and shows that these effective connections change in line with frontal beta band power. We applied dynamic causal modelling (DCM) to electrocorticographic recordings from two monkeys performing a cognitive task. Among model architectures with varying presence and direction of effective connections in each hemisphere, we found a fully connected model in the left hemisphere and, in the right hemisphere, a model containing a forward connection from the prefrontal to the premotor region and a backward connection in the opposite direction. Using the beta band power of each electrode as a predictor for connectivity strength within the parametric empirical Bayes (PEB) framework, we found that in both hemispheres the strength of the forward connections from the superficial pyramidal cells of the premotor area to the spiny stellate cells of the prefrontal area changed in line with prefrontal beta power, both when task demands increased and as the session progressed. This consistent change across task demands and time, and between hemispheres, was also seen in the self-connection of the prefrontal deep pyramidal cells and in the intrinsic connection from premotor spiny stellate to superficial cells. Thus, an increase in cognitive effort may be related to changes in these feedforward and intrinsic connections.

September 24, 2019: 
Spatiotemporal characterisation of neural signals for active multi-sensing and decision-making
Ioannis Delis (Leeds)

Perceptual decisions rely on the integration of information from the environment, which typically involves the combination of stimuli from different senses. The quality of sensory evidence depends highly on our actions, which affect how we acquire information from the external world. Importantly, the processing of this multisensory information requires the interaction of multiple neural processes over time. These interactions remain poorly understood, primarily because of the lack of a unifying methodology that allows their characterisation at both the behavioural and neural levels. Here I will present our recent work on decoding neurophysiological signals to explain decision-making behaviour.

I will first introduce the processes involved in the formation of perceptual decisions and characterise their neural correlates. Then, I will discuss the process of active sensing, in which movement is used to gather information in order to make perceptual choices. I will present a behavioural paradigm to study active exploration and perceptual selection, and then introduce a computational framework for the joint analysis of neural and behavioural signals. This approach, coupled with the cognitive modelling of decision-making behaviours, provides a window into the mechanisms underlying active decision formation. Finally, I will present recent work extending this methodology to the study of multi-sensory information processing.

April 15, 2019:

Representations of touch in the somatosensory cortices

Luigi Tamè (Kent)

Detecting and discriminating sensory stimuli are fundamental functions of the nervous system. Electrophysiological and lesion studies suggest that macaque primary somatosensory cortex (SI) is critically involved in discriminating between stimuli, but is not required simply for detecting stimuli. By contrast, transcranial magnetic stimulation (TMS) studies in humans have shown near-complete disruption of somatosensory detection when a single pulse of TMS is delivered over SI. In my presentation, in accordance with the macaque studies, I will provide empirical evidence suggesting that human SI is required for discriminating between tactile stimuli and for maintaining stimulus representations over time, or under high task demand, but may not be required for simple tactile detection. Moreover, I will provide empirical evidence showing that human SI, rather than higher-level brain areas, is critically involved in the estimation of tactile distance as well as in the bilateral integration of touch.

March 18, 2019: 
Mapping relational knowledge in the service of flexible cognition
Mona Garvert (Max Planck, UCL) 

Our environment is replete with statistical structure, and similar cause-effect relationships hold across related experiences. By extracting and storing relational knowledge efficiently, the brain can therefore predict states and reinforcements that were never directly experienced. In physical space, the hippocampal-entorhinal system organises statistical regularities between landmarks in a cognitive map, which provides a coordinate system that enables inferences about spatial relationships. In this talk, I demonstrate that a similar map-like organisation of knowledge can also be observed for discrete relationships between objects that are entirely non-spatial, suggesting that the same codes may also organise other dimensions of our experiences. When subjects need to flexibly switch between cognitive maps characterised by the same underlying structure but a different distribution of stimuli, structural knowledge is abstracted away from sensory representations in the medial prefrontal cortex over time. Such a separation of structure from stimulus representations may facilitate the generalisation of knowledge across sensory environments and thereby accelerate learning in novel situations. Together, these studies suggest a potential neural mechanism underlying the remarkable human ability to draw accurate inferences from little data.