Lab Meetings

We meet regularly to host external speakers, discuss the progress of current lab projects, and rehearse presentations by lab members in London and elsewhere. Past and upcoming external talks include:

April 19, 2022:

The use of Magnetoelectric Nanoparticles for the study of psychiatric and neurodegenerative diseases

Dr Marta Pardo (University of Miami)

The brain is a massive network of neurons interconnected through chemical and electrical field oscillations. It is hard to overestimate the significance of being able to control the chemical and physical properties of this network at both the collective and single-cell levels. Most psychiatric and neurodegenerative diseases are characterized by certain aberrations of these oscillations. Recently, magnetoelectric nanoparticles (MENPs) have been introduced to achieve the desired control: MENPs effectively act as wirelessly controlled nanoelectrodes deep in the brain.
We propose MENPs as a potentially game-changing tool in future applications for the treatment of brain alterations. Unlike other stimulation approaches, MENPs could enable wirelessly controlled stimulation at the single-neuron level without requiring genetic modification of the neural tissue, and no toxicity has yet been reported.

February 15, 2022:

Some thoughts on visual encoding and decoding

Professor Ning Qian (Columbia University)

External stimuli evoke sensory responses in the brain (a process termed encoding), and these responses lead to the subjective perception of the stimuli (decoding). For encoding, a key question is: what do sensory responses represent?  Many theories assume that a neuron’s higher firing rate indicates a greater probability of its preferred stimulus in the input. However, this contradicts 1) the adaptation phenomena where prolonged exposure to, and thus increased probability of, a stimulus reduces the firing rates of cells tuned to the stimulus; and 2) the observation that rare, unexpected stimuli capture attention and increase neuronal firing. We propose, based on the Minimum Description Length (MDL) principle, that neurons’ firing rates are proportional to optimal code length, and their spike patterns are the actual code, for useful features in inputs. This hypothesis explains adaptation-induced changes of V1 orientation tuning curves. For decoding, most theories assume that it follows the same low-to-high-level hierarchy established for encoding. However, we show that this assumption contradicts the main results of a simple psychophysics experiment. To explain the data, we propose that the brain prioritizes decoding of higher-level features because they are more behaviorally relevant, and more invariant and categorical, and thus easier to specify and maintain in noisy working memory, and that more reliable higher-level decoding constrains less reliable lower-level decoding.
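
The MDL hypothesis above links firing rate to optimal code length; under Shannon/MDL coding, a feature with probability p has an optimal code length of -log2(p) bits, so a stimulus made more probable by adaptation warrants a shorter code (and, on this view, lower firing). A minimal sketch of that relationship, with illustrative probabilities not taken from the talk:

```python
import math

def optimal_code_length(p):
    """Shannon-optimal code length (bits) for a feature with probability p."""
    return -math.log2(p)

# Illustrative only: after adaptation a stimulus becomes more probable,
# so its optimal code length -- and, under the MDL hypothesis, the firing
# rate of neurons tuned to it -- decreases, while rare stimuli keep long
# codes (high firing).
before = optimal_code_length(0.05)  # rare stimulus
after = optimal_code_length(0.40)   # same stimulus after adaptation
print(f"code length before adaptation: {before:.2f} bits")
print(f"code length after adaptation:  {after:.2f} bits")
```

This captures why adaptation (higher probability) would reduce, and surprise (lower probability) would increase, the proposed firing-rate code length.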

January 25, 2022:

Brain dynamics of auditory pattern recognition

Dr Leonardo Bonetti (Oxford)

Pattern recognition is a major scientific topic. Strikingly, while machine learning algorithms are constantly refined, the human brain stands as an ancestral biological example of such a complex process. However, how it recognizes and transforms sequences of single objects into meaningful temporal patterns remains elusive. Here, we conducted two studies using both magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to investigate the brain mechanisms underlying recognition of previously learned auditory patterns compared to new variations thereof. Overall, our results showed that recognition of the previously memorized patterns was associated with the recruitment of a large brain network comprising auditory cortex, cingulate gyrus, hippocampus, insula, frontal operculum, and inferior temporal cortex. Notably, while auditory cortex and cingulate gyrus were more active at the beginning of the patterns, responses in hippocampus, insula, and inferior temporal cortex occurred mainly at the end of the patterns. In addition, detection of the varied musical patterns was associated with a sharp response peak originating first in the auditory cortex and subsequently in ventro-medial prefrontal cortex and hippocampus. In these studies, we have shown the brain mechanisms underlying auditory pattern recognition, delineating the main brain regions involved and their temporal dynamics. Future work should involve generative models that could help us test the directionality of the functional connectivity between brain areas; for instance, I believe that dynamic causal modelling (DCM) may provide useful insights towards this aim.

July 20, 2021:

Pulse Rate Variability: Assessing cardiovascular and autonomic health from photoplethysmography

Elisa Mejia-Mejia (City, University of London)

Pulse rate variability (PRV) assesses the behaviour over time of the inter-beat intervals (IBIs) measured from pulse waves such as the photoplethysmogram (PPG). It has been proposed in recent decades as a surrogate of heart rate variability (HRV), which describes the behaviour of instantaneous heart rate over time, measured from the electrocardiogram (ECG). HRV has been widely used for the assessment of cardiac autonomic activity. Both branches of the autonomic nervous system (ANS), i.e. the sympathetic and parasympathetic nervous systems, control the firing rate of the sinus node in the heart, and hence their behaviour determines heart rate (Fox, 2016). By characterising the changes in heart rate over time, i.e. measuring HRV, it is possible to assess the ANS using a non-invasive, indirect measure. Since PPG is perhaps the most widely used physiological signal nowadays, owing to its simplicity and the ease of acquiring the signal over long periods of time, PRV has been proposed as an alternative to HRV. Nonetheless, their relationship is not straightforward, and some researchers claim that, although similar, PRV and HRV should not be considered exactly the same, and that further studies are needed to understand the processes that might be present in PRV but not necessarily visible in HRV. Regardless of the relationship between HRV and PRV, the latter is now widely used in research, and it has been proposed as a useful tool for several applications, such as the detection, characterisation, and monitoring of somatic diseases; the assessment of mental health; sleep studies; and pharmaceutical research. In this presentation, the basic principles for the extraction of PRV from PPG signals will be presented, along with some results regarding the effects of cardiovascular changes on PRV and its relationship with HRV.
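
As background to the extraction step described above, the standard time-domain PRV metrics can be computed directly from detected pulse-peak times. A minimal sketch (the peak times are hypothetical; a real pipeline would first detect peaks from the raw PPG):

```python
import math

def time_domain_prv(beat_times):
    """Time-domain PRV metrics from pulse-peak times (seconds).

    Returns mean IBI, SDNN (standard deviation of IBIs) and RMSSD
    (root mean square of successive IBI differences), all in ms.
    """
    ibis = [1000.0 * (b - a) for a, b in zip(beat_times, beat_times[1:])]
    mean_ibi = sum(ibis) / len(ibis)
    sdnn = math.sqrt(sum((x - mean_ibi) ** 2 for x in ibis) / len(ibis))
    diffs = [b - a for a, b in zip(ibis, ibis[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mean_ibi, sdnn, rmssd

# Hypothetical peak times from a PPG trace (~75 bpm with some jitter)
peaks = [0.00, 0.80, 1.62, 2.40, 3.21, 4.00, 4.82]
mean_ibi, sdnn, rmssd = time_domain_prv(peaks)
print(f"mean IBI {mean_ibi:.0f} ms, SDNN {sdnn:.1f} ms, RMSSD {rmssd:.1f} ms")
```

The same IBI series feeds frequency-domain PRV analyses; only the peak-detection front end differs between PRV (PPG) and HRV (ECG R-peaks).
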
July 13, 2021:

Galvanic Stimulation: a New Tool for Driving Populations of Neurons

Cynthia Steinhardt (Johns Hopkins)

Biphasic pulsatile stimulation is the safe standard for electrical stimulation in neural implants and interfaces. For this reason, it is also commonly used in microstimulation, evoked-response, and other electrical stimulation-based studies. Through our work with simulations and electrophysiology experiments in vestibular afferents, we show that pulsatile stimulation has unnatural, non-linear interactions with the axon that lead to complex population firing patterns. With these capabilities, we restore function but reach limitations in restoring behavior (e.g. cochlear implants restore hearing without pitch, vestibular prostheses restore eye movements but not of natural magnitude).

We also perform studies using novel, implantable galvanic stimulation technology developed in our lab. We find large improvements in restoration of behavioral response (eye velocity) when stimulating vestibular afferents. We also uncover differences in the mechanisms of galvanic and pulsatile stimulation that explain how galvanic stimulation smoothly drives firing rates up and down in individual neurons while preserving naturalistic spike timing. Our findings suggest that, by contrast, galvanic stimulation can drive the same firing pattern in populations of neurons within the field. With these promising findings, we open the door to using galvanic stimulation as a novel tool for studying computations performed within local and interregional populations of neurons.
April 12, 2021:

Compositional problem solving: Investigating a task general strategy to problem solving in brain data and network models

Jascha Achterberg (Cambridge)

It is widely argued that the power of human cognition rests heavily on compositional problem solving: the ability to break a complex problem down into its simple subcomponents, solve those in separate attentional episodes, and then reintegrate the results into the overall solution to the complex problem. Results from patient studies and neuroimaging point to the conclusion that handling task compositionality is a key component of human general intelligence and is supported by the brain's Multiple Demand (MD) Network. As recent modelling work has made progress towards formalising compositionality and creating model-based agents able to recognise a task's compositional structure, there is an exciting opportunity to investigate and model the neuronal basis of compositional problem solving.

This talk will review the basic theory behind compositional problem solving and its link to the MD Network before discussing new modelling approaches and experimental ideas to further understand the brain’s ability to solve task compositionality.

March 29, 2021:

Predictive coding as a consequence of energy efficiency in recurrent neural networks

Nasir Ahmad (Donders)

Predictive coding is a promising framework for understanding sensory processing. It proposes the inhibition of predictable sensory inputs and preferential processing of surprising inputs. Hierarchical predictive coding architectures, composed of separate prediction and error units, have therefore been constructed to explore this principle. Here we show that rather than architecturally hard-wiring prediction and error nodes into a recurrent neural network, these nodes emerge when the network is trained to minimise energy. We also demonstrate multiple timescales of prediction and show through lesioning that this network integrates evidence over time.

March 22, 2021:

Dysconnection and the immunological basis of schizophrenia

Anjali Bhat (UCL)

Schizophrenia has been cast, from different neuroscientific perspectives, as a sensory processing disorder, a highly heritable genetic disorder, and a neurodevelopmental disorder. The dysconnection hypothesis attempts to draw together several of these disparate strands of research to create a coherent picture of how schizophrenia arises, implicating neuromodulatory processes governing synaptic gain control as the aetiological core of psychotic symptoms, i.e. a functional (or perhaps Bayesian) synaptopathy. It calls for studies that empirically ‘close the explanatory gap between pathophysiology at the molecular (synaptic) level and the psychopathology experienced by patients’ (Friston et al., 2016). One strand that has not yet been woven into this tapestry is immunology, which has been overwhelmingly linked with schizophrenia in recent years, for reasons that are not well understood. In this talk I will give an overview of my recent work, which has used a variety of methods with the aim of ‘closing the explanatory gap’. I begin with an exploration of the genetics of mismatch negativity, a key biomarker for schizophrenia. Next, I present experiments on prenatal immunity in neuronal networks grown from hair samples of patients with schizophrenia and healthy controls. Finally, I introduce inference from the perspective of the immune system – immunoceptive inference – as a new way of understanding interactions between the immune system and the brain.

February 22, 2021:
A neurally plausible computational model of WM
Shekoofeh Hedayati (The Pennsylvania State University)

Theories of working memory (WM) as a multicomponent store (Baddeley & Hitch, 1974; Baddeley, 2000) and as activated long-term memory (Ericsson & Kintsch, 1995) have considered visual knowledge to be essential to WM function. Similarly, behavioral data have shown the impact of visual knowledge on memory encoding by comparing performance for familiar vs. unfamiliar items (Zimmer & Fischer, 2020). Yet no computational model has explained the mechanism underlying the interaction between these two constructs and its implications for our understanding of WM.
We propose a neurally plausible computational model of WM called Memory for Latent Representations (MLR) that represents visual knowledge using a modified variational autoencoder and then builds memories out of the latent representations in the model. MLR encodes visual information by flexibly allocating shared neural resources and retrieves it through pixel-wise reconstructions. Consistent with human behavior, the model shows how familiar items can be encoded more efficiently than unfamiliar items. MLR also captures the behavioral capabilities of humans in WM tasks, namely: 1) representing specific attributes of an item (e.g. shape, color) with varying degrees of precision according to task demands (Swan, Collins & Wyble, 2016); 2) representing both categorical and visual attributes of an item; 3) representing novel shape configurations on which one has not been trained; and 4) rapidly tuning encoding parameters to provide flexibility of encoding in an uncertain task.

February 8, 2021:
Modelling image complexity with deep learning
Fintan Nagle (Imperial)

It is commonplace to describe images as "complex" or "simple", but what underlies this judgement? To investigate, we collected image complexity ratings using a custom online platform and modelled them using a deep convolutional neural network (CNN). We first asked whether observers agree on the complexity of an image. 2AFC relative complexity judgements showed a good level of agreement and allowed us to obtain consistent complexity ratings. A CNN pretrained on an object recognition task predicted complexity ratings with r = .83. This model offers a promising method for evaluating perceived complexity. Finally, I discuss the role of low-level visual features, object perception, and textures in complexity judgements.

November 16, 2020:
A Comparison between Different Social Interactions in Multi-User Brain-Computer Interface (BCI) Gaming: Competitive vs. Collaborative
Finda Putri (Glasgow)

The growing interest in BCI gaming has opened up the possibility of multi-user BCI games, which require the involvement of two or more players whose brain signals are integrated into a BCI application. Multi-user BCI is a complex process in which both the technical and the behavioural aspects must be carefully considered. Several studies have tested multi-user BCI games based on different types of control paradigms. However, information on the neurological changes caused by BCI gaming interaction is still relatively limited. The aim of this thesis is to investigate the electrophysiological changes that occur in the brain due to interactive multi-user BCI gaming, where the BCI paradigm used is based on alpha-band non-verbalised operant conditioning.

Forty able-bodied healthy participants took part in multi-user BCI gaming experiments divided into two main conditions, i.e. competitive and collaborative gaming. The BCI game was presented as two bars on either side of the screen with a seesaw placed in between. The bars were controlled by changes in the relative alpha power (RA) of each player, recorded from Pz. When one bar is higher than the other, the seesaw tilts down towards the side with the higher bar. A pair of players were asked to control their assigned bar to achieve a different goal in each gaming interaction. In competitive gaming, they were asked to upregulate their RA (which raises their bar) so that their bar was ≥10% above their opponent's for ≥1 s to score. In collaborative gaming, they were asked to balance the seesaw (thus balancing their bars) by keeping the bars within ±5% of each other for ≥0.5 s to score. Offline analyses were performed on the EEG data, including power spectral density, source localisation, and brain connectivity analyses.
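
The control signal described above, relative alpha power, is essentially a band-power ratio. A minimal illustration using a naive DFT on a synthetic one-second window (the band limits, normalising range, and signal are assumptions for illustration, not the study's exact pipeline):

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi) Hz via a naive DFT (fine for short windows)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq < f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(-x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += re * re + im * im
    return power

def relative_alpha(signal, fs):
    """Alpha-band (8-12 Hz) power as a fraction of 1-30 Hz power."""
    return band_power(signal, fs, 8, 12) / band_power(signal, fs, 1, 30)

# Hypothetical 1-s "Pz" window: strong 10 Hz alpha plus a weaker 20 Hz component
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 20 * t / fs)
       for t in range(fs)]
print(f"relative alpha power: {relative_alpha(sig, fs):.2f}")
```

Upregulating alpha (as in the competitive condition) would raise this ratio and hence the player's bar.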

The results revealed that different types of interaction produce different kinds of responses in the brain activity of the players. These responses include the different spatial distribution of alpha power, the changing source localisation pattern, and the changes in intra- and inter-brain connectivity. It was found that the level of dominance between players also produces different brain activity responses, in both gaming interactions. Additionally, it was revealed that the level of posterior alpha power during resting state can be used to predict the gaming performance. Overall, the study has contributed to providing information regarding the brain’s cortical activity changes during different kinds of multi-user BCI gaming interaction, which is expected to benefit the development of multi-user BCI games and for other methodological considerations in designing multi-user BCI application.

October 12, 2020:
Distinguishing vigilance decrement and low task demands from mind-wandering: A machine learning analysis of EEG
Christina Jin (Groningen)

Mind-wandering is a ubiquitous mental phenomenon defined as self-generated thought irrelevant to the ongoing task. Mind-wandering tends to occur when people are in a low-vigilance state or when they are performing a very easy task. In the current study, we investigated whether mind-wandering is completely dependent on vigilance and current task demands, or whether it is an independent phenomenon. To this end, we trained support vector machine (SVM) classifiers on EEG data in conditions of low and high vigilance, as well as under conditions of low and high task demands, and subsequently tested those classifiers on participants' self-reported mind-wandering. Participants' momentary mental state was measured by means of intermittent thought probes in which they reported on their current mental state. The results showed that neither the vigilance classifier nor the task-demands classifier could predict mind-wandering above chance level, while a classifier trained on self-reports of mind-wandering was able to do so. This suggests that mind-wandering is a mental state different from low vigilance or performing tasks with low demands, both of which could be discriminated from the EEG above chance. Furthermore, we used dipole fitting to source-localize the neural correlates of the most important features in each of the three classifiers, indeed finding a few distinct neural structures between the three phenomena. Our study demonstrates the value of machine-learning classifiers in unveiling patterns in neural data and uncovering the associated neural structures by combining them with an EEG source analysis technique.
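
The study's train-on-one-condition, test-on-another logic can be sketched independently of the specific classifier. A minimal illustration (a nearest-centroid classifier stands in for the SVMs used in the study, and the 2-D "EEG features" and labels are synthetic):

```python
def train_centroids(X, y):
    """Nearest-centroid classifier: mean feature vector per class.

    A dependency-free stand-in for the study's SVMs; the point here is
    the cross-condition transfer logic, not the classifier itself.
    """
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: [sum(col) / len(rows) for col in zip(*rows)] for c, rows in groups.items()}

def predict(cents, x):
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda c: dist(cents[c], x))

def transfer_accuracy(train, test):
    """Train on one labelled condition, test on another."""
    cents = train_centroids(*train)
    X, y = test
    return sum(predict(cents, x) == t for x, t in zip(X, y)) / len(y)

# Synthetic features: vigilance labels (condition A) and self-reported
# mind-wandering labels (condition B) need not align in feature space.
vigilance = ([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.9]], [0, 0, 1, 1])
mind_wandering = ([[0.15, 0.1], [0.95, 1.0], [1.0, 0.2], [0.1, 0.9]], [0, 1, 0, 1])
print(f"vigilance->mind-wandering transfer: {transfer_accuracy(vigilance, mind_wandering):.2f}")
```

In this toy example the two label sets only partly align, so transfer accuracy lands at chance, mirroring the study's finding that vigilance-trained classifiers do not predict mind-wandering.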

July 13, 2020:
A computational study of the role of Renshaw cells in the mammalian locomotor circuit
Priscilla Corsi (EPFL)

In this study we considered the role of the inhibitory interneurons known as Renshaw cells (RCs) in the activity of a simulated locomotor neural network. We used an integrate-and-fire model to reproduce RCs' experimental three-phase responses, consisting of a fast activation, a relaxation time, and a slow activation. Simulations of RCs within a model of the muscle spindle reflex neural network highlighted the multiple roles Renshaw cells play in locomotion. We show that RCs synchronize the pool of motor neurons they act on and regulate the relative duration of the antagonist muscle bursts during the gait cycle. This refined model can be used to simulate the interaction between electrodes and spinal circuits to improve the efficacy of spinal cord stimulation protocols.
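
For readers unfamiliar with the model class, a leaky integrate-and-fire neuron can be simulated in a few lines. A generic sketch with textbook parameter values, not those of the study's Renshaw cell model:

```python
def simulate_lif(i_input, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-70.0, r=10.0):
    """Leaky integrate-and-fire neuron, forward-Euler integration.

    i_input: input current (nA) per time step of dt ms.
    Returns spike times in ms. Parameters are generic textbook values.
    """
    v = v_rest
    spikes = []
    for step, i in enumerate(i_input):
        # Membrane potential decays toward rest and is driven by R*I
        v += dt * (-(v - v_rest) + r * i) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset  # fire and reset
    return spikes

# 200 ms of constant 2 nA drive produces regular tonic firing
spikes = simulate_lif([2.0] * 2000)
print(f"{len(spikes)} spikes, first at {spikes[0]:.1f} ms")
```

The study's three-phase RC response would be reproduced by shaping the input current and synaptic dynamics on top of this basic membrane model.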

May 18, 2020:
Oscillatory Dynamics in Deep Brain Stimulation for Depression.

Vineet Tiruvadi (Emory)

Deep brain stimulation (DBS) has shown promise as a therapy for psychiatric depression. DBS in the subcallosal cingulate cortex (SCC) is the best-studied target but has demonstrated inconsistent results between open-label and clinical studies. Objective signatures of disease and therapy are needed to more systematically tune and improve SCC-DBS. These signatures could then be used to study (a) the neural dynamics underlying the disease and (b) the influence exerted by DBS on those dynamics.

Recent refinements of therapy have narrowed the therapeutic target to specific white matter tracts in the SCC (SCCwm), leading to improvements in treatment response when targeted with per-patient tractography. Additionally, advances in DBS hardware and machine-learning analyses enable chronic intracranial recordings and meaningful inference with sparse, noisy data. Together, these advances enable unprecedented study of antidepressant DBS directly in patients over months of recovery.

Today, I'll be presenting my dissertation work on identifying neural oscillations in the SCC predictive of depression state and on characterizing the direct effects of SCCwm-DBS on these oscillations. Using a prototype DBS device capable of simultaneous stimulation and recording (Activa PC+S; Medtronic PLC), our group collected (a) chronic SCC-LFP multiple times a day over seven months, and (b) combined SCC-LFP and dense-array EEG at therapy onset under various DBS parameters, in a set of six patients with treatment-resistant depression (TRD) treated with SCCwm-DBS. First, we characterized and corrected for mismatch compression in the differentially recorded SCC-LFP. We then developed a linear decoding model of depression state from SCC-LFP oscillations and identified a candidate readout that achieved significant correlation with empirical depression measures and significant classifier performance. Finally, we show that SCCwm-DBS evokes specific changes, primarily in EEG recordings, in oscillatory patterns consistent with chronic SCC-LFP changes.

The results of this work enable reliable measurements of oscillations over chronic time periods, including compound oscillations in the SCC predictive of depression state that can be used to inform DBS parameter management. These oscillations are not directly modulated by SCCwm-DBS, suggesting that antidepressant DBS in the SCC requires precise targeting of patient-specific SCCwm through individualized tractography. Future work will focus on the development of control-theoretic models for the systematic engineering of adaptive antidepressant DBS and the reverse-engineering of neural dynamics underlying emotion.

May 11, 2020:
Coupled place cell – grid cell system for flexible navigation.

Dmitri Laptev (UCL)

Place cells and grid cells in the hippocampal-entorhinal system are thought to integrate sensory and self-motion information into a representation of estimated spatial location, but the precise mechanism is unknown. We developed and simulated a parallel attractor system in which place cells form a recurrent attractor network driven by environmental inputs and grid cells form a continuous attractor network performing path integration driven by self-motion, with inter-connections between them allowing both types of input to influence firing in both ensembles. We show that such a system is needed to explain the temporal dynamics and spatial patterns of place and grid cell firing in experiments involving changing a familiar correspondence between environmental and self-motion inputs. Our results support the hypothesis that place and grid cells provide two different but complementary representations, based on environmental sensory inputs and self-motion inputs integration, respectively. More generally, the study supports the emerging notion that grid cells provide a universal coding mechanism, independent of specific sensory inputs, that can be adjusted to a wide range of tasks in different physical (e.g. spatial, auditory) and non-physical (e.g. conceptual knowledge) domains, where it enables flexible behaviour. 

April 27, 2020:
Electrical Stimulation of Alpha Oscillations Stabilizes Performance on Visual Attention Tasks.

Michael Clayton (University of Oxford)

Neural oscillations in the alpha band (7–13 Hz) have long been associated with reductions in attention. However, recent studies have suggested a more nuanced perspective in which alpha oscillations also facilitate processes of cognitive control and perceptual stability. Transcranial alternating current stimulation (tACS) over occipitoparietal cortex at 10 Hz (alpha-tACS) can selectively enhance EEG alpha power. To assess the contribution of alpha oscillations to attention, we delivered alpha-tACS across 4 experiments while 178 participants performed sustained attention tasks. Poor performance on all visual tasks was previously associated with increased EEG alpha power. We therefore predicted initially that alpha-tACS would consistently impair visual task performance. However, alpha-tACS was instead found to prevent deteriorations in visual performance that otherwise occurred during sham- and 50 Hz-tACS. This finding was observed in 2 experiments, using different sustained attention tasks. In a separate experiment, we also found that alpha-tACS limited improvements on a visual task where learning was otherwise observed. Consequently, alpha-tACS appeared to exert a consistently stabilizing effect on visual attention. Such effects were not seen in an auditory control task, indicating specificity to the visual domain. We suggest that these results are most consistent with the view that alpha oscillations facilitate processes of top-down control and attentional stability.

January 22, 2020 (together with Quirks): 
Using graphical models to study effective connectivity in social and brain networks.
Jan-Philip Franken (University of Edinburgh)

The present talk will discuss two recent applications of graphical models. First, it will focus on modelling social networks through simple graphs to investigate how people deal with statistical dependencies while integrating their direct observations with the communicated beliefs of their social environment (see e.g., Whalen, Griffiths, & Buchsbaum, 2018; Madsen, Bailey, & Pilditch, 2018, for related work). Here, we will discuss results of ongoing work exploring to what extent people can be characterised as Bayesian reasoners when confronted with statistical dependencies in their social environment. Second, the talk will explore recent applications of graphical modelling to brain networks in rodents to investigate effective connectivity between regions of interest (see e.g., Zeidman et al., 2019, for a tutorial). Preliminary results from a collaborative project between City and MIT will be presented where we will explore the potential benefits of using graphical modelling and Bayesian inference for understanding changes in resting state activity in the somatosensory system of rats.

January 6, 2020: 
Zapping states and maps: Exploring neural representations in attention and working memory using combinations of TMS and fMRI

Eva Feredoes (Reading)

Current models of attention and working memory suggest many shared cognitive processes and neural mechanisms, and I will present causal evidence using TMS and concurrent TMS-fMRI that contributes to this view. Specifically, across a series of behavioural TMS studies, we have shown that visual working memory items are in a flexible state determined by the allocation of attention, and which requires the involvement of visual brain areas. I will also present results from several concurrent TMS-fMRI studies suggesting that enhancement of neural representations is a general mechanism by which attention might protect relevant information in the face of competing irrelevant information. This work contributes to evidence showing that short-term information representation and retention is neurally more complex and dynamic than previously thought.

November 6, 2019: 
Modelling the Brain: From Dynamical Complexity to Neural Synchronisation, Chimera-like States, Information Flow Capacity and Dynamic Range

Chris G. Antonopoulos (Essex)

In this talk, I will present a review of my recent work on the study of the brain, aiming to reveal relations between neural synchronisation patterns and information flow capacity, namely the largest amount of information per time unit that can be transmitted between the different parts of the brain networks considered. I will start with the working hypothesis that brains might evolve based on the principle of the maximisation of their internal information flow capacity. In this regard, we have found that the synchronous behaviour and information flow capacity of the evolved networks reproduce well the same behaviours observed in the brain dynamical networks of the Caenorhabditis elegans (C. elegans) soil worm and of humans. Then, I will talk about the verification of our hypothesis by showing that Hindmarsh-Rose (HR) neural networks evolved with coupling strengths that maximise the information flow capacity are those with the closest graph distance to the brain networks of C. elegans and humans. Next, I will present results from a recently published paper on a spectacular neural synchronisation phenomenon observed in modular neural networks such as the C. elegans brain network, called chimera-like states. I will show that, under some assumptions, neurons of different communities of the C. elegans brain network equipped with HR dynamics are able to synchronise among themselves, whereas others, belonging to other communities, remain essentially desynchronised, a situation that alternates dynamically in time. Finally, I will discuss results on the dynamic range in the C. elegans brain network that corroborate the above findings from our earlier studies.

October 21, 2019: 

A computational study of Major Depressive Disorder biomarkers

Anna Anissimova (City, University of London)

Electroencephalogram (EEG) recordings have been used in multiple studies of psychiatric disorders. In this study, a set of biomarkers extracted from patients' EEG data is shown to distinguish between patients and healthy controls. We used dynamic causal modelling (DCM) to obtain a set of interpretable features for a supervised learning algorithm. We analysed the EEG signal recorded from 15 depressed and 35 healthy participants during a multi-source interference task. The best interpretable results (using a linear SVM) were achieved after feature selection and dataset balancing steps. Testing the final model on unseen data demonstrated a balanced accuracy of approximately 64%, a recall of 50%, and a specificity of 78%. These findings suggest that, using DCM, it is possible to build a reliable and interpretable classifier that distinguishes between MDD patients and healthy controls. This can have important applications in clinical practice.
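
For reference, the reported metrics relate to confusion-matrix counts as follows. The counts below are hypothetical, chosen only to roughly reproduce the reported figures, not taken from the study:

```python
def classifier_metrics(tp, fn, tn, fp):
    """Recall (sensitivity), specificity and balanced accuracy from counts."""
    recall = tp / (tp + fn)          # patients correctly identified
    specificity = tn / (tn + fp)     # controls correctly identified
    balanced = (recall + specificity) / 2
    return recall, specificity, balanced

# Hypothetical counts on an unseen test split
# (recall 50%, specificity ~78% -> balanced accuracy ~64%)
recall, spec, bal = classifier_metrics(tp=2, fn=2, tn=7, fp=2)
print(f"recall {recall:.0%}, specificity {spec:.0%}, balanced accuracy {bal:.0%}")
```

Balanced accuracy is the appropriate headline figure here because the classes (15 patients vs. 35 controls) are imbalanced.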

October 8, 2019: 

Neural couplings and the time-on-task effect

Katharina Wagner (University of Gent)

Frontal beta band power has been shown to increase both as a consequence of elevated task demands and with time on task. Beta band power may thus be a neural correlate of cognitive effort. The present study reports effective connections between the prefrontal and premotor cortex, areas known to be involved in cognitive control, and shows that effective connections change in line with frontal beta band power. We applied dynamic causal modelling (DCM) to electrocorticographic recordings of two monkeys performing a cognitive task. Among model architectures with varying presence and direction of effective connections in each hemisphere, we found a fully connected model in the left hemisphere and, in the right hemisphere, a model containing a forward connection from the prefrontal to the premotor region and a backward connection in the opposite direction. Using the beta band power of each electrode as a predictor for connectivity strength within the parametric empirical Bayes (PEB) framework, we found that in both hemispheres the strength of the forward connections from the superficial pyramidal cells of the premotor area to the spiny stellate cells of the prefrontal area changed in line with prefrontal beta power, both when task demands increased and as the session progressed. This consistent change across task, time, and hemispheres was also seen in the self-connection of the prefrontal deep pyramidal cells and in the intrinsic connection from premotor spiny stellate to superficial cells. Thus, the increase in cognitive effort may be related to changes in the feedforward connections and other intrinsic connections.

September 24, 2019:

Spatiotemporal characterisation of neural signals for active multi-sensing and decision-making

Ioannis Delis (Leeds)

Perceptual decisions rely on the integration of information from the environment, which typically involves the combination of stimuli from different senses. The quality of sensory evidence depends highly on our actions, which affect how we acquire information from the external world. Importantly, the processing of this multisensory information requires the interaction of multiple neural processes over time. These interactions remain poorly understood, primarily because of the lack of a unifying methodology that allows their characterisation at both the behavioural and neural levels. Here I will present our recent work on decoding neurophysiological signals to explain decision-making behaviour.

I will first introduce the processes involved in the formation of perceptual decisions and characterise their neural correlates. Then, I will discuss the process of active sensing, whereby movement is used to gather information in order to make perceptual choices. I will present a behavioural paradigm to study active exploration and perceptual selection, and then introduce a computational framework for the joint analysis of neural and behavioural signals. This approach, coupled with cognitive modelling of decision-making behaviour, provides a window into the mechanisms underlying active decision formation. Finally, I will present recent work extending this methodology to study multisensory information processing.

April 15, 2019:

Representations of touch in the somatosensory cortices

Luigi Tamè (Kent)

Detecting and discriminating sensory stimuli are fundamental functions of the nervous system. Electrophysiological and lesion studies suggest that macaque primary somatosensory cortex (SI) is critically involved in discriminating between stimuli, but is not required simply for detecting stimuli. By contrast, transcranial magnetic stimulation (TMS) studies in humans have shown near-complete disruption of somatosensory detection when a single pulse of TMS is delivered over SI. In my presentation, in accordance with macaque studies, I will provide empirical evidence suggesting that human SI is required for discriminating between tactile stimuli and for maintaining stimulus representations over time, or under high task demand, but may not be required for simple tactile detection. Moreover, I will provide empirical evidence showing that human SI, rather than higher level brain areas, is critically involved in the estimation of tactile distance perception as well as bilateral integration of touch.

March 18, 2019:

Mapping relational knowledge in the service of flexible cognition

Mona Garvert (Max Planck, UCL)

Our environment is replete with statistical structure, and similar cause-effect relationships hold across related experiences. By extracting and storing relational knowledge efficiently, the brain can therefore predict states and reinforcements that were never directly experienced. In physical space, the hippocampal-entorhinal system organises statistical regularities between landmarks in a cognitive map, which provides a coordinate system that enables inferences about spatial relationships. In this talk, I demonstrate that a similar map-like organisation of knowledge can also be observed for discrete relationships between objects that are entirely non-spatial, suggesting that the same codes may also organise other dimensions of our experience. When subjects need to flexibly switch between cognitive maps characterised by the same underlying structure but a different distribution of stimuli, structural knowledge is abstracted away from sensory representations in the medial prefrontal cortex over time. Such a separation of structure from stimulus representations may facilitate the generalisation of knowledge across sensory environments and thereby accelerate learning in novel situations. Together, these studies suggest a potential neural mechanism underlying the remarkable human ability to draw accurate inferences from little data.