Abstract

Cracking the neural code is one of the longstanding questions in neuroscience. How does the activity of populations of neurons represent stimuli and perform neural computations? Decades of theoretical and experimental work have provided valuable clues about the principles of neural coding, as well as descriptive understandings of various neural codes. This raises a number of mathematical questions touching on algebra, combinatorics, probability, and geometry. This workshop will explore questions that arise from sensory perception and processing in olfactory, auditory, and visual coding, as well as properties of place field codes and grid cell codes, mechanisms for decoding population activity, and the role of noise and correlations. These questions may be tackled with techniques from information theory, mathematical coding theory, combinatorial commutative algebra, hyperplane arrangements, oriented matroids, convex geometry, statistical mechanics, and more.


Confirmed Speakers & Participants

Talks will be presented virtually or in person as indicated in the schedule below.

  • Arman Afrasiyabi
    Yale University
  • Asohan Amarasingham
    City College, CUNY
  • Daniele Avitabile
    Vrije Universiteit Amsterdam
  • Andrea Barreiro
    Southern Methodist University
  • Amitabha Bose
    New Jersey Institute of Technology
  • Robyn Brooks
    University of Utah
  • Thomas Burns
    ICERM
  • Carlos Castañeda Castro
    Brown University
  • Teressa Chambers
    Brown University
  • Hannah Choi
    Georgia Institute of Technology
  • Giovanna Citti
    University of Bologna
  • Natasha Crepeau
    University of Washington
  • Julie Curtis
    University of Washington
  • Carina Curto
    The Pennsylvania State University
  • Rodica Curtu
    The University of Iowa
  • Steve Damelin
    Mathematical Scientist, Ann Arbor, MI
  • Maria Dascalu
    University of Massachusetts Amherst
  • Julia E Grigsby
    Boston College
  • Aysel Erey
    Utah State University
  • Michael Frank
    Brown University
  • Marcio Gameiro
    Rutgers University
  • Tomas Gedeon
    Montana State University
  • Maria Geffen
    University of Pennsylvania
  • Tim Gentner
    University of California, San Diego
  • Juliann Geraci
    University of Nebraska-Lincoln
  • Chad Giusti
    Oregon State University
  • Betty Hong
    California Institute of Technology
  • Vladimir Itskov
    The Pennsylvania State University
  • Shabnam Kadir
    University of Hertfordshire
  • Sameer Kailasa
    University of Michigan Ann Arbor
  • Roozbeh Kiani
    New York University
  • Zachary Kilpatrick
    University of Colorado Boulder
  • Soon Ho Kim
    Georgia Institute of Technology
  • Maxwell Kreider
    Case Western Reserve University
  • Zelong Li
    Penn State University
  • Yao Li
    University of Massachusetts Amherst
  • Caitlin Lienkaemper
    Boston University
  • Kathryn Lindsey
    Boston College
  • Justin Lines
    Columbia University
  • Vasiliki Liontou
    ICERM
  • Sijing Liu
    Brown University
  • Juliana Londono Alvarez
    Penn State
  • Christian Machens
    Champalimaud Foundation
  • Marissa Masden
    ICERM
  • Sarah Mason
    Wake Forest University
  • Leenoy Meshulam
    University of Washington
  • Nikola Milicevic
    Pennsylvania State University
  • Federica Milinanni
    KTH - Royal Institute of Technology
  • Katie Morrison
    University of Northern Colorado
  • Matt Nassar
    Brown University
  • Junalyn Navarra-Madsen
    Texas Woman's University
  • Ilya Nemenman
    Emory University
  • Gabe Ocker
    Boston University
  • Caitlyn Parmelee
    Keene State College
  • Cengiz Pehlevan
    Harvard University
  • Isabella Penido
    Brown University
  • Jose Perea
    Northeastern University
  • Rebecca R.G.
    George Mason University
  • Antonio Rieser
    Centro de Investigación en Matemáticas
  • Jason Ritt
    Brown University
  • Horacio Rotstein
    New Jersey Institute of Technology
  • Safaan Sadiq
    Pennsylvania State University
  • Nicole Sanderson
    Penn State University
  • Hannah Santa Cruz
    Penn State
  • Cristina Savin
    NYU
  • Elad Schneidman
    Weizmann Institute of Science
  • Nikolas Schonsheck
    University of Delaware
  • David Schwab
    City University of New York
  • Daniel Scott
    Brown University
  • Thomas Serre
    Brown University
  • Tatyana Sharpee
    Salk Institute
  • Thibaud Taillefumier
    UT Austin
  • Gaia Tavoni
    Washington University in St. Louis
  • Peter Thomas
    Case Western Reserve University
  • Nicholas Tolley
    Brown University
  • Taro Toyoizumi
    RIKEN Center for Brain Science
  • Ka Nap Tse
    University of Pittsburgh
  • Yuki Tsukada
    Keio University
  • Juan Pablo Vigneaux
    Caltech
  • Bin Wang
    University of California, San Diego
  • Iris Yoon
    Wesleyan University
  • Nora Youngs
    Colby College
  • Zhuojun Yu
    Case Western Reserve University
  • Ling Zhou
    ICERM
  • Robert Zielinski
    Brown University

Workshop Schedule

Monday, October 30, 2023
  • 8:50 - 9:00 am EDT
    Welcome
    11th Floor Lecture Hall
    • Session Chair
    • Brendan Hassett, ICERM/Brown University
  • 9:00 - 9:45 am EDT
    How to perform computations in low-rank excitatory-inhibitory spiking networks: a geometric view
    11th Floor Lecture Hall
    • Speaker
    • Christian Machens, Champalimaud Foundation
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Models of neural networks can be largely divided into two camps. On one end, mechanistic models such as balanced spiking networks resemble activity regimes observed in data, but are often limited to simple computations. On the other end, functional models like trained deep networks can perform a multitude of computations, but are far removed from experimental physiology. Here, I will introduce a new framework for excitatory-inhibitory spiking networks which retains key properties of both mechanistic and functional models. The principal insight is to cast the problem of spiking dynamics in the low-dimensional space of population modes rather than in the original neural space. Neural thresholds then become convex boundaries in the population space, and population dynamics is either attracted (I population) or repelled (E population) by these boundaries. The combination of E and I populations results in balanced, inhibition-stabilized networks which are capable of universal function approximation. I will illustrate these insights with simple, geometric toy models, and I will argue that we need to reconsider the very basics of how we think about neural networks.
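    A minimal numerical sketch of the geometric idea (our illustration, not the speaker's code): a single spike-coding population, ignoring the E/I split, in which a neuron fires exactly when doing so reduces the readout error, so each threshold acts as a hyperplane bounding a convex error region. All constants are arbitrary.

    ```python
    import numpy as np

    # Toy spike-coding network tracking a 2-D signal x(t).
    # Neuron i has decoding vector D[:, i]; its "voltage" is the
    # projection of the readout error, V_i = D_i . (x - x_hat), and it
    # spikes when V_i exceeds |D_i|^2 / 2, i.e. when a spike would
    # shrink the error. The thresholds bound a convex region in the
    # low-dimensional population space.
    rng = np.random.default_rng(0)
    N, steps, dt, leak = 40, 2000, 1e-3, 10.0

    angles = rng.uniform(0, 2 * np.pi, N)
    D = 0.1 * np.vstack([np.cos(angles), np.sin(angles)])   # 2 x N
    thresh = 0.5 * np.sum(D**2, axis=0)

    r = np.zeros(N)                                # filtered spike trains
    for t in range(steps):
        x = np.array([np.sin(2 * np.pi * t * dt), np.cos(2 * np.pi * t * dt)])
        V = D.T @ (x - D @ r)                      # project error onto neurons
        i = int(np.argmax(V - thresh))
        if V[i] > thresh[i]:                       # boundary crossed -> spike
            r[i] += 1.0
        r *= 1.0 - dt * leak                       # leak of filtered rates

    print("final readout error:", np.linalg.norm(x - D @ r))
    ```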
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    Structured variability and its roles in neural computation: the hippocampus perspective
    11th Floor Lecture Hall
    • Speaker
    • Cristina Savin, NYU
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Local circuit interactions play a key role in neural computation and are dynamically shaped by experience. However, measuring and assessing their effects during behavior remains a challenge. Here we combine techniques from statistical physics and machine learning to develop new tools for determining the effects of local network interactions on neural population activity. This approach reveals highly structured local interactions between hippocampal neurons, which make the neural code more precise and easier to read out by downstream circuits, across different levels of experience. More generally, the novel combination of theory and data analysis in the framework of maximum entropy models enables traditional neural coding questions to be asked in a naturalistic setting.
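    As a concrete reference point for the maximum-entropy framework mentioned above, here is a minimal sketch (ours, with fake data) of fitting a pairwise maximum-entropy (Ising) model to binary population activity by exact gradient ascent, which is feasible only for small populations.

    ```python
    import numpy as np
    from itertools import product

    # Fit P(s) ~ exp(h.s + s.J.s/2) to binary patterns by matching first
    # and second moments; exact enumeration over all 2^N states (small N).
    rng = np.random.default_rng(1)
    N, M = 5, 2000
    data = (rng.random((M, N)) < 0.3).astype(float)   # hypothetical spikes

    states = np.array(list(product([0, 1], repeat=N)), dtype=float)
    mean_data = data.mean(0)
    corr_data = data.T @ data / M

    h, J = np.zeros(N), np.zeros((N, N))
    for _ in range(2000):
        E = states @ h + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(E - E.max()); p /= p.sum()
        mean_model = p @ states
        corr_model = states.T @ (states * p[:, None])
        h += 0.5 * (mean_data - mean_model)           # moment-matching steps
        J += 0.5 * (corr_data - corr_model)
        np.fill_diagonal(J, 0.0)                      # diagonal absorbed by h

    print("max mismatch in firing rates:", np.abs(mean_data - mean_model).max())
    ```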
  • 11:15 - 11:45 am EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Carina Curto, The Pennsylvania State University
    • Katie Morrison, University of Northern Colorado
  • 11:45 am - 1:30 pm EDT
    Lunch/Free Time
  • 1:30 - 2:15 pm EDT
    The topology, geometry, and combinatorics of feedforward neural networks
    11th Floor Lecture Hall
    • Speaker
    • Julia E Grigsby, Boston College
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    Deep neural networks are a class of parameterized functions that have proven remarkably successful at making predictions about unseen data from finite labeled data sets. They do so even in settings when classical intuition suggests that they ought to be overfitting (aka memorizing) the data. I will begin by describing the structure of neural networks and how they learn. I will then advertise one of the theoretical questions animating the field: how does the relationship between the number of parameters and the size of the data set impact the dynamics of how they learn? Along the way I will emphasize the many ways in which topology, geometry, and combinatorics play a role in the field.
  • 2:30 - 2:40 pm EDT
    Correlated dense associative memories
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Thomas Burns, ICERM
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    Associative memory networks encode memory patterns by establishing dynamic attractors centred on specific states of neurons. These attractors, nonetheless, are not constrained to remain fixed points or singular memory patterns. Through the correlation of these attractors and asymmetry of the network's connections, we can depict sequences or sets of stimuli that are temporally or spatially connected via mathematical graphs. By further modulating these correlations using inhibitory (anti-Hebbian) learning rules, we show how structures may be hierarchically segmented at multiple scales. Such structures can also be used to conduct 'computations' where sequences of (quasi-)attractors code for an 'associationist' algorithmic syntax. This therefore illustrates how auto- and hetero-associative recall processes can form a basis for executing more complex network behaviours, which is aided by the highly nonlinear energy landscape inherent in dense associative memory networks (also known as modern Hopfield networks).
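    A minimal sketch of retrieval dynamics in a dense associative memory (modern Hopfield network); this is our toy version without the correlated attractors or asymmetric connections discussed in the talk: a corrupted cue is driven toward a stored pattern by a softmax update.

    ```python
    import numpy as np

    # Dense associative memory retrieval: x <- X^T softmax(beta * X x).
    # High beta makes attractors sharp; stored patterns are rows of X.
    rng = np.random.default_rng(2)
    P, dim, beta = 20, 64, 8.0
    X = np.sign(rng.standard_normal((P, dim)))        # stored +/-1 patterns

    def retrieve(x, steps=5):
        for _ in range(steps):
            a = np.exp(beta * (X @ x) / np.sqrt(dim))  # similarity to patterns
            x = X.T @ (a / a.sum())                    # convex recombination
        return x

    probe = X[0] + 0.8 * rng.standard_normal(dim)      # corrupted cue
    out = retrieve(probe / np.linalg.norm(probe))
    cos = out @ X[0] / (np.linalg.norm(out) * np.linalg.norm(X[0]))
    print("overlap with stored pattern:", round(float(cos), 4))
    ```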
  • 2:40 - 2:50 pm EDT
    Using spherical coordinates and Stiefel manifolds to decode neural data
    Lightning Talks - 11th Floor Collaborative Space
    • Speaker
    • Nikolas Schonsheck, University of Delaware
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    A central challenge in modern computational neuroscience is decoding behaviors and stimuli from the activity of the neural populations that encode them. In this talk, I will describe a few examples of how one can do this using novel techniques from algebraic topology. I will describe a method that is well-suited to stimuli with spherical geometry, and another method that can be used on Stiefel manifolds. For the latter, I will discuss an application to simulated data on a partially sampled circular stimulus space where standard persistence techniques fail.
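    For orientation, here is a sketch of the simplest baseline such topological decoders improve upon (our toy, not the speaker's method): recovering a circular stimulus variable from simulated population activity by projecting onto the top two principal components and reading off an angle.

    ```python
    import numpy as np

    # Simulate tuning-curve responses to a hidden circular stimulus and
    # decode the angle from the top-2 PCA plane.
    rng = np.random.default_rng(4)
    theta = np.sort(rng.uniform(0, 2 * np.pi, 400))        # hidden stimulus
    prefs = np.linspace(0, 2 * np.pi, 25, endpoint=False)  # preferred angles
    rates = np.exp(2.0 * np.cos(theta[:, None] - prefs[None, :]))
    rates += 0.3 * rng.standard_normal(rates.shape)        # measurement noise

    Xc = rates - rates.mean(0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    proj = Xc @ Vt[:2].T
    decoded = np.arctan2(proj[:, 1], proj[:, 0])

    def circ_spread(a):          # 1 - |mean resultant|; 0 = perfect tracking
        return 1.0 - np.abs(np.mean(np.exp(1j * a)))

    # decoded angle matches theta up to rotation/reflection of the plane
    err = min(circ_spread(decoded - theta), circ_spread(decoded + theta))
    print("circular tracking error:", round(float(err), 3))
    ```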
  • 2:50 - 3:00 pm EDT
    Active sensing and switching in neural population activity
    Lightning Talks - 11th Floor Collaborative Space
    • Speaker
    • Soon Ho Kim, Georgia Institute of Technology
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    While sensory processing and motor control are often studied in isolation, perception and action are fundamentally intertwined. Here we study electrophysiological recordings of the barrel cortex of mice during a shape discrimination task in which mice actively whisk against their surroundings to identify and discriminate objects. We find significant changes in intra- and interlaminar functional connectivity during whisking trials compared to the resting state, with information flow from superficial to deep layers becoming more prominent. We further use a novel generalized linear model developed for spiking neural activity, coupled with a hidden Markov model, to analyze state transitions in neural activity as well as in behavior. The results shed light on the neural activity underpinning active whisking and sensory perception.
  • 3:00 - 3:10 pm EDT
    On the Convexity of Certain 4-Maximal Neural Codes
    Lightning Talks - 11th Floor Collaborative Space
    • Speaker
    • Natasha Crepeau, University of Washington
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    A convex neural code describes the regions of an arrangement of convex open sets in Euclidean space, where each set corresponds to the place field of a neuron in an animal's environment. The convexity of neural codes with up to three maximal codewords is completely characterized by the lack of local obstructions, introduced by Giusti and Itskov. Another indicator of non-convexity, introduced by Perez, Matusevich, and Shiu, is the presence of wheels. Jeffs conjectured that a 4-maximal neural code is convex if and only if it has no local obstructions and no wheels. By studying the nerve of the maximal codewords of a given code, we resolve this conjecture for certain classes of 4-maximal neural codes. Additionally, we describe a type of wheel always contained in a family of 4-maximal neural codes, with the goal of identifying more.
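    The basic object here is easy to compute by brute force. Below is a minimal sketch (ours) that enumerates the combinatorial code of a small arrangement of convex sets, disks in the plane, by sampling points on a grid; each sampled point contributes the codeword of the disks containing it.

    ```python
    import numpy as np
    from itertools import product

    # Enumerate the code of four open disks of equal radius in the plane.
    centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8], [0.5, 0.3]])
    radius = 0.7

    code = set()
    for x, y in product(np.linspace(-1, 2, 250), np.linspace(-1, 1.8, 250)):
        word = tuple(int(np.hypot(x - cx, y - cy) < radius)
                     for cx, cy in centers)
        code.add(word)

    for w in sorted(code, reverse=True):
        print(w)          # codewords of the arrangement, e.g. (1, 0, 0, 1)
    ```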
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    TBD
    11th Floor Lecture Hall
    • Speaker
    • Leenoy Meshulam, University of Washington
    • Session Chair
    • Nora Youngs, Colby College
  • 5:00 - 6:30 pm EDT
    Reception
    11th Floor Collaborative Space
Tuesday, October 31, 2023
  • 9:00 - 9:45 am EDT
    Hyperbolic geometry and power law adaptation in neural circuits
    11th Floor Lecture Hall
    • Speaker
    • Tatyana Sharpee, Salk Institute
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    Where can a place cell put its fields? Let us count the ways
    11th Floor Lecture Hall
    • Speaker
    • Thibaud Taillefumier, UT Austin
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    A hippocampal place cell exhibits multiple firing fields within and across environments. What factors determine the configuration of these fields, and could they be set down in arbitrary locations? We conceptualize place cells as perceptrons performing evidence combination across many inputs, including grid-cell drives, and selecting a threshold to fire. Grid-cell drives provide geometrically organized inputs in the form of multiscale periodic activity. We characterize and count which field arrangements a place cell can realize with such structured inputs. The number of realizable place-field arrangements with grid-like inputs is much larger than with one-hot coded inputs of the same input dimension. However, the realizable place-field arrangements make up a vanishing fraction of all possible arrangements. We show that the “separating capacity”, or spatial range over which all field arrangements are realizable, is given by the rank of the grid-like input matrix; this rank equals the sum of the distinct grid periods, a small fraction of the coding range, which scales as the product of the periods. Compared to random inputs over the same range, grid-structured inputs generate larger margins, conferring stability to place fields. Finally, since the realizable arrangements are determined by the input geometry, the model predicts that place fields should lie in constrained arrangements within and across environments.
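    The rank statement can be checked numerically in a few lines. A minimal sketch (ours), with one-hot-within-period rows standing in for two grid modules on a 1-D track; note that in this construction the computed rank comes out one less than the sum of the periods, since the modules share the constant mode.

    ```python
    import numpy as np

    # Grid-like input matrix: for each module with period lam, one row per
    # phase, active wherever position mod lam equals that phase.
    L, periods = 60, (3, 5)
    rows = [(np.arange(L) % lam == phase).astype(float)
            for lam in periods for phase in range(lam)]
    G = np.array(rows)

    print("rank of grid-like input matrix:", np.linalg.matrix_rank(G))  # 7
    print("sum of distinct periods:", sum(periods))                     # 8
    print("coding range (product of periods):", int(np.prod(periods)))  # 15
    ```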
  • 11:15 - 11:45 am EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Zachary Kilpatrick, University of Colorado Boulder
    • Tatyana Sharpee, Salk Institute
  • 11:50 am - 12:00 pm EDT
    Group Photo (Immediately After Talk)
    11th Floor Lecture Hall
  • 12:00 - 1:30 pm EDT
    Networking Lunch
    Working Lunch - 11th Floor Collaborative Space
  • 1:30 - 2:15 pm EDT
    Toward a unifying theory of context-dependent efficient coding of sensory spaces
    11th Floor Lecture Hall
    • Speaker
    • Gaia Tavoni, Washington University in St. Louis
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Contextual information can powerfully influence the neural representation and perception of stimuli across the senses: multimodal cues, stimulus history, novelty, rewards, and behavioral goals can all affect how sensory inputs are encoded in the brain. Experimental findings are scattered, and a top-down, overarching interpretation is lacking. Our goal is to develop a unifying theory of context-dependent sensory coding, beginning with the olfactory system. We use an approach based on the information-theoretic hypothesis that optimal codes strive to maximize the overall entropy (decodability) of sensory neural representations while minimizing neural costs (e.g., in energetic terms). A novel feature of our theory is that it incorporates contextual feedback: this allows us to predict how optimal odor representations are modulated by top-down signals that represent different types of context, including the overall multisensory environment and behavioral goals. Our theory reproduces (and provides a unifying interpretation of) a large number of experimental observations. These include adaptation to familiar stimuli, background suppression and detection of novel odors in mixtures, pattern separation between similar odors after a single sniff, increased responsiveness of neurons to behaviorally salient stimuli, and figure-ground segregation of salient odor targets. It also makes novel predictions, such as the amplification of some of these effects in ambiguous multisensory contexts, and the emergence of olfactory illusions in specific environments. Our predictions generalize to a broad class of canonical microcircuits, suggesting that the efficient coding principles uncovered here may also apply to the building blocks of other sensory systems. Finally, we show that our optimal-coding solutions can be learned in neural circuits through Hebbian synaptic plasticity. This result connects our normative findings (Marr's computational level of analysis) to biologically plausible processes (Marr's implementational level of analysis). In conclusion, we have taken significant steps towards developing a context-dependent efficient coding theory that is biologically interpretable, is broadly applicable across sensory systems, and establishes a conceptual foundation for studying sensory coding associated with behavior.
  • 2:30 - 3:15 pm EDT
    Visual coding shaped by anatomical and functional connectivity structures
    11th Floor Lecture Hall
    • Speaker
    • Hannah Choi, Georgia Institute of Technology
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Visual cortical neurons encode diverse context-dependent information of visual inputs. For example, neuronal populations encode specific features of visual stimuli such as orientation, direction of movement, or object identities, while also encoding prior experience and expectations. This talk will focus on understanding how such different neural codes are shaped by both anatomical and functional connectivity of neuronal populations in the mouse visual cortex across multiple regions. In a recent experimental study, we found that lower cortical areas such as the primary visual cortex and the posterior medial higher order visual area primarily encode image identities from both expected and unexpected sequences of natural images, while neural responses in the retrosplenial cortex strongly represent expectation, in accordance with predictive coding theory. Motivated by this, we study how inter-areal layer-specific connectivity modulates the representation of task-relevant information such as input identity and expectation violation by performing representational analyses on recurrent neural networks with systematically altered structural motifs. The second part of the talk will focus on how visual stimuli of varying complexity drive functional connectivity of neurons in the mouse visual cortex. Our analyses of electrophysiological data across multiple areas of visual cortex reveal that the frequencies of different low-order connectivity motifs are preserved across a range of stimulus complexity, suggesting the role of specific motifs as local computational units of visual information.
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Restructuring of olfactory representations in the fly brain around odor relationships in natural sources
    11th Floor Lecture Hall
    • Speaker
    • Betty Hong, California Institute of Technology
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    A core challenge of olfactory neuroscience is to understand how neural representations of odor are generated and transformed through successive layers of the olfactory circuit into formats that support perception and behavior. The encoding of odor by odorant receptors in the input layer of the olfactory system reflects, at least in part, the chemical relationships between odor compounds. Neural representations of odor in higher order associative olfactory areas, generated by random feedforward networks, are expected to largely preserve these input odor relationships. We evaluated these ideas by examining how odors are represented at different stages of processing in the olfactory circuit of the vinegar fly D. melanogaster. We found that representations of odor in the mushroom body (MB), a third-order associative olfactory area in the fly brain, are indeed structured and invariant across flies. However, the structure of MB representational space diverged significantly from what is expected in a randomly connected network. In addition, odor relationships encoded in the MB were better correlated with a metric of the similarity of their distribution across natural sources compared to their similarity with respect to chemical features, and the converse was true for odor relationships encoded in primary olfactory receptor neurons (ORNs). Comparison of odor coding at primary, secondary, and tertiary layers of the circuit revealed that odors were significantly regrouped with respect to their representational similarity across successive stages of olfactory processing, with the largest changes occurring in the MB. The non-linear reorganization of odor relationships in the MB indicates that unappreciated structure exists in the fly olfactory circuit, and this structure may facilitate the generalization of odors with respect to their co-occurrence in natural sources.
Wednesday, November 1, 2023
  • 9:00 - 9:45 am EDT
    Information theoretical approaches to model synaptic plasticity
    11th Floor Lecture Hall
    • Speaker
    • Taro Toyoizumi, RIKEN Center for Brain Science
    • Session Chair
    • Horacio Rotstein, New Jersey Institute of Technology
    Abstract
    We adjust our behavior adaptively, based on experience, to thrive in our environment. Activity-dependent synaptic plasticity within neural circuits is believed to be a fundamental mechanism that enables such adaptive behavior. In this talk, I will introduce a top-down approach to modeling synaptic plasticity. Specifically, recognizing the brain as an information-processing organ, I posit that synaptic plasticity mechanisms have evolved to transmit information across synapses efficiently. This principle suggests a method to identify hidden independent sources behind sensory scenes. I will demonstrate that it is feasible to reconstruct even nonlinearly mixed sources that underlie sensory inputs when sensors of sufficiently high dimensions are employed. Furthermore, the theory also helps in interpreting experimentally observed results: it reproduces the distinct outcomes of synaptic plasticity observed in up and down states during non-rapid eye movement sleep, shedding light on how memory consolidation might be influenced by the states and spatial scale of slow waves.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    TBD
    11th Floor Lecture Hall
    • Speaker
    • Nora Youngs, Colby College
    • Session Chair
    • Horacio Rotstein, New Jersey Institute of Technology
    Abstract
    Neural codes allow the brain to represent, process, and store information about the world. Combinatorial codes, comprised of binary patterns of neural activity, encode information via the collective behavior of populations of neurons. A code is called convex if its codewords correspond to regions defined by an arrangement of convex open sets in Euclidean space. What makes a neural code convex? That is, how can we tell from the intrinsic structure of a code if there exists a corresponding arrangement of convex open sets? In this talk, we will exhibit topological, algebraic, and geometric approaches to answering this question.
  • 11:30 am - 12:00 pm EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Katie Morrison, University of Northern Colorado
    • Nora Youngs, Colby College
  • 12:00 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 2:45 pm EDT
    Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
    11th Floor Lecture Hall
    • Speaker
    • Tim Gentner, University of California, San Diego
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    To understand neural representation, researchers commonly compute receptive fields by correlating neural activity with external variables drawn from sensory signals. These receptive fields are only meaningful to the experimenter, however, because only the experimenter has access to both the neural activity and the external variables. To examine representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. Here, we examined the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems represent invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the caudal medial neostriatum (NCM) of the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity among small groups of neurons expresses an intrinsic representational geometry of natural, extrinsic stimulus space. This combinatorial sensory code for representing vocal communication signals does not require computation of receptive fields and is in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:15 pm EDT
    Decoding a compact neural circuit of Caenorhabditis elegans
    11th Floor Lecture Hall
    • Speaker
    • Yuki Tsukada, Keio University
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    Caenorhabditis elegans provides a compact neural circuit consisting of 302 neurons, along with simple behavioral experiments for dissecting the neural code. We discuss our modeling approach, which is based on quantitative measurements of behavior and neural activity and a systems-identification framework. We focus in particular on thermotaxis as a simple behavioral model, one that nevertheless involves environmental sensing, memory, learning, and decision-making.
Thursday, November 2, 2023
  • 9:00 - 9:45 am EDT
    Inhibitory neurons control cortical auditory processing
    11th Floor Lecture Hall
    • Speaker
    • Maria Geffen, University of Pennsylvania
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Sparse coding can support different forms of population-level codes including localist and distributed representations. In localist representations, a feature is represented by activity of a specific neuronal subpopulation. By contrast, in a distributed representation, a sensory code is represented by the relative activity of neuronal populations. These codes trade off advantages in terms of information transmission. I will present our recent findings that different inhibitory neurons differentially control these forms of information coding, by shifting the coding scheme in the auditory cortex between localist and distributed representations.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Maximizing Mutual Information in Mosquito Olfaction
    11th Floor Lecture Hall
    • Speaker
    • Caitlin Lienkaemper, Boston University
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Across species, the olfactory system follows a conserved organization: each olfactory receptor neuron expresses a single type of olfactory receptor, and responses of olfactory sensory neurons which express the same receptor are pooled before they are sent to higher regions of the brain. Mosquitoes have recently been shown to violate this organization: olfactory sensory neurons express multiple receptor types, thus mixing information about activation of different receptors from the start. Because mosquitoes are olfactory predators, it is reasonable to assume that this pattern of coexpression makes the mosquito olfactory system more effective. Under which conditions and assumptions does coexpression of multiple receptors make sense, and how do the statistics of olfactory stimuli shape the optimal pattern of receptor expression? In a linear, feedforward model of the olfactory system with Gaussian noise, we compute the level and pattern of coexpression that maximize the mutual information between olfactory stimulus and neural response. We find that coexpressing receptors with correlated activity maximizes the mutual information when neurons are reliable but olfactory stimuli are noisy. We then look at how the geometry of the receptor correlations interacts with the sign constraints to shape the pattern of optimal receptor expression.
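    The quantity being optimized has a closed form in this setting. A minimal sketch (ours, with made-up numbers) of the linear-Gaussian mutual information, comparing a coexpression-style readout that pools two correlated receptors against a one-receptor-per-neuron readout at low and high noise.

    ```python
    import numpy as np

    # I(x; r) for r = A x + noise, stimulus covariance C, noise variance s2:
    #   I = (1/2) * log det(I + A C A^T / s2)
    def mutual_info(A, C, s2):
        M = np.eye(A.shape[0]) + A @ C @ A.T / s2
        return 0.5 * np.linalg.slogdet(M)[1]

    C = np.array([[1.0, 0.8],
                  [0.8, 1.0]])                    # correlated receptor drives
    single = np.eye(2)                            # one receptor per neuron
    pooled = np.ones((2, 2)) / np.sqrt(2)         # both neurons coexpress both

    for s2 in (0.1, 5.0):                         # reliable vs. noisy regime
        print(f"noise {s2}: single {mutual_info(single, C, s2):.3f}, "
              f"pooled {mutual_info(pooled, C, s2):.3f}")
    ```

    In this toy, pooling the correlated receptors yields higher information at high noise and lower information at low noise, qualitatively echoing the trade-off described above.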
  • 11:30 am - 1:30 pm EDT
    Open Problems Lunch
    Working Lunch - 11th Floor Collaborative Space
  • 1:30 - 2:15 pm EDT
    Emergent properties of large population codes
    11th Floor Lecture Hall
    • Speaker
    • Ilya Nemenman, Emory University
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
  • 2:30 - 3:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:00 - 3:45 pm EDT
    Musings on mesoscale structures, brain states, and visual art, through a topological lens
    11th Floor Lecture Hall
    • Speaker
    • Shabnam Kadir, University of Hertfordshire
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    The neural code is sufficiently smooth in space and time for meaningful neuroscientific recordings to be made at a coarse, whole-brain scale, as in fMRI and EEG. We investigate the structural and functional connectome using methods from applied topology, namely persistent homology. We reveal differences in the white-matter structural connectome in schizophrenia using the publicly available COBRE dataset. We also develop a method for exploring dynamic functional connectomics in fMRI which enables the analysis and derivation of brain states from a single recording and a single trial, whereas traditional fMRI analysis techniques rely on averaging over trials and subjects, disregarding individual idiosyncrasies. Finally, we explore questions of visual perception and the appreciation of art in an experiment measuring EEG, eye movement, and conscious perception/appreciation of abstract paintings generated both by a human artist and by BigGAN.
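    For readers new to persistent homology, here is a minimal sketch (ours) of its simplest piece: the H0 barcode (component merge scales) of a Vietoris-Rips filtration built from a correlation-based distance matrix, computed with a union-find over edges. Real connectomic analyses like those above also track higher-dimensional features.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    act = rng.standard_normal((30, 500))       # 30 regions x 500 timepoints
    D = 1.0 - np.corrcoef(act)                 # distance from correlation

    parent = list(range(len(D)))
    def find(i):                               # union-find, path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((D[i, j], i, j)
                   for i in range(len(D)) for j in range(i + 1, len(D)))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                           # two components merge at scale d
            parent[ri] = rj
            deaths.append(d)

    print("first H0 death scales:", np.round(deaths[:5], 3))
    ```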
  • 4:00 - 4:45 pm EDT
    Learning large neural population codes, accurately, efficiently, and in a biologically-plausible way using sparse random projections
    11th Floor Lecture Hall
    • Virtual Speaker
    • Elad Schneidman, Weizmann Institute of Science
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    I will present a new class of highly accurate, scalable, and efficient models of the activity of large neural populations. Moreover, I will show that these models have a biologically-plausible implementation by neural circuits that rely on random, sparse, and non-linear projections. I will further show that homeostatic synaptic scaling makes the learning of such models for very large neural populations even more efficient and accurate. Finally, I will discuss how such models can allow the brain to perform Bayesian decoding and the learning of metrics on the space of neural codes and of external stimuli.
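    A minimal sketch (ours, tiny and exhaustive rather than scalable) of the model family described above: features are thresholded sparse random projections of the spike pattern, and the maximum-entropy model over those features is fit by matching feature means.

    ```python
    import numpy as np
    from itertools import product

    # P(s) ~ exp(sum_k lam_k f_k(s)),  f_k(s) = 1[w_k . s > theta],
    # with sparse random w_k. N is small enough to enumerate all states.
    rng = np.random.default_rng(5)
    N, K, M, theta = 10, 40, 3000, 0.5
    data = (rng.random((M, N)) < 0.2).astype(float)      # hypothetical spikes

    W = (rng.random((K, N)) < 0.3) * rng.standard_normal((K, N))
    feats = lambda S: ((S @ W.T) > theta).astype(float)

    states = np.array(list(product([0, 1], repeat=N)), dtype=float)
    F = feats(states)                              # features of all 2^N states
    f_data = feats(data).mean(0)

    lam = np.zeros(K)
    for _ in range(3000):
        e = F @ lam
        p = np.exp(e - e.max()); p /= p.sum()
        lam += 0.1 * (f_data - p @ F)              # log-likelihood gradient

    print("max feature-mean mismatch:", np.abs(f_data - p @ F).max())
    ```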
Friday, November 3, 2023
  • 9:00 - 9:45 am EDT
    Representational geometry of perceptual decisions
    11th Floor Lecture Hall
    • Speaker
    • Roozbeh Kiani, New York University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    I will explore two core principles of circuit models for perceptual decisions. In these models, neural ensembles that encode actions compete to form decisions. Consequently, representation and readout of the decision variables (DVs) in these models are similar for decisions in which the same actions compete, irrespective of input and task context differences. Further, DVs are encoded as partially potentiated action plans through balance of activity of action-selective ensembles. I show that the firing rates of neurons in the posterior parietal cortex of monkeys performing motion and face discrimination tasks violate these principles. Instead, neural responses suggest a mechanism in which decisions form along curved population-response manifolds misaligned with action representations. These manifolds rotate in state space for different task contexts, making optimal readout of the DV task dependent. Similar manifolds exist in lateral and medial prefrontal cortex, suggesting common representational geometries across decision-making circuits.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Sensory input to cortex encoded on low-dimensional periphery-correlated subspaces
    11th Floor Lecture Hall
    • Speaker
    • Andrea Barreiro, Southern Methodist University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    As information about the world is conveyed from the sensory periphery to central neural circuits, it mixes with complex ongoing cortical activity. How do neural populations keep track of sensory signals, separating them from noisy ongoing activity? I will talk about our recent work demonstrating that sensory signals are encoded more reliably in low-dimensional subspaces defined by correlations between neural activity in primary sensory cortex and upstream sensory brain regions. We analytically show that these subspaces can reach optimal limits (without an ideal observer) as noise correlations between cortex and upstream regions are reduced, and that this principle generalizes across diverse sensory stimuli in the olfactory system and the visual system of awake mice. Finally, I will talk about the neural observations that originally motivated our thinking in this area: the difference in the olfactory response between inhale and exhale. This difference is evident early in the olfactory pathway, and we hypothesize that it arises in part because of fluid mechanical forces in the nasal cavity. I will show how we are constructing a phase preference map for mechanical forcing. Our goal is to combine this map with emerging research on receptor zones to produce a unified view of the sensory inputs underlying directional selectivity.
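    The kind of subspace being estimated can be illustrated with a standard canonical-correlation computation; the sketch below (ours, on simulated data) finds the directions in which a "cortical" and an "upstream" population covary.

    ```python
    import numpy as np

    # Two populations driven by a shared 2-D signal plus private noise;
    # canonical correlations of the whitened activity recover the shared
    # subspace (values near 1 for the two shared dimensions).
    rng = np.random.default_rng(6)
    T, shared = 2000, 2
    s = rng.standard_normal((T, shared))
    cortex = s @ rng.standard_normal((shared, 20)) \
        + 0.5 * rng.standard_normal((T, 20))
    periph = s @ rng.standard_normal((shared, 15)) \
        + 0.5 * rng.standard_normal((T, 15))

    def orthobasis(X):              # orthonormal basis of centered columns
        U, _, _ = np.linalg.svd(X - X.mean(0), full_matrices=False)
        return U

    corr = np.linalg.svd(orthobasis(cortex).T @ orthobasis(periph),
                         compute_uv=False)
    print("top canonical correlations:", np.round(corr[:4], 3))
    ```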
  • 11:30 am - 12:30 pm EDT
    Final Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
  • 12:30 - 2:00 pm EDT
    Lunch/Free Time
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space

All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4).


Request Reimbursement

This section is for general purposes only and does not indicate that all attendees receive funding. Please refer to your personalized invitation to review your offer.

ORCID iD
As this program is funded by the National Science Foundation (NSF), ICERM is required to collect your ORCID iD if you are receiving funding to attend this program. Be sure to add your ORCID iD to your Cube profile as soon as possible to avoid delaying your reimbursement.
Acceptable Costs
  • 1 roundtrip between your home institution and ICERM
  • Flights on U.S. or E.U. airlines – economy class to either Providence airport (PVD) or Boston airport (BOS)
  • Ground Transportation to and from airports and ICERM.
Unacceptable Costs
  • Flights on non-U.S. or non-E.U. airlines
  • Flights on U.K. airlines
  • Seats in economy plus, business class, or first class
  • Change ticket fees of any kind
  • Multi-use bus passes
  • Meals or incidentals
Advance Approval Required
  • Personal car travel to ICERM from outside New England
  • Multiple-destination plane tickets (layovers en route to ICERM do not count as multiple destinations)
  • Arriving at or departing from ICERM more than a day before the program begins or more than a day after it ends
  • Multiple trips to ICERM
  • Rental car to/from ICERM
  • Flights on Swiss, Japanese, or Australian airlines
  • Arriving at or departing from an airport other than PVD/BOS or your home institution's local airport
  • 2 one-way plane tickets to create a roundtrip (often purchased from Expedia, Orbitz, etc.)
Travel Maximum Contributions
  • New England: $350
  • Other contiguous US: $850
  • Asia & Oceania: $2,000
  • All other locations: $1,500
  • Note: these rates were updated in Spring 2023 and supersede any prior invitation rates. Invitations that did not include travel support will still not receive travel support.
Reimbursement Requests

Request Reimbursement with Cube

Refer to the back of your ID badge for more information. Checklists are available at the front desk and in the Reimbursement section of Cube.

Reimbursement Tips
  • Scanned original receipts are required for all expenses
  • Airfare receipt must show full itinerary and payment
  • ICERM does not offer per diem or meal reimbursement
  • Allowable mileage is reimbursed at the prevailing IRS Business Rate, with the trip documented via a PDF of a Google Maps result
  • Keep all documentation until you receive your reimbursement!
Reimbursement Timing

Reimbursements are issued 6-8 weeks after all documentation is received by ICERM. All reimbursement requests are reviewed by several central offices at Brown, which may request additional documentation.

Reimbursement Deadline

Submissions must be received within 30 days of your ICERM departure to avoid applicable taxes; submissions received after 30 days will incur applicable taxes. No submissions are accepted more than six months after the program ends.

Associated Semester Workshops

Topology and Geometry in Neuroscience