Abstract

The goal of this Semester Program is to bring together a variety of mathematicians with researchers working in theoretical and computational neuroscience as well as some theory-friendly experimentalists. However, unlike programs in neuroscience that emphasize connections between theory and experiment, this program will focus on building bridges between theory and mathematics. This is motivated in part by the observation that theoretical developments in neuroscience are often limited not only by lack of data but also by the need to better develop the relevant mathematics. For example, theorists often rely on linear or near-linear modeling frameworks for neural networks simply because the mathematics of nonlinear network dynamics is still poorly understood. Conversely, just as in the history of physics, neuroscience problems give rise to new questions in mathematics. In recent years, these questions have touched on a rich variety of fields including geometry, topology, combinatorics, dynamical systems, and algebra. We believe the time has come to deepen these connections and foster new interactions and collaborations between neuroscientists who think deeply about theory and mathematicians who are looking for new problems inspired by science. In addition to collaborative research between theorists and mathematicians, an explicit goal of the program will be to produce an “open problems” document. This document will present a series of well-formulated open math problems together with explanations of their neuroscience motivation, partial progress, and the potential significance of their solutions.

Image for "Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics"

Confirmed Speakers & Participants

Talks will be presented virtually or in-person as indicated in the schedule below.

  • Arman Afrasiyabi
    Yale University
    Oct 30-Nov 3, 2023
  • Yashar Ahmadian
    Cambridge University
    Sep 18-22, 2023
  • Asohan Amarasingham
    City College, CUNY
    Oct 30-Nov 3, 2023
  • Daniele Avitabile
    Vrije Universiteit Amsterdam
    Sep 20-Dec 15, 2023
  • Huseyin Ayhan
    Florida State University
    Oct 16-20, 2023
  • Demba Ba
    Harvard University
    Sep 18-22, 2023
  • Aishwarya Balwani
    Georgia Institute of Technology
    Oct 16-20, 2023
  • Andrea Barreiro
    Southern Methodist University
    Sep 1-Dec 22, 2023
  • Robin Belton
    Smith College
    Sep 18-22, 2023
  • Marcus Benna
    UC San Diego
    Sep 18-22, 2023
  • Leonid Berlyand
    Penn State
    Oct 5-6, 2023
  • Dhananjay Bhaskar
    Yale University
    Oct 16-20, 2023
  • Ginestra Bianconi
    Queen Mary University of London
    Oct 16-20, 2023
  • Prianka Bose
    New Jersey Institute of Technology
    Sep 18-22, 2023
  • Amitabha Bose
    New Jersey Institute of Technology
    Sep 15-Nov 17, 2023
  • Felipe Branco de Paiva
    University of Wisconsin-Madison
    Oct 16-20, 2023
  • Robyn Brooks
    University of Utah
    Sep 1-Dec 31, 2023
  • Peter Bubenik
    University of Florida
    Sep 18-22, 2023; Oct 16-20, 2023
  • Michael Buice
    Allen Institute
    Sep 18-22, 2023
  • Thomas Burns
    ICERM
    Aug 31-Dec 31, 2023
  • Johnathan Bush
    University of Florida
    Oct 16-20, 2023
  • Carlos Castañeda Castro
    Brown University
    Sep 6-Dec 8, 2023
  • Francesca Cavallini
    Vrije Universiteit Amsterdam
    Oct 16-20, 2023
  • Teressa Chambers
    Brown University
    Oct 30-Nov 3, 2023
  • Rishidev Chaudhuri
    University of California, Davis
    Sep 18-22, 2023
  • Dmitri Chklovskii
    Flatiron Institute & NYU Neuroscience Institute
    Oct 16-20, 2023
  • Hannah Choi
    Georgia Institute of Technology
    Oct 30-Nov 3, 2023
  • Sehun Chun
    Yonsei University
    Sep 18-22, 2023
  • Heather Cihak
    University of Colorado Boulder
    Sep 18-22, 2023
  • Giovanna Citti
    University of Bologna
    Sep 17-Nov 3, 2023
  • Natasha Crepeau
    University of Washington
    Oct 30-Nov 3, 2023
  • Justin Curry
    University at Albany SUNY
    Oct 16-20, 2023
  • Julie Curtis
    University of Washington
    Oct 30-Nov 3, 2023
  • Carina Curto
    The Pennsylvania State University
    Sep 1-Dec 9, 2023
  • Rodica Curtu
    The University of Iowa
    Sep 18-22, 2023; Oct 29-Nov 3, 2023
  • Steve Damelin
    Mathematical Scientist, Ann Arbor MI
    Sep 17-22, 2023; Oct 30-Nov 3, 2023
  • Maria Dascalu
    University of Massachusetts Amherst
    Oct 29-Nov 4, 2023
  • Anda Degeratu
    University of Stuttgart
    Sep 17-Oct 7, 2023
  • Juan Carlos Díaz-Patiño
    Universidad Nacional Autónoma de México
    Oct 16-20, 2023
  • Darcy Diesburg
    Brown University
    Sep 18-22, 2023
  • Fatih Dinc
    Stanford University
    Sep 18-22, 2023
  • Brent Doiron
    University of Chicago
    Sep 17-23, 2023
  • Benjamin Dunn
    Norwegian University of Science and Technology
    Oct 16-20, 2023
  • Julia E Grigsby
    Boston College
    Oct 30-Nov 3, 2023
  • Ahmed El Hady
    Konstanz Center for Advanced Study of Collective Behavior
    Nov 1-30, 2023
  • Sophia Epstein
    University of Texas at Austin
    Oct 16-20, 2023
  • Aysel Erey
    Utah State University
    Oct 30-Nov 3, 2023
  • Sean Escola
    Columbia University
    Sep 18-22, 2023
  • Julio Esparza Ibanez
    Instituto Cajal - CSIC (Spanish National Research Council)
    Oct 16-26, 2023
  • Ashkan Faghiri
    Georgia State University
    Oct 16-20, 2023
  • Matthew Farrell
    Harvard University
    Sep 18-22, 2023
  • Richard Foster
    Virginia Commonwealth University
    Sep 18-22, 2023
  • Michael Frank
    Brown University
    Sep 6-Dec 8, 2023
  • Halley Fritze
    University of Oregon
    Oct 16-20, 2023
  • Marcio Gameiro
    Rutgers University
    Sep 6-Dec 8, 2023
  • Tomas Gedeon
    Montana State University
    Sep 11-Nov 1, 2023
  • Maria Geffen
    University of Pennsylvania
    Nov 1-3, 2023
  • Tim Gentner
    University of California, San Diego
    Oct 30-Nov 3, 2023
  • Juliann Geraci
    University of Nebraska-Lincoln
    Oct 30-Nov 10, 2023
  • Robert Ghrist
    University of Pennsylvania
    Oct 19-20, 2023
  • Chad Giusti
    University of Delaware
    Sep 17-30, 2023; Oct 15-Nov 6, 2023
  • Harold Xavier Gonzalez
    Stanford University
    Sep 6-22, 2023
  • Anna Grim
    Allen Institute
    Oct 16-20, 2023
  • Robert Gütig
    Charité Medical School Berlin
    Oct 16-20, 2023
  • Todd Hagen
    Bernstein Center for Computational Neuroscience
    Oct 16-20, 2023
  • Erik Hermansen
    Norwegian University of Science and Technology
    Oct 16-20, 2023
  • Abigail Hickok
    Columbia University
    Oct 15-20, 2023
  • Christian Hirsch
    Aarhus University
    Oct 16-20, 2023
  • Betty Hong
    California Institute of Technology
    Oct 30-Nov 3, 2023
  • Iris Horng
    University of Pennsylvania
    Oct 15-21, 2023
  • Chengcheng Huang
    University of Pittsburgh
    Sep 18-22, 2023
  • Ching-Peng Huang
    UKE
    Oct 16-20, 2023
  • Vladimir Itskov
    The Pennsylvania State University
    Sep 5-Dec 8, 2023
  • Jonathan Jaquette
    New Jersey Institute of Technology
    Sep 18-22, 2023
  • Yuchen Jiang
    Australian National University
    Oct 16-20, 2023
  • Alvin Jin
    Berkeley
    Oct 15-21, 2023
  • Kresimir Josic
    University of Houston
    Sep 18-22, 2023
  • Shabnam Kadir
    University of Hertfordshire
    Oct 30-Nov 3, 2023
  • Sameer Kailasa
    University of Michigan Ann Arbor
    Sep 5-Dec 9, 2023
  • Lida Kanari
    EPFL/Blue Brain
    Oct 16-20, 2023
  • Gabriella Keszthelyi
    Alfréd Rényi Institute of Mathematics
    Sep 16-22, 2023
  • Roozbeh Kiani
    New York University
    Oct 30-Nov 3, 2023
  • Zachary Kilpatrick
    University of Colorado Boulder
    Sep 18-22, 2023; Oct 30-Nov 3, 2023
  • Soon Ho Kim
    Georgia Institute of Technology
    Sep 18-22, 2023; Oct 30-Nov 3, 2023
  • Hyunjoong Kim
    University of Houston
    Sep 17-23, 2023
  • Christopher Kim
    National Institutes of Health
    Sep 18-22, 2023
  • Kevin Knudson
    University of Florida
    Oct 16-20, 2023
  • Leo Kozachkov
    Massachusetts Institute of Technology
    Sep 18-22, 2023
  • Maxwell Kreider
    Case Western Reserve University
    Sep 6-Dec 8, 2023
  • Kishore Kuchibhotla
    Johns Hopkins University
    Oct 16-20, 2023
  • Ankit Kumar
    UC Berkeley
    Sep 18-22, 2023
  • Alexander Kunin
    Creighton University
    Oct 8-13, 2023
  • Giancarlo La Camera
    Stony Brook University
    Oct 16-20, 2023
  • Kang-Ju Lee
    Seoul National University
    Oct 15-21, 2023
  • Ran Levi
    University of Aberdeen
    Oct 16-20, 2023
  • Noah Lewis
    Georgia Institute of Technology
    Oct 16-20, 2023
  • Zelong Li
    Penn State University
    Sep 5-Dec 9, 2023
  • Yao Li
    University of Massachusetts Amherst
    Sep 6-Dec 8, 2023
  • Johnny Li
    UCSD
    Oct 16-20, 2023
  • Caitlin Lienkaemper
    Boston University
    Sep 15-Nov 4, 2023
  • Kathryn Lindsey
    Boston College
    Sep 6-Dec 8, 2023
  • Justin Lines
    Columbia University
    Oct 30-Nov 3, 2023
  • Vasiliki Liontou
    ICERM
    Sep 6-Dec 8, 2023
  • David Lipshutz
    Flatiron Institute
    Sep 18-22, 2023
  • Sijing Liu
    Brown University
    Sep 1, 2023-May 31, 2024
  • Jessica Liu
    CUNY Graduate Center
    Sep 18-22, 2023
  • Simon Locke
    Johns Hopkins University
    Sep 18-22, 2023
  • Laureline Logiaco
    Massachusetts Institute of Technology
    Sep 18-22, 2023
  • Juliana Londono Alvarez
    Penn State
    Sep 6-Dec 8, 2023
  • Caio Lopes
    École Polytechnique Fédérale de Lausanne
    Oct 16-20, 2023
  • Christian Machens
    Champalimaud Foundation
    Oct 30-Nov 3, 2023
  • James MacLaurin
    New Jersey Institute of Technology
    Sep 18-22, 2023
  • Matilde Marcolli
    California Institute of Technology
    Sep 6-Dec 8, 2023; Sep 18-22, 2023; Oct 16-20, 2023
  • Marissa Masden
    ICERM
    Sep 6, 2023-May 31, 2024
  • Sarah Mason
    Wake Forest University
    Oct 30-Nov 3, 2023
  • Leenoy Meshulam
    University of Washington
    Oct 30-Nov 3, 2023
  • Nikola Milicevic
    Pennsylvania State University
    Sep 1-Dec 10, 2023
  • Federica Milinanni
    KTH - Royal Institute of Technology
    Sep 17-Nov 5, 2023
  • Konstantin Mischaikow
    Rutgers University
    Sep 17-23, 2023; Sep 18-22, 2023; Oct 5-6, 2023
  • Katie Morrison
    University of Northern Colorado
    Sep 1-Dec 10, 2023
  • Noga Mudrik
    The Johns Hopkins University
    Sep 18-22, 2023
  • Audrey Nash
    Florida State University
    Sep 18-22, 2023
  • Matt Nassar
    Brown University
    Sep 6-Dec 8, 2023
  • Junalyn Navarra-Madsen
    Texas Woman's University
    Oct 30-Nov 3, 2023
  • Ilya Nemenman
    Emory University
    Oct 30-Nov 3, 2023
  • Fernando Nobrega Santos
    University of Amsterdam
    Oct 15-21, 2023
  • Gabe Ocker
    Boston University
    Sep 6-Dec 8, 2023
  • Choongseok Park
    NC A&T State University
    Sep 18-22, 2023
  • Ross Parker
    Center for Communications Research – Princeton
    Sep 18-22, 2023; Oct 16-20, 2023
  • Caitlyn Parmelee
    Keene State College
    Sep 5-Dec 9, 2023
  • Alice Patania
    University of Vermont
    Oct 16-20, 2023
  • Cengiz Pehlevan
    Harvard University
    Sep 6-Dec 8, 2023
  • Isabella Penido
    Brown University
    Sep 6-Dec 8, 2023
  • Jose Perea
    Northeastern University
    Sep 6-Dec 8, 2023
  • Giovanni Petri
    CENTAI Institute
    Oct 16-20, 2023
  • Mason Porter
    UCLA
    Sep 18-22, 2023
  • Rebecca R.G.
    George Mason University
    Oct 29-Nov 3, 2023
  • Allan Raventos
    Stanford University
    Sep 18-22, 2023
  • Niloufar Razmi
    Brown University
    Sep 6-Dec 8, 2023
  • Alex Reyes
    New York University
    Oct 16-20, 2023
  • Antonio Rieser
    Centro de Investigación en Matemáticas
    Sep 5-Dec 9, 2023
  • Dmitry Rinberg
    New York University
    Oct 16-20, 2023
  • Dario Ringach
    University of California, Los Angeles
    Oct 16-20, 2023
  • Jason Ritt
    Brown University
    Sep 6-Dec 8, 2023
  • Robert Rosenbaum
    University of Notre Dame
    Sep 18-22, 2023
  • Horacio Rotstein
    New Jersey Institute of Technology
    Sep 5-Nov 3, 2023
  • Jennifer Rozenblit
    University of Texas, Austin
    Oct 16-20, 2023
  • Safaan Sadiq
    Pennsylvania State University
    Sep 5-Dec 9, 2023
  • Nicole Sanderson
    Penn State University
    Sep 1-Dec 31, 2023
  • Hannah Santa Cruz
    Penn State
    Sep 5-Dec 9, 2023
  • Alessandro Sarti
    National Center of Scientific Research, EHESS, Paris
    Oct 13-22, 2023
  • Cristina Savin
    NYU
    Oct 30-Nov 3, 2023
  • Elad Schneidman
    Weizmann Institute of Science
    Oct 30-Nov 4, 2023
  • Nikolas Schonsheck
    University of Delaware
    Oct 15-Nov 4, 2023
  • David Schwab
    City University of New York
    Sep 5-Dec 9, 2023
  • Daniel Scott
    Brown University
    Sep 6-Dec 8, 2023
  • Thomas Serre
    Brown University
    Sep 6-Dec 8, 2023
  • Tatyana Sharpee
    Salk Institute
    Oct 16-20, 2023; Oct 30-Nov 3, 2023
  • Sage Shaw
    University of Colorado Boulder
    Sep 18-22, 2023
  • Nimrod Sherf
    University of Houston
    Sep 18-22, 2023
  • Patrick Shipman
    Colorado State University
    Oct 17-20, 2023
  • Farshad Shirani
    Georgia Institute of Technology
    Sep 18-22, 2023
  • Michael Shub
    The City College of New York
    Sep 18-22, 2023
  • Paramjeet Singh
    Thapar Institute of Engineering & Technology
    Sep 18-22, 2023
  • Bernadette Stolz
    EPFL
    Oct 16-20, 2023
  • Thibaud Taillefumier
    UT Austin
    Oct 30-Nov 3, 2023
  • Evelyn Tang
    Rice University
    Oct 16-20, 2023
  • Gaia Tavoni
    Washington University in St. Louis
    Oct 30-Nov 3, 2023
  • Dane Taylor
    University of Wyoming
    Sep 17-22, 2023; Oct 15-20, 2023
  • Peter Thomas
    Case Western Reserve University
    Sep 5-Dec 9, 2023
  • Tobias Timofeyev
    University of Vermont
    Sep 18-22, 2023; Oct 16-20, 2023
  • Nicholas Tolley
    Brown University
    Sep 6-Dec 8, 2023
  • Magnus Tournoy
    Flatiron Institute
    Sep 17-Oct 21, 2023
  • Taro Toyoizumi
    Riken Center for Brain Science
    Oct 30-Nov 3, 2023
  • Wilson Truccolo
    Brown University
    Sep 6-Dec 8, 2023
  • Ka Nap Tse
    University of Pittsburgh
    Sep 10-Dec 9, 2023
  • Yuki Tsukada
    Keio University
    Oct 30-Nov 3, 2023
  • Junyi Tu
    Salisbury University
    Oct 16-20, 2023
  • Srinivas Turaga
    HHMI - Janelia Research Campus
    Oct 16-20, 2023
  • Melvin Vaupel
    Norwegian University of Science and Technology
    Oct 16-20, 2023
  • Jonathan Victor
    Weill Cornell Medical College
    Oct 16-20, 2023
  • Elizabeth Vidaurre
    Molloy University
    Oct 16-20, 2023
  • Bradley Vigil
    Texas Tech University
    Oct 16-20, 2023
  • Juan Pablo Vigneaux
    Caltech
    Oct 30-Nov 3, 2023
  • Zhengchao Wan
    University of California San Diego
    Oct 16-20, 2023
  • Xinyi Wang
    Michigan State University
    Sep 10-Oct 27, 2023
  • Bin Wang
    University of California, San Diego
    Sep 6-Dec 22, 2023
  • Yangyang Wang
    Brandeis University
    Sep 18-22, 2023
  • Zhuo-Cheng Xiao
    New York University
    Sep 18-22, 2023; Oct 16-20, 2023
  • Iris Yoon
    Wesleyan University
    Sep 6-Dec 8, 2023
  • Ryeongkyung Yoon
    University of Houston
    Sep 18-22, 2023
  • Kei Yoshida
    Brown University
    Sep 18-22, 2023
  • Kisung You
    City University of New York
    Oct 16-20, 2023
  • Lai-Sang Young
    Courant Institute
    Sep 18-22, 2023
  • Nora Youngs
    Colby College
    Sep 5-Dec 10, 2023
  • Zhuojun Yu
    Case Western Reserve University
    Sep 5-Dec 9, 2023
  • Gexin Yu
    College of William and Mary
    Sep 17-23, 2023
  • Wenhao Zhang
    UT Southwestern Medical Center
    Sep 18-22, 2023; Oct 16-20, 2023
  • Ling Zhou
    ICERM
    Sep 6-Dec 8, 2023
  • Robert Zielinski
    Brown University
    Sep 6-Dec 8, 2023

Visit dates listed on the participant list may be tentative and subject to change without notice.

Semester Schedule

Wednesday, September 6, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 am - 3:00 pm EDT
    Check In
    11th Floor Collaborative Space
  • 10:00 - 11:00 am EDT
    Organizer/Directorate Meeting
    Meeting - 11th Floor Conference Room
  • 4:00 - 5:00 pm EDT
    Informal Coffee/Tea Welcome
    Coffee Break - 11th Floor Collaborative Space
Thursday, September 7, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 9:30 am EDT
    ICERM Welcome
    Welcome - 11th Floor Lecture Hall
  • 9:30 - 11:30 am EDT
    Organizer Welcome and Introductions
    Opening Remarks - 11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Friday, September 8, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 11:00 am EDT
    Grad Student/Postdoc Meeting with ICERM Directorate
    Meeting - 11th Floor Lecture Hall
  • 12:00 - 2:00 pm EDT
    Planning Lunch
    Working Lunch - 11th Floor Collaborative Space
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, September 11, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 11:30 am EDT
    Journal Club & Neuro 101 Planning
    Meeting - 11th Floor Lecture Hall
  • 1:45 - 1:50 pm EDT
    Xavier Gonzalez Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Harold Xavier Gonzalez, Stanford University
  • 1:50 - 1:55 pm EDT
    Maxwell Kreider Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Maxwell Kreider, Case Western Reserve University
  • 1:55 - 2:00 pm EDT
    Juliana Londono Alvarez Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Juliana Londono Alvarez, Penn State
  • 2:00 - 2:05 pm EDT
    Safaan Sadiq Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Safaan Sadiq, Pennsylvania State University
  • 2:05 - 2:10 pm EDT
    Hannah Santa Cruz Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Hannah Santa Cruz, Penn State
  • 2:10 - 2:15 pm EDT
    Nicholas Tolley Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Nicholas Tolley, Brown University
  • 2:15 - 2:20 pm EDT
    Ka Nap Tse Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Ka Nap Tse, University of Pittsburgh
  • 2:20 - 2:25 pm EDT
    Bin Wang Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Bin Wang, University of California, San Diego
  • 2:25 - 2:30 pm EDT
    Zhuojun Yu Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Zhuojun Yu, Case Western Reserve University
  • 2:30 - 2:35 pm EDT
    Robert Zielinski Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Robert Zielinski, Brown University
  • 2:35 - 2:40 pm EDT
    Zelong Li Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Zelong Li, Penn State University
  • 2:40 - 2:45 pm EDT
    Sameer Kailasa Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Sameer Kailasa, University of Michigan Ann Arbor
  • 2:45 - 2:50 pm EDT
    Elena Wang Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Xinyi Wang, Michigan State University
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 3:40 pm EDT
    Robyn Brooks Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Robyn Brooks, University of Utah
  • 3:40 - 3:50 pm EDT
    Thomas Burns Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Thomas Burns, ICERM
  • 3:50 - 4:00 pm EDT
    Caitlin Lienkaemper Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Caitlin Lienkaemper, Boston University
  • 4:00 - 4:10 pm EDT
    Vasiliki Liontou Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Vasiliki Liontou, ICERM
  • 4:10 - 4:20 pm EDT
    Sijing Liu Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Sijing Liu, Brown University
  • 4:20 - 4:30 pm EDT
    Marissa Masden Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Marissa Masden, ICERM
  • 4:30 - 4:40 pm EDT
    Nikola Milicevic Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Nikola Milicevic, Pennsylvania State University
  • 4:40 - 4:50 pm EDT
    Nicole Sanderson Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Nicole Sanderson, Penn State University
  • 4:50 - 5:00 pm EDT
    Ling Zhou Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Ling Zhou, ICERM
  • 5:00 - 6:30 pm EDT
    Welcome Reception
    Reception - 11th Floor Collaborative Space
Tuesday, September 12, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:30 am - 12:00 pm EDT
    TDA 101
    Tutorial - 11th Floor Lecture Hall
    • Nicole Sanderson, Penn State University
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Wednesday, September 13, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:30 am - 12:00 pm EDT
    Network Dynamics & Modeling
    Tutorial - 11th Floor Lecture Hall
    • Horacio Rotstein, New Jersey Institute of Technology
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:30 pm EDT
    Closed-loop neuromechanical motor control models (or) On the importance of taking the body into account when modeling neuronal dynamics.
    11th Floor Lecture Hall
    • Peter Thomas, Case Western Reserve University
    Abstract
    The central nervous system is strongly coupled to the body. Through peripheral receptors and effectors, it is also coupled to the constantly changing outside world. A chief function of the brain is to close the loop between sensory inputs and motor output. It is through the brain's effectiveness as a control mechanism for the body, embedded in the external world, that it facilitates long-term survival. Thus to understand brain circuits (one might argue) one must also understand their behavioral and ecological context. However, studying closed-loop brain-body interactions is challenging experimentally, conceptually, and mathematically. In order to make progress, we focus on systems that generate rhythmic behaviors in order to accomplish a quantifiable goal, such as maintaining different forms of homeostasis. Time permitting, I'll mention two such systems, 1. control of feeding motions in the marine mollusk Aplysia californica, and 2. rhythm generation and control in the mammalian breathing system. In both of these systems, we propose that robustness in the face of variable metabolic or external demands arises from the interplay of multiple layers of control involving biomechanics, central neural dynamics, and sensory feedback.
Thursday, September 14, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Network Dynamics & Modeling (Part 2)
    Tutorial - 11th Floor Lecture Hall
    • Horacio Rotstein, New Jersey Institute of Technology
  • 12:00 - 1:30 pm EDT
    Open Problems for TLNs (Bring Your Own Lunch)
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Carina Curto, The Pennsylvania State University
    • Katie Morrison, University of Northern Colorado
  • 2:00 - 2:30 pm EDT
    TDA software
    Tutorial - 11th Floor Lecture Hall
    • Nicole Sanderson, Penn State University
  • 3:00 - 3:30 pm EDT
    Coffee Break / Neuro 101
    Coffee Break - 11th Floor Collaborative Space
Friday, September 15, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:30 - 10:30 am EDT
    Journal Club
    11th Floor Lecture Hall
    • Moderators
    • Harold Xavier Gonzalez, Stanford University
    • Sameer Kailasa, University of Michigan Ann Arbor
  • 11:00 - 11:30 am EDT
    Mathematical Challenges in Neuronal Network Dynamics
    Post Doc/Graduate Student Seminar - 11th Floor Lecture Hall
    • Marissa Masden, ICERM
    Abstract
    I will introduce a straightforward construction of the canonical polyhedral complex given by the activation patterns of a ReLU neural network. Then, I will describe how labeling the vertices of this polyhedral complex with sign vectors is (almost always) enough information to generate a cellular (co)chain complex labeling all of the polyhedral cells, and how this allows us to extract information about the decision boundary of the network.
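    The construction above is the speaker's; purely as an illustrative toy (the random weights, layer sizes, and sample grid below are arbitrary assumptions, not part of the talk), the following Python sketch enumerates the sign vectors realized by a small ReLU layer, i.e., a sampled view of the cells of the induced polyhedral decomposition and of the decision boundary:
    import numpy as np
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # hidden layer: R^2 -> R^8
    W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)   # output layer: R^8 -> R
    # Sample a grid over a patch of input space and record each point's sign vector
    # (the pattern of active/inactive ReLU units), which labels its activation region.
    xs = np.linspace(-3.0, 3.0, 400)
    grid = np.array(np.meshgrid(xs, xs)).reshape(2, -1).T
    pre = grid @ W1.T + b1
    signs = (pre > 0).astype(np.int8)
    regions = {tuple(s) for s in signs}
    print(f"{len(regions)} distinct activation regions hit by the sample grid")
    # The decision boundary lies where the network output changes sign; sign changes
    # between consecutive grid points give a crude, sample-based proxy for it.
    out = np.maximum(pre, 0.0) @ W2.T + b2
    print("output sign changes along the flattened grid:",
          int(np.count_nonzero(np.diff(np.sign(out[:, 0])))))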
  • 11:30 am - 12:00 pm EDT
    Detecting danger in gridworlds using Gromov's Link Condition
    Post Doc/Graduate Student Seminar - 11th Floor Lecture Hall
    • Thomas Burns, ICERM
    Abstract
    Gridworlds have been long-utilised in AI research, particularly in reinforcement learning, as they provide simple yet scalable models for many real-world applications such as robot navigation, emergent behaviour, and operations research. We initiate a study of gridworlds using the mathematical framework of reconfigurable systems and state complexes due to Abrams, Ghrist & Peterson. State complexes represent all possible configurations of a system as a single geometric space, thus making them conducive to study using geometric, topological, or combinatorial methods. The main contribution of this work is a modification to the original Abrams, Ghrist & Peterson setup which we introduce to capture agent braiding and thereby more naturally represent the topology of gridworlds. With this modification, the state complexes may exhibit geometric defects (failure of Gromov's Link Condition). Serendipitously, we discover these failures occur exactly where undesirable or dangerous states appear in the gridworld. Our results therefore provide a novel method for seeking guaranteed safety limitations in discrete task environments with single or multiple agents and offer useful safety information (in geometric and topological forms) for incorporation in or analysis of machine learning systems. More broadly, our work introduces tools from geometric group theory and combinatorics to the AI community and demonstrates a proof-of-concept for this geometric viewpoint of the task domain through the example of simple gridworld environments.
  • 1:30 - 3:00 pm EDT
    Topology + Neuroscience Working Groups
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, September 18, 2023
  • 8:50 - 9:00 am EDT
    Welcome
    11th Floor Lecture Hall
    • Session Chair
    • Brendan Hassett, ICERM/Brown University
  • 9:00 - 9:45 am EDT
    Neural dynamics on sparse networks—pruning, error correction, and signal reconstruction
    11th Floor Lecture Hall
    • Speaker
    • Rishidev Chaudhuri, University of California, Davis
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Many networks in the brain are sparsely connected, and the brain eliminates connections during development and learning. This talk will focus on questions related to computation and dynamics on these sparse networks. We will first focus on pruning redundant network connections while preserving dynamics and function. In a recurrent network, determining the importance of a connection between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. We suggest that noise could instead play a functional role in pruning, allowing the brain to probe network structure and determine which connections are redundant. We construct a simple, local, unsupervised rule that either strengthens or prunes synapses using only connection weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, we adapt matrix concentration of measure arguments from the field of graph sparsification to prove that this rule preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned connections asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, we will then discuss the application of sparse expander graphs to modeling dynamics on neural networks. Expander graphs combine the seemingly contradictory properties of being sparse and well-connected. Among other remarkable properties, they allow efficient communication, credit assignment and error correction with simple greedy dynamical rules. We suggest that these applications might provide new ways of thinking about neural dynamics, and provide several proofs of principle.
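    As a numerical caricature only (the scoring rule below, |W_ij| weighted by the pre- and postsynaptic noise-driven variances, is an assumed stand-in and not the talk's rule or its concentration-of-measure guarantee), one can probe the idea of noise-driven pruning in a small linear network in Python:
    import numpy as np
    rng = np.random.default_rng(1)
    n = 60
    W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))        # make the linear dynamics stable
    # Drive the network with white noise and record the noise-driven covariance.
    T, x = 20000, np.zeros(n)
    X = np.empty((T, n))
    for t in range(T):
        x = W @ x + rng.normal(scale=0.1, size=n)
        X[t] = x
    C = np.cov(X.T)
    # Local, illustrative importance score for synapse j -> i; prune the weakest half.
    score = np.abs(W) * np.sqrt(np.outer(np.diag(C), np.diag(C)))
    W_pruned = np.where(score >= np.median(score), W, 0.0)
    print("spectral radius before/after pruning:",
          np.max(np.abs(np.linalg.eigvals(W))), np.max(np.abs(np.linalg.eigvals(W_pruned))))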
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    Local breakdown of the balance of excitation and inhibition accounts for divisive normalization
    11th Floor Lecture Hall
    • Speaker
    • Yashar Ahmadian, Cambridge University
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Excitatory and inhibitory (E & I) inputs to cortical neurons remain balanced across different conditions. This is captured in the balanced network model in which neural populations dynamically adjust their rates to yield tightly balanced E and I inputs and a state in which all neurons are active at levels observed in cortex. But global tight E-I balance predicts linear stimulus dependence for population responses, and does not account for systematic cortical response nonlinearities such as divisive normalization, a canonical brain computation. However, when necessary connectivity conditions for global balance fail, states arise in which a subset of neurons are inhibition-dominated and inactive. Here, we show analytically that the emergence of such localized balance states robustly leads to normalization, including sublinear integration and winner-take-all behavior. An alternative model that exhibits normalization is the Stabilized Supralinear Network (SSN), in which the E-I balance is generically loose, but becomes tight asymptotically for strong inputs. However, an understanding of the causal relationship between E-I balance and normalization in the SSN is lacking. Here we show that when tight E-I balance in the asymptotic, strongly driven regime of the SSN is not global, the network does not exhibit normalization at any input strength; thus, in the SSN too, significant normalization requires the breakdown of global balance. In summary, we causally and quantitatively connect a fundamental feature of cortical dynamics with a canonical brain computation.
  • 11:15 - 11:45 am EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Carina Curto, The Pennsylvania State University
    • Katie Morrison, University of Northern Colorado
  • 11:45 am - 1:30 pm EDT
    Lunch/Free Time
  • 1:30 - 2:15 pm EDT
    Discovering dynamical patterns of activity from single-trial neural data
    11th Floor Lecture Hall
    • Speaker
    • Rodica Curtu, The University of Iowa
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    In this talk I will discuss a data-driven method that leverages time-delayed coordinates, diffusion maps, and dynamic mode decomposition, to identify neural features in large scale brain recordings that correlate with subject-reported perception. The method captures the dynamics of perception at multiple timescales and distinguishes attributes of neural encoding of the stimulus from those encoding the perceptual states. Our analysis reveals a set of latent variables that exhibit alternating dynamics along a low-dimensional manifold, like trajectories of attractor-based models. I will conclude by proposing a phase-amplitude-coupling-based model that illustrates the dynamics of data.
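    A stripped-down sketch of two of the ingredients named above, time-delayed coordinates followed by exact dynamic mode decomposition, applied to a synthetic signal (the toy data and truncation rank are assumptions; the talk's pipeline also uses diffusion maps and is applied to large-scale brain recordings):
    import numpy as np
    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 40.0, 4000)
    signal = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 0.4 * t)
    signal += 0.05 * rng.normal(size=t.size)
    # Hankel (time-delay) matrix: each column is a short window of the signal.
    d = 50
    H = np.stack([signal[i:i + d] for i in range(signal.size - d)], axis=1)
    # Exact DMD in the delay coordinates: fit X' ~ A X through a truncated SVD of X.
    X, Xp = H[:, :-1], H[:, 1:]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = 10
    A_tilde = U[:, :r].T @ Xp @ Vt[:r].T @ np.diag(1.0 / s[:r])
    eigvals = np.linalg.eigvals(A_tilde)
    dt = t[1] - t[0]
    freqs = np.abs(np.angle(eigvals)) / (2 * np.pi * dt)
    print("recovered DMD frequencies (Hz):", np.round(np.sort(freqs), 2))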
  • 2:30 - 2:35 pm EDT
    Synaptic mechanisms for resisting distractors in neural fields
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Heather Cihak, University of Colorado Boulder
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Persistent neural activity has been observed in the non-human primate cortex during delayed estimation tasks. Organizing activity according to cells' feature preferences reveals "bumps" that represent analog variables during the delay. Continuum neural field models support bump attractors whose stochastic dynamics can be linked to response statistics (estimate bias and error). Models often ignore the distinct dynamics of bumps in both excitatory/inhibitory population activity, but recent neural and behavioral recordings suggest both play a role in delayed estimate codes and responses. In past work, we developed new methods in asymptotic and multiscale analyses for stochastic and spatiotemporal systems to understand how network architecture determines bump dynamics in networks with distinct E/I populations and short term plasticity. The inhibitory bump dynamics as well as facilitation and diffusion impact the stability and wandering motion of the excitatory bump. Our current work moves beyond studying ensemble statistics like variance to examine potential mechanisms underlying the robustness of working memory to distractors (irrelevant information) presented during the maintenance period, wherein the relative timescales of the E/I populations, synaptic vs. activity dynamics, as well as short term plasticity may play an important role.
  • 2:35 - 2:40 pm EDT
    Convex optimization of recurrent neural networks for rapid inference of neural dynamics
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Fatih Dinc, Stanford University
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Advances in optical and electrophysiological recording technologies have made it possible to record the dynamics of thousands of neurons, opening up new possibilities for interpreting and controlling large neural populations. A promising way to extract computational principles from these large datasets is to train data-constrained recurrent neural networks (dRNNs). However, existing training algorithms for dRNNs are inefficient and have limited scalability, making it a challenge to analyze large neural recordings even in offline scenarios. To address these issues, we introduce a training method termed Convex Optimization of Recurrent Neural Networks (CORNN). In studies of simulated recordings of hundreds of cells, CORNN attained training speeds ~ 100-fold faster than traditional optimization approaches while maintaining or enhancing modeling accuracy. We further validated CORNN on simulations with thousands of cells that performed simple computations such as those of a 3-bit flip-flop or the execution of a timed response. Finally, we showed that CORNN can robustly reproduce network dynamics and underlying attractor structures despite mismatches between generator and inference models, severe subsampling of observed neurons, or mismatches in neural time-scales. Overall, by training dRNNs with millions of parameters in subminute processing times on a standard computer, CORNN constitutes a first step towards real-time network reproduction constrained on large-scale neural recordings and a powerful computational tool for advancing the understanding of neural computation. My talk focuses on how dRNNs enabled by CORNN can help us reverse engineer the neural code in the mammalian brain.
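    CORNN itself is the speaker's algorithm; purely to illustrate the general idea of fitting a data-constrained recurrent model through convex per-neuron regressions, here is a toy ridge-regression sketch under an assumed model form (rate dynamics r_{t+1} = tanh(W r_t) + noise, fit in inverse-nonlinearity coordinates):
    import numpy as np
    rng = np.random.default_rng(3)
    n, T = 40, 3000
    W_true = rng.normal(scale=1.2 / np.sqrt(n), size=(n, n))
    # Simulate the assumed rate dynamics to generate "recorded" activity.
    R = np.empty((T, n))
    R[0] = rng.normal(scale=0.1, size=n)
    for t in range(T - 1):
        R[t + 1] = np.tanh(W_true @ R[t]) + 0.01 * rng.normal(size=n)
    # Convex surrogate: arctanh(r_{t+1}) ~ W r_t, one ridge regression per neuron
    # (solved here for all neurons at once via the normal equations).
    X = R[:-1]
    Y = np.arctanh(np.clip(R[1:], -0.999, 0.999))
    lam = 1e-3
    W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ Y).T
    err = np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true)
    print(f"relative weight-recovery error: {err:.3f}")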
  • 2:40 - 2:45 pm EDT
    Recall tempo of Hebbian sequences depends on the interplay of Hebbian kernel with tutor signal timing
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Matthew Farrell, Harvard University
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Understanding how neural circuits generate sequential activity is a longstanding challenge. While foundational theoretical models have shown how sequences can be stored as memories with Hebbian plasticity rules, these models considered only a narrow range of Hebbian rules. In this talk I introduce a model for arbitrary Hebbian plasticity rules, capturing the diversity of spike-timing-dependent synaptic plasticity seen in experiments, and show how the choice of these rules and of neural activity patterns influences sequence memory formation and retrieval. In particular, I will present a general theory that predicts the speed of sequence replay. This theory lays a foundation for explaining how cortical tutor signals might give rise to motor actions that eventually become "automatic". This theory also captures the impact of changing the speed of the tutor signal. Beyond shedding light on biological circuits, this theory has relevance in artificial intelligence by laying a foundation for frameworks whereby slow and computationally expensive deliberation can be stored as memories and eventually replaced by inexpensive recall.
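    Schematically (notation assumed here for orientation, not taken from the talk), a Hebbian rule with an arbitrary temporal kernel K applied to a tutor activity pattern r(t) stores weight changes of the form
    \Delta W_{ij} \;\propto\; \int\!\!\int K(t - t')\, r_i(t)\, r_j(t')\, dt\, dt',
    so an asymmetric kernel driven by a sequentially ordered tutor signal stores asymmetric connections, and the interplay between the kernel's temporal shape and the tutor's speed sets the learned asymmetry and hence the recall tempo.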
  • 2:45 - 2:50 pm EDT
    Modeling human temporal EEG responses subject to VR visual stimuli
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Richard Foster, Virginia Commonwealth University
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    When subject to visual stimuli flashing at a constant temporal frequency, it is well-known that the EEG response has a sharp peak in the power spectrum at the driving frequency. But the EEG response to random-frequency stimuli, and the corresponding biophysical mechanisms, are largely unknown. We present a phenomenological model framework in hopes of eventually capturing these EEG responses and unveiling the biophysical mechanisms. Based on observed heterogeneous temporal frequency selectivity curves in V1 cells (Hawken et al. '96, Camillo et al. '20, Priebe et al. '06), we endow individual units with these response properties. Preliminary simulation results show that particular temporal frequency selectivity curves can be more indicative of the EEG response. Future directions include the construction of network architecture with interacting units to faithfully model the EEG response.
  • 2:50 - 2:55 pm EDT
    RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Leo Kozachkov, Massachusetts Institute of Technology
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Recurrent neural networks (RNNs) are widely used throughout neuroscience as models of local neural activity. Many properties of single RNNs are well characterized theoretically, but experimental neuroscience has moved in the direction of studying multiple interacting areas, and RNN theory needs to be likewise extended. We take a constructive approach towards this problem, leveraging tools from nonlinear control theory and machine learning to characterize when combinations of stable RNNs will themselves be stable. Importantly, we derive conditions which allow for massive feedback connections between interacting RNNs. We parameterize these conditions for easy optimization using gradient-based techniques, and show that stability-constrained "networks of networks" can perform well on challenging sequential-processing benchmark tasks. Altogether, our results provide a principled approach towards understanding distributed, modular function in the brain.
  • 3:15 - 3:45 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:45 - 4:30 pm EDT
    Universal Properties of Strongly Coupled Recurrent Networks
    11th Floor Lecture Hall
    • Speaker
    • Robert Rosenbaum, University of Notre Dame
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Balanced excitation and inhibition is widely observed in cortex. How does this balance shape neural computations and stimulus representations? This question is often studied using computational models of neuronal networks in a dynamically balanced state. But balanced network models predict a linear relationship between stimuli and population responses. So how do cortical circuits implement nonlinear representations and computations? We show that every balanced network architecture admits stimuli that break the balanced state and these breaks in balance push the network into a “semi-balanced state” characterized by excess inhibition to some neurons, but an absence of excess excitation. The semi-balanced state produces nonlinear stimulus representations and nonlinear computations, is unavoidable in networks driven by multiple stimuli, is consistent with cortical recordings, and has a direct mathematical relationship to artificial neural networks.
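    For orientation (standard mean-field notation, summarized here rather than taken verbatim from the talk), the classical balanced state requires the leading-order input to every population to cancel,
    W r + X = 0 \quad\Longrightarrow\quad r = -W^{-1} X,
    which is linear in the stimulus X. The semi-balanced state described above relaxes this to the complementarity conditions
    W r + X \le 0, \qquad r \ge 0, \qquad (W r + X)_i = 0 \ \text{whenever}\ r_i > 0,
    so excess inhibition to some neurons is allowed but excess excitation is not, and the map from X to r becomes piecewise (threshold-) linear, which is the source of the nonlinear stimulus representations.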
  • 4:30 - 6:00 pm EDT
    Reception
    11th Floor Collaborative Space
Tuesday, September 19, 2023
  • 9:00 - 9:45 am EDT
    Multilayer Networks in Neuroscience
    11th Floor Lecture Hall
    • Speaker
    • Mason Porter, UCLA
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    I will discuss multilayer networks in neuroscience. I will introduce the idea of multilayer networks and discuss some uses of multilayer networks in neuroscience. I will present some interesting challenges.
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    State modulation in spatial networks of multiple interneuron subtypes
    11th Floor Lecture Hall
    • Speaker
    • Chengcheng Huang, University of Pittsburgh
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    Neuronal responses to sensory stimuli can be strongly modulated by an animal's brain state. Three distinct subtypes of inhibitory interneurons, parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal peptide (VIP) expressing cells, have been identified as key players in flexibly modulating network activity. The three interneuron populations have specialized local microcircuit motifs and are targeted differentially by neuromodulators and top-down inputs from higher-order cortical areas. In this work, we systematically study the function of each interneuron cell type in modulating network dynamics in a spatially ordered spiking neuron network. We analyze the changes in firing rates and network synchrony as we apply static current to each cell population. We find that the modulation pattern produced by activating E or PV cells is distinct from that produced by activating SOM or VIP cells. In particular, we identify SOM cells as the main driver of network synchrony.
  • 11:15 - 11:45 am EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Brent Doiron, University of Chicago
    • Zachary Kilpatrick, University of Colorado Boulder
  • 11:50 am - 12:00 pm EDT
    Group Photo (Immediately After Talk)
    11th Floor Lecture Hall
  • 12:00 - 1:30 pm EDT
    Working Lunch
    11th Floor Collaborative Space
  • 1:30 - 2:15 pm EDT
    Plasticity in balanced neuronal networks
    11th Floor Lecture Hall
    • Speaker
    • Kresimir Josic, University of Houston
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    I will first describe how to extend the theory of balanced networks to account for synaptic plasticity. This theory can be used to show when a plastic network will maintain balance, and when it will be driven into an unbalanced state. I will next discuss how this approach provides evidence for a novel form of rapid compensatory inhibitory plasticity using experimental evidence obtained using optogenetic activation of excitatory neurons in primate visual cortex (area V1). The theory explains how such activation induces a population-wide dynamic reduction in the strength of neuronal interactions over the timescale of minutes during the awake state, but not during rest. I will shift gears in the final part of the talk, and discuss how community detection algorithms can help uncover the large scale organization of neuronal networks from connectome data, using the Drosophila hemibrain dataset as an example.
  • 2:35 - 2:40 pm EDT
    Q-Phase reduction of multi-dimensional stochastic Ornstein-Uhlenbeck process networks
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Maxwell Kreider, Case Western Reserve University
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    Phase reduction is an effective tool to study the network dynamics of deterministic limit-cycle oscillators. The recent introduction of stochastic phase concepts allows us to extend these tools to stochastic oscillators; of particular utility is the asymptotic stochastic phase, derived from the eigenfunction decomposition of the system's probability density. Here, we study networks of coupled oscillatory two-dimensional Ornstein-Uhlenbeck processes (OUPs) with complex eigenvalues. We characterize system dynamics by providing an exact expression for the asymptotic stochastic phase for OUP networks of any dimension and arbitrary coupling structure. Furthermore, we introduce an order parameter quantifying the synchrony of networks of stochastic oscillators, and apply it to our OUP model. We argue that the OUP network provides a new, analytically tractable approach to analysis of large scale electrophysiological recordings.
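    For orientation (this is the standard definition from the stochastic-phase literature, not the talk's new OUP expression), the asymptotic stochastic phase is built from the slowest-decaying oscillatory eigenpair of the system's backward operator:
    \mathcal{L}^{*} Q_1 = (\mu_1 + i\omega_1)\, Q_1, \qquad \psi(x) := \arg Q_1(x),
    so that E[\,Q_1(X_t)\mid X_0\,] = e^{(\mu_1 + i\omega_1)t}\, Q_1(X_0) decays while rotating at mean rate \omega_1, and \psi plays the role of a phase variable for the stochastic oscillator.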
  • 2:40 - 2:45 pm EDT
    Feedback Controllability as a Normative Theory of Neural Dynamics
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Ankit Kumar, UC Berkeley
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    Brain computations emerge from the collective dynamics of distributed neural populations. Behaviors including reaching and speech are explained by principles of optimal feedback control, yet if and how this normative description shapes neural population dynamics is unknown. We created dimensionality reduction methods that identify subspaces of dynamics that are most feedforward controllable (FFC) vs. feedback controllable (FBC). We show that FBC and FFC subspaces diverge for dynamics generated by non-normal connectivity. In neural recordings from monkey M1 and S1 during reaching, FBC subspaces are better decoders of reach velocity, particularly during reach acceleration, and FBC provides a first-principles account of the observation of rotational dynamics. Overall, our results demonstrate that feedback controllability is a novel, normative theory of neural population dynamics and reveal how the structure of high-dimensional dynamical systems shapes their ability to be controlled.
  • 2:45 - 2:50 pm EDT
    Adaptive whitening with fast gain modulation and slow synaptic plasticity
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • David Lipshutz, Flatiron Institute
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    Neurons in early sensory areas rapidly adapt to changing sensory statistics, both by normalizing the variance of their individual responses and by reducing correlations between their responses. Together, these transformations may be viewed as an adaptive form of statistical whitening. In this talk, I will present a normative multi-timescale mechanistic model of adaptive whitening with complementary computational roles for gain modulation and synaptic plasticity. Gains are modified on a fast timescale to adapt to the current statistical context, whereas synapses are modified on a slow timescale to learn structural properties of the input statistics that are invariant across contexts.
  • 2:50 - 2:55 pm EDT
    The combinatorial code and the graph rules of Dale networks
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Nikola Milicevic, Pennsylvania State University
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    We describe the combinatorics of equilibria and steady states of neurons in threshold-linear networks that satisfy Dale's law. The combinatorial code of a Dale network is characterized in terms of two conditions: (i) a condition on the network connectivity graph, and (ii) a spectral condition on the synaptic matrix. We find that in the weak coupling regime the combinatorial code depends only on the connectivity graph, and not on the particulars of the synaptic strengths. Moreover, we prove that the combinatorial code of a weakly coupled network is a sublattice, and we provide a learning rule for encoding a sublattice in a weakly coupled excitatory network. In the strong coupling regime we prove that the combinatorial code of a generic Dale network is intersection-complete and is therefore a convex code, as is common in some sensory systems in the brain.
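    For context, the underlying dynamics are those of a standard threshold-linear network (usual notation assumed),
    \frac{dx_i}{dt} = -x_i + \Big[\sum_{j} W_{ij} x_j + b_i\Big]_+, \qquad [y]_+ := \max(y, 0),
    with Dale's law imposed as a sign constraint: every column of W (each neuron's outgoing synapses) carries a single sign. The combinatorial code referred to above records which supports (sets of simultaneously active neurons) occur among the network's steady states.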
  • 2:55 - 3:00 pm EDT
    Decomposed Linear Dynamical Systems for Studying Inter and Intra-Region Neural Dynamics
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Noga Mudrik, The Johns Hopkins University
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    Understanding the intricate relationship between recorded neural activity and behavior is a pivotal pursuit in neuroscience. However, existing models frequently overlook the non-linear and non-stationary behavior evident in neural data, opting instead to center their focus on simplified projections or overt dynamical systems. We introduce a Decomposed Linear Dynamical Systems (dLDS) approach to capture these complex dynamics by representing them as a sparse time-varying linear combination of interpretable linear dynamical components. dLDS is trained using an expectation maximization procedure where the obscured dynamical components are iteratively inferred using dictionary learning. This approach enables the identification of overlapping circuits, while the sparsity applied during the training maintains the model interpretability. We demonstrate that dLDS successfully recovers the underlying linear components and their time-varying coefficients in both synthetic and neural data examples, and show that it can learn efficient representations of complex data. By leveraging the rich data from the International Brain Laboratory’s Brain Wide Map dataset, we extend dLDS to model communication among ensembles within and between brain regions, drawing insights from multiple non-simultaneous recording sessions.
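    Schematically (notation assumed for illustration), dLDS expresses the latent dynamics as a sparse, time-varying mixture of a small dictionary of linear operators,
    x_{t+1} \approx \Big(\sum_{k=1}^{K} c_k(t)\, A_k\Big)\, x_t, \qquad c(t)\ \text{sparse},
    so each A_k acts as an interpretable dynamical motif and the coefficients c(t), inferred by dictionary learning inside the EM procedure mentioned above, indicate when each motif is engaged.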
  • 3:00 - 3:05 pm EDT
    Characterizing Neural Spike Train Data for Chemosensory Coding Analysis
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Audrey Nash, Florida State University
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    In this presentation, we explore neural spike train data to discern a neuron's ability to distinguish between various stimuli. By examining both the spiking rate and the temporal distribution of spikes (phase of spiking), we aim to unravel the intricacies of chemosensory coding in neurons. We will provide a concise overview of our methodology for identifying chemosensory coding neurons and delve into the application of metric-based analysis techniques in conjunction with optimal transport methods. This combined approach allows us to uncover emerging patterns in tastant coding across multiple neurons and quantify the respective impacts of spiking rate and temporal phase in taste decoding.
  • 3:05 - 3:10 pm EDT
    Infinite-dimensional Dynamics in a Model of EEG Activity in the Neocortex
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Farshad Shirani, Georgia Institute of Technology
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    I present key analytical and computational results on a mean field model of electroencephalographic activity in the neocortex, which is composed of a system of coupled ODEs and PDEs. I show that for some sets of biophysical parameter values the equilibrium set of the model is not compact, which further implies that the global attracting set of the model is infinite-dimensional. I also present computational results on generation and spatial propagation of transient gamma oscillations in the solutions of the model. The results identify important challenges in interpreting and modelling the temporal pattern of EEG recordings, caused by low spatial resolution of EEG electrodes.
  • 3:10 - 3:15 pm EDT
    What is the optimal topology of setwise connections for a memory network?
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Thomas Burns, ICERM
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    Simplicial Hopfield networks (Burns & Fukai, 2023) explicitly model setwise connections between neurons based on a simplicial complex to store memory patterns. Randomly diluted networks -- where only a randomly chosen fraction of the simplices, i.e., setwise connections, have non-zero weights -- show performance above traditional associative memory networks with only pairwise connections between neurons but the same total number of non-zero weighted connections. However, could there be a cleverer choice of connections to weight given known memory patterns we want to store? I suspect so, and in this talk I will formally pose the problem for others to consider.
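    As a schematic statement of the objects involved (notation assumed), a simplicial Hopfield energy replaces the usual pairwise sum with a sum over the weighted simplices \sigma of a complex K on the neurons,
    E(s) = -\sum_{\sigma \in K} w_\sigma \prod_{i \in \sigma} s_i,
    so the open problem above asks: given the memory patterns to be stored and a budget on the number of non-zero weights w_\sigma, which simplices should be weighted, and what is the optimal topology of that choice?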
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Reliability and robustness of oscillations in some slow-fast chaotic systems
    11th Floor Lecture Hall
    • Speaker
    • Jonathan Jaquette, New Jersey Institute of Technology
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    A variety of nonlinear models of biological systems generate complex chaotic behaviors that contrast with biological homeostasis, the observation that many biological systems prove remarkably robust in the face of changing external or internal conditions. Motivated by the subtle dynamics of cell activity in a crustacean central pattern generator, we propose a refinement of the notion of chaos that reconciles homeostasis and chaos in systems with multiple timescales. We show that systems displaying relaxation cycles going through chaotic attractors generate chaotic dynamics that are regular at macroscopic timescales, thus consistent with physiological function. We further show that this relative regularity may break down through global bifurcations of chaotic attractors such as crises, beyond which the system may generate erratic activity also at slow timescales. We analyze in detail these phenomena in the chaotic Rulkov map, a classical neuron model known to exhibit a variety of chaotic spike patterns. This leads us to propose that the passage of slow relaxation cycles through a chaotic attractor crisis is a robust, general mechanism for the transition between such dynamics, and we validate this numerically in other models.
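    The chaotic Rulkov map mentioned above is a two-variable fast-slow map; a minimal simulation sketch follows (the parameter values are common illustrative choices, not those analyzed in the talk, and whether the orbit spikes tonically, bursts, or stays quiescent depends on them):
    import numpy as np
    alpha, mu, sigma = 4.3, 0.001, 0.1     # assumed illustrative parameters
    N = 20000
    x, y = -1.0, -3.5                      # fast (voltage-like) and slow variables
    xs = np.empty(N)
    for n in range(N):
        # x_{n+1} = alpha/(1 + x_n^2) + y_n ;  y_{n+1} = y_n - mu*(x_n + 1) + mu*sigma
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0) + mu * sigma
        xs[n] = x
    # Crude spike count: upward threshold crossings of the fast variable.
    spikes = np.count_nonzero((xs[1:] > 0.0) & (xs[:-1] <= 0.0))
    print(f"{spikes} upward threshold crossings in {N} iterations")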
  • 5:30 - 7:00 pm EDT
    Networking event with Carney Institute for Brain Science
    External Event - Carney Institute for Brain Science - 164 Angell St, Providence RI, 02906
Wednesday, September 20, 2023
  • 9:00 - 9:45 am EDT
    Modeling in neuroscience: the challenges of biological realism and computability
    11th Floor Lecture Hall
    • Speaker
    • Lai-Sang Young, Courant Institute
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Biologically realistic models of the brain have the potential to offer insight into neural mechanisms; they have predictive power, the ultimate goal of biological modeling. These benefits, however, come at considerable costs: network models that involve hundreds of thousands of neurons and many (unknown) parameters are unwieldy to build and to test, let alone to simulate and to analyze. Reduced models have obvious advantages, but the farther removed from biology a model is, the harder it is to draw meaningful inferences. In this talk, I propose a modeling strategy that aspires to be both realistic and computable. Two crucial ingredients are: (i) we track neuronal dynamics on two spatial scales, with the coarse-grained dynamics informed by local activity, and (ii) we compute a family of potential local responses in advance, eliminating the need to perform similar computations at each spatial location in each update. I will illustrate this computational strategy using a model of the monkey visual cortex, which is very similar to that of humans.
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    Uncertainty Quantification for Neurobiological Networks.
    11th Floor Lecture Hall
    • Speaker
    • Daniele Avitabile, Vrije Universiteit Amsterdam
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    This talk presents a framework for forward uncertainty quantification problems in spatially-extended neurobiological networks. We will consider networks in which the cortex is represented as a continuum domain, and local neuronal activity evolves according to an integro-differential equation, collecting inputs nonlocally, from the whole cortex. These models are sometimes referred to as neural field equations. Large-scale brain simulations of such models are currently performed heuristically, and the numerical analysis of these problems is largely unexplored. In the first part of the talk I will summarise recent developments for the rigorous numerical analysis of projection schemes for deterministic neural fields, which sets the foundation for developing Finite-Element and Spectral schemes for large-scale problems. The second part of the talk will discuss the case of networks in the presence of uncertainties modelled with random data, in particular: random synaptic connections, external stimuli, neuronal firing rates, and initial conditions. Such problems give rise to random solutions, whose mean, variance, or other quantities of interest have to be estimated using numerical simulations. This so-called forward uncertainty quantification problem is challenging because it couples spatially nonlocal, nonlinear problems to large-dimensional random data. I will present a family of schemes that couple a spatial projector for the spatial discretisation, to stochastic collocation for the random data. We will analyse the time-dependent problem with random data and the schemes from a functional analytic viewpoint, and show that the proposed methods can achieve spectral accuracy, provided the random data is sufficiently regular. We will showcase the schemes using several examples. Acknowledgements: This talk presents joint work with Francesca Cavallini (VU Amsterdam), Svetlana Dubinkina (VU Amsterdam), and Gabriel Lord (Radboud University).
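    As a complement to the abstract above, the sketch below sets up the simplest possible forward UQ experiment on a 1D neural field du/dt = -u + ∫ w(x-y) f(u(y,t)) dy with a random synaptic kernel amplitude, using an explicit Euler grid discretization and plain Monte Carlo. This is deliberately naive (it is not the projection or stochastic-collocation machinery of the talk) and assumes only numpy; all model choices are illustrative.

      import numpy as np

      L, n, dt, T = 10.0, 128, 0.05, 10.0
      x = np.linspace(-L / 2, L / 2, n, endpoint=False)
      dx = x[1] - x[0]
      f = lambda u: 1.0 / (1.0 + np.exp(-20.0 * (u - 0.3)))    # sigmoidal firing-rate function

      def quantity_of_interest(A):
          """Integrate the field to time T and return the L2 norm of the activity."""
          w = A * np.exp(-np.abs(x[:, None] - x[None, :]))     # nonlocal kernel w(x - y), random amplitude A
          u = np.exp(-x**2)                                    # localized initial condition
          for _ in range(int(T / dt)):
              u = u + dt * (-u + dx * (w @ f(u)))              # explicit Euler step
          return np.sqrt(dx * np.sum(u**2))

      rng = np.random.default_rng(1)
      samples = [quantity_of_interest(A) for A in rng.uniform(0.4, 0.8, size=100)]   # the "random data"
      print("estimated mean of QoI:", np.mean(samples), " estimated variance:", np.var(samples))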
  • 11:15 am - 12:00 pm EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Konstantin Mischaikow, Rutgers University
    • Katie Morrison, University of Northern Colorado
  • 12:00 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 2:45 pm EDT
    Dynamics of stochastic integrate-and-fire networks
    11th Floor Lecture Hall
    • Speaker
    • Gabe Ocker, Boston University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
  • 3:00 - 3:05 pm EDT
    A Step Towards Uncovering The Structure of Multistable Neural Networks
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Magnus Tournoy, Flatiron Institute
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    With the experimental advances in recording large populations of neurons, theorists are in the humbling position of making sense of a staggering amount of data. One question that will come increasingly within reach is how network structure relates to function. But going beyond explanatory models and becoming more predictive will require a fundamental approach. In this talk we’ll take the view of a physicist and formulate exact results within a simple, yet general, toy model called Glass networks. Named after their originator, Leon Glass, these networks are the infinite-gain limit of well-known circuit models such as continuous-time Hopfield networks. We’ll show that, within this limit, stability conditions reduce to semipositivity constraints on the synaptic weight matrix. With this clear link between structure and function in hand, the consequences of multistability for the network architecture can be explored. One finding is a factorization of the weight matrix in terms of nonnegative matrices; interestingly, this factorization completely characterizes the existence of stable states. Another result is a reduction of the allowed sign patterns for the connections, which implies lower bounds on the number of excitatory and inhibitory connections. Finally, we will discuss the special case of “sign stability”, where stability is guaranteed by the topology of the network. Derivations of these results will be supplemented by a number of examples.
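    To illustrate the basic objects in the abstract above, here is a minimal sketch of a Glass network dx_i/dt = -x_i + sum_j W_ij H(x_j) (H the Heaviside step, thresholds at zero), assuming only numpy. Inside each orthant the flow is linear and relaxes toward the focal point W s, where s is the orthant's on/off pattern, so a stable state corresponds to an orthant that contains its own focal point; the enumeration below checks exactly that. The weight matrix (self-excitation plus mutual inhibition) is an illustrative choice, and the code does not implement the semipositivity or factorization results of the talk.

      import itertools
      import numpy as np

      W = np.array([[ 2.0, -2.5, -2.5],
                    [-2.5,  2.0, -2.5],
                    [-2.5, -2.5,  2.0]])           # toy weights with a winner-take-all structure
      N = W.shape[0]

      def stable_sign_patterns(W):
          """On/off patterns s in {0,1}^N whose focal point W s lies strictly in the same orthant."""
          stable = []
          for s in itertools.product([0, 1], repeat=N):
              target = W @ np.array(s)
              if all(t != 0 and (t > 0) == bool(si) for t, si in zip(target, s)):
                  stable.append(s)
          return stable

      print("stable states (as Heaviside on/off patterns):", stable_sign_patterns(W))

    With these weights the three "one winner" patterns are the stable states; changing the sign pattern of W changes which orthants can trap the dynamics, which is the structure-function link the talk makes precise.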
  • 3:05 - 3:10 pm EDT
    Clustering and Distribution of the Adaptation Variable
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Ka Nap Tse, University of Pittsburgh
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Brain waves are an important phenomenon in neuroscience. Besides spiking synchronously, excitatory cells with adaptation can spike in clusters, producing rhythmic network activity. In previous works, the adaptation variable is usually eliminated before further analysis. In this talk, I will discuss a way to study this clustering behaviour through the evolution of the distribution of the adaptation variable. We then transform the distribution to a time-to-spike coordinate for further exploration.
  • 3:10 - 3:15 pm EDT
    Low-dimensional manifold of neural oscillations revealed by data-driven model reduction
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Zhuo-Cheng Xiao, New York University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Neural oscillations across various frequency bands are believed to underlie essential brain functions, such as information processing and cognitive activities. However, the emergence of oscillatory dynamics from spiking neuronal networks—and the interplay among different cortical rhythms—has seldom been theoretically explored, largely due to the strong nonlinearity and high dimensionality involved. To address this challenge, we have developed a series of data-driven model reduction methods tailored for spiking network dynamics. In this talk I will present nearly two-dimensional manifolds in the reduced coordinates that successfully capture the emergence of gamma oscillations. Specifically, we find that the initiation phases of each oscillation cycle are the most critical. Subsequent cycles are more deterministic and lie on the aforementioned two-dimensional manifold. The Poincaré mappings between these initiation phases reveal the structure of the dynamical system and successfully explain the bifurcation from gamma oscillations to multi-band oscillations.
  • 3:15 - 3:20 pm EDT
    Sensitivity to control signals in triphasic rhythmic neural systems: a comparative mechanistic analysis via infinitesimal local timing response curves
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Zhuojun Yu, Case Western Reserve University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Similar activity patterns may arise from model neural networks with distinct coupling properties and individual unit dynamics. These similar patterns may, however, respond differently to parameter variations and, specifically, to tuning of inputs that represent control signals. In this work, we analyze the responses resulting from modulation of a localized input in each of three classes of model neural networks that have been recognized in the literature for their capacity to produce robust three-phase rhythms: coupled fast-slow oscillators, near-heteroclinic oscillators, and threshold-linear networks. Triphasic rhythms, in which each phase consists of a prolonged activation of a corresponding subgroup of neurons followed by a fast transition to another phase, represent a fundamental activity pattern observed across a range of central pattern generators underlying behaviors critical to survival, including respiration, locomotion, and feeding. To perform our analysis, we extend the recently developed local timing response curve (lTRC), which allows us to characterize the timing effects due to perturbations, and we complement our lTRC approach with model-specific dynamical systems analysis. Interestingly, we observe disparate effects of similar perturbations across distinct model classes. Thus, this work provides an analytical framework for studying control of oscillations in nonlinear dynamical systems, and may help guide model selection in future efforts to study systems exhibiting triphasic rhythmic activity.
  • 3:20 - 3:25 pm EDT
    Modeling the effects of cell-type specific lateral inhibition
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Soon Ho Kim, Georgia Institute of Technology
    • Session Chair
    • Katie Morrison, University of Northern Colorado
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Computing the Global Dynamics of Parameterized Families of ODEs
    11th Floor Lecture Hall
    • Speaker
    • Marcio Gameiro, Rutgers University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    We present a combinatorial topological method to compute the dynamics of a parameterized family of ODEs. A discretization of the state space of the systems is used to construct a combinatorial representation from which recurrent versus non-recurrent dynamics is extracted. Algebraic topology is then used to validate and characterize the dynamics of the system. We will discuss the combinatorial description and the algebraic topological computations and will present applications to systems of ODEs arising from gene regulatory networks.
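    The sketch below is a deliberately crude caricature of the pipeline described above (it is not the DSGRN algorithm and performs no Conley index computation): discretize the plane into boxes, draw a directed edge from each box to the neighbor the vector field points toward, and read off candidate recurrent dynamics as the nontrivial strongly connected components of the resulting graph. It assumes numpy and networkx, and the toy vector field (an unstable focus surrounded by a stable limit cycle) is chosen purely for illustration.

      import numpy as np
      import networkx as nx

      def vector_field(x, y):                      # toy ODE with a stable limit cycle at radius 1
          r2 = x * x + y * y
          return x - y - x * r2, x + y - y * r2

      n, lim = 40, 2.0
      centers = np.linspace(-lim, lim, n)
      G = nx.DiGraph()
      for i, cx in enumerate(centers):
          for j, cy in enumerate(centers):
              dx, dy = vector_field(cx, cy)
              step = (int(np.sign(dx)), 0) if abs(dx) >= abs(dy) else (0, int(np.sign(dy)))
              ni, nj = i + step[0], j + step[1]
              if 0 <= ni < n and 0 <= nj < n:
                  G.add_edge((i, j), (ni, nj))     # box -> box edge induced by the flow direction

      recurrent = [c for c in nx.strongly_connected_components(G) if len(c) > 1]
      print(f"{len(recurrent)} nontrivial recurrent component(s), sizes {[len(c) for c in recurrent]}")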
Thursday, September 21, 2023
  • 9:00 - 9:45 am EDT
    Multiple timescale respiratory dynamics and effect of neuromodulation
    11th Floor Lecture Hall
    • Speaker
    • Yangyang Wang, Brandeis University
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    Respiration is an involuntary process in all living beings required for our survival. The preBötzinger complex (preBötC) in the mammalian brainstem is a neuronal network that drives inspiratory rhythmogenesis, whose activity is constantly modulated by neuromodulators in response to changes in the environment. In this talk, we will discuss challenges involved in the analysis of bursting dynamics in preBötC neurons and how these dynamics change during prenatal development. We will also combine insights from in vitro recordings and dynamical systems modeling to investigate the effect of norepinephrine (NE), an excitatory neuromodulator, on respiratory dynamics. Our investigation employs bifurcation analysis to reveal the mechanisms by which NE differentially modulates different types of preBötC bursting neurons.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Enhancing Neuronal Classification Capacity via Nonlinear Parallel Synapses
    11th Floor Lecture Hall
    • Speaker
    • Marcus Benna, UC San Diego
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    We discuss models of a neuron that has multiple synaptic contacts with the same presynaptic axon. We show that a diverse set of learned nonlinearities in these parallel synapses leads to a substantial increase in the neuronal classification capacity.
  • 11:30 am - 1:30 pm EDT
    Working Lunch: Open Problems Session
    Working Lunch - 11th Floor Collaborative Space
  • 1:30 - 2:15 pm EDT
    Combinatorial structure of continuous dynamics in gene regulatory networks
    11th Floor Lecture Hall
    • Speaker
    • Tomas Gedeon, Montana State University
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    Gene network dynamics and neural network dynamics face similar challenges of high dimensionality of both phase space and parameter space, and a lack of reliable experimental data to infer parameters. We first describe the mathematical foundation of DSGRN (Dynamic Signatures Generated by Regulatory Networks), an approach that provides a combinatorial description of global dynamics of a network over its parameter space. This finite description allows comparison of parameterized dynamics between hundreds of networks to discard networks that are not compatible with experimental data. We also describe a close connection of DSGRN to Boolean network models that allows us to view DSGRN as a connection between parameterized continuous time dynamics and discrete dynamics of Boolean models. If time allows, we discuss several applications of this methodology to systems biology.
  • 2:30 - 3:15 pm EDT
    A model of the mammalian neural motor architecture elucidates the mechanisms underlying efficient and flexible control of network dynamics
    11th Floor Lecture Hall
    • Speaker
    • Laureline Logiaco, Massachusetts Institute of Technology
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales in order to efficiently shape structured behaviors. I will present research investigating how these complex computations are performed in the mammalian brain, with an emphasis on autonomous motor control. Specifically, I will focus on the mechanisms supporting efficient interfacing between 'higher-level' planning commands and 'lower-level' motor cortical dynamics that ultimately drive muscles. I will take advantage of the fact that the anatomy of the circuits underlying motor control is well known. It notably involves the primary motor cortex, a recurrent network that generates learned commands to drive muscles while interacting through loops with thalamic neurons that lack recurrent excitation. Using an analytically tractable model that incorporates these architectural constraints, I will explain how this motor circuit can implement a form of efficient modularity by combining (i) plastic thalamocortical loops that are movement-specific and (ii) shared hardwired circuits. I will show that this modular architecture can balance two different objectives: first, supporting the flexible recombination of an extensible library of re-usable motor primitives; and second, promoting the efficient use of neural resources by taking advantage of shared connections between modules. I will end by mentioning some open avenues for further mathematical analyses related to this framework.
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Low-rank neural connectivity for the discrimination of temporal patterns.
    11th Floor Lecture Hall
    • Speaker
    • Sean Escola, Columbia University
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
Friday, September 22, 2023
  • 9:00 - 9:45 am EDT
    Mean-field theory of learning dynamics in deep neural networks
    11th Floor Lecture Hall
    • Speaker
    • Cengiz Pehlevan, Harvard University
    • Session Chair
    • Konstantin Mischaikow, Rutgers University
    Abstract
    The learning dynamics of deep neural networks are complex. While previous approaches have made advances in the mathematical analysis of two-layer neural networks, addressing deeper networks has been challenging. In this talk, I will present a mean field theory of the learning dynamics of deep networks and discuss its implications.
  • 10:00 - 10:45 am EDT
    Multi-level measures for understanding and comparing biological and artificial neural networks
    11th Floor Lecture Hall
    • Speaker
    • SueYeon Chung, New York University
    • Session Chair
    • Konstantin Mischaikow, Rutgers University
    Abstract
    I will share recent theoretical advances on how population-level properties of representations, such as high-dimensional geometry and spectral properties, can be used to capture (1) the classification capacity of neural manifolds, and (2) the prediction error of neural data from network model representations.
  • 11:00 - 11:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 11:30 am - 12:15 pm EDT
    A Sparse-coding Model of Category-specific Functional Organization in IT Cortex
    11th Floor Lecture Hall
    • Speaker
    • Demba Ba, Harvard University
    • Session Chair
    • Konstantin Mischaikow, Rutgers University
    Abstract
    Primary sensory areas in the brain of mammals may have evolved to compute efficient representations of natural scenes. In the late 90s, Olshausen and Field proposed a model that expresses the components of a natural scene, e.g. natural-image patches, as sparse combinations of a common set of patterns. Applied to a dataset of natural images, this so-called sparse coding model learns patterns that resemble the receptive fields of V1 neurons. Recordings from the monkey infero-temporal (IT) cortex suggest the presence, in this region, of a sparse code for natural-image categories. The recordings also suggest that, physically, IT neurons form spatial clusters, each of which preferentially responds to images from certain categories. Taken together, this evidence suggests that neurons in IT cortex form functional groups that reflect the grouping of natural images into categories. My talk will introduce a new sparse-coding model that exhibits this categorical form of functional grouping.
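    For orientation, the classical sparse coding objective referenced above is min over D and a of ||x - D a||_2^2 + lambda ||a||_1. The sketch below alternates ISTA updates of the codes with a gradient step on the dictionary, on synthetic sparse data rather than natural-image patches; it is a generic textbook-style implementation assuming only numpy, not the new functionally grouped model introduced in the talk.

      import numpy as np

      rng = np.random.default_rng(0)
      n_pix, n_atoms, n_samples, lam = 64, 100, 500, 0.2
      D_true = rng.standard_normal((n_pix, n_atoms))
      D_true /= np.linalg.norm(D_true, axis=0)
      A_true = rng.standard_normal((n_atoms, n_samples)) * (rng.random((n_atoms, n_samples)) < 0.05)
      X = D_true @ A_true + 0.01 * rng.standard_normal((n_pix, n_samples))   # synthetic sparse "patches"

      D = rng.standard_normal((n_pix, n_atoms))
      D /= np.linalg.norm(D, axis=0)

      def ista(x, D, lam, n_iter=100):
          """Minimize ||x - D a||^2 + lam*||a||_1 over the code a by proximal gradient descent."""
          a = np.zeros(D.shape[1])
          L = np.linalg.norm(D, 2) ** 2                              # Lipschitz constant of the gradient
          for _ in range(n_iter):
              a = a - (D.T @ (D @ a - x)) / L                        # gradient step on the quadratic term
              a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold (L1 proximal step)
          return a

      for epoch in range(5):                                         # alternate codes and dictionary
          A = np.column_stack([ista(X[:, k], D, lam) for k in range(n_samples)])
          D += 0.1 * (X - D @ A) @ A.T / n_samples                   # averaged gradient step on the dictionary
          D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)          # keep atoms unit norm
          print(f"epoch {epoch}: mean reconstruction error {np.linalg.norm(X - D @ A)**2 / n_samples:.4f}")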
  • 12:30 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 2:45 pm EDT
    Final Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Carina Curto, The Pennsylvania State University
    • Konstantin Mischaikow, Rutgers University
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, September 25, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 11:00 am EDT
    Journal Club
    11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 5:00 pm EDT
    TLN Working Group
    Group Work - 10th Floor Classroom
Tuesday, September 26, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Tutorial
    Tutorial - 11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Wednesday, September 27, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EDT
    Professional Development: Ethics I
    Professional Development - 11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Thursday, September 28, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Tutorial
    Tutorial - 11th Floor Lecture Hall
  • 12:00 - 1:30 pm EDT
    Open Problems Lunch Seminar
    Working Lunch - 11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Friday, September 29, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 11:00 - 11:30 am EDT
    Recurrent network models for predictive processing
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
    • Bin Wang, University of California, San Diego
    Abstract
    Predictive responses to sensory stimuli are prevalent across cortical networks and are thought to be important for multi-sensory and sensorimotor learning. It has been hypothesized that predictive processing relies on computations done by two separate functional classes of cortical neurons: one specialized for “faithful” representation of external stimuli, and another for conveying prediction-error signals. It remains unclear how such predictive representations are formed in natural conditions, where stimuli are high-dimensional. In this talk, I will describe some of our efforts to characterize how high-dimensional predictive processing can be performed by recurrent networks. I will start with the neuroscience motivations, define the mathematical models, and mention some related mathematical questions that we haven't yet solved along the way.
  • 11:30 am - 12:00 pm EDT
    Fishing for beta: uncovering mechanisms underlying cortical oscillations in large-scale biophysical models
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
    • Nicholas Tolley, Brown University
    Abstract
    Beta frequency (13-30 Hz) oscillations are robustly observed across the neocortex, and are strongly predictive of behavior and disease states. While several theories exist regarding their functional significance, the cell- and circuit-level activity patterns underlying the generation of beta activity remain uncertain. We approach this problem using the Human Neocortical Neurosolver (HNN; hnn.brown.edu), a detailed biophysical model of a cortical column which simulates the microscale activity patterns underlying macroscale field potentials like beta oscillations. Detailed biophysical models potentially offer concrete and biologically interpretable predictions, but their use is challenged by computationally expensive simulations, an overwhelmingly large parameter space, and highly complex relationships between parameters and model outputs. We demonstrate how these challenges can be overcome by combining HNN with simulation-based inference (SBI), a deep-learning-based Bayesian inference framework, and use it to characterize the space of parameters capable of producing beta oscillations. Specifically, we use the HNN-SBI framework to characterize the constraints on network connectivity for producing spontaneous beta. In future work, we plan to compare these predictions to higher-level neural models to identify which simplifying assumptions are consistent with detailed models of neural oscillations.
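    As a self-contained cartoon of the simulation-based inference idea mentioned above, the snippet below runs rejection ABC on a toy simulator (a noisy sinusoid standing in for a simulated field potential), with the spectral-peak frequency as the summary statistic. The actual HNN-SBI pipeline uses the biophysical HNN model and neural density estimators from the sbi package, neither of which appears here; everything below is an illustrative stand-in assuming only numpy.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate(freq_hz, dt=1e-3, T=2.0, noise=0.1):
          """Toy simulator: a noisy sinusoid standing in for a simulated field potential."""
          t = np.arange(0.0, T, dt)
          return np.sin(2 * np.pi * freq_hz * t) + noise * rng.standard_normal(t.size)

      def summary(x, dt=1e-3):
          """Summary statistic: frequency of the spectral peak."""
          freqs = np.fft.rfftfreq(x.size, dt)
          return freqs[np.argmax(np.abs(np.fft.rfft(x - x.mean())))]

      observed = summary(simulate(21.0))                     # pretend this is the recorded beta rhythm
      prior_draws = rng.uniform(5.0, 50.0, size=2000)        # prior over the unknown frequency parameter
      accepted = [th for th in prior_draws if abs(summary(simulate(th)) - observed) < 1.0]
      print(f"posterior mean of {len(accepted)} accepted draws: {np.mean(accepted):.1f} Hz")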
  • 1:30 - 3:00 pm EDT
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, October 2, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 10:45 am EDT
    Nonlinear stimulus representation in neural circuits with approximate excitatory-inhibitory balance
    Journal Club - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 5:00 pm EDT
    TLN Working Group
    Group Work - 10th Floor Classroom
Tuesday, October 3, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Computational topology and network dynamics (Part 1 of 2)
    Tutorial - 10th Floor Classroom
    • Marcio Gameiro, Rutgers University
    Abstract
    We will discuss a combinatorial topological method to compute the dynamics of a network. A discretization of the state space of the systems is used to construct a combinatorial representation from which recurrent versus non-recurrent dynamics is extracted. This approach is implemented in the DSGRN (Dynamic Signatures Generated by Regulatory Networks) software, which computes a combinatorial description of parameter space and the global dynamics of a network. Algebraic topology (Conley index) is then used to validate and characterize the dynamics of the system. We will discuss the combinatorial description and the algebraic topological computations and will present software to carry out the computations.
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Wednesday, October 4, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EDT
    Professional Development: Ethics II
    Professional Development - 11th Floor Lecture Hall
  • 10:30 am - 12:00 pm EDT
    Computational topology and network dynamics (Part 2 of 2)
    Tutorial - 10th Floor Classroom
    • Marcio Gameiro, Rutgers University
    Abstract
    We will discuss a combinatorial topological method to compute the dynamics of a network. A discretization of the state space of the systems is used to construct a combinatorial representation from which recurrent versus non-recurrent dynamics is extracted. This approach is implemented in the DSGRN (Dynamic Signatures Generated by Regulatory Networks) software, which computes a combinatorial description of parameter space and the global dynamics of a network. Algebraic topology (Conley index) is then used to validate and characterize the dynamics of the system. We will discuss the combinatorial description and the algebraic topological computations and will present software to carry out the computations.
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:30 pm EDT
    Human Neocortical Neurosolver: A neural modeling software for multi-scale interpretation of human electrophysiology
    11th Floor Lecture Hall
    • Stephanie Jones, Brown University
Thursday, October 5, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Convex neural codes
    Tutorial - 10th Floor Classroom
    • Nora Youngs, Colby College
    Abstract
    In this tutorial I'll introduce convex neural codes, which are codes that arise from arrangements of convex regions. We will go through some topological, combinatorial, and algebraic methods we have used to study such codes...and also, draw lots of pictures.
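    As a warm-up for the tutorial topic, the snippet below (assuming only numpy) generates the combinatorial code of a small arrangement of convex receptive fields (intervals on a line) by recording which neurons' fields contain each sampled stimulus; which codewords are missing is exactly the kind of combinatorial fingerprint the tutorial's methods are designed to analyze. The intervals are arbitrary illustrative choices.

      import numpy as np

      intervals = {0: (0.0, 3.0), 1: (2.0, 5.0), 2: (4.0, 7.0)}   # convex (interval) receptive fields
      stimuli = np.linspace(-1.0, 8.0, 2000)                       # sample the 1D stimulus space

      code = set()
      for s in stimuli:
          word = frozenset(i for i, (a, b) in intervals.items() if a < s < b)
          code.add(word)

      print(sorted(tuple(sorted(w)) for w in code))
      # Output: (), (0,), (0, 1), (1,), (1, 2), (2,).  The word (0, 2) never appears because the
      # first and third receptive fields do not overlap, a constraint any convex realization on
      # the line must respect.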
  • 12:00 - 1:30 pm EDT
    Mathematics of Neural Nets: Random Matrices, Multiscale and Stability - Lunch Seminar
    11th Floor Lecture Hall
    • Speaker
    • Leonid Berlyand, Penn State
    • Session Chair
    • Carina Curto, The Pennsylvania State University
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Friday, October 6, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:30 - 10:30 am EDT
    "Something Cool I Know" Seminar
    10th Floor Classroom
  • 11:00 - 11:30 am EDT
    Parameter estimation with uncertainty quantification via Markov Chain Monte Carlo methods
    Post Doc/Graduate Student Seminar - 11th Floor Lecture Hall
    • Federica Milinanni, KTH - Royal Institute of Technology
  • 11:30 am - 12:00 pm EDT
    Harmonic Analysis of Sequences
    Post Doc/Graduate Student Seminar - 11th Floor Lecture Hall
    • Hannah Santa Cruz, Penn State
    Abstract
    The Combinatorial Laplacian is a popular tool in Graph and Network analysis. Recent work has proposed the use of Hodge Laplacians and the Magnetic Laplacian to analyze Simplicial Complexes and Directed Graphs respectively. We continue this work, by interpreting the Hodge Laplacian associated to a weighted simplicial complex, in terms of a weight function which is induced by a probability distribution. In particular, we develop a null hypothesis weighted simplicial complex model, induced by an independent distribution on the vertices, and show that the associated Laplacian is trivial. We extend this work to Sequence Complexes, where we consider the faces to be sequences, allowing for repeated vertices and distinguishing sequences with different orderings. In this setting, we also explore the Laplacian associated to a weight function induced by an independent distribution on the vertices, and completely describe its eigenspectrum, which is no longer trivial but still simple. Our analysis and findings contribute to the broader field of spectral graph theory and provide a deeper understanding of Laplacians on simplicial and sequence complexes, paving the way for further exploration and applications of Laplacian operators.
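    For readers new to these objects, the snippet below builds the (unweighted) Hodge 1-Laplacian L_1 = B_1^T B_1 + B_2 B_2^T of a small simplicial complex from its boundary matrices, assuming only numpy; the kernel dimension recovers the number of 1-dimensional holes. The probability-induced weights discussed in the talk would enter as diagonal weight matrices and are omitted here.

      import numpy as np

      # Complex: vertices {0,1,2,3}, edges {01,02,12,13,23}, one filled triangle {1,2,3}.
      edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
      triangles = [(1, 2, 3)]

      B1 = np.zeros((4, len(edges)))                    # boundary operator: edges -> vertices
      for j, (u, v) in enumerate(edges):
          B1[u, j], B1[v, j] = -1, 1

      B2 = np.zeros((len(edges), len(triangles)))       # boundary operator: triangles -> edges
      for k, (a, b, c) in enumerate(triangles):
          for face, sign in [((b, c), 1), ((a, c), -1), ((a, b), 1)]:
              B2[edges.index(face), k] = sign

      L1 = B1.T @ B1 + B2 @ B2.T                        # Hodge 1-Laplacian
      eigvals = np.linalg.eigvalsh(L1)
      print("spectrum of the Hodge 1-Laplacian:", np.round(eigvals, 3))
      print("dim ker L1 =", int(np.sum(np.abs(eigvals) < 1e-9)), "(the number of 1-dimensional holes)")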
  • 1:30 - 3:00 pm EDT
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, October 9, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 10:45 am EDT
    From the statistics of connectivity to the statistics of spike times in neuronal networks
    Journal Club - 10th Floor Classroom
  • 3:30 - 5:00 pm EDT
    TLN Working Group
    Group Work - 10th Floor Classroom
Tuesday, October 10, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Tutorial on oriented matroids and neural codes
    Tutorial - 10th Floor Classroom
    • Alexander Kunin, Creighton University
    • Caitlin Lienkaemper, Boston University
  • 3:00 - 3:30 pm EDT
    "Ada Lovelace Day" Coffee Break
    Coffee Break - 11th Floor Collaborative Space
Wednesday, October 11, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    The cohomology ring and some related persistent invariants
    Tutorial - 10th Floor Classroom
    • Ling Zhou, ICERM
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:30 pm EDT
    Network resonance: a framework for dissecting feedback and frequency filtering mechanisms in neuronal systems
    11th Floor Lecture Hall
    • Speaker
    • Horacio Rotstein, New Jersey Institute of Technology
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    Resonance is defined as a maximal amplification of the response of a system to periodic inputs in a limited, intermediate input frequency band. Resonance may serve to optimize inter-neuronal communication, and has been observed at multiple levels of neuronal organization including membrane potential fluctuations, single neuron spiking, postsynaptic potentials, and neuronal networks. However, it is unknown how resonance observed at one level of neuronal organization (e.g., network) depends on the properties of the constituting building blocks, and whether, and if so how, it affects the resonant and oscillatory properties upstream. One difficulty is the absence of a conceptual framework that facilitates the interrogation of resonant neuronal circuits and organizes the mechanistic investigation of network resonance in terms of the circuit components, across levels of organization. We address these issues by discussing a number of representative case studies. The dynamic mechanisms responsible for the generation of resonance involve disparate processes, including negative feedback effects, history-dependence, spiking discretization combined with subthreshold passive dynamics, combinations of these, and resonance inheritance from lower levels of organization. The band-pass filters associated with the observed resonances are generated by primarily nonlinear interactions of low- and high-pass filters. We identify these filters (and interactions) and we argue that these are the constitutive building blocks of a resonance framework. Finally, we discuss alternative frameworks and we show that different types of models (e.g., spiking neural networks and rate models) can exhibit the same type of resonance by qualitatively different mechanisms.
Thursday, October 12, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Tutorial on oriented matroids and neural codes
    Tutorial - 10th Floor Classroom
    • Alexander Kunin, Creighton University
    • Caitlin Lienkaemper, Boston University
  • 12:00 - 1:30 pm EDT
    Open Problems Related to the Stochastic Leaky Integrate-and-fire Model
    Open Problems Seminar - Lunch - 11th Floor Lecture Hall
    • Gabe Ocker, Boston University
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:30 pm EDT
    Gabor Frames and Contact Geometry in models of the primary visual cortex
    10th Floor Classroom
    • Vasiliki Liontou, ICERM
    Abstract
    In this talk, I will introduce a model of the primary visual cortex (V1), which allows the compression and decomposition of a signal by a discrete family of orientation and position dependent receptive profiles. In particular, a specific framed sampling set and an associated Gabor system are determined by the Legendrian circle bundle structure of the 3-manifold of contact elements on a surface (which models the V1−cortex), together with the presence of an almost complex structure on the tangent bundle of the surface (which models the retinal surface). Additionally, we identify a maximal area of the signal planes, determined by the retinal surface, that provides a finite number of receptive profiles sufficient for good encoding and decoding. An extension of this model for receptive fields dependent on position, orientation, frequency and phase will be discussed.
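    To make the encoding step concrete, the snippet below (assuming only numpy) builds a small Gabor family by applying time and frequency translations to a single Gaussian seed window and computes analysis coefficients of a toy signal. The rectangular lattice of translates used here is an arbitrary choice; it is not the contact-geometry-determined sampling set constructed in the talk.

      import numpy as np

      t = np.linspace(-5, 5, 1024)
      sigma = 0.5

      def gabor(t0, f0):
          """Gabor atom: Gaussian window translated to time t0 and modulated to frequency f0."""
          return np.exp(-(t - t0) ** 2 / (2 * sigma**2)) * np.exp(2j * np.pi * f0 * t)

      # Rectangular time-frequency lattice of atoms (an arbitrary sampling set for illustration).
      frame = [gabor(t0, f0) for t0 in np.arange(-4, 4.5, 0.5) for f0 in np.arange(-4, 4.5, 0.5)]

      signal = np.exp(-t**2) * np.cos(2 * np.pi * 2.0 * t)          # toy one-dimensional signal
      coeffs = np.array([np.vdot(g, signal) for g in frame])        # analysis: inner products with atoms
      print(f"{len(frame)} Gabor atoms; largest |coefficient| = {np.abs(coeffs).max():.3f}")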
Friday, October 13, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:30 - 10:30 am EDT
    "Something Cool I Know" Seminar
    10th Floor Classroom
  • 11:00 - 11:30 am EDT
    Computing the Rank Invariant and the Matching Distance of Multi-Parameter Persistence Modules (with the help of discrete Morse theory)
    Post Doc/Graduate Student Seminar - 11th Floor Lecture Hall
    • Robyn Brooks, University of Utah
    Abstract
    Persistent Homology is a tool of Computational Topology which is used to determine the topological features of a space from a sample of data points. In this talk, I will introduce the (multi-)persistence pipeline, as well as some basic tools from Discrete Morse Theory which can be used to better understand the multi-parameter persistence module of a filtration. In particular, the addition of a discrete gradient vector field consistent with a multi-filtration allows one to exploit the information contained in the critical cells of that vector field as a means of enhancing geometrical understanding of multi-parameter persistence. I will present results from joint work with Claudia Landi, Asilata Bapat, Barbara Mahler, and Celia Hacker, in which we are able to show that the rank invariant for nD persistence modules can be computed by selecting a small number of values in the parameter space determined by the critical cells of the discrete gradient vector field. These values may be used to reconstruct the rank invariant for all other possible values in the parameter space. Time permitting, I will also introduce results from a subsequent work, in which we provide theoretical results for the computation of the matching distance in two dimensions.
  • 11:30 am - 12:00 pm EDT
    Ephemeral Persistence Features and the Stability of Filtered Chain Complexes
    Post Doc/Graduate Student Seminar - 11th Floor Lecture Hall
    • Ling Zhou, ICERM
    Abstract
    We strengthen the usual stability theorem for Vietoris-Rips persistent homology of finite metric spaces by building upon constructions due to Usher and Zhang in the context of filtered chain complexes. The information present at the level of filtered chain complexes includes ephemeral points, i.e. points with zero persistence, which provide additional information to that present at homology level. The resulting invariant, called verbose barcode, which has a stronger discriminating power than the usual barcode, is proved to be stable under certain metrics which are sensitive to these ephemeral points. In the case of degree zero, we provide an explicit formula to compute this new metric between verbose barcodes.
  • 1:30 - 3:00 pm EDT
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, October 16, 2023
  • 8:50 - 9:00 am EDT
    Welcome
    11th Floor Lecture Hall
    • Session Chair
    • Caroline Klivans, Brown University
  • 9:00 - 9:45 am EDT
    The geometry of perceptual spaces of textures and objects
    11th Floor Lecture Hall
    • Speaker
    • Jonathan Victor, Weill Cornell Medical College
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Recent technological advances allow for massive population-level recordings of neural activity, raising the hope of achieving a detailed understanding of the linkage of neurophysiology and behavior. Achieving this linkage relies on the tenet that, viewed in the right way, the mapping between neural activity and behavior preserves similarities. At the behavioral level, these similarities are captured by the topology and geometry of perceptual spaces. With this motivation, I describe some recent studies of the geometry of several perceptual spaces, including “low-level” spaces of visual features, and “higher-level” spaces dominated by semantic content. The experiments use a new, efficient psychophysical paradigm for collecting similarity judgments, and the analysis methods range from seeking Euclidean embeddings via non-metric multidimensional scaling to strategies that make minimal assumptions about the underlying geometry. With these tools, we characterize how the geometry of the spaces varies with semantic content, and the aspects of these geometries that are task-dependent.
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    Topology protects emergent dynamics and long timescales in biological networks
    11th Floor Lecture Hall
    • Speaker
    • Evelyn Tang, Rice University
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Long and stable timescales are often observed in complex biochemical networks, such as in emergent oscillations or memory. How these robust dynamics persist remains unclear, given the many stochastic reactions and shorter time scales of the underlying components. We propose a topological model with parsimonious parameters that produces long oscillations around the network boundary, effectively reducing the system dynamics to a lower-dimensional current. I will demonstrate how this can model the circadian clock of cyanobacteria, with efficient properties such as simultaneously increased precision and decreased cost. Our work presents a new mechanism for emergent dynamics that could be useful for various cognitive and biological functions.
  • 11:15 - 11:45 am EDT
    Open Problems Session
    Problem Session - 11th Floor Lecture Hall
    • Session Chair
    • Carina Curto, The Pennsylvania State University
  • 11:45 am - 1:30 pm EDT
    Lunch/Free Time
  • 1:30 - 2:15 pm EDT
    Discovering the geometry of neural representations via topological tools.
    11th Floor Lecture Hall
    • Speaker
    • Vladimir Itskov, The Pennsylvania State University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Neural representations of stimulus spaces often come with a natural geometry. Perhaps the most salient examples of such neural populations are those with convex receptive fields (or tuning curves), such as place cells in hippocampus or neurons in V1. The geometry of neural representations is understood only in a limited number of well-studied neural circuits; it is rather poorly understood in most other parts of the brain. This raises a natural question: can one infer such a geometry based on the statistics of the neural responses alone? A crucial tool for inferring a geometry is a basis of coordinate functions that "respects" the underlying geometry, while providing meaningful low-dimensional approximations. Eigenfunctions of a Laplacian, derived from the underlying metric, serve as such a basis in many scientific fields. However, spike trains and other derived features of neural activity do not come with a natural metric, while they do come with an "intrinsic" probability distribution of neural activity patterns. Building on tools from combinatorial topology, we introduce Hodge Laplacians associated with probability distributions on sequential data, such as spike trains. We demonstrate that these Laplacians have desirable properties with respect to the natural null models, where the underlying neurons are independent. Our results establish a foundation for dimensionality reduction and Fourier analyses of probabilistic models that are common in theoretical neuroscience and machine learning.
  • 2:30 - 2:40 pm EDT
    Connections between the topology of tasks, classifying spaces, and learned representations
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Thomas Burns, ICERM
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Modified state complexes (Burns & Tang, 2022) extend the mathematical framework of reconfigurable systems and state complexes due to Abrams, Ghrist & Peterson to study gridworlds -- simple 2D environments inhabited by agents, objects, etc. Such state complexes represent all possible configurations of a system as a single geometric space, thus making them conducive to study using geometric, topological, or combinatorial methods. Modified state complexes exhibit geometric defects (failure of Gromov's Link Condition) exactly where undesirable or dangerous states appear in the gridworld. We hypothesize that the modified state complex should be a classifying space for the n–strand braid group and that social place cell circuits in mammalian hippocampus use similar principles to represent and avoid danger.
  • 2:40 - 2:50 pm EDT
    Emergence of high-order functional hubs in the human brain
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Fernando Nobrega Santos, University of Amsterdam
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Network theory is often based on pairwise relationships between nodes, which is not necessarily realistic for modeling complex systems. Importantly, it does not accurately capture non-pairwise interactions in the human brain, often considered one of the most complex systems. In this work, we develop a multivariate signal processing pipeline to build high-order networks from time series and apply it to resting-state functional magnetic resonance imaging (fMRI) signals to characterize high-order communication between brain regions. We also propose connectivity and signal processing rules for building uniform hypergraphs and argue that each multivariate interdependence metric could define weights in a hypergraph. As a proof of concept, we investigate the most relevant three-point interactions in the human brain by searching for high-order “hubs” in a cohort of 100 individuals from the Human Connectome Project. We find that, for each choice of multivariate interdependence, the high-order hubs are compatible with distinct systems in the brain. Additionally, the high-order functional brain networks exhibit simultaneous integration and segregation patterns qualitatively observable from their high-order hubs. Our work hereby introduces a promising heuristic route for hypergraph representation of brain activity and opens up exciting avenues for further research in high-order network neuroscience and complex systems.
  • 2:50 - 3:00 pm EDT
    Topological feature selection for time series: an example with C. elegans neuronal data
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Johnathan Bush, University of Florida
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Neurons across the brain of the model organism C. elegans are known to share information by engaging in coordinated dynamic activity that evolves cyclically. Takens' theorem implies that a sliding window embedding of time series, such as neuronal activity, will preserve the topology of an orbit of the underlying dynamical system driving the time series. These orbits are then quantifiable by the persistent homology of the sliding window embedding. In this setting, we will describe a method for topological optimization in which each time series (e.g., a single neuron's activity) is assigned a score of its contribution to the global, coordinated dynamics of a collection of time series (e.g., the brain).
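    The pipeline described above can be prototyped in a few lines: build the sliding-window (delay) embedding of a time series and score its cyclic structure by the most persistent H1 feature of the resulting point cloud. The sketch below does this for a synthetic periodic signal and assumes the ripser package is available for the persistent homology computation; the per-neuron scoring and topological optimization discussed in the talk are further steps not shown here.

      import numpy as np
      from ripser import ripser

      def sliding_window(x, dim, tau):
          """Delay embedding: row i is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
          n = len(x) - (dim - 1) * tau
          return np.stack([x[i : i + n] for i in range(0, dim * tau, tau)], axis=1)

      t = np.linspace(0, 8 * np.pi, 800)
      activity = np.cos(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)   # cyclic "neural" signal

      cloud = sliding_window(activity, dim=10, tau=15)
      h1 = ripser(cloud, maxdim=1)["dgms"][1]                 # degree-1 persistence diagram
      persistence = h1[:, 1] - h1[:, 0]
      print("largest H1 persistence (large value suggests strong cyclic structure):", persistence.max())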
  • 3:00 - 3:10 pm EDT
    The Directed Merge Tree Distance and its Applications
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Xinyi Wang, Michigan State University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Geometric graphs appear in many real-world datasets, such as embedded neurons, sensor networks, and molecules. We investigate the notion of distance between graphs and present a semi-metric to measure the distance between two geometric graphs via the directional transform combined with the labeled merge tree distance. We introduce a way of rotating the sub-level set to obtain the merge trees, and represent the merge trees using a surjective multi-labeling scheme. We then compute the distance between two representative matrices. Our distance is not only reflective of the information from the input graphs, but also can be computed in polynomial time. We illustrate its utility by implementation on a Passiflora leaf dataset.
  • 3:10 - 3:20 pm EDT
    Structure Index: a graph-based method for point cloud data analysis
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Julio Esparza Ibanez, Instituto Cajal - CSIC (Spanish National Research Council)
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    A point cloud is a prevalent data format found in many fields of science, which involves the definition of points in an arbitrarily high dimensional space. Typically, each of these points is associated with additional values (i.e. features) which require interpretation in the representation space. For instance, in neuroscience, neural activity over time can be pictured as a point cloud in a high-dimensional space. In these so-called neural manifolds, one may project different features onto the point cloud, such as any relevant behavioral variable. In this context, understanding if and how a given feature is structured along a point cloud can provide great insights into the neural representations. Here, I will introduce the Structure Index (SI), a graph-based metric developed to quantify how a given feature is structured along an arbitrarily high-dimensional point cloud. The SI is defined from the overlapping distribution of data points sharing similar feature values in a given neighborhood of the cloud. Using arbitrary data clouds, I will show how the SI provides quantification of the degree of local versus global organization of feature distribution. Moreover, when applied to experimental studies of head-direction cells, the SI is able to retrieve consistent feature structure from both the high- and low-dimensional representations. Overall, the SI provides versatile applications in the neuroscience and data science fields. We look to share the tool with other colleagues in the field, in order to promote community-based testing and implementation.
  • 3:20 - 3:30 pm EDT
    Structure in neural correlations during spontaneous activity: an experimental and topological approach
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Nicole Sanderson, Penn State University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Calcium imaging recordings of ~1000s of neurons in zebrafish larvae optic tectum in the absence of stimulation reveal spontaneous activity of neuronal assemblies that are both functionally coordinated and localized. To understand the functional structure of these assemblies, we study the pairwise correlations of the calcium signals of assembly neurons using techniques from topological data analysis (TDA). TDA can bring new insights when analyzing neural correlations, as many common techniques to do so, like spectral analyses, are sensitive to nonlinear monotonic transformations introduced in measurement. In contrast, a TDA construction called the order complex is invariant under monotonic transformations and can capture higher order structure in a set of pairwise correlations. We find that topological signatures derived from the order complex can identify distinct neural correlation structures during spontaneous activity. Our analyses further suggest a variety of possible assembly dynamics around the onset of spontaneous activation.
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Topology shapes dynamics of higher-order networks
    11th Floor Lecture Hall
    • Speaker
    • Ginestra Bianconi, Queen Mary University of London
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Higher-order networks capture the interactions among two or more nodes and they are raising increasing interest in the study of brain networks. Here we show that higher-order interactions are responsible for new non-linear dynamical processes that cannot be observed in pairwise networks. We reveal how topology shapes non-linear dynamics by defining the Topological Kuramoto model and Topological global synchronization. These critical phenomena capture the synchronization of topological signals, i.e., dynamical signals defined not only on nodes but also on links, triangles and higher-dimensional simplices in simplicial complexes. In these novel synchronized states for topological signals, the dynamics localizes on the holes of the simplicial complexes. Moreover, we will discuss how the Dirac operator can be used to couple and process topological signals of different dimensions, formulating Dirac signal processing. Finally we will show how non-linear dynamics can shape topology by formulating triadic percolation. In triadic percolation, triadic interactions can turn percolation into a fully-fledged dynamical process in which nodes turn on and off intermittently, in a periodic fashion or even chaotically, leading to period doubling and a route to chaos of the percolation order parameter. Triadic percolation drastically changes our understanding of percolation and can describe real systems in which the giant component varies significantly in time, such as in brain functional networks and in climate.
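    For concreteness, the snippet below integrates one standard formulation of the topological Kuramoto model from the higher-order-networks literature, in which phases live on the edges of a simplicial complex and are coupled through the boundary operators B_1 (edges to nodes) and B_2 (triangles to edges): dtheta/dt = omega - sigma B_1^T sin(B_1 theta) - sigma B_2 sin(B_2^T theta). The tiny complex, coupling strength, and reported summaries are illustrative choices (assuming only numpy), not the analysis presented in the talk.

      import numpy as np

      edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
      triangles = [(1, 2, 3)]
      B1 = np.zeros((4, len(edges)))                    # boundary operator: edges -> nodes
      for j, (u, v) in enumerate(edges):
          B1[u, j], B1[v, j] = -1, 1
      B2 = np.zeros((len(edges), len(triangles)))       # boundary operator: triangles -> edges
      for k, (a, b, c) in enumerate(triangles):
          for face, sign in [((b, c), 1), ((a, c), -1), ((a, b), 1)]:
              B2[edges.index(face), k] = sign

      rng = np.random.default_rng(0)
      omega = rng.standard_normal(len(edges))           # natural frequencies of the edge phases
      theta = rng.uniform(0, 2 * np.pi, len(edges))
      sigma, dt = 2.0, 0.01
      for _ in range(5000):
          dtheta = omega - sigma * B1.T @ np.sin(B1 @ theta) - sigma * B2 @ np.sin(B2.T @ theta)
          theta = theta + dt * dtheta

      print("coherence of the node-projected signal:", round(float(np.abs(np.mean(np.exp(1j * B1 @ theta)))), 3))
      print("spread of final edge frequencies:", round(float(np.std(dtheta)), 3))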
  • 5:00 - 6:30 pm EDT
    Reception
    11th Floor Collaborative Space
Tuesday, October 17, 2023
  • 9:00 - 9:45 am EDT
    A power law of cortical adaptation in neural populations
    11th Floor Lecture Hall
    • Speaker
    • Dario Ringach, University of California, Los Angeles
    • Session Chair
    • Matilde Marcolli, California Institute of Technology
    Abstract
    How do neural populations adapt to the time-varying statistics of sensory input? To investigate, we measured the activity of neurons in primary visual cortex adapted to different environments, each associated with a distinct probability distribution over a stimulus set. Within each environment, a stimulus sequence was generated by independently sampling from its distribution. We find that two properties of adaptation capture how the population responses to a given stimulus, viewed as vectors, are linked across environments. First, the ratio between the response magnitudes is a power law of the ratio between the stimulus probabilities. Second, the response directions are largely invariant. These rules can be used to predict how cortical populations adapt to novel sensory environments. Finally, we show how the power law enables the cortex to signal unexpected stimuli preferentially and to adjust the metabolic cost of its sensory representation to the entropy of the environment.
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    A geometric model of the visual and motor cortex
    11th Floor Lecture Hall
    • Speaker
    • Giovanna Citti, University of Bologna
    • Session Chair
    • Matilde Marcolli, California Institute of Technology
    Abstract
    I'll present a geometric model of the motor cortex, joint work with Alessandro Sarti. Each family of cells in the cortex is sensitive to a specific feature and will be described as a sub-Riemannian space. The propagation of activity along cortical connectivity will be described by a sub-Riemannian differential equation. The stable states of the equation will describe the perceptual units, allowing us to validate the model. The model can be applied to selectivity for simple features (for example, direction of movement) or to more complex features, defined as perceptual units of the previous family of cells. The same instruments can describe both the visual and the motor cortex.
  • 11:15 - 11:45 am EDT
    Open Problems Session
    Problem Session - 11th Floor Lecture Hall
    • Session Chair
    • Matilde Marcolli, California Institute of Technology
  • 11:50 am - 12:00 pm EDT
    Group Photo (Immediately After Talk)
    11th Floor Lecture Hall
  • 12:00 - 1:30 pm EDT
    Networking Lunch
    Working Lunch - 11th Floor Lecture Hall
  • 1:30 - 2:15 pm EDT
    Topological analysis of sensory-evoked network activity
    11th Floor Lecture Hall
    • Speaker
    • Alex Reyes, New York University
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    Sensory stimuli evoke activity in a population of neurons in cortex. In topographically organized networks, activated neurons with similar receptive fields occur within a relatively confined area, suggesting that the spatial distribution and firing dynamics of the neuron population contribute to processing of sensory information. However, inherent variability in neuronal firing makes it difficult to determine which neurons encode signal and which represent noise. Here, we use simplicial complexes to identify functionally relevant neurons whose activities are likely to be propagated and to distinguish between multiple populations activated during complex stimuli. Moreover, preliminary analyses suggest that changes in the extent and magnitude of network activity can be described abstractly as the movement of points on the surface of a torus.
  • 2:30 - 2:40 pm EDT
    Analyzing spatiotemporal patterns using geometric scattering and persistent homology
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Dhananjay Bhaskar, Yale University
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    I will introduce Geometric Scattering Trajectory Homology (GSTH), a general framework for analyzing complex spatiotemporal patterns that emerge from coordinated signaling and communication in a variety of biological contexts, including Ca2+ activity in the prefrontal visual cortex in response to grating stimuli, and entrainment of theta oscillations in the brain during memory encoding and retrieval tasks. We tested this framework by recovering model parameters, drug treatments and stimuli from simulation and experimental data. Additionally, we show that learned representations in GSTH capture the degree of synchrony, phase transitions, and quasi-periodicity of the underlying signaling pattern at multiple scales, showing promise towards uncovering intricate neural communication mechanisms.
  • 2:40 - 2:50 pm EDT
    Multiple Neural Spike Train Data Analysis Using Persistent Homology
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Huseyin Ayhan, Florida State University
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    A neuronal spike train is the recorded sequence of times when a neuron fires action potentials, also known as spikes. Studying the collective activities of neurons as a network of spike trains can help us gain an understanding of how they function. These networks are well-suited for the application of topological tools. In this lightning talk, I will briefly explain how persistent homology, one of the most powerful tools of TDA, can be applied to understand and compare the topology of these networks.
  • 2:50 - 3:00 pm EDT
    Variability of topological features on brain functional networks in precision resting-state fMRI.
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Juan Carlos Díaz-Patiño, Universidad Nacional Autónoma de México
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    Nowadays, much scientific literature discusses Topological Data Analysis (TDA) applications in Neuroscience. Nevertheless, a fundamental question in the field is: how different are fMRI scans from one individual over a short time? Are they similar? What are the changes between individuals? This talk presents the approach used to study resting-state functional Magnetic Resonance Images (fMRI) with TDA methods, using the Vietoris-Rips filtration over a weighted network and looking for statistical differences between their Betti curves, as well as a vectorization method using the Minimum Spanning Tree.
  • 3:00 - 3:10 pm EDT
    Gabor Frames and Contact structures: Signal encoding and decoding in the primary visual cortex
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Vasiliki Liontou, ICERM
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    Contact structures and Gabor functions have been used, independently, to model the activity of the mammalian primary visual cortex. Gabor functions are also used in signal analysis, in particular in signal encoding and decoding. For instance, a one-dimensional signal, an $L^2$ function of one variable, can be represented in two dimensions, with time and frequency as coordinates. The signal is expanded into a series of Gabor functions (an analog of a Fourier basis), which are constructed from a single seed function by applying time and frequency translations. This talk summarizes the construction of a framework of signal analysis on models of $V_1$ determined by its contact structure, and suggests a mathematical model of $V_1$ which allows the encoding and decoding of a signal by a discrete family of orientation- and position-dependent receptive profiles.
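    To make the construction concrete, here is a minimal numerical sketch (an illustrative toy with arbitrary parameters, not the framework of the talk): a Gaussian seed window is translated in time and modulated in frequency to produce a discrete Gabor family, and a one-dimensional signal is analyzed by inner products against that family.

```python
import numpy as np

n = 256                                            # signal length
t = np.arange(n)
seed = np.exp(-0.5 * ((t - n / 2) / 16.0) ** 2)    # Gaussian seed window

a, b = 16, 8                                       # time step and number of frequency channels
atoms = []
for k in range(0, n, a):                           # time translations
    for m in range(b):                             # frequency modulations
        window = np.roll(seed, k - n // 2)
        atoms.append(window * np.exp(2j * np.pi * m * t / b))
atoms = np.array(atoms)

# Analyze a toy one-dimensional signal by inner products with each Gabor atom.
signal = np.sin(2 * np.pi * 0.05 * t) * np.exp(-0.5 * ((t - 100) / 30.0) ** 2)
coefficients = atoms.conj() @ signal
print(coefficients.shape)                          # one coefficient per (time, frequency) atom
```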
  • 3:10 - 3:20 pm EDT
    Harmonic Analysis of Sequences
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Hannah Santa Cruz, Penn State
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    The combinatorial Laplacian is a popular tool in graph and network analysis. Recent work has proposed the use of Hodge Laplacians and the magnetic Laplacian to analyze simplicial complexes and directed graphs, respectively. We continue this work by interpreting the Hodge Laplacian associated to a weighted simplicial complex in terms of a weight function which is induced by a probability distribution. In particular, we develop a null-hypothesis weighted simplicial complex model, induced by an independent distribution on the vertices, and show that the associated Laplacian is trivial. We extend this work to sequence complexes, where we consider the faces to be sequences, allowing for repeated vertices and distinguishing sequences with different orderings. In this setting, we also explore the Laplacian associated to a weight function induced by an independent distribution on the vertices, and completely describe its eigenspectrum, which is no longer trivial but still simple. Our analysis and findings contribute to the broader field of spectral graph theory and provide a deeper understanding of Laplacians on simplicial and sequence complexes, paving the way for further exploration and applications of Laplacian operators.
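    As a small worked example of the objects mentioned above (a generic construction, not the speaker's weighted model): for a simplicial complex one assembles boundary matrices $B_1$ and $B_2$ and forms the Hodge 1-Laplacian $L_1 = B_1^T B_1 + B_2 B_2^T$, whose kernel dimension equals the first Betti number.

```python
import numpy as np

vertices = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
triangles = [(0, 1, 2)]          # only the (0, 1, 2) triangle is filled in

# Boundary matrix B1 (vertices x edges): oriented edge (i, j) -> -1 at i, +1 at j.
B1 = np.zeros((len(vertices), len(edges)))
for e, (i, j) in enumerate(edges):
    B1[i, e], B1[j, e] = -1.0, 1.0

# Boundary matrix B2 (edges x triangles): boundary of (i, j, k) = (j, k) - (i, k) + (i, j).
edge_index = {edge: n for n, edge in enumerate(edges)}
B2 = np.zeros((len(edges), len(triangles)))
for f, (i, j, k) in enumerate(triangles):
    B2[edge_index[(j, k)], f] += 1.0
    B2[edge_index[(i, k)], f] -= 1.0
    B2[edge_index[(i, j)], f] += 1.0

# Hodge 1-Laplacian; dim ker(L1) equals the first Betti number of the complex.
L1 = B1.T @ B1 + B2 @ B2.T
betti_1 = len(edges) - np.linalg.matrix_rank(L1)
print(betti_1)   # 1: the unfilled cycle on vertices 1, 2, 3
```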
  • 3:20 - 3:30 pm EDT
    Group symmetry: a designing principle of recurrent neural circuits in the brain
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Wenhao Zhang, UT Southwestern Medical Center
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    Equivariant representation is necessary for the brain and artificial perceptual systems to faithfully represent a stimulus under some (Lie) group transformations. However, it remains unknown how recurrent neural circuits in the brain represent stimuli equivariantly, or how abstract group operators are represented neurally. In this talk, I will present my recent attempts to narrow this gap. We recently used the one-dimensional translation group and the temporal scaling group as examples to explore a general recurrent neural circuit mechanism of equivariant stimulus representation. We found that a continuous attractor network (CAN), a canonical neural circuit model, self-consistently generates a continuous family of stationary population responses (attractors) that represents the stimulus equivariantly. We rigorously derived the representation of group operators in the circuit dynamics. The derived circuits are comparable with concrete neural circuits discovered in the brain and can reproduce neuronal responses that are consistent with experimental data. Our model analytically demonstrates, for the first time, how recurrent neural circuitry in the brain can achieve equivariant stimulus representation.
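    A minimal sketch of the canonical model named here, a ring (continuous) attractor network with translation-invariant connectivity, is given below; all parameter values are illustrative choices, not taken from the talk.

```python
import numpy as np

N = 128                                   # neurons on a ring
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)

# Translation-invariant connectivity: weights depend only on angular distance,
# so the network dynamics commute with rotations of the ring (the symmetry group).
J0, J1 = -0.5, 1.0
W = (J0 + J1 * np.cos(theta[:, None] - theta[None, :])) / N

def simulate(stimulus_angle, steps=2000, dt=0.01, tau=0.1):
    """Rate dynamics tau * dr/dt = -r + relu(W @ r + input); returns final rates."""
    inp = 0.5 * np.exp(np.cos(theta - stimulus_angle) - 1.0)   # weakly tuned input
    r = np.zeros(N)
    for _ in range(steps):
        r += dt / tau * (-r + np.maximum(W @ r + inp, 0.0))
    return r

# Rotating the stimulus rotates the population bump: an equivariant representation.
for angle in (0.0, np.pi / 2, np.pi):
    r = simulate(angle)
    print(f"stimulus at {angle:.2f} rad -> bump peak at {theta[np.argmax(r)]:.2f} rad")
```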
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    A Neuron as a Direct Data-Driven Controller
    11th Floor Lecture Hall
    • Speaker
    • Dmitri Chklovskii, Flatiron Institute & NYU Neuroscience Institute
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    Efficient coding theories have elucidated the properties of neurons engaged in early sensory processing. However, their applicability to downstream brain areas, whose activity is strongly correlated with behavior, remains limited. Here we present an alternative viewpoint, casting neurons as feedback controllers in closed loops comprising fellow neurons and the external environment. Leveraging the novel Direct Data-Driven Control (DD-DC) framework, we model neurons as biologically plausible controllers which implicitly identify loop dynamics, infer latent states and optimize control. Our DD-DC neuron model accounts for multiple neurophysiological observations, including the transition from potentiation to depression in Spike-Timing-Dependent Plasticity (STDP) with its asymmetry, the temporal extent of feed-forward and feedback neuronal filters and their adaptation to input statistics, imprecision of the neuronal spike-generation mechanism under constant input, and the prevalence of operational variability and noise in the brain. The DD-DC neuron contrasts with the conventional, feedforward, instantaneously responding McCulloch-Pitts-Rosenblatt unit, thus offering an alternative foundational building block for the construction of biologically-inspired neural networks.
Wednesday, October 18, 2023
  • 9:00 - 9:45 am EDT
    Object representation in the brain
    11th Floor Lecture Hall
    • Speaker
    • Dmitry Rinberg, New York University
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Animals can recognize sensory objects that are relevant to their behavior, such as familiar sounds, faces, or the smell of specific fruits. This ability relies on the sensory system performing two key computational tasks: first, distinguishing a particular object from all other objects, and second, generalizing across some range of stimuli. The latter implies that objects have some range of variability in the stimulus space: the smell of an apple may be attributed to multiple different apple varieties with similar chemical composition. Additionally, as the presented stimuli become more different from what is expected or familiar, the ability to correctly identify them decreases. Such computational requirements set up constraints on the geometry of the neural space of object representation in the brain. In this presentation, I will delve into our efforts to investigate object representation in the brain, employing optogenetic pattern stimulation of the peripheral olfactory system to create highly controllable synthetic odor stimuli. We have developed a behavioral paradigm that enables us to address both essential computational prerequisites: discriminating between and generalizing across stimuli. Furthermore, we have quantified both behavioral responses and neural activity. Our findings have revealed that the neural space governing stimulus responses conforms closely to the criteria for effective object representation, closely mirroring behavioral outcomes.
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    The developmental timeline of the grid cell torus and how we are studying it
    11th Floor Lecture Hall
    • Speaker
    • Benjamin Dunn, Norwegian University of Science and Technology
    • Session Chair
    • Tatyana Sharpee, Salk Institute
  • 11:15 - 11:45 am EDT
    Open Problems Sessions
    Problem Session - 11th Floor Lecture Hall
    • Session Chair
    • Tatyana Sharpee, Salk Institute
  • 12:00 - 1:30 pm EDT
    Lunch/Free Time
  • 1:45 - 2:30 pm EDT
    Informational and topological signatures of individuality and age
    11th Floor Lecture Hall
    • Speaker
    • Giovanni Petri, CENTAI Institute
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Network neuroscience is a dominant paradigm for understanding brain function. Functional Connectivity (FC) encodes neuroimaging signals in terms of the pairwise correlation patterns of coactivations between brain regions. However, FC is by construction limited to such pairwise relations. In this seminar, we explore functional activations as a topological space via tools from topological data analysis. In particular, we analyze the resting-state fMRI data of populations of healthy subjects across ages, and demonstrate that algebraic-topological features extracted from brain activity are effective for brain fingerprinting. By computing persistent homology and constructing topological scaffolds, we show that these features outperform FC in discriminating between individuals and ages. That is, the topological structures are more similar for the same individual across different recording sessions than across individuals. Similarly, we find that topological observables improve discrimination of individuals of different ages. Finally, we show that the regions highlighted by our topological methods exhibit characteristic patterns of information redundancy and synergy which are not shared by regions that are topologically unimportant, hence establishing a first direct link between topology and information theory in neuroscience.
  • 2:45 - 2:55 pm EDT
    Ephemeral Persistence Features and the Stability of Filtered Chain Complexes
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Ling Zhou, ICERM
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    We strengthen the usual stability theorem for Vietoris-Rips persistent homology of finite metric spaces by building upon constructions due to Usher and Zhang in the context of filtered chain complexes. The information present at the level of filtered chain complexes includes ephemeral points, i.e. points with zero persistence, which provide information beyond what is present at the homology level. The resulting invariant, called the verbose barcode, has stronger discriminating power than the usual barcode and is proved to be stable under certain metrics which are sensitive to these ephemeral points. In the case of degree zero, we provide an explicit formula to compute this new metric between verbose barcodes.
  • 2:55 - 3:05 pm EDT
    Homotopy and singular homology groups of finite graphs
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Nikola Milicevic, Pennsylvania State University
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    We verify analogues of classical results for higher homotopy groups and singular homology groups of (Čech) closure spaces. Closure spaces are a generalization of topological spaces that also includes graphs and directed graphs, and are thus a bridge connecting classical algebraic topology with the more applied side of topology, such as digital topology. More specifically, we show the existence of a long exact sequence for homotopy groups of pairs of closure spaces and that a weak homotopy equivalence induces isomorphisms on homology groups. Our main result is the construction of weak homotopy equivalences between the geometric realizations of (directed) clique complexes and their underlying (directed) graphs. This implies that singular homology groups of finite graphs can be efficiently calculated from finite combinatorial structures, despite their associated chain groups being infinite dimensional. This work is similar to what McCord did for finite topological spaces, but in the context of closure spaces. Our results also give a novel approach for studying (higher) homotopy groups of discrete mathematical structures such as digital images.
  • 3:05 - 3:15 pm EDT
    Hebbian learning of cyclic structures of neural code
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Nikolas Schonsheck, University of Delaware
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Cyclic structures are a class of mesoscale features ubiquitous in both experimental stimuli and the activity of neural populations encoding them. Important examples include encoding of head direction, grid cells in spatial navigation, and orientation tuning in visual cortex. The central question of this short talk is: how does the brain faithfully transmit cyclic structures between regions? Is this a generic feature of neural circuits, or must this be learned? If so, how? While cyclic structures are difficult to detect and analyze with classical methods, tools from algebraic topology have proven to be particularly effective in understanding cyclic structures. Recently, work of Yoon et al. develops a topological framework to match cyclic coding patterns in distinct populations that encode the same information. We leverage this framework to show that, beginning with a random initialization, Hebbian learning robustly supports the propagation of cyclic structures through feedforward networks. This is joint work with Chad Giusti.
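    A toy version of this setup (purely illustrative, with arbitrary parameters; it is not the authors' construction): an upstream population with circular tuning drives a downstream layer through weights trained by a normalized Hebbian rule, after which one can test, e.g. with persistent homology on the downstream correlation matrix, whether the circular structure survives.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out, n_stim = 60, 40, 500

# Upstream population: von Mises tuning curves tiling a circular stimulus variable.
preferred = np.linspace(0, 2 * np.pi, n_in, endpoint=False)
stimuli = rng.uniform(0, 2 * np.pi, n_stim)
X = np.exp(2.0 * np.cos(stimuli[:, None] - preferred[None, :]))   # (n_stim, n_in)

# Feedforward weights: random initialization, Hebbian updates, row normalization.
W = rng.normal(scale=0.1, size=(n_out, n_in))
eta = 0.01
for x in X:
    y = np.maximum(W @ x, 0.0)          # rectified downstream response
    W += eta * np.outer(y, x)           # Hebbian update
    W /= np.linalg.norm(W, axis=1, keepdims=True)

# Downstream responses and their correlation matrix; Vietoris-Rips persistence on
# 1 - correlation would reveal whether a 1-cycle (circular structure) is preserved.
Y = np.maximum(X @ W.T, 0.0)
downstream_corr = np.corrcoef(Y.T)
print(downstream_corr.shape)
```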
  • 3:15 - 3:25 pm EDT
    The bifiltration of a relation and unsupervised inference of neural representations
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Melvin Vaupel, Norwegian University of Science and Technology
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    To neural activity one may associate a space of correlations and a space of population vectors. These can provide complementary information. Assume the goal is to infer properties of a covariate space, represented by the recorded neurons. Then the correlation space is better suited if multiple neural modules are present, while the population vector space is preferable if neurons have non-convex receptive fields. In this talk I will explain how to coherently combine both pieces of information in a bifiltration using Dowker complexes and their total weight filtrations.
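    For concreteness, a small sketch of the basic object in this talk (the generic Dowker construction, not the total weight bifiltration itself): given a binary relation between neurons and, say, time bins, the Dowker complex on the neurons has a simplex for every set of neurons witnessed by a common column.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# Binary relation: rows are neurons, columns are time bins (1 = neuron active).
R = (rng.random((6, 30)) < 0.3).astype(int)

def dowker_complex(relation, max_dim=2):
    """Simplices = sets of rows that are simultaneously related to some column."""
    simplices = set()
    for rows in (np.flatnonzero(col) for col in relation.T):
        for k in range(1, min(len(rows), max_dim + 1) + 1):
            simplices.update(combinations(rows.tolist(), k))
    return sorted(simplices, key=lambda s: (len(s), s))

for simplex in dowker_complex(R):
    print(simplex)
```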
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    An application of neighbourhoods in directed graphs in the classification of binary dynamics
    11th Floor Lecture Hall
    • Speaker
    • Ran Levi, University of Aberdeen
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    A binary state on a graph is an assignment of binary values to its vertices. For example, if one encodes a network of spiking neurons as a directed graph, then the spikes produced by the neurons at an instant of time form a binary state on the encoding graph. Allowing time to vary and recording the spiking patterns of the neurons in the network produces an example of a binary dynamics on the encoding graph, namely a one-parameter family of binary states on it. The central object of study in this talk is the neighbourhood of a vertex v in a graph G, namely the subgraph of G that is generated by v and all its direct neighbours in G. We present a topological/graph-theoretic method for extracting information out of binary dynamics on a graph, based on a selection of a relatively small number of vertices and their neighbourhoods. As a test case, we demonstrate an application of the method to binary dynamics arising from sample activity on the Blue Brain Project reconstruction of cortical tissue of a rat.
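    A minimal sketch of the central object (assuming the networkx package; a toy graph, not the Blue Brain connectome): the neighbourhood of a vertex v in a directed graph is the subgraph induced on v together with all of its in- and out-neighbours.

```python
import networkx as nx

# Toy directed graph standing in for a network of spiking neurons.
G = nx.DiGraph([(0, 1), (1, 2), (2, 0), (3, 1), (1, 4), (4, 5)])

def neighbourhood(G, v):
    """Subgraph induced on v and all of its direct in- and out-neighbours."""
    nodes = {v} | set(G.predecessors(v)) | set(G.successors(v))
    return G.subgraph(nodes).copy()

N1 = neighbourhood(G, 1)
print(sorted(N1.nodes()), sorted(N1.edges()))
```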
Thursday, October 19, 2023
  • 9:00 - 9:45 am EDT
    How to simulate a connectome?
    11th Floor Lecture Hall
    • Speaker
    • Srinivas Turaga, HHMI - Janelia Research Campus
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    We can now measure the connectivity of every neuron in a neural circuit, but we are still blind to other biological details, including the dynamical characteristics of each neuron. The degree to which connectivity measurements alone can inform understanding of neural computation is an open question. We show that with only measurements of the connectivity of a biological neural network, we can predict the neural activity underlying neural computation. Our mechanistic model makes detailed experimentally testable predictions for each neuron in the connectome. We found that model predictions agreed with experimental measurements of neural activity across 24 studies. Our work demonstrates a strategy for generating detailed hypotheses about the mechanisms of neural circuit function from connectivity measurements. https://www.biorxiv.org/content/10.1101/2023.03.11.532232
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    From single neurons to complex networks using algebraic topology
    11th Floor Lecture Hall
    • Speaker
    • Lida Kanari, EPFL/Blue Brain
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Topological Data Analysis has been successfully used in a variety of applications including the study of proteins, cancer detection, and the study of porous materials. Based on algebraic topology, we created a robust topological descriptor of neuronal morphologies and used it to classify and cluster neurons and microglia. But what can topology tell us about the functional roles of neurons in the brain? In this talk, I will focus on the study of the human brain, delving deeper into the fundamental neuroscience question of whether dendritic structures hold the key to enhanced cognitive abilities. Starting from the topological differences between mouse and human neurons, we create artificial networks for both species. We show that topological complexity leads to highly interconnected pyramidal-to-pyramidal and higher-order networks, which is unexpected in view of the reduced neuronal density of the human neocortex compared to the mouse. We thus present robust evidence that increased topological complexity in human neurons ultimately leads to highly interconnected cortical networks despite reduced neuronal density. https://www.biorxiv.org/content/10.1101/2023.09.11.557170v1
  • 11:30 am - 1:30 pm EDT
    Open Problems Lunch
    Working Lunch
  • 1:30 - 2:15 pm EDT
    Rapid emergence of latent knowledge in the sensory cortex drives learning
    11th Floor Lecture Hall
    • Speaker
    • Kishore Kuchibhotla, Johns Hopkins University
    • Session Chair
    • Horacio Rotstein, New Jersey Institute of Technology
    Abstract
    Large-scale neural recordings provide an opportunity to better understand how the brain implements critical behavioral computations related to goal-directed learning. Here, I will argue that revisiting our understanding of the shape of the learning curve and its underlying cognitive drivers is essential for uncovering its neural basis. Rather than thinking about learning as either ‘slow’ or ‘sudden’, I will argue that learning is better interpreted as a combination of the two. I will provide behavioral evidence that goal-directed learning can be dissociated into two parallel processes: knowledge acquisition, which is rapid with step-like improvements, and behavioral expression, which is slower and more variable, with animals exhibiting rudimentary forms of hypothesis testing. This behavioral approach has allowed us to isolate the associative (knowledge-related) and non-associative (performance-related) components that influence learning. I will present probabilistic optogenetic and longitudinal two-photon imaging results showing that neural dynamics in the auditory cortex are crucial for auditory-guided, goal-directed learning. Conjoint representations of sensory and non-sensory variables in the same auditory cortical network evolve in a structured and dynamic manner, actively integrating multimodal signals via dissociable neural ensembles. Our data suggest that the sensory cortex is an associative engine, with the cortical network shifting from being largely stimulus-driven to one that is optimized for behavioral needs.
  • 2:30 - 3:15 pm EDT
    Margin learning in spiking neurons
    11th Floor Lecture Hall
    • Speaker
    • Robert Gütig, Charité Medical School Berlin
    • Session Chair
    • Horacio Rotstein, New Jersey Institute of Technology
    Abstract
    Learning novel sensory features from few examples is a remarkable ability of humans and other animals. For example, we can recognize unfamiliar faces or words after seeing or hearing them only a few times, even in different contexts and noise levels. Previous work has shown that spiking neural networks can learn to detect unknown features in unsegmented input streams using multi-spike tempotron learning. However, this method requires many training patterns and the learned solutions can be sensitive to noise. In this work, we use multi-spike tempotron learning to implement margin learning in spiking neurons. Specifically, we introduce regularization terms that enable leaky-integrate-and-fire neurons to learn to detect recurring features using orders of magnitude less training data and converge to robust solutions. We test the novel learning rule on unsegmented spoken digit sequences contained in the TIDIGITS speech data set and find a twofold improvement in detection probability over the original learning algorithm. Our work shows how neurons can learn to detect embedded features from a limited number of unsegmented samples, provides fundamental bounds for the noise robustness of the leaky integrate-and-fire model and ties mathematically principled gradient-based optimization to biologically plausible learning in spiking neurons.
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Metastable dynamics in cortical circuits
    11th Floor Lecture Hall
    • Speaker
    • Giancarlo La Camera, Stony Brook University
    • Session Chair
    • Horacio Rotstein, New Jersey Institute of Technology
    Abstract
    I will discuss recent results on metastable dynamics in cortical circuits, characterized by seemingly random switching among a finite number of discrete states. Single states and their metastable dynamics can reflect abstract features of external stimuli as well as internal deliberations, and have been proposed to support a variety of functions including sensory coding, expectation, decision making, and behavioral accuracy. Many results in this context have been captured by spiking network models with a clustered architecture. I will review data and models while trying to provide a model-inspired unitary view of the phenomena discussed. If time permits, I will present a model of how this type of dynamics can emerge from (and coexist with) experience-dependent plasticity in a network of spiking neurons.
Friday, October 20, 2023
  • 9:00 - 9:45 am EDT
    Learning topological structure in neural population codes
    11th Floor Lecture Hall
    • Speaker
    • Chad Giusti, Oregon State University
    • Session Chair
    • Vladimir Itskov, The Pennsylvania State University
    Abstract
    The stimulus space model for neural population activity describes the activity of individual neurons as points localized in a metric stimulus space, with firing rate falling off with distance to individual stimuli. We will briefly review this model, and discuss how methods from topological data analysis allow us to extract qualitative structure and coordinate systems for such spaces from measures of neural population activity. We will briefly explore challenges that arise when studying whether and how multiple neural populations encode the same topological structure, and discuss recent experiments involving Hebbian learning for circular coordinate systems in feed-forward networks. No prior knowledge of topological methods will be assumed.
  • 10:00 - 10:45 am EDT
    Topological tracing of encoded circular coordinates between neural populations
    11th Floor Lecture Hall
    • Speaker
    • Iris Yoon, Wesleyan University
    • Session Chair
    • Vladimir Itskov, The Pennsylvania State University
    Abstract
    Recent developments in in vivo neuroimaging in animal models have made possible the study of information coding in large populations of neurons and even how that coding evolves in different neural systems. Topological methods, in particular, are effective at detecting periodic, quasi-periodic, or circular features in neural systems. Once we detect the presence of circular structures, we face the problem of assigning semantics: what do the circular structures in a neural population encode? Are they reflections of an underlying physiological activity, or are they driven by an external stimulus? If so, which specific features of the stimulus are encoded by the neurons? To address this problem, we introduced the method of analogous bars (Yoon, Ghrist, Giusti 2023). Given two related systems, say a stimulus system and a neural population, or two related neural populations, we use the dissimilarity between them together with Dowker complexes to find shared features, and we then leverage this information to identify related features across the two systems. In this talk, I will briefly explain the mathematics underlying the analogous bars method. I will then present applications of the method to studying neural population coding and propagation on simulated and experimental datasets. This work is joint with Gregory Henselman-Petrusek, Lori Ziegelmeier, Robert Ghrist, Spencer Smith, Yiyi Yu, and Chad Giusti.
  • 11:00 - 11:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 11:30 am - 12:15 pm EDT
    The neurogeometry of the visual cortex
    11th Floor Lecture Hall
    • Speaker
    • Alessandro Sarti, National Center of Scientific Research, EHESS, Paris
    • Session Chair
    • Vladimir Itskov, The Pennsylvania State University
    Abstract
    I will consider a model of the primary visual cortex in terms of Lie groups equipped with a sub-Riemannian metric. The shape of receptive profiles as well as the patterns of short-range and long-range connectivity will have a precise geometric meaning. After showing examples of contour completion in the sub-Riemannian structure, I will consider the coupling of heterogeneous cells to model amodal completion (the Kanizsa triangle) as well as contrast-constancy image reconstruction in V1. The reconstruction involves a new type of Poisson problem with heterogeneous differential operators. (Joint work with Giovanna Citti)
  • 12:30 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 2:45 pm EDT
    Final Open Problems Session
    Problem Session - 11th Floor Lecture Hall
    • Session Chair
    • Carina Curto, The Pennsylvania State University
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, October 23, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 11:00 am EDT
    Open Problems
    Journal Club - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    10th Floor Collaborative Space
  • 3:30 - 5:00 pm EDT
    A modern tour of Hopfield networks: from ferromagnetism to ChatGPT - Tom Burns
    Group Work - 10th Floor Classroom
Tuesday, October 24, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:00 - 3:30 pm EDT
    Coffee Break
    10th Floor Collaborative Space
Wednesday, October 25, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EDT
    Professional Development: Job Applications
    Professional Development - 11th Floor Lecture Hall
  • 12:00 - 1:30 pm EDT
    TDA Tutorial
    Tutorial - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    10th Floor Collaborative Space
  • 3:30 - 4:30 pm EDT
    Bump attractors and waves in networks of leaky integrate-and-fire neurons
    11th Floor Lecture Hall
    • Speaker
    • Daniele Avitabile, Vrije Universiteit Amsterdam
    • Session Chair
    • Peter Thomas, Case Western Reserve University
Thursday, October 26, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    TDA Tutorial
    Tutorial - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    10th Floor Collaborative Space
Friday, October 27, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:30 - 10:30 am EDT
    "Something Cool I Know" Seminar
    10th Floor Classroom
  • 11:00 - 11:30 am EDT
    On the Relation Between Infinitesimal Shape Response Curves and Phase-Amplitude Reduction for Single and Coupled Limit-Cycle Oscillators
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
    • Maxwell Kreider, Case Western Reserve University
    Abstract
    Phase reduction is a well-established method to study weakly driven and weakly perturbed oscillators. Traditional phase-reduction approaches characterize the perturbed system dynamics solely in terms of the timing of the oscillations. In the case of large perturbations, the introduction of amplitude (isostable) coordinates improves the accuracy of the phase description by providing a sense of distance from the underlying limit cycle. Importantly, phase-amplitude coordinates allow for the study of both the timing and shape of system oscillations. A parallel tool is the infinitesimal shape response curve (iSRC), a variational method that characterizes the shape change of a limit-cycle oscillator under sustained perturbation. Despite the importance of oscillation amplitude in a wide range of physical systems, systematic studies on the shape change of oscillations remain scarce. Both phase-amplitude coordinates and the iSRC represent methods to analyze oscillation shape change, yet a relationship between the two has not been previously explored. In this work, we establish the iSRC and phase-amplitude coordinates as complementary tools to study oscillation amplitude. We extend existing iSRC theory and specify conditions under which a general class of systems can be analyzed by the joint iSRC phase-amplitude approach. We show that the iSRC takes on a dramatically simple form in phase-amplitude coordinates, and directly relate the phase and isostable response curves to the iSRC. We apply our theory to weakly perturbed single oscillators, and to study the synchronization and entrainment of coupled oscillators.
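    For orientation, the standard phase-isostable (phase-amplitude) reduction of a limit-cycle oscillator driven by a weak input $\varepsilon p(t)$ takes the textbook form below (included only as background; the notation and assumptions in the talk may differ):

```latex
\begin{aligned}
\dot{\theta}  &= \omega + \varepsilon\, Z(\theta) \cdot p(t), \\
\dot{\psi}_k  &= \kappa_k \psi_k + \varepsilon\, I_k(\theta) \cdot p(t),
\end{aligned}
```

    where $Z$ is the phase response curve, $I_k$ are the isostable response curves, and $\kappa_k$ the Floquet exponents; the iSRC then captures the corresponding first-order change in the shape of the oscillation.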
  • 11:30 am - 12:00 pm EDT
    Structure in neural correlations during spontaneous activity: an experimental and topological approach
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
    • Nicole Sanderson, Penn State University
    Abstract
    Calcium imaging recordings of ~1000s of neurons in zebrafish larvae optic tectum in the absence of stimulation reveal spontaneous activity of neuronal assemblies that are both functionally coordinated and localized. To understand the functional structure of these assemblies, we study the pairwise correlations of the calcium signals of assembly neurons using techniques from topological data analysis (TDA). TDA can bring new insights when analyzing neural correlations, as many common techniques to do so, like spectral analyses, are sensitive to nonlinear monotonic transformations introduced in measurement. In contrast, a TDA construction called the order complex is invariant under monotonic transformations and can capture higher order structure in a set of pairwise correlations. We find that topological signatures derived from the order complex can identify distinct neural correlation structures during spontaneous activity. Our analyses further suggest a variety of possible assembly dynamics around the onset of spontaneous activation.
  • 1:30 - 3:00 pm EDT
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    10th Floor Collaborative Space
Monday, October 30, 2023
  • 8:50 - 9:00 am EDT
    Welcome
    11th Floor Lecture Hall
    • Session Chair
    • Brendan Hassett, ICERM/Brown University
  • 9:00 - 9:45 am EDT
    How to perform computations in low-rank excitatory-inhibitory spiking networks: a geometric view
    11th Floor Lecture Hall
    • Speaker
    • Christian Machens, Champalimaud Foundation
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Models of neural networks can be largely divided into two camps. On one end, mechanistic models such as balanced spiking networks resemble activity regimes observed in data, but are often limited to simple computations. On the other end, functional models like trained deep networks can perform a multitude of computations, but are far removed from experimental physiology. Here, I will introduce a new framework for excitatory-inhibitory spiking networks which retains key properties of both mechanistic and functional models. The principal insight is to cast the problem of spiking dynamics in the low-dimensional space of population modes rather than in the original neural space. Neural thresholds then become convex boundaries in the population space, and population dynamics is either attracted (I population) or repelled (E population) by these boundaries. The combination of E and I populations results in balanced, inhibition-stabilized networks which are capable of universal function approximation. I will illustrate these insights with simple, geometric toy models, and I will argue that we need to reconsider the very basics of how we think about neural networks.
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    Structured variability and its roles in neural computation: the hippocampus perspective
    11th Floor Lecture Hall
    • Speaker
    • Cristina Savin, NYU
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Local circuit interactions play a key role in neural computation and are dynamically shaped by experience. However, measuring and assessing their effects during behavior remains a challenge. Here we combine techniques from statistical physics and machine learning to develop new tools for determining the effects of local network interactions on neural population activity. This approach reveals highly structured local interactions between hippocampal neurons, which make the neural code more precise and easier to read out by downstream circuits, across different levels of experience. More generally, the novel combination of theory and data analysis in the framework of maximum entropy models enables traditional neural coding questions to be asked in a naturalistic setting.
  • 11:15 - 11:45 am EDT
    Open problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Carina Curto, The Pennsylvania State University
    • Katie Morrison, University of Northern Colorado
  • 11:45 am - 1:30 pm EDT
    Lunch/Free Time
  • 1:30 - 2:15 pm EDT
    The topology, geometry, and combinatorics of feedforward neural networks
    11th Floor Lecture Hall
    • Speaker
    • Julia E Grigsby, Boston College
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    Deep neural networks are a class of parameterized functions that have proven remarkably successful at making predictions about unseen data from finite labeled data sets. They do so even in settings when classical intuition suggests that they ought to be overfitting (aka memorizing) the data. I will begin by describing the structure of neural networks and how they learn. I will then advertise one of the theoretical questions animating the field: how does the relationship between the number of parameters and the size of the data set impact the dynamics of how they learn? Along the way I will emphasize the many ways in which topology, geometry, and combinatorics play a role in the field.
  • 2:30 - 2:40 pm EDT
    Correlated dense associative memories
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Thomas Burns, ICERM
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    Associative memory networks encode memory patterns by establishing dynamic attractors centred on specific states of neurons. These attractors, nonetheless, are not constrained to remain fixed points or singular memory patterns. Through the correlation of these attractors and asymmetry of the network's connections, we can depict sequences or sets of stimuli that are temporally or spatially connected via mathematical graphs. By further modulating these correlations using inhibitory (anti-Hebbian) learning rules, we show how structures may be hierarchically-segmented at multiple scales. Such structures can also be used to conduct 'computations' where sequences of (quasi-)attractors code for an 'associationist' algorithmic syntax. This therefore illustrates how auto- and hetero-associative recall processes can form as a basis for executing more complex network behaviours, which is aided by the highly nonlinear energy landscape inherent in dense associative memory networks (also known as modern Hopfield networks).
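    As background for the dense associative memory (modern Hopfield network) formulation mentioned above, here is a minimal retrieval step in the widely used softmax form (a generic sketch with toy patterns, not the correlated-attractor construction of the talk):

```python
import numpy as np

rng = np.random.default_rng(4)

# Stored memory patterns (rows) and an inverse-temperature parameter beta.
patterns = rng.choice([-1.0, 1.0], size=(5, 100))
beta = 4.0

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def retrieve(state, n_steps=5):
    """Dense associative memory update: state <- patterns.T @ softmax(beta * patterns @ state)."""
    for _ in range(n_steps):
        state = patterns.T @ softmax(beta * patterns @ state)
    return state

# Start from a corrupted version of pattern 0 and check which memory is recovered.
probe = patterns[0].copy()
probe[:30] *= -1                                   # flip 30 of the 100 entries
overlaps = patterns @ retrieve(probe) / patterns.shape[1]
print(np.round(overlaps, 2))                       # the largest overlap should be with pattern 0
```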
  • 2:40 - 2:50 pm EDT
    Using spherical coordinates and Stiefel manifolds to decode neural data
    Lightning Talks - 11th Floor Collaborative Space
    • Speaker
    • Nikolas Schonsheck, University of Delaware
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    A central challenge in modern computational neuroscience is decoding behaviors and stimuli from the activity of the neural populations that encode them. In this talk, I will describe a few examples of how one can do this using novel techniques from algebraic topology. I will describe a method that is well-suited to stimuli with spherical geometry, and another method that can be used on Stiefel manifolds. For the latter, I will discuss an application to simulated data on a partially sampled circular stimulus space where standard persistence techniques fail.
  • 2:50 - 3:00 pm EDT
    Active sensing and switching in neural population activity
    Lightning Talks - 11th Floor Collaborative Space
    • Speaker
    • Soon Ho Kim, Georgia Institute of Technology
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    While sensory processing and motor control are often studied in isolation, perception and action are fundamentally intertwined. Here we study electrophysiological recordings of the barrel cortex of mice during a shape discrimination task in which mice actively whisk their surroundings to identify and discriminate objects. We find significant changes in the intra- and interlaminar functional connectivity during whisking trials when compared to the resting state, with information flow from superficial to deep layers becoming more prominent. We further use a novel generalized linear model developed for spiking neural activity, coupled with a hidden Markov model, to analyze state transitions in neural activity as well as in behavior. The results shed light on the neural activity underpinning active whisking and sensory perception.
  • 3:00 - 3:10 pm EDT
    On the Convexity of Certain 4-Maximal Neural Codes
    Lightning Talks - 11th Floor Collaborative Space
    • Speaker
    • Natasha Crepeau, University of Washington
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    A convex neural code describes the regions of an arrangement of convex open sets in Euclidean space, where each set corresponds to the place field of a neuron in an animal's environment. The convexity of neural codes with up to three maximal codewords is completely characterized by the lack of local obstructions, introduced by Giusti and Itskov. Another indicator of non-convexity, introduced by Perez, Matusevich, and Shiu, is the wheel. Jeffs has conjectured that a 4-maximal neural code is convex if and only if it has no local obstructions and no wheels. By studying the nerve of the maximal codewords of a given code, we resolve this conjecture for certain classes of 4-maximal neural codes. Additionally, we describe a type of wheel always contained in a family of 4-maximal neural codes, with the goal of identifying more.
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    TBD
    11th Floor Lecture Hall
    • Speaker
    • Leenoy Meshulam, University of Washington
    • Session Chair
    • Nora Youngs, Colby College
  • 5:00 - 6:30 pm EDT
    Reception
    11th Floor Collaborative Space
Tuesday, October 31, 2023
  • 9:00 - 9:45 am EDT
    Hyperbolic geometry and power law adaptation in neural circuits
    11th Floor Lecture Hall
    • Speaker
    • Tatyana Sharpee, Salk Institute
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    Where can a place cell put its fields? Let us count the ways
    11th Floor Lecture Hall
    • Speaker
    • Thibaud Taillefumier, UT Austin
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    A hippocampal place cell exhibits multiple firing fields within and across environments. What factors determine the configuration of these fields, and could they be set down in arbitrary locations? We conceptualize place cells as perceptrons performing evidence combination across many inputs, including grid-cell drives, and selecting a threshold to fire. Grid cells provide geometrically organized, multiscale periodic inputs. We characterize and count which field arrangements a place cell can realize with such structured inputs. The number of realizable place-field arrangements with grid-like inputs is much larger than with one-hot coded inputs of the same input dimension. However, the realizable place-field arrangements make up a vanishing fraction of all possible arrangements. We show that the “separating capacity”, or spatial range over which all field arrangements are realizable, is given by the rank of the grid-like input matrix; this rank equals the sum of distinct grid periods, a small fraction of the coding range, which scales as the product of the periods. Compared to random inputs over the same range, grid-structured inputs generate larger margins, conferring stability to place fields. Finally, the realizable arrangements are determined by the input geometry; the model thus predicts that place fields should lie in constrained arrangements within and across environments.
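    A numerical sketch of the rank statement above (toy periods chosen only for illustration): build grid-like inputs as one-hot codes of position modulo each period, stack them, and compare the matrix rank with the coding range.

```python
import numpy as np

periods = [5, 7, 9]                 # toy grid periods (distinct modules)
L = int(np.prod(periods))           # coding range scales as the product of the periods

# Grid-like inputs: for each module, a one-hot code of position modulo its period.
blocks = []
for p in periods:
    block = np.zeros((p, L))
    block[np.arange(L) % p, np.arange(L)] = 1.0
    blocks.append(block)
grid_inputs = np.vstack(blocks)     # (sum of periods) x (product of periods)

print("coding range (positions):", L)                      # 315
print("input dimension         :", grid_inputs.shape[0])   # 21
print("rank of grid inputs     :", np.linalg.matrix_rank(grid_inputs))
# The rank is set by the grid periods (close to their sum), a small fraction of
# the coding range, which grows as the product of the periods.
```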
  • 11:15 - 11:45 am EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Zachary Kilpatrick, University of Colorado Boulder
    • Tatyana Sharpee, Salk Institute
  • 11:50 am - 12:00 pm EDT
    Group Photo (Immediately After Talk)
    11th Floor Lecture Hall
  • 12:00 - 1:30 pm EDT
    Networking Lunch
    Working Lunch - 11th Floor Collaborative Space
  • 1:30 - 2:15 pm EDT
    Toward a unifying theory of context-dependent efficient coding of sensory spaces
    11th Floor Lecture Hall
    • Speaker
    • Gaia Tavoni, Washington University in St. Louis
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Contextual information can powerfully influence the neural representation and perception of stimuli across the senses: multimodal cues, stimulus history, novelty, rewards, and behavioral goals can all affect how sensory inputs are encoded in the brain. Experimental findings are scattered, and a top-down overarching interpretation is lacking. Our goal is to develop a unifying theory of context-dependent sensory coding, beginning with the olfactory system. We use an approach based on the information-theoretic hypothesis that optimal codes strive to maximize the overall entropy (decodability) of sensory neural representations while minimizing neural costs (e.g., in energetic terms). A novel feature of our theory is that it incorporates contextual feedback: this allows us to predict how optimal odor representations are modulated by top-down signals that represent different types of context, including the overall multisensory environment and behavioral goals. Our theory reproduces (and provides a unifying interpretation of) a large number of experimental observations. These include adaptation to familiar stimuli, background suppression and detection of novel odors in mixtures, pattern separation between similar odors after a single sniff, increased responsiveness of neurons to behaviorally salient stimuli, and figure-ground segregation of salient odor targets. It also makes novel predictions, such as the amplification of some of these effects in ambiguous multisensory contexts, and the emergence of olfactory illusions in specific environments. Our predictions generalize to a broad class of canonical microcircuits, suggesting that the efficient coding principles uncovered here may also apply to the building blocks of other sensory systems. Finally, we show that our optimal-coding solutions can be learned in neural circuits through Hebbian synaptic plasticity. This result connects our normative findings (Marr's computational level of analysis) to biologically plausible processes (Marr's implementational level of analysis). In conclusion, we have taken significant steps towards developing a context-dependent efficient coding theory that is biologically interpretable, is broadly applicable across sensory systems, and establishes a conceptual foundation for studying sensory coding associated with behavior.
  • 2:30 - 3:15 pm EDT
    Visual coding shaped by anatomical and functional connectivity structures
    11th Floor Lecture Hall
    • Speaker
    • Hannah Choi, Georgia Institute of Technology
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Visual cortical neurons encode diverse context-dependent information of visual inputs. For example, neuronal populations encode specific features of visual stimuli such as orientation, direction of movement, or object identities, while also encoding prior experience and expectations. This talk will focus on understanding how such different neural codes are shaped by both anatomical and functional connectivity of neuronal populations in the mouse visual cortex across multiple regions. In a recent experimental study, we found that lower cortical areas such as the primary visual cortex and the posterior medial higher order visual area primarily encode image identities from both expected and unexpected sequences of natural images, while neural responses in the retrosplenial cortex strongly represent expectation, in accordance with predictive coding theory. Motivated by this, we study how inter-areal layer-specific connectivity modulates the representation of task-relevant information such as input identity and expectation violation by performing representational analyses on recurrent neural networks with systematically altered structural motifs. The second part of the talk will focus on how visual stimuli of varying complexity drive functional connectivity of neurons in the mouse visual cortex. Our analyses of electrophysiological data across multiple areas of visual cortex reveal that the frequencies of different low-order connectivity motifs are preserved across a range of stimulus complexity, suggesting the role of specific motifs as local computational units of visual information.
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Restructuring of olfactory representations in the fly brain around odor relationships in natural sources
    11th Floor Lecture Hall
    • Speaker
    • Betty Hong, California Institute of Technology
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    A core challenge of olfactory neuroscience is to understand how neural representations of odor are generated and transformed through successive layers of the olfactory circuit into formats that support perception and behavior. The encoding of odor by odorant receptors in the input layer of the olfactory system reflects, at least in part, the chemical relationships between odor compounds. Neural representations of odor in higher-order associative olfactory areas, generated by random feedforward networks, are expected to largely preserve these input odor relationships. We evaluated these ideas by examining how odors are represented at different stages of processing in the olfactory circuit of the vinegar fly D. melanogaster. We found that representations of odor in the mushroom body (MB), a third-order associative olfactory area in the fly brain, are indeed structured and invariant across flies. However, the structure of MB representational space diverged significantly from what is expected in a randomly connected network. In addition, odor relationships encoded in the MB were better correlated with a metric of the similarity of their distribution across natural sources than with their similarity with respect to chemical features, and the converse was true for odor relationships encoded in primary olfactory receptor neurons (ORNs). Comparison of odor coding at primary, secondary, and tertiary layers of the circuit revealed that odors were significantly regrouped with respect to their representational similarity across successive stages of olfactory processing, with the largest changes occurring in the MB. The non-linear reorganization of odor relationships in the MB indicates that unappreciated structure exists in the fly olfactory circuit, and this structure may facilitate the generalization of odors with respect to their co-occurrence in natural sources.
Wednesday, November 1, 2023
  • 9:00 - 9:45 am EDT
    Information theoretical approaches to model synaptic plasticity
    11th Floor Lecture Hall
    • Speaker
    • Taro Toyoizumi, Riken Center for Brain Science
    • Session Chair
    • Horacio Rotstein, New Jersey Institute of Technology
    Abstract
    We adjust our behavior adaptively, based on experience, to thrive in our environment. Activity-dependent synaptic plasticity within neural circuits is believed to be a fundamental mechanism that enables such adaptive behavior. In this talk, I will introduce a top-down approach to modeling synaptic plasticity. Specifically, recognizing the brain as an information-processing organ, I posit that synaptic plasticity mechanisms have evolved to transmit information across synapses efficiently. This perspective suggests a method to identify hidden independent sources behind sensory scenes. I will demonstrate that it is feasible to reconstruct even nonlinearly mixed sources that underlie sensory inputs when sensors of sufficiently high dimension are employed. Furthermore, the theory also helps in interpreting experimentally observed results: it reproduces the distinct outcomes of synaptic plasticity observed in up and down states during non-rapid eye movement sleep, shedding light on how memory consolidation might be influenced by the states and spatial scale of slow waves.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    TBD
    11th Floor Lecture Hall
    • Speaker
    • Nora Youngs, Colby College
    • Session Chair
    • Horacio Rotstein, New Jersey Institute of Technology
    Abstract
    Neural codes allow the brain to represent, process, and store information about the world. Combinatorial codes, comprised of binary patterns of neural activity, encode information via the collective behavior of populations of neurons. A code is called convex if its codewords correspond to regions defined by an arrangement of convex open sets in Euclidean space. What makes a neural code convex? That is, how can we tell from the intrinsic structure of a code if there exists a corresponding arrangement of convex open sets? In this talk, we will exhibit topological, algebraic, and geometric approaches to answering this question.
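    To make the definition concrete, a small sketch (toy place fields, not from the talk): in one dimension the convex sets are open intervals, and the codewords of the resulting code are simply the activity patterns realized somewhere in the environment.

```python
import numpy as np

# Place fields as open intervals (convex sets in R); neuron i is active at position x
# exactly when x lies in its interval.
fields = {0: (0.0, 0.4), 1: (0.3, 0.7), 2: (0.6, 1.0)}

def codewords(fields, n_samples=10000):
    """Collect the set of binary activity patterns realized over the environment."""
    positions = np.linspace(-0.1, 1.1, n_samples)
    words = set()
    for x in positions:
        words.add(tuple(int(lo < x < hi) for (lo, hi) in fields.values()))
    return sorted(words)

for word in codewords(fields):
    print(word)   # by construction, this code is convex
```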
  • 11:30 am - 12:00 pm EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Katie Morrison, University of Northern Colorado
    • Nora Youngs, Colby College
  • 12:00 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 2:45 pm EDT
    Intrinsic Geometry of a Combinatorial Sensory Neural Code for Birdsong
    11th Floor Lecture Hall
    • Speaker
    • Tim Gentner, University of California, San Diego
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    To understand neural representation, researchers commonly compute receptive fields by correlating neural activity with external variables drawn from sensory signals. These receptive fields are only meaningful to the experimenter, however, because only the experimenter has access to both the neural activity and the external variables. To examine representation more directly, recent methodological advances have sought to capture the intrinsic geometry of sensory driven neural responses without external reference. To date, this approach has largely been restricted to low-dimensional stimuli as in spatial navigation. Here, we examined the intrinsic geometry of sensory representations in a model vocal communication system, songbirds. From the assumption that sensory systems represent invariant relationships among stimulus features, we conceptualized the space of natural birdsongs to lie on the surface of an n-dimensional hypersphere. We computed composite receptive field models for large populations of simultaneously recorded single neurons in the caudal medial neostriatum (NCM) of the auditory forebrain and show that solutions to these models define convex regions of response probability in the spherical stimulus space. We then define a combinatorial code over the set of receptive fields, realized in the moment-to-moment spiking and non-spiking patterns across the population, and show that this code can be used to reconstruct high-fidelity spectrographic representations of natural songs from evoked neural responses. Notably, relationships among combinatorial codewords directly mirror acoustic relationships among songs in the spherical stimulus space. That is, the time-varying pattern of co-activity among small groups of neurons expresses an intrinsic representational geometry of natural, extrinsic stimulus space. This combinatorial sensory code for representing vocal communication signals does not require computation of receptive fields and is in a form, spike time coincidences, amenable to biophysical mechanisms of neural information propagation.
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:15 pm EDT
    Decoding a compact neural circuit of Caenorhabditis elegans
    11th Floor Lecture Hall
    • Speaker
    • Yuki Tsukada, Keio University
    • Session Chair
    • Nora Youngs, Colby College
    Abstract
    Caenorhabditis elegans provides a compact neural circuit consisting of 302 neurons and simple behavioral experiments for dissecting the neural code. We discuss our modeling approach, based on quantitative measurements of behavior and neural activity, and our systems identification framework. We focus in particular on thermotaxis as a simple behavioral model, one that nonetheless involves environmental sensing, memory, learning, and decision-making.
Thursday, November 2, 2023
  • 9:00 - 9:45 am EDT
    Inhibitory neurons control cortical auditory processing
    11th Floor Lecture Hall
    • Speaker
    • Maria Geffen, University of Pennsylvania
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Sparse coding can support different forms of population-level codes including localist and distributed representations. In localist representations, a feature is represented by activity of a specific neuronal subpopulation. By contrast, in a distributed representation, a sensory code is represented by the relative activity of neuronal populations. These codes trade off advantages in terms of information transmission. I will present our recent findings that different inhibitory neurons differentially control these forms of information coding, by shifting the coding scheme in the auditory cortex between localist and distributed representations.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Maximizing Mutual Information in Mosquito Olfaction
    11th Floor Lecture Hall
    • Speaker
    • Caitlin Lienkaemper, Boston University
    • Session Chair
    • Tatyana Sharpee, Salk Institute
    Abstract
    Across species, the olfactory system follows a conserved organization: each olfactory receptor neuron expresses a single type of olfactory receptor, and responses of olfactory sensory neurons which express the same receptor are pooled before they are sent to higher regions of the brain. Mosquitoes have recently been shown to violate this organization: their olfactory sensory neurons express multiple receptor types, thus mixing information about activation of different receptors from the start. Because mosquitoes are olfactory predators, it is reasonable to assume that this pattern of coexpression makes the mosquito olfactory system more effective. Under which conditions and assumptions does coexpression of multiple receptors make sense, and how do the statistics of olfactory stimuli shape the optimal pattern of receptor expression? In a linear, feedforward model of the olfactory system with Gaussian noise, we compute the level and pattern of coexpression which maximizes the mutual information between the olfactory stimulus and the neural response. We find that coexpressing receptors with correlated activity maximizes the mutual information when neurons are reliable but olfactory stimuli are noisy. We then look at how the geometry of the receptor correlations interacts with sign constraints to shape the pattern of optimal receptor expression.
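    A minimal sketch of the kind of computation described above (all numbers are toy assumptions): for a linear-Gaussian model $y = Wx + \mathrm{noise}$ with stimulus covariance $\Sigma$, the mutual information is $\tfrac{1}{2}\log\det(I + \sigma^{-2} W \Sigma W^T)$, so candidate coexpression patterns $W$ can be compared directly.

```python
import numpy as np

def mutual_information(W, Sigma, noise_var=1.0):
    """I(x; y) for y = W x + noise, with x ~ N(0, Sigma) and noise ~ N(0, noise_var * I)."""
    n_out = W.shape[0]
    M = np.eye(n_out) + (W @ Sigma @ W.T) / noise_var
    return 0.5 * np.linalg.slogdet(M)[1]

# Toy stimulus covariance over 4 receptor channels; channels 0 and 1 are correlated.
Sigma = np.array([[1.0, 0.8, 0.0, 0.0],
                  [0.8, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])

# Two candidate expression patterns for two output neurons.
W_single = np.array([[1.0, 0.0, 0.0, 0.0],      # one receptor per neuron
                     [0.0, 0.0, 1.0, 0.0]])
W_coexpr = np.array([[0.5, 0.5, 0.0, 0.0],      # coexpress the correlated receptors
                     [0.0, 0.0, 1.0, 0.0]])

for name, W in [("single expression", W_single), ("coexpression", W_coexpr)]:
    print(name, round(mutual_information(W, Sigma, noise_var=2.0), 3))
```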
  • 11:30 am - 1:30 pm EDT
    Open Problems Lunch
    Working Lunch - 11th Floor Collaborative Space
  • 1:30 - 2:15 pm EDT
    Emergent properties of large population codes
    11th Floor Lecture Hall
    • Speaker
    • Ilya Nemenman, Emory University
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
  • 2:30 - 3:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:00 - 3:45 pm EDT
    Musings on mesoscale structures, brain states, and visual art, through a topological lens
    11th Floor Lecture Hall
    • Speaker
    • Shabnam Kadir, University of Hertfordshire
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    The neural code is sufficiently spatially and temporally smooth with respect to neural activity to enable meaningful neuroscientific recording on a large, coarse whole-brain scale, such as fMRI and EEG. We investigate the structural and functional connectome using methods from applied topology, namely persistent homology. We reveal differences in the white matter structural connectome in schizophrenia using the publicly available COBRE dataset. We also develop a method for exploring dynamic functional connectomics in fMRI which enables analysis and the derivation of brain states from a single recording and a single trial, whereas traditional fMRI analysis techniques rely on averaging over trials and subjects, disregarding individual idiosyncrasies. Finally, we explore questions of visual perception and appreciation of art in an experiment measuring EEG, eye movement, as well as conscious perception/appreciation of abstract paintings generated by both a human artist and by BigGAN.
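    A generic sketch of the persistent-homology step mentioned above (not the speaker's pipeline), assuming the ripser package is available: a functional-connectivity matrix is converted to a dissimilarity matrix and its Vietoris-Rips persistence diagrams are computed.

    ```python
    import numpy as np
    from ripser import ripser  # assumes the ripser package is installed

    rng = np.random.default_rng(2)
    ts = rng.standard_normal((90, 300))   # toy data: 90 regions x 300 time points
    corr = np.corrcoef(ts)                # functional connectivity
    dist = 1.0 - corr                     # simple dissimilarity
    np.fill_diagonal(dist, 0.0)

    # Persistence diagrams in dimensions 0 and 1 from the Vietoris-Rips filtration
    diagrams = ripser(dist, distance_matrix=True, maxdim=1)["dgms"]
    print([d.shape for d in diagrams])
    ```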
  • 4:00 - 4:45 pm EDT
    Learning large neural population codes, accurately, efficiently, and in a biologically-plausible way using sparse random projections
    11th Floor Lecture Hall
    • Virtual Speaker
    • Elad Schneidman, Weizmann Institute of Science
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    I will present a new class of highly accurate, scalable, and efficient models of the activity of large neural populations. Moreover, I will show that these models have a biologically-plausible implementation by neural circuits that rely on random, sparse, and non-linear projections. I will further show that homeostatic synaptic scaling makes the learning of such models for very large neural populations even more efficient and accurate. Finally, I will discuss how such models can allow the brain to perform Bayesian decoding and the learning of metrics on the space of neural codes and of external stimuli.
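    One way to picture the projections in the title (a guess at the general setup, not the speaker's model): each hidden unit applies a threshold to the summed activity of a small random subset of neurons in a binary population word.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_neurons, n_projections = 100, 50

    # x: a binary population activity pattern ("word") over n_neurons cells
    x = (rng.random(n_neurons) < 0.2).astype(float)

    # Sparse random projections: each row sums a small random subset of neurons
    density = 0.05
    R = (rng.random((n_projections, n_neurons)) < density).astype(float)
    thresholds = rng.integers(1, 4, size=n_projections)

    # Non-linear (thresholded) sparse random projections of the population word
    h = (R @ x >= thresholds).astype(float)
    print(int(h.sum()), "of", n_projections, "projections active")
    ```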
Friday, November 3, 2023
  • 9:00 - 9:45 am EDT
    Representational geometry of perceptual decisions
    11th Floor Lecture Hall
    • Speaker
    • Roozbeh Kiani, New York University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    I will explore two core principles of circuit models for perceptual decisions. In these models, neural ensembles that encode actions compete to form decisions. Consequently, representation and readout of the decision variables (DVs) in these models are similar for decisions in which the same actions compete, irrespective of input and task context differences. Further, DVs are encoded as partially potentiated action plans through balance of activity of action-selective ensembles. I show that the firing rates of neurons in the posterior parietal cortex of monkeys performing motion and face discrimination tasks violate these principles. Instead, neural responses suggest a mechanism in which decisions form along curved population-response manifolds misaligned with action representations. These manifolds rotate in state space for different task contexts, making optimal readout of the DV task dependent. Similar manifolds exist in lateral and medial prefrontal cortex, suggesting common representational geometries across decision-making circuits.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Sensory input to cortex encoded on low-dimensional periphery-correlated subspaces
    11th Floor Lecture Hall
    • Speaker
    • Andrea Barreiro, Southern Methodist University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    As information about the world is conveyed from the sensory periphery to central neural circuits, it mixes with complex ongoing cortical activity. How do neural populations keep track of sensory signals, separating them from noisy ongoing activity? I will talk about our recent work demonstrating that sensory signals are encoded more reliably in low-dimensional subspaces defined by correlations between neural activity in primary sensory cortex and upstream sensory brain regions. We analytically show that these subspaces can reach optimal limits (without an ideal observer) as noise correlations between cortex and upstream regions are reduced, and that this principle generalizes across diverse sensory stimuli in the olfactory system and the visual system of awake mice. Finally, I will talk about the neural observations that originally motivated our thinking in this area: the difference in the olfactory response between inhale and exhale. This difference is evident early in the olfactory pathway, and we hypothesize that it arises in part because of fluid mechanical forces in the nasal cavity. I will show how we are constructing a phase preference map for mechanical forcing. Our goal is to combine this map with emerging research on receptor zones to produce a unified view of the sensory inputs underlying directional selectivity.
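    The abstract does not specify the estimator; one generic way to extract a low-dimensional subspace defined by correlations between two simultaneously recorded areas is canonical correlation analysis, sketched below on synthetic data with scikit-learn.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA  # assumes scikit-learn is installed

    rng = np.random.default_rng(4)
    T = 1000
    shared = rng.standard_normal((T, 3))   # toy shared "sensory" signal
    cortex = shared @ rng.standard_normal((3, 40)) + 0.5 * rng.standard_normal((T, 40))
    upstream = shared @ rng.standard_normal((3, 20)) + 0.5 * rng.standard_normal((T, 20))

    # Project each area onto the subspace most correlated with the other area
    cca = CCA(n_components=3)
    cortex_proj, upstream_proj = cca.fit_transform(cortex, upstream)
    print(np.corrcoef(cortex_proj[:, 0], upstream_proj[:, 0])[0, 1])
    ```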
  • 11:30 am - 12:30 pm EDT
    Final Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
  • 12:30 - 2:00 pm EDT
    Lunch/Free Time
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, November 6, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 5:00 pm EST
    TLN Working Group
    Group Work - 10th Floor Classroom
Tuesday, November 7, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EST
    Introduction to machine learning and deep networks
    Tutorial - 10th Floor Classroom
    • Cengiz Pehlevan, Harvard
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Wednesday, November 8, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EST
    Professional Development: Papers
    Professional Development - 11th Floor Lecture Hall
  • 10:30 am - 12:00 pm EST
    Analysis of biologically plausible learning rules and contrasts with common machine learning rules
    Tutorial - 10th Floor Classroom
    • Cengiz Pehlevan, Harvard
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:30 pm EST
    Fluid dynamics as a driver of retronasal olfaction
    11th Floor Lecture Hall
    • Speaker
    • Andrea Barreiro, Southern Methodist University
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    Flavor perception is a fundamental governing factor of feeding behaviors and associated diseases such as obesity. Smells that enter the nose retronasally, i.e. from the back of the nasal cavity, play an essential role in flavor perception. Previous studies have demonstrated that orthonasal olfaction (nasally inhaled smells) and retronasal olfaction involve distinctly different brain activation, even for identical odors. Differences are evident at the glomerular layer in the olfactory bulb (Gautam et al. 2012, Sanganahalli et al. 2020) and can even be identified in the synaptic inputs to the bulb (Furudono et al. 2013). Why does the bulb receive different input based on the direction of the air flow? We hypothesize that this difference originates from fluid mechanical forces at the periphery: olfactory receptor neurons respond to mechanical as well as chemical stimuli (Grosmaitre et al. 2007, Iwata et al. 2017). To investigate this, we use computational fluid dynamics to simulate and analyze shear stress patterns during natural inhalation and sniffing. We will show preliminary results demonstrating that shear stress forces differ for orthonasal vs. retronasal air flow (i.e., inhalation vs. exhalation) in a model of the nasal cavity. We quantify this difference with a phase preference map for mechanical forcing, analogous to the orientation preference maps used in V1. If time permits, I will connect these findings to our earlier work on directional selectivity in neural network models of the olfactory bulb (Craft et al. 2021).
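    One simple way such a phase preference map could be computed (illustrative only; the CFD pipeline from the talk is not reproduced here): for each location, take the circular mean of the respiratory-cycle phase weighted by the local shear-stress magnitude.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    T, n_locations = 500, 64
    t = np.linspace(0, 2 * np.pi, T, endpoint=False)  # phase within one cycle

    # Toy shear-stress traces: each location peaks at some preferred cycle phase
    true_phase = rng.uniform(0, 2 * np.pi, n_locations)
    shear = (1 + np.cos(t[:, None] - true_phase[None, :])
             + 0.1 * rng.standard_normal((T, n_locations)))

    # Phase preference: circular mean of cycle phase, weighted by shear magnitude
    z = (shear * np.exp(1j * t[:, None])).sum(axis=0)
    phase_pref = np.angle(z) % (2 * np.pi)  # one preferred phase per location
    print(phase_pref[:5], true_phase[:5])
    ```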
  • 4:30 - 4:35 pm EST
    Long-term participant group photos
    Group Photo (Immediately After Talk) - 11th Floor Lecture Hall
Thursday, November 9, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EST
    TDA Tutorial
    Tutorial - 10th Floor Classroom
    • Sameer Kailasa, University of Michigan Ann Arbor
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Friday, November 10, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:30 - 10:30 am EST
    Neuroscience knowledge required for a specific open problem
    10th Floor Classroom
  • 11:00 - 11:30 am EST
    The Effects of Adaptation on Network Oscillation Frequency and Clustering
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
    • Ka Nap Tse, University of Pittsburgh
    Abstract
    In this presentation, I will explore the influence of various parameters on the oscillation frequencies of neuronal networks, with a focus on a network model consisting of excitatory and inhibitory theta neurons. Central to the discussion will be the impact of adaptation currents in excitatory neurons on oscillation frequency, particularly through inducing clustered firing patterns. The talk will also introduce a community detection algorithm and a discrete map to analyze and visualize the clustering behaviors and adaptation distribution within the network. This examination aims to provide a deeper understanding of the complex dynamics governing neuronal network function.
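    A minimal single-cell sketch of the mechanism discussed above (illustrative parameters, not the network model from the talk): a theta neuron with a spike-triggered adaptation current, whose interspike intervals lengthen as adaptation builds up.

    ```python
    import numpy as np

    dt, T = 1e-3, 20.0
    I0, tau_a, jump = 1.0, 1.0, 0.5   # drive, adaptation time constant, increment
    theta, a = 0.0, 0.0
    spike_times = []

    for k in range(int(T / dt)):
        I = I0 - a                      # adaptation subtracts from the drive
        theta += dt * ((1 - np.cos(theta)) + (1 + np.cos(theta)) * I)
        a += dt * (-a / tau_a)          # adaptation decays between spikes
        if theta >= np.pi:              # a spike occurs when the phase crosses pi
            spike_times.append(k * dt)
            theta -= 2 * np.pi
            a += jump                   # and increments the adaptation variable

    print(len(spike_times), "spikes; ISIs:", np.round(np.diff(spike_times), 3))
    ```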
  • 11:30 am - 12:00 pm EST
    A very slow introduction to Gabor Expansions
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
    • Vasiliki Liontou, ICERM
  • 1:30 - 3:00 pm EST
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Monday, November 13, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EST
    TDA Tutorial
    Tutorial - 10th Floor Classroom
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:00 - 3:30 pm EST
    What Determines the Frequency of Fast Network Oscillations With Irregular Neural Discharges?
    Journal Club - 10th Floor Classroom
    • Sameer Kailasa, University of Michigan Ann Arbor
    • Safaan Sadiq, Pennsylvania State University
  • 3:30 - 5:00 pm EST
    TLN Working Group
    Group Work - 10th Floor Classroom
Tuesday, November 14, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EST
    Synchronization and phase-locking in small neuronal networks
    Tutorial - 10th Floor Classroom
    • Amitabha Bose, New Jersey Institute of Technology
    Abstract
    In this tutorial, using phase-space analysis, I'll go through some of the basic concepts of important network behaviors such as synchronization and anti-phase oscillations. I'll show how to use geometric singular perturbation theory (GSPT) to derive lower dimensional "reduced" equations to analyze network behavior. In this context, I'll discuss some important concepts such as Fast Threshold Modulation, post-inhibitory rebound and the synaptic mechanisms of escape and release. Finally, using GSPT, I'll show how to incorporate ionic currents that operate on disparate time scales in network models. Underlying all of this is an interesting exchange of information between Euclidean and time metrics which I plan to explain.
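    As a warm-up to the phase-locking ideas in the tutorial (a far simpler model than the conductance-based networks discussed there): for two weakly coupled phase oscillators, the phase difference phi = phi1 - phi2 obeys dphi/dt = d_omega - 2*K*sin(phi), so a phase-locked state exists exactly when |d_omega| <= 2*K.

    ```python
    import numpy as np

    d_omega, K = 0.3, 0.5      # frequency mismatch and coupling strength
    dt, steps = 1e-2, 20000

    phi = np.pi / 2            # initial phase difference
    for _ in range(steps):
        phi += dt * (d_omega - 2 * K * np.sin(phi))

    # With |d_omega| <= 2K the phase difference settles at arcsin(d_omega / (2K))
    print("locked phase difference:", phi % (2 * np.pi),
          "predicted:", np.arcsin(d_omega / (2 * K)))
    ```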
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Wednesday, November 15, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EST
    Professional Development: Grants
    Professional Development - 11th Floor Lecture Hall
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:30 pm EST
    A neuromechanistic model for keeping a beat in the context of music
    11th Floor Lecture Hall
    • Speaker
    • Amitabha Bose, New Jersey Institute of Technology
    • Session Chair
    • Peter Thomas, Case Western Reserve University
    Abstract
    While many people say they have no rhythm, most humans when listening to music can easily discern and move to a beat. On the other hand, many of us are not so adept at actually generating and maintaining a constant beat over a period of time. Demonstrating a beat is a very complicated task. Among other things, it involves the ability of our brains to estimate time intervals and to make physical movements, for example hitting a drum, in coordination with the time estimates that we make. How our brain and body solve this problem is an open and active area of research. In this talk, I will discuss a biophysical model for a group of neurons that can learn to keep a constant beat across a range of frequencies relevant to music. I will also show how to extend this model to more complex rhythmic patterns. The model leads to questions about the nature of time and the role of perception in our ability to make decisions.
Thursday, November 16, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Friday, November 17, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 11:00 - 11:30 am EST
    Homotopy and singular homology groups of finite graphs
    Post Doc/Graduate Student Seminar - 11th Floor Conference Room
    • Nikola Milicevic, Pennsylvania State University
    Abstract
    We verify analogues of classical results for higher homotopy groups and singular homology groups of (Čech) closure spaces. Closure spaces are a generalization of topological spaces that also includes graphs and directed graphs, and they are thus a bridge connecting classical algebraic topology with the more applied side of topology, such as digital topology. More specifically, we show the existence of a long exact sequence for homotopy groups of pairs of closure spaces and that a weak homotopy equivalence induces isomorphisms on homology groups. Our main result is the construction of weak homotopy equivalences between the geometric realizations of (directed) clique complexes and their underlying (directed) graphs. This implies that singular homology groups of finite graphs can be efficiently calculated from finite combinatorial structures, despite their associated chain groups being infinite dimensional. This work is similar to the work McCord did for finite topological spaces, but in the context of closure spaces. Our results also give a novel approach for studying (higher) homotopy groups of discrete mathematical structures such as digital images.
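    A small illustration of the central object above, assuming the networkx and gudhi packages are available: build the clique (flag) complex of a finite graph as a simplex tree and read off its Betti numbers from this finite combinatorial structure.

    ```python
    import networkx as nx
    import gudhi  # assumes the gudhi package is installed

    # A 4-cycle (one unfilled loop) disjoint from a triangle (filled by its 3-clique)
    G = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0),
                  (4, 5), (5, 6), (4, 6)])

    st = gudhi.SimplexTree()
    for clique in nx.enumerate_all_cliques(G):  # every clique becomes a simplex
        st.insert(clique)

    st.persistence(persistence_dim_max=True)
    # Expected [2, 1, 0]: two components, one loop (the 4-cycle), no 2-cycles
    print("Betti numbers of the clique complex:", st.betti_numbers())
    ```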
  • 11:30 am - 12:00 pm EST
    Closed-loop neuromechanical control of rhythmic motor behaviors
    Post Doc/Graduate Student Seminar - 11th Floor Conference Room
    • Zhuojun Yu, Case Western Reserve University
  • 1:30 - 3:00 pm EST
    Topology+Neuro Working Group
    Group Work - 11th Floor Conference Room
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Monday, November 20, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 5:00 pm EST
    TLN Working Group
    Group Work - 10th Floor Classroom
Tuesday, November 21, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Wednesday, November 22, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Monday, November 27, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 5:00 pm EST
    TLN Working Group
    Group Work - 10th Floor Classroom
Tuesday, November 28, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EST
    Phase reduction for stochastic oscillators (Part I)
    Tutorial - 10th Floor Classroom
    Abstract
    Phase reduction has played an important role in dynamical systems approaches to neuroscience, by reducing an n-dimensional limit cycle system (such as the Hodgkin-Huxley equations for a regularly spiking neuron) to a 1-dimensional "phase" description. In recent years, attention has also been drawn to "phase-amplitude" or "isochron-isostable" reduction, which adds one or more additional dimensions beyond the phase, while keeping the dynamics as simple as possible. In the first part of the tutorial I will work through the basics of phase/amplitude reduction, and then discuss why the standard approach falls apart when one is dealing with stochastic (noisy, irregular) oscillators. In the second part of the tutorial, I will work through different approaches to phase-reduction and phase-amplitude reduction that make sense whether the oscillator is noisy or not, and whether it oscillates without noise or not. The latter aspect is important as some systems (for instance conductance-based neurons in the "excitable" or subthreshold regime) do not oscillate at all unless they are driven in part by stochastic fluctuations.
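    For the deterministic half of the tutorial, the basic object is the asymptotic phase along a limit cycle. A minimal numerical sketch for the Stuart-Landau oscillator (a standard textbook example, not necessarily the one used in the tutorial), whose asymptotic phase Theta(z) = arg(z) - c*log|z| is known in closed form and advances at the constant rate omega - c along every trajectory:

    ```python
    import numpy as np

    # Stuart-Landau oscillator: dz/dt = (1 + 1j*omega)*z - (1 + 1j*c)*|z|^2*z
    omega, c = 2.0, 0.8
    dt, steps = 1e-4, 50000

    z = 0.2 + 0.1j                       # start well off the limit cycle |z| = 1
    theta = np.empty(steps)
    for k in range(steps):
        z += dt * ((1 + 1j * omega) * z - (1 + 1j * c) * abs(z) ** 2 * z)
        theta[k] = np.angle(z) - c * np.log(abs(z))   # asymptotic (isochron) phase

    rate = (np.unwrap(theta)[-1] - theta[0]) / (dt * (steps - 1))
    print("numerical phase velocity:", rate, " expected omega - c:", omega - c)
    ```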
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Wednesday, November 29, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EST
    Hiring Process
    Professional Development - 11th Floor Lecture Hall
  • 10:30 am - 12:00 pm EST
    Phase reduction for stochastic oscillators (Part II)
    Tutorial - 10th Floor Classroom
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:30 pm EST
    On the Relationship between Information Processing and Fitness in Biology
    11th Floor Lecture Hall
    • Session Chair
    • Peter Thomas, Case Western Reserve University
Thursday, November 30, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EST
    Journal Club
    Journal Club - 10th Floor Classroom
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Friday, December 1, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:30 - 10:30 am EST
    "Something Cool I Know" Seminar
    10th Floor Classroom
  • 11:00 am - 12:00 pm EST
    TBD
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
  • 1:30 - 3:00 pm EST
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Monday, December 4, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 5:00 pm EST
    Open Problem Session
    Problem Session - 10th Floor Classroom
Tuesday, December 5, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 1:30 - 3:00 pm EST
    Journal Club
    Journal Club - 10th Floor Classroom
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Wednesday, December 6, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 11:30 am EST
    Open Problem Session
    Problem Session - 10th Floor Classroom
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:30 pm EST
    Math + Neuro Seminar
    11th Floor Lecture Hall
    • Session Chair
    • Peter Thomas, Case Western Reserve University
Thursday, December 7, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EST
    TDA Tutorial
    Tutorial - 10th Floor Classroom
  • 12:00 - 1:30 pm EST
    Open Problems Lunch
    Working Lunch - 10th Floor Collaborative Space
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space
Friday, December 8, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:30 - 10:30 am EST
    "Something Cool I Now Know" Seminar
    10th Floor Classroom
  • 11:00 am - 12:00 pm EST
    TBD
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
  • 1:30 - 3:00 pm EST
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EST
    Coffee Break
    11th Floor Collaborative Space

All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4 before November 5, 2023, and Eastern Standard Time / UTC-5 thereafter).

Request Reimbursement

This section is for general purposes only and does not indicate that all attendees receive funding. Please refer to your personalized invitation to review your offer.

ORCID iD
As this program is funded by the National Science Foundation (NSF), ICERM is required to collect your ORCID iD if you are receiving funding to attend this program. Be sure to add your ORCID iD to your Cube profile as soon as possible to avoid delaying your reimbursement.
Acceptable Costs
  • 1 roundtrip between your home institute and ICERM
  • Flights on U.S. or E.U. airlines – economy class to either Providence airport (PVD) or Boston airport (BOS)
  • Ground transportation to and from airports and ICERM
Unacceptable Costs
  • Flights on non-U.S. or non-E.U. airlines
  • Flights on U.K. airlines
  • Seats in economy plus, business class, or first class
  • Change ticket fees of any kind
  • Multi-use bus passes
  • Meals or incidentals
Advance Approval Required
  • Personal car travel to ICERM from outside New England
  • Multiple-destination plane ticket; does not include layovers to reach ICERM
  • Arriving or departing from ICERM more than a day before or day after the program
  • Multiple trips to ICERM
  • Rental car to/from ICERM
  • Flights on Swiss, Japanese, or Australian airlines
  • Arriving or departing from airport other than PVD/BOS or home institution's local airport
  • 2 one-way plane tickets to create a roundtrip (often purchased from Expedia, Orbitz, etc.)
Travel Maximum Contributions
  • New England: $350
  • Other contiguous US: $850
  • Asia & Oceania: $2,000
  • All other locations: $1,500
  • Note: these rates were updated in Spring 2023 and supersede any prior invitation rates. Invitations issued without travel support still do not include travel support.
Reimbursement Requests

Request Reimbursement with Cube

Refer to the back of your ID badge for more information. Checklists are available at the front desk and in the Reimbursement section of Cube.

Reimbursement Tips
  • Scanned original receipts are required for all expenses
  • Airfare receipt must show full itinerary and payment
  • ICERM does not offer per diem or meal reimbursement
  • Allowable mileage is reimbursed at the prevailing IRS business rate; document the trip with a PDF of the Google Maps route
  • Keep all documentation until you receive your reimbursement!
Reimbursement Timing

6-8 weeks after all documentation is sent to ICERM. All reimbursement requests are reviewed by several central offices at Brown, which may request additional documentation.

Reimbursement Deadline

Submissions must be received within 30 days of your ICERM departure to avoid applicable taxes; submissions received after 30 days will incur applicable taxes. No submissions are accepted more than six months after the program ends.

Associated Semester Workshops

Topology and Geometry in Neuroscience
Neural Coding and Combinatorics