Abstract

The goal of this Semester Program is to bring together a variety of mathematicians with researchers working in theoretical and computational neuroscience as well as some theory-friendly experimentalists. However, unlike programs in neuroscience that emphasize connections between theory and experiment, this program will focus on building bridges between theory and mathematics. This is motivated in part by the observation that theoretical developments in neuroscience are often limited not only by lack of data but also by the need to better develop the relevant mathematics. For example, theorists often rely on linear or near-linear modeling frameworks for neural networks simply because the mathematics of nonlinear network dynamics is still poorly understood. Conversely, just as in the history of physics, neuroscience problems give rise to new questions in mathematics. In recent years, these questions have touched on a rich variety of fields including geometry, topology, combinatorics, dynamical systems, and algebra. We believe the time has come to deepen these connections and foster new interactions and collaborations between neuroscientists who think deeply about theory and mathematicians who are looking for new problems inspired by science. In addition to collaborative research between theorists and mathematicians, an explicit goal of the program will be to produce an “open problems” document. This document will present a series of well-formulated open math problems together with explanations of their neuroscience motivation, partial progress, and the potential significance of their solutions.


Confirmed Speakers & Participants

Talks will be presented virtually or in-person as indicated in the schedule below.

  • Arman Afrasiyabi
    Yale University
    Oct 30-Nov 3, 2023
  • Yashar Ahmadian
    Cambridge University
    Sep 18-22, 2023
  • Daniele Avitabile
    Vrije Universiteit Amsterdam
    Sep 20-Dec 15, 2023
  • Huseyin Ayhan
    Florida State University
    Oct 16-20, 2023
  • Demba Ba
    Harvard University
    Sep 18-22, 2023
  • Aishwarya Balwani
    Georgia Institute of Technology
    Oct 16-20, 2023
  • Andrea Barreiro
    Southern Methodist University
    Sep 1-Dec 22, 2023
  • Robin Belton
    Smith College
    Sep 18-22, 2023
  • Marcus Benna
    UC San Diego
    Sep 18-22, 2023
  • Dhananjay Bhaskar
    Yale University
    Oct 16-20, 2023
  • Ginestra Bianconi
    Queen Mary University of London
    Oct 16-20, 2023
  • Prianka Bose
    New Jersey Institute of Technology
    Sep 18-22, 2023
  • Amitabha Bose
    New Jersey Institute of Technology
    Sep 15-Nov 17, 2023
  • Felipe Branco de Paiva
    University of Wisconsin-Madison
    Oct 16-20, 2023
  • Robyn Brooks
    University of Utah
    Sep 1-Dec 31, 2023
  • Peter Bubenik
    University of Florida
    Sep 18-22, 2023; Oct 16-20, 2023
  • Michael Buice
    Allen Institute
    Sep 18-22, 2023
  • Thomas Burns
    ICERM
    Aug 31-Dec 31, 2023
  • Johnathan Bush
    University of Florida
    Oct 16-20, 2023
  • Carlos Castañeda Castro
    Brown University
    Sep 6-Dec 8, 2023
  • Dmitri Chklovskii
    Flatiron Institute & NYU Neuroscience Institute
    Oct 16-20, 2023
  • Hannah Choi
    Georgia Institute of Technology
    Oct 30-Nov 3, 2023
  • Sehun Chun
    Yonsei University
    Sep 18-22, 2023
  • Heather Cihak
    University of Colorado Boulder
    Sep 18-22, 2023
  • Giovanna Citti
    University of Bologna
    Sep 17-Nov 3, 2023
  • Natasha Crepeau
    University of Washington
    Oct 30-Nov 3, 2023
  • Justin Curry
    University at Albany SUNY
    Oct 16-20, 2023
  • Carina Curto
    The Pennsylvania State University
    Sep 1-Dec 9, 2023
  • Rodica Curtu
    The University of Iowa
    Sep 18-22, 2023
  • Rava da Silveira
    Institute of Molecular and Clinical Ophthalmology Basel
    Oct 30-Nov 3, 2023
  • Steve Damelin
    University of Michigan
    Sep 17-22, 2023; Oct 15-20, 2023
  • Maria Dascalu
    University of Massachusetts Amherst
    Oct 29-Nov 4, 2023
  • Anda Degeratu
    University of Stuttgart
    Sep 17-Oct 7, 2023
  • Juan Carlos Díaz-Patiño
    Universidad Nacional Autónoma de México
    Oct 16-20, 2023
  • Darcy Diesburg
    Brown University
    Sep 18-22, 2023; Oct 16-20, 2023; Oct 30-Nov 3, 2023
  • Fatih Dinc
    Stanford University
    Sep 18-22, 2023
  • Brent Doiron
    University of Chicago
    Sep 17-23, 2023
  • Benjamin Dunn
    Norwegian University of Science and Technology
    Oct 16-20, 2023
  • Julia E Grigsby
    Boston College
    Oct 30-Nov 3, 2023
  • Ahmed El Hady
    Konstanz Center for Advanced Study of Collective Behavior
    Nov 1-30, 2023
  • Ani Eloyan
    Brown University
    Oct 16-20, 2023
  • Aysel Erey
    Utah State University
    Oct 30-Nov 3, 2023
  • Sean Escola
    Columbia University
    Sep 18-22, 2023
  • Julio Esparza Ibanez
    Instituto Cajal - CSIC (Spanish National Research Council)
    Oct 16-26, 2023
  • Ashkan Faghiri
    Georgia State University
    Oct 16-20, 2023
  • Matthew Farrell
    Harvard University
    Sep 18-22, 2023
  • Richard Foster
    Virginia Commonwealth University
    Sep 18-22, 2023
  • Michael Frank
    Brown University
    Sep 6-Dec 8, 2023
  • Michael Freund
    Brown University
    Oct 16-20, 2023
  • Halley Fritze
    University of Oregon
    Oct 16-20, 2023
  • Marcio Gameiro
    Rutgers University
    Sep 6-Dec 8, 2023
  • Harshvardhan Gazula
    MIT
    Oct 16-20, 2023
  • Tomas Gedeon
    Montana State University
    Sep 11-Nov 1, 2023
  • Maria Geffen
    University of Pennsylvania
    Nov 1-3, 2023
  • Tim Gentner
    University of California, San Diego
    Oct 30-Nov 3, 2023
  • Juliann Geraci
    University of Nebraska-Lincoln
    Oct 30-Nov 10, 2023
  • Robert Ghrist
    University of Pennsylvania
    Oct 19-20, 2023
  • Chad Giusti
    University of Delaware
    Sep 17-30, 2023; Oct 15-Nov 6, 2023
  • Harold Xavier Gonzalez
    Stanford University
    Sep 6-22, 2023
  • Anna Grim
    Allen Institute
    Oct 16-20, 2023
  • Robert Gütig
    Charité Medical School Berlin
    Oct 16-20, 2023
  • Todd Hagen
    Bernstein Center for Computational Neuroscience
    Oct 16-20, 2023
  • Erik Hermansen
    Norwegian University of Science and Technology
    Oct 16-20, 2023
  • Abigail Hickok
    Columbia University
    Oct 15-20, 2023
  • Christian Hirsch
    Aarhus University
    Oct 16-20, 2023
  • Betty Hong
    California Institute of Technology
    Oct 30-Nov 3, 2023
  • Iris Horng
    University of Pennsylvania
    Oct 15-21, 2023
  • Ching-Peng Huang
    UKE
    Oct 16-20, 2023
  • Chengcheng Huang
    University of Pittsburgh
    Sep 18-22, 2023
  • Vladimir Itskov
    The Pennsylvania State University
    Sep 5-Dec 8, 2023
  • Jonathan Jaquette
    New Jersey Institute of Technology
    Sep 18-22, 2023
  • Yuchen Jiang
    Australian National University
    Oct 16-20, 2023
  • Alvin Jin
    Berkeley
    Oct 15-21, 2023
  • Kresimir Josic
    University of Houston
    Sep 18-22, 2023
  • Shabnam Kadir
    University of Hertfordshire
    Oct 30-Nov 3, 2023
  • Sameer Kailasa
    University of Michigan Ann Arbor
    Sep 5-Dec 9, 2023
  • Lida Kanari
    EPFL/Blue Brain
    Oct 16-20, 2023
  • Selvi Kara
    University of Utah
    Oct 30-Nov 3, 2023
  • Gabriella Keszthelyi
    Alfréd Rényi Institute of Mathematics
    Sep 16-22, 2023
  • Roozbeh Kiani
    New York University
    Oct 30-Nov 3, 2023
  • Christopher Kim
    National Institutes of Health
    Sep 18-22, 2023
  • Soon Ho Kim
    Georgia Institute of Technology
    Sep 18-22, 2023
  • Hyunjoong Kim
    University of Houston
    Sep 17-23, 2023
  • Kevin Knudson
    University of Florida
    Oct 16-20, 2023
  • Leo Kozachkov
    Massachusetts Institute of Technology
    Sep 18-22, 2023
  • Maxwell Kreider
    Case Western Reserve University
    Sep 6-Dec 8, 2023
  • Kishore Kuchibhotla
    Johns Hopkins University
    Oct 16-20, 2023
  • Ankit Kumar
    UC Berkeley
    Sep 18-22, 2023
  • Giancarlo La Camera
    Stony Brook University
    Oct 16-20, 2023
  • Kang-Ju Lee
    Seoul National University
    Oct 15-21, 2023
  • Ran Levi
    University of Aberdeen
    Oct 16-20, 2023
  • Noah Lewis
    Georgia Institute of Technology
    Oct 16-20, 2023
  • Yao Li
    University of Massachusetts Amherst
    Sep 6-Dec 8, 2023
  • Zelong Li
    Penn State University
    Sep 5-Dec 9, 2023
  • Caitlin Lienkaemper
    Boston University
    Sep 15-Nov 4, 2023
  • Kathryn Lindsey
    Boston College
    Sep 6-Dec 8, 2023
  • Justin Lines
    Columbia University
    Oct 30-Nov 3, 2023
  • Vasiliki Liontou
    ICERM
    Sep 6-Dec 8, 2023
  • David Lipshutz
    Flatiron Institute
    Sep 18-22, 2023
  • Sijing Liu
    Brown University
    Sep 1, 2023-May 31, 2024
  • Jessica Liu
    CUNY Graduate Center
    Sep 18-22, 2023
  • Simon Locke
    Johns Hopkins University
    Sep 18-22, 2023
  • Laureline Logiaco
    Massachusetts Institute of Technology
    Sep 18-22, 2023
  • Juliana Londono Alvarez
    Penn State
    Sep 6-Dec 8, 2023
  • Caio Lopes
    École Polytechnique Fédérale de Lausanne
    Oct 16-20, 2023
  • Christian Machens
    Champalimaud Foundation
    Oct 30-Nov 3, 2023
  • James MacLaurin
    New Jersey Institute of Technology
    Sep 18-22, 2023
  • Matilde Marcolli
    California Institute of Technology
    Sep 6-Dec 8, 2023
  • Marissa Masden
    ICERM
    Sep 6, 2023-May 31, 2024
  • Sarah Mason
    Wake Forest University
    Oct 30-Nov 3, 2023
  • Leenoy Meshulam
    University of Washington
    Oct 30-Nov 3, 2023
  • Nikola Milicevic
    Pennsylvania State University
    Sep 1-Dec 10, 2023
  • Federica Milinanni
    KTH - Royal Institute of Technology
    Sep 17-Nov 5, 2023
  • Konstantin Mischaikow
    Rutgers University
    Sep 17-23, 2023; Sep 18-22, 2023; Oct 5-6, 2023
  • Katie Morrison
    University of Northern Colorado
    Sep 1-Dec 10, 2023
  • Noga Mudrik
    The Johns Hopkins University
    Sep 18-22, 2023
  • Audrey Nash
    Florida State University
    Sep 18-22, 2023
  • Matt Nassar
    Brown University
    Sep 6-Dec 8, 2023
  • Ilya Nemenman
    Emory University
    Oct 30-Nov 3, 2023
  • Fernando Nobrega Santos
    University of Amsterdam
    Oct 15-21, 2023
  • Gabe Ocker
    Boston University
    Sep 6-Dec 8, 2023
  • Choongseok Park
    NC A&T State University
    Sep 18-22, 2023
  • Ross Parker
    Center for Communications Research – Princeton
    Sep 18-22, 2023; Oct 16-20, 2023
  • Caitlyn Parmelee
    Keene State College
    Sep 5-Dec 9, 2023
  • Alice Patania
    University of Vermont
    Oct 16-20, 2023
  • Cengiz Pehlevan
    Harvard University
    Sep 6-Dec 8, 2023
  • Isabella Penido
    Brown University
    Sep 6-Dec 8, 2023
  • Jose Perea
    Northeastern University
    Sep 6-Dec 8, 2023
  • Giovanni Petri
    CENTAI Institute
    Oct 16-20, 2023
  • Mason Porter
    UCLA
    Sep 18-22, 2023
  • Rebecca R.G.
    George Mason University
    Oct 29-Nov 3, 2023
  • Niloufar Razmi
    Brown University
    Sep 6-Dec 8, 2023
  • Alex Reyes
    New York University
    Oct 16-20, 2023
  • Antonio Rieser
    Centro de Investigación en Matemáticas
    Sep 5-Dec 9, 2023
  • Dmitry Rinberg
    New York University
    Oct 16-20, 2023
  • Dario Ringach
    University of California, Los Angeles
    Oct 16-20, 2023
  • Jason Ritt
    Brown University
    Sep 6-Dec 8, 2023
  • Robert Rosenbaum
    University of Notre Dame
    Sep 18-22, 2023
  • Horacio Rotstein
    New Jersey Institute of Technology
    Sep 5-Dec 9, 2023
  • Jennifer Rozenblit
    University of Texas, Austin
    Oct 16-20, 2023
  • Safaan Sadiq
    Pennsylvania State University
    Sep 5-Dec 9, 2023
  • Nicole Sanderson
    Penn State University
    Sep 1-Dec 31, 2023
  • Hannah Santa Cruz
    Penn State
    Sep 5-Dec 9, 2023
  • Alessandro Sarti
    Centre d’Analyse et de Mathématique Sociales
    Oct 13-22, 2023
  • Cristina Savin
    NYU
    Oct 30-Nov 3, 2023
  • Elad Schneidman
    Weizmann Institute of Science
    Oct 16-Nov 4, 2023
  • Nikolas Schonsheck
    University of Delaware
    Oct 15-Nov 4, 2023
  • David Schwab
    City University of New York
    Sep 5-Dec 9, 2023
  • Daniel Scott
    Brown University
    Sep 6-Dec 8, 2023
  • Thomas Serre
    Brown University
    Sep 6-Dec 8, 2023
  • Tatyana Sharpee
    Salk Institute
    Oct 16-20, 2023; Oct 30-Nov 3, 2023
  • Sage Shaw
    University of Colorado Boulder
    Sep 18-22, 2023
  • Nimrod Sherf
    University of Houston
    Sep 18-22, 2023
  • Farshad Shirani
    Georgia Institute of Technology
    Sep 18-22, 2023
  • Paramjeet Singh
    Thapar Institute of Engineering & Technology
    Sep 18-22, 2023
  • Bernadette Stolz
    EPFL
    Oct 16-20, 2023
  • Thibaud Taillefumier
    UT Austin
    Oct 30-Nov 3, 2023
  • Evelyn Tang
    Rice University
    Oct 16-20, 2023
  • Gaia Tavoni
    Washington University in St. Louis
    Oct 30-Nov 3, 2023
  • Dane Taylor
    University of Wyoming
    Sep 17-22, 2023; Oct 15-20, 2023
  • Peter Thomas
    Case Western Reserve University
    Sep 5-Dec 9, 2023
  • Tobias Timofeyev
    University of Vermont
    Sep 18-22, 2023; Oct 16-20, 2023
  • Nicholas Tolley
    Brown University
    Sep 6-Dec 8, 2023
  • Magnus Tournoy
    Flatiron Institute
    Sep 17-Oct 21, 2023; Sep 18-22, 2023; Oct 16-20, 2023
  • Taro Toyoizumi
    RIKEN Center for Brain Science
    Oct 30-Nov 3, 2023
  • Wilson Truccolo
    Brown University
    Sep 6-Dec 8, 2023
  • Ka Nap Tse
    University of Pittsburgh
    Sep 10-Dec 9, 2023
  • Misha Tsodyks
    Weizmann Institute
    Oct 30-Nov 3, 2023
  • Yuki Tsukada
    Keio University
    Oct 30-Nov 3, 2023
  • Junyi Tu
    Salisbury University
    Oct 16-20, 2023
  • Srinivas Turaga
    HHMI - Janelia Research Campus
    Oct 16-20, 2023
  • Melvin Vaupel
    Norwegian University of Science and Technology
    Oct 16-20, 2023
  • Jonathan Victor
    Weill Cornell Medical College
    Oct 16-20, 2023
  • Elizabeth Vidaurre
    Molloy College
    Oct 16-20, 2023
  • Bradley Vigil
    Texas Tech University
    Oct 16-20, 2023
  • Zhengchao Wan
    University of California San Diego
    Oct 16-20, 2023
  • Yangyang Wang
    Brandeis University
    Sep 18-22, 2023
  • Bin Wang
    University of California, San Diego
    Sep 6-Nov 30, 2023
  • Xinyi Wang
    Michigan State University
    Sep 10-Oct 27, 2023
  • Qingsong Wang
    University of Utah
    Oct 15-21, 2023; Oct 29-Nov 4, 2023
  • Alexander Williams
    Stanford University
    Oct 16-20, 2023
  • Zhuo-Cheng Xiao
    New York University
    Sep 18-22, 2023
  • Iris Yoon
    Wesleyan University
    Sep 6-Dec 8, 2023
  • Ryeongkyung Yoon
    University of Houston
    Sep 18-22, 2023
  • Kei Yoshida
    Brown University
    Sep 18-22, 2023; Oct 16-20, 2023
  • Kisung You
    City University of New York
    Oct 16-20, 2023
  • Lai-Sang Young
    Courant Institute
    Sep 18-22, 2023
  • Nora Youngs
    Colby College
    Sep 5-Dec 10, 2023
  • Zhuojun Yu
    Case Western Reserve University
    Sep 5-Dec 9, 2023
  • Gexin Yu
    College of William and Mary
    Sep 17-23, 2023
  • Wenhao Zhang
    UT Southwestern Medical Center
    Sep 18-22, 2023; Oct 16-20, 2023
  • Ling Zhou
    ICERM
    Sep 6-Dec 8, 2023
  • Robert Zielinski
    Brown University
    Sep 6-Dec 8, 2023

Visit dates listed on the participant list may be tentative and subject to change without notice.

Semester Schedule

Wednesday, September 6, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 am - 3:00 pm EDT
    Check In
    11th Floor Collaborative Space
  • 10:00 - 11:00 am EDT
    Organizer/Directorate Meeting
    Meeting - 11th Floor Conference Room
  • 4:00 - 5:00 pm EDT
    Informal Coffee/Tea Welcome
    Coffee Break - 11th Floor Collaborative Space
Thursday, September 7, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 9:30 am EDT
    ICERM Welcome
    Welcome - 11th Floor Lecture Hall
  • 9:30 - 11:30 am EDT
    Organizer Welcome and Introductions
    Opening Remarks - 11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Friday, September 8, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 11:00 am EDT
    Grad Student/Postdoc Meeting with ICERM Directorate
    Meeting - 11th Floor Lecture Hall
  • 12:00 - 2:00 pm EDT
    Planning Lunch
    Working Lunch - 11th Floor Collaborative Space
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, September 11, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 11:30 am EDT
    Journal Club & Neuro 101 Planning
    Meeting - 11th Floor Lecture Hall
  • 1:45 - 1:50 pm EDT
    Xavier Gonzalez Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Harold Xavier Gonzalez, Stanford University
  • 1:50 - 1:55 pm EDT
    Maxwell Kreider Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Maxwell Kreider, Case Western Reserve University
  • 1:55 - 2:00 pm EDT
    Juliana Londono Alvarez Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Juliana Londono Alvarez, Penn State
  • 2:00 - 2:05 pm EDT
    Safaan Sadiq Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Safaan Sadiq, Pennsylvania State University
  • 2:05 - 2:10 pm EDT
    Hannah Santa Cruz Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Hannah Santa Cruz, Penn State
  • 2:10 - 2:15 pm EDT
    Nicholas Tolley Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Nicholas Tolley, Brown University
  • 2:15 - 2:20 pm EDT
    Ka Nap Tse Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Ka Nap Tse, University of Pittsburgh
  • 2:20 - 2:25 pm EDT
    Bin Wang Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Bin Wang, University of California, San Diego
  • 2:25 - 2:30 pm EDT
    Zhuojun Yu Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Zhuojun Yu, Case Western Reserve University
  • 2:30 - 2:35 pm EDT
    Robert Zielinski Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Robert Zielinski, Brown University
  • 2:35 - 2:40 pm EDT
    Zelong Li Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Zelong Li, Penn State University
  • 2:40 - 2:45 pm EDT
    Sameer Kailasa Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Sameer Kailasa, University of Michigan Ann Arbor
  • 2:45 - 2:50 pm EDT
    Elena Wang Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Xinyi Wang, Michigan State University
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 3:40 pm EDT
    Robyn Brooks Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Robyn Brooks, University of Utah
  • 3:40 - 3:50 pm EDT
    Thomas Burns Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Thomas Burns, ICERM
  • 3:50 - 4:00 pm EDT
    Caitlin Lienkaemper Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Caitlin Lienkaemper, Boston University
  • 4:00 - 4:10 pm EDT
    Vasiliki Liontou Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Vasiliki Liontou, ICERM
  • 4:10 - 4:20 pm EDT
    Sijing Liu Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Sijing Liu, Brown University
  • 4:20 - 4:30 pm EDT
    Marissa Masden Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Marissa Masden, ICERM
  • 4:30 - 4:40 pm EDT
    Nikola Milicevic Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Nikola Milicevic, Pennsylvania State University
  • 4:40 - 4:50 pm EDT
    Nicole Sanderson Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Nicole Sanderson, Penn State University
  • 4:50 - 5:00 pm EDT
    Ling Zhou Introduction
    Lightning Talks - 11th Floor Lecture Hall
    • Ling Zhou, ICERM
  • 5:00 - 6:30 pm EDT
    Welcome Reception
    Reception - 11th Floor Collaborative Space
Tuesday, September 12, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:30 am - 12:00 pm EDT
    TDA 101
    Tutorial - 11th Floor Lecture Hall
    • Nicole Sanderson, Penn State University
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Wednesday, September 13, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:30 am - 12:00 pm EDT
    Network Dynamics & Modeling
    Tutorial - 11th Floor Lecture Hall
    • Horacio Rotstein, New Jersey Institute of Technology
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 4:30 pm EDT
    Closed-loop neuromechanical motor control models (or) On the importance of taking the body into account when modeling neuronal dynamics.
    11th Floor Lecture Hall
    • Peter Thomas, Case Western Reserve University
    Abstract
    The central nervous system is strongly coupled to the body. Through peripheral receptors and effectors, it is also coupled to the constantly changing outside world. A chief function of the brain is to close the loop between sensory inputs and motor output. It is through the brain's effectiveness as a control mechanism for the body, embedded in the external world, that it facilitates long-term survival. Thus to understand brain circuits (one might argue) one must also understand their behavioral and ecological context. However, studying closed-loop brain-body interactions is challenging experimentally, conceptually, and mathematically. In order to make progress, we focus on systems that generate rhythmic behaviors in order to accomplish a quantifiable goal, such as maintaining different forms of homeostasis. Time permitting, I'll mention two such systems, 1. control of feeding motions in the marine mollusk Aplysia californica, and 2. rhythm generation and control in the mammalian breathing system. In both of these systems, we propose that robustness in the face of variable metabolic or external demands arises from the interplay of multiple layers of control involving biomechanics, central neural dynamics, and sensory feedback.
Thursday, September 14, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Network Dynamics & Modeling (Part 2)
    Tutorial - 11th Floor Lecture Hall
    • Horacio Rotstein, New Jersey Institute of Technology
  • 12:00 - 1:30 pm EDT
    Open Problems for TLNs (Bring Your Own Lunch)
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Carina Curto, The Pennsylvania State University
    • Katie Morrison, University of Northern Colorado
  • 2:00 - 2:30 pm EDT
    TDA software
    Tutorial - 11th Floor Lecture Hall
    • Nicole Sanderson, Penn State University
  • 3:00 - 3:30 pm EDT
    Coffee Break/ Neuro 101
    Coffee Break - 11th Floor Collaborative Space
Friday, September 15, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:30 - 10:30 am EDT
    Journal Club
    11th Floor Lecture Hall
    • Moderators
    • Harold Xavier Gonzalez, Stanford University
    • Sameer Kailasa, University of Michigan Ann Arbor
  • 11:00 - 11:30 am EDT
    Mathematical Challenges in Neuronal Network Dynamics
    Post Doc/Graduate Student Seminar - 11th Floor Lecture Hall
    • Marissa Masden, ICERM
    Abstract
    I will introduce a straightforward construction of the canonical polyhedral complex given by the activation patterns of a ReLU neural network. Then, I will describe how labeling the vertices of this polyhedral complex with sign vectors is (almost always) enough information to generate a cellular (co)chain complex labeling all of the polyhedral cells, and how this allows us to extract information about the decision boundary of the network.
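A rough companion to the construction described above: the sketch below (a toy example, not the speaker's code) samples the ReLU activation sign vectors of a small random network on a grid of 2D inputs. Each distinct sign vector corresponds to a cell of the canonical polyhedral complex; the architecture and weights are arbitrary.

```python
# Toy example (not the speaker's construction): enumerate the ReLU activation
# sign vectors of a small random network on a grid of 2D inputs.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)   # layer 1 (8 units)
W2, b2 = rng.normal(size=(4, 8)), rng.normal(size=4)   # layer 2 (4 units)

def sign_vector(x):
    """On/off pattern of every ReLU unit at input x."""
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    return tuple((np.concatenate([h1, h2]) > 0).astype(int))

xs = np.linspace(-3, 3, 100)
patterns = {sign_vector(np.array([a, b])) for a in xs for b in xs}
print(f"distinct activation patterns on the sample grid: {len(patterns)}")
```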
  • 11:30 am - 12:00 pm EDT
    Detecting danger in gridworlds using Gromov's Link Condition
    Post Doc/Graduate Student Seminar - 11th Floor Lecture Hall
    • Thomas Burns, ICERM
    Abstract
    Gridworlds have been long-utilised in AI research, particularly in reinforcement learning, as they provide simple yet scalable models for many real-world applications such as robot navigation, emergent behaviour, and operations research. We initiate a study of gridworlds using the mathematical framework of reconfigurable systems and state complexes due to Abrams, Ghrist & Peterson. State complexes represent all possible configurations of a system as a single geometric space, thus making them conducive to study using geometric, topological, or combinatorial methods. The main contribution of this work is a modification to the original Abrams, Ghrist & Peterson setup which we introduce to capture agent braiding and thereby more naturally represent the topology of gridworlds. With this modification, the state complexes may exhibit geometric defects (failure of Gromov's Link Condition). Serendipitously, we discover these failures occur exactly where undesirable or dangerous states appear in the gridworld. Our results therefore provide a novel method for seeking guaranteed safety limitations in discrete task environments with single or multiple agents and offer useful safety information (in geometric and topological forms) for incorporation in or analysis of machine learning systems. More broadly, our work introduces tools from geometric group theory and combinatorics to the AI community and demonstrates a proof-of-concept for this geometric viewpoint of the task domain through the example of simple gridworld environments.
  • 1:30 - 3:00 pm EDT
    Topology + Neuroscience Working Groups
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, September 18, 2023
  • 8:50 - 9:00 am EDT
    Welcome
    11th Floor Lecture Hall
    • Session Chair
    • Brendan Hassett, ICERM/Brown University
  • 9:00 - 9:45 am EDT
    Neural dynamics on sparse networks—pruning, error correction, and signal reconstruction
    11th Floor Lecture Hall
    • Speaker
    • Rishidev Chaudhuri, University of California, Davis
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Many networks in the brain are sparsely connected, and the brain eliminates connections during development and learning. This talk will focus on questions related to computation and dynamics on these sparse networks. We will first focus on pruning redundant network connections while preserving dynamics and function. In a recurrent network, determining the importance of a connection between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. We suggest that noise could instead play a functional role in pruning, allowing the brain to probe network structure and determine which connections are redundant. We construct a simple, local, unsupervised rule that either strengthens or prunes synapses using only connection weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, we adapt matrix concentration of measure arguments from the field of graph sparsification to prove that this rule preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned connections asymptotically approaches 1. The plasticity rule is biologically-plausible and may suggest a new role for noise in neural computation. Time permitting, we will then discuss the application of sparse expander graphs to modeling dynamics on neural networks. Expander graphs combine the seemingly contradictory properties of being sparse and well-connected. Among other remarkable properties, they allow efficient communication, credit assignment and error correction with simple greedy dynamical rules. We suggest that these applications might provide new ways of thinking about neural dynamics, and provide several proofs of principle.
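The pruning idea above can be caricatured in a few lines. The sketch below is a toy stand-in, not the rule from the talk: it drives a stable linear network with noise, scores each connection using only its weight and the activity covariance of the two neurons it joins (the particular score is an assumption made here for illustration), prunes the weakest half, and reports how far the spectrum moves.

```python
# Toy stand-in for noise-guided pruning (not the talk's rule). The score
# |W_ij| * sqrt(C_ii * C_jj) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 60
W = 0.9 * rng.normal(size=(n, n)) / np.sqrt(n)   # stable recurrent weights
A = W - np.eye(n)

# Estimate the noise-driven covariance from a long simulation of dx = A x dt + dB.
dt, steps = 0.01, 20000
x, samples = np.zeros(n), []
for t in range(steps):
    x += dt * (A @ x) + np.sqrt(dt) * rng.normal(size=n)
    if t > 1000:
        samples.append(x.copy())
C = np.cov(np.array(samples).T)

score = np.abs(W) * np.sqrt(np.outer(np.diag(C), np.diag(C)))
W_pruned = np.where(score >= np.quantile(score, 0.5), W, 0.0)  # prune weakest half

shift = np.max(np.abs(np.sort(np.linalg.eigvals(W).real)
                      - np.sort(np.linalg.eigvals(W_pruned).real)))
print(f"largest shift in eigenvalue real parts after pruning: {shift:.3f}")
```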
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    Local breakdown of the balance of excitation and inhibition accounts for divisive normalization
    11th Floor Lecture Hall
    • Speaker
    • Yashar Ahmadian, Cambridge University
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
     Excitatory and inhibitory (E & I) inputs to cortical neurons remain balanced across different conditions. This is captured in the balanced network model in which neural populations dynamically adjust their rates to yield tightly balanced E and I inputs and a state in which all neurons are active at levels observed in cortex. But global tight E-I balance predicts linear stimulus dependence for population responses, and does not account for systematic cortical response nonlinearities such as divisive normalization, a canonical brain computation. However, when necessary connectivity conditions for global balance fail, states arise in which a subset of neurons are inhibition dominated and inactive. Here, we show analytically that the emergence of such localized balance states robustly leads to normalization, including sublinear integration and winner-take-all behavior. An alternative model that exhibits normalization is the Stabilized Supralinear Network (SSN), in which the E-I balance is generically loose, but becomes tight asymptotically for strong inputs. However, an understanding of the causal relationship between E-I balance and normalization in the SSN is lacking. Here we show that when tight E-I balance in the asymptotic, strongly driven regime of SSN is not global, the network does not exhibit normalization at any input strength; thus, in SSN too, significant normalization requires the breakdown of global balance. In summary, we causally and quantitatively connect a fundamental feature of cortical dynamics with a canonical brain computation.
  • 11:15 - 11:45 am EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Carina Curto, The Pennsylvania State University
    • Katie Morrison, University of Northern Colorado
  • 11:45 am - 1:30 pm EDT
    Lunch/Free Time
  • 1:30 - 2:15 pm EDT
    Discovering dynamical patterns of activity from single-trial neural data
    11th Floor Lecture Hall
    • Speaker
    • Rodica Curtu, The University of Iowa
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    In this talk I will discuss a data-driven method that leverages time-delayed coordinates, diffusion maps, and dynamic mode decomposition, to identify neural features in large scale brain recordings that correlate with subject-reported perception. The method captures the dynamics of perception at multiple timescales and distinguishes attributes of neural encoding of the stimulus from those encoding the perceptual states. Our analysis reveals a set of latent variables that exhibit alternating dynamics along a low-dimensional manifold, like trajectories of attractor-based models. I will conclude by proposing a phase-amplitude-coupling-based model that illustrates the dynamics of data.
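Two of the ingredients named above, time-delay embedding and dynamic mode decomposition, are easy to sketch on synthetic data; the example below is purely illustrative and is not the speaker's pipeline (the diffusion-map step is omitted).

```python
# Illustrative only: delay (Hankel) embedding followed by exact DMD on a
# synthetic two-frequency signal.
import numpy as np

t = np.linspace(0, 20, 2000)
dt = t[1] - t[0]
signal = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 2.3 * t)

# Delay embedding: rows are shifted copies of the scalar signal.
d = 50
H = np.array([signal[i:i + len(signal) - d] for i in range(d)])

# Exact DMD on the embedded data.
X, Y = H[:, :-1], H[:, 1:]
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 4                                   # two sinusoids -> rank-4 truncation
A_tilde = U[:, :r].T @ Y @ Vt[:r].T @ np.diag(1.0 / s[:r])
eigvals = np.linalg.eigvals(A_tilde)

freqs = np.abs(np.imag(np.log(eigvals))) / (2 * np.pi * dt)
print("recovered frequencies (Hz):", sorted(set(np.round(freqs, 2))))
```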
  • 2:30 - 2:35 pm EDT
    Synaptic mechanisms for resisting distractors in neural fields
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Heather Cihak, University of Colorado Boulder
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
     Persistent neural activity has been observed in the non-human primate cortex when making delayed estimates. Organizing activity patterns according to cell feature preference reveals "bumps" that represent analog variables during the delay. Continuum neural field models support bump attractors whose stochastic dynamics can be linked to response statistics (estimate bias and error). Models often ignore the distinct dynamics of bumps in both excitatory/inhibitory population activity, but recent neural and behavioral recordings suggest both play a role in delayed estimate codes and responses. In past work, we developed new methods in asymptotic and multiscale analyses for stochastic and spatiotemporal systems to understand how network architecture determines bump dynamics in networks with distinct E/I populations and short term plasticity. The inhibitory bump dynamics as well as facilitation and diffusion impact the stability and wandering motion of the excitatory bump. Our current work moves beyond studying ensemble statistics like variance to examine potential mechanisms underlying the robustness of working memory to distractors (irrelevant information) presented during the maintenance period wherein the relative timescales of the E/I populations, synaptic vs activity dynamics, as well as short term plasticity may play an important role.
  • 2:35 - 2:40 pm EDT
    Convex optimization of recurrent neural networks for rapid inference of neural dynamics
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Fatih Dinc, Stanford University
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Advances in optical and electrophysiological recording technologies have made it possible to record the dynamics of thousands of neurons, opening up new possibilities for interpreting and controlling large neural populations. A promising way to extract computational principles from these large datasets is to train data-constrained recurrent neural networks (dRNNs). However, existing training algorithms for dRNNs are inefficient and have limited scalability, making it a challenge to analyze large neural recordings even in offline scenarios. To address these issues, we introduce a training method termed Convex Optimization of Recurrent Neural Networks (CORNN). In studies of simulated recordings of hundreds of cells, CORNN attained training speeds ~ 100-fold faster than traditional optimization approaches while maintaining or enhancing modeling accuracy. We further validated CORNN on simulations with thousands of cells that performed simple computations such as those of a 3-bit flip-flop or the execution of a timed response. Finally, we showed that CORNN can robustly reproduce network dynamics and underlying attractor structures despite mismatches between generator and inference models, severe subsampling of observed neurons, or mismatches in neural time-scales. Overall, by training dRNNs with millions of parameters in subminute processing times on a standard computer, CORNN constitutes a first step towards real-time network reproduction constrained on large-scale neural recordings and a powerful computational tool for advancing the understanding of neural computation. My talk focuses on how dRNNs enabled by CORNN can help us reverse engineer the neural code in the mammalian brain.
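A heavily simplified sketch of why such fitting problems can be made convex: if the full rate traces are observed and the transfer function is invertible, each row of the recurrent weight matrix is obtained by (ridge) linear regression. The example below is a toy surrogate, not the CORNN algorithm; the network, gain, and regularization are arbitrary choices made here.

```python
# Toy convex surrogate for fitting a data-constrained RNN (not CORNN itself):
# with fully observed rates, regress the inferred input currents on the rates.
import numpy as np

rng = np.random.default_rng(2)
n, T, dt = 40, 3000, 0.05
gain = 3.0                                      # strong coupling: rich activity
W_true = gain * rng.normal(size=(n, n)) / np.sqrt(n)

r = np.zeros((T, n))
r[0] = rng.normal(size=n)
for t in range(T - 1):                          # "recorded" rate traces
    r[t + 1] = r[t] + dt * (-r[t] + np.tanh(W_true @ r[t]))

# Invert the transfer function, then solve a ridge regression (convex) for W.
current = np.arctanh(np.clip((r[1:] - r[:-1]) / dt + r[:-1], -0.999, 0.999))
X = r[:-1]
lam = 1e-3
W_fit = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ current).T

err = np.linalg.norm(W_fit - W_true) / np.linalg.norm(W_true)
print(f"relative weight-recovery error: {err:.3f}")
```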
  • 2:40 - 2:45 pm EDT
    Recall tempo of Hebbian sequences depends on the interplay of Hebbian kernel with tutor signal timing
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Matthew Farrell, Harvard University
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Understanding how neural circuits generate sequential activity is a longstanding challenge. While foundational theoretical models have shown how sequences can be stored as memories with Hebbian plasticity rules, these models considered only a narrow range of Hebbian rules. In this talk I introduce a model for arbitrary Hebbian plasticity rules, capturing the diversity of spike-timing-dependent synaptic plasticity seen in experiments, and show how the choice of these rules and of neural activity patterns influences sequence memory formation and retrieval. In particular, I will present a general theory that predicts the speed of sequence replay. This theory lays a foundation for explaining how cortical tutor signals might give rise to motor actions that eventually become ``automatic''. This theory also captures the impact of changing the speed of the tutor signal. Beyond shedding light on biological circuits, this theory has relevance in artificial intelligence by laying a foundation for frameworks whereby slow and computationally expensive deliberation can be stored as memories and eventually replaced by inexpensive recall.
  • 2:45 - 2:50 pm EDT
    Modeling human temporal EEG responses subject to VR visual stimuli
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Richard Foster, Virginia Commonwealth University
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    When subject to visual stimuli flashing at a constant temporal frequency, it is well-known that the EEG response has a sharp peak in the power spectrum at the driving frequency. But the EEG response with random frequency stimuli and corresponding biophysical mechanisms are largely unknown. We present a phenomenological model framework in hopes of eventually capturing these EEG responses and unveiling the biophysical mechanisms. Based on observed heterogeneous temporal frequency selectivity curves in V1 cells (Hawken et al. ‘96, Camillo et al ‘20, Priebe et al. ‘06), we endow individual units with these response properties. Preliminary simulation results show that particular temporal frequency selectivity curves can be more indicative of the EEG response. Future directions include the construction of network architecture with interacting units to faithfully model the EEG response.
  • 2:50 - 2:55 pm EDT
    RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Leo Kozachkov, Massachusetts Institute of Technology
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Recurrent neural networks (RNNs) are widely used throughout neuroscience as models of local neural activity. Many properties of single RNNs are well characterized theoretically, but experimental neuroscience has moved in the direction of studying multiple interacting areas, and RNN theory needs to be likewise extended. We take a constructive approach towards this problem, leveraging tools from nonlinear control theory and machine learning to characterize when combinations of stable RNNs will themselves be stable. Importantly, we derive conditions which allow for massive feedback connections between interacting RNNs. We parameterize these conditions for easy optimization using gradient-based techniques, and show that stability-constrained "networks of networks" can perform well on challenging sequential-processing benchmark tasks. Altogether, our results provide a principled approach towards understanding distributed, modular function in the brain.
  • 3:15 - 3:45 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:45 - 4:30 pm EDT
    Universal Properties of Strongly Coupled Recurrent Networks
    11th Floor Lecture Hall
    • Speaker
    • Robert Rosenbaum, University of Notre Dame
    • Session Chair
    • Carina Curto, The Pennsylvania State University
    Abstract
    Balanced excitation and inhibition is widely observed in cortex. How does this balance shape neural computations and stimulus representations? This question is often studied using computational models of neuronal networks in a dynamically balanced state. But balanced network models predict a linear relationship between stimuli and population responses. So how do cortical circuits implement nonlinear representations and computations? We show that every balanced network architecture admits stimuli that break the balanced state and these breaks in balance push the network into a “semi-balanced state” characterized by excess inhibition to some neurons, but an absence of excess excitation. The semi-balanced state produces nonlinear stimulus representations and nonlinear computations, is unavoidable in networks driven by multiple stimuli, is consistent with cortical recordings, and has a direct mathematical relationship to artificial neural networks.
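For context, the linearity claim above follows from the standard balanced-network mean-field argument, sketched below in generic notation (the symbols are not taken from the talk): the O(sqrt(N)) excitatory and inhibitory inputs must cancel, which pins the rates to a linear function of the stimulus.

```latex
% Standard balanced-network mean-field cancellation (generic notation):
\[
  \sqrt{N}\left(\bar W \,\mathbf r + \bar F \,\mathbf x\right) = O(1)
  \quad\Longrightarrow\quad
  \mathbf r \;\longrightarrow\; -\,\bar W^{-1} \bar F \,\mathbf x
  \qquad (N \to \infty),
\]
% so any nonlinear stimulus representation requires leaving this globally
% balanced state, e.g. the semi-balanced state described above.
```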
  • 4:30 - 6:00 pm EDT
    Reception
    11th Floor Collaborative Space
Tuesday, September 19, 2023
  • 9:00 - 9:45 am EDT
    Multilayer Networks in Neuroscience
    11th Floor Lecture Hall
    • Speaker
    • Mason Porter, UCLA
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
     I will introduce the idea of multilayer networks, discuss some of their uses in neuroscience, and present some interesting challenges.
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    State modulation in spatial networks of multiple interneuron subtypes
    11th Floor Lecture Hall
    • Speaker
    • Chengcheng Huang, University of Pittsburgh
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
     Neuronal responses to sensory stimuli can be strongly modulated by the animal's brain state. Three distinct subtypes of inhibitory interneurons, parvalbumin (PV), somatostatin (SOM), and vasoactive intestinal peptide (VIP) expressing cells, have been identified as key players in flexibly modulating network activity. The three interneuron populations have specialized local microcircuit motifs and are targeted differentially by neuromodulators and top-down inputs from higher-order cortical areas. In this work, we systematically study the function of each interneuron cell type in modulating network dynamics in a spatially ordered spiking neuron network. We analyze the changes in firing rates and network synchrony as we apply static current to each cell population. We find that the modulation pattern produced by activating E or PV cells is distinct from that produced by activating SOM or VIP cells. In particular, we identify SOM cells as the main driver of network synchrony.
  • 11:15 - 11:45 am EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Brent Doiron, University of Chicago
    • Zachary Kilpatrick, University of Colorado Boulder
  • 11:50 am - 12:00 pm EDT
    Group Photo (Immediately After Talk)
    11th Floor Lecture Hall
  • 12:00 - 1:30 pm EDT
    Working Lunch
    11th Floor Collaborative Space
  • 1:30 - 2:15 pm EDT
    Plasticity in balanced neuronal networks
    11th Floor Lecture Hall
    • Speaker
    • Kresimir Josic, University of Houston
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    I will first describe how to extend the theory of balanced networks to account for synaptic plasticity. This theory can be used to show when a plastic network will maintain balance, and when it will be driven into an unbalanced state. I will next discuss how this approach provides evidence for a novel form of rapid compensatory inhibitory plasticity using experimental evidence obtained using optogenetic activation of excitatory neurons in primate visual cortex (area V1). The theory explains how such activation induces a population-wide dynamic reduction in the strength of neuronal interactions over the timescale of minutes during the awake state, but not during rest. I will shift gears in the final part of the talk, and discuss how community detection algorithms can help uncover the large scale organization of neuronal networks from connectome data, using the Drosophila hemibrain dataset as an example.
  • 2:35 - 2:40 pm EDT
    Q-Phase reduction of multi-dimensional stochastic Ornstein-Uhlenbeck process networks
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Maxwell Kreider, Case Western Reserve University
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    Phase reduction is an effective tool to study the network dynamics of deterministic limit-cycle oscillators. The recent introduction of stochastic phase concepts allows us to extend these tools to stochastic oscillators; of particular utility is the asymptotic stochastic phase, derived from the eigenfunction decomposition of the system's probability density. Here, we study networks of coupled oscillatory two-dimensional Ornstein-Uhlenbeck processes (OUPs) with complex eigenvalues. We characterize system dynamics by providing an exact expression for the asymptotic stochastic phase for OUP networks of any dimension and arbitrary coupling structure. Furthermore, we introduce an order parameter quantifying the synchrony of networks of stochastic oscillators, and apply it to our OUP model. We argue that the OUP network provides a new, analytically tractable approach to analysis of large scale electrophysiological recordings.
  • 2:40 - 2:45 pm EDT
    Feedback Controllability as a Normative Theory of Neural Dynamics
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Ankit Kumar, UC Berkeley
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
     Brain computations emerge from the collective dynamics of distributed neural populations. Behaviors including reaching and speech are explained by principles of optimal feedback control, yet whether and how this normative description shapes neural population dynamics is unknown. We created dimensionality reduction methods that identify subspaces of dynamics that are most feedforward controllable (FFC) vs. feedback controllable (FBC). We show that FBC and FFC subspaces diverge for dynamics generated by non-normal connectivity. In neural recordings from monkey M1 and S1 during reaching, FBC subspaces are better decoders of reach velocity, particularly during reach acceleration, and FBC provides a first-principles account of the observation of rotational dynamics. Overall, our results demonstrate that feedback controllability is a novel, normative theory of neural population dynamics, and reveal how the structure of high-dimensional dynamical systems shapes their ability to be controlled.
  • 2:45 - 2:50 pm EDT
    Adaptive whitening with fast gain modulation and slow synaptic plasticity
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • David Lipshutz, Flatiron Institute
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    Neurons in early sensory areas rapidly adapt to changing sensory statistics, both by normalizing the variance of their individual responses and by reducing correlations between their responses. Together, these transformations may be viewed as an adaptive form of statistical whitening. In this talk, I will present a normative multi-timescale mechanistic model of adaptive whitening with complementary computational roles for gain modulation and synaptic plasticity. Gains are modified on a fast timescale to adapt to the current statistical context, whereas synapses are modified on a slow timescale to learn structural properties of the input statistics that are invariant across contexts.
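The division of labor described above can be caricatured with a toy online rule, shown below; the specific updates (a fast multiplicative gain step chasing unit output variance and a slow anti-Hebbian lateral step chasing zero output correlations) are assumptions made here for illustration, not the talk's normative model.

```python
# Toy adaptive whitening with fast gains and slow lateral weights (illustrative).
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.normal(size=(n, n))
cov = A @ A.T / n                              # correlated input statistics
X = rng.multivariate_normal(np.zeros(n), cov, size=100000)

g = np.ones(n)                                 # fast per-neuron gains
M = np.zeros((n, n))                           # slow lateral (decorrelating) weights
eta_fast, eta_slow = 2e-3, 2e-4

for x in X:
    y = g * ((np.eye(n) - M) @ x)
    g += eta_fast * g * (1.0 - y**2)                  # fast: normalize variance
    M += eta_slow * (np.outer(y, y) - np.diag(y**2))  # slow: suppress correlations

Y = g[:, None] * ((np.eye(n) - M) @ X[-5000:].T)
print(np.round(np.cov(Y), 2))                  # ideally close to the identity
```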
  • 2:50 - 2:55 pm EDT
    The combinatorial code and the graph rules of Dale networks
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Nikola Milicevic, Pennsylvania State University
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
     We describe the combinatorics of equilibria and steady states of neurons in threshold-linear networks that satisfy Dale’s law. The combinatorial code of a Dale network is characterized in terms of two conditions: (i) a condition on the network connectivity graph, and (ii) a spectral condition on the synaptic matrix. We find that in the weak coupling regime the combinatorial code depends only on the connectivity graph, and not on the particulars of the synaptic strengths. Moreover, we prove that the combinatorial code of a weakly coupled network is a sublattice, and we provide a learning rule for encoding a sublattice in a weakly coupled excitatory network. In the strong coupling regime we prove that the combinatorial code of a generic Dale network is intersection-complete and is therefore a convex code, as is common in some sensory systems in the brain.
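As a concrete companion to the abstract above, the sketch below enumerates the fixed-point supports (the combinatorial code) of a small inhibitory threshold-linear network by brute force; the weights are arbitrary and the brute-force check is the standard fixed-point criterion, not the graph rules of the talk.

```python
# Brute-force combinatorial code of a small threshold-linear network
# dx/dt = -x + [Wx + b]_+ : a support sigma is a fixed-point support iff the
# linear fixed point restricted to sigma is positive and every neuron outside
# sigma receives subthreshold input there. Example weights are arbitrary.
import itertools
import numpy as np

rng = np.random.default_rng(4)
n = 5
W = -rng.uniform(0.2, 1.2, size=(n, n))   # purely inhibitory (Dale-type) weights
np.fill_diagonal(W, 0.0)
b = np.ones(n)

code = []
for k in range(1, n + 1):
    for sigma in itertools.combinations(range(n), k):
        s = list(sigma)
        x_s = np.linalg.solve(np.eye(k) - W[np.ix_(s, s)], b[s])
        off = [i for i in range(n) if i not in sigma]
        if np.all(x_s > 0) and np.all(W[np.ix_(off, s)] @ x_s + b[off] <= 0):
            code.append(sigma)

print("fixed-point supports:", code)
```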
  • 2:55 - 3:00 pm EDT
    Decomposed Linear Dynamical Systems for Studying Inter and Intra-Region Neural Dynamics
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Noga Mudrik, The Johns Hopkins University
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    Understanding the intricate relationship between recorded neural activity and behavior is a pivotal pursuit in neuroscience. However, existing models frequently overlook the non-linear and non-stationary behavior evident in neural data, opting instead to center their focus on simplified projections or overt dynamical systems. We introduce a Decomposed Linear Dynamical Systems (dLDS) approach to capture these complex dynamics by representing them as a sparse time-varying linear combination of interpretable linear dynamical components. dLDS is trained using an expectation maximization procedure where the obscured dynamical components are iteratively inferred using dictionary learning. This approach enables the identification of overlapping circuits, while the sparsity applied during the training maintains the model interpretability. We demonstrate that dLDS successfully recovers the underlying linear components and their time-varying coefficients in both synthetic and neural data examples, and show that it can learn efficient representations of complex data. By leveraging the rich data from the International Brain Laboratory’s Brain Wide Map dataset, we extend dLDS to model communication among ensembles within and between brain regions, drawing insights from multiple non-simultaneous recording sessions.
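The generative form behind a decomposed linear dynamical system is simple to write down; the sketch below shows only that form, x_{t+1} = (sum_k c_k(t) A_k) x_t, with made-up operators and coefficients, and none of the EM/dictionary-learning inference described above.

```python
# Generative sketch of a decomposed linear dynamical system (dLDS-style form
# only; operators, coefficients, and regimes are invented for illustration).
import numpy as np

rot = lambda th: np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
A = [rot(0.05), 0.995 * rot(0.15)]              # two interpretable components

T = 400
c = np.zeros((T, 2))
c[:200, 0] = 1.0                                # first regime: slow rotation
c[200:, 1] = 1.0                                # second regime: faster, decaying

x = np.zeros((T, 2))
x[0] = [1.0, 0.0]
for t in range(T - 1):
    x[t + 1] = (c[t, 0] * A[0] + c[t, 1] * A[1]) @ x[t]

print("state norm at regime switch and at end:",
      round(np.linalg.norm(x[200]), 3), round(np.linalg.norm(x[-1]), 3))
```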
  • 3:00 - 3:05 pm EDT
    Characterizing Neural Spike Train Data for Chemosensory Coding Analysis
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Audrey Nash, Florida State University
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    In this presentation, we explore neural spike train data to discern a neuron's ability to distinguish between various stimuli. By examining both the spiking rate and the temporal distribution of spikes (phase of spiking), we aim to unravel the intricacies of chemosensory coding in neurons. We will provide a concise overview of our methodology for identifying chemosensory coding neurons and delve into the application of metric-based analysis techniques in conjunction with optimal transport methods. This combined approach allows us to uncover emerging patterns in tastant coding across multiple neurons and quantify the respective impacts of spiking rate and temporal phase in taste decoding.
  • 3:05 - 3:10 pm EDT
    Infinite-dimensional Dynamics in a Model of EEG Activity in the Neocortex
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Farshad Shirani, Georgia Institute of Technology
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    I present key analytical and computational results on a mean field model of electroencephalographic activity in the neocortex, which is composed of a system of coupled ODEs and PDEs. I show that for some sets of biophysical parameter values the equilibrium set of the model is not compact, which further implies that the global attracting set of the model is infinite-dimensional. I also present computational results on generation and spatial propagation of transient gamma oscillations in the solutions of the model. The results identify important challenges in interpreting and modelling the temporal pattern of EEG recordings, caused by low spatial resolution of EEG electrodes.
  • 3:10 - 3:15 pm EDT
    What is the optimal topology of setwise connections for a memory network?
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Thomas Burns, ICERM
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
     Simplicial Hopfield networks (Burns & Fukai, 2023) explicitly model setwise connections between neurons based on a simplicial complex to store memory patterns. Randomly diluted networks -- where only a randomly chosen fraction of the simplices, i.e., setwise connections, have non-zero weights -- show performance above traditional associative memory networks with only pairwise connections between neurons but the same total number of non-zero weighted connections. However, could there be a cleverer choice of connections to weight given known memory patterns we want to store? I suspect so, and in this talk I will formally pose the problem for others to consider.
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Reliability and robustness of oscillations in some slow-fast chaotic systems
    11th Floor Lecture Hall
    • Speaker
    • Jonathan Jaquette, New Jersey Institute of Technology
    • Session Chair
    • Brent Doiron, University of Chicago
    Abstract
    A variety of nonlinear models of biological systems generate complex chaotic behaviors that contrast with biological homeostasis, the observation that many biological systems prove remarkably robust in the face of changing external or internal conditions. Motivated by the subtle dynamics of cell activity in a crustacean central pattern generator, we propose a refinement of the notion of chaos that reconciles homeostasis and chaos in systems with multiple timescales. We show that systems displaying relaxation cycles going through chaotic attractors generate chaotic dynamics that are regular at macroscopic timescales, thus consistent with physiological function. We further show that this relative regularity may break down through global bifurcations of chaotic attractors such as crises, beyond which the system may generate erratic activity also at slow timescales. We analyze in detail these phenomena in the chaotic Rulkov map, a classical neuron model known to exhibit a variety of chaotic spike patterns. This leads us to propose that the passage of slow relaxation cycles through a chaotic attractor crisis is a robust, general mechanism for the transition between such dynamics, and we validate this numerically in other models.
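For readers unfamiliar with the model named above, the snippet below iterates a standard form of the chaotic Rulkov map with typical parameter values; these are generic choices, not necessarily those used in the talk.

```python
# Standard chaotic Rulkov map (generic parameters): the fast variable x bursts
# chaotically while the slow variable y drifts between bursts.
import numpy as np

alpha, sigma, mu = 4.5, 0.14, 0.001
N = 20000
x = np.zeros(N)
y = np.zeros(N)
x[0], y[0] = -1.0, -2.9

for n in range(N - 1):
    x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]
    y[n + 1] = y[n] - mu * (x[n] + 1.0) + mu * sigma

bursting_fraction = np.mean(x > 0)      # crude measure of time spent spiking
print(f"fraction of iterates with x > 0 (bursting): {bursting_fraction:.2f}")
```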
  • 5:30 - 7:00 pm EDT
    Networking event with Carney Institute for Brain Science
    External Event - Carney Institute for Brain Science - 164 Angell St, Providence RI, 02906
Wednesday, September 20, 2023
  • 9:00 - 9:45 am EDT
    Modeling in neuroscience: the challenges of biological realism and computability
    11th Floor Lecture Hall
    • Speaker
    • Lai-Sang Young, Courant Institute
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Biologically realistic models of the brain have the potential to offer insight into neural mechanisms; they have predictive power, the ultimate goal of biological modeling. These benefits, however, come at considerable costs: network models that involve hundreds of thousands of neurons and many (unknown) parameters are unwieldy to build and to test, let alone to simulate and to analyze. Reduced models have obvious advantages, but the farther removed from biology a model is, the harder it is to draw meaningful inferences. In this talk, I propose a modeling strategy that aspires to be both realistic and computable. Two crucial ingredients are (i) we track neuronal dynamics on two spatial scales: coarse-grained dynamics informed by local activity, and (ii) we compute a family of potential local responses in advance, eliminating the need to perform similar computations at each spatial location in each update. I will illustrate this computational strategy using a model of the monkey visual cortex, which is very similar to that of humans.
  • 10:00 - 10:15 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:15 - 11:00 am EDT
    Uncertainty Quantification for Neurobiological Networks.
    11th Floor Lecture Hall
    • Speaker
    • Daniele Avitabile, Vrije Universiteit Amsterdam
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
     This talk presents a framework for forward uncertainty quantification problems in spatially-extended neurobiological networks. We will consider networks in which the cortex is represented as a continuum domain, and local neuronal activity evolves according to an integro-differential equation, collecting inputs nonlocally from the whole cortex. These models are sometimes referred to as neural field equations. Large-scale brain simulations of such models are currently performed heuristically, and the numerical analysis of these problems is largely unexplored. In the first part of the talk I will summarise recent developments for the rigorous numerical analysis of projection schemes for deterministic neural fields, which sets the foundation for developing Finite-Element and Spectral schemes for large-scale problems. The second part of the talk will discuss the case of networks in the presence of uncertainties modelled with random data, in particular: random synaptic connections, external stimuli, neuronal firing rates, and initial conditions. Such problems give rise to random solutions, whose mean, variance, or other quantities of interest have to be estimated using numerical simulations. This so-called forward uncertainty quantification problem is challenging because it couples spatially nonlocal, nonlinear problems to large-dimensional random data. I will present a family of schemes that couple a spatial projector for the spatial discretisation to stochastic collocation for the random data. We will analyse the time-dependent problem with random data and the schemes from a functional analytic viewpoint, and show that the proposed methods can achieve spectral accuracy, provided the random data is sufficiently regular. We will showcase the schemes using several examples. Acknowledgements: This talk presents joint work with Francesca Cavallini (VU Amsterdam), Svetlana Dubinkina (VU Amsterdam), and Gabriel Lord (Radboud University).
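To fix ideas, the sketch below discretizes a one-dimensional neural field with simple quadrature and treats a single random synaptic gain by plain Monte Carlo; it is a crude stand-in for the projection and stochastic-collocation schemes discussed in the talk, with all parameters invented here.

```python
# Crude sketch (not the talk's schemes): forward Euler + trapezoid-style
# quadrature for u_t = -u + \int w(x - y) f(u(y)) dy on [-L, L], with a random
# synaptic gain g handled by plain Monte Carlo as a toy forward-UQ step.
import numpy as np

L, N, dt, steps = 10.0, 200, 0.05, 100
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
f = lambda u: 1.0 / (1.0 + np.exp(-8.0 * (u - 0.3)))            # firing rate
kernel = lambda d, g: g * np.exp(-np.abs(d)) * (1 - np.abs(d))  # lateral exc/inh

rng = np.random.default_rng(5)
D = x[:, None] - x[None, :]
u0 = np.exp(-x**2)                         # localized initial bump

solutions = []
for _ in range(50):                        # Monte Carlo over the random gain g
    g = rng.uniform(0.8, 1.2)
    W = kernel(D, g) * dx                  # quadrature weights folded in
    u = u0.copy()
    for _ in range(steps):
        u += dt * (-u + W @ f(u))
    solutions.append(u)

u_mean = np.mean(solutions, axis=0)
print(f"mean peak activity after t={steps * dt}: {u_mean.max():.3f}")
```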
  • 11:15 am - 12:00 pm EDT
    Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Konstantin Mischaikow, Rutgers University
    • Katie Morrison, University of Northern Colorado
  • 12:00 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 2:45 pm EDT
    Dynamics of stochastic integrate-and-fire networks
    11th Floor Lecture Hall
    • Speaker
    • Gabe Ocker, Boston University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
  • 3:00 - 3:05 pm EDT
    A Step Towards Uncovering The Structure of Multistable Neural Networks
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Magnus Tournoy, Flatiron Institute
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    With the experimental advances in the recording of large populations of neurons, theorists are in the humbling position of making sense of a staggering amount of data. One question that will come more within reach is how network structure relates to function. But going beyond explanatory models and becoming more predictive will require a fundamental approach. In this talk we’ll take the view of a physicist and formulate exact results within a simple, yet general, toy model: Glass networks. Named after their originator Leon Glass, these networks are the infinite-gain limit of well-known circuit models such as continuous-time Hopfield networks. We’ll show that, in this limit, stability conditions reduce to semipositivity constraints on the synaptic weight matrix. With a clear link between structure and function in hand, the consequences of multistability for the network architecture can be explored. One finding is a factorization of the weight matrix in terms of nonnegative matrices; interestingly, this factorization completely characterizes the existence of stable states. Another result is a reduction of the allowed sign patterns for the connections, which yields lower bounds on the number of excitatory and inhibitory connections. Finally, we will discuss the special case of “sign stability”, where stability is guaranteed by the topology of the network. Derivations of these results will be supplemented by a number of examples.
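    For orientation (a standard formulation, with notation assumed rather than taken from the abstract), the infinite-gain limit mentioned above can be illustrated with a continuous-time Hopfield-type network
    \[ \dot{x}_i = -x_i + \sum_j W_{ij}\,\sigma(g\, x_j) + b_i, \]
    where sending the gain g to infinity replaces the sigmoid \sigma by a step function and yields a piecewise-linear (Glass) network; in this limit, questions about which stable states exist reduce to sign and semipositivity conditions on the weight matrix W.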
  • 3:05 - 3:10 pm EDT
    Clustering and Distribution of the Adaptation Variable
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Ka Nap Tse, University of Pittsburgh
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Brain waves are an important phenomenon in neuroscience. Besides spiking synchronously, excitatory cells with adaptation can spike in clusters and thereby produce rhythmic network activity. In previous work, the adaptation variable is usually eliminated before further analysis. In this talk, a way to study this clustering behaviour through the evolution of the distribution of the adaptation variable will be discussed. We then transform the distribution to the time-to-spike coordinate for further exploration.
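    As a hedged illustration (a generic adaptive integrate-and-fire form, with notation assumed and not taken from the talk), the adaptation variable in question typically enters as
    \[ \tau\,\dot{v} = -v + I - a, \qquad \tau_a\,\dot{a} = -a, \qquad a \mapsto a + \Delta a \ \text{whenever } v \text{ spikes}, \]
    so that the network state can be summarized by the evolving distribution of a across cells, which is the object transformed to the time-to-spike coordinate in the talk.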
  • 3:10 - 3:15 pm EDT
    Low-dimensional manifold of neural oscillations revealed by data-driven model reduction
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Zhuo-Cheng Xiao, New York University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Neural oscillations across various frequency bands are believed to underlie essential brain functions, such as information processing and cognitive activities. However, the emergence of oscillatory dynamics from spiking neuronal networks—and the interplay among different cortical rhythms—has seldom been theoretically explored, largely due to the strong nonlinearity and high dimensionality involved. To address this challenge, we have developed a series of data-driven model reduction methods tailored for spiking network dynamics. In this talk I will present nearly two-dimensional manifolds in the reduced coordinates that successfully capture the emergence of gamma oscillations. Specifically, we find that the initiation phases of each oscillation cycle are the most critical. Subsequent cycles are more deterministic and lie on the aforementioned two-dimensional manifold. The Poincaré mappings between these initiation phases reveal the structure of the dynamical system and successfully explain the bifurcation from gamma oscillations to multi-band oscillations.
  • 3:15 - 3:20 pm EDT
    Sensitivity to control signals in triphasic rhythmic neural systems: a comparative mechanistic analysis via infinitesimal local timing response curves
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Zhuojun Yu, Case Western Reserve University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    Similar activity patterns may arise from model neural networks with distinct coupling properties and individual unit dynamics. These similar patterns may, however, respond differently to parameter variations and, specifically, to tuning of inputs that represent control signals. In this work, we analyze the responses resulting from modulation of a localized input in each of three classes of model neural networks that have been recognized in the literature for their capacity to produce robust three-phase rhythms: coupled fast-slow oscillators, near-heteroclinic oscillators, and threshold-linear networks. Triphasic rhythms, in which each phase consists of a prolonged activation of a corresponding subgroup of neurons followed by a fast transition to another phase, represent a fundamental activity pattern observed across a range of central pattern generators underlying behaviors critical to survival, including respiration, locomotion, and feeding. To perform our analysis, we extend the recently developed local timing response curve (lTRC), which allows us to characterize the timing effects due to perturbations, and we complement our lTRC approach with model-specific dynamical systems analysis. Interestingly, we observe disparate effects of similar perturbations across distinct model classes. Thus, this work provides an analytical framework for studying control of oscillations in nonlinear dynamical systems, and may help guide model selection in future efforts to study systems exhibiting triphasic rhythmic activity.
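    For reference (a standard form, with notation assumed rather than quoted from the abstract), the threshold-linear networks in the third model class are typically written as
    \[ \dot{x}_i = -x_i + \Big[ \sum_j W_{ij}\, x_j + b_i \Big]_+, \qquad [\,\cdot\,]_+ = \max(\cdot, 0), \]
    with a triphasic rhythm corresponding to a limit cycle in which three subgroups of units take turns being active, each prolonged phase ending in a fast transition to the next.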
  • 3:20 - 3:25 pm EDT
    Modeling the effects of cell-type specific lateral inhibition
    Lightning Talks - 11th Floor Lecture Hall
    • Speaker
    • Soon Ho Kim, Georgia Institute of Technology
    • Session Chair
    • Katie Morrison, University of Northern Colorado
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Computing the Global Dynamics of Parameterized Families of ODEs
    11th Floor Lecture Hall
    • Speaker
    • Marcio Gameiro, Rutgers University
    • Session Chair
    • Katie Morrison, University of Northern Colorado
    Abstract
    We present a combinatorial topological method to compute the dynamics of a parameterized family of ODEs. A discretization of the state space of the systems is used to construct a combinatorial representation from which recurrent versus non-recurrent dynamics is extracted. Algebraic topology is then used to validate and characterize the dynamics of the system. We will discuss the combinatorial description and the algebraic topological computations and will present applications to systems of ODEs arising from gene regulatory networks.
Thursday, September 21, 2023
  • 9:00 - 9:45 am EDT
    Multiple timescale respiratory dynamics and effect of neuromodulation
    11th Floor Lecture Hall
    • Speaker
    • Yangyang Wang, Brandeis University
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    Respiration is an involuntary process in all living beings required for our survival. The preBötzinger complex (preBötC) in the mammalian brainstem is a neuronal network that drives inspiratory rhythmogenesis, whose activity is constantly modulated by neuromodulators in response to changes in the environment. In this talk, we will discuss challenges involved in the analysis of bursting dynamics in preBötC neurons and how these dynamics change during prenatal development. We will also combine insights from in vitro recordings and dynamical systems modeling to investigate the effect of norepinephrine (NE), an excitatory neuromodulator, on respiratory dynamics. Our investigation employs bifurcation analysis to reveal the mechanisms by which NE differentially modulates different types of preBötC bursting neurons.
  • 10:00 - 10:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 10:30 - 11:15 am EDT
    Enhancing Neuronal Classification Capacity via Nonlinear Parallel Synapses
    11th Floor Lecture Hall
    • Speaker
    • Marcus Benna, UC San Diego
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    We discuss models of a neuron that has multiple synaptic contacts with the same presynaptic axon. We show that a diverse set of learned nonlinearities in these parallel synapses leads to a substantial increase in the neuronal classification capacity.
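    A schematic way to picture this (notation assumed, not taken from the abstract): with K parallel synaptic contacts per input axon, each carrying its own learned nonlinearity h_{ik}, the neuron's output can be modelled as
    \[ y = \phi\Big( \sum_{i=1}^{N} \sum_{k=1}^{K} h_{ik}(x_i) \Big), \]
    so that the effective map applied to each axon is a learned sum of nonlinearities rather than a single scalar weight, which is the mechanism the abstract credits with increasing classification capacity.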
  • 11:30 am - 1:30 pm EDT
    Working Lunch: Open Problems Session
    Working Lunch - 11th Floor Collaborative Space
  • 1:30 - 2:15 pm EDT
    Combinatorial structure of continuous dynamics in gene regulatory networks
    11th Floor Lecture Hall
    • Speaker
    • Tomas Gedeon, Montana State University
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    Gene network dynamics and neural network dynamics face similar challenges of high dimensionality of both phase space and parameter space, and a lack of reliable experimental data to infer parameters. We first describe the mathematical foundation of DSGRN (Dynamic Signatures Generated by Regulatory Networks), an approach that provides a combinatorial description of the global dynamics of a network over its parameter space. This finite description allows comparison of parameterized dynamics between hundreds of networks to discard networks that are not compatible with experimental data. We also describe a close connection of DSGRN to Boolean network models that allows us to view DSGRN as a connection between parameterized continuous-time dynamics and discrete dynamics of Boolean models. If time allows, we discuss several applications of this methodology to systems biology.
  • 2:30 - 3:15 pm EDT
    A model of the mammalian neural motor architecture elucidates the mechanisms underlying efficient and flexible control of network dynamics
    11th Floor Lecture Hall
    • Speaker
    • Laureline Logiaco, Massachusetts Institute of Technology
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
    Abstract
    One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales in order to efficiently shape structured behaviors. I will present research investigating how these complex computations are performed in the mammalian brain, with an emphasis on autonomous motor control. Specifically, I will focus on the mechanisms supporting efficient interfacing between 'higher-level' planning commands and 'lower-level' motor cortical dynamics that ultimately drive muscles. I will take advantage of the fact that the anatomy of the circuits underlying motor control is well known. It notably involves the primary motor cortex, a recurrent network that generates learned commands to drive muscles while interacting through loops with thalamic neurons that lack recurrent excitation. Using an analytically tractable model that incorporates these architectural constraints, I will explain how this motor circuit can implement a form of efficient modularity by combining (i) plastic thalamocortical loops that are movement-specific and (ii) shared hardwired circuits. I will show that this modular architecture can balance two different objectives: first, supporting the flexible recombination of an extensible library of re-usable motor primitives; and second, promoting the efficient use of neural resources by taking advantage of shared connections between modules. I will end by mentioning some open avenues for further mathematical analyses related to this framework.
  • 3:30 - 4:00 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 4:00 - 4:45 pm EDT
    Low-rank neural connectivity for the discrimination of temporal patterns
    11th Floor Lecture Hall
    • Speaker
    • Sean Escola, Columbia University
    • Session Chair
    • Zachary Kilpatrick, University of Colorado Boulder
Friday, September 22, 2023
  • 9:00 - 9:45 am EDT
    Mean-field theory of learning dynamics in deep neural networks
    11th Floor Lecture Hall
    • Speaker
    • Cengiz Pehlevan, Harvard University
    • Session Chair
    • Konstantin Mischaikow, Rutgers University
    Abstract
    The learning dynamics of deep neural networks are complex. While previous approaches made advances in the mathematical analysis of the dynamics of two-layer neural networks, addressing deeper networks has been challenging. In this talk, I will present a mean-field theory of the learning dynamics of deep networks and discuss its implications.
  • 10:00 - 10:45 am EDT
    Multi-level measures for understanding and comparing biological and artificial neural networks
    11th Floor Lecture Hall
    • Speaker
    • SueYeon Chung, New York University
    • Session Chair
    • Konstantin Mischaikow, Rutgers University
    Abstract
    I will share recent theoretical advances on how representations' population-level properties, such as high-dimensional geometry and spectral properties, can be used to capture (1) the classification capacity of neural manifolds, and (2) the prediction error of neural data from network model representations.
  • 11:00 - 11:30 am EDT
    Coffee Break
    11th Floor Collaborative Space
  • 11:30 am - 12:15 pm EDT
    A Sparse-coding Model of Category-specific Functional Organization in IT Cortex
    11th Floor Lecture Hall
    • Speaker
    • Demba Ba, Harvard University
    • Session Chair
    • Konstantin Mischaikow, Rutgers University
    Abstract
    Primary sensory areas in the brain of mammals may have evolved to compute efficient representations of natural scenes. In the late 90s, Olshausen and Field proposed a model that expresses the components of a natural scene, e.g. natural-image patches, as sparse combinations of a common set of patterns. Applied to a dataset of natural images, this so-called sparse coding model learns patterns that resemble the receptive fields of V1 neurons. Recordings from the monkey inferotemporal (IT) cortex suggest the presence, in this region, of a sparse code for natural-image categories. The recordings also suggest that, physically, IT neurons form spatial clusters, each of which preferentially responds to images from certain categories. Taken together, this evidence suggests that neurons in IT cortex form functional groups that reflect the grouping of natural images into categories. My talk will introduce a new sparse-coding model that exhibits this categorical form of functional grouping.
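    For context (a common formulation of the classical objective, not the new model introduced in the talk), sparse coding expresses each image patch x as a sparse combination of dictionary patterns by solving
    \[ \min_{D,\, a} \ \tfrac{1}{2}\,\| x - D a \|_2^2 + \lambda\, \| a \|_1, \]
    where the columns of D are the learned patterns and the penalty weight \lambda controls sparsity; applied to natural images, the learned columns of D resemble V1 receptive fields.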
  • 12:30 - 2:00 pm EDT
    Lunch/Free Time
  • 2:00 - 2:45 pm EDT
    Final Open Problems Discussion
    Problem Session - 11th Floor Lecture Hall
    • Session Chairs
    • Carina Curto, The Pennsylvania State University
    • Konstantin Mischaikow, Rutgers University
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, September 25, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 10:00 - 11:00 am EDT
    Journal Club
    11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
  • 3:30 - 5:00 pm EDT
    TLN Working Group
    Group Work - 10th Floor Classroom
Tuesday, September 26, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Tutorial
    Tutorial - 11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Wednesday, September 27, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EDT
    Professional Development: Ethics I
    Professional Development - 11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Thursday, September 28, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:30 am EDT
    Tutorial
    Tutorial - 11th Floor Lecture Hall
  • 12:00 - 1:30 pm EDT
    Open Problems Lunch Seminar
    Working Lunch - 11th Floor Lecture Hall
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Friday, September 29, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 11:00 - 11:30 am EDT
    Recurrent network models for predictive processing
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
    • Bin Wang, University of California, San Diego
    Abstract
    Predictive responses to sensory stimuli are prevalent across cortical networks and are thought to be important for multi-sensory and sensorimotor learning. It has been hypothesized that predictive processing relies on computations done by two separate functional classes of cortical neurons: one specialized for “faithful” representation of external stimuli, and another for conveying prediction-error signals. It remains unclear how such predictive representations are formed in natural conditions, where stimuli are high-dimensional. In this presentation, I will present some efforts to characterize how high-dimensional predictive processing can be performed by recurrent networks. I will start with the neuroscience motivation, define the mathematical models, and mention along the way some related mathematical questions that we haven't yet solved.
  • 11:30 am - 12:00 pm EDT
    Fishing for beta: uncovering mechanisms underlying cortical oscillations in large-scale biophysical models
    Post Doc/Graduate Student Seminar - 10th Floor Classroom
    • Nicholas Tolley, Brown University
    Abstract
    Beta frequency (13-30 Hz) oscillations are robustly observed across the neocortex and are strongly predictive of behavior and disease states. While several theories exist regarding their functional significance, the cell- and circuit-level activity patterns underlying the generation of beta activity remain uncertain. We approach this problem using the Human Neocortical Neurosolver (HNN; hnn.brown.edu), a detailed biophysical model of a cortical column which simulates the microscale activity patterns underlying macroscale field potentials such as beta oscillations. Detailed biophysical models potentially offer concrete and biologically interpretable predictions, but their use is challenged by computationally expensive simulations, an overwhelmingly large parameter space, and highly complex relationships between parameters and model outputs. We demonstrate how these challenges can be overcome by combining HNN with simulation-based inference (SBI), a deep-learning-based Bayesian inference framework, and use it to characterize the space of parameters capable of producing beta oscillations. Specifically, we use the HNN-SBI framework to characterize the constraints on network connectivity for producing spontaneous beta. In future work, we plan to compare these predictions to higher-level neural models to identify which simplifying assumptions are consistent with detailed models of neural oscillations.
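    As a hedged sketch of the simulation-based-inference idea only (a generic rejection-style example in plain numpy, not the HNN-SBI pipeline; the simulator, prior ranges, and summary statistics below are placeholders):

      import numpy as np

      rng = np.random.default_rng(0)

      def simulator(theta):
          # Placeholder for an expensive biophysical simulation (e.g. an HNN run):
          # generate a noisy oscillation and return two summary statistics.
          freq, amp = theta
          t = np.linspace(0.0, 1.0, 500)
          signal = amp * np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)
          spectrum = np.abs(np.fft.rfft(signal))
          return np.array([signal.std(), float(spectrum.argmax())])

      def sample_prior(n):
          # Uniform prior over (oscillation frequency in Hz, amplitude).
          return np.column_stack([rng.uniform(13.0, 30.0, n), rng.uniform(0.5, 2.0, n)])

      # "Observed" summaries that the simulations should reproduce.
      x_obs = simulator(np.array([21.0, 1.0]))

      # Simulate under the prior and keep parameters whose summaries land near the observation.
      theta = sample_prior(5000)
      x_sim = np.array([simulator(th) for th in theta])
      distances = np.linalg.norm(x_sim - x_obs, axis=1)
      accepted = theta[distances < np.quantile(distances, 0.02)]

      print("approximate posterior mean (freq, amp):", accepted.mean(axis=0))

    In deep-learning-based SBI, the rejection step above is replaced by training a neural density estimator on (parameter, summary) pairs, which is what makes inference feasible for large, expensive simulators.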
  • 1:30 - 3:00 pm EDT
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
  • 3:00 - 3:30 pm EDT
    Coffee Break
    11th Floor Collaborative Space
Monday, October 2, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:30 - 5:00 pm EDT
    TLN Working Group
    Group Work - 10th Floor Classroom
Wednesday, October 4, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EDT
    Professional Development: Ethics II
    Professional Development - 11th Floor Lecture Hall
Friday, October 6, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 1:30 - 3:00 pm EDT
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
Monday, October 9, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:30 - 5:00 pm EDT
    TLN Working Group
    Group Work - 10th Floor Classroom
Friday, October 13, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 1:30 - 3:00 pm EDT
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
Monday, October 23, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:30 - 5:00 pm EDT
    TLN Working Group
    Group Work - 10th Floor Classroom
Wednesday, October 25, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EDT
    Professional Development: Job Applications
    Professional Development - 11th Floor Lecture Hall
Friday, October 27, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 1:30 - 3:00 pm EDT
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
Wednesday, November 8, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EST
    Professional Development: Papers
    Professional Development - 11th Floor Lecture Hall
Friday, November 10, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 1:30 - 3:00 pm EST
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
Monday, November 13, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:30 - 5:00 pm EST
    TLN Working Group
    Group Work - 10th Floor Classroom
Wednesday, November 15, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 9:00 - 10:00 am EST
    Professional Development: Grants
    Professional Development - 11th Floor Lecture Hall
Friday, November 17, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 1:30 - 3:00 pm EST
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
Monday, November 20, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:30 - 5:00 pm EST
    TLN Working Group
    Group Work - 10th Floor Classroom
Friday, November 24, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 1:30 - 3:00 pm EST
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
Monday, November 27, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:30 - 5:00 pm EST
    TLN Working Group
    Group Work - 10th Floor Classroom
Friday, December 1, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 1:30 - 3:00 pm EST
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom
Monday, December 4, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 3:30 - 5:00 pm EST
    TLN Working Group
    Group Work - 10th Floor Classroom
Friday, December 8, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics
  • 1:30 - 3:00 pm EST
    Topology+Neuro Working Group
    Group Work - 10th Floor Classroom

All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4).

Application Information

This program is at capacity, and ICERM is no longer accepting applications.

Your Visit to ICERM

ICERM Facilities
ICERM is located on the 10th & 11th floors of 121 South Main Street in Providence, Rhode Island. ICERM's business hours are 8:30am - 5:00pm during this event. See our facilities page for more info about ICERM and Brown's available facilities.
Traveling to ICERM
ICERM is located at Brown University in Providence, Rhode Island. Providence's T.F. Green Airport (15 minutes south) and Boston's Logan Airport (1 hour north) are the closest airports. Providence is also on Amtrak's Northeast Corridor. In-depth directions and transportation information are available on our travel page.
Lodging/Housing
Visiting ICERM for longer than a week-long workshop? ICERM staff works with participants to locate accommodations that fit their needs. Since short-term furnished housing is in very high demand, take advantage of the housing options ICERM may recommend. Contact housing@icerm.brown.edu for more details.
Childcare/Schools
Those traveling with family who are interested in information about childcare and/or schools should contact housing@icerm.brown.edu.
Technology Resources
Wireless internet access and wireless printing are available for all ICERM visitors. Eduroam is available for members of participating institutions. Thin clients in all offices and common areas provide open access to a web browser, SSH terminal, and printing capability. See our Technology Resources page for setup instructions and to learn about all available technology.
Accessibility
To request special services, accommodations, or assistance for this event, please contact accessibility@icerm.brown.edu as far in advance of the event as possible. Thank you.
Discrimination and Harassment Policy
ICERM is committed to creating a safe, professional, and welcoming environment that benefits from the diversity and experiences of all its participants. Brown University's "Code of Conduct", "Discrimination and Workplace Harassment Policy", "Sexual and Gender-based Misconduct Policy", and "Title IX Policy" apply to all ICERM participants and staff. Participants with concerns or requests for assistance on a discrimination or harassment issue should contact the ICERM Director, who is the responsible employee at ICERM under this policy.
Fundamental Research
ICERM research programs aim to promote Fundamental Research and mathematical sciences education. If you are engaged in sensitive or proprietary work, please be aware that ICERM programs often have participants from countries and entities subject to United States export control restrictions. Any discoveries of economically significant intellectual property supported by ICERM funding should be disclosed.
Exploring Providence
Providence's world-renowned culinary scene provides ample options for lunch and dinner. Neighborhoods near campus, including College Hill Historic District, have many local attractions. Check out the map on our Explore Providence page to see what's near ICERM.

Visa Information

Contact visa@icerm.brown.edu for assistance.

Need a US Visa?
J-1 visa requested via ICERM staff
Eligible to be reimbursed
B-1 or Visa Waiver Business (WB): if you already have either visa, contact ICERM staff for a visa-specific invitation letter.
Ineligible to be reimbursed
B-2 or Visa Waiver Tourist (WT)
Already in the US?

F-1 and J-1 not sponsored by ICERM: obtain a letter approving reimbursement from the International Office of your home institution PRIOR to travel.

H-1B holders do not need a letter of approval.

All other visas: alert ICERM staff immediately about your situation.

ICERM does not reimburse visa fees. This chart is to inform visitors whether the visa they enter the US on allows them to receive reimbursement for the items outlined in their invitation letter.

Financial Support

This section is for general purposes only and does not indicate that all attendees receive funding. Please refer to your personalized invitation to review your offer.

ORCID iD
As this program is funded by the National Science Foundation (NSF), ICERM is required to collect your ORCID iD if you are receiving funding to attend this program. Be sure to add your ORCID iD to your Cube profile as soon as possible to avoid delaying your reimbursement.
Acceptable Costs
  • 1 roundtrip between your home institute and ICERM
  • Flights on U.S. or E.U. airlines – economy class to either Providence airport (PVD) or Boston airport (BOS)
  • Ground Transportation to and from airports and ICERM.
Unacceptable Costs
  • Flights on non-U.S. or non-E.U. airlines
  • Flights on U.K. airlines
  • Seats in economy plus, business class, or first class
  • Change ticket fees of any kind
  • Multi-use bus passes
  • Meals or incidentals
Advance Approval Required
  • Personal car travel to ICERM from outside New England
  • Multiple-destination plane ticket; does not include layovers to reach ICERM
  • Arriving or departing from ICERM more than a day before or day after the program
  • Multiple trips to ICERM
  • Rental car to/from ICERM
  • Flights on Swiss, Japanese, or Australian airlines
  • Arriving or departing from airport other than PVD/BOS or home institution's local airport
  • 2 one-way plane tickets to create a roundtrip (often purchased from Expedia, Orbitz, etc.)
Travel Maximum Contributions
  • New England: $350
  • Other contiguous US: $850
  • Asia & Oceania: $2,000
  • All other locations: $1,500
  • Note: these rates were updated in Spring 2023 and supersede any prior invitation rates. Any invitations without travel support will still not receive travel support.
Reimbursement Requests

Request Reimbursement with Cube

Refer to the back of your ID badge for more information. Checklists are available at the front desk and in the Reimbursement section of Cube.

Reimbursement Tips
  • Scanned original receipts are required for all expenses
  • Airfare receipt must show full itinerary and payment
  • ICERM does not offer per diem or meal reimbursement
  • Allowable mileage is reimbursed at the prevailing IRS business rate, with the trip documented via a PDF of the Google Maps route
  • Keep all documentation until you receive your reimbursement!
Reimbursement Timing

6-8 weeks after all documentation is sent to ICERM. All reimbursement requests are reviewed by numerous central offices at Brown, which may request additional documentation.

Reimbursement Deadline

Submissions must be received within 30 days of your ICERM departure to avoid applicable taxes. Submissions after 30 days will incur applicable taxes. No submissions are accepted more than six months after the program end.

Associated Semester Workshops

Topology and Geometry in Neuroscience
Image for "Topology and Geometry in Neuroscience"
Neural Coding and Combinatorics
Image for "Neural Coding and Combinatorics"