Organizing Committee
 Carina Curto, The Pennsylvania State University
 Brent Doiron, University of Chicago
 Robert Ghrist, University of Pennsylvania
 Kathryn Hess, EPFL
 Zachary Kilpatrick, University of Colorado Boulder
 Matilde Marcolli, California Institute of Technology
 Konstantin Mischaikow, Rutgers University
 Katie Morrison, University of Northern Colorado
 Elad Schneidman, Weizmann Institute of Science
 Tatyana Sharpee, Salk Institute
Abstract
The goal of this Semester Program is to bring together a variety of mathematicians with researchers working in theoretical and computational neuroscience as well as some theory-friendly experimentalists. However, unlike programs in neuroscience that emphasize connections between theory and experiment, this program will focus on building bridges between theory and mathematics. This is motivated in part by the observation that theoretical developments in neuroscience are often limited not only by lack of data but also by the need to better develop the relevant mathematics. For example, theorists often rely on linear or near-linear modeling frameworks for neural networks simply because the mathematics of nonlinear network dynamics is still poorly understood. Conversely, just as in the history of physics, neuroscience problems give rise to new questions in mathematics. In recent years, these questions have touched on a rich variety of fields including geometry, topology, combinatorics, dynamical systems, and algebra. We believe the time has come to deepen these connections and foster new interactions and collaborations between neuroscientists who think deeply about theory and mathematicians who are looking for new problems inspired by science. In addition to collaborative research between theorists and mathematicians, an explicit goal of the program will be to produce an “open problems” document. This document will present a series of well-formulated open math problems together with explanations of their neuroscience motivation, partial progress, and the potential significance of their solutions.
Confirmed Speakers & Participants
Talks will be presented virtually or in-person as indicated in the schedule below.

Arman Afrasiyabi
Yale University (Oct 30–Nov 3, 2023)

Yashar Ahmadian
Cambridge University (Sep 18–22, 2023)

Daniele Avitabile
Vrije Universiteit Amsterdam (Sep 20–Dec 15, 2023)

Huseyin Ayhan
Florida State University (Oct 16–20, 2023)

Demba Ba
Harvard University (Sep 18–22, 2023)

Aishwarya Balwani
Georgia Institute of Technology (Oct 16–20, 2023)

Andrea Barreiro
Southern Methodist University (Sep 1–Dec 22, 2023)

Robin Belton
Smith College (Sep 18–22, 2023)

Marcus Benna
UC San Diego (Sep 18–22, 2023)

Dhananjay Bhaskar
Yale University (Oct 16–20, 2023)

Ginestra Bianconi
Queen Mary University of London (Oct 16–20, 2023)

Prianka Bose
New Jersey Institute of Technology (Sep 18–22, 2023)

Amitabha Bose
New Jersey Institute of Technology (Sep 15–Nov 17, 2023)

Felipe Branco de Paiva
University of Wisconsin–Madison (Oct 16–20, 2023)

Robyn Brooks
University of Utah (Sep 1–Dec 31, 2023)

Peter Bubenik
University of Florida (Sep 18–22, 2023; Oct 16–20, 2023)

Michael Buice
Allen Institute (Sep 18–22, 2023)

Thomas Burns
ICERM (Aug 31–Dec 31, 2023)

Johnathan Bush
University of Florida (Oct 16–20, 2023)

Carlos Castañeda Castro
Brown University (Sep 6–Dec 8, 2023)

Dmitri Chklovskii
Flatiron Institute & NYU Neuroscience Institute (Oct 16–20, 2023)

Hannah Choi
Georgia Institute of Technology (Oct 30–Nov 3, 2023)

Sehun Chun
Yonsei University (Sep 18–22, 2023)

Heather Cihak
University of Colorado Boulder (Sep 18–22, 2023)

Giovanna Citti
University of Bologna (Sep 17–Nov 3, 2023)

Natasha Crepeau
University of Washington (Oct 30–Nov 3, 2023)

Justin Curry
University at Albany, SUNY (Oct 16–20, 2023)

Carina Curto
The Pennsylvania State University (Sep 1–Dec 9, 2023)

Rodica Curtu
The University of Iowa (Sep 18–22, 2023)

Rava da Silveira
Institute of Molecular and Clinical Ophthalmology Basel (Oct 30–Nov 3, 2023)

Steve Damelin
University of Michigan (Sep 17–22, 2023; Oct 15–20, 2023)

Maria Dascalu
University of Massachusetts Amherst (Oct 29–Nov 4, 2023)

Anda Degeratu
University of Stuttgart (Sep 17–Oct 7, 2023)

Juan Carlos Díaz-Patiño
Universidad Nacional Autónoma de México (Oct 16–20, 2023)

Darcy Diesburg
Brown University (Sep 18–22, 2023; Oct 16–20, 2023; Oct 30–Nov 3, 2023)

Fatih Dinc
Stanford University (Sep 18–22, 2023)

Brent Doiron
University of Chicago (Sep 17–23, 2023)

Benjamin Dunn
Norwegian University of Science and Technology (Oct 16–20, 2023)

Julia E Grigsby
Boston College (Oct 30–Nov 3, 2023)

Ahmed El Hady
Konstanz Center for Advanced Study of Collective Behavior (Nov 1–30, 2023)

Ani Eloyan
Brown University (Oct 16–20, 2023)

Aysel Erey
Utah State University (Oct 30–Nov 3, 2023)

Sean Escola
Columbia University (Sep 18–22, 2023)

Julio Esparza Ibanez
Instituto Cajal – CSIC (Spanish National Research Council) (Oct 16–26, 2023)

Ashkan Faghiri
Georgia State University (Oct 16–20, 2023)

Matthew Farrell
Harvard University (Sep 18–22, 2023)

Richard Foster
Virginia Commonwealth University (Sep 18–22, 2023)

Michael Frank
Brown University (Sep 6–Dec 8, 2023)

Michael Freund
Brown University (Oct 16–20, 2023)

Halley Fritze
University of Oregon (Oct 16–20, 2023)

Marcio Gameiro
Rutgers University (Sep 6–Dec 8, 2023)

Harshvardhan Gazula
MIT (Oct 16–20, 2023)

Tomas Gedeon
Montana State University (Sep 11–Nov 1, 2023)

Maria Geffen
University of Pennsylvania (Nov 1–3, 2023)

Tim Gentner
University of California, San Diego (Oct 30–Nov 3, 2023)

Juliann Geraci
University of Nebraska–Lincoln (Oct 30–Nov 10, 2023)

Robert Ghrist
University of Pennsylvania (Oct 19–20, 2023)

Chad Giusti
University of Delaware (Sep 17–30, 2023; Oct 15–Nov 6, 2023)

Harold Xavier Gonzalez
Stanford University (Sep 6–22, 2023)

Anna Grim
Allen Institute (Oct 16–20, 2023)

Robert Gütig
Charité Medical School Berlin (Oct 16–20, 2023)

Todd Hagen
Bernstein Center for Computational Neuroscience (Oct 16–20, 2023)

Erik Hermansen
Norwegian University of Science and Technology (Oct 16–20, 2023)

Abigail Hickok
Columbia University (Oct 15–20, 2023)

Christian Hirsch
Aarhus University (Oct 16–20, 2023)

Betty Hong
California Institute of Technology (Oct 30–Nov 3, 2023)

Iris Horng
University of Pennsylvania (Oct 15–21, 2023)

Ching-Peng Huang
UKE (Oct 16–20, 2023)

Chengcheng Huang
University of Pittsburgh (Sep 18–22, 2023)

Vladimir Itskov
The Pennsylvania State University (Sep 5–Dec 8, 2023)

Jonathan Jaquette
New Jersey Institute of Technology (Sep 18–22, 2023)

Yuchen Jiang
Australian National University (Oct 16–20, 2023)

Alvin Jin
Berkeley (Oct 15–21, 2023)

Kresimir Josic
University of Houston (Sep 18–22, 2023)

Shabnam Kadir
University of Hertfordshire (Oct 30–Nov 3, 2023)

Sameer Kailasa
University of Michigan Ann Arbor (Sep 5–Dec 9, 2023)

Lida Kanari
EPFL/Blue Brain (Oct 16–20, 2023)

Selvi Kara
University of Utah (Oct 30–Nov 3, 2023)

Gabriella Keszthelyi
Alfréd Rényi Institute of Mathematics (Sep 16–22, 2023)

Roozbeh Kiani
New York University (Oct 30–Nov 3, 2023)

Christopher Kim
National Institutes of Health (Sep 18–22, 2023)

Soon Ho Kim
Georgia Institute of Technology (Sep 18–22, 2023)

Hyunjoong Kim
University of Houston (Sep 17–23, 2023)

Kevin Knudson
University of Florida (Oct 16–20, 2023)

Leo Kozachkov
Massachusetts Institute of Technology (Sep 18–22, 2023)

Maxwell Kreider
Case Western Reserve University (Sep 6–Dec 8, 2023)

Kishore Kuchibhotla
Johns Hopkins University (Oct 16–20, 2023)

Ankit Kumar
UC Berkeley (Sep 18–22, 2023)

Giancarlo La Camera
Stony Brook University (Oct 16–20, 2023)

Kang-Ju Lee
Seoul National University (Oct 15–21, 2023)

Ran Levi
University of Aberdeen (Oct 16–20, 2023)

Noah Lewis
Georgia Institute of Technology (Oct 16–20, 2023)

Yao Li
University of Massachusetts Amherst (Sep 6–Dec 8, 2023)

Zelong Li
Penn State University (Sep 5–Dec 9, 2023)

Caitlin Lienkaemper
Boston University (Sep 15–Nov 4, 2023)

Kathryn Lindsey
Boston College (Sep 6–Dec 8, 2023)

Justin Lines
Columbia University (Oct 30–Nov 3, 2023)

Vasiliki Liontou
ICERM (Sep 6–Dec 8, 2023)

David Lipshutz
Flatiron Institute (Sep 18–22, 2023)

Sijing Liu
Brown University (Sep 1, 2023–May 31, 2024)

Jessica Liu
CUNY Graduate Center (Sep 18–22, 2023)

Simon Locke
Johns Hopkins University (Sep 18–22, 2023)

Laureline Logiaco
Massachusetts Institute of Technology (Sep 18–22, 2023)

Juliana Londono Alvarez
Penn State (Sep 6–Dec 8, 2023)

Caio Lopes
École Polytechnique Fédérale de Lausanne (Oct 16–20, 2023)

Christian Machens
Champalimaud Foundation (Oct 30–Nov 3, 2023)

James MacLaurin
New Jersey Institute of Technology (Sep 18–22, 2023)

Matilde Marcolli
California Institute of Technology (Sep 6–Dec 8, 2023)

Marissa Masden
ICERM (Sep 6, 2023–May 31, 2024)

Sarah Mason
Wake Forest University (Oct 30–Nov 3, 2023)

Leenoy Meshulam
University of Washington (Oct 30–Nov 3, 2023)

Nikola Milicevic
Pennsylvania State University (Sep 1–Dec 10, 2023)

Federica Milinanni
KTH Royal Institute of Technology (Sep 17–Nov 5, 2023)

Konstantin Mischaikow
Rutgers University (Sep 17–23, 2023; Sep 18–22, 2023; Oct 5–6, 2023)

Katie Morrison
University of Northern Colorado (Sep 1–Dec 10, 2023)

Noga Mudrik
The Johns Hopkins University (Sep 18–22, 2023)

Audrey Nash
Florida State University (Sep 18–22, 2023)

Matt Nassar
Brown University (Sep 6–Dec 8, 2023)

Ilya Nemenman
Emory University (Oct 30–Nov 3, 2023)

Fernando Nobrega Santos
University of Amsterdam (Oct 15–21, 2023)

Gabe Ocker
Boston University (Sep 6–Dec 8, 2023)

Choongseok Park
NC A&T State University (Sep 18–22, 2023)

Ross Parker
Center for Communications Research – Princeton (Sep 18–22, 2023; Oct 16–20, 2023)

Caitlyn Parmelee
Keene State College (Sep 5–Dec 9, 2023)

Alice Patania
University of Vermont (Oct 16–20, 2023)

Cengiz Pehlevan
Harvard University (Sep 6–Dec 8, 2023)

Isabella Penido
Brown University (Sep 6–Dec 8, 2023)

Jose Perea
Northeastern University (Sep 6–Dec 8, 2023)

Giovanni Petri
CENTAI Institute (Oct 16–20, 2023)

Mason Porter
UCLA (Sep 18–22, 2023)

Rebecca R.G.
George Mason University (Oct 29–Nov 3, 2023)

Niloufar Razmi
Brown University (Sep 6–Dec 8, 2023)

Alex Reyes
New York University (Oct 16–20, 2023)

Antonio Rieser
Centro de Investigación en Matemáticas (Sep 5–Dec 9, 2023)

Dmitry Rinberg
New York University (Oct 16–20, 2023)

Dario Ringach
University of California, Los Angeles (Oct 16–20, 2023)

Jason Ritt
Brown University (Sep 6–Dec 8, 2023)

Robert Rosenbaum
University of Notre Dame (Sep 18–22, 2023)

Horacio Rotstein
New Jersey Institute of Technology (Sep 5–Dec 9, 2023)

Jennifer Rozenblit
University of Texas, Austin (Oct 16–20, 2023)

Safaan Sadiq
Pennsylvania State University (Sep 5–Dec 9, 2023)

Nicole Sanderson
Penn State University (Sep 1–Dec 31, 2023)

Hannah Santa Cruz
Penn State (Sep 5–Dec 9, 2023)

Alessandro Sarti
Centre d'Analyse et de Mathématique Sociales (Oct 13–22, 2023)

Cristina Savin
NYU (Oct 30–Nov 3, 2023)

Elad Schneidman
Weizmann Institute of Science (Oct 16–Nov 4, 2023)

Nikolas Schonsheck
University of Delaware (Oct 15–Nov 4, 2023)

David Schwab
City University of New York (Sep 5–Dec 9, 2023)

Daniel Scott
Brown University (Sep 6–Dec 8, 2023)

Thomas Serre
Brown University (Sep 6–Dec 8, 2023)

Tatyana Sharpee
Salk Institute (Oct 16–20, 2023; Oct 30–Nov 3, 2023)

Sage Shaw
University of Colorado Boulder (Sep 18–22, 2023)

Nimrod Sherf
University of Houston (Sep 18–22, 2023)

Farshad Shirani
Georgia Institute of Technology (Sep 18–22, 2023)

Paramjeet Singh
Thapar Institute of Engineering & Technology (Sep 18–22, 2023)

Bernadette Stolz
EPFL (Oct 16–20, 2023)

Thibaud Taillefumier
UT Austin (Oct 30–Nov 3, 2023)

Evelyn Tang
Rice University (Oct 16–20, 2023)

Gaia Tavoni
Washington University in St. Louis (Oct 30–Nov 3, 2023)

Dane Taylor
University of Wyoming (Sep 17–22, 2023; Oct 15–20, 2023)

Peter Thomas
Case Western Reserve University (Sep 5–Dec 9, 2023)

Tobias Timofeyev
University of Vermont (Sep 18–22, 2023; Oct 16–20, 2023)

Nicholas Tolley
Brown University (Sep 6–Dec 8, 2023)

Magnus Tournoy
Flatiron Institute (Sep 17–Oct 21, 2023; Sep 18–22, 2023; Oct 16–20, 2023)

Taro Toyoizumi
RIKEN Center for Brain Science (Oct 30–Nov 3, 2023)

Wilson Truccolo
Brown University (Sep 6–Dec 8, 2023)

Ka Nap Tse
University of Pittsburgh (Sep 10–Dec 9, 2023)

Misha Tsodyks
Weizmann Institute (Oct 30–Nov 3, 2023)

Yuki Tsukada
Keio University (Oct 30–Nov 3, 2023)

Junyi Tu
Salisbury University (Oct 16–20, 2023)

Srinivas Turaga
HHMI Janelia Research Campus (Oct 16–20, 2023)

Melvin Vaupel
Norwegian Institute of Science and Technology (Oct 16–20, 2023)

Jonathan Victor
Weill Cornell Medical College (Oct 16–20, 2023)

Elizabeth Vidaurre
Molloy College (Oct 16–20, 2023)

Bradley Vigil
Texas Tech University (Oct 16–20, 2023)

Zhengchao Wan
University of California San Diego (Oct 16–20, 2023)

Yangyang Wang
Brandeis University (Sep 18–22, 2023)

Bin Wang
University of California, San Diego (Sep 6–Nov 30, 2023)

Xinyi Wang
Michigan State University (Sep 10–Oct 27, 2023)

Qingsong Wang
University of Utah (Oct 15–21, 2023; Oct 29–Nov 4, 2023)

Alexander Williams
Stanford University (Oct 16–20, 2023)

Zhuo-Cheng Xiao
New York University (Sep 18–22, 2023)

Iris Yoon
Wesleyan University (Sep 6–Dec 8, 2023)

Ryeongkyung Yoon
University of Houston (Sep 18–22, 2023)

Kei Yoshida
Brown University (Sep 18–22, 2023; Oct 16–20, 2023)

Kisung You
City University of New York (Oct 16–20, 2023)

Lai-Sang Young
Courant Institute (Sep 18–22, 2023)

Nora Youngs
Colby College (Sep 5–Dec 10, 2023)

Zhuojun Yu
Case Western Reserve University (Sep 5–Dec 9, 2023)

Gexin Yu
College of William and Mary (Sep 17–23, 2023)

Wenhao Zhang
UT Southwestern Medical Center (Sep 18–22, 2023; Oct 16–20, 2023)

Ling Zhou
ICERM (Sep 6–Dec 8, 2023)

Robert Zielinski
Brown University (Sep 6–Dec 8, 2023)
Visit dates listed on the participant list may be tentative and subject to change without notice.
Semester Schedule
Wednesday, September 6, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics

9:00 am–3:00 pm EDT: Check In – 11th Floor Collaborative Space

10:00–11:00 am EDT: Organizer/Directorate Meeting – Meeting – 11th Floor Conference Room

4:00–5:00 pm EDT: Informal Coffee/Tea Welcome – Coffee Break – 11th Floor Collaborative Space
Thursday, September 7, 2023

9:00–9:30 am EDT: ICERM Welcome – Welcome – 11th Floor Lecture Hall

9:30–11:30 am EDT: Organizer Welcome and Introductions – Opening Remarks – 11th Floor Lecture Hall

3:00–3:30 pm EDT: Coffee Break – 11th Floor Collaborative Space
Friday, September 8, 2023

10:00–11:00 am EDT: Grad Student/Postdoc Meeting with ICERM Directorate – Meeting – 11th Floor Lecture Hall

12:00–2:00 pm EDT: Planning Lunch – Working Lunch – 11th Floor Collaborative Space

3:00–3:30 pm EDT: Coffee Break – 11th Floor Collaborative Space
Monday, September 11, 2023

10:00–11:30 am EDT: Journal Club & Neuro 101 Planning – Meeting – 11th Floor Lecture Hall

1:45–1:50 pm EDT: Xavier Gonzalez Introduction – Lightning Talks – 11th Floor Lecture Hall
 Harold Xavier Gonzalez, Stanford University

1:50–1:55 pm EDT: Maxwell Kreider Introduction – Lightning Talks – 11th Floor Lecture Hall
 Maxwell Kreider, Case Western Reserve University

1:55–2:00 pm EDT: Juliana Londono Alvarez Introduction – Lightning Talks – 11th Floor Lecture Hall
 Juliana Londono Alvarez, Penn State

2:00–2:05 pm EDT: Safaan Sadiq Introduction – Lightning Talks – 11th Floor Lecture Hall
 Safaan Sadiq, Pennsylvania State University

2:05–2:10 pm EDT: Hannah Santa Cruz Introduction – Lightning Talks – 11th Floor Lecture Hall
 Hannah Santa Cruz, Penn State

2:10–2:15 pm EDT: Nicholas Tolley Introduction – Lightning Talks – 11th Floor Lecture Hall
 Nicholas Tolley, Brown University

2:15–2:20 pm EDT: Ka Nap Tse Introduction – Lightning Talks – 11th Floor Lecture Hall
 Ka Nap Tse, University of Pittsburgh

2:20–2:25 pm EDT: Bin Wang Introduction – Lightning Talks – 11th Floor Lecture Hall
 Bin Wang, University of California, San Diego

2:25–2:30 pm EDT: Zhuojun Yu Introduction – Lightning Talks – 11th Floor Lecture Hall
 Zhuojun Yu, Case Western Reserve University

2:30–2:35 pm EDT: Robert Zielinski Introduction – Lightning Talks – 11th Floor Lecture Hall
 Robert Zielinski, Brown University

2:35–2:40 pm EDT: Zelong Li Introduction – Lightning Talks – 11th Floor Lecture Hall
 Zelong Li, Penn State University

2:40–2:45 pm EDT: Sameer Kailasa Introduction – Lightning Talks – 11th Floor Lecture Hall
 Sameer Kailasa, University of Michigan Ann Arbor

2:45–2:50 pm EDT: Elena Wang Introduction – Lightning Talks – 11th Floor Lecture Hall
 Xinyi Wang, Michigan State University

3:00–3:30 pm EDT: Coffee Break – 11th Floor Collaborative Space

3:30–3:40 pm EDT: Robyn Brooks Introduction – Lightning Talks – 11th Floor Lecture Hall
 Robyn Brooks, University of Utah

3:40–3:50 pm EDT: Thomas Burns Introduction – Lightning Talks – 11th Floor Lecture Hall
 Thomas Burns, ICERM

3:50–4:00 pm EDT: Caitlin Lienkaemper Introduction – Lightning Talks – 11th Floor Lecture Hall
 Caitlin Lienkaemper, Boston University

4:00–4:10 pm EDT: Vasiliki Liontou Introduction – Lightning Talks – 11th Floor Lecture Hall
 Vasiliki Liontou, ICERM

4:10–4:20 pm EDT: Sijing Liu Introduction – Lightning Talks – 11th Floor Lecture Hall
 Sijing Liu, Brown University

4:20–4:30 pm EDT: Marissa Masden Introduction – Lightning Talks – 11th Floor Lecture Hall
 Marissa Masden, ICERM

4:30–4:40 pm EDT: Nikola Milicevic Introduction – Lightning Talks – 11th Floor Lecture Hall
 Nikola Milicevic, Pennsylvania State University

4:40–4:50 pm EDT: Nicole Sanderson Introduction – Lightning Talks – 11th Floor Lecture Hall
 Nicole Sanderson, Penn State University

4:50–5:00 pm EDT: Ling Zhou Introduction – Lightning Talks – 11th Floor Lecture Hall
 Ling Zhou, ICERM

5:00–6:30 pm EDT: Welcome Reception – Reception – 11th Floor Collaborative Space
Tuesday, September 12, 2023

10:30 am–12:00 pm EDT: TDA 101 – Tutorial – 11th Floor Lecture Hall
 Nicole Sanderson, Penn State University

3:00–3:30 pm EDT: Coffee Break – 11th Floor Collaborative Space
Wednesday, September 13, 2023

10:30 am–12:00 pm EDT: Network Dynamics & Modeling – Tutorial – 11th Floor Lecture Hall
 Horacio Rotstein, New Jersey Institute of Technology

3:00–3:30 pm EDT: Coffee Break – 11th Floor Collaborative Space

3:30–4:30 pm EDT: Closed-loop neuromechanical motor control models (or) On the importance of taking the body into account when modeling neuronal dynamics – 11th Floor Lecture Hall
 Peter Thomas, Case Western Reserve University
Abstract
The central nervous system is strongly coupled to the body. Through peripheral receptors and effectors, it is also coupled to the constantly changing outside world. A chief function of the brain is to close the loop between sensory inputs and motor output. It is through the brain's effectiveness as a control mechanism for the body, embedded in the external world, that it facilitates long-term survival. Thus to understand brain circuits (one might argue) one must also understand their behavioral and ecological context. However, studying closed-loop brain-body interactions is challenging experimentally, conceptually, and mathematically. To make progress, we focus on systems that generate rhythmic behaviors in order to accomplish a quantifiable goal, such as maintaining different forms of homeostasis. Time permitting, I'll mention two such systems: 1. control of feeding motions in the marine mollusk Aplysia californica, and 2. rhythm generation and control in the mammalian breathing system. In both of these systems, we propose that robustness in the face of variable metabolic or external demands arises from the interplay of multiple layers of control involving biomechanics, central neural dynamics, and sensory feedback.
Thursday, September 14, 2023

9:00–10:30 am EDT: Network Dynamics & Modeling (Part 2) – Tutorial – 11th Floor Lecture Hall
 Horacio Rotstein, New Jersey Institute of Technology

12:00–1:30 pm EDT: Open Problems for TLNs (Bring Your Own Lunch) – Problem Session – 11th Floor Lecture Hall
 Session Chairs
 Carina Curto, The Pennsylvania State University
 Katie Morrison, University of Northern Colorado

2:00–2:30 pm EDT: TDA software – Tutorial – 11th Floor Lecture Hall
 Nicole Sanderson, Penn State University

3:00–3:30 pm EDT: Coffee Break / Neuro 101 – Coffee Break – 11th Floor Collaborative Space
Friday, September 15, 2023

9:30–10:30 am EDT: Journal Club – 11th Floor Lecture Hall
 Moderators
 Harold Xavier Gonzalez, Stanford University
 Sameer Kailasa, University of Michigan Ann Arbor

11:00–11:30 am EDT: Mathematical Challenges in Neuronal Network Dynamics – Postdoc/Graduate Student Seminar – 11th Floor Lecture Hall
 Marissa Masden, ICERM
Abstract
I will introduce a straightforward construction of the canonical polyhedral complex given by the activation patterns of a ReLU neural network. Then, I will describe how labeling the vertices of this polyhedral complex with sign vectors is (almost always) enough information to generate a cellular (co)chain complex labeling all of the polyhedral cells, and how this allows us to extract information about the decision boundary of the network.
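The activation-pattern idea behind this construction can be shown with a small numerical sketch. The toy below collects the sign vectors of sampled inputs for a random one-hidden-layer ReLU network; this is only the first ingredient of the polyhedral-complex machinery described in the talk, with all names and sizes chosen for illustration.

```python
import numpy as np

# Toy illustration (not the talk's full construction): label points of the
# input plane by the sign vector of a ReLU layer's pre-activations. Each
# distinct sign vector corresponds to one polyhedral cell of input space.

def sign_vector(W, b, x):
    """Binary activation pattern of the hidden ReLU units at input x."""
    return tuple((W @ x + b > 0).astype(int))

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))  # 3 hidden units on 2-D inputs
b = rng.standard_normal(3)

# Sample the square [-1, 1]^2 and collect the activation patterns that occur;
# 3 hyperplanes cut the plane into at most 7 cells.
pts = rng.uniform(-1.0, 1.0, size=(1000, 2))
cells = {sign_vector(W, b, x) for x in pts}
print(sorted(cells))
```

Enumerating which sign vectors actually occur, and how the corresponding cells glue together, is where the cellular (co)chain complex of the abstract takes over.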

11:30 am–12:00 pm EDT: Detecting danger in gridworlds using Gromov's Link Condition – Postdoc/Graduate Student Seminar – 11th Floor Lecture Hall
 Thomas Burns, ICERM
Abstract
Gridworlds have been long-utilised in AI research, particularly in reinforcement learning, as they provide simple yet scalable models for many real-world applications such as robot navigation, emergent behaviour, and operations research. We initiate a study of gridworlds using the mathematical framework of reconfigurable systems and state complexes due to Abrams, Ghrist & Peterson. State complexes represent all possible configurations of a system as a single geometric space, thus making them conducive to study using geometric, topological, or combinatorial methods. The main contribution of this work is a modification to the original Abrams, Ghrist & Peterson setup which we introduce to capture agent braiding and thereby more naturally represent the topology of gridworlds. With this modification, the state complexes may exhibit geometric defects (failure of Gromov's Link Condition). Serendipitously, we discover these failures occur exactly where undesirable or dangerous states appear in the gridworld. Our results therefore provide a novel method for seeking guaranteed safety limitations in discrete task environments with single or multiple agents and offer useful safety information (in geometric and topological forms) for incorporation in or analysis of machine learning systems. More broadly, our work introduces tools from geometric group theory and combinatorics to the AI community and demonstrates a proof-of-concept for this geometric viewpoint of the task domain through the example of simple gridworld environments.

1:30–3:00 pm EDT: Topology + Neuroscience Working Groups – Group Work – 10th Floor Classroom

3:00–3:30 pm EDT: Coffee Break – 11th Floor Collaborative Space
Monday, September 18, 2023

8:50–9:00 am EDT: Welcome – 11th Floor Lecture Hall
 Session Chair
 Brendan Hassett, ICERM/Brown University

9:00–9:45 am EDT: Neural dynamics on sparse networks – pruning, error correction, and signal reconstruction – 11th Floor Lecture Hall
 Speaker
 Rishidev Chaudhuri, University of California, Davis
 Session Chair
 Carina Curto, The Pennsylvania State University
Abstract
Many networks in the brain are sparsely connected, and the brain eliminates connections during development and learning. This talk will focus on questions related to computation and dynamics on these sparse networks. We will first focus on pruning redundant network connections while preserving dynamics and function. In a recurrent network, determining the importance of a connection between two neurons is a difficult computational problem, depending on the role that both neurons play and on all possible pathways of information flow between them. Noise is ubiquitous in neural systems, and often considered an irritant to be overcome. We suggest that noise could instead play a functional role in pruning, allowing the brain to probe network structure and determine which connections are redundant. We construct a simple, local, unsupervised rule that either strengthens or prunes synapses using only connection weight and the noise-driven covariance of the neighboring neurons. For a subset of linear and rectified-linear networks, we adapt matrix concentration of measure arguments from the field of graph sparsification to prove that this rule preserves the spectrum of the original matrix and hence preserves network dynamics even when the fraction of pruned connections asymptotically approaches 1. The plasticity rule is biologically plausible and may suggest a new role for noise in neural computation. Time permitting, we will then discuss the application of sparse expander graphs to modeling dynamics on neural networks. Expander graphs combine the seemingly contradictory properties of being sparse and well-connected. Among other remarkable properties, they allow efficient communication, credit assignment and error correction with simple greedy dynamical rules. We suggest that these applications might provide new ways of thinking about neural dynamics, and provide several proofs of principle.
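The flavor of a noise-driven local pruning rule can be sketched numerically. The importance score used below (|weight| times the magnitude of the endpoint neurons' covariance) is an illustrative stand-in chosen for simplicity, not the rule derived in the talk, and all parameters are arbitrary.

```python
import numpy as np

# Illustrative sketch: noise-driven activity makes the covariance of each
# synapse's two endpoint neurons locally observable, and that covariance can
# be combined with the weight into a pruning score.

rng = np.random.default_rng(1)
n = 20
W = 0.1 * rng.standard_normal((n, n))
np.fill_diagonal(W, 0.0)

# Simulate noise-driven linear rate dynamics: dx = (-x + W x) dt + noise
dt, steps = 0.01, 20000
x = np.zeros(n)
samples = np.empty((steps, n))
for t in range(steps):
    x = x + dt * (-x + W @ x) + np.sqrt(dt) * rng.standard_normal(n)
    samples[t] = x

C = np.cov(samples.T)              # covariance each synapse can "see" locally
score = np.abs(W) * np.abs(C)      # toy importance proxy, not the talk's rule
threshold = np.quantile(score[W != 0.0], 0.5)
W_pruned = np.where(score > threshold, W, 0.0)

print(np.count_nonzero(W), np.count_nonzero(W_pruned))
```

The talk's result is much stronger: for suitable networks, a rule of this local type provably preserves the spectrum, and hence the dynamics, as the pruned fraction grows.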

10:00–10:15 am EDT: Coffee Break – 11th Floor Collaborative Space

10:15–11:00 am EDT: Local breakdown of the balance of excitation and inhibition accounts for divisive normalization – 11th Floor Lecture Hall
 Speaker
 Yashar Ahmadian, Cambridge University
 Session Chair
 Carina Curto, The Pennsylvania State University
Abstract
Excitatory and inhibitory (E & I) inputs to cortical neurons remain balanced across different conditions. This is captured in the balanced network model in which neural populations dynamically adjust their rates to yield tightly balanced E and I inputs and a state in which all neurons are active at levels observed in cortex. But global tight E-I balance predicts linear stimulus dependence for population responses, and does not account for systematic cortical response nonlinearities such as divisive normalization, a canonical brain computation. However, when necessary connectivity conditions for global balance fail, states arise in which a subset of neurons are inhibition dominated and inactive. Here, we show analytically that the emergence of such localized balance states robustly leads to normalization, including sublinear integration and winner-take-all behavior. An alternative model that exhibits normalization is the Stabilized Supralinear Network (SSN), in which the E-I balance is generically loose, but becomes tight asymptotically for strong inputs. However, an understanding of the causal relationship between E-I balance and normalization in the SSN is lacking. Here we show that when tight E-I balance in the asymptotic, strongly driven regime of SSN is not global, the network does not exhibit normalization at any input strength; thus, in SSN too, significant normalization requires the breakdown of global balance. In summary, we causally and quantitatively connect a fundamental feature of cortical dynamics with a canonical brain computation.
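The linearity claimed for globally balanced responses follows from a standard textbook scaling argument, sketched here in generic notation rather than the talk's specific model. Consider rate dynamics with coupling and input scaled by a large factor $K$:

```latex
\tau \frac{d\mathbf{r}}{dt} \;=\; -\mathbf{r} \;+\; f\!\left( \sqrt{K}\, W \mathbf{r} + \sqrt{K}\, \mathbf{h} \right).
```

For the rates to remain finite as $K \to \infty$, the net input must stay $O(1)$, forcing $W\mathbf{r} + \mathbf{h} \to 0$ and hence $\mathbf{r} \to -W^{-1}\mathbf{h}$: globally balanced rates are linear in the external drive $\mathbf{h}$. This is why systematic nonlinearities such as divisive normalization require the local breakdown of balance described in the abstract.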

11:15–11:45 am EDT: Open Problems Discussion – Problem Session – 11th Floor Lecture Hall
 Session Chairs
 Carina Curto, The Pennsylvania State University
 Katie Morrison, University of Northern Colorado

11:45 am–1:30 pm EDT: Lunch/Free Time

1:30–2:15 pm EDT: Discovering dynamical patterns of activity from single-trial neural data – 11th Floor Lecture Hall
 Speaker
 Rodica Curtu, The University of Iowa
 Session Chair
 Carina Curto, The Pennsylvania State University
Abstract
In this talk I will discuss a data-driven method that leverages time-delayed coordinates, diffusion maps, and dynamic mode decomposition, to identify neural features in large scale brain recordings that correlate with subject-reported perception. The method captures the dynamics of perception at multiple timescales and distinguishes attributes of neural encoding of the stimulus from those encoding the perceptual states. Our analysis reveals a set of latent variables that exhibit alternating dynamics along a low-dimensional manifold, like trajectories of attractor-based models. I will conclude by proposing a phase-amplitude-coupling-based model that illustrates the dynamics of data.

2:30–2:35 pm EDT: Synaptic mechanisms for resisting distractors in neural fields – Lightning Talks – 11th Floor Lecture Hall
 Speaker
 Heather Cihak, University of Colorado Boulder
 Session Chair
 Carina Curto, The Pennsylvania State University
Abstract
Persistent neural activity has been observed in the non-human primate cortex when making delayed estimates. Organizing activity patterns according to cell feature preference reveals "bumps" that represent analog variables during the delay. Continuum neural field models support bump attractors whose stochastic dynamics can be linked to response statistics (estimate bias and error). Models often ignore the distinct dynamics of bumps in both excitatory/inhibitory population activity, but recent neural and behavioral recordings suggest both play a role in delayed estimate codes and responses. In past work, we developed new methods in asymptotic and multiscale analyses for stochastic and spatiotemporal systems to understand how network architecture determines bump dynamics in networks with distinct E/I populations and short term plasticity. The inhibitory bump dynamics as well as facilitation and diffusion impact the stability and wandering motion of the excitatory bump. Our current work moves beyond studying ensemble statistics like variance to examine potential mechanisms underlying the robustness of working memory to distractors (irrelevant information) presented during the maintenance period, wherein the relative timescales of the E/I populations, synaptic vs activity dynamics, as well as short term plasticity may play an important role.
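A generic two-population neural field of the kind referred to here (written in standard textbook form, not the talk's specific model) is:

```latex
\tau_E\, \partial_t u_E(x,t) = -u_E + w_{EE} \ast f(u_E) - w_{EI} \ast f(u_I) + \varepsilon\, \xi_E(x,t),
\qquad
\tau_I\, \partial_t u_I(x,t) = -u_I + w_{IE} \ast f(u_E) - w_{II} \ast f(u_I) + \varepsilon\, \xi_I(x,t),
```

where $\ast$ denotes spatial convolution against distance-dependent kernels $w_{ab}$, $f$ is a sigmoidal firing-rate function, and $\xi_{E,I}$ are spatiotemporal noise terms. Bump solutions of such systems wander diffusively under noise, and the timescale ratio $\tau_I/\tau_E$ together with short-term plasticity shapes both that wandering and the response to transient distractor inputs.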

2:35–2:40 pm EDT: Convex optimization of recurrent neural networks for rapid inference of neural dynamics – Lightning Talks – 11th Floor Lecture Hall
 Speaker
 Fatih Dinc, Stanford University
 Session Chair
 Carina Curto, The Pennsylvania State University
Abstract
Advances in optical and electrophysiological recording technologies have made it possible to record the dynamics of thousands of neurons, opening up new possibilities for interpreting and controlling large neural populations. A promising way to extract computational principles from these large datasets is to train data-constrained recurrent neural networks (dRNNs). However, existing training algorithms for dRNNs are inefficient and have limited scalability, making it a challenge to analyze large neural recordings even in offline scenarios. To address these issues, we introduce a training method termed Convex Optimization of Recurrent Neural Networks (CORNN). In studies of simulated recordings of hundreds of cells, CORNN attained training speeds ~100-fold faster than traditional optimization approaches while maintaining or enhancing modeling accuracy. We further validated CORNN on simulations with thousands of cells that performed simple computations such as those of a 3-bit flip-flop or the execution of a timed response. Finally, we showed that CORNN can robustly reproduce network dynamics and underlying attractor structures despite mismatches between generator and inference models, severe subsampling of observed neurons, or mismatches in neural timescales. Overall, by training dRNNs with millions of parameters in sub-minute processing times on a standard computer, CORNN constitutes a first step towards real-time network reproduction constrained on large-scale neural recordings and a powerful computational tool for advancing the understanding of neural computation. My talk focuses on how dRNNs enabled by CORNN can help us reverse engineer the neural code in the mammalian brain.
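The word "convex" can be unpacked at a cartoon level: when all units' activities are observed, each unit's incoming weights can be fit by an independent convex regression. The sketch below uses plain least squares against a tanh teacher network; it is a conceptual stand-in for the idea, not the CORNN algorithm itself.

```python
import numpy as np

# Cartoon of convex RNN fitting: with fully observed activity, inverting the
# known nonlinearity turns weight recovery into ordinary least squares,
# solvable independently for each unit's row of incoming weights.

rng = np.random.default_rng(2)
n, T = 30, 2000
W_true = rng.standard_normal((n, n)) / np.sqrt(n)   # hidden "teacher" weights

X = rng.standard_normal((T, n))        # observed population states at time t
Y = np.tanh(X @ W_true.T)              # observed rates at time t + 1

# Convex step: minimize || X W^T - arctanh(Y) ||^2 over W
W_fit = np.linalg.lstsq(X, np.arctanh(Y), rcond=None)[0].T

err = np.linalg.norm(W_fit - W_true) / np.linalg.norm(W_true)
print(err)
```

In this noiseless, fully observed setting the recovery is essentially exact; the contribution of CORNN lies in making this kind of fit fast and robust under noise, subsampling, and model mismatch.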

2:40–2:45 pm EDT: Recall tempo of Hebbian sequences depends on the interplay of Hebbian kernel with tutor signal timing – Lightning Talks – 11th Floor Lecture Hall
 Speaker
 Matthew Farrell, Harvard University
 Session Chair
 Carina Curto, The Pennsylvania State University
Abstract
Understanding how neural circuits generate sequential activity is a longstanding challenge. While foundational theoretical models have shown how sequences can be stored as memories with Hebbian plasticity rules, these models considered only a narrow range of Hebbian rules. In this talk I introduce a model for arbitrary Hebbian plasticity rules, capturing the diversity of spike-timing-dependent synaptic plasticity seen in experiments, and show how the choice of these rules and of neural activity patterns influences sequence memory formation and retrieval. In particular, I will present a general theory that predicts the speed of sequence replay. This theory lays a foundation for explaining how cortical tutor signals might give rise to motor actions that eventually become "automatic". This theory also captures the impact of changing the speed of the tutor signal. Beyond shedding light on biological circuits, this theory has relevance in artificial intelligence by laying a foundation for frameworks whereby slow and computationally expensive deliberation can be stored as memories and eventually replaced by inexpensive recall.
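The classic special case behind such models, a temporally asymmetric Hebbian rule storing a pattern sequence, can be sketched in a few lines; the talk's theory generalizes this to arbitrary Hebbian kernels and predicts the replay tempo, which this toy does not model.

```python
import numpy as np

# Toy sequence memory: the asymmetric Hebbian rule W += xi_{k+1} xi_k^T / n
# stores transitions between patterns, and synchronous sign updates replay
# the stored sequence from its first pattern.

rng = np.random.default_rng(3)
n, P = 200, 5
xi = rng.choice([-1.0, 1.0], size=(P, n))        # random binary patterns

W = sum(np.outer(xi[k + 1], xi[k]) for k in range(P - 1)) / n

x = xi[0].copy()
recalled = [0]
for _ in range(P - 1):
    x = np.sign(W @ x)                           # one synchronous update
    overlaps = xi @ x / n                        # match against all patterns
    recalled.append(int(np.argmax(overlaps)))
print(recalled)                                  # indices of recalled patterns
```

With nearly orthogonal random patterns (here n = 200 and P = 5), each update lands on the next stored pattern, so the network steps through the sequence in order.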

2:45 - 2:50 pm EDT | Modeling human temporal EEG responses subject to VR visual stimuli | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Richard Foster, Virginia Commonwealth University
 Session Chair
 Carina Curto, The Pennsylvania State University
Abstract
When subject to visual stimuli flashing at a constant temporal frequency, it is well-known that the EEG response has a sharp peak in the power spectrum at the driving frequency. But the EEG response to stimuli flashing at random frequencies, and the corresponding biophysical mechanisms, are largely unknown. We present a phenomenological model framework in hopes of eventually capturing these EEG responses and unveiling the biophysical mechanisms. Based on observed heterogeneous temporal frequency selectivity curves in V1 cells (Hawken et al. '96, Camillo et al. '20, Priebe et al. '06), we endow individual units with these response properties. Preliminary simulation results show that particular temporal frequency selectivity curves can be more indicative of the EEG response. Future directions include the construction of network architecture with interacting units to faithfully model the EEG response.

2:50 - 2:55 pm EDT | RNNs of RNNs: Recursive Construction of Stable Assemblies of Recurrent Neural Networks | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Leo Kozachkov, Massachusetts Institute of Technology
 Session Chair
 Carina Curto, The Pennsylvania State University
Abstract
Recurrent neural networks (RNNs) are widely used throughout neuroscience as models of local neural activity. Many properties of single RNNs are well characterized theoretically, but experimental neuroscience has moved in the direction of studying multiple interacting areas, and RNN theory needs to be likewise extended. We take a constructive approach towards this problem, leveraging tools from nonlinear control theory and machine learning to characterize when combinations of stable RNNs will themselves be stable. Importantly, we derive conditions which allow for massive feedback connections between interacting RNNs. We parameterize these conditions for easy optimization using gradient-based techniques, and show that stability-constrained "networks of networks" can perform well on challenging sequential-processing benchmark tasks. Altogether, our results provide a principled approach towards understanding distributed, modular function in the brain.

3:15 - 3:45 pm EDT | Coffee Break | 11th Floor Collaborative Space

3:45 - 4:30 pm EDT | Universal Properties of Strongly Coupled Recurrent Networks | 11th Floor Lecture Hall
 Speaker
 Robert Rosenbaum, University of Notre Dame
 Session Chair
 Carina Curto, The Pennsylvania State University
Abstract
Balanced excitation and inhibition is widely observed in cortex. How does this balance shape neural computations and stimulus representations? This question is often studied using computational models of neuronal networks in a dynamically balanced state. But balanced network models predict a linear relationship between stimuli and population responses. So how do cortical circuits implement nonlinear representations and computations? We show that every balanced network architecture admits stimuli that break the balanced state and these breaks in balance push the network into a "semi-balanced state" characterized by excess inhibition to some neurons, but an absence of excess excitation. The semi-balanced state produces nonlinear stimulus representations and nonlinear computations, is unavoidable in networks driven by multiple stimuli, is consistent with cortical recordings, and has a direct mathematical relationship to artificial neural networks.
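The semi-balanced state can be illustrated in a two-neuron threshold-linear rate model (a hypothetical minimal example, not the networks analyzed in the talk): a stimulus that breaks balance drives one neuron below threshold, leaving it with excess inhibition, while no neuron receives excess excitation.

```python
import numpy as np

# Threshold-linear rate network dr/dt = -r + [W r + s]_+ with mutual
# inhibition; the stimulus s breaks balance and silences the second neuron.
W = np.array([[0.0, -2.0], [-2.0, 0.0]])
s = np.array([2.0, 1.0])

r = np.zeros(2)
dt = 0.1
for _ in range(500):
    r = r + dt * (np.maximum(W @ r + s, 0.0) - r)

net_input = W @ r + s
# Fixed point: r = (2, 0).  The silenced neuron has net_input < 0
# (excess inhibition); the active neuron's net input equals its rate
# (no excess excitation).
```

The rectification [.]_+ is what makes the stimulus-to-response map nonlinear even though the dynamics on each support are linear.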

4:30 - 6:00 pm EDT | Reception | 11th Floor Collaborative Space
Tuesday, September 19, 2023

9:00 - 9:45 am EDT | Multilayer Networks in Neuroscience | 11th Floor Lecture Hall
 Speaker
 Mason Porter, UCLA
 Session Chair
 Brent Doiron, University of Chicago
Abstract
I will discuss multilayer networks in neuroscience. I will introduce the idea of multilayer networks, discuss some of their uses in neuroscience, and present some interesting challenges.

10:00 - 10:15 am EDT | Coffee Break | 11th Floor Collaborative Space

10:15 - 11:00 am EDT | State modulation in spatial networks of multiple interneuron subtypes | 11th Floor Lecture Hall
 Speaker
 Chengcheng Huang, University of Pittsburgh
 Session Chair
 Brent Doiron, University of Chicago
Abstract
Neuronal responses to sensory stimuli can be strongly modulated by an animal's brain state. Three distinct subtypes of inhibitory interneurons, parvalbumin (PV)-, somatostatin (SOM)-, and vasoactive intestinal peptide (VIP)-expressing cells, have been identified as key players in flexibly modulating network activity. The three interneuron populations have specialized local microcircuit motifs and are targeted differentially by neuromodulators and top-down inputs from higher-order cortical areas. In this work, we systematically study the function of each interneuron cell type in modulating network dynamics in a spatially ordered spiking neuron network. We analyze the changes in firing rates and network synchrony as we apply static current to each cell population. We find that the modulation pattern produced by activating E or PV cells is distinct from that produced by activating SOM or VIP cells. In particular, we identify SOM cells as the main driver of network synchrony.

11:15 - 11:45 am EDT | Open Problems Discussion | Problem Session - 11th Floor Lecture Hall
 Session Chairs
 Brent Doiron, University of Chicago
 Zachary Kilpatrick, University of Colorado Boulder

11:50 am - 12:00 pm EDT | Group Photo (Immediately After Talk) | 11th Floor Lecture Hall

12:00 - 1:30 pm EDT | Working Lunch | 11th Floor Collaborative Space

1:30 - 2:15 pm EDT | Plasticity in balanced neuronal networks | 11th Floor Lecture Hall
 Speaker
 Kresimir Josic, University of Houston
 Session Chair
 Brent Doiron, University of Chicago
Abstract
I will first describe how to extend the theory of balanced networks to account for synaptic plasticity. This theory can be used to show when a plastic network will maintain balance, and when it will be driven into an unbalanced state. I will next discuss how this approach provides evidence for a novel form of rapid compensatory inhibitory plasticity, based on optogenetic activation of excitatory neurons in primate visual cortex (area V1). The theory explains how such activation induces a population-wide dynamic reduction in the strength of neuronal interactions over the timescale of minutes during the awake state, but not during rest. I will shift gears in the final part of the talk, and discuss how community detection algorithms can help uncover the large-scale organization of neuronal networks from connectome data, using the Drosophila hemibrain dataset as an example.

2:35 - 2:40 pm EDT | Q-Phase reduction of multidimensional stochastic Ornstein-Uhlenbeck process networks | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Maxwell Kreider, Case Western Reserve University
 Session Chair
 Brent Doiron, University of Chicago
Abstract
Phase reduction is an effective tool to study the network dynamics of deterministic limit-cycle oscillators. The recent introduction of stochastic phase concepts allows us to extend these tools to stochastic oscillators; of particular utility is the asymptotic stochastic phase, derived from the eigenfunction decomposition of the system's probability density. Here, we study networks of coupled oscillatory two-dimensional Ornstein-Uhlenbeck processes (OUPs) with complex eigenvalues. We characterize the system dynamics by providing an exact expression for the asymptotic stochastic phase for OUP networks of any dimension and arbitrary coupling structure. Furthermore, we introduce an order parameter quantifying the synchrony of networks of stochastic oscillators, and apply it to our OUP model. We argue that the OUP network provides a new, analytically tractable approach to the analysis of large-scale electrophysiological recordings.
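For a rotationally symmetric two-dimensional OUP, the asymptotic stochastic phase reduces to the polar angle, which advances at the rotation rate omega on average. A minimal Euler-Maruyama sketch (the parameter values and the simple unwrapping scheme are my own illustrative choices, not taken from the talk):

```python
import numpy as np

rng = np.random.default_rng(3)

# 2D Ornstein-Uhlenbeck process dX = A X dt + sigma dW, where A has
# complex eigenvalues -mu +/- i*omega (rotationally symmetric case).
mu, omega = 0.5, 1.0
A = np.array([[-mu, -omega], [omega, -mu]])

def rotation(sigma, T=10.0, dt=1e-3):
    """Total unwrapped polar-angle advance over time T."""
    x = np.array([1.0, 0.0])
    prev, total = 0.0, 0.0
    for _ in range(int(T / dt)):
        x = x + dt * (A @ x) + sigma * np.sqrt(dt) * rng.normal(size=2)
        ang = np.arctan2(x[1], x[0])
        d = ang - prev
        total += (d + np.pi) % (2 * np.pi) - np.pi   # wrap increment to (-pi, pi]
        prev = ang
    return total

# Noiseless limit: the phase advances by omega * T exactly.
det = rotation(0.0)
```

With noise the accumulated angle fluctuates around omega * T; having a well-defined mean rotation rate even for noisy trajectories is what makes the asymptotic stochastic phase useful.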

2:40 - 2:45 pm EDT | Feedback Controllability as a Normative Theory of Neural Dynamics | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Ankit Kumar, UC Berkeley
 Session Chair
 Brent Doiron, University of Chicago
Abstract
Brain computations emerge from the collective dynamics of distributed neural populations. Behaviors including reaching and speech are explained by principles of optimal feedback control, yet if and how this normative description shapes neural population dynamics is unknown. We created dimensionality reduction methods that identify subspaces of dynamics that are most feedforward controllable (FFC) vs. feedback controllable (FBC). We show that FBC and FFC subspaces diverge for dynamics generated by non-normal connectivity. In neural recordings from monkey M1 and S1 during reaching, FBC subspaces are better decoders of reach velocity, particularly during reach acceleration, and FBC provides a first-principles account of the observation of rotational dynamics. Overall, our results demonstrate that feedback controllability is a novel, normative theory of neural population dynamics, and reveal how the structure of high-dimensional dynamical systems shapes their ability to be controlled.

2:45 - 2:50 pm EDT | Adaptive whitening with fast gain modulation and slow synaptic plasticity | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 David Lipshutz, Flatiron Institute
 Session Chair
 Brent Doiron, University of Chicago
Abstract
Neurons in early sensory areas rapidly adapt to changing sensory statistics, both by normalizing the variance of their individual responses and by reducing correlations between their responses. Together, these transformations may be viewed as an adaptive form of statistical whitening. In this talk, I will present a normative multi-timescale mechanistic model of adaptive whitening with complementary computational roles for gain modulation and synaptic plasticity. Gains are modified on a fast timescale to adapt to the current statistical context, whereas synapses are modified on a slow timescale to learn structural properties of the input statistics that are invariant across contexts.
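The fast-timescale half of this picture can be sketched in a few lines: a per-channel gain is nudged multiplicatively until each output has unit variance (a hypothetical minimal version covering only gain modulation; the slow synaptic component that learns a decorrelating basis is omitted):

```python
import numpy as np

# Fast gain modulation: multiplicative gain g_i on each channel is nudged
# until the output variance E[y_i^2] reaches 1 (variance normalization).
X = np.array([[2.0, 0.5], [-2.0, -0.5]])   # two channels, variances 4 and 0.25
g = np.ones(2)
eta = 0.01
for _ in range(5000):
    for x in X:                       # stream the "stimulus ensemble"
        y = g * x
        g = g + eta * g * (1.0 - y * y)   # grow if too quiet, shrink if too loud

# Gains converge to the inverse standard deviations (here 0.5 and 2.0),
# so the outputs have whitened marginals.
```

If the input statistics changed, the gains would re-converge quickly, which is the role the abstract assigns to the fast timescale; learning context-invariant structure is left to the slow synapses.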

2:50 - 2:55 pm EDT | The combinatorial code and the graph rules of Dale networks | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Nikola Milicevic, Pennsylvania State University
 Session Chair
 Brent Doiron, University of Chicago
Abstract
We describe the combinatorics of equilibria and steady states of neurons in threshold-linear networks that satisfy Dale's law. The combinatorial code of a Dale network is characterized in terms of two conditions: (i) a condition on the network connectivity graph, and (ii) a spectral condition on the synaptic matrix. We find that in the weak coupling regime the combinatorial code depends only on the connectivity graph, and not on the particulars of the synaptic strengths. Moreover, we prove that the combinatorial code of a weakly coupled network is a sublattice, and we provide a learning rule for encoding a sublattice in a weakly coupled excitatory network. In the strong coupling regime we prove that the combinatorial code of a generic Dale network is intersection-complete and is therefore a convex code, as is common in some sensory systems in the brain.
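For small networks, the combinatorial code of equilibria can simply be enumerated: a support sigma is admissible when the restricted linear system has a positive solution and every off-support neuron receives non-positive net input. A brute-force sketch for a strongly inhibitory three-neuron toy network (my own illustrative example, not the networks or conditions from the talk):

```python
import itertools
import numpy as np

def equilibrium_supports(W, b):
    """Supports sigma admitting a nonnegative equilibrium of
    dr/dt = -r + [W r + b]_+.  On sigma: (I - W_sigma) r_sigma = b_sigma
    with r_sigma > 0; off sigma: net input W r + b must be <= 0."""
    n = len(b)
    found = []
    for k in range(1, n + 1):
        for sigma in itertools.combinations(range(n), k):
            idx = list(sigma)
            A = np.eye(k) - W[np.ix_(idx, idx)]
            try:
                r_sig = np.linalg.solve(A, b[idx])
            except np.linalg.LinAlgError:
                continue
            if np.any(r_sig <= 0):
                continue
            r = np.zeros(n)
            r[idx] = r_sig
            if all(W[i] @ r + b[i] <= 1e-12 for i in range(n) if i not in sigma):
                found.append(sigma)
    return found

# Strongly inhibitory toy network (winner-take-all motif).
W = np.array([[0.0, -2.0, -2.0], [-2.0, 0.0, -2.0], [-2.0, -2.0, 0.0]])
b = np.ones(3)
supports = equilibrium_supports(W, b)
```

In this toy example all seven nonempty supports carry equilibria, but only the singleton supports are stable (the larger supports have an unstable direction), so the winner-take-all code consists of the singletons.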

2:55 - 3:00 pm EDT | Decomposed Linear Dynamical Systems for Studying Inter- and Intra-Region Neural Dynamics | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Noga Mudrik, The Johns Hopkins University
 Session Chair
 Brent Doiron, University of Chicago
Abstract
Understanding the intricate relationship between recorded neural activity and behavior is a pivotal pursuit in neuroscience. However, existing models frequently overlook the nonlinear and nonstationary behavior evident in neural data, opting instead to focus on simplified projections or overt dynamical systems. We introduce a Decomposed Linear Dynamical Systems (dLDS) approach to capture these complex dynamics by representing them as a sparse time-varying linear combination of interpretable linear dynamical components. dLDS is trained using an expectation-maximization procedure in which the latent dynamical components are iteratively inferred using dictionary learning. This approach enables the identification of overlapping circuits, while the sparsity applied during training maintains the model's interpretability. We demonstrate that dLDS successfully recovers the underlying linear components and their time-varying coefficients in both synthetic and neural data examples, and show that it can learn efficient representations of complex data. By leveraging the rich data from the International Brain Laboratory's Brain Wide Map dataset, we extend dLDS to model communication among ensembles within and between brain regions, drawing insights from multiple non-simultaneous recording sessions.

3:00 - 3:05 pm EDT | Characterizing Neural Spike Train Data for Chemosensory Coding Analysis | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Audrey Nash, Florida State University
 Session Chair
 Brent Doiron, University of Chicago
Abstract
In this presentation, we explore neural spike train data to discern a neuron's ability to distinguish between various stimuli. By examining both the spiking rate and the temporal distribution of spikes (phase of spiking), we aim to unravel the intricacies of chemosensory coding in neurons. We will provide a concise overview of our methodology for identifying chemosensory coding neurons and delve into the application of metric-based analysis techniques in conjunction with optimal transport methods. This combined approach allows us to uncover emerging patterns in tastant coding across multiple neurons and quantify the respective impacts of spiking rate and temporal phase in taste decoding.

3:05 - 3:10 pm EDT | Infinite-dimensional Dynamics in a Model of EEG Activity in the Neocortex | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Farshad Shirani, Georgia Institute of Technology
 Session Chair
 Brent Doiron, University of Chicago
Abstract
I present key analytical and computational results on a mean field model of electroencephalographic activity in the neocortex, which is composed of a system of coupled ODEs and PDEs. I show that for some sets of biophysical parameter values the equilibrium set of the model is not compact, which further implies that the global attracting set of the model is infinitedimensional. I also present computational results on generation and spatial propagation of transient gamma oscillations in the solutions of the model. The results identify important challenges in interpreting and modelling the temporal pattern of EEG recordings, caused by low spatial resolution of EEG electrodes.

3:10 - 3:15 pm EDT | What is the optimal topology of setwise connections for a memory network? | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Thomas Burns, ICERM
 Session Chair
 Brent Doiron, University of Chicago
Abstract
Simplicial Hopfield networks (Burns & Fukai, 2023) explicitly model setwise connections between neurons based on a simplicial complex to store memory patterns. Randomly diluted networks, where only a randomly chosen fraction of the simplices, i.e., setwise connections, have nonzero weights, outperform traditional associative memory networks with only pairwise connections between neurons but the same total number of nonzero-weighted connections. However, could there be a cleverer choice of which connections to weight, given known memory patterns we want to store? I suspect so, and in this talk I will formally pose the problem for others to consider.
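A minimal version of the object in question: a Hopfield network augmented with a randomly diluted set of triplet (setwise) couplings, under which a stored pattern remains a fixed point (a toy sketch with one pattern and Hebbian-style simplex weights; the open problem posed in the talk is whether a cleverer, non-random choice of simplices does better):

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# One stored pattern p in {-1,+1}^N, pairwise Hebbian weights, plus a
# randomly diluted set of triplet couplings with weight p_i p_j p_k.
N = 12
p = rng.choice([-1.0, 1.0], N)
J = np.outer(p, p)
np.fill_diagonal(J, 0.0)
triplets = [t for t in itertools.combinations(range(N), 3) if rng.random() < 0.2]

def local_field(x, i):
    """Input to unit i from pairwise and setwise connections."""
    h = J[i] @ x
    for (a, b, c) in triplets:
        if i in (a, b, c):
            j, k = [v for v in (a, b, c) if v != i]
            h += p[a] * p[b] * p[c] * x[j] * x[k]
    return h

# The stored pattern is a fixed point of asynchronous sign updates:
# every unit's field agrees in sign with its current state.
x = p.copy()
stable = all(np.sign(local_field(x, i)) == x[i] for i in range(N))
```

At the stored pattern each triplet containing unit i contributes p_i to its field, so dilution only rescales the (already aligned) signal; the interesting question is how the choice of retained simplices shapes capacity and basins.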

3:30 - 4:00 pm EDT | Coffee Break | 11th Floor Collaborative Space

4:00 - 4:45 pm EDT | Reliability and robustness of oscillations in some slow-fast chaotic systems | 11th Floor Lecture Hall
 Speaker
 Jonathan Jaquette, New Jersey Institute of Technology
 Session Chair
 Brent Doiron, University of Chicago
Abstract
A variety of nonlinear models of biological systems generate complex chaotic behaviors that contrast with biological homeostasis, the observation that many biological systems prove remarkably robust in the face of changing external or internal conditions. Motivated by the subtle dynamics of cell activity in a crustacean central pattern generator, we propose a refinement of the notion of chaos that reconciles homeostasis and chaos in systems with multiple timescales. We show that systems displaying relaxation cycles going through chaotic attractors generate chaotic dynamics that are regular at macroscopic timescales, thus consistent with physiological function. We further show that this relative regularity may break down through global bifurcations of chaotic attractors such as crises, beyond which the system may generate erratic activity also at slow timescales. We analyze in detail these phenomena in the chaotic Rulkov map, a classical neuron model known to exhibit a variety of chaotic spike patterns. This leads us to propose that the passage of slow relaxation cycles through a chaotic attractor crisis is a robust, general mechanism for the transition between such dynamics, and we validate this numerically in other models.
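The Rulkov map mentioned in the abstract is easy to simulate: a fast spiking variable driven through a slow relaxation cycle (the standard two-variable form; the specific parameter values below are illustrative choices, not those from the talk):

```python
import numpy as np

# Rulkov map: a two-variable map neuron with a fast chaotic spiking
# variable x and a slow recovery variable y.
#   x_{n+1} = alpha / (1 + x_n^2) + y_n
#   y_{n+1} = y_n - mu * (x_n - sigma)
alpha, mu, sigma = 4.5, 0.001, -0.5
x, y = -1.0, -3.9
xs = np.empty(100_000)
for n in range(xs.size):
    x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
    xs[n] = x
```

The slow variable drags the fast subsystem back and forth across its chaotic regime, producing bursts that are irregular at the spike scale yet regular at the slow, macroscopic scale, which is the kind of reconciliation of chaos and homeostasis the abstract describes.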

5:30 - 7:00 pm EDT | Networking event with Carney Institute for Brain Science | External Event - Carney Institute for Brain Science, 164 Angell St, Providence, RI 02906
Wednesday, September 20, 2023

9:00 - 9:45 am EDT | Modeling in neuroscience: the challenges of biological realism and computability | 11th Floor Lecture Hall
 Speaker
 LaiSang Young, Courant Institute
 Session Chair
 Katie Morrison, University of Northern Colorado
Abstract
Biologically realistic models of the brain have the potential to offer insight into neural mechanisms, and they have predictive power, the ultimate goal of biological modeling. These benefits, however, come at considerable costs: network models that involve hundreds of thousands of neurons and many (unknown) parameters are unwieldy to build and to test, let alone to simulate and to analyze. Reduced models have obvious advantages, but the farther removed from biology a model is, the harder it is to draw meaningful inferences. In this talk, I propose a modeling strategy that aspires to be both realistic and computable. Two crucial ingredients are that (i) we track neuronal dynamics on two spatial scales, with coarse-grained dynamics informed by local activity, and (ii) we compute a family of potential local responses in advance, eliminating the need to perform similar computations at each spatial location in each update. I will illustrate this computational strategy using a model of the monkey visual cortex, which is very similar to that of humans.

10:00 - 10:15 am EDT | Coffee Break | 11th Floor Collaborative Space

10:15 - 11:00 am EDT | Uncertainty Quantification for Neurobiological Networks | 11th Floor Lecture Hall
 Speaker
 Daniele Avitabile, Vrije Universiteit Amsterdam
 Session Chair
 Katie Morrison, University of Northern Colorado
Abstract
This talk presents a framework for forward uncertainty quantification problems in spatially extended neurobiological networks. We will consider networks in which the cortex is represented as a continuum domain, and local neuronal activity evolves according to an integro-differential equation, collecting inputs nonlocally, from the whole cortex. These models are sometimes referred to as neural field equations. Large-scale brain simulations of such models are currently performed heuristically, and the numerical analysis of these problems is largely unexplored. In the first part of the talk I will summarise recent developments in the rigorous numerical analysis of projection schemes for deterministic neural fields, which sets the foundation for developing Finite-Element and Spectral schemes for large-scale problems. The second part of the talk will discuss the case of networks in the presence of uncertainties modelled with random data, in particular: random synaptic connections, external stimuli, neuronal firing rates, and initial conditions. Such problems give rise to random solutions, whose mean, variance, or other quantities of interest have to be estimated using numerical simulations. This so-called forward uncertainty quantification problem is challenging because it couples spatially nonlocal, nonlinear problems to high-dimensional random data. I will present a family of schemes that couple a spatial projector for the spatial discretisation to stochastic collocation for the random data. We will analyse the time-dependent problem with random data and the schemes from a functional-analytic viewpoint, and show that the proposed methods can achieve spectral accuracy, provided the random data is sufficiently regular. We will showcase the schemes using several examples. Acknowledgements: This talk presents joint work with Francesca Cavallini (VU Amsterdam), Svetlana Dubinkina (VU Amsterdam), and Gabriel Lord (Radboud University).

11:15 am - 12:00 pm EDT | Open Problems Discussion | Problem Session - 11th Floor Lecture Hall
 Session Chairs
 Konstantin Mischaikow, Rutgers University
 Katie Morrison, University of Northern Colorado

12:00 - 2:00 pm EDT | Lunch/Free Time

2:00 - 2:45 pm EDT | Dynamics of stochastic integrate-and-fire networks | 11th Floor Lecture Hall
 Speaker
 Gabe Ocker, Boston University
 Session Chair
 Katie Morrison, University of Northern Colorado

3:00 - 3:05 pm EDT | A Step Towards Uncovering The Structure of Multistable Neural Networks | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Magnus Tournoy, Flatiron Institute
 Session Chair
 Katie Morrison, University of Northern Colorado
Abstract
With the experimental advances in the recording of large populations of neurons, theorists are in the humbling position of making sense of a staggering amount of data. One question that will come more within reach is how network structure relates to function. But going beyond explanatory models and becoming more predictive will require a fundamental approach. In this talk we'll take the view of a physicist and formulate exact results within a simple, yet general, toy model called Glass networks. Named after their originator Leon Glass, these networks are the infinite-gain limit of well-known circuit models like continuous-time Hopfield networks. We'll show that, within this limit, stability conditions reduce to semipositivity constraints on the synaptic weight matrix. With a clear link between structure and function in hand, the consequences of multistability for the network architecture can be explored. One finding is the factorization of the weight matrix in terms of nonnegative matrices. Interestingly, this factorization completely identifies the existence of stable states. Another result is the reduction of allowed sign patterns for the connections. One consequence is lower bounds on the number of excitatory and inhibitory connections. Finally, we will discuss the special case of "sign stability", where stability is guaranteed by the topology of the network. Derivations of these results will be supplemented by a number of examples.

3:05 - 3:10 pm EDT | Clustering and Distribution of the Adaptation Variable | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Ka Nap Tse, University of Pittsburgh
 Session Chair
 Katie Morrison, University of Northern Colorado
Abstract
Brain waves are an important phenomenon in neuroscience. Besides spiking synchronously, excitatory cells with adaptation can spike in clusters, producing rhythmic network activity. In previous work, the adaptation variable is usually eliminated before further analysis. In this talk, a way to study this clustering behaviour through the evolution of the distribution of the adaptation variable will be discussed. We then transform the distribution to the time-to-spike coordinate for further exploration.

3:10 - 3:15 pm EDT | Low-dimensional manifold of neural oscillations revealed by data-driven model reduction | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 ZhuoCheng Xiao, New York University
 Session Chair
 Katie Morrison, University of Northern Colorado
Abstract
Neural oscillations across various frequency bands are believed to underlie essential brain functions, such as information processing and cognitive activities. However, the emergence of oscillatory dynamics from spiking neuronal networks, and the interplay among different cortical rhythms, has seldom been theoretically explored, largely due to the strong nonlinearity and high dimensionality involved. To address this challenge, we have developed a series of data-driven model reduction methods tailored for spiking network dynamics. In this talk I will present nearly two-dimensional manifolds in the reduced coordinates that successfully capture the emergence of gamma oscillations. Specifically, we find that the initiation phases of each oscillation cycle are the most critical. Subsequent cycles are more deterministic and lie on the aforementioned two-dimensional manifold. The Poincaré mappings between these initiation phases reveal the structure of the dynamical system and successfully explain the bifurcation from gamma oscillations to multi-band oscillations.

3:15 - 3:20 pm EDT | Sensitivity to control signals in triphasic rhythmic neural systems: a comparative mechanistic analysis via infinitesimal local timing response curves | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Zhuojun Yu, Case Western Reserve University
 Session Chair
 Katie Morrison, University of Northern Colorado
Abstract
Similar activity patterns may arise from model neural networks with distinct coupling properties and individual unit dynamics. These similar patterns may, however, respond differently to parameter variations and, specifically, to tuning of inputs that represent control signals. In this work, we analyze the responses resulting from modulation of a localized input in each of three classes of model neural networks that have been recognized in the literature for their capacity to produce robust three-phase rhythms: coupled fast-slow oscillators, near-heteroclinic oscillators, and threshold-linear networks. Triphasic rhythms, in which each phase consists of a prolonged activation of a corresponding subgroup of neurons followed by a fast transition to another phase, represent a fundamental activity pattern observed across a range of central pattern generators underlying behaviors critical to survival, including respiration, locomotion, and feeding. To perform our analysis, we extend the recently developed local timing response curve (lTRC), which allows us to characterize the timing effects due to perturbations, and we complement our lTRC approach with model-specific dynamical systems analysis. Interestingly, we observe disparate effects of similar perturbations across distinct model classes. Thus, this work provides an analytical framework for studying control of oscillations in nonlinear dynamical systems, and may help guide model selection in future efforts to study systems exhibiting triphasic rhythmic activity.

3:20 - 3:25 pm EDT | Modeling the effects of cell-type specific lateral inhibition | Lightning Talks - 11th Floor Lecture Hall
 Speaker
 Soon Ho Kim, Georgia Institute of Technology
 Session Chair
 Katie Morrison, University of Northern Colorado

3:30 - 4:00 pm EDT | Coffee Break | 11th Floor Collaborative Space

4:00 - 4:45 pm EDT | Computing the Global Dynamics of Parameterized Families of ODEs | 11th Floor Lecture Hall
 Speaker
 Marcio Gameiro, Rutgers University
 Session Chair
 Katie Morrison, University of Northern Colorado
Abstract
We present a combinatorial topological method to compute the dynamics of a parameterized family of ODEs. A discretization of the state space of the systems is used to construct a combinatorial representation from which recurrent versus non-recurrent dynamics is extracted. Algebraic topology is then used to validate and characterize the dynamics of the system. We will discuss the combinatorial description and the algebraic topological computations and will present applications to systems of ODEs arising from gene regulatory networks.
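In one dimension, the pipeline sketched in this abstract (discretize state space, build a combinatorial representation, extract recurrent dynamics) collapses to something very small, which makes it a useful illustration. A toy sketch (my own construction, not the machinery from the talk):

```python
import numpy as np

def recurrent_cells(f, edges):
    """Crude 1D version of the combinatorial approach: discretize the line
    into cells, direct each cell wall by the sign of the vector field, and
    flag cells in a nontrivial strongly connected component as recurrent.
    In 1D an SCC is just a run of bidirectional walls, so checking
    2-cycles between neighbors suffices (no full SCC algorithm needed)."""
    n = len(edges) - 1
    adj = {i: set() for i in range(n)}
    for i in range(n - 1):
        v = f(edges[i + 1])          # vector field at the shared wall
        if v >= 0:
            adj[i].add(i + 1)        # flow can cross rightward
        if v <= 0:
            adj[i + 1].add(i)        # flow can cross leftward
    rec = set()
    for i in range(n - 1):
        if i + 1 in adj[i] and i in adj[i + 1]:
            rec |= {i, i + 1}
    return rec

# dx/dt = x - x^3 has equilibria at -1, 0, 1; the recurrent cells found by
# the combinatorial method bracket exactly those equilibria.
cells = recurrent_cells(lambda x: x - x**3, list(np.linspace(-2.0, 2.0, 9)))
```

Each flagged pair of cells brackets one equilibrium; algebraic-topological invariants would then distinguish the attracting equilibria at +/-1 from the repelling one at 0, a step this sketch omits.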
Thursday, September 21, 2023

9:00 - 9:45 am EDT | Multiple timescale respiratory dynamics and effect of neuromodulation | 11th Floor Lecture Hall
 Speaker
 Yangyang Wang, Brandeis University
 Session Chair
 Zachary Kilpatrick, University of Colorado Boulder
Abstract
Respiration is an involuntary process in all living beings required for our survival. The preBötzinger complex (preBötC) in the mammalian brainstem is a neuronal network that drives inspiratory rhythmogenesis, whose activity is constantly modulated by neuromodulators in response to changes in the environment. In this talk, we will discuss challenges involved in the analysis of bursting dynamics in preBötC neurons and how these dynamics change during prenatal development. We will also combine insights from in vitro recordings and dynamical systems modeling to investigate the effect of norepinephrine (NE), an excitatory neuromodulator, on respiratory dynamics. Our investigation employs bifurcation analysis to reveal the mechanisms by which NE differentially modulates different types of preBötC bursting neurons.

10:00 - 10:30 am EDT | Coffee Break | 11th Floor Collaborative Space

10:30 - 11:15 am EDT | Enhancing Neuronal Classification Capacity via Nonlinear Parallel Synapses | 11th Floor Lecture Hall
 Speaker
 Marcus Benna, UC San Diego
 Session Chair
 Zachary Kilpatrick, University of Colorado Boulder
Abstract
We discuss models of a neuron that has multiple synaptic contacts with the same presynaptic axon. We show that a diverse set of learned nonlinearities in these parallel synapses leads to a substantial increase in the neuronal classification capacity.

11:30 am - 1:30 pm EDT | Working Lunch: Open Problems Session | Working Lunch - 11th Floor Collaborative Space

1:30 - 2:15 pm EDT | Combinatorial structure of continuous dynamics in gene regulatory networks | 11th Floor Lecture Hall
 Speaker
 Tomas Gedeon, Montana State University
 Session Chair
 Zachary Kilpatrick, University of Colorado Boulder
Abstract
Gene network dynamics and neural network dynamics face similar challenges of high dimensionality of both phase space and parameter space, and a lack of reliable experimental data to infer parameters. We first describe the mathematical foundation of DSGRN (Dynamic Signatures Generated by Regulatory Networks), an approach that provides a combinatorial description of the global dynamics of a network over its parameter space. This finite description allows comparison of parameterized dynamics between hundreds of networks to discard networks that are not compatible with experimental data. We also describe a close connection of DSGRN to Boolean network models that allows us to view DSGRN as a connection between parameterized continuous-time dynamics and discrete dynamics of Boolean models. If time allows, we discuss several applications of this methodology to systems biology.

2:30 - 3:15 pm EDT | A model of the mammalian neural motor architecture elucidates the mechanisms underlying efficient and flexible control of network dynamics | 11th Floor Lecture Hall
 Speaker
 Laureline Logiaco, Massachusetts Institute of Technology
 Session Chair
 Zachary Kilpatrick, University of Colorado Boulder
Abstract
One of the fundamental functions of the brain is to flexibly plan and control movement production at different timescales in order to efficiently shape structured behaviors. I will present research investigating how these complex computations are performed in the mammalian brain, with an emphasis on autonomous motor control. Specifically, I will focus on the mechanisms supporting efficient interfacing between 'higher-level' planning commands and 'lower-level' motor cortical dynamics that ultimately drive muscles. I will take advantage of the fact that the anatomy of the circuits underlying motor control is well known. It notably involves the primary motor cortex, a recurrent network that generates learned commands to drive muscles while interacting through loops with thalamic neurons that lack recurrent excitation. Using an analytically tractable model that incorporates these architectural constraints, I will explain how this motor circuit can implement a form of efficient modularity by combining (i) plastic thalamocortical loops that are movement-specific and (ii) shared hardwired circuits. I will show that this modular architecture can balance two different objectives: first, supporting the flexible recombination of an extensible library of reusable motor primitives; and second, promoting the efficient use of neural resources by taking advantage of shared connections between modules. I will end by mentioning some open avenues for further mathematical analyses related to this framework.

3:30 - 4:00 pm EDT: Coffee Break (11th Floor Collaborative Space)

4:00 - 4:45 pm EDT: Low-rank neural connectivity for the discrimination of temporal patterns (11th Floor Lecture Hall)
 Speaker
 Sean Escola, Columbia University
 Session Chair
 Zachary Kilpatrick, University of Colorado Boulder
Friday, September 22, 2023

9:00 - 9:45 am EDT: Mean-field theory of learning dynamics in deep neural networks (11th Floor Lecture Hall)
 Speaker
 Cengiz Pehlevan, Harvard University
 Session Chair
 Konstantin Mischaikow, Rutgers University
Abstract
The learning dynamics of deep neural networks are complex. While previous approaches have made advances in the mathematical analysis of two-layer neural networks, addressing deeper networks has been challenging. In this talk, I will present a mean-field theory of the learning dynamics of deep networks and discuss its implications.

10:00 - 10:45 am EDT: Multi-level measures for understanding and comparing biological and artificial neural networks (11th Floor Lecture Hall)
 Speaker
 SueYeon Chung, New York University
 Session Chair
 Konstantin Mischaikow, Rutgers University
Abstract
I will share recent theoretical advances on how a representation's population-level properties, such as high-dimensional geometry and spectral structure, can be used to capture (1) the classification capacity of neural manifolds and (2) the prediction error of neural data from network model representations.

11:00 - 11:30 am EDT: Coffee Break (11th Floor Collaborative Space)

11:30 am - 12:15 pm EDT: A Sparse-coding Model of Category-specific Functional Organization in IT Cortex (11th Floor Lecture Hall)
 Speaker
 Demba Ba, Harvard University
 Session Chair
 Konstantin Mischaikow, Rutgers University
Abstract
Primary sensory areas in the brains of mammals may have evolved to compute efficient representations of natural scenes. In the late '90s, Olshausen and Field proposed a model that expresses the components of a natural scene, e.g. natural-image patches, as sparse combinations of a common set of patterns. Applied to a dataset of natural images, this so-called sparse-coding model learns patterns that resemble the receptive fields of V1 neurons. Recordings from the monkey inferotemporal (IT) cortex suggest the presence, in this region, of a sparse code for natural-image categories. The recordings also suggest that, physically, IT neurons form spatial clusters, each of which preferentially responds to images from certain categories. Taken together, this evidence suggests that neurons in IT cortex form functional groups that reflect the grouping of natural images into categories. My talk will introduce a new sparse-coding model that exhibits this categorical form of functional grouping.
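The classic sparse-coding objective minimizes reconstruction error plus an L1 sparsity penalty, min over A, D of ||X - AD||^2 + lambda*||A||_1. A minimal sketch (a toy illustration on random data with made-up sizes and step sizes, not the speaker's model or code) alternates ISTA updates of the sparse codes with gradient steps on the dictionary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 "image patches" of dimension 64 (flattened 8x8).
X = rng.normal(size=(200, 64))
D = rng.normal(size=(16, 64))                  # dictionary of 16 patterns
D /= np.linalg.norm(D, axis=1, keepdims=True)  # unit-norm rows

lam, eta = 0.1, 0.01

def infer_codes(X, D, lam, n_steps=100):
    """ISTA: sparse codes A minimizing 0.5*||X - A D||^2 + lam*||A||_1."""
    L = np.linalg.norm(D @ D.T, 2)             # Lipschitz constant of gradient
    A = np.zeros((X.shape[0], D.shape[0]))
    for _ in range(n_steps):
        grad = (A @ D - X) @ D.T
        A = A - grad / L
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold
    return A

for _ in range(20):                 # alternate: infer codes, update dictionary
    A = infer_codes(X, D, lam)
    D += eta * A.T @ (X - A @ D)    # gradient step on reconstruction error
    D /= np.linalg.norm(D, axis=1, keepdims=True)
```

On actual natural-image patches, the learned rows of D come to resemble localized, oriented filters like V1 receptive fields.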

12:30 - 2:00 pm EDT: Lunch/Free Time

2:00 - 2:45 pm EDT: Final Open Problems Discussion, Problem Session (11th Floor Lecture Hall)
 Session Chairs
 Carina Curto, The Pennsylvania State University
 Konstantin Mischaikow, Rutgers University

3:00 - 3:30 pm EDT: Coffee Break (11th Floor Collaborative Space)
Monday, September 25, 2023
Math + Neuroscience: Strengthening the Interplay Between Theory and Mathematics

10:00 - 11:00 am EDT: Journal Club (11th Floor Lecture Hall)

3:00 - 3:30 pm EDT: Coffee Break (11th Floor Collaborative Space)

3:30 - 5:00 pm EDT: TLN Working Group, Group Work (10th Floor Classroom)
Tuesday, September 26, 2023

9:00 - 10:30 am EDT: Tutorial (11th Floor Lecture Hall)

3:00 - 3:30 pm EDT: Coffee Break (11th Floor Collaborative Space)
Wednesday, September 27, 2023

9:00 - 10:00 am EDT: Professional Development: Ethics I (11th Floor Lecture Hall)

3:00 - 3:30 pm EDT: Coffee Break (11th Floor Collaborative Space)
Thursday, September 28, 2023

9:00 - 10:30 am EDT: Tutorial (11th Floor Lecture Hall)

12:00 - 1:30 pm EDT: Open Problems Lunch Seminar, Working Lunch (11th Floor Lecture Hall)

3:00 - 3:30 pm EDT: Coffee Break (11th Floor Collaborative Space)
Friday, September 29, 2023

11:00 - 11:30 am EDT: Recurrent network models for predictive processing, Post Doc/Graduate Student Seminar (10th Floor Classroom)
 Bin Wang, University of California, San Diego
Abstract
Predictive responses to sensory stimuli are prevalent across cortical networks and are thought to be important for multisensory and sensorimotor learning. It has been hypothesized that predictive processing relies on computations done by two separate functional classes of cortical neurons: one specialized for "faithful" representation of external stimuli, and another for conveying prediction-error signals. It remains unclear how such predictive representations are formed in natural conditions, where stimuli are high-dimensional. In this presentation, I will describe efforts to characterize how high-dimensional predictive processing can be performed by recurrent networks. I will start with the neuroscience motivations, define the mathematical models, and mention some related mathematical questions that we haven't yet solved along the way.

11:30 am - 12:00 pm EDT: Fishing for beta: uncovering mechanisms underlying cortical oscillations in large-scale biophysical models, Post Doc/Graduate Student Seminar (10th Floor Classroom)
 Nicholas Tolley, Brown University
Abstract
Beta frequency (13-30 Hz) oscillations are robustly observed across the neocortex and are strongly predictive of behavior and disease states. While several theories exist regarding their functional significance, the cell- and circuit-level activity patterns underlying the generation of beta activity remain uncertain. We approach this problem using the Human Neocortical Neurosolver (HNN; hnn.brown.edu), a detailed biophysical model of a cortical column which simulates the microscale activity patterns underlying macroscale field potentials like beta oscillations. Detailed biophysical models potentially offer concrete and biologically interpretable predictions, but their use is challenged by computationally expensive simulations, an overwhelmingly large parameter space, and highly complex relationships between parameters and model outputs. We demonstrate how these challenges can be overcome by combining HNN with simulation-based inference (SBI), a deep-learning-based Bayesian inference framework, and use it to characterize the space of parameters capable of producing beta oscillations. Specifically, we use the HNN-SBI framework to characterize the constraints on network connectivity for producing spontaneous beta. In future work, we plan to compare these predictions to higher-level neural models to identify which simplifying assumptions are consistent with detailed models of neural oscillations.
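Simulation-based inference in its simplest form can be sketched as rejection sampling: draw parameters from a prior, simulate, and keep parameters whose output lands near the observed data. The simulator below is a hypothetical one-parameter stand-in (not HNN), and the SBI framework used in the talk relies on neural density estimators rather than rejection, but the underlying logic is the same:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta):
    """Hypothetical stand-in simulator: oscillation frequency (Hz)
    as a noisy function of a single 'coupling' parameter theta."""
    return 10.0 + 20.0 * theta + rng.normal(scale=1.0)

observed = 20.0                             # target: a 20 Hz beta-band rhythm
prior = rng.uniform(0.0, 1.0, size=20_000)  # uniform prior over theta

# Rejection step: keep parameters whose simulation lands near the data.
sims = np.array([simulate(t) for t in prior])
posterior = prior[np.abs(sims - observed) < 0.5]
```

The surviving samples approximate the posterior over parameters consistent with the observed rhythm; in this toy example they concentrate near theta = 0.5.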

1:30 - 3:00 pm EDT: Topology+Neuro Working Group, Group Work (10th Floor Classroom)

3:00 - 3:30 pm EDT: Coffee Break (11th Floor Collaborative Space)
Monday, October 2, 2023

3:30 - 5:00 pm EDT: TLN Working Group, Group Work (10th Floor Classroom)
Wednesday, October 4, 2023

9:00 - 10:00 am EDT: Professional Development: Ethics II (11th Floor Lecture Hall)
Friday, October 6, 2023

1:30 - 3:00 pm EDT: Topology+Neuro Working Group, Group Work (10th Floor Classroom)
Monday, October 9, 2023

3:30 - 5:00 pm EDT: TLN Working Group, Group Work (10th Floor Classroom)
Friday, October 13, 2023

1:30 - 3:00 pm EDT: Topology+Neuro Working Group, Group Work (10th Floor Classroom)
Monday, October 23, 2023

3:30 - 5:00 pm EDT: TLN Working Group, Group Work (10th Floor Classroom)
Wednesday, October 25, 2023

9:00 - 10:00 am EDT: Professional Development: Job Applications (11th Floor Lecture Hall)
Friday, October 27, 2023

1:30 - 3:00 pm EDT: Topology+Neuro Working Group, Group Work (10th Floor Classroom)
Wednesday, November 8, 2023

9:00 - 10:00 am EST: Professional Development: Papers (11th Floor Lecture Hall)
Friday, November 10, 2023

1:30 - 3:00 pm EST: Topology+Neuro Working Group, Group Work (10th Floor Classroom)
Monday, November 13, 2023

3:30 - 5:00 pm EST: TLN Working Group, Group Work (10th Floor Classroom)
Wednesday, November 15, 2023

9:00 - 10:00 am EST: Professional Development: Grants (11th Floor Lecture Hall)
Friday, November 17, 2023

1:30 - 3:00 pm EST: Topology+Neuro Working Group, Group Work (10th Floor Classroom)
Monday, November 20, 2023

3:30 - 5:00 pm EST: TLN Working Group, Group Work (10th Floor Classroom)
Friday, November 24, 2023

1:30 - 3:00 pm EST: Topology+Neuro Working Group, Group Work (10th Floor Classroom)
Monday, November 27, 2023

3:30 - 5:00 pm EST: TLN Working Group, Group Work (10th Floor Classroom)
Friday, December 1, 2023

1:30 - 3:00 pm EST: Topology+Neuro Working Group, Group Work (10th Floor Classroom)
Monday, December 4, 2023

3:30 - 5:00 pm EST: TLN Working Group, Group Work (10th Floor Classroom)
Friday, December 8, 2023

1:30 - 3:00 pm EST: Topology+Neuro Working Group, Group Work (10th Floor Classroom)
All event times are listed in ICERM local time in Providence, RI (Eastern Daylight Time / UTC-4).
Application Information
This program is at capacity, and ICERM is no longer accepting applications.
Your Visit to ICERM
 ICERM Facilities
 ICERM is located on the 10th & 11th floors of 121 South Main Street in Providence, Rhode Island. ICERM's business hours are 8:30 am - 5:00 pm during this event. See our facilities page for more info about ICERM and Brown's available facilities.
 Traveling to ICERM
 ICERM is located at Brown University in Providence, Rhode Island. Providence's T.F. Green Airport (15 minutes south) and Boston's Logan Airport (1 hour north) are the closest airports. Providence is also on Amtrak's Northeast Corridor. In-depth directions and transportation information are available on our travel page.
 Lodging/Housing
 Visiting ICERM for longer than a week-long workshop? ICERM staff work with participants to locate accommodations that fit their needs. Since short-term furnished housing is in very high demand, take advantage of the housing options ICERM may recommend. Contact housing@icerm.brown.edu for more details.
 Childcare/Schools
 Those traveling with family who are interested in information about childcare and/or schools should contact housing@icerm.brown.edu.
 Technology Resources
 Wireless internet access and wireless printing are available for all ICERM visitors. Eduroam is available for members of participating institutions. Thin clients in all offices and common areas provide open access to a web browser, SSH terminal, and printing capability. See our Technology Resources page for setup instructions and to learn about all available technology.
 Accessibility
 To request special services, accommodations, or assistance for this event, please contact accessibility@icerm.brown.edu as far in advance of the event as possible. Thank you.
 Discrimination and Harassment Policy
 ICERM is committed to creating a safe, professional, and welcoming environment that benefits from the diversity and experiences of all its participants. Brown University's "Code of Conduct", "Discrimination and Workplace Harassment Policy", "Sexual and Gender-based Misconduct Policy", and "Title IX Policy" apply to all ICERM participants and staff. Participants with concerns or requests for assistance on a discrimination or harassment issue should contact the ICERM Director, who is the responsible employee at ICERM under this policy.
 Fundamental Research
 ICERM research programs aim to promote Fundamental Research and mathematical sciences education. If you are engaged in sensitive or proprietary work, please be aware that ICERM programs often have participants from countries and entities subject to United States export control restrictions. Any discoveries of economically significant intellectual property supported by ICERM funding should be disclosed.
 Exploring Providence
 Providence's world-renowned culinary scene provides ample options for lunch and dinner. Neighborhoods near campus, including College Hill Historic District, have many local attractions. Check out the map on our Explore Providence page to see what's near ICERM.
Visa Information
Contact visa@icerm.brown.edu for assistance.
 Need a US Visa?
 J-1 visa requested via ICERM staff
 Eligible to be reimbursed
 B-1 or Visa Waiver Business (WB) – if you already have either visa, contact ICERM staff for a visa-specific invitation letter
 Ineligible to be reimbursed
 B-2 or Visa Waiver Tourist (WT)
 Already in the US?

F-1 and J-1 not sponsored by ICERM: obtain a letter approving reimbursement from the International Office of your home institution PRIOR to travel.
H-1B holders do not need a letter of approval.
All other visas: alert ICERM staff immediately about your situation.
ICERM does not reimburse visa fees. This chart is to inform visitors whether the visa they enter the US on allows them to receive reimbursement for the items outlined in their invitation letter.
Financial Support
This section is for general information only and does not indicate that all attendees receive funding. Please refer to your personalized invitation to review your offer.
 ORCID iD
 As this program is funded by the National Science Foundation (NSF), ICERM is required to collect your ORCID iD if you are receiving funding to attend this program. Be sure to add your ORCID iD to your Cube profile as soon as possible to avoid delaying your reimbursement.
 Acceptable Costs

 One round trip between your home institution and ICERM
 Flights on U.S. or E.U. airlines – economy class to either Providence airport (PVD) or Boston airport (BOS)
 Ground transportation to and from airports and ICERM
 Unacceptable Costs

 Flights on non-U.S. or non-E.U. airlines
 Flights on U.K. airlines
 Seats in economy plus, business class, or first class
 Change ticket fees of any kind
 Multi-use bus passes
 Meals or incidentals
 Advance Approval Required

 Personal car travel to ICERM from outside New England
 Multiple-destination plane ticket (does not include layovers to reach ICERM)
 Arriving or departing from ICERM more than a day before or day after the program
 Multiple trips to ICERM
 Rental car to/from ICERM
 Flights on Swiss, Japanese, or Australian airlines
 Arriving or departing from airport other than PVD/BOS or home institution's local airport
 Two one-way plane tickets to create a round trip (often purchased from Expedia, Orbitz, etc.)
 Travel Maximum Contributions

 New England: $350
 Other contiguous US: $850
 Asia & Oceania: $2,000
 All other locations: $1,500
 Note: these rates were updated in Spring 2023 and supersede any prior invitation rates. Invitations without travel support will still not receive travel support.
 Reimbursement Requests

Request Reimbursement with Cube
Refer to the back of your ID badge for more information. Checklists are available at the front desk and in the Reimbursement section of Cube.
 Reimbursement Tips

 Scanned original receipts are required for all expenses
 Airfare receipt must show full itinerary and payment
 ICERM does not offer per diem or meal reimbursement
 Allowable mileage is reimbursed at the prevailing IRS Business Rate, with the trip documented via a PDF of a Google Maps result
 Keep all documentation until you receive your reimbursement!
 Reimbursement Timing

6 - 8 weeks after all documentation is sent to ICERM. All reimbursement requests are reviewed by numerous central offices at Brown, which may request additional documentation.
 Reimbursement Deadline

Submissions must be received within 30 days of your ICERM departure to avoid applicable taxes. Submissions after 30 days will incur applicable taxes. No submissions are accepted more than six months after the program ends.