A popular idea in biology is that the intrinsic timescale of an individual “unit” plays a crucial role in the information processed by the system as a whole. For example, it has been proposed that the intrinsic timescales of single neurons in different brain areas are related to functional differences between these areas. However, disentangling intrinsic from collective timescales remains a highly nontrivial task, one that could benefit from intuition drawn from simple physical toy models. To this end, we consider the prototypical model of collective temporal behavior: kinetic Ising models, in which identical units are connected with a given topology and neighboring units stochastically interact with one another. We analyze how the behavior of such models is altered by aspects relevant to their computational implementation, namely finite temporal resolution, topological connectivity, and finite system size. Not coincidentally, these considerations have biological analogues. For example, the clock speed of a processor is functionally similar to an “inverse refractory period” of a neuron. While locality of interactions can be exploited for parallel simulation of physical systems, the diversity of topologies in biological systems is key to their expressive power.
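To make the model class concrete, the following is a minimal sketch of single-site Glauber dynamics for a two-dimensional kinetic Ising model. The lattice size, inverse temperature, and sweep count are arbitrary illustrative choices, not parameters from the study.

```python
import math
import random

def glauber_step(spins, beta, rng, L):
    """One asynchronous Glauber update of a random site on an L x L periodic lattice."""
    i, j = rng.randrange(L), rng.randrange(L)
    # Local field from the four nearest neighbours (periodic boundaries).
    h = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
         + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
    # Glauber rule: P(spin -> +1) = 1 / (1 + exp(-2 * beta * h)).
    spins[i][j] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-2.0 * beta * h)) else -1

rng = random.Random(0)
L, beta = 16, 0.3                      # illustrative lattice size and inverse temperature
spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
mags = []                              # magnetization trace, one value per sweep
for _ in range(100):
    for _ in range(L * L):             # one sweep = L * L single-site updates
        glauber_step(spins, beta, rng, L)
    mags.append(sum(map(sum, spins)) / (L * L))
```

The magnetization trace `mags` is the kind of collective observable whose autocorrelation timescale can then be compared against the single-unit update rate.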
Foraging is a fundamental behavior, as animals’ search for food is crucial for their survival. Patch leaving is a canonical foraging behavior, but classic theoretical conceptions of patch-leaving decisions lack some key naturalistic details. Optimal foraging theory provides general rules for when an animal should leave a patch, but does not provide mechanistic insight into how those rules change with the structure of the environment. Such a mechanistic framework would aid in designing quantitative experiments to unravel the behavioral and neural underpinnings of foraging. To address these shortcomings, we develop a normative theory of patch-foraging decisions. Using a Bayesian approach, we treat patch leaving as a statistical inference problem and derive the animal’s optimal decision strategies in both non-depleting and depleting environments. A majority of these cases can be analyzed explicitly using methods from stochastic processes. Our behavioral predictions are expressed in terms of the optimal patch residence time and the decision rule by which an animal departs a patch. We also extend our theory to a hierarchical model in which the forager learns the environmental food resource distribution. The quantitative framework we develop will therefore help experimenters move from analyzing trial-based behavior to continuous behavior without loss of quantitative rigor. Our theoretical framework both extends optimal foraging theory and motivates a variety of behavioral and neuroscientific experiments investigating patch-foraging behavior.
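To illustrate the inference view, here is a minimal sketch of Bayesian patch leaving in a non-depleting environment: the forager tracks the log-likelihood ratio that the patch is high- versus low-yield from Poisson food encounters and leaves when it falls below a threshold. The rates `r_high`, `r_low`, and the threshold `theta` are hypothetical values, not quantities from the paper.

```python
import math
import random

def patch_llr(n_encounters, t, r_high, r_low):
    """Log-likelihood ratio that the current patch is high- vs low-yield,
    given n_encounters Poisson food encounters observed for time t."""
    return n_encounters * math.log(r_high / r_low) - (r_high - r_low) * t

rng = random.Random(1)
r_high, r_low, theta = 2.0, 0.5, -1.5  # hypothetical yield rates and leave threshold

# Simulate foraging in a low-yield, non-depleting patch; the forager
# leaves as soon as the evidence drops below the threshold theta.
dt, t, n = 0.01, 0.0, 0
while patch_llr(n, t, r_high, r_low) >= theta:
    t += dt
    if rng.random() < r_low * dt:      # food encounter in this time bin
        n += 1
```

Each encounter pushes the belief up by a fixed increment `log(r_high / r_low)`, while the absence of food drags it down linearly, so the residence time emerges from a biased random walk hitting a boundary.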
The processes that neuroscientists study, including sensory systems, cognitive capabilities, and decision-making, depend on ecological and evolutionary forces that shape how animals adapt to environmental change. This raises fundamental questions both in neuroscience and in evolutionary biology. In systems neuroscience, decision-making has usually been studied by training animals to perform stereotyped behavior under laboratory conditions (Gold and Shadlen, 2007; Shadlen and Kiani, 2013; Hanks and Summerfield, 2017). This has helped to elucidate the neurobiological mechanisms of decision-making, but does not describe how such decisions are performed in a natural environment, or what ecological and evolutionary forces shaped these processes (Krakauer et al., 2017; Mobbs et al., 2018). It remains an open question whether the neural mechanisms for trained behavior are recruited for decisions made in natural settings. Behavioral ecology, on the other hand, examines the evolutionary pressures that lead to decisions that function in natural environments (Krebs and Davies, 1997). This approach generally does not examine the cognitive machinery that processes information, leaving open the question of how neural processing systems constrain decision-making.
Recent advances in data acquisition technology, computer vision, behavioral modeling, and machine learning facilitate the collection and efficient processing of data on behavior and environmental details (Berman, 2018; Brown and Bivort, 2018), and also enable neural recordings from freely moving animals (Kerr and Nimmerjahn, 2012; Jun et al., 2017). The perspectives of both systems neuroscience and behavioral ecology are needed in order to use such data to form a deeper mechanistic understanding of decision-making (Bateson and Laland, 2013; Nesse, 2013). The articles in this special issue provide theoretical frameworks and case studies that highlight the crucial importance of connecting ecological context to the study of decision making. Two review articles discuss challenges, open questions, and theoretical frameworks related to incorporating ecological context and cognitive processes into studies on decision-making (DeAngelis and Diaz; Budaev et al.). Three research articles provide examples across a range of species—ants, caterpillars, and primates—for how both lab and field experiments can be used to study natural decisions (Despland), how environmental context affects evolution of sensory processing systems (Ogawa et al.), and how environmental conditions can affect the information used to make decisions (Janson).
A challenge in designing laboratory experiments is to capture the key elements of the decisions that animals make in their natural environment. Despland combined field observations with lab experiments to study the aggregation behavior of caterpillars and their decisions about when to initiate feeding on leaves. The results show that a caterpillar's decision to feed, a behavior that influences its survival, depends both on social context and on environmental factors, including the trichome defenses of the plant.
Natural environments present many sensory stimuli that can combine to influence decision-making. Budaev et al. discuss how internal states, such as hunger or fear, and the need to filter relevant sensory information shape decisions made in a complex environment. In the framework they present, an animal's state sets the priority of each decision system, such as nutrition or survival, and top-down attention mechanisms regulate and limit which sensory information is processed.
Filtering of information can also be selected for, leading to animals with sensory systems that are adapted to their environment, as demonstrated in the case of ants in the article by Ogawa et al. This study examines two closely related ant species and shows that they have optimized their visual systems differently in order to adapt to their respective visual ecologies.
Janson demonstrates that environmental conditions and individual capabilities influence the relative benefit of additional information and of cognitive processes, such as memory, in decision-making. Inspired by the results of field experiments on wild capuchin monkeys, they simulate foraging decisions and ask how including memory of the time elapsed since prior foraging visits improves foraging efficiency in different resource environments.
DeAngelis and Diaz highlight the importance of incorporating individuals' decision-making strategies into population-level ecological models, and discuss how agent-based models can be used to examine the fitness consequences of specific individual decision rules. For example, agent-based modeling can represent differences between individuals in a population, or co-dependent strategies such as predator–prey interactions, and opens up the opportunity to provide mechanistic interpretations of long-standing population-level ecological phenomena.
We hope these articles inspire further work that combines systems neuroscience and behavioral ecology.
Axons functionally link the somato-dendritic compartment to synaptic terminals. Structurally and functionally diverse, they play a central role in determining the delays and reliability with which neuronal ensembles communicate. By combining their active and passive biophysical properties, they support a plethora of physiological computations. In this review, we revisit the biophysics of the generation and propagation of electrical signals in the axon and their dynamics. We further place the computational abilities of axons in the context of intracellular and intercellular coupling. We discuss how, by means of sophisticated biophysical mechanisms, axons expand the repertoire of axonal computation, and thereby of neural computation.
The patch-leaving problem is a canonical foraging task, in which a forager must decide to leave a current resource in search of another. Theoretical work has derived optimal strategies for when to leave a patch, and experiments have tested the conditions under which animals do or do not follow an optimal strategy. Nevertheless, models of patch-leaving decisions do not consider the imperfect and noisy sampling process through which an animal gathers information, nor how this process is constrained by neurobiological mechanisms. In this theoretical study, we formulate an evidence accumulation model of patch-leaving decisions in which the animal averages over noisy measurements to estimate the state of the current patch and of the overall environment. We solve the model for conditions under which foraging decisions are optimal and equivalent to the marginal value theorem, and perform simulations to analyze deviations from optimality when these conditions are not met. By adjusting the drift rate and decision threshold, the model can represent different “strategies”, for example an incremental, decremental, or counting strategy. These strategies yield identical decisions in the limiting case but differ in how patch residence times adapt when the foraging environment is uncertain. To describe sub-optimal decisions, we introduce an energy-dependent marginal utility function that predicts longer-than-optimal patch residence times when food is plentiful. Our model provides a quantitative connection between ecological models of foraging behavior and evidence accumulation models of decision making. Moreover, it provides a theoretical framework for potential experiments that seek to identify the neural circuits underlying patch-leaving decisions.
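For reference, the marginal value theorem condition invoked above can be solved numerically for a simple saturating gain function. This sketch assumes an exponentially depleting patch with gain g(t) = R(1 − exp(−t/τ)); the function name and all parameter values are illustrative, not taken from the study.

```python
import math

def mvt_residence_time(R, tau, travel, tol=1e-9):
    """Optimal residence time under the marginal value theorem for a
    saturating gain g(t) = R * (1 - exp(-t / tau)): leave when the
    instantaneous intake rate g'(t) equals the long-run average rate
    g(t) / (t + travel). Solved by bisection."""
    g  = lambda t: R * (1.0 - math.exp(-t / tau))
    gp = lambda t: (R / tau) * math.exp(-t / tau)
    f  = lambda t: gp(t) * (t + travel) - g(t)   # the root of f is the optimum
    lo, hi = 1e-9, 100.0 * tau
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_short = mvt_residence_time(R=10.0, tau=5.0, travel=3.0)
t_long  = mvt_residence_time(R=10.0, tau=5.0, travel=6.0)
```

The comparison of `t_short` and `t_long` recovers the classic MVT prediction that longer travel times between patches lengthen the optimal stay.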
Decision making in dynamic environments requires discounting old evidence that may no longer inform the current state of the world. Previous work found that humans discount old evidence in a dynamic environment, but do not discount at the optimal rate. Here we investigated whether rats can optimally discount evidence in a dynamic environment by adapting the timescale over which they accumulate evidence. Using discrete evidence pulses, we exactly compute the optimal inference process. We show that the optimal timescale for evidence discounting depends on both the stimulus statistics and noise in sensory processing. When both of these components are taken into account, rats accumulate and discount evidence with the optimal timescale. Finally, by changing the volatility of the environment, we demonstrate experimental control over the rats’ accumulation timescale. The mechanisms supporting integration are a subject of extensive study, and experimental control over these timescales may open new avenues of investigation.
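A minimal sketch of discrete-pulse evidence accumulation with optimal discounting in a switching environment, in the spirit of the computation described above: between pulses, the accumulated log-odds are shrunk toward zero by the nonlinear update implied by a hazard rate h. The hazard rate, pulse statistics, and per-pulse log-likelihood ratio are illustrative assumptions, not values from the study.

```python
import math
import random

def discount_prior(L, h):
    """Optimally discounted log-odds after one time step in which the hidden
    state switches with hazard rate h (nonlinear shrink toward zero)."""
    eL = math.exp(L)
    return math.log((eL * (1.0 - h) + h) / ((1.0 - h) + eL * h))

def accumulate(pulses, h, llr_per_pulse):
    """Accumulate signed evidence pulses, discounting old evidence each step."""
    L = 0.0
    for p in pulses:                   # p is +1 or -1 (pulse identity)
        L = discount_prior(L, h) + llr_per_pulse * p
    return L

rng = random.Random(2)
h = 0.05                               # hypothetical hazard rate per step
pulses = [rng.choice((1, 1, 1, -1)) for _ in range(200)]   # biased toward +1
L_final = accumulate(pulses, h, llr_per_pulse=0.5)
```

Because the discounted belief saturates at ±log((1 − h)/h), the effective accumulation timescale shortens as the environment becomes more volatile, which is the dependency the experiment manipulates.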
Closed Loop Neuroscience addresses the technical aspects of closed-loop neurophysiology, presenting the implementation of these approaches across several domains of neuroscience, from cellular and network neurophysiology, through sensory and motor systems, to clinical therapeutic devices.
Although closed-loop approaches have long been a part of the neuroscientific toolbox, these techniques are only now gaining popularity in research and clinical applications. As there is not yet a comprehensive methods book addressing the topic as a whole, this volume fills that gap, presenting state-of-the-art approaches and the technical advancements that enable their application to different scientific problems in neuroscience.
The dysfunction of the small-conductance calcium-activated K+ channel SK3 has been described as one of the factors responsible for the progression of psychoneurological diseases, but the molecular basis of this is largely unknown. This report reveals, through the use of immunohistochemistry and computed tomography, that long-term increased expression of the SK3 small-conductance calcium-activated potassium channel (SK3-T/T) in mice induces a notable bilateral reduction of the hippocampal area (more than 50%). Histological analysis showed that SK3-T/T mice have cellular disarrangements and neuron discontinuities in the CA1 and CA3 neuronal layers of the hippocampal formation. SK3 overexpression resulted in cognitive loss, as determined by the object recognition test. Electrophysiological examination of hippocampal slices revealed that SK3 channel overexpression induced a deficiency of long-term potentiation in hippocampal microcircuits. In association with these results, there were changes at the mRNA level in some genes involved in Alzheimer's disease and/or linked to schizophrenia, epilepsy, and autism. Taken together, these features suggest that augmenting the function of the SK3 ion channel in mice may present a unique opportunity to investigate the neural basis of central nervous system dysfunctions associated with schizophrenia, Alzheimer's disease, or other neuropsychiatric/neurodegenerative disorders in this model system. As a more detailed understanding of the role of the SK3 channel in brain disorders is limited by the lack of specific SK3 antagonists and agonists, the results observed in this study are of significant interest; they suggest a new approach for the development of neuroprotective strategies in neuropsychiatric/neurodegenerative diseases, with SK3 representing a potential drug target.
Multi-electrode arrays (MEAs) allow non-invasive multi-unit recording in-vitro from cultured neuronal networks. For sufficient neuronal growth and adhesion on such MEAs, substrate preparation is required. Plating dissociated neurons on a uniformly prepared MEA surface results in the formation of spatially extended random networks with substantial inter-sample variability. Such cultures are not optimally suited to study the relationship between defined structure and dynamics in neuronal networks. To overcome these shortcomings, neurons can be cultured with pre-defined topology by spatially structured surface modification. Spatially structuring a MEA surface accurately and reproducibly with the equipment of a typical cell-culture laboratory is challenging.
In this paper, we present a novel approach utilizing micro-contact printing (μCP) combined with a custom-made device to accurately position patterns on MEAs with high precision. We call this technique AP-μCP (accurate positioning micro-contact printing).
Comparison with existing methods
Other approaches presented in the literature using μCP for patterning either relied on facilities or techniques not readily available in a standard cell culture laboratory, or they did not specify means of precise pattern positioning.
Here we present a relatively simple device for reproducible and precise patterning in a standard cell-culture laboratory setting. The patterned neuronal islands on MEAs provide a basis for high throughput electrophysiology to study the dynamics of single neurons and neuronal networks.
Spontaneous bursting activity in cultured neuronal networks is initiated by leader neurons, which constitute a small subset of first-to-fire neurons forming a sub-network that recruits follower neurons into the burst. While the existence and stability of leader neurons is well established, the influence of stimulation on the leader-follower dynamics is not sufficiently understood. By combining multi-electrode array recordings with whole-field optical stimulation of cultured Channelrhodopsin-2 transduced hippocampal neurons, we show that fade-in photo-stimulation induces a significant shortening of the intra-burst firing rate peak delay of follower electrodes after offset of the stimulation, compared to unperturbed spontaneous activity. Our study shows that optogenetic stimulation can be used to change the dynamical fine structure of self-organized network bursts.
Many diverse studies have shown that a mechanical displacement of the axonal membrane accompanies the electrical pulse defining the action potential (AP). We present a model for these mechanical displacements as arising from the driving of surface wave modes in which potential energy is stored in elastic properties of the neuronal membrane and cytoskeleton while kinetic energy is carried by the axoplasmic fluid. In our model, these surface waves are driven by the travelling wave of electrical depolarization characterizing the AP, altering compressive electrostatic forces across the membrane. This driving leads to co-propagating mechanical displacements, which we term Action Waves (AWs). Our model allows us to estimate the shape of the AW that accompanies any travelling wave of voltage, making predictions that are in agreement with results from several experimental systems. Our model can serve as a framework for understanding the physical origins and possible functional roles of these AWs.
In the axons of cultured hippocampal neurons, actin forms various structures, including bundles, patches (involved in the preservation of neuronal polarity), and a recently reported periodic ring-like structure. Nevertheless, the overall organization of actin in neurons and in the axon initial segment (AIS) is still unclear, due mainly to a lack of adequate imaging methods. By harnessing live-cell stimulated emission depletion (STED) nanoscopy and the fluorescent probe SiR-Actin, we show that the periodic subcortical actin structure is in fact present in both axons and dendrites. The periodic cytoskeleton organization is also found in the peripheral nervous system, specifically at the nodes of Ranvier. The actin patches in the AIS co-localize with pre-synaptic markers. Cytosolic actin organization strongly depends on the developmental stage and subcellular localization. Altogether, the results of this study reveal unique neuronal cytoskeletal features.
We present a novel experimental paradigm, “in vitro closed-loop optical network electrophysiology” (ivCLONE). This seminar note gives an overview of the basics of optical neurostimulation, network electrophysiology, and closed-loop electrophysiology. It further discusses how combining these techniques can help address network-level phenomena and reveal how single-neuron properties relate to collective network dynamics.
Synchronized bursting is found in many brain areas and has also been implicated in the pathophysiology of neuropsychiatric disorders such as epilepsy, Parkinson’s disease, and schizophrenia. Despite extensive studies of network burst synchronization, it is insufficiently understood how this type of network-wide synchronization can be strengthened, reduced, or even abolished. We combined electrical recording using multi-electrode arrays with optical stimulation of cultured channelrhodopsin-2 transduced hippocampal neurons to study and manipulate network burst synchronization. We found low-frequency photo-stimulation protocols that are sufficient to induce potentiation of network bursting, modify bursting dynamics, and increase interneuronal synchronization. Surprisingly, slowly fading-in light stimulation, which substantially delayed and reduced light-driven spiking, was at least as effective in reorganizing network dynamics as much stronger pulsed light stimulation. Our study shows that mild stimulation protocols that do not enforce particular activity patterns onto the network can be highly effective inducers of network-level plasticity.
Central neurons operate in a regime of constantly fluctuating conductances, induced by thousands of presynaptic cells. Channelrhodopsins have been used almost exclusively to imprint a fixed spike pattern through sequences of brief depolarizations. Here we introduce continuous dynamic photostimulation (CoDyPs), a novel approach to mimic in-vivo-like input fluctuations noninvasively in cells transfected with the weakly inactivating channelrhodopsin variant ChIEF. Even during long-term experiments, cultured neurons subjected to CoDyPs generate seemingly random, but reproducible spike patterns. In voltage-clamped cells, CoDyPs induced highly reproducible current waveforms that could be precisely predicted from the light-conductance transfer function of ChIEF. CoDyPs can replace the conventional, flash-evoked imprinting of spike patterns in in-vivo and in-vitro studies, preserving natural activity. When combined with non-invasive spike detection, CoDyPs allows the acquisition of orders-of-magnitude larger data sets than previously possible for studies of the dynamical response properties of many individual neurons.
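As a rough sketch of the kind of fluctuating drive such an approach delivers, one can generate an Ornstein–Uhlenbeck waveform and clip it at zero, since light intensity cannot be negative. All parameters here are illustrative assumptions, not values from the study.

```python
import math
import random

def ou_waveform(n, dt, tau, mean, sigma, seed=3):
    """Sample path of an Ornstein-Uhlenbeck process, clipped at zero so it
    can serve as a non-negative light-intensity command waveform."""
    rng = random.Random(seed)
    c = math.exp(-dt / tau)             # one-step autocorrelation
    s = sigma * math.sqrt(1.0 - c * c)  # exact discrete-time noise scale
    x, out = mean, []
    for _ in range(n):
        x = mean + (x - mean) * c + s * rng.gauss(0.0, 1.0)
        out.append(max(x, 0.0))         # clip: intensity cannot go negative
    return out

# 1 s of waveform at 10 kHz with a 3 ms correlation time (hypothetical values).
wave = ou_waveform(n=10000, dt=1e-4, tau=3e-3, mean=1.0, sigma=0.3)
```

Because the seed is fixed, the same waveform can be replayed across trials, mirroring the "seemingly random but reproducible" drive described above.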
Dynamic oscillatory coherence is believed to play a central role in flexible communication between brain circuits. To test this communication-through-coherence hypothesis, experimental protocols that allow a reliable control of phase-relations between neuronal populations are needed. In this modeling study, we explore the potential of closed-loop optogenetic stimulation for the control of functional interactions mediated by oscillatory coherence. The theory of non-linear oscillators predicts that the efficacy of local stimulation will depend not only on the stimulation intensity but also on its timing relative to the ongoing oscillation in the target area. Induced phase-shifts are expected to be stronger when the stimulation is applied within specific narrow phase intervals. Conversely, stimulations with the same or even stronger intensity are less effective when timed randomly. Stimulation should thus be properly phased with respect to ongoing oscillations (in order to optimally perturb them), and the timing of the stimulation onset must be determined by a real-time phase analysis of simultaneously recorded local field potentials (LFPs). Here, we introduce an electrophysiologically calibrated model of Channelrhodopsin 2 (ChR2)-induced photocurrents, based on fits holding over two decades of light intensity. Through simulations of a neural population which undergoes coherent gamma oscillations—either spontaneously or as an effect of continuous optogenetic driving—we show that precisely-timed photostimulation pulses can be used to shift the phase of oscillation, even at transduction rates smaller than 25%. We then consider a canonical circuit with two inter-connected neural populations oscillating with gamma frequency in a phase-locked manner.
We demonstrate that photostimulation pulses applied locally to a single population can induce, if precisely phased, a lasting reorganization of the phase-locking pattern and hence modify functional interactions between the two populations.
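The phase-dependence argument can be illustrated with a toy event-driven phase oscillator carrying a sinusoidal phase-response curve (PRC): a pulse timed near the PRC maximum shifts the ongoing oscillation, while the same pulse at a PRC zero-crossing barely does. The PRC shape, oscillation frequency, and pulse strength are illustrative assumptions, not the calibrated ChR2 model of the study.

```python
import math

def final_phase(phase0, omega, stim_times, kick, T):
    """Event-driven phase oscillator: the phase advances linearly at rate
    omega between pulses; each pulse shifts it by kick * sin(phase),
    a toy sinusoidal phase-response curve."""
    phi, t = phase0 % (2 * math.pi), 0.0
    for ts in sorted(stim_times):
        phi = (phi + omega * (ts - t)) % (2 * math.pi)
        phi += kick * math.sin(phi)    # shift depends on the phase at pulse time
        t = ts
    return (phi + omega * (T - t)) % (2 * math.pi)

omega = 2 * math.pi * 40.0             # 40 Hz, a gamma-band rhythm
T, kick = 0.25, 0.5
free = final_phase(0.0, omega, [], kick, T)
# Pulse timed to the PRC maximum (phase pi/2) versus a PRC zero-crossing (phase ~0).
good = final_phase(0.0, omega, [(math.pi / 2) / omega], kick, T)
bad  = final_phase(0.0, omega, [(2 * math.pi) / omega], kick, T)
```

Comparing `good` and `bad` against `free` shows why real-time phase estimation of the LFP is needed: only the well-phased pulse produces a lasting phase shift.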
In the presented thesis work, an “Optical Network Electrophysiology” system that combines optical stimulation using optogenetic tools with multisite neuronal recording using microelectrode arrays was developed, and its applicability to questions of neuronal network dynamics was demonstrated. The system was used to modify the intrinsic collective dynamics of a cultured neuronal network, potentially maximizing spike synchronization, using mild whole-field photostimulation. It offers an attractive alternative to stimulation paradigms that externally control neuronal networks. Another application of the system was to drive neurons in a naturalistic, in-vivo-like fashion, where fluctuating light waveforms were used to put neurons in a “fluctuation-driven regime”. This regime is crucial for characterizing basic computational properties of neurons such as frequency–input curves, spike-triggered averages, and correlation gain.