In the same way that evolution shapes animals, is it possible to use artificial evolution to automatically design robots? This attractive vision gave birth to Evolutionary Robotics (ER), an interdisciplinary field that incorporates ideas from biology, robotics, and artificial intelligence.
This tutorial will give an overview of the various questions addressed in ER, relying on examples from the literature. Past achievements and major contributions, as well as specific challenges in ER, will be described.
His field of research is mostly concerned with Evolutionary Computation and Complex Systems: self-adaptive collective robotic systems, generative and developmental systems, and evolutionary robotics. Nicolas Bredeche is the author of more than 30 papers in journals and major conferences in the field. He is also a member of the French Evolution Artificielle association and has been regularly co-organizing the French Artificial Evolution one-day seminar (JET). He has also organized several international workshops on Evolution, Robotics, and Development of Artificial Neural Networks.
His research is mainly concerned with the use of evolutionary algorithms in the context of optimization or synthesis of robot controllers. He worked in a robotics context to design, for instance, controllers for flying robots, but also in the context of modelling where he worked on the use of multi-objective evolutionary algorithms to optimize and study computational models.
More recently, he has focused on the use of multi-objective approaches to tackle learning problems such as premature convergence or generalization. Researchers of the team work on different aspects of learning in the context of motion control and cognition, both from a computational neuroscience perspective and a robotics perspective. Jean-Baptiste Mouret is a senior researcher ("Directeur de recherche") at Inria, the French research institute dedicated to computer science and mathematics.
Overall, J.-B. Mouret conducts research that intertwines evolutionary algorithms, neuro-evolution, and machine learning to make robots more adaptive. His work was recently featured on the cover of Nature (Cully et al.).

The automatic design of algorithms has been an early aim of both machine learning and AI, but has proved elusive. The aim of this tutorial is to introduce hyper-heuristics as a principled approach to the automatic design of algorithms.
Hyper-heuristics are metaheuristics applied to a space of algorithms; that is, the search is over algorithms rather than over direct solutions to a problem instance. In particular, this tutorial will demonstrate how to mine existing algorithms to obtain algorithmic primitives for the hyper-heuristic to compose new algorithmic solutions from, and how to employ various types of genetic programming to execute the composition process. This tutorial will place hyper-heuristics in the context of genetic programming, which differs in that it constructs solutions from scratch using atomic primitives, as well as genetic improvement, which takes an existing program as its starting point and improves on it (a recent direction introduced by William Langdon).
The approach proceeds from the observation that it is possible to define an invariant framework for the core of any class of algorithms, often by examining existing human-written algorithms for inspiration. The variant components of the algorithm can then be generated by genetic programming. Each instance of the framework therefore defines a family of algorithms. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of the approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete.
Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence.
This leads to a technique for mass-producing algorithms that can be customised to the context of end-use. This is perhaps best illustrated as follows: typically a researcher might create a travelling salesperson problem (TSP) algorithm by hand. When executed, this algorithm returns a solution to a specific instance of the TSP. We will describe a method that generates TSP algorithms that are tuned to representative instances of interest to the end-user. This tutorial will provide a step-by-step guide which takes the novice through the distinct stages of automatic design.
Examples will illustrate and reinforce the issues of practical application. This technique has repeatedly produced results which outperform their manually designed counterparts, and a theoretical underpinning will be given to demonstrate why this is the case. Automatic design will become an increasingly attractive proposition: the cost of human design will only increase in line with inflation, while the speed of processors increases in line with Moore's law, making automatic design attractive for industrial application.
Basic knowledge of genetic programming will be assumed. Daniel R. received his Ph.D. For several years he has served on the GECCO GA track program committee, the Congress on Evolutionary Computation program committee, and a variety of other international conference program committees. His research interests include the design of hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and program understanding.
He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members. John R.

Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms. It has rapidly grown into a thriving area of research and application. While sharing the evolution-inspired algorithmic principles of a genetic algorithm, it differs by exploiting an executable genome.
Genetic programming evolves a 'program' to solve a problem rather than a single solution. This tutorial introduces the basic genetic programming paradigm. It explains how the powerful capability of genetic programming derives from modular algorithmic components: executable representations such as a parse tree; variation operators that preserve syntax and explore a variable-length, hierarchical solution space; appropriately chosen programming functions; and fitness function specification.
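A minimal parse-tree representation makes these ingredients concrete. This is an illustrative sketch (function set, terminal set, and mutation probabilities are arbitrary choices, not the tutorial's demo software): trees are nested Python lists, and subtree mutation always yields a syntactically valid program.

```python
import random
import operator

FUNCS = {'add': (operator.add, 2), 'mul': (operator.mul, 2)}  # function set
TERMS = ['x', 1.0]                                            # terminal set

def random_tree(depth, rng):
    """Grow a random program tree down to at most `depth` levels."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMS)
    f = rng.choice(sorted(FUNCS))
    return [f] + [random_tree(depth - 1, rng) for _ in range(FUNCS[f][1])]

def evaluate(tree, x):
    """The genome is executable: interpret the tree for input x."""
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    fn, _ = FUNCS[tree[0]]
    return fn(*(evaluate(child, x) for child in tree[1:]))

def mutate(tree, rng, depth=2):
    """Subtree mutation: replace a random node with a fresh subtree.
    Because whole subtrees are swapped, syntax is always preserved."""
    if not isinstance(tree, list) or rng.random() < 0.2:
        return random_tree(depth, rng)
    new = tree[:]
    i = rng.randrange(1, len(new))
    new[i] = mutate(new[i], rng, depth)
    return new

# Example genome: this tree computes x*x + x.
genome = ['add', ['mul', 'x', 'x'], 'x']
```

`evaluate(genome, 3)` returns 12; a fitness function would score such trees against target data, and crossover is implemented analogously as an exchange of subtrees between two parents.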
It provides demos and walks through an example of GP software. ALFA focuses on scalable machine learning, evolutionary algorithms, and frameworks for large-scale knowledge mining, prediction, and analytics. The group has projects in clinical medicine knowledge discovery (arterial blood pressure forecasting and pattern recognition, diuretics in the ICU); wind energy (turbine layout optimization, resource prediction, cable layout); and MOOC technology (MoocDB, student persistence and resource usage analysis).
Her research is in the design of scalable Artificial Intelligence systems that execute on a range of hardware systems: GPUs, workstations, grids, clusters, clouds, and volunteer compute networks. These systems include machine learning components such as evolutionary algorithms. Una-May has a patent for an original genetic algorithm technique applicable to internet-based name suggestions. She holds a B. He has 10 years of experience in EC, focusing on the use of programs with grammatical representations, estimation of distribution, and coevolution.
His work has been applied to networks, tax avoidance, and Cyber Security.
Learning Classifier Systems (LCSs) emerged in the late 1970s and have since attracted a lot of research attention. Nowadays, several variations exist that are capable of dealing with most modern ML tasks, including online and batch-wise supervised learning, single- and multi-step reinforcement learning, and even unsupervised learning. This great flexibility, which is due to their modular and clearly defined algorithmic structure paving the way for simple and straightforward adaptations, is unique in the field of Evolutionary Machine Learning (EML), earning the LCS paradigm a permanent place in the EML field.
Although the blueprint of building blocks that bring LCSs to function is well known, gaining theoretical insight into the interplay between those blocks has been a crucial research topic for a long time, and it still constitutes a subject of active research. In this tutorial, the main goal is to introduce exactly these building blocks of LCS-based EML and to conceptually develop a modern, generic Michigan-style LCS step by step from scratch.
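Two of those building blocks, matching and covering, can be sketched in a few lines. This is a simplified illustration (ternary conditions, arbitrary attribute names and parameter values), not the full classifier lifecycle of an XCS-style system:

```python
import random

def matches(condition, state):
    """Ternary condition: '0' and '1' match literally, '#' is a wildcard."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

def cover(state, rng, p_hash=0.33):
    """Covering: create a new classifier guaranteed to match `state`."""
    cond = ''.join('#' if rng.random() < p_hash else s for s in state)
    return {'condition': cond, 'action': rng.choice([0, 1]), 'fitness': 0.1}

def match_set(population, state, rng):
    """Form the match set; invoke covering if no classifier matches."""
    ms = [cl for cl in population if matches(cl['condition'], state)]
    if not ms:
        cl = cover(state, rng)
        population.append(cl)
        ms = [cl]
    return ms
```

In a complete Michigan-style system, the match set would then be partitioned by action, classifier predictions and fitnesses updated by reinforcement, and a genetic algorithm run over the action set.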
Past and recent theoretical advances in XCS research will be the subject of further discussion, to give attendees a feeling for the fundamental challenges and for the bounds of what XCS can achieve under which circumstances. To nevertheless provide a holistic view of LCSs, the tutorial starts with a sketch of the lineage and historical developments in the field, but will then quickly focus on the more prominent Michigan-style systems. The third part of the tutorial is then devoted to the state of the art in LCS research in terms of real-world applicability.
The most recent advances that have led to modern systems such as XCSF for online function approximation or ExSTraCS for large-scale supervised data mining will thus be the subject of elaboration. With the intention of providing the audience with an impression of where LCS research stands these days and which open questions remain, a review of the most recent endeavors to tackle unsolved issues concludes the tutorial.
Anthony Stein is a research associate and Ph.D. student. He received his B. He then switched to the University of Augsburg to proceed with his master's degree. In his master's thesis, he dived into the nature of Learning Classifier Systems for the first time. Since then, he has been a passionate follower of and contributor to ongoing research in this field. His research focuses on the applicability of EML techniques in self-learning adaptive systems which are asked to act in real-world environments that exhibit challenges such as data imbalance and non-stationarity.
Therefore, in his work he investigates the use of interpolation and active learning methods to change how classifiers are initialized, how insufficiently covered niches of the problem space are filled, and how adequate actions are selected. A further aspect he investigates is how Learning Classifier Systems can be enhanced toward proactive knowledge construction.

In model-building evolutionary algorithms, the variation operators are guided by a model that conveys as much problem-specific information as possible, so as to best combine the currently available solutions and thereby construct new, improved solutions.
Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process. Well-known algorithms of the latter type are Estimation-of-Distribution Algorithms (EDAs), in which probabilistic models of promising solutions are built and subsequently sampled to generate new solutions.
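As a minimal illustration of the EDA idea, here is a univariate, UMDA-style sketch on the OneMax toy problem (all parameter settings are arbitrary illustrative choices):

```python
import numpy as np

def umda_onemax(n=30, pop=60, parents=20, gens=40, seed=0):
    """Univariate EDA (UMDA) on OneMax: estimate per-bit marginal
    probabilities from the selected parents, then sample the next
    population from that probabilistic model."""
    rng = np.random.default_rng(seed)
    p = np.full(n, 0.5)                              # probabilistic model
    for _ in range(gens):
        X = (rng.random((pop, n)) < p).astype(int)   # sample the model
        best = X[np.argsort(X.sum(1))[-parents:]]    # truncation selection
        p = best.mean(0).clip(1 / n, 1 - 1 / n)      # re-estimate, keep margins
    return p

model = umda_onemax()
```

Each generation replaces crossover and mutation entirely: the population is sampled from the per-bit marginal model, and the model is re-estimated from the selected individuals; on OneMax the marginals drift toward 1.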
In general, replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. This is an especially useful feature when considering optimization in a black-box setting. Successful applications include Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling.
This tutorial will provide an introduction and an overview of major research directions in this area. He has been involved in genetic algorithm research for many years. His current research interests are mainly focused on the design and application of model learning techniques to improve evolutionary search. Dirk has been a member of the Editorial Boards of the journals Evolutionary Computation, Evolutionary Intelligence, and IEEE Transactions on Evolutionary Computation, and a member of the program committees of the major international conferences on evolutionary computation.
Peter A.

Peter was formerly affiliated with the Department of Information and Computing Sciences at Utrecht University, where he also obtained both his MSc and PhD degrees in Computer Science, more specifically on the design and application of estimation-of-distribution algorithms (EDAs). He has co-authored over 90 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms.

In recent years, there has been a resurgence of interest in reinforcement learning (RL), particularly in the deep learning community. While much of the attention has been focused on using value-function learning approaches (e.g., Q-learning) or estimated policy-gradient approaches to train neural-network policies, little attention has been paid to Neuroevolution (NE) for policy search. The larger research community may have forgotten about previous successes of Neuroevolution. Some of the most challenging reinforcement learning problems are those where reward signals are sparse and noisy. For many of these problems, we only know the outcome at the end of the task, such as whether the agent wins or loses, whether the robot arm picks up the object or not, or whether the agent has survived.
Since NE only requires the final cumulative reward that an agent obtains at the end of its rollout in an environment, these are the types of problems where NE may have an advantage over traditional RL methods. In this tutorial, I show how Neuroevolution can be successfully applied to deep RL problems to help find a suitable set of model parameters for a neural network agent. Using popular modern software frameworks for RL (TensorFlow, OpenAI Gym, pybullet, roboschool), I will apply NE to continuous-control robotic tasks, and show that we can obtain very good results controlling bipedal robot walkers, a Kuka robot arm for grasping tasks, the Minitaur robot, and various existing baseline locomotion tasks common in the deep RL literature.
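The black-box, final-return-only setting can be illustrated with a small evolution-strategy sketch in the spirit of OpenAI-style ES. The toy "environment" below is invented for illustration and stands in for a gym task; only its scalar episode return is exposed to the optimizer:

```python
import numpy as np

def episode_return(params, steps=20):
    """Toy episodic task: a point agent starts at (2, 2); the linear
    policy u = -W @ x should drive it to the origin. The optimizer sees
    only the final cumulative reward, never per-step gradients."""
    W = params.reshape(2, 2)
    x = np.array([2.0, 2.0])
    for _ in range(steps):
        x = x + 0.1 * (-W @ x)          # apply the policy's control
    return -float(np.linalg.norm(x))    # reward: negative final distance

def evolve(fitness, dim=4, pop=50, sigma=0.1, lr=0.05, gens=100, seed=0):
    """Simple evolution strategy: estimate a search gradient from
    perturbed parameter samples and their episode returns."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0, 0.1, dim)
    for _ in range(gens):
        eps = rng.normal(0, 1, (pop, dim))
        returns = np.array([fitness(theta + sigma * e) for e in eps])
        advantage = (returns - returns.mean()) / (returns.std() + 1e-8)
        theta = theta + lr / (pop * sigma) * eps.T @ advantage
    return theta

theta = evolve(episode_return)
```

No per-step rewards or backpropagation through the environment are needed, which is exactly the setting where NE methods apply; a neural-network policy would simply replace the linear controller.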
I will show that NE can even obtain state-of-the-art results over deep RL methods, and highlight ways to use NE that can lead to more stable and robust policies compared to traditional RL methods. I will also describe how to incorporate NE techniques into existing RL research pipelines, taking advantage of distributed processing on cloud compute. I will also discuss how to combine techniques from deep learning, such as the use of deep generative models, with Neuroevolution to solve more challenging deep reinforcement learning problems that rely on high-dimensional video inputs for continuous robotics control, or for video game simulation tasks.
We will look at combining model-based reinforcement learning approaches with Neuroevolution to tackle these problems, using TensorFlow, OpenAI Gym, and pybullet environments. A case study will be presented in which researchers prepare to tackle both areas, and we will end with a group discussion with the audience about issues in cross-community collaboration. Prior to joining Google, he worked at Goldman Sachs as a Managing Director, where he co-ran the fixed-income trading business in Japan.
He obtained undergraduate and graduate degrees in Engineering Science and Applied Math from the University of Toronto.

Successful and efficient use of evolutionary algorithms (EAs) depends on the choice of the genotype, the problem representation (the mapping from genotype to phenotype), and the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question of whether a certain representation leads to better-performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration.
The reverse is also true: deciding between alternative operators is only meaningful for a given representation. Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance. Relevant concepts are the locality and redundancy of representations.
Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations have high locality if the application of variation operators results in new solutions similar to the original ones. Representations are redundant if the number of possible genotypes exceeds the number of phenotypes. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor some kinds of phenotypes.
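Both notions can be quantified on tiny examples. Below, an illustrative redundant encoding (each phenotype bit is the OR of two genotype bits, an invented example) is compared with a direct encoding; locality is measured as the mean phenotypic change caused by a one-bit genotype mutation:

```python
from itertools import product

def decode_direct(g):             # one genotype bit per phenotype bit
    return tuple(g)

def decode_or(g):                 # two genotype bits per phenotype bit
    return tuple(a | b for a, b in zip(g[0::2], g[1::2]))

# Redundancy bias: count genotypes mapping to each 1-bit phenotype.
reps = {0: 0, 1: 0}
for g in product((0, 1), repeat=2):
    reps[decode_or(g)[0]] += 1    # the OR encoding over-represents 1

def locality(decode, n_geno):
    """Mean phenotypic Hamming distance caused by a one-bit mutation,
    averaged over all genotypes and loci."""
    total, count = 0, 0
    for g in product((0, 1), repeat=n_geno):
        p = decode(g)
        for i in range(n_geno):
            m = list(g)
            m[i] ^= 1
            q = decode(tuple(m))
            total += sum(a != b for a, b in zip(p, q))
            count += 1
    return total / count
```

Here `reps` comes out as {0: 1, 1: 3}, a biased encoding favouring all-ones phenotypes, while the locality measure shows that a one-bit mutation changes the direct phenotype by exactly one bit but the OR-encoded phenotype only half the time.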
The tutorial gives a brief overview of existing guidelines for representation design, illustrates the different aspects of representations, gives a brief overview of models describing those aspects, and illustrates their relevance with practical examples. He is professor of Information Systems at the University of Mainz. He has published more than 90 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is the author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics".
His main research interests are the application of modern heuristics in planning and optimization systems. He has been the organizer of many workshops and tracks on heuristic optimization issues, chair of EvoWorkshops, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO.

Evolutionary algorithm theory has studied the time complexity of evolutionary algorithms for more than 20 years.
Different aspects of this rich and diverse research field were presented in three different advanced or specialized tutorials at last year's GECCO. This tutorial presents the foundations of this field. We introduce the most important notions and definitions used in the field and consider different evolutionary algorithms on a number of well-known and important example problems.
Through a careful and thorough introduction of important analytical tools and methods, including fitness-based partitions, typical events and runs, and drift analysis, by the end of the tutorial the attendees will be able to apply these techniques to derive relevant runtime results for non-trivial evolutionary algorithms. Moreover, the attendees will be fully prepared to follow the more advanced tutorials that cover more specialized aspects of the field, including the new advanced runtime analysis tutorial on realistic population-based EAs.
To assure the coverage of the topics required in the specialised tutorials, this introductory tutorial will be coordinated with the presenters of the more advanced ones. In addition to custom-tailored methods for the analysis of evolutionary algorithms we also introduce the relevant tools and notions from probability theory in an accessible form. This makes the tutorial appropriate for everyone with an interest in the theory of evolutionary algorithms without the need to have prior knowledge of probability theory and analysis of randomized algorithms.
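As a concrete instance of the fitness-based partition (fitness-level) method, consider the (1+1) EA with standard bit mutation on OneMax, with level A_i containing all search points with exactly i ones. From level i, flipping exactly one of the n - i zero-bits and nothing else yields an improvement, so the probability p_i of leaving level i satisfies

```latex
p_i \;\ge\; (n-i)\,\frac{1}{n}\Bigl(1-\frac{1}{n}\Bigr)^{n-1} \;\ge\; \frac{n-i}{en},
\qquad
\mathbb{E}[T] \;\le\; \sum_{i=0}^{n-1}\frac{1}{p_i}
\;\le\; \sum_{i=0}^{n-1}\frac{en}{n-i}
\;=\; en \sum_{k=1}^{n}\frac{1}{k}
\;=\; O(n \log n).
```

Summing the expected waiting times over the at most n fitness levels yields the classical O(n log n) bound on the expected runtime.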
He was a Lecturer in the School of Computer Science at the University of Nottingham before returning to Birmingham. Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular runtime analysis of population-based evolutionary algorithms. He was the coordinator of the successful 2M-euro EU-funded project SAGE, which brought together the theory of evolutionary computation and population genetics.
Pietro S. trained in Ingo Wegener's research group. His main research interest is the time complexity analysis of randomized search heuristics for combinatorial optimization problems.

This tutorial addresses GECCO attendees who do not regularly use theoretical methods in their research. For this audience, we give a smooth introduction to the theory of evolutionary computation (EC). Complementing other introductory theory tutorials, we do not discuss mathematical methods or particular results, but rather explain what the theory can tell practitioners. Benjamin Doerr is a full professor at the École Polytechnique (France).
He is also an adjunct professor at Saarland University (Germany). His research area is the theory of both problem-specific algorithms and randomized search heuristics such as evolutionary algorithms. Major contributions to the latter include runtime analyses for evolutionary algorithms and ant colony optimizers, as well as the further development of the drift analysis method, in particular multiplicative and adaptive drift. In the young area of black-box complexity, he proved several of the current best bounds. He chaired the Hot-off-the-Press track.
The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is nowadays considered the state-of-the-art continuous stochastic search algorithm, in particular for the optimization of non-separable, ill-conditioned, and rugged functions. The CMA-ES comprises different components that adapt the step-size and the covariance matrix separately. This tutorial will focus on CMA-ES and provide the audience with an overview of the different adaptation mechanisms used within CMA-ES and the scenarios where their relative impact is important.
We will in particular present the rank-one update, the rank-mu update, and active CMA for covariance matrix adaptation. We will address important design principles as well as questions related to parameter tuning that always accompany algorithm design. The input parameters, such as the initial mean, the initial step-size, and the population size, will be discussed in relation to the ruggedness of functions. Restart strategies that automate the input parameter tuning will be presented.
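To make the rank-mu idea concrete, here is a heavily simplified sketch (illustrative learning rates and weights; it omits evolution paths, the rank-one update, and step-size adaptation, all of which the real CMA-ES includes):

```python
import numpy as np

def rank_mu_es(f, dim=5, lam=20, mu=5, sigma=0.5, gens=60, seed=0):
    """Toy ES with a rank-mu style covariance update: the covariance
    matrix is re-estimated from the best mu of lam sampled steps."""
    rng = np.random.default_rng(seed)
    m, C = np.zeros(dim), np.eye(dim)
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                        # recombination weights
    for _ in range(gens):
        z = rng.multivariate_normal(np.zeros(dim), C, lam)
        x = m + sigma * z                               # sample N(m, sigma^2 C)
        order = np.argsort([f(xi) for xi in x])[:mu]    # select best (minimize)
        zsel = z[order]
        m = m + sigma * (w @ zsel)                      # weighted mean update
        C = 0.7 * C + 0.3 * sum(wi * np.outer(zi, zi)   # rank-mu update
                                for wi, zi in zip(w, zsel))
    return m

m = rank_mu_es(lambda x: float(np.sum((x - 1.0) ** 2)))
```

Because the covariance is learned from successful steps, the search distribution elongates along productive directions and shrinks as the optimum is approached, which is the intuition behind the rank-mu component of the full algorithm.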
Youhei Akimoto is an associate professor at the University of Tsukuba, Japan. He received his diploma in computer science and his master's degree and PhD in computational intelligence and systems science from Tokyo Institute of Technology, Japan. He was also a research fellow of the Japan Society for the Promotion of Science for one year, and an assistant professor at Shinshu University before starting his current position. His research interests include the design principles and theoretical analysis of stochastic search heuristics in continuous domains, in particular the Covariance Matrix Adaptation Evolution Strategy.
Evolutionary multi-objective optimization (EMO) has been a major research topic in the field of evolutionary computation for many years. It is generally accepted that combining evolutionary algorithms with traditional optimization methods is a promising route to next-generation multi-objective optimization solvers. As the name suggests, the basic idea of the decomposition-based technique is to transform the original complex problem into simplified subproblems so as to facilitate the optimization.
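The decomposition idea can be illustrated with the Tchebycheff scalarization used by MOEA/D-style algorithms (the bi-objective toy problem below is invented for illustration):

```python
import numpy as np

def tchebycheff(fvals, weight, z_star):
    """Scalarize a multi-objective point: one subproblem per weight vector."""
    return float(np.max(weight * np.abs(fvals - z_star)))

# Toy bi-objective problem: f1(x) = x^2, f2(x) = (x - 2)^2 for x in [0, 2].
xs = np.linspace(0.0, 2.0, 201)
F = np.stack([xs ** 2, (xs - 2.0) ** 2], axis=1)
z_star = F.min(axis=0)          # ideal point, here (0, 0)

def best_point(weight):
    """Minimize one scalarized subproblem by grid search (a real EMO
    algorithm would evolve a population over all subproblems at once)."""
    return float(xs[np.argmin([tchebycheff(f, weight, z_star) for f in F])])
```

Sweeping the weight vector traces out the Pareto front: equal weights (0.5, 0.5) yield the balanced solution near x = 1.0, while weights (0.9, 0.1) and (0.1, 0.9) yield solutions near x = 0.5 and x = 1.5 respectively.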
Decomposition methods have been well used and studied in traditional multi-objective optimization, and decomposition-based EMO has become a commonly used evolutionary algorithmic framework in recent years. The tutorial is self-contained: the foundations of multi-objective optimization and the basic working principles of EMO algorithms will be included for those without experience in EMO. Open questions will be posed and highlighted for discussion in the latter part of this tutorial. Afterwards, he spent a year as a postdoctoral research associate at Michigan State University.
Then, he moved to the UK and took the post of research fellow at University of Birmingham. His current research interests include the evolutionary multi-objective optimization, automatic problem solving, machine learning and applications in water engineering and software engineering. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications.
He is on the Thomson Reuters list of highly cited researchers in computer science. He is an IEEE Fellow.

One of the most challenging problems in solving optimization problems with evolutionary algorithms (EAs) is the choice of the parameters, which allow the behavior of the algorithm to be adjusted to the problem at hand. Suitable parameter values need to be found, for example, for the population size, the mutation strength, the crossover rate, the selective pressure, and so on.
The choice of these parameters can have a crucial impact on the performance of the algorithm and thus needs to be made with care.

The descriptive analysis and statistical modeling suggest specific features of the observed spike train data, namely long refractory periods for cells 1 and 2, and multiple time courses of spiking for cells 3 and 4. To address the underlying mechanisms supporting these activities, we develop a biophysical model of Hodgkin-Huxley type and search for a common intrinsic current that can support both types of observed activity.
The locations of converged parameter estimates for each cell are shown in Figure 3. We note that for all the cells, the parameter estimates converge to a small region of parameter space. Differences in the sizes of these regions might be attributed to differences in the observation times of the cells (60 s for cells 1–3, 30 s for cell 4) or to differences in the information content of the spiking activity about specific parameters or dynamic variables.
(A–C): The blue, black, red, and green dots indicate converged particles for cells 1–4, respectively. The three coordinate spaces for each data set span the initial parameter value space in the estimation procedure. From the converged parameter estimates, the biophysical properties of the mystery current can be ascertained by comparing the parameter estimates across cells; these estimates are listed in Table 1. For each cell and parameter to be estimated, the first number indicates the mean of the converged particle values and the second number, in parentheses, indicates the standard deviation of the particle values.
These results suggest that the mystery current exhibits slow dynamics. However, we do not find overlap for these parameter estimates. This may suggest that the biophysical model is insufficient to capture all features of the observed spike times. This is of course reasonable: the biophysical model consists only of a single compartment with three intrinsic currents. We might improve upon these models by adding multiple compartments, or additional known currents or dynamics.
However, since we know that these simplified or more advanced models are incomplete, we do not interpret the parameter estimates as the actual biophysical values corresponding to these currents. Instead, they provide insight into the types and features of current necessary to produce the observed spiking within the selected class of biophysical models. Our results suggest that the mystery current for the proposed model would need to be a slow, depolarization activated, hyperpolarizing current in all four cells.
This is consistent with the known slow current in these IB cells, a muscarine-receptor-suppressed potassium current, or M-current. We note that the initial assumptions regarding the mystery current are weak, and that other potential currents with different dynamics are attainable. In fact, many other types of currents (fast and slow, depolarization activated and inactivated, hyperpolarizing and depolarizing) are represented in the initial particle values. However, the estimation procedure eliminates these nonconforming particles and reveals in all four cells the characteristics of an M-current.
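The qualitative effect of such a current can be illustrated with a toy model (an adaptive leaky integrate-and-fire neuron, not the paper's conductance-based Hodgkin-Huxley model; all constants are invented): a slow, spike-triggered hyperpolarizing current accumulates during depolarization-driven spiking and lengthens the inter-spike intervals.

```python
def lif_with_adaptation(g_adapt, T=2.0, dt=1e-4, I=1.5, tau=0.02,
                        tau_a=0.5, v_th=1.0, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron with a slow, spike-triggered
    hyperpolarizing adaptation current (an M-current-like mechanism in
    spirit only). Returns the list of spike times in seconds."""
    v, a, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        v += dt * ((-v + I - a) / tau)     # membrane update with drive I
        a += dt * (-a / tau_a)             # slow decay of adaptation
        if v >= v_th:                      # threshold crossing: spike
            spikes.append(step * dt)
            v = v_reset
            a += g_adapt                   # hyperpolarizing increment
    return spikes
```

With `g_adapt = 0`, the neuron fires rapidly and regularly; with a positive increment, the slowly decaying adaptation variable suppresses firing, in the same qualitative way an M-current slows spiking.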
This result suggests that the same type of current species could be responsible for the observed activity in all four cells, even though these cells exhibit very different spiking characteristics (e.g., regular spiking versus bursting). For the cells with similar spiking activity (cells 1 and 2, or cells 3 and 4), the particle clouds of these two parameters are similar and even overlap. However, for the differently spiking cells, such as cells 1 and 3 or cells 2 and 4, the particle clouds of these two parameter estimates remain separate.
Our proposed method is not only able to estimate the model parameters but also identifies the characteristics of a mystery current whose specific biophysical mechanisms support the observed activity. To evaluate the estimation results, we simulate spike times using the converged parameter estimates in the biophysical model Eq. Using the parameter estimates for cells 1 and 2, the simulated spike trains produce tonic spiking activity consistent with the observed spike trains shown in Figure 4.
The average ISI histogram over all the particles (Figure 4) is consistent with the observed ISI histogram. In addition, the histogram of the observed ISIs falls within the confidence intervals of the simulated ISIs for most times. Finally, the average spectrum of the simulated spike trains over all particles (Figure 4) agrees with the observed spectrum: at most frequencies, the spectrum of the observed spike trains lies within the confidence intervals of the simulated spectrum.
The bottom row (red) represents the observed spike times of cells 1–4 for 1 s of data. The other three rows (black) represent the simulated spike trains from 3 converged particles of cells 1–4. The red line is the average histogram over all the converged particles of cells 1–4. The cyan band indicates the confidence intervals of the average histogram. The red line is the average spectrum estimate over all converged particles of cells 1–4.
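The two descriptive statistics used throughout this comparison can be computed directly from spike times. A minimal sketch follows (the bin widths and the simple binned periodogram are illustrative choices; the original analysis may use more refined spectral estimators):

```python
import numpy as np

def isi_histogram(spike_times, bin_width=0.005, t_max=0.2):
    """Inter-spike-interval histogram from a list of spike times (s)."""
    isis = np.diff(np.sort(spike_times))
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(isis, bins=edges)
    return counts, edges

def spike_spectrum(spike_times, T, dt=0.001):
    """Point-process spectrum estimated as the periodogram of the
    binned (0/1) spike train, with the mean rate removed."""
    bins = np.zeros(int(np.ceil(T / dt)))
    idx = np.round(np.asarray(spike_times) / dt).astype(int)
    bins[idx[idx < len(bins)]] = 1.0
    x = bins - bins.mean()                 # remove DC component
    freqs = np.fft.rfftfreq(len(x), dt)
    power = np.abs(np.fft.rfft(x)) ** 2 * dt / len(x)
    return freqs, power
```

For a perfectly regular 10 Hz train, all inter-spike intervals fall in the 0.1 s bin and the spectrum concentrates its power at 10 Hz and its harmonics; bursty trains instead show multimodal ISI histograms and broader spectral peaks.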
The cyan band indicates the confidence intervals of the average spectrum estimate. Using the parameter estimates for cells 3 and 4, the simulated spike trains (Figure 4) broadly reproduce the observed bursting activity. The simulated ISI histograms (Figure 4) likewise capture the overall structure of the observed histograms.
Finally, the simulated spectra and observed spectra both possess similar low frequency peaks near 10 Hz, but dissimilar broad peaks at higher frequencies. The real spectra possess broad peaks centered near Hz, but the simulated spectra possess broad peaks centered near Hz, which is consistent with the fact that the simulated ISI histograms show a faster spiking mode than the observed ISI histograms in Figure 4. The differences in the fast spiking activity between the model and observed spike time data likely reflect limitations in the biophysical model used.
In order to capture the exact features of the fast activity more accurately, we may require a model with additional intrinsic currents or a more complicated structure (e.g., multiple compartments). On the whole, only small discrepancies distinguish the descriptive statistics of the real and simulated data. Yet, the estimated biophysical model, consisting of only a single compartment and 3 dynamic currents, still generates spike trains similar to the observed data in terms of the distribution of the inter-spike intervals and the point process spectrum.
Connecting real-world data with sophisticated computational models is a fundamental issue in modern science. Here, we have extended a method we previously presented to link observed neural spike time data with a conductance-based computational model. An initial descriptive and statistical analysis of the spike time data observed in four IB cells revealed two classes of behavior: regular spiking activity and bursting.
According to the parameter estimates, the two classes of spiking activity derive from the same type of intrinsic current: a slow, depolarization-activated, hyperpolarizing current, consistent with the known M-current in the IB cell. Different biophysical features, namely the drive current and the maximum conductance of the mystery current, explain the two different classes of behavior. By combining the observed spike time data with the computational model, the SMC method suggests the specific biophysical mechanisms producing the observed activity and identifies the regions of the 6-dimensional parameter space capable of reproducing the observed data.
We note that these two classes of behavior may represent states within a continuum of dynamics. In this case, with additional data, we expect the particle filtering approach to reveal the biophysical model parameters supporting such a continuum. We also note that the simulated ISI distributions estimated from the biophysical model possess some inconsistencies with the real ISI distributions (Fig.). For the two bursting cells, the simulated spiking is faster than the observed spiking. In general, such inconsistencies may result from model misspecification, which may occur in multiple ways.
For example, the model may lack an intrinsic current or additional compartments whose inclusion would better fit the data. Biologically, the reduced high frequency firing observed in vitro may result from failures in back-propagation of axonal spikes into the large-capacitance somatic compartment; a more accurate model could include a multi-compartment geometry.
Alternatively, direct recordings near the axon may alleviate this issue, although such recordings are experimentally difficult. In general, all computational models are misspecified, and can always be modified to incorporate further biological realism. However, even the single compartment model implemented here provides biological insight. This model successfully captures the essential features of the observed neuronal data, without representing a true generative model of the data.
Given only the spike time data, the proposed model suggests the type of slow current known to play an important role in these cells. Therefore, the value of this model is the successful identification of an unknown ionic current species vital to the cell dynamics, although the model does not capture all of the biophysics of the cell or the changes to the biological system inherent in the experimental recording process.
The proposed approach to parameter estimation, although successful in this case, is limited in two important ways. First, the approach requires some knowledge of the underlying equations that govern the neuronal dynamics. In this case, we knew that an intrinsic current paced the observed activity, and therefore developed a model to exploit this knowledge.
In general, model development will be more successful when supported by knowledge of the features to be studied. A model inconsistent with the neuronal system under investigation will lead to inaccurate biophysical conclusions, even if the parameter estimation converges. However, because the model is biophysical, the resulting estimates are testable in experiments.
Through interactions between this parameter estimation procedure and experiments, an inaccurate model can be refuted experimentally and a more accurate model proposed. In this work, the parameter estimation results were compared to the known biophysical mechanism pacing the observed activity (an M-current) and found to be consistent. Second, the model was limited to a single compartment cell, and a limited number of the parameters were estimated.
As computational resources continue to improve, estimation will become more feasible for larger, more biophysically realistic models of single cells, and networks of interacting cells. As computational resources improve, we propose that a closed-loop analysis will become possible, in which the SMC method combined with computational models can be used to propose the existence of possible candidate currents in real time from observed spike train data.
The proposed candidate currents can then be tested in pharmacological experiments. In this way, the SMC method identifies candidate biophysical mechanisms that are experimentally testable, potentially reducing dramatically the number of experiments required to identify unknown mechanisms. Such an approach will become increasingly vital as high density recordings and observations from many simultaneous neurons become more common.
We note that the SMC method easily extends to include network models of interacting neurons. The approach in this paper outlines a general strategy for a practical data-analysis paradigm for spike train data. Both statistical modeling and biophysical modeling characterize neuronal spike train data, but from different points of view, and these two approaches are typically applied independently.
The proposed SMC method goes beyond standard analysis and modeling approaches by combining statistical and biophysical methods: the statistical analysis guides the biophysical modeling and the biophysical modeling lends mechanistic features to the statistical analysis. The resulting technique connects spike train data directly to a biophysical model and provides a principled link between the fields of experimental, statistical, and computational neuroscience. Our goal is, given a list of the spike times produced by a neuron, to identify biophysical mechanisms that could support the observed spiking activity.
To do so, we use the observed spike times to constrain the parameters in a biophysical model of neural activity. Briefly, this technique utilizes a sequential Monte Carlo (SMC) method, which incorporates biophysical modeling and point process theory into a state space framework. As we will show, this analysis links the observed spiking activity directly to specific biophysical mechanisms that are not immediately observable.
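The state-space machinery behind the SMC method can be illustrated with a minimal bootstrap particle filter. The sketch below is not the conductance-based model used here; it assumes a toy latent AR(1) state driving a Bernoulli (point-process) observation, with hypothetical parameters `a`, `sigma`, `mu`, and `beta`:

```python
import numpy as np

def bootstrap_filter(spikes, n_particles=1000, a=0.95, sigma=0.2,
                     mu=-2.0, beta=3.0, dt=1.0, seed=0):
    """Bootstrap particle filter for a toy spiking state-space model.

    Latent state (hypothetical):  x_k = a * x_{k-1} + sigma * noise_k
    Observation (point process):  P(spike in bin k) ~ lambda_k * dt,
    with conditional intensity    lambda_k = exp(mu + beta * x_k).
    Returns the filtered posterior mean of x at each bin.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    means = np.empty(len(spikes))
    for k, dn in enumerate(spikes):
        # 1) propagate each particle through the state equation
        particles = a * particles + sigma * rng.normal(size=n_particles)
        # 2) weight by the point-process likelihood of the observed bin
        p_spike = np.clip(np.exp(mu + beta * particles) * dt, 1e-12, 1 - 1e-12)
        w = p_spike if dn else 1.0 - p_spike
        w = w / w.sum()
        # 3) resample to concentrate particles on probable states
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
        means[k] = particles.mean()
    return means
```

In the setting of this paper, the latent state would instead collect the membrane potential and gating variables of the conductance-based model, and the static parameters to be estimated can be appended to the state vector.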
To apply this SMC method to the observed neural spike times of interest here, we must construct a biophysical model capable of reproducing the observed spike train dynamics. To that end, we first perform visual data analysis and statistical modeling of the spike train data to characterize the spiking activity. The inferences arising from these analyses inform the biophysical model to which we apply the SMC method to estimate model parameters and dynamic variables, and draw inferences about the biophysical mechanisms generating the observed spike times.
Neocortical slices containing auditory areas and secondary somatosensory cortical areas were maintained at 34 °C at the interface between warm, wetted gas and artificial cerebrospinal fluid (aCSF) containing 3 mM KCl, 1. Extracellular recordings from secondary somatosensory cortex were obtained using glass micropipettes containing the above aCSF (resistance ).
Intracellular recordings were taken with sharp microelectrodes filled with potassium acetate (resistance 30–). Signals were analog-filtered at 2 kHz and digitized at 10 kHz. All neuronal recordings illustrated were taken from layer V.
Neurons were shown to be intrinsically bursting by prior step-wise injection of depolarizing current through the recording electrode. Experimental conditions included the addition of nM kainate to the bathing medium to generate a stable, persistent beta2 (20–30 Hz) rhythm visible in the local extracellular recordings. For additional details about the data collection, please see .
Descriptive statistics provide a powerful and simple technique to characterize spike time data. Here we apply two visualizations of the spike time data: the inter-spike interval ISI histogram and the power spectrum. To compute the ISI histogram, we choose a bin size of 6 ms. Next, we compute the power spectrum of the point process data to characterize the rhythmic features of the spiking.
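The first of these visualizations, the ISI histogram with a 6 ms bin size, can be sketched as follows (the 300 ms cap on displayed intervals is an illustrative choice, not taken from the text):

```python
import numpy as np

def isi_histogram(spike_times_s, bin_ms=6.0, max_ms=300.0):
    """ISI histogram with a 6 ms bin size, as chosen in the text.

    spike_times_s: spike times in seconds; max_ms (illustrative) caps
    the largest interval included in the histogram.
    """
    # inter-spike intervals, converted to milliseconds
    isis_ms = np.diff(np.sort(np.asarray(spike_times_s))) * 1000.0
    edges = np.arange(0.0, max_ms + bin_ms, bin_ms)
    counts, _ = np.histogram(isis_ms, bins=edges)
    return counts, edges
```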
We use the multitaper framework and choose the time-frequency product to preserve a frequency resolution near 1 Hz. Given the time-frequency product, we make the standard choice for the number of tapers (one fewer than twice the time-frequency product). We compute confidence bounds using a jackknife method. As a second method to characterize the spike times, we consider a history-dependent statistical model of the data.
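A minimal sketch of a multitaper spectrum estimate for a binned spike train is shown below. The 1 kHz binning rate and time-bandwidth product of 4 (hence 2 × 4 − 1 = 7 tapers, the standard choice) are illustrative values, not the text's exact settings, and the jackknife confidence bounds are omitted:

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_pp_spectrum(spike_times_s, t_total, fs=1000.0, nw=4.0):
    """Multitaper power spectrum of a point process (binned spike train).

    nw is the time-bandwidth product; the number of tapers is the
    standard choice K = 2 * nw - 1.
    """
    n = int(round(t_total * fs))
    counts = np.histogram(spike_times_s, bins=n,
                          range=(0.0, t_total))[0].astype(float)
    counts -= counts.mean()           # remove the mean rate (DC) before tapering
    k = int(2 * nw - 1)
    tapers = dpss(n, nw, Kmax=k)      # shape (k, n), unit-energy Slepian tapers
    # taper the binned counts, FFT each copy, and average power across tapers
    spectra = np.abs(np.fft.rfft(tapers * counts, axis=1)) ** 2
    psd = spectra.mean(axis=0) / fs
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, psd
```

Averaging over tapers trades a controlled amount of frequency resolution (the bandwidth nw / t_total) for reduced variance, which is what makes the jackknife over tapers a natural way to form confidence bounds.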
To do so, we utilize a point process model by specifying a conditional intensity of spiking as a function of past spiking activity. We first introduce notation for a discretized point process and then present a specific conditional intensity model that incorporates only the spiking history. We choose a large integer N and partition the observation interval (0, T] into N subintervals, each of length Δ = T / N.
The integer N is chosen to be sufficiently large to guarantee that there is at most one spike per subinterval. Let ΔN_k be the number of spikes counted in the k-th subinterval. A discretized point process can be completely characterized by its conditional intensity function λ(t | H_t), which defines the instantaneous probability of spiking at time t given the past spike history H_t and other relevant covariates.
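The discretization step can be sketched as below: the probability of a spike in bin k is then approximately λ(t_k | H_k) Δ, and the counts form a binary sequence. The function and its argument names are illustrative:

```python
import numpy as np

def discretize_spikes(spike_times, t_total, n_bins):
    """Partition (0, t_total] into n_bins subintervals of length
    delta = t_total / n_bins and count spikes in each.

    n_bins must be large enough that every subinterval holds at most one
    spike, so the returned counts are the binary increments dN_k.
    """
    counts = np.histogram(spike_times, bins=n_bins,
                          range=(0.0, t_total))[0]
    if counts.max() > 1:
        raise ValueError("n_bins too small: a subinterval holds >1 spike")
    return counts, t_total / n_bins
```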
Here, we construct a history-dependent conditional intensity model using cardinal spline functions of the past spike data.
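One common construction of such a basis is a Catmull-Rom (cardinal) spline with uniform knots and tension 0.5; the sketch below builds the matrix that maps control-point heights at the knots to spline values at each history lag. The exact knot placement used for the data here is not reproduced, so treat the knots as illustrative:

```python
import numpy as np

def cardinal_spline_basis(lags, knots, tension=0.5):
    """Design-matrix basis for a cardinal (Catmull-Rom) spline.

    Row j gives the weights mapping control-point heights at the
    (uniformly spaced) knots to the spline value at lags[j]; columns of
    the matrix multiplied by a GLM's coefficients yield the smooth
    history-dependence curve.
    """
    s = tension
    M = np.array([[-s, 2 - s, s - 2, s],
                  [2 * s, s - 3, 3 - 2 * s, -s],
                  [-s, 0.0, s, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
    knots = np.asarray(knots, dtype=float)
    h = knots[1] - knots[0]          # uniform knot spacing assumed
    n = len(knots)
    B = np.zeros((len(lags), n))
    for j, x in enumerate(lags):
        i = min(int((x - knots[0]) // h), n - 2)   # segment containing x
        u = (x - knots[i]) / h                     # position within segment
        w = np.array([u ** 3, u ** 2, u, 1.0]) @ M # weights on p_{i-1..i+2}
        for m, c in zip(range(i - 1, i + 3), w):
            B[j, min(max(m, 0), n - 1)] += c       # repeat endpoint controls
    return B
```

Because the spline interpolates the control points, each coefficient in the fitted model retains a direct interpretation as the modulation of the conditional intensity at its knot's lag.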