<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/">
  <channel>
    <title>Mathematical Neuroscience and Applications - Latest Publications</title>
    <description>Latest articles published in Mathematical Neuroscience and Applications</description>
    <image>
      <url>https://mna.episciences.org/img/episciences_sign_50x50.png</url>
      <title>episciences.org</title>
      <link>https://mna.episciences.org</link>
    </image>
    <pubDate>Sun, 15 Mar 2026 05:17:16 +0000</pubDate>
    <generator>episciences.org</generator>
    <link>https://mna.episciences.org</link>
    <author>Mathematical Neuroscience and Applications</author>
    <dc:creator>Mathematical Neuroscience and Applications</dc:creator>
    <atom:link rel="self" type="application/rss+xml" href="https://mna.episciences.org/rss/papers"/>
    <atom:link rel="hub" href="http://pubsubhubbub.appspot.com/"/>
    <item>
      <title>Ordinal Characterization of Similarity Judgments</title>
      <description><![CDATA[Characterizing judgments of similarity within a perceptual or semantic domain, and making inferences about the underlying structure of this domain from these judgments, has an increasingly important role in cognitive and systems neuroscience. We present a new framework for this purpose that makes limited assumptions about how perceptual distances are converted into similarity judgments. The approach starts from a dataset of empirical judgments of relative similarities: the fraction of times that a subject chooses one of two comparison stimuli to be more similar to a reference stimulus. These empirical judgments provide Bayesian estimates of underlying choice probabilities. From these estimates, we derive indices that characterize the set of judgments in three ways: compatibility with a symmetric dis-similarity, compatibility with an ultrametric space, and compatibility with an additive tree. Each of the indices is derived from rank-order relationships among the choice probabilities that, as we show, are necessary and sufficient for local consistency with the three respective characteristics. We illustrate this approach with simulations and example psychophysical datasets of dis-similarity judgments in several visual domains and provide code that implements the analyses at https://github.com/jvlab/simrank.]]></description>
      <pubDate>Thu, 07 Aug 2025 07:43:10 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.12457</link>
      <guid>https://doi.org/10.46298/mna.12457</guid>
      <author>Victor, Jonathan D.</author>
      <author>Aguilar, Guillermo</author>
      <author>Waraich, Suniyya A.</author>
      <dc:creator>Victor, Jonathan D.</dc:creator>
      <dc:creator>Aguilar, Guillermo</dc:creator>
      <dc:creator>Waraich, Suniyya A.</dc:creator>
      <content:encoded><![CDATA[Characterizing judgments of similarity within a perceptual or semantic domain, and making inferences about the underlying structure of this domain from these judgments, has an increasingly important role in cognitive and systems neuroscience. We present a new framework for this purpose that makes limited assumptions about how perceptual distances are converted into similarity judgments. The approach starts from a dataset of empirical judgments of relative similarities: the fraction of times that a subject chooses one of two comparison stimuli to be more similar to a reference stimulus. These empirical judgments provide Bayesian estimates of underlying choice probabilities. From these estimates, we derive indices that characterize the set of judgments in three ways: compatibility with a symmetric dis-similarity, compatibility with an ultrametric space, and compatibility with an additive tree. Each of the indices is derived from rank-order relationships among the choice probabilities that, as we show, are necessary and sufficient for local consistency with the three respective characteristics. We illustrate this approach with simulations and example psychophysical datasets of dis-similarity judgments in several visual domains and provide code that implements the analyses at https://github.com/jvlab/simrank.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A Quasi-Stationary Approach to Metastability in a System of Spiking Neurons with Synaptic Plasticity</title>
      <description><![CDATA[After reviewing the behavioral studies of working memory and of the cellular substrate of the latter, we argue that metastable states constitute candidates for the type of transient information storage required by working memory. We then present a simple neural network model made of stochastic units whose synapses exhibit short-term facilitation. The Markov process dynamics of this model was specifically designed to be analytically tractable, simple to simulate numerically and to exhibit a quasi-stationary distribution (QSD). Since the state space is finite this QSD is also a Yaglom limit, which allows us to bridge the gap between quasi-stationarity and metastability by considering the relative orders of magnitude of the relaxation and absorption times. We present first analytical results: characterization of the absorbing region of the Markov process, irreducibility outside this absorbing region and consequently existence and uniqueness of a QSD. We then apply Perron-Frobenius spectral analysis to obtain any specific QSD, and design an approximate method for the first moments of this QSD when the exact method is intractable. Finally we use these methods to study the relaxation time toward the QSD and establish numerically the memorylessness of the time of extinction.]]></description>
      <pubDate>Thu, 02 Jan 2025 17:01:25 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.7668</link>
      <guid>https://doi.org/10.46298/mna.7668</guid>
      <author>André, Morgan</author>
      <author>Pouzat, Christophe</author>
      <dc:creator>André, Morgan</dc:creator>
      <dc:creator>Pouzat, Christophe</dc:creator>
      <content:encoded><![CDATA[After reviewing the behavioral studies of working memory and of the cellular substrate of the latter, we argue that metastable states constitute candidates for the type of transient information storage required by working memory. We then present a simple neural network model made of stochastic units whose synapses exhibit short-term facilitation. The Markov process dynamics of this model was specifically designed to be analytically tractable, simple to simulate numerically and to exhibit a quasi-stationary distribution (QSD). Since the state space is finite this QSD is also a Yaglom limit, which allows us to bridge the gap between quasi-stationarity and metastability by considering the relative orders of magnitude of the relaxation and absorption times. We present first analytical results: characterization of the absorbing region of the Markov process, irreducibility outside this absorbing region and consequently existence and uniqueness of a QSD. We then apply Perron-Frobenius spectral analysis to obtain any specific QSD, and design an approximate method for the first moments of this QSD when the exact method is intractable. Finally we use these methods to study the relaxation time toward the QSD and establish numerically the memorylessness of the time of extinction.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>A mean-field model of Integrate-and-Fire neurons: non-linear stability of the stationary solutions</title>
      <description><![CDATA[We investigate a stochastic network composed of Integrate-and-Fire spiking neurons, focusing on its mean-field asymptotics. We consider an invariant probability measure of the McKean-Vlasov equation and establish an explicit sufficient condition to ensure the local stability of this invariant distribution. Furthermore, we prove a conjecture proposed initially by J. Touboul and P. Robert regarding the bistable nature of a specific instance of this neuronal model.]]></description>
      <pubDate>Fri, 27 Sep 2024 12:15:32 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.12583</link>
      <guid>https://doi.org/10.46298/mna.12583</guid>
      <author>Cormier, Quentin</author>
      <dc:creator>Cormier, Quentin</dc:creator>
      <content:encoded><![CDATA[We investigate a stochastic network composed of Integrate-and-Fire spiking neurons, focusing on its mean-field asymptotics. We consider an invariant probability measure of the McKean-Vlasov equation and establish an explicit sufficient condition to ensure the local stability of this invariant distribution. Furthermore, we prove a conjecture proposed initially by J. Touboul and P. Robert regarding the bistable nature of a specific instance of this neuronal model.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Neural Coding as a Statistical Testing Problem</title>
      <description><![CDATA[We take the testing perspective to understand what the minimal discrimination time between two stimuli is for different types of rate coding neurons. Our main goal is to describe the testing abilities of two different encoding systems: place cells and grid cells. In particular, we show, through the notion of adaptation, that a fixed place cell system can have a minimum discrimination time that decreases when the stimuli are further away. This could be a considerable advantage for the place cell system that could complement the grid cell system, which is able to discriminate stimuli that are much closer than place cells.]]></description>
      <pubDate>Fri, 29 Dec 2023 10:36:36 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.10002</link>
      <guid>https://doi.org/10.46298/mna.10002</guid>
      <author>Ost, Guilherme</author>
      <author>Reynaud-Bouret, Patricia</author>
      <dc:creator>Ost, Guilherme</dc:creator>
      <dc:creator>Reynaud-Bouret, Patricia</dc:creator>
      <content:encoded><![CDATA[We take the testing perspective to understand what the minimal discrimination time between two stimuli is for different types of rate coding neurons. Our main goal is to describe the testing abilities of two different encoding systems: place cells and grid cells. In particular, we show, through the notion of adaptation, that a fixed place cell system can have a minimum discrimination time that decreases when the stimuli are further away. This could be a considerable advantage for the place cell system that could complement the grid cell system, which is able to discriminate stimuli that are much closer than place cells.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Propagation of chaos in mean field networks of FitzHugh-Nagumo neurons</title>
      <description><![CDATA[In this article, we are interested in the behavior of a fully connected network of $N$ neurons, where $N$ tends to infinity. We assume that the neurons follow the stochastic FitzHugh-Nagumo model, whose specificity is the non-linearity with a cubic term. We prove a result of uniform-in-time propagation of chaos for this model in a mean-field framework. We also exhibit explicit bounds. We use a coupling method initially suggested by A. Eberle (arXiv:1305.1233), and recently extended in (arXiv:1805.11387), known as the reflection coupling. We simultaneously construct a solution of the $N$-particle system and $N$ independent copies of the non-linear McKean-Vlasov limit in such a way that, considering an appropriate semi-metric that takes into account the various possible behaviors of the processes, the two solutions tend to get closer together as $N$ increases, uniformly in time. The reflection coupling allows us to deal with the non-convexity of the underlying potential in the dynamics of the quantities defining our network, and show independence at the limit for the system in mean field interaction with sufficiently small Lipschitz continuous interactions.]]></description>
      <pubDate>Mon, 19 Jun 2023 08:04:38 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.9748</link>
      <guid>https://doi.org/10.46298/mna.9748</guid>
      <author>Colombani, Laetitia</author>
      <author>Bris, Pierre Le</author>
      <dc:creator>Colombani, Laetitia</dc:creator>
      <dc:creator>Bris, Pierre Le</dc:creator>
      <content:encoded><![CDATA[In this article, we are interested in the behavior of a fully connected network of $N$ neurons, where $N$ tends to infinity. We assume that the neurons follow the stochastic FitzHugh-Nagumo model, whose specificity is the non-linearity with a cubic term. We prove a result of uniform-in-time propagation of chaos for this model in a mean-field framework. We also exhibit explicit bounds. We use a coupling method initially suggested by A. Eberle (arXiv:1305.1233), and recently extended in (arXiv:1805.11387), known as the reflection coupling. We simultaneously construct a solution of the $N$-particle system and $N$ independent copies of the non-linear McKean-Vlasov limit in such a way that, considering an appropriate semi-metric that takes into account the various possible behaviors of the processes, the two solutions tend to get closer together as $N$ increases, uniformly in time. The reflection coupling allows us to deal with the non-convexity of the underlying potential in the dynamics of the quantities defining our network, and show independence at the limit for the system in mean field interaction with sufficiently small Lipschitz continuous interactions.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Gabor frames from contact geometry in models of the primary visual cortex</title>
      <description><![CDATA[We analyze the interplay between contact geometry and Gabor filters signal analysis in geometric models of the primary visual cortex. We show in particular that a specific framed lattice and an associated Gabor system is determined by the Legendrian circle bundle structure of the $3$-manifold of contact elements on a surface (which models the V1-cortex), together with the presence of an almost-complex structure on the tangent bundle of the surface (which models the retinal surface). We identify a scaling of the lattice, also dictated by the manifold geometry, that ensures the frame condition is satisfied. We then consider a $5$-dimensional model where receptor profiles also involve a dependence on frequency and scale variables, in addition to the dependence on position and orientation. In this case we show that a proposed profile window function does not give rise to frames (even in a distributional sense), while a natural modification of the same generates Gabor frames with respect to the appropriate lattice determined by the contact geometry.]]></description>
      <pubDate>Tue, 06 Jun 2023 13:56:08 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.9766</link>
      <guid>https://doi.org/10.46298/mna.9766</guid>
      <author>Liontou, Vasiliki</author>
      <author>Marcolli, Matilde</author>
      <dc:creator>Liontou, Vasiliki</dc:creator>
      <dc:creator>Marcolli, Matilde</dc:creator>
      <content:encoded><![CDATA[We analyze the interplay between contact geometry and Gabor filters signal analysis in geometric models of the primary visual cortex. We show in particular that a specific framed lattice and an associated Gabor system is determined by the Legendrian circle bundle structure of the $3$-manifold of contact elements on a surface (which models the V1-cortex), together with the presence of an almost-complex structure on the tangent bundle of the surface (which models the retinal surface). We identify a scaling of the lattice, also dictated by the manifold geometry, that ensures the frame condition is satisfied. We then consider a $5$-dimensional model where receptor profiles also involve a dependence on frequency and scale variables, in addition to the dependence on position and orientation. In this case we show that a proposed profile window function does not give rise to frames (even in a distributional sense), while a natural modification of the same generates Gabor frames with respect to the appropriate lattice determined by the contact geometry.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>On the long time behaviour of single stochastic Hodgkin-Huxley neurons with constant signal, and a construction of circuits of interacting neurons showing self-organized rhythmic oscillations</title>
      <description><![CDATA[The stochastic Hodgkin-Huxley neurons considered in this paper replace the time-constant deterministic input $a dt$ of the classical deterministic model by increments $\vartheta dt + dX_t$ of a stochastic process: $X$ is Ornstein-Uhlenbeck with volatility $\sigma>0$ and backdriving force $\tau>0$, and we call $\vartheta>0$ the signal. We have ergodicity and strong laws of large numbers for various functionals of the process, and characterize 'quiet behaviour' and 'regular spiking' as events whose probability depends on the parameters $(\tau,\sigma)$ and on the signal $\vartheta$. The notions of quiet behaviour and regular spiking allow for a construction of circuits of interacting stochastic Hodgkin-Huxley neurons, combining excitation with inhibition according to a block structure along the circuit, on which self-organized rhythmic oscillations can be observed.]]></description>
      <pubDate>Mon, 30 Jan 2023 08:47:01 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.9279</link>
      <guid>https://doi.org/10.46298/mna.9279</guid>
      <author>Höpfner, Reinhard</author>
      <dc:creator>Höpfner, Reinhard</dc:creator>
      <content:encoded><![CDATA[The stochastic Hodgkin-Huxley neurons considered in this paper replace the time-constant deterministic input $a dt$ of the classical deterministic model by increments $\vartheta dt + dX_t$ of a stochastic process: $X$ is Ornstein-Uhlenbeck with volatility $\sigma>0$ and backdriving force $\tau>0$, and we call $\vartheta>0$ the signal. We have ergodicity and strong laws of large numbers for various functionals of the process, and characterize 'quiet behaviour' and 'regular spiking' as events whose probability depends on the parameters $(\tau,\sigma)$ and on the signal $\vartheta$. The notions of quiet behaviour and regular spiking allow for a construction of circuits of interacting stochastic Hodgkin-Huxley neurons, combining excitation with inhibition according to a block structure along the circuit, on which self-organized rhythmic oscillations can be observed.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Theoretical study of the emergence of periodic solutions for the inhibitory NNLIF neuron model with synaptic delay</title>
      <description><![CDATA[Among other models aimed at understanding self-sustained oscillations in neural networks, the NNLIF model with synaptic delay was developed almost twenty years ago to model fast global oscillations in networks of weakly firing inhibitory neurons. Periodic solutions were numerically observed in this model, but despite its intensive study by researchers in PDEs and probability, there is to date no analytical result on this topic. In this article, we propose to approximate formally these solutions by a Gaussian wave whose periodic movement is described by an associated delay differential equation (DDE). We prove the existence of a periodic solution for this DDE and we give a rigorous asymptotic result on these solutions when the connectivity parameter $b$ goes to $-\infty$. Lastly, we provide heuristic and numerical evidence of the validity of our approximation.]]></description>
      <pubDate>Wed, 26 Oct 2022 14:57:14 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.7256</link>
      <guid>https://doi.org/10.46298/mna.7256</guid>
      <author>Ikeda, Kota</author>
      <author>Roux, Pierre</author>
      <author>Salort, Delphine</author>
      <author>Smets, Didier</author>
      <dc:creator>Ikeda, Kota</dc:creator>
      <dc:creator>Roux, Pierre</dc:creator>
      <dc:creator>Salort, Delphine</dc:creator>
      <dc:creator>Smets, Didier</dc:creator>
      <content:encoded><![CDATA[Among other models aimed at understanding self-sustained oscillations in neural networks, the NNLIF model with synaptic delay was developed almost twenty years ago to model fast global oscillations in networks of weakly firing inhibitory neurons. Periodic solutions were numerically observed in this model, but despite its intensive study by researchers in PDEs and probability, there is to date no analytical result on this topic. In this article, we propose to approximate formally these solutions by a Gaussian wave whose periodic movement is described by an associated delay differential equation (DDE). We prove the existence of a periodic solution for this DDE and we give a rigorous asymptotic result on these solutions when the connectivity parameter $b$ goes to $-\infty$. Lastly, we provide heuristic and numerical evidence of the validity of our approximation.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Mean field system of a two-layers neural model in a diffusive regime</title>
      <description><![CDATA[We study a model of interacting neurons. The structure of this neural system is composed of two layers of neurons such that the neurons of the first layer send their spikes to the neurons of the second one: if $N$ is the number of neurons of the first layer, at each spiking time of the first layer, every neuron of both layers receives an amount of potential of the form $U/\sqrt{N},$ where $U$ is a centered random variable. This kind of structure of neurons can model a part of the structure of the visual cortex: the first layer represents the primary visual cortex V1 and the second one the visual area V2. The model consists of two stochastic processes, one modelling the membrane potential of the neurons of the first layer, and the other the membrane potential of the neurons of the second one. We prove the convergence of these processes as the number of neurons~$N$ goes to infinity and obtain a convergence speed. The proofs rely on similar arguments as those used in [Erny, L\"ocherbach, Loukianova (2022)]: the convergence speed of the semigroups of the processes is obtained from the convergence speed of their infinitesimal generators using a Trotter-Kato formula, and from the regularity of the limit semigroup.]]></description>
      <pubDate>Thu, 25 Aug 2022 17:55:31 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.7570</link>
      <guid>https://doi.org/10.46298/mna.7570</guid>
      <author>Erny, Xavier</author>
      <dc:creator>Erny, Xavier</dc:creator>
      <content:encoded><![CDATA[We study a model of interacting neurons. The structure of this neural system is composed of two layers of neurons such that the neurons of the first layer send their spikes to the neurons of the second one: if $N$ is the number of neurons of the first layer, at each spiking time of the first layer, every neuron of both layers receives an amount of potential of the form $U/\sqrt{N},$ where $U$ is a centered random variable. This kind of structure of neurons can model a part of the structure of the visual cortex: the first layer represents the primary visual cortex V1 and the second one the visual area V2. The model consists of two stochastic processes, one modelling the membrane potential of the neurons of the first layer, and the other the membrane potential of the neurons of the second one. We prove the convergence of these processes as the number of neurons~$N$ goes to infinity and obtain a convergence speed. The proofs rely on similar arguments as those used in [Erny, L\"ocherbach, Loukianova (2022)]: the convergence speed of the semigroups of the processes is obtained from the convergence speed of their infinitesimal generators using a Trotter-Kato formula, and from the regularity of the limit semigroup.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Neural Field Models: A mathematical overview and unifying framework</title>
      <description><![CDATA[Mathematical modelling of the macroscopic electrical activity of the brain is highly non-trivial and requires a detailed understanding of not only the associated mathematical techniques, but also the underlying physiology and anatomy. Neural field theory is a population-level approach to modelling the non-linear dynamics of large populations of neurons, while maintaining a degree of mathematical tractability. This class of models provides a solid theoretical perspective on fundamental processes of neural tissue such as state transitions between different brain activities as observed during epilepsy or sleep. Various anatomical, physiological, and mathematical assumptions are essential for deriving a minimal set of equations that strike a balance between biophysical realism and mathematical tractability. However, these assumptions are not always made explicit throughout the literature. Even though neural field models (NFMs) first appeared in the literature in the early 1970s, the relationships between them have not been systematically addressed. This may partially be explained by the fact that the inter-dependencies between these models are often implicit and non-trivial. Herein we provide a review of key stages of the history and development of neural field theory and contemporary uses of this branch of mathematical neuroscience. First, the principles of the theory are summarised through a discussion of the pioneering models by Wilson and Cowan, Amari and Nunez. Upon thorough review of these models, we then present a unified mathematical framework in which all neural field models can be derived by applying different assumptions. We then use this framework to i) derive contemporary models by Robinson, Jansen and Rit, Wendling, Liley, and Steyn-Ross, and ii) make explicit the many significant inherited assumptions that exist in the current literature.]]></description>
      <pubDate>Sat, 19 Mar 2022 15:24:28 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.7284</link>
      <guid>https://doi.org/10.46298/mna.7284</guid>
      <author>Cook, Blake J.</author>
      <author>Peterson, Andre D. H.</author>
      <author>Woldman, Wessel</author>
      <author>Terry, John R.</author>
      <dc:creator>Cook, Blake J.</dc:creator>
      <dc:creator>Peterson, Andre D. H.</dc:creator>
      <dc:creator>Woldman, Wessel</dc:creator>
      <dc:creator>Terry, John R.</dc:creator>
      <content:encoded><![CDATA[Mathematical modelling of the macroscopic electrical activity of the brain is highly non-trivial and requires a detailed understanding of not only the associated mathematical techniques, but also the underlying physiology and anatomy. Neural field theory is a population-level approach to modelling the non-linear dynamics of large populations of neurons, while maintaining a degree of mathematical tractability. This class of models provides a solid theoretical perspective on fundamental processes of neural tissue such as state transitions between different brain activities as observed during epilepsy or sleep. Various anatomical, physiological, and mathematical assumptions are essential for deriving a minimal set of equations that strike a balance between biophysical realism and mathematical tractability. However, these assumptions are not always made explicit throughout the literature. Even though neural field models (NFMs) first appeared in the literature in the early 1970s, the relationships between them have not been systematically addressed. This may partially be explained by the fact that the inter-dependencies between these models are often implicit and non-trivial. Herein we provide a review of key stages of the history and development of neural field theory and contemporary uses of this branch of mathematical neuroscience. First, the principles of the theory are summarised through a discussion of the pioneering models by Wilson and Cowan, Amari and Nunez. Upon thorough review of these models, we then present a unified mathematical framework in which all neural field models can be derived by applying different assumptions. We then use this framework to i) derive contemporary models by Robinson, Jansen and Rit, Wendling, Liley, and Steyn-Ross, and ii) make explicit the many significant inherited assumptions that exist in the current literature.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Analysis of Activity Dependent Development of Topographic Maps in Neural Field Theory with Short Time Scale Dependent Plasticity</title>
      <description><![CDATA[Topographic maps are a brain structure connecting pre-synaptic and post-synaptic brain regions. Topographic development is dependent on Hebbian-based plasticity mechanisms working in conjunction with spontaneous patterns of neural activity generated in the pre-synaptic regions. Studies performed in mouse have shown that these spontaneous patterns can exhibit complex spatial-temporal structures which existing models cannot incorporate. Neural field theories are appropriate modelling paradigms for topographic systems due to the dense nature of the connections between regions and can be augmented with a plasticity rule general enough to capture complex time-varying structures. We propose a theoretical framework for studying the development of topography in the context of complex spatial-temporal activity fed-forward from the pre-synaptic to post-synaptic regions. Analysis of the model leads to an analytic solution corroborating the conclusion that activity can drive the refinement of topographic projections. The analysis also suggests that biological noise is used in the development of topography to stabilise the dynamics. MCMC simulations are used to analyse and understand the differences in topographic refinement between wild-type and the $\beta2$ knock-out mutant in mice. The time scale of the synaptic plasticity window is estimated as $0.56$ seconds in this context with a model fit of $R^2 = 0.81$.]]></description>
      <pubDate>Fri, 11 Mar 2022 08:48:32 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.8390</link>
      <guid>https://doi.org/10.46298/mna.8390</guid>
      <author>Gale, Nicholas</author>
      <author>Rodger, Jennifer</author>
      <author>Small, Michael</author>
      <author>Eglen, Stephen</author>
      <dc:creator>Gale, Nicholas</dc:creator>
      <dc:creator>Rodger, Jennifer</dc:creator>
      <dc:creator>Small, Michael</dc:creator>
      <dc:creator>Eglen, Stephen</dc:creator>
      <content:encoded><![CDATA[Topographic maps are a brain structure connecting pre-synaptic and post-synaptic brain regions. Topographic development is dependent on Hebbian-based plasticity mechanisms working in conjunction with spontaneous patterns of neural activity generated in the pre-synaptic regions. Studies performed in mouse have shown that these spontaneous patterns can exhibit complex spatial-temporal structures which existing models cannot incorporate. Neural field theories are appropriate modelling paradigms for topographic systems due to the dense nature of the connections between regions and can be augmented with a plasticity rule general enough to capture complex time-varying structures. We propose a theoretical framework for studying the development of topography in the context of complex spatial-temporal activity fed-forward from the pre-synaptic to post-synaptic regions. Analysis of the model leads to an analytic solution corroborating the conclusion that activity can drive the refinement of topographic projections. The analysis also suggests that biological noise is used in the development of topography to stabilise the dynamics. MCMC simulations are used to analyse and understand the differences in topographic refinement between wild-type and the $\beta2$ knock-out mutant in mice. The time scale of the synaptic plasticity window is estimated as $0.56$ seconds in this context with a model fit of $R^2 = 0.81$.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Investigating the integrate and fire model as the limit of a random discharge model: a stochastic analysis perspective</title>
      <description><![CDATA[In the mean-field integrate-and-fire model, the dynamics of a typical neuron within a large network is modeled as a diffusion-jump stochastic process whose jump takes place once the voltage reaches a threshold. In this work, the main goal is to establish the convergence relationship between the regularized process and the original one: in the regularized process, the jump mechanism is replaced by Poisson dynamics, and the jump intensity within the classically forbidden domain goes to infinity as the regularization parameter vanishes. On the macroscopic level, the Fokker-Planck equation for the process with random discharges (i.e. Poisson jumps) is defined on the whole space, while the equation for the limit process is defined on the half space. However, with an iteration scheme, the difficulty due to the difference in domains is greatly mitigated, and convergence of the stochastic process and the firing rates can be established. Moreover, we find a polynomial-order convergence for the distribution by a renormalization argument from probability theory. Finally, through numerical experiments, we quantitatively explore the rate and the asymptotic behavior of the convergence for both linear and nonlinear models.]]></description>
      <pubDate>Tue, 30 Nov 2021 09:12:22 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.7203</link>
      <guid>https://doi.org/10.46298/mna.7203</guid>
      <author>Liu, Jian-Guo</author>
      <author>Wang, Ziheng</author>
      <author>Xie, Yantong</author>
      <author>Zhang, Yuan</author>
      <author>Zhou, Zhennan</author>
      <dc:creator>Liu, Jian-Guo</dc:creator>
      <dc:creator>Wang, Ziheng</dc:creator>
      <dc:creator>Xie, Yantong</dc:creator>
      <dc:creator>Zhang, Yuan</dc:creator>
      <dc:creator>Zhou, Zhennan</dc:creator>
      <content:encoded><![CDATA[In the mean-field integrate-and-fire model, the dynamics of a typical neuron within a large network is modeled as a diffusion-jump stochastic process whose jump takes place once the voltage reaches a threshold. In this work, the main goal is to establish the convergence relationship between the regularized process and the original one: in the regularized process, the jump mechanism is replaced by Poisson dynamics, and the jump intensity within the classically forbidden domain goes to infinity as the regularization parameter vanishes. On the macroscopic level, the Fokker-Planck equation for the process with random discharges (i.e. Poisson jumps) is defined on the whole space, while the equation for the limit process is defined on the half space. However, with an iteration scheme, the difficulty due to the difference in domains is greatly mitigated, and convergence of the stochastic process and the firing rates can be established. Moreover, we find a polynomial-order convergence for the distribution by a renormalization argument from probability theory. Finally, through numerical experiments, we quantitatively explore the rate and the asymptotic behavior of the convergence for both linear and nonlinear models.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
    <item>
      <title>Perceptual spaces and their symmetries: The geometry of color space</title>
      <description><![CDATA[Our sensory systems transform external signals into neural activity, thereby producing percepts. We are endowed with an intuitive notion of similarity between percepts, which need not reflect the proximity of the physical properties of the corresponding external stimuli. The quantitative characterization of the geometry of percepts is therefore an endeavour that must be accomplished behaviorally. Here we characterized the geometry of color space using discrimination and matching experiments. We proposed an individually tailored metric defined in terms of the minimal chromatic difference required for each observer to differentiate a stimulus from its surround. Next, we showed that this perceptual metric was particularly well suited to describe two additional experiments, since it revealed the natural symmetry of perceptual computations. In one of the experiments, observers were required to discriminate two stimuli surrounded by a chromaticity that differed from that of the tested stimuli. In the perceptual coordinates, the change in discrimination thresholds induced by the surround followed a simple law that depended only on the perceptual distance between the surround and each of the two compared stimuli. In the other experiment, subjects were asked to match the color of two stimuli surrounded by two different chromaticities. Again, in the perceptual coordinates the induction effect produced by the surrounds followed a simple, symmetric law. We conclude that the individually tailored notion of perceptual distance reveals the symmetry of the laws governing perceptual computations.]]></description>
      <pubDate>Thu, 15 Jul 2021 14:46:02 +0000</pubDate>
      <link>https://doi.org/10.46298/mna.7108</link>
      <guid>https://doi.org/10.46298/mna.7108</guid>
      <author>Vattuone, Nicolás</author>
      <author>Wachtler, Thomas</author>
      <author>Samengo, Inés</author>
      <dc:creator>Vattuone, Nicolás</dc:creator>
      <dc:creator>Wachtler, Thomas</dc:creator>
      <dc:creator>Samengo, Inés</dc:creator>
      <content:encoded><![CDATA[Our sensory systems transform external signals into neural activity, thereby producing percepts. We are endowed with an intuitive notion of similarity between percepts, which need not reflect the proximity of the physical properties of the corresponding external stimuli. The quantitative characterization of the geometry of percepts is therefore an endeavour that must be accomplished behaviorally. Here we characterized the geometry of color space using discrimination and matching experiments. We proposed an individually tailored metric defined in terms of the minimal chromatic difference required for each observer to differentiate a stimulus from its surround. Next, we showed that this perceptual metric was particularly well suited to describe two additional experiments, since it revealed the natural symmetry of perceptual computations. In one of the experiments, observers were required to discriminate two stimuli surrounded by a chromaticity that differed from that of the tested stimuli. In the perceptual coordinates, the change in discrimination thresholds induced by the surround followed a simple law that depended only on the perceptual distance between the surround and each of the two compared stimuli. In the other experiment, subjects were asked to match the color of two stimuli surrounded by two different chromaticities. Again, in the perceptual coordinates the induction effect produced by the surrounds followed a simple, symmetric law. We conclude that the individually tailored notion of perceptual distance reveals the symmetry of the laws governing perceptual computations.]]></content:encoded>
      <slash:comments>0</slash:comments>
    </item>
  </channel>
</rss>
