Dezhe Jin Group

Department of Physics, Center for Neural Engineering
Penn State University


Research Overview

The brain, composed of a vast number of intricately connected neurons, stands as one of the most sophisticated dynamical systems in nature. Understanding the mechanisms of brain computation is at the forefront of contemporary scientific research. Our group's research focuses on the theoretical analysis of biological neural networks and the development of computational models of neural systems. This modeling is conducted in close collaboration with experimental groups.

Computational model of birdsong syntax

Songbirds serve as accessible model systems for studying vocal communication. Male songbirds learn to sing from their fathers and use their songs to attract females. Birdsong consists of sequences of stereotypical syllables, which can be largely fixed, as in the songs of the zebra finch, or variable, as in the songs of the Bengalese finch. The syllable sequences of variable songs follow probabilistic rules, known as birdsong syntax. However, the neural mechanisms underlying birdsong syntax remain poorly understood (PDF).

Single-unit recordings in singing zebra finches have revealed that projection neurons in HVC (used as a proper name), a sensory-motor area within the song system, spike sequentially with precise timing relative to the song. A simple explanation for this phenomenon is that the projection neurons form chain networks that support synfire chain activity. We examined this mechanism in detail using computational models and discovered that such network activity is prone to instability unless individual neurons possess active dendritic processes that normalize the spiking activity passed between successive groups in the chain (PDF). This prediction was later confirmed by intracellular recordings of projection neurons in singing birds, which also strongly supported the chain network hypothesis (PDF).
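The propagation idea can be illustrated with a toy simulation (a cartoon, not the published conductance-based model; the group sizes, weight, threshold, and noise level below are all invented). Each group of neurons excites the next group in the chain, and a simple divisive normalization stands in for the dendritic process hypothesized to stabilize the activity handed from group to group.

```python
import numpy as np

# Toy synfire-chain propagation (all parameters invented).
rng = np.random.default_rng(0)
n_groups, n_per = 10, 5
w, theta = 1.2, 0.5            # assumed weight and firing threshold

activity = np.zeros((n_groups, n_per))
activity[0] = 1.0              # kick the first group

for g in range(n_groups - 1):
    # normalized input to the next group: a stand-in for the
    # dendritic normalization described above
    drive = w * activity[g].sum() / n_per
    noise = 0.05 * rng.standard_normal(n_per)
    activity[g + 1] = (drive + noise > theta).astype(float)

print(activity.sum(axis=1))    # spikes per group along the chain
```

Without the normalizing division, the drive to each group would scale with the raw spike count of the previous group, and small fluctuations would grow or die out along the chain.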

We extended the chain network model to develop a network model capable of generating variable birdsong sequences. In this model, a chain network encodes a single syllable. The end of one chain connects to the beginnings of multiple chains, creating branching connectivity. Spike activity at a branching point flows probabilistically into one of the connected chains, with the selection governed by a winner-take-all mechanism mediated by inhibitory interneurons. The transition probability between chains is determined by the strengths of the connections to the branches, as well as by external inputs from the thalamic nucleus and auditory feedback (PDF).
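A minimal sketch of the branching statistics (syllable labels and branch strengths are invented): at the end of each chain, activity flows into one downstream chain with probability proportional to the assumed branch connection strengths, which is the statistical outcome of the winner-take-all competition described above.

```python
import numpy as np

# Hypothetical branching structure: syllable -> (next syllables, strengths)
rng = np.random.default_rng(1)
branches = {
    "a": (["b", "c"], [2.0, 1.0]),     # a -> b twice as likely as a -> c
    "b": (["a", "end"], [1.0, 1.0]),
    "c": (["a", "end"], [1.0, 3.0]),
}

def sing(start="a", max_len=20):
    seq, s = [start], start
    while s != "end" and len(seq) < max_len:
        options, w = branches[s]
        # probabilistic selection at the branching point
        s = rng.choice(options, p=np.asarray(w) / np.sum(w))
        if s != "end":
            seq.append(s)
    return "".join(seq)

print(sing())   # one variable rendition of the song
```

Repeated calls produce different syllable sequences with the same underlying transition statistics, the hallmark of variable song syntax.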

The model predicted that syllable transitions would follow Markovian statistics. To test this prediction, we analyzed the song syntax of the Bengalese finch. The analysis largely supported the prediction but indicated two significant modifications. First, variable birdsong often contains long syllable repetitions, which do not follow Markovian dynamics. Through computational analysis, we identified the source of these non-Markovian repeats as stimulus-specific adaptation of the auditory feedback to HVC: as a syllable repeats, the feedback weakens, reducing the probability of further repetition. The model also predicted that deafening would reduce syllable repetition, a prediction confirmed by experimental data (PDF). The second key modification suggested by the findings is that multiple chains can encode the same syllable. Although indirect evidence supports this idea, direct experimental verification is still pending. With these modifications, the resulting network model corresponds to a partially observable Markov model with adaptation, which accurately describes the song syntax of the Bengalese finch (PDF).
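The adaptation mechanism for repeats can be sketched as follows (the baseline, feedback strength, and adaptation time constant are invented, not fitted values): the probability of repeating a syllable has a baseline component plus a feedback-driven component that adapts away with each repetition, and removing the feedback entirely (a stand-in for deafening) lowers the repeat counts.

```python
import numpy as np

rng = np.random.default_rng(2)

def n_repeats(feedback=0.6, base=0.2, tau=1.5):
    # feedback-driven repeat probability decays with each repetition
    n = 1
    while rng.random() < base + feedback * np.exp(-(n - 1) / tau):
        n += 1
    return n

hearing = np.mean([n_repeats() for _ in range(5000)])
deaf = np.mean([n_repeats(feedback=0.0) for _ in range(5000)])
print(hearing, deaf)   # mean repeat counts with and without feedback
```

The decaying repeat probability also makes the distribution of repeat counts non-geometric, which is what distinguishes these repeats from a Markov chain.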

Our computational model of birdsong syntax is expected to contribute to a broader understanding of syntactic structures and neural mechanisms underlying vocalizations in other animal species.

Dynamics of spiking neural networks

Neurons interact through discrete spikes. In certain regimes, the network dynamics can be approximated by rate models, in which the interactions between neurons are described in terms of their firing rates. The resulting network equations are continuous; the well-known Hopfield model is one example. However, rate models leave out the possibility that the discreteness of spiking interactions leads to unique network properties. We took up this challenge and analyzed the spiking dynamics of leaky integrate-and-fire neuron models in the pulse-coupled regime. We developed a novel nonlinear mapping technique to mathematically analyze such networks. We proved that, when the network is dominated by feedback inhibition and the neurons are driven by constant external inputs, the network dynamics flows into spike sequence attractors from any initial condition and for arbitrary connectivity between the neurons, regardless of inhomogeneity in the neuron properties and the external drives. The attractors are characterized by precise spike timings. In small networks, the spike sequence attractors are periodic spiking patterns, and convergence to them requires only a few transient spikes. Our theory suggests that stable spike sequences are ubiquitous in spiking neural networks (PDF).
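The flavor of this regime can be seen in a toy simulation (the drives, inhibition strength, and integration scheme are invented for illustration; the mathematical analysis itself uses the mapping technique, not Euler integration): leaky integrate-and-fire neurons with constant, inhomogeneous drives and all-to-all pulsed inhibition.

```python
import numpy as np

def simulate(v0, steps=20000, dt=0.01):
    I = np.array([1.6, 1.5, 1.4])        # constant external drives (assumed)
    g = 0.1                               # inhibitory pulse strength (assumed)
    v = np.asarray(v0, dtype=float).copy()
    order = []                            # neuron labels in spiking order
    for _ in range(steps):
        v += dt * (I - v)                 # leaky integration toward the drive
        j = int(np.argmax(v))             # at most one spike per step
        if v[j] >= 1.0:
            order.append(j)
            v[j] = 0.0                    # reset the spiking neuron
            v[np.arange(3) != j] -= g     # pulsed inhibition to the others
    return order

order = simulate([0.2, 0.5, 0.8])
print(order[-9:])   # late spike labels; the theory predicts a periodic order
```

Running `simulate` from different initial membrane potentials illustrates the attractor idea: the late spike order becomes independent of where the network started.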

A special case of this theory is the winner-take-all dynamics between competing neurons. Our analysis showed that winner-take-all dynamics requires very few transient spikes. Indeed, in certain regimes, whichever neuron spikes first is the winner, with no transient dynamics at all. Winner-take-all dynamics is one of the most important mechanisms for decision-making and object recognition. Although this dynamics exists in rate models, the transient dynamics there is often long, leading to objections that recurrent dynamics cannot explain phenomena such as fast object recognition in the visual system. Our analysis of spiking networks clarified these misconceptions (PDF).
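The first-spike-wins regime can be sketched with two leaky integrate-and-fire neurons and strong mutual inhibition (parameters invented): the neuron with the larger drive reaches threshold first, and each of its spikes knocks the other neuron back toward rest, so the loser never spikes at all.

```python
import numpy as np

def wta(I, g=1.0, steps=5000, dt=0.01):
    v = np.zeros(2)
    spikes = [0, 0]
    for _ in range(steps):
        v += dt * (np.asarray(I) - v)       # leaky integration
        for j in range(2):
            if v[j] >= 1.0:
                spikes[j] += 1
                v[j] = 0.0                  # reset the winner
                # strong inhibitory pulse to the competitor
                v[1 - j] = max(v[1 - j] - g, 0.0)
    return spikes

print(wta([1.8, 1.5]))   # neuron 0 wins; neuron 1 is silenced
```

There is no prolonged transient: the competition is decided by the first spike.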

With our mapping technique, we further proved the existence of winner-take-all competition between chain networks, which is the basis of our computational model of variable birdsong syntax (PDF). The technique also led to an efficient simulation method, with which we demonstrated the formation of chain networks through synaptic plasticity and spontaneous activity (PDF, PDF).

Auditory object recognition and robust speech recognition

Humans and animals can recognize auditory objects such as speech or conspecific vocalizations despite noise and other interfering sounds. The robustness of the auditory systems of humans and animals is unmatched by current artificial speech recognition algorithms, which usually fail in noisy conditions such as loud bars. We examined the possibility that sparse coding, often observed in the auditory system and other sensory modalities, contributes to this noise robustness. We developed an algorithm for training detectors that respond to features of the speech signal within small time windows. Driven by speech, the feature detectors produce sparse spatiotemporal spike responses. Speech can then be recognized by matching these spike patterns to stored templates. We demonstrated that this scheme outperforms state-of-the-art artificial speech recognition systems on the standard task of recognizing spoken digits in noisy conditions, especially when the noise level is comparable to that of the signal. Our results suggest that sparse spike coding can be crucial for the robustness of the auditory system (PDF).
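A toy sketch of the scheme (the features, threshold, and "words" below are all invented; the real system used detectors trained on speech): detectors threshold the match between their preferred feature and a sliding window of the signal, producing sparse spike patterns, and a noisy signal is recognized by comparing its pattern against stored templates.

```python
import numpy as np

rng = np.random.default_rng(3)
features = rng.standard_normal((4, 8))           # 4 detectors, 8-sample features
features /= np.linalg.norm(features, axis=1, keepdims=True)

def detect(signal, thresh=0.8):
    # sparse spikes: a detector fires only when its feature closely
    # matches the current window of the signal
    n_t = len(signal) - 7
    spikes = np.zeros((4, n_t))
    for t in range(n_t):
        win = signal[t:t + 8]
        win = win / (np.linalg.norm(win) + 1e-12)
        spikes[:, t] = features @ win > thresh
    return spikes

# two hypothetical "words" built from the detectors' own features
words = [np.concatenate([features[0], features[1]]),
         np.concatenate([features[2], features[3]])]
templates = [detect(w) for w in words]

# recognize a noisy version of word 0 by spike-pattern matching
noisy = words[0] + 0.1 * rng.standard_normal(16)
scores = [np.sum(detect(noisy) * tpl) for tpl in templates]
print(int(np.argmax(scores)))
```

Because the spike patterns are sparse, moderate noise perturbs only a few spikes, and the coincidence count with the correct template remains the largest.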

The spike sequences generated by the feature detectors can be recognized by a network of neurons with stable transient plateau potentials, or UP states, often observed in the dendrites of pyramidal neurons. The state of the network can be defined by which neurons are in the UP state. Transitions between network states are driven by the inputs from the feature detectors and the connectivity between the neurons. Different inputs drive the network into different states. Auditory objects can thus be recognized by identifying the network states reached in response to the auditory inputs (PDF, PDF).
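The state-based decoding idea can be caricatured as a finite-state machine (the state names, input labels, and transitions below are invented): each network state stands for a particular set of neurons in the UP state, input spikes from the feature detectors drive the transitions, and the state finally reached identifies the auditory object.

```python
# Hypothetical transition table: (current state, input spike) -> next state
transitions = {
    ("rest", "f1"): "A1", ("A1", "f2"): "word_A",
    ("rest", "f2"): "B1", ("B1", "f1"): "word_B",
}

def decode(spike_sequence, state="rest"):
    for f in spike_sequence:
        # unmatched inputs leave the UP-state configuration unchanged
        state = transitions.get((state, f), state)
    return state

print(decode(["f1", "f2"]))   # -> word_A
print(decode(["f2", "f1"]))   # -> word_B
```

The same two input spikes in different orders drive the network into different final states, so the temporal order of the spike sequence is part of the code.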

Neural coding in the basal ganglia

The basal ganglia is a critical structure for motor control and learning, and is extensively connected with many areas of the brain. The striatum is the input station of the basal ganglia. Dopamine signals, which serve as reward signals for reinforcement learning of implicit motor skills and sensory-motor associations, target the striatum. The striatum is thus believed to be a key structure for reinforcement learning. Temporal difference learning is a standard reinforcement learning mechanism. It explains how delayed rewards can be credited to the correct actions or sensory inputs that happen early and eventually lead to the rewards. The mechanism requires populations of neurons that fire sequentially, bridging the interval between the actions or inputs and the rewards. But whether such dynamics exists in the striatum was unknown.
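The credit-assignment idea can be illustrated with a textbook TD(0) sketch (illustrative parameters, not a model of striatal circuitry): a cue starts the trial, a reward arrives only at the last step, and over episodes the value estimates propagate the delayed reward backward until the earliest step predicts it.

```python
import numpy as np

n_steps, alpha, gamma = 5, 0.1, 1.0
V = np.zeros(n_steps + 1)              # V[n_steps] is the terminal value
for episode in range(500):
    for t in range(n_steps):
        r = 1.0 if t == n_steps - 1 else 0.0    # reward only at the end
        # TD(0) update: move V[t] toward the bootstrapped target
        V[t] += alpha * (r + gamma * V[t + 1] - V[t])
print(np.round(V[:n_steps], 3))        # all steps approach 1
```

The update for step t uses the value of step t + 1, which is why the mechanism needs activity that tiles the interval between cue and reward: without neurons representing the intermediate steps, there is nothing to carry the value backward.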

We analyzed thousands of neurons recorded in the striatum and the prefrontal cortex of monkeys during a simple visually guided saccade task. We applied a clustering technique to categorize the neurons' response profiles. We found that neurons in both structures encoded all aspects of the task, including the visual signals on the screen and the motor actions generated by the subjects. The timings of these neural responses were dispersed. Most interestingly, we found a subset of neurons in the striatum and the prefrontal cortex that responded with single peaks at different delays relative to the onset of the visual signals. These neurons thus formed a sequential firing pattern that filled the gaps between the visual inputs. From the populations of neurons in the two structures, all time points during the task period could be precisely decoded. Our results suggest that time is encoded in the dispersed response profiles of neuron populations in the prefrontal cortex and the striatum. Furthermore, the sequential firing conjectured by the temporal difference learning mechanism does exist in the striatum, further supporting the possibility that this mechanism guides reinforcement learning in the basal ganglia (PDF).
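The time-decoding idea can be sketched as follows (all tuning profiles invented): each neuron's firing rate is a Gaussian bump peaking at a different delay, and elapsed time is read out by matching the population response vector against the stored profiles.

```python
import numpy as np

peaks = np.linspace(0.0, 1.0, 20)            # preferred delays of 20 neurons

def rates(t, width=0.1):
    # population response at time t: one Gaussian bump per neuron
    return np.exp(-((t - peaks) ** 2) / (2 * width ** 2))

times = np.linspace(0.0, 1.0, 101)           # candidate decoded times
bank = np.array([rates(t) for t in times])   # expected response profiles

def decode_time(r):
    # nearest-profile decoding of elapsed time from a response vector
    return times[np.argmin(np.sum((bank - r) ** 2, axis=1))]

print(decode_time(rates(0.37)))
```

Because the peaks are dispersed across the interval, every time point produces a distinct population response, which is what makes precise decoding possible.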

Population coding in the visual cortex

The tilt aftereffect is a visual illusion: long exposure to gratings or bars of one orientation makes bars of other orientations appear rotated away from the exposed orientation. The neural mechanism of such "repulsive" effects of visual adaptation was unknown. Single-unit recordings in the cat primary visual cortex revealed that adaptation to a single orientation changes the tuning properties of neurons: the preferred orientations move away from the adapting orientation, and the response magnitudes decrease. At first glance, the repulsive shifts of the preferred orientations would seem to explain the tilt aftereffect. We analyzed the population-coding model of the visual cortex and showed that in fact the opposite is usually true: repulsive shifts of the preferred orientations alone lead to an attractive shift in the perceived orientation, opposite to the tilt aftereffect. Only when the suppression of neural responses near the adapting orientation is strong enough does the repulsive shift in perception occur. We analyzed the magnitudes of the preferred-orientation shifts and the response suppression in neurons recorded in the primary visual cortex. The combined effects quantitatively matched the amounts of tilt aftereffect typically observed. Our analysis revealed the importance of the interplay between the shifts in preferred orientation and the suppression of neural responses, and suggested that these two effects tend to cancel each other to preserve perceptual fidelity under normal conditions. Prolonged exposure to a single orientation breaks this balance, leading to errors in perception that manifest as the tilt aftereffect (PDF).
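The competition between the two effects can be sketched with a population-vector decoder (all tuning parameters are invented): repulsive shifts of the preferred orientations alone pull the decoded orientation toward the adapter, while suppression of responses near the adapter pushes it back in the repulsive direction; which effect wins depends on their relative strengths, as described above.

```python
import numpy as np

prefs = np.linspace(-90.0, 90.0, 180, endpoint=False)  # neuron labels (degrees)

def decode(stim, shift=0.0, supp=0.0, width=10.0):
    # repulsive shift of each preferred orientation away from the adapter at 0
    new_prefs = prefs + shift * np.sign(prefs)
    # response suppression strongest for neurons tuned near the adapter
    gain = 1.0 - supp * np.exp(-prefs**2 / (2 * 15.0**2))
    r = gain * np.exp(-((stim - new_prefs) ** 2) / (2 * width**2))
    # population vector on doubled angles (orientation is 180-deg periodic),
    # read out with the ORIGINAL labels, as perception would be
    ang = np.deg2rad(2 * prefs)
    return np.rad2deg(np.arctan2(np.sum(r * np.sin(ang)),
                                 np.sum(r * np.cos(ang)))) / 2

baseline = decode(20.0)                       # unadapted: about 20
attract = decode(20.0, shift=5.0)             # shifts alone pull toward 0
both = decode(20.0, shift=5.0, supp=0.9)      # suppression pushes back up
print(baseline, attract, both)
```

The attractive bias from the shifts arises because the stimulus now best excites neurons whose labels lie closer to the adapter; suppression reweights the active population away from the adapter, and only when it is strong enough does the net error become repulsive.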

Reconstruction of neuron morphology from microscopic images

We have developed ShuTu (Chinese for "dendrite"), a software platform for semi-automated reconstruction of neuronal morphology. It is designed for neurons filled with biocytin during patch-clamp recording and subsequently stained. With an additional preprocessing step, it can also handle fluorescence images from confocal stacks.

Even when there is only a single neuron in the imaged volume, automated reconstruction is tricky because of background signals in the images. ShuTu was therefore designed around a two-step process: automated reconstruction followed by manual correction. The software uses sophisticated algorithms for the automated reconstruction and provides a convenient user interface for completing the reconstruction efficiently through manual annotation.


People

FACULTY

  • Dezhe Jin, Associate Professor of Physics

GRADUATE STUDENTS

  • Autumn Zender
  • Aayush Khare
  • Derek Sederman
  • Joseph Schuessler

UNDERGRADUATE STUDENTS

  • Tristan Gyure
  • Nikita Kiselov

FORMER MEMBERS

  • Kevin Sargent (PhD, 2024)
  • Jiali Lu (PhD, 2023)
  • Leonardo Tavares (PhD, 2022)
  • Yevhen Tupikov (PhD, 2019)
  • Sumithra Surendralal (PhD, 2016)
  • Phillip Schafer (PhD, 2014)
  • Jason Wittenbach (PhD, 2014)
  • Aaron Miller (PhD, 2012)
  • David Fraser (PhD, 2009)
  • Linli Wang (PhD, 2009)
  • Joseph Jun (postdoc, 2004-2006)
  • Wonil Chang (Visitor from KAIST, Korea, 2006-2007)
  • XiaoMing Zhao (Undergraduate, Lafayette College, Summer 2024)
  • Kaisha Garvin-Darby (Undergraduate student, 2023-2024)
  • Madison Gillner (Undergraduate student, 2023-24)
  • Zhiruo Zhang (Undergraduate student, Summer 2022)
  • Justin Kim (Undergraduate student, 2022)
  • Austin Marcus (Undergraduate student, 2019-2020)
  • Sarah Greberman (Undergraduate REU student, Summer 2019)
  • Xiaofun Mou (Undergraduate honors thesis, 2018-2019)
  • Guangda Shi (Undergraduate student from Franklin and Marshall College, 2018)
  • Zixin Tang (Undergraduate student, 2016-2017)
  • Collin Van Son (Undergraduate student, 2017)
  • Yosuke Ota (Undergraduate student, 2013)
  • Tarik Salameh (Undergraduate student, 2011-2012)
  • David Vidmar (Undergraduate Honors Thesis, 2011-2012)
  • Greg Diehl (Undergraduate student, 2006-2007)
  • David Van Maaden (Undergraduate Honors Thesis, 2008-2009)


Courses

  • Phys557, Electrodynamics I (graduate level) Syllabus
  • Phys530, Classical Mechanics (graduate level) Syllabus
  • Phys597B, Computational Neuroscience (graduate level) Syllabus
  • Phys212, Electricity and Magnetism (undergraduate level) Syllabus
  • Phys420, Thermal Physics (undergraduate level) Syllabus


Publications

  • Aayush Khare, Derek Sederman, and Dezhe Z. Jin, "Temperature robustness of the timing network within songbird premotor nucleus HVC", bioRxiv, doi: https://doi.org/10.1101/2025.03.06.641874 (2025). (PDF)
  • Jiali Lu, Sumithra Surendralal, Kristofer E Bouchard, and Dezhe Z. Jin, "Partially observable Markov models inferred using statistical tests reveal context-dependent syllable transitions in Bengalese finch songs", Journal of Neuroscience, 8, e0522242024 (2025). (PDF; download the POMM python code used in the paper.)
  • B. M. Zemel, A. A. Nevue, L. E. S. Tavares, A. Dagostin, P. V. Lovell, D. Z. Jin, C. V. Mello, and H. von Gersdorff, "Motor cortex analogue neurons in songbirds utilize Kv3 channels to generate ultranarrow spikes.", eLife, 12: e81992 (2023). (PDF)
  • Yevhen Tupikov and Dezhe Z. Jin, "Addition of new neurons and the emergence of a local neural circuit for precise timing", PLoS Computational Biology, 17, e1008824 (2021). (PDF)
  • Robert Egger *, Yevhen Tupikov *, Margot Elmaleh, Kalman A. Katlowitz, Sam E. Benezra, Michel A. Picardo, Felix Moll, Jorgen Kornfeld, Dezhe Z. Jin, and Michael A. Long, "Local axonal conduction shapes the spatiotemporal properties of neural sequences", Cell, 183, 537-548.e12 (2020). * Co-first authors. (PDF)
  • Dezhe Z. Jin, Ting Zhao, David L. Hunt, Rachel P. Tillage, Ching-Lung Hsu, and Nelson Spruston, "ShuTu: Open-Source Software for Efficient and Accurate Reconstruction of Dendritic Morphology", Frontiers in Neuroinformatics, 13, doi: 10.3389/fninf.2019.00068 (2019). (PDF) (Download software and data from ShuTu Website)
  • Dezhe Z. Jin and Phillip B. Schafer, "System and method for automated speech recognition", US Patent, US20160260429A1, awarded (2019). (Google Patents)
  • Yisi S. Zhang, Jason D. Wittenbach, Dezhe Z. Jin, and Alexay A. Kozhevnikov, "Temperature manipulation in songbird brain implicates the premotor nucleus HVC in birdsong syntax", Journal of Neuroscience, 37, 2517-2523 (2017). (PDF)
  • Jason D. Wittenbach, Kristofer E. Bouchard, Michael S. Brainard, and Dezhe Z. Jin, "An adapting auditory-motor feedback loop can contribute to generating vocal repetition", PLoS Computational Biology, 11, e1004471 (2015). (PDF) (Download the dataset used in the paper.)
  • Phillip B. Schafer and Dezhe Z. Jin, "Noise-robust speech recognition through auditory feature detection and spike sequence decoding", Neural Computation, 26, 523 (2014). (PDF)
  • Arik Kershenbaum, Ann E. Bowles, Todd M. Freeberg, Dezhe Z. Jin, Adriano R. Lameira, and Kirsten Bohn, "Animal vocal sequences: not the Markov chains we thought they were", Proceedings of the Royal Society B: Biological Sciences, 281, 20141370 (2014). (PDF)
  • Arik Kershenbaum et al., "Acoustic sequences in non-human animals: a tutorial review and prospectus", Biological Reviews, doi: 10.1111/brv.12160 (2014). (PDF)
  • Aaron Miller and Dezhe Z. Jin, "Potentiation decay of synapses and the length distributions of synfire chains self-organized in recurrent neural networks", Physical Review E, 80, 062716 (2013). (PDF)
  • Dezhe Z. Jin, "The Neural Basis of Birdsong Syntax", in Progress in Cognitive Science: From Cellular Mechanisms to Computational Theories, Edited by Zhong-lin Lu and Yuejia Luo, Peking University Press (2013). (PDF)
  • Dezhe Z. Jin and Alexay A. Kozhevnikov, "A compact statistical model of the song syntax in Bengalese finch", PLoS Computational Biology, 7, e1001108 (2011). (PDF) (Download syllable sequences Bird 1, Bird 2)
  • Michael A. Long, Dezhe Z. Jin, and Michale S. Fee, "Support for a synaptic chain model of sequence generation from intracellular recordings in the singing bird", Nature, 468, 394 (2010). (PDF, SI)
  • Theresa M. Desrochers, Dezhe Z. Jin, Noah D. Goodman, and Ann M. Graybiel, "Optimal habits can develop spontaneously through sensitivity to local cost", Proceedings of the National Academy of Sciences, 107, 20512 (2010). (PDF, SI)
  • Dezhe Z. Jin, "Generating variable birdsong syllable sequences with branching chain networks in avian premotor nucleus HVC", Physical Review E, 80, 051902 (2009). (PDF)
  • Dezhe Z. Jin, Naotaka Fujii, and Ann M. Graybiel, "Neural representation of time in cortico-basal ganglia circuits", Proceedings of the National Academy of Sciences, 106, 19156 (2009). (PDF, SI)
  • Wonil Chang and Dezhe Z. Jin, "Spike propagation in driven chain networks with dominant global inhibition", Physical Review E, 79, 051917 (2009). (PDF)
  • Dezhe Z. Jin, "Decoding spatiotemporal spike sequences via the finite state automata dynamics of spiking neural networks", New Journal of Physics, 10, 015010 (2008). (PDF)
  • Joseph K. Jun and Dezhe Z. Jin, "Development of neural circuitry for precise temporal sequences through spontaneous activity, axon remodeling, and synaptic plasticity", PLoS ONE, 2, e723 (2007). (PDF). Download the code related to the paper, written by Aaron Miller, from the Model Database maintained at Yale University.
  • Dezhe Z. Jin, Fethi M. Ramazanoglu, and H. Sebastian Seung, "Intrinsic bursting enhances the robustness of a neural network model of sequence generation by avian brain area HVC", Journal of Computational Neuroscience, 23(3), 283-99 (2007). (PDF)
  • Brandon J. Farley, Hongbo Yu, Dezhe Z. Jin, and Mriganka Sur, "Alteration of visual input results in a coordinated reorganization of multiple visual cortex maps", Journal of Neuroscience, 19, 10299-10310 (2007). (PDF)
  • Dezhe Z. Jin, Valentin Dragoi, Mriganka Sur, and H. Sebastian Seung, "The tilt aftereffect and adaptation-induced changes in orientation tuning in visual cortex", Journal of Neurophysiology, 94, 4038-4050 (2005). (PDF)
  • Terra D. Barnes, Yasuo Kubota, Dan Hu, Dezhe Z. Jin and Ann M. Graybiel, "Activity of striatal neurons reflects dynamic encoding and recoding of procedural memories", Nature, 437, 1158-1161 (2005). (PDF)
  • Hongbo Yu, Brandon J. Farley, Dezhe Z. Jin, and Mriganka Sur, "The coordinated mapping of visual space and response features in visual cortex", Neuron, 47, 267 (2005). (PDF)
  • Dezhe Z. Jin, "Spiking neural network for recognizing spatiotemporal sequences of spikes", Physical Review E, 69, 021905 (2004). (PDF)
  • Dezhe Z. Jin, "Fast convergence of spike sequences to periodic patterns in recurrent networks", Physical Review Letters, 89, 208102 (2002). (PDF)
  • Dezhe Z. Jin and H. Sebastian Seung, "Fast computation with spikes in a recurrent neural network", Physical Review E, 65, 051922 (2002). (PDF)
  • Carson C. Chow, Dezhe Z. Jin, and Alessandro Treves, "Is the world full of circles?", Journal of Vision, 2, 571 (2002). (PDF)
  • Daniel H. E. Dubin and Dezhe Z. Jin, "Collisional diffusion in a 2-dimensional point vortex gas", Physics Letters A, 284, 112 (2001). (PDF)
  • Dezhe Z. Jin and Daniel H. E. Dubin, "Point vortex dynamics within a background vorticity patch", Physics of Fluids, 13, 677 (2001). (PDF)
  • Dezhe Z. Jin and Daniel H. E. Dubin, "Characteristics of two-dimensional turbulence that self-organizes into vortex crystals", Physical Review Letters, 84, 1443 (2000). (PDF)
  • Dezhe Z. Jin and Daniel H. E. Dubin, "Theory of vortex crystal formation in two-dimensional turbulence", Physics of Plasmas, 7, 1719 (2000). (PDF)
  • Dezhe Z. Jin and Daniel H. E. Dubin, "Regional maximum entropy theory of vortex crystal formation", Physical Review Letters, 80, 4434 (1998). (PDF)


    Dezhe Jin
    Associate Professor of Physics
    contact: dzj2 psu.edu

    Department of Physics
    Penn State
    104 Davey Lab
    University Park, PA 16802-6300