Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Janusz Szczepanski is active.

Publication


Featured research published by Janusz Szczepanski.


Neural Computation | 2004

Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity

José M. Amigó; Janusz Szczepanski; Elek Wajnryb; Maria V. Sanchez-Vives

Normalized Lempel-Ziv complexity, which measures the generation rate of new patterns along a digital sequence, is closely related to such important source properties as entropy and compression ratio, but, in contrast to these, it is a property of individual sequences. In this article, we propose to exploit this concept to estimate (or, at least, to bound from below) the entropy of neural discharges (spike trains). The main advantages of this method include fast convergence of the estimator (as supported by numerical simulation) and the fact that there is no need to know the probability law of the process generating the signal. Furthermore, we present numerical and experimental comparisons of the new method against the standard method based on word frequencies, providing evidence that this new approach is an alternative entropy estimator for binned spike trains.
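As an illustration of the approach described above, the sketch below counts the phrases in the Lempel–Ziv (1976) parsing of a binary sequence using the widely used Kaspar–Schuster formulation, and normalizes the count by log2(n)/n, which for long sequences from an ergodic source approaches the entropy rate in bits per symbol. This is a minimal sketch of the general technique, not the authors' own code; the spike train is assumed to have already been binned into a 0/1 word.

```python
import math

def lz76_phrase_count(seq):
    """Number of phrases in the Lempel-Ziv (1976) parsing of seq,
    computed with the Kaspar-Schuster algorithm."""
    n = len(seq)
    i, k, l = 0, 1, 1
    c, k_max = 1, 1
    while True:
        if seq[i + k - 1] == seq[l + k - 1]:
            k += 1
            if l + k > n:
                c += 1
                break
        else:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
    return c

def normalized_lz_complexity(seq):
    """c(n) * log2(n) / n: an estimate of (and, for long sequences,
    a lower bound on) the entropy rate in bits per symbol."""
    n = len(seq)
    return lz76_phrase_count(seq) * math.log2(n) / n

# Toy usage: a binned spike train represented as a 0/1 string.
spikes = "0100100010010100010001001010010001000101" * 50
print(normalized_lz_complexity(spikes))
```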


IEEE Transactions on Circuits and Systems | 2006

Discrete Chaos-I: Theory

Ljupco Kocarev; Janusz Szczepanski; José M. Amigó; Igor Tomovski

We propose a definition of the discrete Lyapunov exponent for an arbitrary permutation of a finite lattice. For discrete-time dynamical systems, it measures the local (between neighboring points) average spreading of the system. We justify our definition by proving that, for large classes of chaotic maps, the corresponding discrete Lyapunov exponent approaches the largest Lyapunov exponent of a chaotic map when M → ∞, where M is the cardinality of the discrete phase space. In analogy with continuous systems, we say the system has discrete chaos if its discrete Lyapunov exponent tends to a positive number when M → ∞. We present several examples to illustrate the concepts being introduced.
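The sketch below is only a rough illustration of the idea, assuming the simplest reading of "local (between neighboring points) average spreading": the exponent is taken as the lattice average of the log-distance between the images of adjacent points, with distances measured on the circular lattice. The permutation F(i) = 2i mod M (for odd M) discretizes the doubling map, whose Lyapunov exponent is ln 2; the paper's precise definition may differ in such details.

```python
import math

def discrete_lyapunov(perm):
    """Average over lattice points of the log of the circular distance
    between images of adjacent points (a hedged reading of the
    'local average spreading' described in the abstract)."""
    M = len(perm)
    total = 0.0
    for i in range(M):
        d = abs(perm[(i + 1) % M] - perm[i])
        d = min(d, M - d)              # distance on the circular lattice
        total += math.log(d) if d > 0 else 0.0
    return total / M

# F(i) = 2*i mod M (a permutation for odd M) discretizes the doubling map
# x -> 2x mod 1, whose largest Lyapunov exponent is ln 2.
for M in (101, 1001, 10001):
    F = [(2 * i) % M for i in range(M)]
    print(M, discrete_lyapunov(F), math.log(2))
```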


Neurocomputing | 2004

Characterizing spike trains with Lempel–Ziv complexity

Janusz Szczepanski; José M. Amigó; Elek Wajnryb; Maria V. Sanchez-Vives

We review several applications of Lempel–Ziv complexity to the characterization of neural responses. In particular, Lempel–Ziv complexity allows one to estimate the entropy of binned spike trains in an alternative way to the usual method based on the relative frequencies of words, with the definite advantage of not requiring very long registers. We also use complexity to discriminate neural responses to different kinds of stimuli and to evaluate the number of states of neuronal sources.


Network: Computation In Neural Systems | 2003

Application of Lempel–Ziv complexity to the analysis of neural discharges

Janusz Szczepanski; José M. Amigó; Elek Wajnryb; Maria V. Sanchez-Vives

Pattern matching is a simple method for studying the properties of information sources based on individual sequences (Wyner et al 1998 IEEE Trans. Inf. Theory 44 2045–56). In particular, the normalized Lempel–Ziv complexity (Lempel and Ziv 1976 IEEE Trans. Inf. Theory 22 75–88), which measures the rate of generation of new patterns along a sequence, is closely related to such important source properties as entropy and information compression ratio. We make use of this concept to characterize the responses of neurons of the primary visual cortex to different kinds of stimulus, including visual stimulation (sinusoidal drifting gratings) and intracellular current injections (sinusoidal and random currents), under two conditions (in vivo and in vitro preparations). Specifically, we digitize the neuronal discharges with several encoding techniques and employ the complexity curves of the resulting discrete signals as fingerprints of the stimulus ensembles. Our results show, for example, that if the neural discharges are encoded with a particular one-parameter method ('interspike time coding'), the normalized complexity remains constant within some classes of stimuli for a wide range of the parameter. Such constant values of the normalized complexity then allow the differentiation of the stimulus classes. With other encodings (e.g. 'bin coding'), the whole complexity curve is needed to achieve this goal. In any case, it turns out that the normalized complexity of the neural discharges in vivo is higher (and hence the discharges carry more information in the sense of Shannon) than in vitro for the same kind of stimulus.
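As a small illustration of one of the encodings named above, the sketch below implements a plausible "bin coding" step (one binary symbol per time bin); the resulting word can then be fed to a Lempel–Ziv complexity routine such as the one sketched earlier. The bin width plays the role of the encoding parameter; the exact encodings used in the paper may differ in detail.

```python
import numpy as np

def bin_code(spike_times, t_start, t_stop, bin_width):
    """Binary 'bin coding' of a spike train: one symbol per time bin,
    1 if at least one spike fell in the bin, 0 otherwise."""
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts, _ = np.histogram(spike_times, bins=edges)
    return (counts > 0).astype(int)

# Toy usage: a 1 s surrogate recording binned at 3 ms resolution.
spikes = np.sort(np.random.uniform(0.0, 1.0, size=40))   # spike times in seconds
word = bin_code(spikes, 0.0, 1.0, 0.003)
print(len(word), word[:20])
```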


IEEE Transactions on Circuits and Systems | 2005

Cryptographically secure substitutions based on the approximation of mixing maps

Janusz Szczepanski; José M. Amigó; Tomasz Michałek; Ljupco Kocarev

In this paper, we explore, following Shannon's suggestion that diffusion should be one of the ingredients of resistant block ciphers, the feasibility of designing cryptographically secure substitutions (think of S-boxes, say) via approximation of mixing maps by periodic transformations. The expectation behind this approach is, of course, that the nice diffusion properties of such maps will be inherited by their approximations, at least if the convergence rate is appropriate and the associated partitions are sufficiently fine. Our results show that this is indeed the case and that, in principle, block ciphers with close-to-optimal immunity to linear and differential cryptanalysis (as measured by the linear and differential approximation probabilities) can be designed along these guidelines. We also provide practical examples and numerical evidence for this approximation philosophy.
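To make the figures of merit concrete, the sketch below computes the differential and linear approximation probabilities of a small substitution table under their common definitions; the 3-bit S-box is an arbitrary toy example, not one taken from the paper.

```python
def differential_probability(sbox):
    """Maximum differential approximation probability of an S-box:
    max over nonzero input differences a and output differences b of
    Pr_x[S(x ^ a) ^ S(x) = b]."""
    n = len(sbox)
    best = 0
    for a in range(1, n):
        counts = [0] * n
        for x in range(n):
            counts[sbox[x ^ a] ^ sbox[x]] += 1
        best = max(best, max(counts))
    return best / n

def linear_probability(sbox):
    """Maximum linear approximation probability, expressed here as
    max over nonzero masks (alpha, beta) of the squared correlation
    (2*Pr[alpha.x = beta.S(x)] - 1)^2."""
    n = len(sbox)
    best = 0.0
    for alpha in range(1, n):
        for beta in range(1, n):
            matches = sum(
                1 for x in range(n)
                if (bin(alpha & x).count("1") + bin(beta & sbox[x]).count("1")) % 2 == 0
            )
            best = max(best, (2.0 * matches / n - 1.0) ** 2)
    return best

# Toy 3-bit S-box (hypothetical, for illustration only).
S = [3, 6, 1, 7, 4, 0, 5, 2]
print(differential_probability(S), linear_probability(S))
```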


Open Systems & Information Dynamics | 2001

Pseudorandom Number Generators Based on Chaotic Dynamical Systems

Janusz Szczepanski; Zbigniew Kotulski

Pseudorandom number generators are used in many areas of contemporary technology such as modern communication systems and engineering applications. In recent years a new approach to the secure transmission of information, based on the application of the theory of chaotic dynamical systems, has been developed. In this paper we present a method of generating pseudorandom numbers using discrete chaotic dynamical systems. The idea of constructing chaotic pseudorandom number generators (CPRNGs) intrinsically exploits the property of extreme sensitivity of trajectories to small changes of initial conditions, since the generated bits are associated with trajectories in an appropriate way. To ensure good statistical properties of the CPRNG (which determine its quality) we assume that the dynamical systems used are also ergodic or, preferably, mixing. Finally, since chaotic systems often appear in realistic physical situations, we suggest a physical model of a CPRNG.
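A minimal sketch of the general construction, assuming the simplest possible choices: the logistic map x → 4x(1 − x) as the chaotic (ergodic) system and thresholding of the trajectory at 1/2 as the bit-extraction rule. The paper's concrete generators may use different maps and extraction schemes.

```python
def chaotic_bits(seed, n_bits, burn_in=100):
    """Toy chaotic pseudorandom bit generator: iterate the logistic map
    x -> 4x(1-x) and emit one bit per iteration by thresholding the
    trajectory at 1/2.  Illustrative only; not the paper's generator."""
    x = seed
    for _ in range(burn_in):                 # discard the transient
        x = 4.0 * x * (1.0 - x)
    bits = []
    for _ in range(n_bits):
        x = 4.0 * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

print("".join(map(str, chaotic_bits(0.123456789, 64))))
```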


BioSystems | 2003

On the number of states of the neuronal sources

José M. Amigó; Janusz Szczepanski; Eligiusz Wajnryb; Maria V. Sanchez-Vives

In a previous paper (Proceedings of the World Congress on Neuroinformatics (2001)) the authors applied the so-called Lempel-Ziv complexity to study neural discharges (spike trains) from an information-theoretical point of view. Along with other results, it is shown there that this concept of complexity allows one to characterize the responses of primary visual cortical neurons to both random and periodic stimuli. To this end we modeled the neurons as information sources and the spike trains as messages generated by them. In this paper, we study further consequences of this mathematical approach, this time concerning the number of states of such neuronal information sources. In this context, the state of an information source means an internal degree of freedom (or parameter) which allows outputs with more general stochastic properties, since symbol generation probabilities at every time step may additionally depend on the value of the current state of the neuron. Furthermore, if the source is ergodic and Markovian, the number of states is directly related to the stochastic dependence lag of the source and provides a measure of the autocorrelation of its messages. Here, we find that the number of states of the neurons depends on the kind of stimulus and the type of preparation (in vivo versus in vitro recordings), thus providing another way of differentiating neuronal responses. In particular, we observed that (for the encoding methods considered) in vitro sources have a higher lag than in vivo sources for periodic stimuli. This supports the conclusion put forward in the paper mentioned above that, for the same kind of stimulus, in vivo responses are more random (hence, more difficult to compress) than in vitro responses and, consequently, the former transmit more information than the latter.
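The notion of a source with internal states can be illustrated with a toy generator: a slowly switching hidden two-state chain modulates the firing probability, so the symbol probabilities at each step depend on the current state and the output exhibits a dependence lag tied to the state dynamics. The parameters below are hypothetical; the paper estimates the number of states from recorded data rather than simulating them.

```python
import random

def two_state_source(n, p_fire=(0.1, 0.6), p_stay=0.95, seed=0):
    """Toy binary source with two internal states: a hidden Markov chain
    (dwell controlled by p_stay) selects which firing probability in
    p_fire is used at each time step."""
    rng = random.Random(seed)
    state, out = 0, []
    for _ in range(n):
        if rng.random() > p_stay:            # occasional state switch
            state = 1 - state
        out.append(1 if rng.random() < p_fire[state] else 0)
    return out

# The slowly switching hidden state induces long-range dependence that a
# memoryless (single-state) source cannot reproduce.
seq = two_state_source(10000)
print(sum(seq) / len(seq))
```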


IWIFSGN@FQAS | 2016

A Proposal for a Method of Defuzzification Based on the Golden Ratio—GR

Wojciech T. Dobrosielski; Janusz Szczepanski; Hubert Zarzycki

This article presents a proposal for a new method of defuzzification of a fuzzy controller, which is based on the concept of the golden ratio derived from the Fibonacci series [1]. The origin of the method was the observation of numerous instances of the golden ratio in such diverse fields as biology, architecture, medicine, and painting. A particular area of its occurrence is genetics, where we find the golden ratio in the very structure of the DNA molecule [2] (deoxyribonucleic acid molecules are 21 angstroms wide and 34 angstroms long for each full cycle of the double helix). This fact makes the ratio underlying the Fibonacci series in some sense a universal design principle used by man and nature alike. In keeping with the requirements, the authors of the present study first explain the essential concepts of fuzzy logic, including in particular the notions of a fuzzy controller and a method of defuzzification. Then, they postulate the use of the golden ratio in the process of defuzzification and call the idea the Golden Ratio (GR) method. In the subsequent part of the article, the proposed GR-based instrument is compared with the classical methods of defuzzification, including COG, FOM, and LOM. In the final part, the authors carry out numerous calculations and formulate conclusions which serve to classify the proposed method. At the end, they present directions for further research.
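For comparison purposes, the sketch below shows the classical centre-of-gravity (COG) defuzzifier alongside one plausible reading of a golden-ratio defuzzifier, namely taking the crisp output where the cumulative area under the membership function reaches the fraction 1/φ of its total. The latter is only an illustrative guess at the GR method, not its definition from the paper.

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2           # the golden ratio

def cog(xs, mu):
    """Classical centre-of-gravity defuzzification on a uniform grid."""
    return np.sum(xs * mu) / np.sum(mu)

def golden_ratio_point(xs, mu):
    """Hypothetical golden-ratio defuzzifier: the point where the
    cumulative area under mu reaches 1/phi of the total area.
    Only one plausible reading of the abstract, not the authors' GR method."""
    area = np.cumsum(mu)
    target = area[-1] / PHI
    return xs[np.searchsorted(area, target)]

# Toy triangular membership function on [0, 10], peaked at 4.
xs = np.linspace(0, 10, 1001)
mu = np.maximum(0.0, 1.0 - np.abs(xs - 4.0) / 3.0)
print(cog(xs, mu), golden_ratio_point(xs, mu))
```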


Biological Cybernetics | 2011

Mutual information and redundancy in spontaneous communication between cortical neurons

Janusz Szczepanski; María Marta Arnold; Eligiusz Wajnryb; José M. Amigó; Maria V. Sanchez-Vives

An important question in neural information processing is how neurons cooperate to transmit information. To study this question, we resort to the concept of redundancy in the information transmitted by a group of neurons and, at the same time, we introduce a novel concept for measuring cooperation between pairs of neurons called relative mutual information (RMI). Specifically, we studied these two parameters for spike trains generated by neighboring neurons from the primary visual cortex in the awake, freely moving rat. The spike trains studied here were spontaneously generated in the cortical network, in the absence of visual stimulation. Under these conditions, our analysis revealed that while the value of RMI oscillated slightly around an average value, the redundancy exhibited higher variability. We conjecture that this combination of approximately constant RMI and more variable redundancy makes information transmission more resistant to noise disturbances. Furthermore, the redundancy values suggest that neurons can cooperate in a flexible way during information transmission. This mostly occurs via a leading neuron with a higher transmission rate or, less frequently, through the information rate of the whole group being higher than the sum of the individual information rates—in other words, in a synergetic manner. The proposed method applies not only to stationary, but also to locally stationary neural signals.
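Both redundancy and RMI rest on estimates of mutual information between spike trains; the sketch below shows a plain plug-in estimate from the joint histogram of two binned (binary) trains. The specific normalization that turns this into the paper's RMI, and the grouping used for redundancy, follow the definitions given there; the surrogate data are hypothetical.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    """Plug-in estimate of I(X;Y) in bits between two binary (binned)
    spike trains, from the joint 2x2 symbol histogram."""
    joint = np.zeros((2, 2))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())

# Toy usage with two correlated surrogate trains.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.where(rng.random(5000) < 0.8, x, rng.integers(0, 2, 5000))
print(mutual_information(x, y))
```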


Journal of Sleep Research | 2013

Information content in cortical spike trains during brain state transitions

María Marta Arnold; Janusz Szczepanski; Noelia Montejo; José M. Amigó; Eligiusz Wajnryb; Maria V. Sanchez-Vives

Even in the absence of external stimuli there is ongoing activity in the cerebral cortex as a result of recurrent connectivity. This paper attempts to characterize one aspect of this ongoing activity by examining how the information content carried by specific neurons varies as a function of brain state. We recorded from rats chronically implanted with tetrodes in the primary visual cortex during awake and sleep periods. Electroencephalogram and spike trains were recorded during 30-min periods, and 2–4 neuronal spike trains were isolated per tetrode off-line. All the activity included in the analysis was spontaneous, being recorded from the visual cortex in the absence of visual stimuli. The brain state was determined through a combination of behavior evaluation, electroencephalogram and electromyogram analysis. Information in the spike trains was determined by using Lempel–Ziv complexity. Complexity was used to estimate the entropy of neural discharges and thus the information content (Amigó et al. Neural Comput., 2004, 16: 717–736). The information content in spike trains (range 4–70 bits s⁻¹) was evaluated during different brain states and particularly during the transition periods. Transitions toward states of deeper sleep coincided with a decrease of information, while transitions to the awake state resulted in an increase in information. Changes in both directions were of the same magnitude, about 30%. Information in spike trains showed a high temporal correlation between neurons, reinforcing the idea of the impact of the brain state on the information content of spike trains.
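A small note on units: if the normalized Lempel–Ziv complexity estimates the entropy per binned symbol, dividing by the bin width converts it into the bits-per-second figure quoted above. The helper below assumes one symbol per time bin; the example values are hypothetical.

```python
def information_rate(bits_per_symbol, bin_width_s):
    """Convert an entropy-rate estimate per binned symbol (e.g. the
    normalized Lempel-Ziv complexity, in bits per bin) into bits per
    second, assuming one symbol per time bin of width bin_width_s."""
    return bits_per_symbol / bin_width_s

# e.g. 0.12 bits per 3 ms bin corresponds to 40 bits per second.
print(information_rate(0.12, 0.003))
```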

Collaboration


Dive into Janusz Szczepanski's collaboration.

Top Co-Authors

José M. Amigó
Polish Academy of Sciences

Zbigniew Kotulski
Warsaw University of Technology

Eligiusz Wajnryb
Polish Academy of Sciences

Hubert Zarzycki
Kazimierz Wielki University in Bydgoszcz

Wojciech T. Dobrosielski
Kazimierz Wielki University in Bydgoszcz

Andrzej Paszkiewicz
Warsaw University of Technology

Elek Wajnryb
Polish Academy of Sciences

Jacek M. Czerniak
Polish Academy of Sciences