
Publication


Featured research published by Wiesława Kuniszyk-Jóźkowiak.


Computer Recognition Systems | 2007

Automatic Detection of Disorders in a Continuous Speech with the Hidden Markov Models Approach

Marek Wiśniewski; Wiesława Kuniszyk-Jóźkowiak; Elżbieta Smołka; Waldemar Suszyński

Hidden Markov Models are widely used for recognizing patterns in an input signal. In this work, HMMs were used to recognize two kinds of speech disorders in an acoustic signal: prolongation of fricative phonemes and blockades with repetition of stop phonemes.
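The detection idea can be sketched with the forward algorithm: score a symbol sequence under competing HMMs and pick the best-scoring model. The two 2-state models and the 0/1 symbols below are toy assumptions of ours (real systems would model quantized acoustic features such as MFCCs), not the paper's actual parameters.

```python
import numpy as np

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM
    (forward algorithm, computed in probability space for brevity)."""
    alpha = start * emit[:, obs[0]]           # initialization
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]  # induction step
    return np.log(alpha.sum())

# Hypothetical 2-state models: one tuned to prolongation-like input
# (long runs of the same symbol, thanks to "sticky" transitions),
# one to fluent speech (fast state switching).
prolong = (np.array([0.5, 0.5]),
           np.array([[0.95, 0.05], [0.05, 0.95]]),
           np.array([[0.9, 0.1], [0.1, 0.9]]))
fluent  = (np.array([0.5, 0.5]),
           np.array([[0.5, 0.5], [0.5, 0.5]]),
           np.array([[0.9, 0.1], [0.1, 0.9]]))

seq = [0, 0, 0, 0, 0, 0, 0, 0]  # a long run, as in a prolonged fricative
scores = {name: forward_log_likelihood(seq, *m)
          for name, m in [("prolongation", prolong), ("fluent", fluent)]}
print(max(scores, key=scores.get))  # → prolongation
```

Classifying a frame sequence then reduces to comparing its log-likelihood under a "disordered" and a "fluent" model.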


Folia Phoniatrica et Logopaedica | 1996

Effect of Acoustical, Visual and Tactile Echo on Speech Fluency of Stutterers

Wiesława Kuniszyk-Jóźkowiak; Elżbieta Smołka; Bogdan Adamczyk

The study presents a comparison of the effects of echo transmitted via single and combined channels (auditory, visual and tactile) on the speech of stutterers. The dependence of stuttering intensity and speech velocity upon echo delay time was determined. For all transmission channels, the stuttering intensities and the speech velocities decreased with increasing echo delay time. The results were analyzed statistically by means of the ANOVA method. It was shown that the corrective effects of visual echo and tactile echo were comparable. Echo transmitted via the auditory channel was more effective than echo transmitted via the visual or tactile channels. The greatest efficiency was observed when echo was transmitted via three connected channels: auditory, visual and tactile. The results obtained show that in stuttering therapy it is justified to use echo transmitted via three connected channels (auditory, visual, tactile).


Folia Phoniatrica et Logopaedica | 1997

Effect of Acoustical, Visual and Tactile Reverberation on Speech Fluency of Stutterers

Wiesława Kuniszyk-Jóźkowiak; Elżbieta Smołka; Bogdan Adamczyk

The study presents a comparison of the effects of reverberation transmitted via single and combined channels (auditory, visual and tactile) on the speech of stutterers. The dependence of stuttering intensity and speech velocity upon reverberation time was determined. For all transmission channels, the stuttering intensities and the speech velocities decreased with increasing reverberation time. The results were analyzed statistically by means of the ANOVA method. It was shown that the corrective effects of visual reverberation and tactile reverberation were comparable. Reverberation transmitted via the auditory channel was more effective than reverberation transmitted via the visual or tactile channels. Connecting the visual and tactile channels with the auditory channel had no influence on the effectiveness of reverberation.


Neural Computing and Applications | 2009

Speech nonfluency detection using Kohonen networks

Izabela Szczurowska; Wiesława Kuniszyk-Jóźkowiak; Elżbieta Smołka

This work covers the application of neural networks to the recognition and categorization of non-fluent and fluent utterance records. Fifty-five 4-s speech samples in which a blockade on plosives (p, b, t, d, k and g) occurred, and 55 recordings of fluent speakers' speech containing the same fragments, were used. Two Kohonen networks were employed. The purpose of the first network was to reduce the dimension of the vector describing the input signals. The result of this analysis was an output matrix consisting of the neurons winning in particular time frames. This matrix was taken as input for the next self-organizing map network. Various types of Kohonen networks were examined with respect to their ability to classify utterances correctly into two groups, non-fluent and fluent. Good results were achieved, with classification correctness exceeding 76%.
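The first stage, reducing each utterance to a sequence of winning neurons, can be illustrated with a minimal 1-D Kohonen map. The network size, learning schedule and the two-cluster toy "frames" below are our own assumptions for the sketch, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(frames, n_units=8, epochs=30, lr0=0.5, sigma0=2.0):
    """Tiny 1-D Kohonen (SOM) training loop over spectral frames.
    Returns the learned codebook (n_units x frame_dim)."""
    w = rng.normal(size=(n_units, frames.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = max(sigma0 * (1 - epoch / epochs), 0.5)  # shrinking neighbourhood
        for x in frames:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))  # best-matching unit
            h = np.exp(-((np.arange(n_units) - bmu) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)            # pull neighbourhood toward x
    return w

def winner_sequence(frames, w):
    """Replace each frame by its winning neuron index, reducing the utterance
    to a short symbol sequence (the 'matrix of winning neurons' idea)."""
    return np.array([np.argmin(np.linalg.norm(w - x, axis=1)) for x in frames])

# Toy 'utterance': two clusters of 4-D frames, as if two acoustic events alternate.
frames = np.vstack([rng.normal(0, 0.1, (20, 4)), rng.normal(3, 0.1, (20, 4))])
w = train_som(frames, n_units=4)
seq = winner_sequence(frames, w)
print(seq)  # the two halves map to different regions of the map
```

The winner sequence is what the second-stage network would then classify into the fluent or non-fluent group.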


Archive | 2009

Artificial Neural Networks in the Disabled Speech Analysis

Izabela Świetlicka; Wiesława Kuniszyk-Jóźkowiak; Elżbieta Smołka

The presented work is a continuation of research concerning automatic detection of disfluency in stuttered speech. So far, the experiments have covered disorders consisting of syllable repetitions and blockades before words starting with stop consonants. This work describes the application of artificial neural networks to the recognition and clustering of prolongations, which are among the most common disfluencies in the speech of stuttering people. The main aim of the research was to answer the question whether it is possible to create a model built with artificial neural networks that is able to recognize and classify disabled speech. The experiment proceeded in two phases. In the first stage, a Kohonen network was applied. During the second phase, two different networks were used and then evaluated with respect to their ability to classify utterances correctly into two groups, non-fluent and fluent.


Computer Recognition Systems | 2007

Articulation Rate Recognition by Using Artificial Neural Networks

Izabela Szczurowska; Wiesława Kuniszyk-Jóźkowiak; Elżbieta Smołka

This work concerns the application of artificial neural networks to modelling the hearing process. The aim of the research was to answer the question whether artificial neural networks are able to evaluate speech rate. Speech samples recorded during the reading of a story, first at a normal and then at a slow articulation rate, were used as research material. The experiment proceeded in two phases. In the first stage a Kohonen network was used. The purpose of that network was to reduce the dimensions of the vector describing the input signals and to obtain the amplitude-time relationship. As a result of this analysis, an output matrix consisting of the neurons winning in particular time frames was obtained. The matrix was taken as input for the networks in the second phase of the experiment. Various types of artificial neural networks were examined with respect to their ability to classify utterances with different speech rates correctly into two groups. Good results were achieved, with classification correctness exceeding 88%.


Computer Recognition Systems | 2013

Automatic Disordered Syllables Repetition Recognition in Continuous Speech Using CWT and Correlation

Ireneusz Codello; Wiesława Kuniszyk-Jóźkowiak; Elżbieta Smołka; Adam Kobus

Automatic disorder recognition in speech can be very helpful for the therapist in monitoring the therapy progress of patients with disordered speech. This article describes the recognition of syllable repetitions. The signal was analyzed using the Continuous Wavelet Transform with bark scales; the result was divided into vectors (using windowing), and a correlation algorithm was then applied to these data. A fairly extensive search analysis was performed, during which recognition above 80% was achieved. All the analysis was performed, and the results obtained, using the authors' program, WaveBlaster. It is notable that the recognition ratio above 80% was obtained by a fully automatic algorithm (without a teacher) from continuous speech. The presented problem is part of our research aimed at creating an automatic disorder recognition system.
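The windowing-plus-correlation step can be sketched as follows: slice the signal into frames, describe each frame by a feature vector, and correlate frames against frames one syllable-length earlier, so that repeated segments score high. Plain FFT magnitudes stand in for the paper's bark-scaled CWT coefficients, and the synthetic "sy-sy-sy-sy" signal is our own construction.

```python
import numpy as np

def frame_features(signal, win=128, hop=64):
    """One magnitude-spectrum vector per window (a stand-in for the
    bark-scaled CWT coefficient vectors used in the paper)."""
    frames = [signal[i:i + win] for i in range(0, len(signal) - win + 1, hop)]
    return np.array([np.abs(np.fft.rfft(f * np.hanning(win))) for f in frames])

def repetition_score(feats, lag):
    """Mean normalized correlation between each frame and the frame `lag`
    steps earlier; high values suggest a repeated segment."""
    a, b = feats[lag:], feats[:-lag]
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
    return float((num / den).mean())

t = np.linspace(0, 1, 2048, endpoint=False)
syllable = np.sin(2 * np.pi * 200 * t[:512])
repeated = np.concatenate([syllable] * 4)                  # "sy-sy-sy-sy"
noise = np.random.default_rng(1).normal(size=2048)         # no repeated structure

lag = 512 // 64  # one syllable length, expressed in frames
print(repetition_score(frame_features(repeated), lag),
      repetition_score(frame_features(noise), lag))
```

A detector would threshold such a score over a sliding range of candidate lags; the repeated signal scores clearly higher than the unstructured one.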


Annales UMCS, Informatica | 2012

Time–frequency Analysis of the EMG Digital Signals

Wiesława Kuniszyk-Jóźkowiak; Janusz Jaszczuk; Tomasz Sacewicz; Ireneusz Codello

The article presents a comparison of time-frequency spectra of EMG signals obtained by three methods: the Fast Fourier Transform, predictive analysis and wavelet analysis. The EMG spectra of the biceps and triceps of an adult man flexing his arm were analysed. The advantages of predictive analysis for averaging the spectra and determining the main maxima were shown. The Continuous Wavelet Transform method was applied, which allows for a proper distribution of the scales, aiming at an accurate analysis and localisation of frequency maxima as well as the identification of the impulses characteristic of such signals (bursts) on the time scale. A modified Morlet wavelet was suggested as the mother wavelet. The wavelet analysis allows for the examination of changes in the frequency spectrum in particular stages of the muscle contraction. Predictive analysis may also be very useful for smoothing and averaging the EMG signal spectrum in time.
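A Morlet-based CWT that localises a burst in both time and frequency can be sketched as below. This is a plain analytic Morlet evaluated by direct convolution, with a synthetic 80 Hz burst and a 1 kHz sampling rate as our own illustrative assumptions; the paper's modified Morlet and real EMG data are not reproduced here.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, w0=6.0):
    """Continuous wavelet transform with a (plain) Morlet mother wavelet,
    evaluated by direct convolution; returns |coefficients| (freqs x time)."""
    out = np.empty((len(freqs), len(x)))
    for i, f in enumerate(freqs):
        scale = w0 * fs / (2 * np.pi * f)          # scale for centre frequency f
        n = int(10 * scale)
        t = np.arange(-n, n + 1) / scale
        wavelet = np.exp(1j * w0 * t) * np.exp(-t ** 2 / 2) / np.sqrt(scale)
        out[i] = np.abs(np.convolve(x, wavelet, mode="same"))
    return out

fs = 1000                                   # assumed surface-EMG sampling rate
t = np.arange(0, 1, 1 / fs)
# Synthetic 'burst': 80 Hz activity present only in the middle of the window.
x = np.where((t > 0.4) & (t < 0.6), np.sin(2 * np.pi * 80 * t), 0.0)
freqs = np.linspace(20, 200, 19)            # 20, 30, ..., 200 Hz
coeffs = morlet_cwt(x, fs, freqs)
peak_f = freqs[np.argmax(coeffs[:, 500])]   # dominant frequency at t = 0.5 s
print(peak_f)  # → 80.0
```

The coefficient matrix shows energy concentrated near 80 Hz only during the burst interval, which is exactly the kind of time-localised maximum the article examines.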


Archive | 2009

Computer Visual-Auditory Diagnosis of Speech Non-fluency

Mariusz Dzieńkowski; Wiesława Kuniszyk-Jóźkowiak; Elżbieta Smołka; Waldemar Suszyński

The paper focuses on a visual-auditory method of analysing the utterances of stuttering people. The method can be classified as an intermediate solution between the traditional auditory method and fully automatic methods. The author prepared a special computer program, DiagLog, for carrying out the visual-auditory analysis, which can be used by speech therapists (logopaedists) to make a diagnosis. Speech disfluencies are assessed by observing the spectrum and the envelope of fragments of recordings while simultaneously listening to them. A collection of 120 few-minute recordings of 15 stuttering people was used to verify the correctness of the method and to compare it with the traditional auditory technique. All the samples were analysed by means of both the auditory and the visual-auditory method by two independent experts. The diagnosis using the additional visual aspect proved more effective in detecting speech non-fluencies and in classifying and measuring them.


Annales UMCS, Informatica | 2008

Utterance intonation imaging using the cepstral analysis

Ireneusz Codello; Wiesława Kuniszyk-Jóźkowiak; Tomasz Gryglewicz; Waldemar Suszyński

Speech intonation is carried mainly by the fundamental frequency, i.e. the frequency of vocal cord vibrations. Finding these frequency changes can be very useful, for instance, in studying foreign languages, where speech intonation is an inseparable part of the language (like grammar or vocabulary). In our work we present a cepstral algorithm for finding F0 as well as an application that facilitates learning utterance intonation.
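The classic cepstral F0 estimate works by taking the inverse transform of the log magnitude spectrum and looking for a peak in the quefrency band of plausible pitch periods. The sketch below uses a synthetic 120 Hz voiced frame and our own frame length and search band; the paper's exact algorithm parameters are not reproduced.

```python
import numpy as np

def cepstral_f0(frame, fs, fmin=60.0, fmax=400.0):
    """Cepstral pitch estimate: peak of the real cepstrum within the
    quefrency band corresponding to F0 between fmin and fmax."""
    spectrum = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    cepstrum = np.fft.irfft(np.log(spectrum + 1e-10))  # real cepstrum
    qmin, qmax = int(fs / fmax), int(fs / fmin)        # quefrency search band
    peak = qmin + np.argmax(cepstrum[qmin:qmax])       # pitch period in samples
    return fs / peak

fs = 16000
t = np.arange(2048) / fs
# Synthetic voiced frame: 120 Hz fundamental plus two weaker harmonics.
frame = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in (1, 2, 3))
print(cepstral_f0(frame, fs))  # close to 120 Hz
```

Tracking this estimate frame by frame yields the intonation contour that the application visualises.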

Collaboration


An overview of Wiesława Kuniszyk-Jóźkowiak's most frequent co-authors.

Top Co-Authors

Elżbieta Smołka, Maria Curie-Skłodowska University
Waldemar Suszyński, Maria Curie-Skłodowska University
Ireneusz Codello, Maria Curie-Skłodowska University
Marek Wiśniewski, Maria Curie-Skłodowska University
Adam Kobus, Maria Curie-Skłodowska University
Mariusz Dzieńkowski, Lublin University of Technology
Bogdan Adamczyk, Maria Curie-Skłodowska University
Janusz Jaszczuk, Józef Piłsudski University of Physical Education in Warsaw
Karol Kuczyński, Maria Curie-Skłodowska University
Rafał Stęgierski, Maria Curie-Skłodowska University