
Publication


Featured research published by Peter M. Seligman.


Ear and Hearing | 1991

Performance of postlinguistically deaf adults with the Wearable Speech Processor (WSP III) and Mini Speech Processor (MSP) of the Nucleus Multi-Electrode Cochlear Implant.

Margaret W. Skinner; Laura K. Holden; Timothy A. Holden; Richard C. Dowell; Peter M. Seligman; Judith A. Brimacombe; Anne L. Beiter

Seven postlinguistically deaf adults implanted with the Nucleus Multi-Electrode Cochlear Implant participated in an evaluation of speech perception performance with three speech processors: the Wearable Speech Processor (WSP III), a prototype of the Mini Speech Processor, and the Mini Speech Processor. The first experiment was performed with the prototype and Wearable Speech Processor both programmed using the F0F1F2 speech coding strategy. The second experiment compared performance with the Mini Speech Processor programmed with the Multi-Peak speech coding strategy and the Wearable Speech Processor programmed with the F0F1F2 speech coding strategy. Performance was evaluated in the sound-only condition using recorded speech tests presented in quiet and in noise. Questionnaires and informal reports provided information about use in everyday life. In experiment I, there was no significant difference in performance using the Wearable Speech Processor and prototype on any of the tests. Nevertheless, six out of seven subjects preferred the prototype for use in everyday life. In experiment II, performance on open-set tests in quiet and noise was significantly higher with the Mini Speech Processor (Multi-Peak speech coding strategy) than with the Wearable Speech Processor. Subjects reported an increase in their ability to communicate with other people using the Mini Speech Processor (Multi-Peak speech coding strategy) compared with the Wearable Speech Processor in everyday life.


Journal of the Acoustical Society of America | 1985

Speech processing method and apparatus

James F. Patrick; Peter M. Seligman; Y. C. Tong; Graeme M. Clark

A signal processing system for converting a speech signal into a data signal that controls a hearing prosthesis with an implanted electrode array, which stimulates the patient's auditory nerve fibers by applying electrical currents to selected electrodes in the array. The system generates an input signal corresponding to a received speech signal. The amplitude and frequency of the fundamental voicing component of the speech signal are approximated, as are the amplitude and frequency of at least one formant. A programmable microprocessor produces instructions that cause electrical currents to be applied to selected groups of electrodes in the array, with or without delays between the stimulation of each electrode in a group. The microprocessor is programmed with data defining a predetermined relationship between each group of electrodes and a selected range of at least one formant frequency, and with data defining a predetermined relationship between another formant frequency and the delay between stimulation of each electrode in each group, based on psychophysical testing of the patient. Selecting electrodes according to the estimated formant frequencies produces the desired percepts in the auditory sensations generated in the patient. The microprocessor is further programmed to produce data that determines the stimulation level of each selected group of electrodes, and the delay between stimulation of electrodes in each group, depending on the estimated formant amplitudes of the speech signal as well as on predetermined data relating to the sensitivity of each electrode implanted in the patient.
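The mapping the patent describes (a formant frequency estimate selects an electrode, and the amplitude estimate sets a stimulation current within that electrode's psychophysically measured range) can be sketched in a few lines. The band edges, threshold levels, and comfortable levels below are invented placeholders, not real patient maps or Nucleus device parameters:

```python
import bisect

# Hypothetical per-patient map from psychophysical testing: each electrode
# covers a band of second-formant frequencies and has its own threshold (T)
# and comfortable (C) stimulation levels. All numbers are illustrative.
F2_BAND_EDGES = [800, 1100, 1400, 1800, 2300, 3000, 4000]  # Hz
T_LEVELS = [100, 110, 105, 120, 115, 108]   # device current units
C_LEVELS = [180, 200, 190, 210, 205, 195]

def select_electrode(f2_hz):
    """Pick the electrode whose frequency band contains the F2 estimate."""
    i = bisect.bisect_left(F2_BAND_EDGES, f2_hz) - 1
    return max(0, min(i, len(T_LEVELS) - 1))  # clamp to the array

def current_level(electrode, amplitude):
    """Map a 0..1 amplitude estimate into the electrode's T-to-C range."""
    t, c = T_LEVELS[electrode], C_LEVELS[electrode]
    return round(t + amplitude * (c - t))

e = select_electrode(1500)        # falls in the 1400-1800 Hz band
print(e, current_level(e, 0.5))
```

In the actual device this mapping would be programmed per patient, since threshold and comfortable levels vary between electrodes and between implant recipients.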


Journal of the Acoustical Society of America | 1987

Acoustic parameters measured by a formant-estimating speech processor for a multiple-channel cochlear implant

Peter J. Blamey; Richard C. Dowell; Graeme M. Clark; Peter M. Seligman

In order to assess the limitations imposed on a cochlear implant system by a wearable speech processor, the parameters extracted from a set of 11 vowels and 24 consonants were examined. An estimate of the fundamental frequency (EF0) was derived from the zero crossings of the low-pass filtered envelope of the waveform. Estimates of the first and second formant frequencies (EF1 and EF2) were derived from the zero crossings of the waveform, filtered in the ranges 300-1000 Hz and 800-4000 Hz. Estimates of the formant amplitudes (EA1 and EA2) were derived by peak detectors operating on the outputs of the same filters. For vowels, these parameters corresponded well to the first and second formants and gave sufficient information to identify each vowel. For consonants, the relative levels and onset times of EA1 and EA2 and the EF0 values gave cues to voicing. The variation in time of EA1, EA2, EF1, and EF2 gave cues to the manner of articulation. Cues to the place of articulation were given by EF1 and EF2. When pink noise was added, the parameters were gradually degraded as the signal-to-noise ratio decreased. Consonants were affected more than vowels, and EF2 was affected more than EF1. Results for three good patients using a speech processor that coded EF0 as an electric pulse rate, EF1 and EF2 as electrode positions, and EA1 and EA2 as electric current levels confirmed that the parameters were useful for recognition of vowels and consonants. Average scores were 76% for recognition of 11 vowels and 71% for 12 consonants in the hearing-alone condition. The error rates were 4% for voicing, 12% for manner, and 25% for place.
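The zero-crossing frequency estimate and peak-detector amplitude described above can be illustrated with a minimal sketch: the number of sign changes per second in a band-filtered waveform is roughly twice its dominant frequency. The bandpass filtering stage is omitted here, and a pure tone stands in for speech:

```python
import math

def zero_crossing_freq(samples, fs):
    """Estimate dominant frequency (Hz) from sign changes in the waveform."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0)
    )
    duration = (len(samples) - 1) / fs
    return crossings / (2 * duration)   # two crossings per cycle

def peak_amplitude(samples):
    """Simple peak detector: rectify and take the maximum."""
    return max(abs(x) for x in samples)

fs = 16000  # samples per second
tone = [0.5 * math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
print(round(zero_crossing_freq(tone, fs)))  # ~440
print(round(peak_amplitude(tone), 2))       # ~0.5
```

For real speech the waveform would first be split into the 300-1000 Hz and 800-4000 Hz bands, with the same crossing count and peak detector applied to each band's output.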


Journal of the Acoustical Society of America | 1980

Speech processing for a multiple‐electrode cochlear implant hearing prosthesis

Y. C. Tong; Graeme M. Clark; Peter M. Seligman; J. F. Patrick



Acta Oto-laryngologica | 1995

Evaluation of the Nucleus Spectra 22 Processor and New Speech Processing Strategy (SPEAK) in Postlinguistically Deafened Adults

Lesley A. Whitford; Peter M. Seligman; Colleen Everingham; Trisha Antognelli; M. Skok; R. Hollow; Kerrie Plant; Elvira S. Gerin; Steve Staller; Hugh J. McDermott; William P.R. Gibson; Graeme M. Clark

A new speech processing strategy (SPEAK) has been compared with the previous Multipeak (MPEAK) strategy in a study with 24 postlinguistically deafened adults. The results show that performance with the SPEAK coding strategy was significantly better for 58.3% of subjects on closed-set consonant identification, for 33.3% of subjects on closed-set vowel identification and open-set monosyllabic word recognition, and for 81.8% of subjects on open-set sentence recognition in quiet and in competing noise (+10 dB signal-to-noise ratio). By far the largest improvement observed was for sentence recognition in noise, with the mean score across subjects for the SPEAK strategy twice that obtained with MPEAK.


Acta Oto-laryngologica | 1987

Speech Perception Using a Two-formant 22-Electrode Cochlear Prosthesis in Quiet and in Noise

Richard C. Dowell; Peter M. Seligman; Peter J. Blamey; Graeme M. Clark

A new speech-processing strategy has been developed for the Cochlear Pty. Ltd. 22-electrode cochlear prosthesis which codes an estimate of the first formant frequency in addition to the amplitude, voice pitch and second formant frequencies. Two groups of cochlear implant patients were tested 3 months after implant surgery, one group (n = 13) having used the old (F0F2) processing strategy and the other (n = 9) having used the new (F0F1F2) strategy. All patients underwent similar postoperative training programs. Results indicated significantly improved speech recognition for the F0F1F2 group, particularly on open-set tests with audition alone. Additional testing with a smaller group of patients was carried out with competing noise (speech babble). Results for a closed-set spondee test showed that patient performance was significantly degraded at a signal-to-noise ratio of 10 dB when using the F0F2 strategy, but was not significantly affected with the F0F1F2 strategy.


Journal of Laryngology and Otology | 1985

Telephone use by a multi-channel cochlear implant patient. An evaluation using open-set CID sentences.

A. M. Brown; Graeme M. Clark; Richard C. Dowell; L. F. Martin; Peter M. Seligman

A totally deaf person with a multiple-channel cochlear prosthesis obtained open-set speech discrimination using the telephone. CID Everyday Sentences were presented by telephone to the patient, who repeated an average of 21 per cent of key words correctly on the first presentation, and 47 per cent when a repeat of the sentences was permitted. This result is consistent with the patient's reports of telephone usage.


Acta Oto-laryngologica | 1984

A signal processor for a multiple-electrode hearing prosthesis

Peter M. Seligman; James F. Patrick; Y. C. Tong; Graeme M. Clark; Richard C. Dowell; P. A. Crosby

A 22-electrode implantable hearing prosthesis uses a wearable speech processor which estimates three speech signal parameters. These are voice pitch, second formant frequency and flattened spectrum amplitude. The signal is monitored continuously for periodicity in the range 80-400 Hz and, if this is present, stimulation occurs at the same rate. Otherwise, as in the case of unvoiced sounds, it occurs at the random rate of fluctuation of the signal envelope. The second formant is obtained by filtering to extract the dominant peak in the midband region and by continuous measurement of the zero crossing rate. The amplitude measured is that of the whole speech spectrum, pre-emphasized by differentiation. The values that are presented to the patient are the parameter estimates immediately prior to the stimulation pulse. Second formant frequency is coded by selection of an appropriate electrode in the cochlea and amplitude by a suitably controlled current. Automatic gain control is used to keep the dynamic range of the amplitude estimate within the 30 dB range of the circuitry.
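A minimal sketch of the per-frame rate decision made by multiple-electrode signal processors of this kind: if the signal shows periodicity in the 80-400 Hz voicing range, stimulation follows the voice pitch; otherwise an aperiodic rate is used. The function below is a hypothetical illustration, not the device's firmware, and a seeded random source stands in for the envelope-driven random rate:

```python
import random

VOICING_RANGE_HZ = (80, 400)  # periodicity range the processor monitors

def stimulation_rate(pitch_estimate_hz, rng=random):
    """Return the stimulation rate (pulses/s) for one analysis frame."""
    lo, hi = VOICING_RANGE_HZ
    if lo <= pitch_estimate_hz <= hi:
        return pitch_estimate_hz      # voiced: stimulate at the voice pitch
    return rng.uniform(lo, hi)        # unvoiced: aperiodic rate

print(stimulation_rate(120))  # voiced frame -> follows the 120 Hz pitch
```

In the hardware described here, the aperiodic rate is driven by the fluctuation of the signal envelope rather than a software random generator; the structure of the decision is the same.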


Annals of Otology, Rhinology, and Laryngology | 1981

A Multiple-Channel Cochlear Implant: An Evaluation using Nonsense Syllables

Graeme M. Clark; Y. C. Tong; L. F. Martin; P. A. Busby; Richard C. Dowell; Peter M. Seligman; James F. Patrick

A study using nonsense syllables has shown that a multiple-channel cochlear implant with speech processor is effective in providing information about voicing and manner and, to a lesser extent, place distinctions. These distinctions supplement lipreading cues. Furthermore, the average percentage improvements in overall identification scores for multiple-channel electrical stimulation and lipreading compared to lipreading alone were 71% for a laboratory-based speech processor and 122% for a wearable unit.


Hearing Research | 1998

Reduction in excitability of the auditory nerve following acute electrical stimulation at high stimulus rates. III Capacitive versus non-capacitive coupling of the stimulating electrodes

Christie Q. Huang; Robert K. Shepherd; Peter M. Seligman; Graeme M. Clark


Collaboration

Dive into Peter M. Seligman's collaboration.

Top Co-Authors

Y. C. Tong
University of Melbourne

David K Money
Medical University of South Carolina

A. M. Brown
University of Melbourne