Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where William J. Strong is active.

Publication


Featured research published by William J. Strong.


Journal of the Acoustical Society of America | 1974

Fifty‐four voices from two: the effects of simultaneous manipulations of rate, mean fundamental frequency, and variance of fundamental frequency on ratings of personality from speech

Bruce L. Brown; William J. Strong; Alvin C. Rencher

Utterances of two adult males were analyzed and synthesized by a fast Fourier transform method. Each of the two voices was synthesized in each of the twenty‐seven combinations of three levels each of rate, mean F0, and variance of F0 (a total of fifty‐four “voices” generated from two). The effects of the rate, mean F0, and variance of F0 manipulations, the interactive effects of rate and variance of F0, and the effects due to speaker were all statistically significant predictors of personality ratings given the voices. They accounted for 86%, 4%, 3%, 2%, and 1% of the variance, respectively, in competence ratings and 48%, 1%, 6%, 1%, and 8% of the variance, respectively, in benevolence ratings. Increased speaking rate was found to decrease the benevolence ratings, and decreased rate was found to decrease competence ratings. Decreased variance of F0 was found to decrease the ratings on both competence and benevolence. Increased mean F0 in these male voices was also found to decrease competence and benevo...


Journal of the Acoustical Society of America | 1973

Perceptions of personality from speech: effects of manipulations of acoustical parameters.

Bruce L. Brown; William J. Strong; Alvin C. Rencher

A speech analysis‐synthesis system was used to manipulate variance of fundamental frequency and a mechanical rate changer was used to manipulate speech rate. The synthesized and altered voices were tested for realism. Synthesized voices were mistaken for normal 50% to 58% of the time and rate‐changed voices were mistaken for normal 78% of the time. Additional studies were conducted to test the effects of these acoustical manipulations on the adjective ratings judges made of speakers. Variance of intonation was increased and decreased by 50% for eight speakers. There was a significant trend for increased intonation to cause voices to be rated more “benevolent” by judges and decreased intonation to cause them to be rated less “benevolent.” In two additional studies, rate was decreased and increased by varying amounts. Slowing the voices caused them to be rated less “competent.” Speeding the voices caused them to be rated less “benevolent.” Results were more consistent over speakers for rate manipulations th...


Journal of the Acoustical Society of America | 1979

Speech coding hearing aid system utilizing formant frequency transformation

William J. Strong; Edward Paul Palmer

A hearing aid system and method includes apparatus for receiving a spoken speech signal, apparatus coupled to the receiving apparatus for determining at successive intervals in the speech signal the frequency and amplitude of the largest formants, apparatus for determining at successive intervals the fundamental frequency of the speech signal, and apparatus for determining at successive intervals whether the speech signal is voiced or unvoiced. Each successively determined formant frequency is divided by a fixed value, greater than 1, and added thereto is another fixed value, to obtain what are called transposed formant frequencies. The fundamental frequency is also divided by a fixed value, greater than 1, to obtain a transposed fundamental frequency. At the successive intervals, sine waves having frequencies corresponding to the transposed formant frequencies and the transposed fundamental frequency are generated, and these sine waves are combined to obtain an output signal which is applied to a transducer for producing an auditory signal. The amplitudes of the sine waves are functions of the amplitudes of corresponding formants. If it is determined that the speech signal is unvoiced, then no sine wave corresponding to the transposed fundamental frequency is produced and the other sine waves are noise modulated. The auditory signal produced by the transducer in effect constitutes a coded signal occupying a frequency range lower than the frequency range of normal speech and yet which is in the residual-hearing range of many hearing-impaired persons.
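The transposition rule in the abstract reduces to two arithmetic steps: divide each formant frequency by a fixed value greater than 1 and add a fixed offset, and divide the fundamental by a fixed value with no offset. A minimal sketch of that rule, with illustrative divisor and offset values that are assumptions, not figures from the patent:

```python
def transpose_formant(freq_hz, divisor=2.0, offset_hz=100.0):
    """Transpose one formant frequency: divide by a fixed value
    (> 1), then add a fixed offset (divisor and offset here are
    illustrative, not values from the patent)."""
    assert divisor > 1.0
    return freq_hz / divisor + offset_hz

def transpose_fundamental(f0_hz, divisor=2.0):
    """The fundamental frequency is only divided; no offset is added."""
    assert divisor > 1.0
    return f0_hz / divisor
```

With these example values, a 2000 Hz formant maps to 1100 Hz and a 200 Hz fundamental to 100 Hz, compressing speech into a lower band while keeping formants above the shifted fundamental.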


Journal of the Acoustical Society of America | 1967

Synthesis of Wind‐Instrument Tones

William J. Strong; Melville Clark

Clarinet, oboe, bassoon, tuba, flute, trumpet, trombone, French horn, and English horn tones have been synthesized with partials controlled by one spectral envelope (fixed for each instrument regardless of note frequency) and three temporal envelopes. Musically literate auditors identified natural tones with 85% accuracy and our synthesized tones with 66% accuracy; a number of the confusions were intrafamily. With intrafamily confusions tolerated in the scoring, the auditors identified natural tones with 94% accuracy and our synthetic ones with 77% accuracy.


Journal of the Acoustical Society of America | 1979

Numerical method for calculating input impedances of the oboe

George R. Plitnik; William J. Strong

The purpose of this study was to investigate a numerical method for obtaining input impedances of double‐reed instruments—the oboe in particular. To this end, the physical dimensions of an oboe were used to compute its input impedance as a function of frequency for several different fingerings. The numerically computed input impedances of the oboe were compared to experimentally measured curves with good agreement resulting in most cases. The reasons for the observed discrepancies are discussed and suggestions for improving the agreement between the predicted and experimental frequencies are given.


Journal of the Acoustical Society of America | 1996

A stroboscopic study of lip vibrations in a trombone

David C. Copley; William J. Strong

The purpose of the present study was to obtain detailed photographic sequences and lip motion data on which lip models for brass instruments may be more accurately based. The study expands upon an earlier study by Martin [J. Acoust. Soc. Am. 13, 305–307 (1942)] by using advanced fiber‐optic stroboscopy, a real instrument mouthpiece, and by studying two dynamic levels. The trombone was selected as representative of the brass family because its relatively large mouthpiece permitted the use of an optic probe. Lip motion was observed from the front and side for six notes (Bflat2, F3, Bflat3, D4, F4, Aflat4) played at loud and soft dynamic levels. The video sequences were used to obtain information on lip opening area, lip motion perpendicular to airflow, and lip motion parallel to airflow. The data are tabulated and represented in graphic form.


Journal of the Acoustical Society of America | 1986

Simulation of a player–clarinet system

Scott D. Sommerfeldt; William J. Strong

A time‐domain simulation model has been developed for investigating the player–clarinet system. The three components that constitute the simulation model consist of the player’s air column, reed, and the clarinet. The player’s air column is represented in terms of an analogous circuit model to obtain the mouth pressure. The reed is represented as a damped, driven, nonuniform bar. The clarinet is represented in terms of a scaled version of its input impedance impulse response. A convolution of the impulse response with the volume velocity determines the mouthpiece pressure. Use of the model is valid for both small‐ and large‐amplitude reed oscillations. Many of the nonlinearities associated with the clarinet are incorporated in the model in a rather natural way. Several vocal tract configurations are investigated to determine the influence of the vocal tract on the player’s air column impedance and the concomitant effect on the clarinet tone.
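The coupling step described above, mouthpiece pressure obtained by convolving the bore's input-impedance impulse response with the volume velocity, can be sketched in a few lines. The impulse response used in the usage note is a made-up placeholder, not a measured clarinet response.

```python
import numpy as np

def mouthpiece_pressure(h, u):
    """Causal discrete-time convolution of the input-impedance
    impulse response h with the volume-velocity signal u,
    truncated to the length of u (samples of u beyond the
    current time cannot contribute)."""
    return np.convolve(h, u)[: len(u)]
```

For example, with a toy two-tap response h = [1.0, 0.5] and a constant flow u = [1.0, 1.0, 1.0], the pressure samples are [1.0, 1.5, 1.5].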


Journal of the Acoustical Society of America | 1983

A model for the synthesis of natural sounding vowels

Donald R. Allen; William J. Strong

A model has been developed which is designed to preserve some of the naturalness that is usually lost in speech synthesis. A parameterized function is used to produce an approximation to the cross‐sectional area through the glottis. A circuit model of the subglottal and glottal system is used to generate the volume velocity of the air through the glottis from the lung pressure and the time‐varying supraglottal pressure. The tract is represented by its input impedance impulse response which can be calculated from the area function of the tract. A convolution of the input impedance impulse response with the volume velocity determines the supraglottal pressure. The equations relating the above two conditions for the volume velocity are solved simultaneously. The output of the model is generated by convolving the resulting glottal volume velocity with the transfer function impulse response of the tract. A comparison is made between vowels synthesized with and without the vocal tract glottal flow interaction. Li...


Journal of the Acoustical Society of America | 1967

Perturbations of Synthetic Orchestral Wind‐Instrument Tones

William J. Strong; Melville Clark

The relative significance of spectral and temporal envelopes for the synthesis of orchestral wind‐instrument tones was evaluated by exchange of spectral and temporal envelopes among the wind instruments, by creation of artificial spectral envelopes, and by perturbation of the spectral envelopes. It was found that, for the oboe, clarinet, bassoon, tuba, and trumpet, where the spectral envelope is unique as regards the frequency of its maximum and the range in which the instrument is normally played, this envelope predominates in aural significance over the temporal envelope. Where the spectral envelope is not unique—as for the flute, trombone, and French horn—the spectral envelope is equal or subordinate to the temporal one in aural significance. Interfamily confusions are fewer in those cases where the spectral envelope is of predominant importance: about 14% for the clarinet, oboe, bassoon, and tuba and about 25% for the flute, trumpet, trombone, and French horn. The ratio between identification probabi...


IEEE Transactions on Acoustics, Speech, and Signal Processing | 1976

A comparison of three methods of extracting resonance information from predictor-coefficient coded speech

R. Christensen; William J. Strong; E. Palmer

Three methods of extracting resonance information from predictor-coefficient coded speech are compared. The methods are finding roots of the polynomial in the denominator of the transfer function using Newton iteration, picking peaks in the spectrum of the transfer function, and picking peaks in the negative of the second derivative of the spectrum. A relationship was found between the bandwidth of a resonance and the magnitude of the second derivative peak. Data, accumulated from a total of about two minutes of running speech from both female and male talkers, are presented illustrating the relative effectiveness of each method in locating resonances. The second-derivative method was shown to locate about 98 percent of the significant resonances while the simple peak-picking method located about 85 percent.
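The root-finding and second-derivative methods compared in the paper can be illustrated on a synthetic predictor polynomial built from two known resonators. The sketch below shows the general techniques, not the paper's implementation; the resonator construction and peak-picking thresholds are assumptions.

```python
import numpy as np

def resonances_from_roots(a, fs):
    """Root method: find zeros of the predictor polynomial A(z)
    (coefficients a, leading 1 first); each complex root in the
    upper half-plane marks one resonance frequency."""
    r = np.roots(a)
    r = r[np.imag(r) > 1e-9]
    return np.sort(np.angle(r) * fs / (2.0 * np.pi))

def resonances_from_second_derivative(a, fs, nfft=4096):
    """Second-derivative method: pick positive local maxima of the
    negative second difference of the log power spectrum of 1/A(z).
    Unlike simple peak picking on the spectrum itself, this can
    resolve closely spaced resonances that merge into one peak."""
    s = -2.0 * np.log(np.abs(np.fft.rfft(a, nfft)))  # log spectrum
    d2 = -np.diff(s, 2)                              # -S'' per interior bin
    idx = np.where((d2[1:-1] > d2[:-2]) &
                   (d2[1:-1] > d2[2:]) &
                   (d2[1:-1] > 0.0))[0] + 2          # offset back to bins
    return idx * fs / nfft
```

As a check, a polynomial formed by cascading two second-order resonators with poles at 500 Hz and 1500 Hz (radius 0.95, 8 kHz sampling) yields those frequencies exactly from the root method and to within a few spectral bins from the second-derivative method.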

Collaboration


Dive into William J. Strong's collaborations.

Top Co-Authors

Bruce L. Brown
Brigham Young University

Brian E. Anderson
Los Alamos National Laboratory

Kent L. Gee
Brigham Young University

Neville H. Fletcher
Australian National University