Lawrence C. Ng
Lawrence Livermore National Laboratory
Publication
Featured research published by Lawrence C. Ng.
Journal of the Acoustical Society of America | 1998
John F. Holzrichter; Gregory C. Burnett; Lawrence C. Ng; Wayne A. Lea
Very low power electromagnetic (EM) wave sensors are being used to measure speech articulator motions as speech is produced. Glottal tissue oscillations, jaw, tongue, soft palate, and other organs have been measured. Microwave imaging (e.g., using radar sensors) appears not to have been considered previously for such monitoring. Glottal tissue movements detected by radar sensors correlate well with those obtained by established laboratory techniques, and have been used to estimate a voiced excitation function for speech processing applications. The noninvasive access, coupled with the small size, low power, and high resolution of these new sensors, permits promising research and development applications in speech production, communication disorders, speech recognition, and related topics.
international conference on acoustics, speech, and signal processing | 2000
Lawrence C. Ng; Gregory C. Burnett; John F. Holzrichter; Todd J. Gable
Low power EM radar-like sensors have made it possible to measure properties of the human speech production system in real time, without acoustic interference. This greatly enhances the quality and quantity of information for many speech-related applications [see Holzrichter, Burnett, Ng, and Lea, J. Acoust. Soc. Am. 103(1), 622 (1998)]. By combining glottal EM-sensor and acoustic signals, segments of voiced speech, unvoiced speech, and no speech can be reliably defined. Real-time de-noising filters can be constructed to remove noise from the user's corresponding speech signal.
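The abstract's combined-sensor idea can be sketched as follows. This is an illustrative reconstruction, not the paper's algorithm: the function name, frame size, and motion threshold are assumptions. Frames in which the glottal EM-sensor channel shows no motion are treated as no-speech, used to maintain a running noise spectrum, and that spectrum is then subtracted from the acoustic channel.

```python
import numpy as np

def denoise(acoustic, em_glottal, frame=256, motion_thresh=0.01):
    """De-noise the acoustic channel using EM-sensor voicing segmentation.

    Frames whose EM glottal channel is quiet (no vocal-fold motion) update a
    running noise power spectrum; all frames are then cleaned by simple
    spectral subtraction. Frame size and threshold are illustrative.
    """
    out = np.zeros(len(acoustic))
    noise_psd = np.zeros(frame)
    n_noise = 0
    for start in range(0, len(acoustic) - frame + 1, frame):
        a = np.asarray(acoustic[start:start + frame], dtype=float)
        g = np.asarray(em_glottal[start:start + frame], dtype=float)
        spec = np.fft.fft(a)
        if np.std(g) < motion_thresh:        # EM sensor shows no glottal motion
            n_noise += 1
            noise_psd += (np.abs(spec) ** 2 - noise_psd) / n_noise  # running mean
        if n_noise:
            mag = np.sqrt(np.maximum(np.abs(spec) ** 2 - noise_psd, 0.0))
            spec = mag * np.exp(1j * np.angle(spec))  # spectral subtraction
        out[start:start + frame] = np.real(np.fft.ifft(spec))
    return out
```

The key point from the abstract is that the voiced/unvoiced/no-speech decision comes from the EM channel, which is immune to acoustic background noise, rather than from the (noisy) microphone signal itself.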
Journal of the Acoustical Society of America | 2000
Ingo R. Titze; Brad H. Story; Gregory C. Burnett; John F. Holzrichter; Lawrence C. Ng; Wayne A. Lea
Newly developed glottographic sensors, utilizing high-frequency propagating electromagnetic waves, were compared to a well-established electroglottographic device. The comparison was made on four male subjects under different phonation conditions, including three levels of vocal fold adduction (normal, breathy, and pressed), three different registers (falsetto, chest, and fry), and two different pitches. Agreement between the sensors was always found for the glottal closure event, but for the general wave shape the agreement was better for falsetto and breathy voice than for pressed voice and vocal fry. Differences are attributed to the field patterns of the devices. Whereas the electroglottographic device can operate only in a conduction mode, the electromagnetic device can operate in either the forward scattering (diffraction) mode or in the backward scattering (reflection) mode. Results of our tests favor the diffraction mode because a more favorable angle imposed on receiving the scattered (reflected) signal did not improve the signal strength. Several observations are made on the uses of the electromagnetic sensors for operation without skin contact and possibly in an array configuration for improved spatial resolution within the glottis.
Journal of the Acoustical Society of America | 2005
John F. Holzrichter; Lawrence C. Ng; Gerry J. Burke; Nathan J. Champagne; Jeffrey S. Kallman; Robert M. Sharpe; James B. Kobler; Robert E. Hillman; John J. Rosowski
Low power, radarlike electromagnetic (EM) wave sensors, operating in a homodyne interferometric mode, are being used to measure tissue motions in the human vocal tract during speech. However, when these and similar sensors are used in front of the laryngeal region during voiced speech, there remains an uncertainty regarding the contributions to the sensor signal from vocal fold movements versus those from pressure-induced trachea-wall movements. Several signal-source hypotheses are tested by performing experiments with a subject who had undergone tracheostomy, and who still was able to phonate when her stoma was covered (e.g., with a plastic plate). Laser-Doppler motion measurements of the subject's posterior trachea show small tissue movements, about 15 microns, that do not contribute significantly to signals from presently used EM sensors. However, signals from the anterior wall do contribute. EM sensor and air-pressure measurements, together with 3-D EM wave simulations, show that EM sensors measure movements of the vocal folds very well. The simulations show a surprisingly effective guiding of EM waves across the vocal fold membrane, which, upon glottal opening, are interrupted and reflected. These measurements are important for EM sensor applications to speech signal de-noising, vocoding, speech recognition, and diagnostics.
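The homodyne interferometric mode named above can be illustrated with a small numerical sketch (the carrier frequency and bias point are assumptions, not values from the paper): mixing the reflected wave with the transmitted reference gives a baseband output proportional to cos(4*pi*d(t)/lambda + phi0), so tissue motions d(t) much smaller than the wavelength, such as the ~15 micron wall movements mentioned in the abstract, appear as nearly linear voltage changes when the sensor is biased at quadrature.

```python
import numpy as np

wavelength = 0.12            # assumed ~2.5 GHz carrier in air, metres
phi0 = np.pi / 2             # quadrature bias point for maximum sensitivity
t = np.linspace(0.0, 0.02, 2000)                # 20 ms of time
d = 15e-6 * np.sin(2 * np.pi * 120 * t)         # ~15 micron motion at 120 Hz

v = np.cos(4 * np.pi * d / wavelength + phi0)   # homodyne mixer output
# At quadrature, cos(x + pi/2) = -sin(x) ~ -x, so the output is
# proportional to displacement for d << wavelength:
v_linear = -4 * np.pi * d / wavelength
```

Because the displacement is roughly four orders of magnitude below the wavelength, the linearization error here is negligible, which is why such sensors can track micron-scale tissue motion directly.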
Journal of the Acoustical Society of America | 1983
Lawrence C. Ng; Robert A. LaTourette
Random bearing error is a major performance measure of a sonar bearing tracker. Programs currently employed in calculating random bearing error from measured tracker bearing error data use a standard polynomial Least Mean Square Fit (LMSF) algorithm to remove an unknown time-varying mean. Previously, the effect of the LMSF algorithm on the residuals of the measured tracker bearing error data was not fully accounted for. In addition, when processing correlated bearing error residuals, the optimum choice of the order of the LMSF and the appropriate bias correction factor as a function of signal-to-noise ratio (SNR) were not known. This study investigates the properties of the LMSF in detail and shows that the LMSF behaves as a low-pass filter, the frequency response characteristics of which can be calculated exactly. The equivalent noise bandwidth of the LMSF is shown to be a function of the sample size, the sampling time, and the order of the fit. The appropriate bias correction factor, when processing correlated data, is shown to be determined by the ratio of the LMSF bandwidth to the equivalent tracker bandwidth. Results of the analysis are verified by extensive simulation. Finally, an operational procedure is given to obtain an unbiased estimate of the variance for at-sea measured tracker data.
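The filtering view of the LMSF described above can be made concrete with a short sketch (a plausible reconstruction, not the paper's code): removing a polynomial trend applies the residual operator R = I - H, where H = V (VᵀV)⁻¹ Vᵀ projects onto the polynomial basis V. For white noise, the residual variance is trace(R)/N = (N - p - 1)/N, which directly gives the bias correction factor the abstract refers to.

```python
import numpy as np

def lmsf_residual_operator(n_samples, order):
    """Residual matrix R = I - V (V^T V)^-1 V^T for a polynomial LMSF."""
    t = np.linspace(-1.0, 1.0, n_samples)    # normalized time axis
    V = np.vander(t, order + 1)              # polynomial basis (Vandermonde)
    H = V @ np.linalg.solve(V.T @ V, V.T)    # hat (projection) matrix
    return np.eye(n_samples) - H

def detrend(x, order):
    """Remove a polynomial trend of the given order from x."""
    return lmsf_residual_operator(len(x), order) @ x

# For unit-variance white noise, the residual variance is trace(R)/N,
# so multiplying the raw variance estimate by N / (N - p - 1) unbiases it.
N, p = 200, 3
R = lmsf_residual_operator(N, p)
bias_correction = N / np.trace(R)            # ~1.0204 for N=200, p=3
```

For correlated residuals, as the abstract notes, this white-noise correction is no longer exact and must be replaced by one based on the ratio of the LMSF bandwidth to the tracker bandwidth.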
Journal of the Acoustical Society of America | 1999
Lawrence C. Ng; John F. Holzrichter; Gregory C. Burnett; Todd J. Gable
Recently, very low‐power EM radarlike sensors have been used to measure the macro‐ and micro‐motions of human speech articulators as human speech is produced [see Holzrichter et al., J. Acoust. Soc. Am. 103, 622 (1998)]. These sensors can measure tracheal wall motions, associated with the air pressure build up and fall as the vocal folds open and close, leading to a voiced speech excitation function. In addition, they provide generalized motion measurements of vocal tract articulator gestures that lead to speech formation. For example, tongue, jaw, lips, velum, and pharynx motions have been measured as speech is produced. Since the EM sensor information is independent of acoustic air pressure waves, it is independent of the state of the acoustic background noise spectrum surrounding the speaker. By correlating the two streams of information together, from a microphone and (one or more) EM sensor signals, to characterize a speaker’s speech signal, much of the background speaker noise can be eliminated in r...
Journal of the Acoustical Society of America | 1989
Lawrence C. Ng; Robert A. LaTourette; Adam Siconolfi
A new approach is developed to reduce the computational complexity of a moving average Least Mean Square Fit (LMSF) procedure. For a long data window, a traditional batch approach would result in a large number of multiply and add operations (i.e., of order N, where N is the window length). This study shows that the moving average batch LMSF procedure can be made equivalent to a recursive process with identical filter memory length but with an order-N reduction in computational load. The increase in speed due to reduced computation makes the moving average LMSF procedure competitive for many real-time processing applications. Finally, this paper also addresses the numerical accuracy and stability of the algorithm.
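The recursion described above can be sketched for the linear (first-order) case; this is an illustrative reconstruction under assumed notation, not the paper's derivation. Over a sliding window of length n with local index k = 0..n-1, the fit needs only the running sums A = Σy and B = Σk·y, and both update in O(1) when the window slides: A' = A - y_old + y_new and B' = B - A + y_old + (n-1)·y_new.

```python
import numpy as np
from collections import deque

class RecursiveLMSF:
    """Sliding-window linear LMSF updated in O(1) per sample (sketch)."""

    def __init__(self, n):
        self.n = n
        self.buf = deque()
        self.A = 0.0                              # running sum of y
        self.B = 0.0                              # running sum of k * y
        self.Sk = n * (n - 1) / 2.0               # sum of k
        self.Skk = (n - 1) * n * (2 * n - 1) / 6.0  # sum of k^2

    def update(self, y_new):
        """Slide the window; return the residual at the newest sample,
        or None until the window is full."""
        if len(self.buf) < self.n:
            self.buf.append(y_new)
            self.B += (len(self.buf) - 1) * y_new
            self.A += y_new
            if len(self.buf) < self.n:
                return None
        else:
            y_old = self.buf.popleft()
            self.buf.append(y_new)
            # O(1) recursive update of the two sums
            self.B = self.B - self.A + y_old + (self.n - 1) * y_new
            self.A = self.A - y_old + y_new
        n, A, B, Sk, Skk = self.n, self.A, self.B, self.Sk, self.Skk
        slope = (n * B - Sk * A) / (n * Skk - Sk * Sk)
        intercept = (A - slope * Sk) / n
        return y_new - (intercept + slope * (n - 1))
```

The accumulated round-off in A and B is the numerical-stability concern the abstract raises; in practice the sums are periodically recomputed from the buffer to bound the drift.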
Journal of the Acoustical Society of America | 2013
John F. Holzrichter; Lawrence C. Ng; John T. Chang
Voice activity sensors commonly measure voiced-speech-induced skin vibrations using contact microphones or related techniques. We show that micro-power EM wave sensors have advantages over acoustic techniques by directly measuring vocal-fold motions, especially during closure. This provides 0.1 ms timing accuracy (i.e., ~10 kHz bandwidth) relative to the corresponding acoustic signal, with data arriving ~0.5 ms in advance of the acoustic speech leaving the speaker’s mouth. Preceding or following unvoiced and silent speech segments can then be well defined. These characteristics enable anti-speech waves to be generated, or prior recorded waves recalled, synchronized, and broadcast with high accuracy to mask the user’s real-time speech signal. A particularly useful masking process uses an acoustic voiced signal from the prior voiced speech period, which is inverted, carefully timed, and rebroadcast in phase with the acoustic signal currently being spoken. This leads to real-time cancellation of a substantial ...
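The masking process can be sketched numerically; this is an illustration of the timing idea, not the paper's implementation, and the sampling rate and signal shape are assumptions. The EM sensor's ~0.5 ms head start leaves time to recall the previous voiced period, invert it, and broadcast it in phase with the live speech; if consecutive voiced periods are similar, the sum nearly cancels.

```python
import numpy as np

def anti_speech(prev_period, lead_samples):
    """Invert the previous voiced period and advance it by the EM sensor's
    timing lead so it can be broadcast in phase with the live speech."""
    mask = -np.asarray(prev_period, dtype=float)   # phase inversion
    return np.roll(mask, -lead_samples)            # advance the broadcast

fs = 48_000
t = np.arange(int(0.008 * fs)) / fs                # one 8 ms voiced period
period = np.sin(2 * np.pi * 125 * t)               # 125 Hz glottal-cycle stand-in
lead = int(0.0005 * fs)                            # ~0.5 ms EM-sensor lead
mask = anti_speech(period, lead)
# If the next period repeats the last one, broadcast mask plus live speech
# cancels (undo the advance to model the broadcast delay):
residual = period + np.roll(mask, lead)
```

Real voiced speech changes from period to period, so cancellation would be partial rather than exact; the abstract's claim is that EM-sensor timing makes the synchronization accurate enough for substantial masking.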
Journal of the Acoustical Society of America | 2002
John F. Holzrichter; Lawrence C. Ng; Gerald J. Burke; James B. Kobler; John J. Rosowski
EM wave sensors are being used to measure human vocal tract movements during voiced speech. However, when used in the glottal region there remains uncertainty regarding the contributions to the sensor signal from the vocal fold opening and closing versus those from pressure-induced trachea-wall movements. Several signal source hypotheses were tested on a subject who had undergone tracheostomy four years earlier as a consequence of laryngeal paresis. Measurements of vocal fold and tracheal wall motions were made using an EM sensor, a laser-Doppler velocimeter, and an electroglottograph. Simultaneous acoustic data came from a subglottal pressure sensor and a microphone at the lips. Extensive 3-D numerical simulations of EM wave propagation into the neck were performed in order to estimate the amplitude and phase of the reflected EM waves from the two different sources. The simulations and experiments show that these sensors measure, depending upon location, both the opening and closing of the vocal folds and the mov...
Journal of the Acoustical Society of America | 1996
John F. Holzrichter; Wayne A. Lea; Lawrence C. Ng; Gregory C. Burnett
It has recently become possible to measure the positions and motions of the human speech organs, as speech is being articulated, by using micropower radars in a noninvasive manner. Using these instruments, the vocalized excitation function of human speech is measured, and thereby the transfer function of each constant vocalized speech unit is obtained by deconvolving the output acoustic pressure from the input excitation function. In addition, the positions of the tongue, lips, jaw, velum, and glottal tissues are measured for each speech unit. Using these data, very descriptive feature vectors could be formed for each acoustic speech unit. It is believed that these new data, in conjunction with presently obtained acoustic data, will lead to more efficient speech coding, recognition, synthesis, telephony, and prosthesis.
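The deconvolution step described above can be sketched as regularized spectral division; the regularization constant is an assumption, not from the paper. With the EM-measured voiced excitation e[n] and the acoustic output p[n], P(f) = H(f) E(f), so the transfer function follows as H(f) ≈ P(f) E*(f) / (|E(f)|² + eps).

```python
import numpy as np

def transfer_function(excitation, acoustic, eps=1e-6):
    """Estimate a transfer function by regularized spectral division."""
    E = np.fft.fft(excitation)
    P = np.fft.fft(acoustic)
    return P * np.conj(E) / (np.abs(E) ** 2 + eps)

# Synthetic check: circularly convolve a known excitation with a short
# "vocal tract" filter, then recover the filter by deconvolution.
rng = np.random.default_rng(0)
e = rng.standard_normal(512)
h = np.array([1.0, 0.5, 0.25, 0.125])
p = np.real(np.fft.ifft(np.fft.fft(e) * np.fft.fft(h, 512)))
h_est = np.real(np.fft.ifft(transfer_function(e, p)))[:4]
```

The point of the abstract is that the EM sensor supplies e[n] independently of the acoustic channel, which is what makes this division well posed per speech unit.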