Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Kaliappan Gopalan is active.

Publications


Featured research published by Kaliappan Gopalan.


international conference on acoustics, speech, and signal processing | 2003

Audio steganography using bit modification

Kaliappan Gopalan

A method of embedding a covert audio message in a cover utterance for secure communication is presented. The covert message is represented in a compressed form, possibly with encryption and/or encoding for added security. One bit in each of the samples of a given cover utterance is altered in accordance with the data bits and a key. The same key is used to retrieve the embedded bits at the receiver. The results, based on cover signals from a clean TIMIT utterance and a noisy aircraft cockpit utterance, show that the technique meets several major criteria for successful covert communication.
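The bit-modification idea can be illustrated with a short sketch. The code below is not the paper's algorithm, only a minimal least-significant-bit embedder in which the key seeds a pseudo-random choice of sample positions; the function names, signal lengths, and use of 16-bit PCM samples are assumptions for illustration.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, bits: np.ndarray, key: int) -> np.ndarray:
    """Hide one bit per selected 16-bit sample by overwriting its least significant bit."""
    stego = cover.copy()
    rng = np.random.default_rng(key)                 # the key decides which samples carry data
    idx = rng.choice(cover.size, size=bits.size, replace=False)
    stego[idx] = (stego[idx] & ~1) | bits            # clear the LSB, then set it to the data bit
    return stego

def extract_lsb(stego: np.ndarray, n_bits: int, key: int) -> np.ndarray:
    """Recover the hidden bits at the receiver using the same key."""
    rng = np.random.default_rng(key)
    idx = rng.choice(stego.size, size=n_bits, replace=False)
    return stego[idx] & 1

# toy example: a random 1 s "utterance" at 16 kHz carrying a 128-bit covert message
cover = np.random.randint(-2000, 2000, size=16000, dtype=np.int16)
message = np.random.randint(0, 2, size=128).astype(np.int16)
stego = embed_lsb(cover, message, key=42)
assert np.array_equal(extract_lsb(stego, message.size, key=42), message)
```

Compression or encryption of the covert message, mentioned in the abstract, would be applied before embed_lsb and undone after extract_lsb.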


international conference on acoustics, speech, and signal processing | 2005

Audio steganography by cepstrum modification

Kaliappan Gopalan

A method of embedding information in the cepstral domain of a cover audio signal is described for audio steganography applications. The proposed technique combines the commonly employed psychoacoustical masking property of the human auditory system with the decorrelation property of the speech cepstrum, and achieves imperceptible embedding, large payload, and accurate data retrieval. Results of embedding using a clean and a noisy host utterance show that the embedded information is robust to additive noise and bandpass filtering.
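A minimal sketch of cepstral-domain embedding, under assumptions not taken from the paper: one bit per frame is carried by the sign of the mean of a small quefrency band of the real cepstrum, the original phase is reused for reconstruction, and the band limits and step size delta are illustrative.

```python
import numpy as np

def embed_bit_cepstrum(frame: np.ndarray, bit: int, q_band=(20, 40), delta=0.02) -> np.ndarray:
    """Carry one bit in the sign of the mean of a quefrency band of the real cepstrum."""
    spec = np.fft.fft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    ceps = np.fft.ifft(np.log(mag + 1e-12)).real              # real cepstrum (an even sequence)
    lo, hi = q_band
    ceps[lo:hi] += (delta if bit else -delta) - ceps[lo:hi].mean()  # force band mean to +/- delta
    ceps[-hi + 1:-lo + 1] = ceps[lo:hi][::-1]                 # mirror the band so the cepstrum stays even
    log_mag = np.fft.fft(ceps).real                           # back to a modified log-magnitude spectrum
    return np.fft.ifft(np.exp(log_mag) * np.exp(1j * phase)).real   # reuse the original phase

# detection needs no reference: recompute the cepstrum of the stego frame and read the band-mean sign
frame = np.random.randn(512)
stego = embed_bit_cepstrum(frame, bit=1)
ceps = np.fft.ifft(np.log(np.abs(np.fft.fft(stego)) + 1e-12)).real
assert ceps[20:40].mean() > 0                                 # bit 1 recovered
```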


ieee aerospace conference | 2003

Covert speech communication via cover speech by tone insertion

Kaliappan Gopalan; Stanley Wenndt; Andrew Noga; Darren Haddad; Scott Adams



international conference on industrial technology | 2009

A unified audio and image steganography by spectrum modification

Kaliappan Gopalan

A method of embedding information in the spectral domain of a cover audio and a cover image that can be extended to video frames is proposed. The technique exploits the imperceptibility of human auditory and visual systems at low levels of spectral changes. By selectively altering the spectrum at a pair of one-dimensional frequencies by a small percentage of the average power of a segment of audio or image, the psychoacoustical or psychovisual masking property enables unnoticeable embedding with a large payload. Initial studies on the effect of Gaussian noise added to the stego demonstrate the robustness of the technique to noise in both the stego audio and image. The imperceptibility of the technique combined with high payload, robustness of embedded data and accurate data retrieval renders the proposed steganography suitable for covert communication and secure data transmission applications.
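As a rough illustration of embedding at "a pair of one-dimensional frequencies", the sketch below imposes an ordering on the magnitudes of two chosen FFT bins of a 1-D segment; the bin indices, scaling rule, and strength alpha are stand-ins rather than the paper's values.

```python
import numpy as np

def embed_bit_spectrum(segment: np.ndarray, bit: int, k1=40, k2=45, alpha=0.05) -> np.ndarray:
    """Embed one bit in a 1-D segment by ordering the magnitudes of two chosen bins."""
    X = np.fft.rfft(segment)
    m = 0.5 * (np.abs(X[k1]) + np.abs(X[k2]))            # shared reference level for the pair
    hi_bin, lo_bin = (k1, k2) if bit else (k2, k1)       # the bit decides which bin ends up larger
    X[hi_bin] *= (1 + alpha) * m / (np.abs(X[hi_bin]) + 1e-12)   # rescale magnitude, keep phase
    X[lo_bin] *= (1 - alpha) * m / (np.abs(X[lo_bin]) + 1e-12)
    return np.fft.irfft(X, n=segment.size)

def extract_bit_spectrum(segment: np.ndarray, k1=40, k2=45) -> int:
    X = np.fft.rfft(segment)
    return int(np.abs(X[k1]) > np.abs(X[k2]))

segment = np.random.randn(1024)
for b in (0, 1):
    assert extract_bit_spectrum(embed_bit_spectrum(segment, b)) == b
```

The same routine applies unchanged to a flattened image row or block, which is the sense in which a single spectral rule can serve both audio and image covers.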


conference of the industrial electronics society | 2001

Speech coding using Fourier-Bessel expansion of speech signals

Kaliappan Gopalan

Coding of speech signals using Bessel functions as orthogonal signals in the Fourier-Bessel (FB) expansion has been explored. It has been found that a reasonable quality of speech can be reconstructed using a set of 15 to 30 coefficients in the FB expansion of each frame of speech. At 80 frames per second and eight bits per coefficient, this corresponds to a bit rate as low as 9600 bits/second when a predetermined sequence of coefficients is used. The speech quality and the bit rate increase when a larger number or a selected set of coefficients is used. Comparable results in perceptual speech quality and frame-to-frame signal-to-noise ratio were observed for both male and female speakers.
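A small numerical sketch of the Fourier-Bessel analysis/synthesis step, assuming an order-1 expansion over each frame with coefficients obtained by simple numerical integration; the frame length, sampling rate, and test signal are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.special import j1, jv, jn_zeros

def fb_analysis(frame, fs, n_coeff=30):
    """Fourier-Bessel coefficients of one frame, using order-1 Bessel functions
    and a rectangular-rule integral over the frame duration [0, a]."""
    n = frame.size
    a = n / fs                                       # expansion interval length in seconds
    t = np.arange(n) / fs
    alpha = jn_zeros(1, n_coeff)                     # first positive roots of J1
    coeffs = np.empty(n_coeff)
    for m, am in enumerate(alpha):
        integral = np.sum(t * frame * j1(am * t / a)) / fs
        coeffs[m] = 2.0 * integral / (a ** 2 * jv(2, am) ** 2)
    return coeffs, alpha, a

def fb_synthesis(coeffs, alpha, a, fs, n_samples):
    """Reconstruct the frame as a weighted sum of the J1 basis terms."""
    t = np.arange(n_samples) / fs
    return j1(np.outer(t / a, alpha)) @ coeffs

# toy "voiced" frame: two harmonics of a 150 Hz fundamental, 20 ms at 16 kHz
fs, n = 16000, 320
t = np.arange(n) / fs
frame = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)
coeffs, alpha, a = fb_analysis(frame, fs, n_coeff=30)
recon = fb_synthesis(coeffs, alpha, a, fs, n)
print("relative reconstruction error with 30 coefficients:",
      round(float(np.linalg.norm(frame - recon) / np.linalg.norm(frame)), 3))
```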


conference on security steganography and watermarking of multimedia contents | 2004

Cepstral domain modification of audio signals for data embedding: Preliminary results

Kaliappan Gopalan

A method of embedding data in an audio signal using cepstral domain modification is described. Based on successful embedding in the spectral points of perceptually masked regions in each frame of speech, the technique was first extended to embedding in the log spectral domain. This extension resulted in approximately 62 bits/s of embedding with less than 2 percent bit error rate (BER) for a clean cover speech (from the TIMIT database), and about 2.5 percent for a noisy speech (from an air traffic controller database), when all frames - including silence and transitions between voiced and unvoiced segments - were used. The bit error rate increased significantly when the log spectrum in the vicinity of a formant was modified. In the next procedure, embedding by altering the mean cepstral values of two ranges of indices was studied. Tests on both a noisy utterance and a clean utterance indicated barely noticeable perceptual change in speech quality when the lower range of cepstral indices - corresponding to the vocal tract region - was modified in accordance with the data. With an embedding capacity of approximately 62 bits/s - using one bit per frame regardless of frame energy or type of speech - initial results showed a BER of less than 1.5 percent for a payload of 208 embedded bits using the clean cover speech. A BER of less than 1.3 percent resulted for the noisy host with a capacity of 316 bits. When the cepstrum was modified in the region of excitation, the BER increased to over 10 percent. With quantization causing no significant problem, the technique warrants further studies with different cepstral ranges and sizes. Pitch-synchronous cepstrum modification, for example, may be more robust to attacks. In addition, cepstrum modification in regions of speech that are perceptually masked - analogous to embedding in frequency-masked regions - may yield imperceptible stego audio with low BER.
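The two-range rule described here can be sketched as a variation on the cepstrum round trip shown above: the bit is carried by the sign of the difference between the mean cepstral values of two quefrency ranges, so the detector needs no reference signal. The ranges and step size below are assumptions, not the paper's settings.

```python
import numpy as np

def embed_bit_two_ranges(frame, bit, band_a=(15, 30), band_b=(30, 45), delta=0.02):
    """Force mean(ceps[band_a]) - mean(ceps[band_b]) to +delta (bit 1) or -delta (bit 0)."""
    spec = np.fft.fft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    ceps = np.fft.ifft(np.log(mag + 1e-12)).real
    a0, a1 = band_a
    b0, b1 = band_b
    target = delta if bit else -delta
    shift = 0.5 * (target - (ceps[a0:a1].mean() - ceps[b0:b1].mean()))
    ceps[a0:a1] += shift                        # push the two band means apart (or swap their order)
    ceps[b0:b1] -= shift
    ceps[-a1 + 1:-a0 + 1] = ceps[a0:a1][::-1]   # mirror both bands so the cepstrum stays even
    ceps[-b1 + 1:-b0 + 1] = ceps[b0:b1][::-1]
    return np.fft.ifft(np.exp(np.fft.fft(ceps).real) * np.exp(1j * phase)).real

def detect_bit_two_ranges(frame, band_a=(15, 30), band_b=(30, 45)):
    """Oblivious detection: compare the two band means of the recomputed cepstrum."""
    ceps = np.fft.ifft(np.log(np.abs(np.fft.fft(frame)) + 1e-12)).real
    return int(ceps[band_a[0]:band_a[1]].mean() > ceps[band_b[0]:band_b[1]].mean())

frame = np.random.randn(512)
for bit in (0, 1):
    assert detect_bit_two_ranges(embed_bit_two_ranges(frame, bit)) == bit
```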


electronic imaging | 2003

Audio steganography by amplitude or phase modification

Kaliappan Gopalan; Stanley J. Wenndt; Scott F. Adams; Darren M. Haddad

This paper presents the results of embedding short covert message utterances in a host, or cover, utterance by modifying the phase or amplitude of perceptually masked or significant regions of the host. In the first method, the absolute phase at selected, perceptually masked frequency indices was changed to fixed, covert data-dependent values. Embedded bits were retrieved at the receiver from the phase at the selected frequency indices. Tests on embedding a GSM-coded covert utterance on clean and noisy host utterances showed no noticeable difference between the stego and the hosts in speech quality or spectrogram. A bit error rate of 2 out of 2800 was observed for a clean host utterance, while no error occurred for a noisy host. In the second method, the absolute phase of 10 or fewer perceptually significant points in the host was set in accordance with the covert data. This resulted in a stego with successful data retrieval and a slightly noticeable degradation in speech quality. Modifying the amplitude of perceptually significant points caused perceptible differences in the stego even with small changes of amplitude made at five points per frame. Finally, the stego obtained by altering the amplitude at perceptually masked points showed barely noticeable differences and excellent data recovery.
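A compact sketch of the first (phase) method, assuming the bit value maps to a fixed phase of +/- pi/2 at a handful of selected bins; the bin indices here are arbitrary stand-ins for perceptually masked frequencies.

```python
import numpy as np

def embed_bits_phase(frame, bits, bins):
    """Embed bits by forcing the phase at selected FFT bins to data-dependent values."""
    X = np.fft.rfft(frame)
    mag = np.abs(X[bins])
    # bit 1 -> phase +pi/2, bit 0 -> phase -pi/2 (the magnitude is left untouched)
    X[bins] = mag * np.exp(1j * (np.pi / 2) * (2 * np.asarray(bits) - 1))
    return np.fft.irfft(X, n=frame.size)

def extract_bits_phase(frame, bins):
    """Read the embedded bits back from the sign of the phase at the same bins."""
    X = np.fft.rfft(frame)
    return (np.angle(X[bins]) > 0).astype(int)

frame = np.random.randn(512)
bins = np.array([60, 75, 90, 105])       # stand-ins for perceptually masked frequency indices
bits = np.array([1, 0, 1, 1])
stego = embed_bits_phase(frame, bits, bins)
assert np.array_equal(extract_bits_phase(stego, bins), bits)
```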


international conference on signal processing | 2000

Pitch estimation using a modulation model of speech

Kaliappan Gopalan

This paper presents a method of estimating pitch frequencies using a modulation model of speech. Both the amplitude envelope and the instantaneous frequency deviation due to modulations present in speech carry fundamental frequency information. It is shown that demodulation of speech around any arbitrary frequency, or in the vicinity of a formant, brings out a well-resolved F0 in both the envelope and the instantaneous frequency. Values of F0 estimated from the modulation model for utterances under normal and stressed conditions were found to be consistent with those determined from a direct vocal tract model.
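A rough sketch of the envelope branch of such a modulation-based estimator: bandpass around an assumed formant, take the Hilbert amplitude envelope, and read F0 from the envelope spectrum. The filter design, formant location, and search range are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def f0_from_envelope(x, fs, center=700.0, bw=400.0):
    """Estimate F0 by demodulating around one (assumed) formant: bandpass, take the
    Hilbert amplitude envelope, and pick the strongest low-frequency envelope component."""
    b, a = butter(4, [(center - bw / 2) / (fs / 2), (center + bw / 2) / (fs / 2)], btype="band")
    band = filtfilt(b, a, x)
    env = np.abs(hilbert(band))                 # amplitude envelope (the AM part of the model)
    env -= env.mean()                           # drop DC before searching for F0
    spec = np.abs(np.fft.rfft(env * np.hanning(env.size)))
    freqs = np.fft.rfftfreq(env.size, 1.0 / fs)
    search = (freqs > 50) & (freqs < 400)       # plausible F0 range for speech
    return freqs[search][np.argmax(spec[search])]

# synthetic "vowel": a 120 Hz impulse train exciting a resonance near 700 Hz
fs = 16000
t = np.arange(int(0.3 * fs)) / fs
excitation = np.zeros_like(t)
excitation[::int(fs / 120)] = 1.0               # one impulse per pitch period
b, a = butter(2, [600 / (fs / 2), 800 / (fs / 2)], btype="band")
vowel = filtfilt(b, a, excitation)
print("estimated F0:", round(float(f0_from_envelope(vowel, fs)), 1), "Hz (true ~120 Hz)")
```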


international symposium on circuits and systems | 2005

Robust watermarking of music signals by cepstrum modification

Kaliappan Gopalan

A method of embedding a predetermined watermark in an audio signal is described for audio music copyright protection applications. The proposed technique applies the psychoacoustical masking property of the human auditory system to imperceptibly embed the watermark in the cepstral domain of a host audio signal. The embedded watermark is extracted using an oblivious detection technique without resorting to any correlation procedure. Experimental results show that the watermark is robust to bandpass filtering and additive noise at low power levels.


international midwest symposium on circuits and systems | 2013

Deception detection in speech using bark band and perceptually significant energy features

Muhammad Sanaullah; Kaliappan Gopalan


Collaboration


Dive into Kaliappan Gopalan's collaboration.

Top Co-Authors

Stanley J. Wenndt, Air Force Research Laboratory
Darren M. Haddad, Air Force Research Laboratory
Dongjian Cai, Purdue University Calumet
Jiajun Fu, Purdue University Calumet
Qidong Shi, Purdue University Calumet