Deepak Jayaswal
St. Francis Institute of Technology
Publications
Featured research published by Deepak Jayaswal.
International Journal of Computer Applications | 2014
Purnima Chandrasekar; Santosh Chapaneri; Deepak Jayaswal
Creating an accurate Speech Emotion Recognition (SER) system depends on extracting features from speech that are relevant to emotion. In this paper, the features extracted from the speech samples include Mel Frequency Cepstral Coefficients (MFCC), energy, pitch, spectral flux, spectral roll-off, and spectral stationarity. To avoid the 'curse of dimensionality', statistical parameters, i.e. mean, variance, median, maximum, minimum, and index of dispersion, are applied to the extracted features. For classifying the emotion in an unknown test sample, Support Vector Machines (SVM) are chosen due to their proven efficiency. Through experimentation on the chosen features, an average classification accuracy of 86.6% is achieved using a one-vs-all multi-class SVM, which further improves to 100% when the problem is reduced to binary form. Classifier metrics, viz. precision, recall, and F-score, show that the proposed system gives improved accuracy on Emo-DB.
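A minimal sketch of the feature-statistics pipeline described above, assuming librosa and scikit-learn; only a subset of the listed features is shown, and the file paths, labels, and SVM parameters are illustrative rather than the paper's settings.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def emotion_features(wav_path):
    """Frame-level MFCC, energy, and spectral roll-off summarized by statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # (13, frames)
    energy = librosa.feature.rms(y=y)                         # (1, frames)
    rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)    # (1, frames)
    frames = np.vstack([mfcc, energy, rolloff])
    # Statistical summarization to avoid the curse of dimensionality
    stats = [frames.mean(axis=1), frames.var(axis=1),
             np.median(frames, axis=1), frames.max(axis=1), frames.min(axis=1)]
    return np.concatenate(stats)

def train_ser_classifier(train_paths, train_labels):
    """Fit a one-vs-all multi-class SVM on the summarized features
    (train_paths and train_labels are hypothetical inputs)."""
    X = np.array([emotion_features(p) for p in train_paths])
    return SVC(kernel="rbf", decision_function_shape="ovr").fit(X, train_labels)
```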
Computational Intelligence, Communication Systems and Networks | 2009
Deepak Jayaswal; Mukesh A. Zaveri
We propose a probabilistic method to determine the motion vector (MV) for block matching algorithms (BMA). The proposed method exploits the distribution of motion vectors in successive video frames to select the initial search points for the first iteration, while a refinement stage in further iterations tracks the motion vector in a continuously changing video sequence. Due to its adaptive step size, the proposed algorithm can track motion vectors in both low-motion and high-motion video. Simulation results show that the proposed Probability Based Search Motion Estimation (PBSME) algorithm outperforms all sub-optimal motion estimation (ME) algorithms in terms of quality and speed-up, and in many cases its PSNR is comparable to Full Search while being several times faster.
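The abstract does not spell out the PBSME search pattern; the sketch below is a generic block-matching step in NumPy showing how a set of candidate displacements (which, as the paper suggests, could be drawn from the observed motion-vector distribution) is scored with a SAD cost and the cheapest one selected. Names and the candidate set are illustrative.

```python
import numpy as np

def sad(block, ref, x, y):
    """Sum of absolute differences between a block and the reference frame at (x, y)."""
    h, w = block.shape
    return np.abs(block.astype(np.int32) - ref[y:y + h, x:x + w].astype(np.int32)).sum()

def best_motion_vector(block, ref, x0, y0, candidates):
    """Pick the candidate displacement with minimum SAD cost."""
    costs = [(sad(block, ref, x0 + dx, y0 + dy), (dx, dy)) for dx, dy in candidates]
    return min(costs, key=lambda c: c[0])[1]
```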
International Conference on Information Communication and Embedded Systems | 2016
Mildred Pereira; Santosh Chapaneri; Deepak Jayaswal
Conventionally, spectral features are derived from the DFT spectrum using the Hamming window. Windowing reduces spectral leakage, but the variance of the spectral estimate remains high. The multitaper method instead uses multiple windows and frequency-domain averaging. In this paper we study the impact of multitapering on the performance of speech emotion recognition. Various spectral features including MFCCs are considered, and the classifier used is the Support Vector Machine (SVM). For the spectral features, multitapering yields an improvement of up to 2% over the traditional Hamming window when tested on the Berlin database. The impact of variable frame size, different windows, and variable taper number is also studied.
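A minimal sketch of multitaper spectral estimation using SciPy's DPSS (Slepian) windows: several independently tapered periodograms are averaged, reducing the variance relative to a single Hamming-windowed DFT. The taper count and time-bandwidth product are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal.windows import dpss

def multitaper_spectrum(frame, n_tapers=6):
    """Average periodograms over DPSS tapers to reduce the variance of the
    spectral estimate compared with a single-window DFT."""
    n = len(frame)
    tapers = dpss(n, NW=(n_tapers + 1) / 2.0, Kmax=n_tapers)   # (n_tapers, n)
    spectra = np.abs(np.fft.rfft(tapers * frame, axis=1)) ** 2
    return spectra.mean(axis=0)
```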
IEEE International Conference on Power Electronics, Intelligent Control and Energy Systems | 2016
Flynn Jiu; Kevin Noronha; Deepak Jayaswal
In recent times, biometrics has become a very important part of secure identification. Biometrics are characteristics unique to an individual; modalities include DNA, fingerprints, iris, retina, voice, and face. The proposed method uses the retinal vasculature for biometric identification. It applies the 2D Gabor wavelet transform to enhance blood vessels, which are then segmented using adaptive thresholding. Bifurcation points and end points of the vessels are used as feature points. A validation process is carried out to eliminate falsely identified feature points, and the valid feature points are used to establish the identity of a person.
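A rough sketch of such a vessel feature-point extraction stage, assuming OpenCV and scikit-image are available: a bank of Gabor filters enhances the vessels, adaptive thresholding binarizes them, and skeleton pixels are classified as end points or bifurcations by neighbour count. All kernel and threshold parameters are illustrative, not the paper's values.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def retinal_feature_points(gray):
    """Return (end points, bifurcation points) of the vessel skeleton."""
    enhanced = np.zeros_like(gray, dtype=np.float32)
    for theta in np.arange(0, np.pi, np.pi / 8):               # 8 orientations
        kern = cv2.getGaborKernel((15, 15), 4.0, theta, 10.0, 0.5)
        enhanced = np.maximum(enhanced, cv2.filter2D(gray.astype(np.float32), -1, kern))
    norm = cv2.normalize(enhanced, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    vessels = cv2.adaptiveThreshold(norm, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY, 25, -2)
    skel = skeletonize(vessels > 0)
    # Count 8-neighbours of each skeleton pixel (subtract the pixel itself)
    neighbours = cv2.filter2D(skel.astype(np.uint8), -1, np.ones((3, 3), np.float32)) - skel
    ends = np.argwhere(skel & (neighbours == 1))
    bifurcations = np.argwhere(skel & (neighbours >= 3))
    return ends, bifurcations
```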
International Conference on Advances in Engineering Technology Research | 2014
Jeba Inbaraj Nadar; Jayasudha Koti; Deepak Jayaswal
Wireless communication is a promising technology for a wide range of applications, from TV remote controls to satellite-based TV systems. As the demand for high data rates keeps increasing, multicarrier communication has come into the picture. Orthogonal Frequency Division Multiplexing (OFDM) is a multicarrier transmission technique used for high-data-rate wireless transmission. In OFDM, the transmitter modulates the message bit sequence into symbols, performs an IFFT to convert them into a time-domain signal, and transmits it through a wireless channel. The information usually gets distorted by the characteristics of the channel, so it is necessary to estimate the channel and compensate for it at the receiver to recover the transmitted information. In this paper we explore two estimation techniques, Least Squares (LS) and Minimum Mean Square Error (MMSE), using the International Telecommunication Union (ITU) Vehicular A channel model; the symbol error rate is then calculated for different Doppler frequencies.
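A minimal sketch of the two estimators in their textbook form, assuming the channel autocorrelation matrix and SNR are known for the MMSE case; pilot layout, channel model, and constellation-dependent constants are omitted for brevity.

```python
import numpy as np

def ls_estimate(y_pilot, x_pilot):
    """Least-squares channel estimate at pilot subcarriers: H_LS = Y / X."""
    return y_pilot / x_pilot

def mmse_estimate(h_ls, r_hh, snr_linear):
    """Linear MMSE smoothing of the LS estimate, given the channel
    autocorrelation matrix R_hh and the linear SNR (both assumed known)."""
    n = len(h_ls)
    w = r_hh @ np.linalg.inv(r_hh + (1.0 / snr_linear) * np.eye(n))
    return w @ h_ls
```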
International Conference & Workshop on Emerging Trends in Technology | 2010
Deepak Jayaswal; Mukesh A. Zaveri; R. E. Chaudhari
We propose a multi-step motion estimation (MSME) algorithm that combines motion vector prediction through an initial search, refinement of the motion vector to locate the true motion vector, and an early-termination criterion suited to all types of video characteristics. This approach exploits the distribution of motion vectors in successive video frames, from which the initial candidate predictors are derived. The derived predictors are the most probable points in the search window, ensuring that motion vectors in the vicinity of the center point and at the edge of the search window are not missed, as they are by earlier algorithms such as Three Step Search (TSS), Four Step Search (FSS), and Diamond Search (DS). The refinement stage extracts the true motion vector so that the picture quality is as good as Full Search (FS), the optimal algorithm. The novelty of the proposed MSME algorithm is that the search pattern is not static but can dynamically shrink or enlarge to account for small and large motion, and the fixed threshold improves speed without sacrificing video quality. Simulation results show that the proposed algorithm outperforms all sub-optimal algorithms in terms of quality and speed-up, and in many cases its PSNR is comparable to Full Search while being several times faster.
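The exact MSME pattern is not given in the abstract; the sketch below only illustrates the two general ideas it names, a refinement loop whose step size shrinks when no better point is found and an early exit once the cost is already below a fixed threshold. It reuses the sad() helper from the PBSME sketch above, and the step and threshold values are illustrative.

```python
def refine_motion_vector(block, ref, x0, y0, mv, step=8, threshold=512):
    """Refine an initial motion vector with a shrinking cross pattern and a
    fixed-threshold early-termination check (reuses sad() from the sketch above)."""
    best_cost = sad(block, ref, x0 + mv[0], y0 + mv[1])
    while step >= 1 and best_cost > threshold:   # early exit when cost is low enough
        moved = False
        for dx, dy in [(step, 0), (-step, 0), (0, step), (0, -step)]:
            cand = (mv[0] + dx, mv[1] + dy)
            cost = sad(block, ref, x0 + cand[0], y0 + cand[1])
            if cost < best_cost:
                best_cost, mv, moved = cost, cand, True
        if not moved:
            step //= 2                           # shrink the search pattern
    return mv
```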
International Conference on Intelligent Systems and Control | 2017
Gauri Deshpande; Santosh Chapaneri; Deepak Jayaswal
Saliency is the quality by which an object or pixel in an image stands out relative to its neighbours. Detecting such regions in an image is an important research problem, with wide applications in advertising, automatic image compression, image thumbnailing, etc. In this paper, a salient region detection approach based on machine learning is proposed. To train the saliency model, low-level features such as color channels and their probabilities, probabilities from 3D color histograms, and subband features are used, along with statistical priors such as the frequency prior, color prior, chance of happening (CoH), and center bias prior (CBP). The proposed model is compared with existing state-of-the-art algorithms. Human eye-fixation points are used to compare the models by estimating the area under ROC curves, and precision, recall, and F-measure are also used for comparison. This comparison shows that the proposed saliency model performs better than existing salient region detection approaches.
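A small sketch of two pieces that appear in the abstract, a Gaussian centre-bias prior map and the ROC-AUC evaluation of a saliency map against binary fixation points; the full learned model and the other features are not reproduced, and the sigma value is illustrative.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def center_bias_prior(h, w, sigma=0.3):
    """Gaussian centre-bias prior: pixels near the image centre get higher weight."""
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = ((ys - h / 2) / h) ** 2 + ((xs - w / 2) / w) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def saliency_auc(saliency_map, fixation_mask):
    """Area under the ROC curve of a saliency map against binary fixation points."""
    return roc_auc_score(fixation_mask.ravel(), saliency_map.ravel())
```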
2017 2nd International Conference on Communication Systems, Computing and IT Applications (CSCITA) | 2017
Renia Lopes; Santosh Chapaneri; Deepak Jayaswal
Automated musical genre classification using machine learning techniques has gained popularity for research and for developing powerful tools to organize music collections available on the web. Mel Frequency Cepstral Coefficients (MFCCs) have been used successfully in music genre classification, but they reflect neither the correlation between adjacent Mel filter coefficients within a frame nor the relation between coefficients of neighboring frames, leading to a loss of useful features. In this work, Hu moment based features are extracted from the spectrogram to study the impact of energy concentration in the spectrogram. Differences in rhythm across musical genres drastically change the texture of the spectrogram image and thus alter its energy concentration. Being invariant to translation, scaling, and rotation, Hu moments can capture useful features from the spectrogram that MFCCs do not. Since the spectral moments are computed locally, they can assess the intensity of energy concentration at certain frequencies in the spectrogram and serve as distinctive features for characterizing different genres of music. Hu moment based features combined with conventional music features lead to an accuracy of 83.33% for classifying 5 genres.
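A minimal sketch, assuming librosa and OpenCV, of computing Hu moments on local patches of a mel spectrogram so that the moments capture local energy concentration; the patch size, sample rate, and log-scaling of the moments are illustrative choices, not the paper's configuration.

```python
import cv2
import numpy as np
import librosa

def spectrogram_hu_features(wav_path, patch_frames=64):
    """Log-scaled Hu moments on local mel-spectrogram patches."""
    y, sr = librosa.load(wav_path, sr=22050)
    spec = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    spec = cv2.normalize(spec, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    feats = []
    for start in range(0, spec.shape[1] - patch_frames + 1, patch_frames):
        patch = spec[:, start:start + patch_frames]
        hu = cv2.HuMoments(cv2.moments(patch)).flatten()
        feats.append(-np.sign(hu) * np.log10(np.abs(hu) + 1e-12))  # log scaling
    return np.array(feats)
```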
2016 IEEE International Conference on Advances in Computer Applications (ICACA) | 2016
Nazira Shaikh; Santosh Chapaneri; Deepak Jayaswal
Security is a major concern in digital image transmission applications. In this paper, a novel color image encryption scheme is proposed to enhance security and efficiency. The proposed scheme is a single-round hyper-chaotic system with bi-directional pixel diffusion, which contributes to increased security and improved efficiency. Security analyses, including key sensitivity, histogram, information entropy, correlation coefficient, and diffusion tests, are conducted.
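The abstract does not specify the hyper-chaotic system, so the sketch below substitutes a simple logistic-map keystream purely to illustrate what bi-directional pixel diffusion looks like: a forward pass followed by a backward pass so that a change in any pixel propagates in both directions. The map, key, and single-channel handling are illustrative stand-ins, not the paper's scheme.

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    """Keystream bytes from a logistic map (stand-in for the hyper-chaotic system)."""
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        ks[i] = int(x * 256) % 256
    return ks

def bidirectional_diffuse(img, key=(0.7, 3.99)):
    """Forward then backward diffusion: each cipher pixel depends on the keystream
    and the previously diffused pixel, spreading changes in both directions."""
    flat = img.astype(np.uint8).ravel().copy()
    ks = logistic_keystream(flat.size, *key)
    prev = 0
    for i in range(flat.size):                      # forward pass
        flat[i] = flat[i] ^ ks[i] ^ prev
        prev = flat[i]
    prev = 0
    for i in range(flat.size - 1, -1, -1):          # backward pass
        flat[i] = flat[i] ^ ks[i] ^ prev
        prev = flat[i]
    return flat.reshape(img.shape)
```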
International Journal of Computer Applications | 2015
Rahul Patil; Vaqar Ansari; Deepak Jayaswal
The process by which two or more images are merged into a single image is called image fusion, where important characteristics of each original image are preserved. As images are acquired from different instrument modalities, image fusion is a fundamental process for combining what each capture technique provides. Multifocus image fusion constructs a combined image from multiple source images focused on different objects in the same scene. To achieve this, a spatial-domain algorithm is proposed that divides each source image into blocks of adaptively varying size. Edge information is extracted from the images using edge detection techniques. Quality metrics will be obtained for each block based on human visual perception instead of simple metrics like MSE and PSNR. To test the proposed work, the readily available Laboratory for Image and Video Engineering (LIVE) database will be used, and the quality of the final fused image will be evaluated based on concepts of human visual perception.
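A minimal sketch of block-based multifocus fusion, assuming OpenCV; unlike the adaptively sized, perception-based blocks described above, it uses fixed-size blocks and the variance of the Laplacian as a simple stand-in focus measure, keeping each block from whichever source image is sharper.

```python
import cv2
import numpy as np

def fuse_multifocus(img_a, img_b, block=32):
    """For each block, copy pixels from the sharper source image,
    judged by the variance of the Laplacian (a simple focus measure)."""
    fused = img_a.copy()
    h, w = img_a.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            a = img_a[y:y + block, x:x + block]
            b = img_b[y:y + block, x:x + block]
            if cv2.Laplacian(b, cv2.CV_64F).var() > cv2.Laplacian(a, cv2.CV_64F).var():
                fused[y:y + block, x:x + block] = b
    return fused
```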