Ch. Satyanarayana
Jawaharlal Nehru Technological University, Kakinada
Publication
Featured research published by Ch. Satyanarayana.
International Conference on Signal Processing | 2013
Manasa Nadipally; A. Govardhan; Ch. Satyanarayana
Often in forensic scenarios, the need arises to match a partial or poor-quality fingerprint to the identity of an individual. General image matching methods from computer vision perform poorly here, as they are sensitive to local distortions such as broken ridge patterns and incomplete information. To address this issue, we propose using a weak descriptor to capture local structures at a higher level of abstraction. The goal is to mine a large set of initial correspondences through weak description and then rely on a robust estimator scheme to prune false matches. By coupling a weak local descriptor with a robust estimator, we minimize the effect of broken ridge patterns and also obtain a dense set of matches for a given pair. We evaluate the performance of the proposed method against SIFT as per the Fingerprint Verification Competition guidelines. We also report the superior rotation, scale, noise and overlap handling capabilities of the proposed method.
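The robust-estimator stage described above can be sketched as a RANSAC-style pruning loop over putative correspondences; the affine model, iteration count and inlier tolerance below are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def ransac_prune(src, dst, n_iters=200, tol=3.0, seed=0):
    """Prune false correspondences by robustly fitting a 2-D affine map.

    src, dst : (N, 2) arrays of putative matched point coordinates.
    Returns a boolean inlier mask over the N matches.
    """
    rng = np.random.default_rng(seed)
    n = len(src)
    best_mask = np.zeros(n, dtype=bool)
    src_h = np.hstack([src, np.ones((n, 1))])      # homogeneous coordinates
    for _ in range(n_iters):
        idx = rng.choice(n, size=3, replace=False)  # minimal affine sample
        A, *_ = np.linalg.lstsq(src_h[idx], dst[idx], rcond=None)
        resid = np.linalg.norm(src_h @ A - dst, axis=1)
        mask = resid < tol                          # points explained by this fit
        if mask.sum() > best_mask.sum():            # keep the best consensus set
            best_mask = mask
    return best_mask
```

In practice the mask would be applied to the dense correspondence set mined by the weak descriptor, leaving only geometrically consistent matches.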
Journal of Computer Applications in Technology | 2015
N. L. Manasa; A. Govardhan; Ch. Satyanarayana
Rising demand for recognition methods that work accurately on low-resolution images acquired from a web camera in real time, against dynamic backgrounds, inspires us to propose a hybrid feature extraction and fusion approach for palmprint recognition based on the texture information available in the palm. On the topographic surface, the intrinsic surface curvature descriptor (the Hessian) is used to characterise the unique texture profiles of an individual's palmprint at the global level. Local binary pattern histogram features, on the other hand, being grey-scale and rotation invariant, capture local fine textures effectively; these local features are sensitive to the position and orientation of the palm image. Canonical correlation analysis (CCA) is used to combine the features at the descriptor level, which ensures that the information captured from both features is maximally correlated while redundant information is eliminated, giving a more compact representation. Experimental results on the two databases used in this paper yield comparable results. Besides challenges like rotation, scale, projection, cluttered backgrounds and illumination, the proposed method also handles burns, boils, cuts, dirt and oil stains on palms. To our knowledge, this is the first work in the literature to address the challenge of detecting closed palms in real-time images.
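The descriptor-level CCA fusion described above can be sketched in a few lines of numpy; the regularisation term and the concatenation of the two projected views are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def cca_fuse(X, Y, d, eps=1e-6):
    """Project two feature sets (n_samples x p, n_samples x q) onto their
    top-d maximally correlated directions and concatenate the projections.

    Returns the fused (n_samples x 2d) descriptor and the top-d canonical
    correlations.
    """
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)
    n = len(X)
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])  # regularised covariances
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    # Whiten each view, then take the SVD of the cross-covariance:
    # singular values are the canonical correlations.
    Wx = np.linalg.inv(np.linalg.cholesky(Sxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Syy))
    U, s, Vt = np.linalg.svd(Wx @ Sxy @ Wy.T)
    A = Wx.T @ U[:, :d]          # canonical directions for X
    B = Wy.T @ Vt[:d].T          # canonical directions for Y
    return np.hstack([Xc @ A, Yc @ B]), s[:d]
```

Feeding the global (Hessian-based) and local (LBP-histogram) feature vectors through such a projection yields the compact, maximally correlated joint representation the abstract describes.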
Archive | 2017
P. Pavan Kumar; Ch. Satyanarayana; A. Ananda Rao
Multicore processors have become an exemplar for high-performance computing. The focus of this paper is designing a real-time scheduler that enhances the throughput of the entire system when scheduling loads on shared caches. Real-time scheduling schemes are appropriate for shared caches only when they are aware of such contention issues. Existing methods work on simple memory sets; all priorities are static, and a non-scalable data structure makes task priorities inflexible. Our work addresses these issues in soft real-time systems, where task scheduling requires functionally accurate results but the timing constraints are softened. To demonstrate this, we use SESC (Super Escalar Simulator) and CACTI (Cache Access Cycle Time Indicator) to investigate the efficiency of our approach on various multicore platforms.
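As a toy illustration of the dynamic, deadline-driven priorities the paper contrasts with static ones, here is a minimal single-core, non-preemptive earliest-deadline-first simulation; the job model and function name are assumptions, not the paper's shared-cache-aware scheduler.

```python
import heapq

def edf_schedule(jobs):
    """Simulate non-preemptive EDF on one core.

    jobs : list of (release, exec_time, deadline) triples.
    Returns, in execution order, (release, deadline, met?) tuples,
    where met? records whether the job finished by its deadline.
    """
    jobs = sorted(jobs)                       # by release time
    t, i, ready, order = 0, 0, [], []
    while i < len(jobs) or ready:
        while i < len(jobs) and jobs[i][0] <= t:
            r, c, d = jobs[i]
            heapq.heappush(ready, (d, r, c))  # priority = earliest deadline
            i += 1
        if not ready:
            t = jobs[i][0]                    # idle until next release
            continue
        d, r, c = heapq.heappop(ready)
        t += c                                # run the job to completion
        order.append((r, d, t <= d))
    return order
```

Because priorities are recomputed from deadlines as jobs arrive, task ordering adapts at run time, unlike the static-priority schemes the paper criticises.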
International Conference on Informatics and Analytics | 2016
P. Pavan Kumar; Ch. Satyanarayana; A. Ananda Rao; P. Radhika Raju
We enact task reprocessing, an approach called rescheduling, on a multicore platform (an exemplar for high-performance computing). Many works have shown that real-time scheduling algorithms are effective at enhancing the performance of shared cache regions on multicore platforms. Reweighting tasks for better average-case performance can be accomplished with partitioning-based scheduling algorithms, but these suffer from higher overheads. It has been shown that the performance of a job can be seriously affected by other co-scheduled jobs, due to interference in the shared cache on a multicore platform. In this paper, our particular focus is rescheduling tasks that are neglected in favour of other tasks. ENhancing CAche Performance (ENCAP) reassembles the discouraged tasks by repeatedly scaling the simultaneous process. Additionally, we present an empirical comparison of the performance of ENCAP on small- to medium-scale multicore platforms.
Bio-inspiring Cyber Security and Cloud Services | 2014
N. L Manasa; A. Govardhan; Ch. Satyanarayana
Biometric recognition protocols that rely on a single source of information for human authentication, commonly termed unimodal systems, show satisfactory performance but still suffer from problems relating to non-universality, permanence, collectability, convenience and susceptibility to circumvention. This paper emphasizes the importance of biometric information fusion by analyzing two kinds of fusion: fusion of multiple representations of a single biometric trait, and fusion of multiple biometric traits. As biometric traits possess large variance between persons and small variance between samples of the same person, it is important to capture this information using multiple representations at both the global and local level and to perform fusion at the feature level. As a feature set is a straightforward representation of raw biometric data, it is theoretically presumed to incorporate richer information. Hence, we propose a fusion method that maximally correlates the information captured from both features and eliminates redundant information, giving a more compact representation. Fusion of multiple biometric traits is realized using fingerprint, palmprint and iris modalities. We explore this kind of fusion using two architectures: a parallel architecture and a hierarchical-cascade architecture. Multi-biometric recognition systems designed with a hierarchical architecture are not only robust, fast and highly secure but also mitigate problems like missing and noisy data associated with parallel and serial architectures respectively; parallel architectures, meanwhile, are preferred in high-security defense/military applications, as they combine more modalities and evidence about the user and thus evidently provide more precision. The parallel framework proposed in this work takes advantage of score-level fusion, which is widely used as it offers the best trade-off between ease and efficiency.
We propose two score-level fusion techniques that rely on the Equal Error Rates of the individual modalities. Since the error rate is the percentage of misclassified samples, we attempt to minimize the overlapped area between the genuine and impostor curves by choosing to maximize the stability of the modality with superior performance. The proposed rule addresses the fusion problem from an error-rate-minimization point of view, so as to increase the decisive efficiency of the fusion system. To take advantage of feature-level fusion and of serial/cascade and hierarchical architectures, we also propose a two-stage cascading framework that fuses fingerprint and palmprint feature sets in the first stage and uses iris features to eliminate the ambiguity of false matches in the next stage. The proposed framework takes advantage of both unimodal and multimodal architectures. Experimental results reported on both real and virtual databases demonstrate the superior performance of a multimodal recognition system over a unimodal system, but also indicate that the design of a multimodal biometric system depends predominantly on the application criteria, so the best fusion strategy is difficult to anticipate. A review of biometric recognition systems indicates that a number of factors, including the accuracy, cost and speed of the system, play a vital role in assessing its performance. Today, with the cost of biometric sensors constantly diminishing and high-speed processors and parallel programming techniques widely affordable for research, accuracy has become the predominant focus of biometric system design. The main aim of the present work is to improve the accuracy of a multimodal biometric recognition system by reducing the error rates.
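One plausible reading of an EER-based score-level rule is a weighted sum whose weights are inversely proportional to each modality's EER; the threshold search and the weighting formula below are illustrative assumptions, not the paper's exact techniques.

```python
import numpy as np

def eer(genuine, impostor):
    """Equal error rate: the point where the false-accept rate (impostor
    scores above threshold) meets the false-reject rate (genuine scores
    below threshold), searched over the pooled scores."""
    thr = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thr])
    frr = np.array([(genuine < t).mean() for t in thr])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

def fuse_scores(score_sets, eers):
    """Weighted-sum fusion: modalities with lower EER (better separation
    of genuine and impostor curves) receive proportionally more weight."""
    w = 1.0 / np.asarray(eers)
    w /= w.sum()
    return sum(wi * s for wi, s in zip(w, score_sets))
```

The intent matches the abstract's error-rate-minimization view: the more stable modality dominates the fused score, shrinking the overlap between genuine and impostor distributions.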
Archive | 2019
P. Pavan Kumar; Ch. Satyanarayana; A. Ananda Rao; P. Radhika Raju
Multicore environments have become a substantial focus of research in recent years. However, many unresolved theoretical problems and pragmatic issues must be addressed if real-time environments are to be efficiently accommodated on multicore platforms. On a multicore platform, every core can utilize the available shared cache resources. Numerous works have shown that real-time scheduling algorithms can be used efficiently with the shared cache memory resources present on a single chip. We present task reprocessing, a task rescheduling approach for multicore platforms (which are ideal for high-performance systems). This scheme improves on existing cache-aware real-time scheduling and encourages eligible task sets to be reprocessed based on a heuristic called ENCAP (ENhancing shared CAche Performance). We also discuss the implementation of ENCAP on a Linux testbed called LITMUS^RT and provide an empirical evaluation of ENCAP, presented on a unified Linux system for assessing the real-time schedulability of any task set on a medium/large-scale multicore environment under G-EDF.
Archive | 2016
M. Chinna Rao; A. V. S. N. Murty; Ch. Satyanarayana
This paper addresses an approach for identifying emotions and confirming them by fusing with facial gestures. Various techniques have been proposed in the area of speech-based emotion recognition. However, the generated speech signals may not be coherent with the speaker's actual inner feelings. Therefore, this paper proposes a model that fuses facial expressions with uttered speech. To test the developed model, a synthesized dataset is considered, and performance is evaluated using metrics such as precision and recall.
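The evaluation metrics mentioned above can be sketched as follows; the emotion-label encoding is an illustrative assumption.

```python
def precision_recall(y_true, y_pred, positive):
    """Per-class precision and recall for an emotion classifier.

    precision = TP / (TP + FP): of everything predicted as the class,
                how much was right.
    recall    = TP / (TP + FN): of everything truly in the class,
                how much was found.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

For a multi-class emotion set, these would typically be computed per emotion and then averaged.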
Archive | 2014
N. L. Manasa; A. Govardhan; Ch. Satyanarayana
Iris, one of the most distinctive biometric traits, has been a significant driver of research since the late 1980s. In this paper, we propose a new feature fusion methodology based on Canonical Correlation Analysis (CCA) to combine the Dual-Tree Complex Wavelet Transform (DTCWT) and Local Binary Patterns (LBP). The complex wavelet transform is used as an abstract-level texture descriptor that gives a global scale-invariant representation, while LBP lays emphasis on the local structures of the iris. In the proposed framework, CCA maximizes the correlation of the two feature vectors, which yields a more robust and compact representation for iris recognition. Experimental results demonstrate that fusion of wavelet and LBP features using CCA attains 98.2% recognition accuracy and an EER of 1.8% on the publicly available CASIA IrisV3-LAMP dataset [19].
International Conference on Mining Intelligence and Knowledge Exploration | 2013
N. L. Manasa; A. Govardhan; Ch. Satyanarayana
As palmprint patterns exhibit abundant variation, the inter-class and intra-class variability of these features makes it difficult for a single set of features to capture. This inspires us to propose a hybrid feature extraction and fusion approach for palmprint recognition based on the texture information available in the palm. Scale, shift and rotation (affine) invariance and good directional sensitivity make Dual-Tree Complex Wavelets a natural choice for capturing texture features at the global level. Local Binary Patterns, on the other hand, being gray-scale and rotation invariant, capture local fine textures effectively; these local features are sensitive to the position and orientation of the palm image. Canonical Correlation Analysis is used to combine the features at the descriptor level, which ensures that the information captured from both features is maximally correlated while redundant information is eliminated, giving a more compact representation. Experimental results demonstrate an accuracy of 97.2% at an EER of 3.2% on the CASIA palmprint database.
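The LBP stage described above can be sketched for the basic 8-neighbour case; the rotation-invariant variant referenced in the text would add a relabelling step on top of these codes, and the histogram normalisation is an illustrative choice.

```python
import numpy as np

def lbp_3x3(img):
    """Basic local binary pattern: each interior pixel is encoded by
    thresholding its 3x3 neighbourhood against the centre value, packing
    the 8 comparison bits into one byte."""
    c = img[1:-1, 1:-1]
    # clockwise neighbour offsets starting at the top-left pixel
    offs = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offs):
        nb = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes -- the local texture descriptor
    that is then fused with the global wavelet features."""
    codes = lbp_3x3(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

Because the codes depend only on grey-level ordering, the descriptor is invariant to monotonic illumination changes, which is what makes it attractive for palm texture.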
Computational Intelligence | 2007
Ch. Satyanarayana; L. Pratap Reddy
Face recognition is a computer technology that identifies human faces using principal component analysis (PCA) by comparing given facial characteristics against an available face database. Human faces are usually upright, so they can be treated as 2D images rather than 3D. A 2D facial image can be transformed into a 1D vector of pixels and projected onto the principal components of the feature space, called the eigenspace projection, which is computed from the eigenvectors of the covariance matrix derived from a set of facial images. The projections of the given face are then compared with the available training set and the face is identified. In this paper, we introduce a novel approach to dimensionality reduction of the covariance matrix and apply the algorithm to training sets from the JNTU face database and a non-face database. Plots of recognition rate versus Euclidean distance, recognition rate versus number of eigenfaces, and eigenvalue variation are presented in this paper.
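The eigenspace projection described above can be sketched with the classic Gram-matrix trick for reducing the covariance computation; this is an illustrative sketch of standard eigenfaces, not necessarily the paper's novel reduction scheme.

```python
import numpy as np

def eigenfaces(faces, k):
    """Top-k eigenfaces from a stack of flattened images (n x p).

    Works on the small n x n Gram matrix instead of the p x p pixel
    covariance -- the standard trick when n_pixels >> n_images.
    Returns the mean face and a (p, k) projection basis.
    """
    mean = faces.mean(0)
    A = faces - mean                 # centred data, shape (n, p)
    L = A @ A.T                      # n x n Gram matrix
    vals, vecs = np.linalg.eigh(L)   # ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]
    U = A.T @ vecs[:, order]         # lift eigenvectors back to pixel space
    U /= np.linalg.norm(U, axis=0)   # unit-norm eigenfaces
    return mean, U

def identify(probe, gallery, mean, U):
    """Project the probe into eigenspace and return the index of the
    nearest gallery projection (Euclidean distance)."""
    g = (gallery - mean) @ U
    p = (probe - mean) @ U
    return int(np.argmin(np.linalg.norm(g - p, axis=1)))
```

The nearest-neighbour step mirrors the abstract's comparison of eigenspace projections against the training set.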