Publication


Featured research published by Sanyam Shukla.


IEEE Access | 2015

Regularized Weighted Circular Complex-Valued Extreme Learning Machine for Imbalanced Learning

Sanyam Shukla; Ram N. Yadav

Extreme learning machine (ELM) has emerged as an effective, fast, and simple solution for real-valued classification problems, and various variants of ELM have recently been proposed to enhance its performance. Circular complex-valued extreme learning machine (CC-ELM), one such variant, exploits the capabilities of complex-valued neurons to achieve better performance. Another variant, weighted ELM (WELM), handles the class imbalance problem by minimizing a weighted least-squares error along with regularization. In this paper, a regularized weighted CC-ELM (RWCC-ELM) is proposed, which combines the strengths of both CC-ELM and WELM. The proposed RWCC-ELM is evaluated on imbalanced datasets taken from the KEEL repository, where it outperforms CC-ELM and WELM on most of the evaluated datasets.
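The weighted least-squares step that WELM contributes can be sketched in a few lines of NumPy. This is an illustrative toy (real-valued, sigmoid hidden nodes, the common weight of one over the class count), not the paper's complex-valued RWCC-ELM; the names `welm_train` and `welm_predict` are invented for the example.

```python
import numpy as np

def welm_train(X, y, n_hidden=50, C=1.0, rng=None):
    """Weighted ELM sketch: y holds +/-1 labels; minority-class
    instances receive larger weights via 1 / (class count)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    # Random input weights and biases: the defining trait of ELM
    Win = rng.standard_normal((d, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ Win + b)))       # sigmoid hidden layer
    counts = {c: np.sum(y == c) for c in np.unique(y)}
    w = np.array([1.0 / counts[c] for c in y])     # per-instance weights
    HtW = H.T * w                                  # H^T W without forming W
    # Regularized weighted least squares: (I/C + H^T W H) beta = H^T W y
    beta = np.linalg.solve(np.eye(n_hidden) / C + HtW @ H, HtW @ y)
    return Win, b, beta

def welm_predict(model, X):
    Win, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ Win + b)))
    return np.sign(H @ beta)
```

The weighting makes minority-class errors cost as much in aggregate as majority-class errors, which is what shifts the decision boundary away from the majority class.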


International Conference on Advanced Computing | 2016

Analysis of k-Fold Cross-Validation over Hold-Out Validation on Colossal Datasets for Quality Classification

Sanjay Yadav; Sanyam Shukla

When training a model on data from a dataset, the split must be chosen carefully: the model needs enough instances to learn from without over-fitting, yet with too few training instances it will be trained poorly and give weak results at test time. Accuracy matters in classification, and one should strive for the highest accuracy attainable without an inexcusable cost in time. On small datasets, the usual choices are k-fold cross-validation with a large value of k (but smaller than the number of instances) or leave-one-out cross-validation, whereas on colossal datasets the first thought is generally to use hold-out validation. This article studies the differences between the two validation schemes and analyzes the possibility of using k-fold cross-validation over hold-out validation even on large datasets. Experiments on four large datasets show that, up to a certain threshold, k-fold cross-validation with the value of k varied with respect to the number of instances can indeed be used in place of hold-out validation for quality classification.
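The two schemes being compared can be sketched directly; this is a minimal illustration using an invented nearest-centroid toy classifier (the paper's datasets and models are not reproduced here). K-fold cross-validation tests on every instance exactly once, while hold-out gives a single estimate from one split.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Yield (train, test) index pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n)
    for fold in np.array_split(idx, k):
        yield np.setdiff1d(idx, fold), fold

def centroid_accuracy(Xtr, ytr, Xte, yte):
    """Toy nearest-centroid classifier, used only to compare the schemes."""
    classes = np.unique(ytr)
    cents = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    dist = ((Xte[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return float(np.mean(classes[dist.argmin(axis=1)] == yte))

def kfold_score(X, y, k=10, seed=0):
    """Mean accuracy over k folds: every instance is tested exactly once."""
    return float(np.mean([centroid_accuracy(X[tr], y[tr], X[te], y[te])
                          for tr, te in kfold_indices(len(y), k, seed)]))

def holdout_score(X, y, train_frac=0.7, seed=0):
    """Single accuracy estimate from one train/test split."""
    idx = np.random.default_rng(seed).permutation(len(y))
    cut = int(train_frac * len(y))
    tr, te = idx[:cut], idx[cut:]
    return centroid_accuracy(X[tr], y[tr], X[te], y[te])
```

The k-fold estimate averages k dependent estimates and is typically less sensitive to a lucky or unlucky split, at roughly k times the training cost, which is exactly the trade-off the article weighs on large datasets.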


Neural Networks | 2018

Class-specific extreme learning machine for handling binary class imbalance problem

Bhagat Singh Raghuwanshi; Sanyam Shukla

The class imbalance problem occurs when majority-class instances outnumber minority-class instances. Conventional extreme learning machine (ELM) treats all instances with the same importance, biasing prediction accuracy towards the majority class. To overcome this inherent drawback, many variants of ELM, such as weighted ELM (WELM) and class-specific cost regulation ELM (CCR-ELM), have been proposed to handle the class imbalance problem effectively. This work proposes class-specific extreme learning machine (CS-ELM), a variant of ELM that handles the binary class imbalance problem more effectively. It differs from weighted ELM in that it does not require assigning weights to the training instances, and it has lower computational complexity than weighted ELM. Like CCR-ELM, this work uses class-specific regularization parameters, but unlike CCR-ELM, it computes them from the class distribution; in CCR-ELM their computation considers neither class distribution nor class overlap. The proposed work also differs from CCR-ELM in the computation of the output weight, β, and has lower computational overhead. CS-ELM is evaluated using benchmark real-world imbalanced datasets downloaded from the KEEL dataset repository. The results show that it outperforms weighted ELM, CCR-ELM, EFSVM, FSVM, and SVM for class imbalance learning.


Expert Systems With Applications | 2018

Dynamic selection of Normalization Techniques using Data Complexity Measures

Sukirty Jain; Sanyam Shukla; Rajesh Wadhvani

Data preprocessing is an important step in designing a classification model. Normalization is one of the preprocessing techniques used to handle out-of-bounds attributes. This work develops 14 classification models, using different learning algorithms, for the dynamic selection of a normalization technique. Twelve data complexity measures are extracted for 48 datasets drawn from the KEEL dataset repository. Each dataset is normalized using the min–max and z-score normalization techniques, and the G-mean index is estimated for the normalized datasets using a Gaussian kernel extreme learning machine (KELM) in order to determine the best-suited normalization technique. The data complexity measures, along with the best-suited normalization technique, are used as input for developing the aforementioned dynamic models, which predict the most suitable normalization technique from the estimated data complexity measures of a dataset. The results show that the models developed using Gaussian kernel ELM (KELM) and support vector machine (SVM) give promising results for most of the evaluated classification problems.
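The two candidate techniques the dynamic selector chooses between are standard; a minimal sketch of both (the selection models themselves, built on data complexity measures, are not reproduced here):

```python
import numpy as np

def min_max(X):
    """Rescale each attribute to the [0, 1] range."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)   # guard constant columns

def z_score(X):
    """Standardize each attribute to zero mean and unit variance."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    return (X - mu) / np.where(sd > 0.0, sd, 1.0)       # guard constant columns
```

Min–max bounds every attribute but is sensitive to outliers at the extremes, while z-score preserves relative spread but leaves values unbounded; which one yields the better G-mean depends on the dataset, which is why the work learns the choice from complexity measures.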


Engineering Applications of Artificial Intelligence | 2018

UnderBagging based reduced Kernelized weighted extreme learning machine for class imbalance learning

Bhagat Singh Raghuwanshi; Sanyam Shukla

Extreme learning machine (ELM) is a highly capable, fast, real-valued classification algorithm with good generalization performance, but conventional ELM does not handle the class imbalance problem effectively. Numerous variants of ELM, such as weighted ELM (WELM) and boosting WELM (BWELM), have been proposed to diminish the performance degradation caused by class imbalance. This work proposes a novel reduced kernelized WELM (RKWELM), a variant of kernelized WELM that handles the class imbalance problem more effectively. Because the performance of RKWELM varies with the arbitrary selection of kernel centroids, this work uses an ensemble method to reduce this variation. The computational complexity of kernelized ELM (KELM) depends on the number of kernels. KELM generally employs the Gaussian kernel function and uses all N training instances as centroids, which requires computing the pseudoinverse of an N × N matrix; this operation becomes very slow for large N. Moreover, for imbalanced classification problems, using all the training instances as centroids yields more centroids representing the majority class than the minority class, which can bias the classification model in favor of the majority class. This work therefore uses a subset of the training instances as kernel centroids: RKWELM arbitrarily chooses N_min instances from each class, giving Ñ = m × N_min centroids in total, where m is the number of classes and N_min is the number of instances in the minority class (the class with the fewest instances). This reduction shrinks the kernel matrix to size Ñ × Ñ, decreasing the computational complexity.
This work creates a number of balanced kernel subsets depending on the degree of class imbalance, and a number of RKWELM-based classification models are produced using these subsets. The final outcome is computed by majority voting and soft voting over these models. The proposed algorithm is assessed on benchmark real-world imbalanced datasets downloaded from the KEEL dataset repository, and the experimental results indicate its superiority over the other classifiers for imbalanced classification problems.
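The centroid-reduction idea is easy to sketch: draw N_min instances per class (assuming simple random sampling, since the abstract only says the choice is arbitrary) and build the Gaussian kernel against those Ñ = m × N_min centroids instead of all N training instances.

```python
import numpy as np

def balanced_centroids(X, y, seed=0):
    """Draw N_min instances from each class to act as kernel centroids,
    so every class contributes the same number of centroids."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    picks = [rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
             for c in classes]
    return X[np.concatenate(picks)]

def gaussian_kernel(A, B, gamma=1.0):
    """K[i, j] = exp(-gamma * ||A[i] - B[j]||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)
```

With all N instances as centroids the kernel system is N × N; with balanced sampling it is only Ñ × Ñ, and repeating the random draw gives the ensemble members that the voting step combines.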


Applied Soft Computing | 2018

Class-specific kernelized extreme learning machine for binary class imbalance learning

Bhagat Singh Raghuwanshi; Sanyam Shukla

The class imbalance problem occurs when the training dataset contains significantly fewer samples of one class (the minority class) than of the other (the majority class). Conventional extreme learning machine (ELM) gives equal importance to all samples, leading to results that favor the majority class. Numerous variants of ELM, such as weighted ELM (WELM), class-specific cost regulation ELM (CCR-ELM), and class-specific ELM (CS-ELM), have been proposed to diminish the performance degradation caused by class imbalance. ELM with a Gaussian kernel outperforms ELM with sigmoid nodes. This work proposes a novel class-specific kernelized ELM (CSKELM), a variant of kernelized ELM that addresses the class imbalance problem more effectively. CSKELM with a Gaussian kernel function avoids the non-optimal hidden node problem associated with CS-ELM and the other existing variants of ELM. It is distinct from WELM in that it does not require assigning weights to the training samples, and it has considerably lower computational cost than kernelized WELM. This work employs class-specific regularization in the same way as CS-ELM, but differs from CS-ELM in using the Gaussian kernel function to map the input data to the feature space, and it has lower computational overhead than kernelized CCR-ELM. The proposed work is assessed on benchmark real-world imbalanced datasets downloaded from the KEEL dataset repository, and the experimental results indicate its superiority over the other classifiers for imbalanced classification problems.


International Journal of Advanced Computer Science and Applications | 2017

Block Wise Data Hiding with Auxilliary Matrix

Jyoti Bharti; R.K. Pateriya; Sanyam Shukla

This paper introduces a novel method, based on an auxiliary matrix, to hide text data in the RGB planes of an image using scanning, encryption, and decryption. To enhance security, the scanning technique combines two different traversals, spiral and snake. The encryption algorithm uses the auxiliary matrix as the payload and considers the least significant bits of the three planes. The text message to be embedded is represented as ASCII values that match values in the red plane; the least significant bits of pixels in the blue plane mark the positions of those pixels, and the least significant bits of the boundary values of the green plane signify the message. The three planes are recombined to form the stego-image, and the message is decrypted by scanning the red, blue, and green planes simultaneously. Performance is evaluated using PSNR, MSE, and entropy, and the generated results are compared with earlier proposed work to demonstrate the method's efficiency.
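The least-significant-bit primitive underlying such schemes can be shown in isolation. This is a generic single-plane LSB embed/extract sketch, not the paper's spiral/snake scanning or its three-plane auxiliary-matrix scheme.

```python
import numpy as np

def embed_bits(plane, bits):
    """Write one payload bit into the least significant bit of each pixel."""
    out = plane.copy()
    out.flat[:len(bits)] = (out.flat[:len(bits)] & 0xFE) | bits
    return out

def extract_bits(plane, n):
    """Read back the first n embedded bits."""
    return plane.flat[:n] & 1

def text_to_bits(msg):
    """ASCII message -> bit array."""
    return np.unpackbits(np.frombuffer(msg.encode("ascii"), dtype=np.uint8))

def bits_to_text(bits):
    """Bit array -> ASCII message."""
    return np.packbits(bits).tobytes().decode("ascii")
```

Because only the lowest bit of each pixel changes, every pixel value moves by at most 1, which is why LSB methods score well on PSNR and MSE while remaining visually imperceptible.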


International Journal of Advanced Computer Science and Applications | 2017

RIN-Sum: A System for Query-Specific Multi-Document Extractive Summarization

Rajesh Wadhvani; Rajesh Kumar Pateriya; Manasi Gyanchandani; Sanyam Shukla

In this paper, we propose a novel summarization framework, called RIN-Sum, that generates a quality summary by extracting Relevant-Informative-Novel (RIN) sentences from a topically related document collection. In the proposed framework, structured sentences are ranked with the aim of retrieving sentences that are relevant to the user and convey novel information. For sentence ranking, a Relevant-Informative-Novelty (RIN) ranking function is formulated from three factors: the relevance of the sentence to the input query, the informativeness of the sentence, and the novelty of the sentence. For the relevance measure, instead of adopting the existing cosine and overlap metrics, which have certain limitations, a new relevance metric called C-Overlap is formulated. RIN ranking is applied to the document collection to retrieve relevant sentences conveying significant and novel information about the query, and the retrieved sentences are used to generate a query-specific summary of multiple documents. The performance of the proposed framework is investigated using a standard dataset, the DUC2007 document collection, and the ROUGE summary evaluation tool.
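The two baseline relevance metrics the abstract contrasts with C-Overlap are standard and easy to sketch (C-Overlap itself is not specified in the abstract, so it is not implemented here):

```python
from collections import Counter
import numpy as np

def cosine_sim(query, sentence):
    """Cosine similarity over term-frequency vectors."""
    q, s = Counter(query.lower().split()), Counter(sentence.lower().split())
    vocab = sorted(set(q) | set(s))
    qv = np.array([q[w] for w in vocab], dtype=float)
    sv = np.array([s[w] for w in vocab], dtype=float)
    denom = np.linalg.norm(qv) * np.linalg.norm(sv)
    return float(qv @ sv / denom) if denom else 0.0

def overlap_sim(query, sentence):
    """Overlap coefficient: shared terms over the smaller term set."""
    q, s = set(query.lower().split()), set(sentence.lower().split())
    denom = min(len(q), len(s))
    return len(q & s) / denom if denom else 0.0
```

The limitations alluded to are visible here: cosine penalizes length mismatch between a short query and a long sentence, while overlap saturates at 1.0 whenever the smaller set is fully contained, motivating a combined metric.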


International Conference on Computational Intelligence and Computing Research | 2015

Analysis of statistical features for fault detection in ball bearing

Sanyam Shukla; Ram N. Yadav; Jivitesh Sharma; Shankul Khare

Fault detection in ball bearings has attracted the attention of various researchers, and several statistical features have been proposed and used for this purpose. This work analyzes the importance of the available statistical features by different methods, including graphical analysis and feature ranking using information gain and gain ratio. The results show that some statistical features can be used individually to distinguish between healthy and faulty ball bearings, i.e., a single statistical feature suffices instead of the ensemble of features that is generally used. This paper also proposes a new metric that ranks the features by how well they distinguish between healthy and faulty ball bearings, in order to identify the importance of each statistical feature for fault identification.
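A few statistical features commonly computed from a bearing vibration signal can be sketched as follows; this is a generic illustration (the abstract does not list its exact feature set), and the discriminative power of kurtosis shown below is a well-known property of impulsive fault signatures.

```python
import numpy as np

def bearing_features(x):
    """Common statistical features of a 1-D vibration signal."""
    mu, sd = x.mean(), x.std()
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "rms": float(rms),
        # ~3 for a Gaussian-like (healthy) signal; impacts push it higher
        "kurtosis": float(np.mean((x - mu) ** 4) / sd ** 4),
        "crest_factor": float(np.max(np.abs(x)) / rms),
        "skewness": float(np.mean((x - mu) ** 3) / sd ** 3),
    }
```

Periodic impacts from a defective raceway add heavy tails to the amplitude distribution, so kurtosis and crest factor rise well above their healthy baselines, which is why a single such feature can often separate the two conditions.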


International Conference on Computational Intelligence and Computing Research | 2015

A novel sparse ensemble pruning algorithm using a new diversity measure

Sanyam Shukla; Jivitesh Sharma; Shankul Khare; Samruddhi Kochkar; Vanya Dharni

Extreme learning machine (ELM) is a state-of-the-art supervised machine learning technique for classification and regression. A single ELM classifier can, however, generate faulty or skewed results due to the random initialization of the weights between the input and hidden layers. Ensemble methods can be employed to overcome this instability, but an ensemble may contain several redundant classifiers, i.e., weak or highly correlated ones. Ensemble pruning can be used to remove these redundant classifiers; the pruned ensemble should be not only accurate but also diverse, in order to correctly classify boundary instances. This work proposes an ensemble pruning algorithm that establishes a trade-off between accuracy and diversity, along with a metric that scores classifiers by their diversity and contribution to the ensemble. The results show that the pruned ensemble performs equally well, or in some cases even better, than the unpruned set in terms of accuracy and diversity, and that the proposed algorithm performs better than VELM. The algorithm reduces the ensemble size to less than 60% of the original size (the original ensemble size is set to 50).
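The pruning mechanics can be sketched with a pairwise-disagreement diversity measure and a simple greedy accuracy-based pass; this is a stand-in illustration, not the paper's proposed metric, and all function names are invented for the example.

```python
import numpy as np

def disagreement(P):
    """Mean pairwise disagreement; P is (n_classifiers, n_samples) of labels."""
    m = P.shape[0]
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    return float(np.mean([np.mean(P[i] != P[j]) for i, j in pairs]))

def majority_vote(P):
    """Ensemble prediction for +/-1 labels."""
    return np.sign(P.sum(axis=0))

def greedy_prune(P, y, target):
    """Repeatedly drop the classifier whose removal least hurts
    ensemble accuracy, until `target` classifiers remain."""
    keep = list(range(P.shape[0]))
    while len(keep) > target:
        scores = [np.mean(majority_vote(P[[k for k in keep if k != i]]) == y)
                  for i in keep]
        keep.pop(int(np.argmax(scores)))
    return keep
```

A metric that trades accuracy against diversity, as the paper proposes, would replace the pure-accuracy score above with a combined score so that correlated classifiers are pruned even when they are individually accurate.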

Collaboration


Dive into Sanyam Shukla's collaboration.

Top Co-Authors

Bhagat Singh Raghuwanshi (Maulana Azad National Institute of Technology)
Rajesh Wadhvani (Maulana Azad National Institute of Technology)
Ram N. Yadav (Maulana Azad National Institute of Technology)
Sukirty Jain (Maulana Azad National Institute of Technology)
Jivitesh Sharma (Maulana Azad National Institute of Technology)
Mona Singhal (Maulana Azad National Institute of Technology)
Shankul Khare (Maulana Azad National Institute of Technology)
R.K. Pateriya (Maulana Azad National Institute of Technology)
Samruddhi Kochkar (Maulana Azad National Institute of Technology)
Sanjay Yadav (Maulana Azad National Institute of Technology)