Rajiv Pandey
Amity University
Publications
Featured research published by Rajiv Pandey.
International Conference on Communication Systems and Network Technologies | 2015
Rajiv Pandey; Nidhi Srivastava; Shahnaz Fatima
Big Data is the buzzword doing the rounds in all areas of human existence, be it medicine, social networks, or research, and it has also made inroads into education. The large size and complexity of datasets in Big Data call for specialized statistical tools, and R can come in handy for such analysis. This paper explores the analysis of Big Data in education using the contemporary statistical tool R. R provides multiple dimensions to the statistical analysis of a dataset; this paper, however, explores only the box plot feature, studying the impact of outliers on the overall summary measures of the dataset. The trimmed mean is incorporated to demonstrate its effect in suppressing outliers. The trimmed dataset can then be used in predictive analysis in a business intelligence or educational context.
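As a rough illustration of the approach described above, the following R sketch (using hypothetical scores, not the paper's dataset) contrasts a box plot and a trimmed mean on data containing outliers:

```r
# Hypothetical student scores with two outliers (illustrative data,
# not from the paper's dataset)
scores <- c(62, 58, 71, 65, 69, 74, 60, 66, 12, 99)

# Box plot exposes the outliers relative to the interquartile range
boxplot(scores, main = "Student scores", ylab = "Score")

# The ordinary mean is pulled toward the outliers...
mean(scores)              # 63.6

# ...while a 10% trimmed mean discards the extreme tails first
mean(scores, trim = 0.1)  # 65.625
```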
International Conference on Cloud System and Big Data Engineering | 2016
Chitresh Verma; Rajiv Pandey
Big Data is a large dataset displaying the features of volume, velocity and variety in an OR relationship. Big Data as a large dataset is of no significance if it cannot be exposed to strategic analysis and utilization. Many software and hardware solutions in the technological landscape enable the capture, storage and subsequent analysis of Big Data; Hadoop and its associated technologies form one of them. Hadoop is a software framework for computing over large amounts of data. It is made up of four main modules: Hadoop Common, the Hadoop Distributed File System (HDFS), Hadoop YARN, and Hadoop MapReduce. Hadoop MapReduce divides a large problem into smaller sub-problems under the control of the JobTracker. This paper suggests a Big Data representation for grade analytics in an educational context. The study and the experiments can be implemented in R or on AWS, the cloud infrastructure provided by Amazon.
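The MapReduce decomposition the paper builds on can be sketched, purely illustratively, with base R's Map() and Reduce() over hypothetical (course, grade) records; on a Hadoop cluster the same stages would run distributed over HDFS splits:

```r
# Hypothetical (course, grade) records; in Hadoop these would be
# lines in HDFS, split across mappers under JobTracker control
records <- list(c("math", 72), c("physics", 81), c("math", 90),
                c("physics", 65), c("math", 84))

# Map stage: emit (key = course, value = grade) pairs
pairs <- Map(function(r) list(key = r[1], value = as.numeric(r[2])), records)

# Shuffle stage: group values by key
grouped <- split(sapply(pairs, `[[`, "value"),
                 sapply(pairs, `[[`, "key"))

# Reduce stage: fold each group down to a mean grade per course
Map(function(vals) Reduce(`+`, vals) / length(vals), grouped)
```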
International Conference on Computational Intelligence and Communication Networks | 2015
Agnivesh; Rajiv Pandey
The data generated by both humans and machines is exponentially multiplying in size and in structural variety. Such voluminous, dynamic and unstructured data, termed Big Data, is analyzed and maintained for various purposes and applications. Big Data is generated from sources such as social media, cyber-physical systems and business entities. This enormous data generation leads to problems of data storage and analysis. Big Data, with its diverse features, calls for various tools, technologies and algorithms to draw inferences that render strategic advantage to any entity. A typical data analytics scenario is a multidimensional problem, and data clustering can lead to multi-spatial analysis; clusters can be produced by various algorithms. In this paper, k-means clustering is applied using the R statistical tool to generate clusters and recommend electives on the basis of students' performance.
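A minimal sketch of the clustering step, assuming a hypothetical two-feature marks matrix (the paper's actual features and data are not reproduced here):

```r
set.seed(42)  # reproducible cluster assignment

# Hypothetical performance matrix: marks in two subject groups
marks <- data.frame(programming = c(85, 90, 40, 35, 88, 42, 75, 30),
                    mathematics = c(80, 88, 45, 38, 82, 50, 70, 35))

# Partition students into k = 2 performance clusters
fit <- kmeans(marks, centers = 2)

fit$cluster   # cluster label per student
fit$centers   # mean marks per cluster, a basis for elective recommendation
```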
International Conference on Communication Systems and Network Technologies | 2015
Rajiv Pandey; Manoj Dhoundiyal; Amrendra Kumar
The large size and complexity of datasets in Big Data call for specialized statistical tools, and we use R for the correlation analysis of our dataset. This paper explores correlation analysis through the best-fit linear regression of quantitative variables, with a demonstration based on scatter plots and a linear-regression best-fit line. The analysis demonstrated in this paper is scalable to Big Data in any other context where the quantitative variables are clearly delineated. R provides multiple techniques and inferences for the statistical analysis of a dataset; this paper, however, explores the correlation between quantitative variables, establishing the extent of dependability between them using R functions. The correlation and best-fit-line functions of R, i.e. cor() and abline() applied to an lm object, are explored in detail.
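A minimal sketch of this workflow on synthetic data, using the cor(), lm(), plot() and abline() functions named above:

```r
set.seed(1)

# Two hypothetical quantitative variables with a linear dependence
x <- seq(1, 50)
y <- 3 * x + rnorm(50, sd = 10)

# Strength of the linear relationship
cor(x, y)

# Scatter plot with the best-fit regression line: abline() draws
# the line directly from the fitted lm object
plot(x, y, main = "Correlation of quantitative variables")
lmout <- lm(y ~ x)
abline(lmout)
```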
International Journal of Computer Applications | 2017
Mrinal Pandey; Rajiv Pandey
The Semantic Web envisioned by Tim Berners-Lee provides for intelligent knowledge retrieval. In addition to knowledge retrieval, however, there is a need to make the knowledge so derived trustworthy. This requires the incorporation of trust, or provenance, information in the Semantic Web; provenance serves as a crucial factor in enhancing its trustworthiness. This paper aims at the creation of a trustable Semantic Web by creating provenance assertions, and provides for verifying the trustworthiness of these assertions through provenance-of-provenance descriptions. This is shown using bundles, a special data structure required for linking provenance descriptions. We also illustrate how the provenance descriptions created by one application can be effectively manipulated by another application through the use of these bundles.
General Terms: Semantic Web; Provenance; Ontology; Bundles; mentionOf
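A minimal sketch of the idea, holding bundle-linked provenance assertions as plain subject-predicate-object triples in base R; all ex: identifiers are hypothetical, and prov:mentionOf / prov:asInBundle are the PROV vocabulary terms behind the mentionOf construct the paper refers to:

```r
# Bundle-linked provenance as plain triples (hypothetical URIs)
triples <- data.frame(
  subject   = c("ex:reportV2",    "ex:reportV2",     "ex:bundle1"),
  predicate = c("prov:mentionOf", "prov:asInBundle", "prov:wasAttributedTo"),
  object    = c("ex:report",      "ex:bundle1",      "ex:provAuthor"),
  stringsAsFactors = FALSE
)

# Provenance of provenance: the bundle is itself an entity whose
# attribution can be asserted and later verified by another application
subset(triples, subject == "ex:bundle1")
```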
International Journal of Computer Applications | 2017
Alok Tripathi; Rajiv Pandey
A great deal of research in previous years has addressed the threat of collusion attacks on fingerprinting codes. Digital fingerprints are codes inserted into media content before distribution, with each fingerprinting code assigned to an intended recipient. The code is used to trace the culprit in case of illegal redistribution of the content. It is now possible for a group of users holding differently fingerprinted copies of the same content to collude and collectively mount an attack against the fingerprints; collusion attacks thus pose a real challenge to protecting the copyright of digital media. This paper presents an analysis of Boneh-Shaw fingerprinting codes under majority-value collusion attacks.
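The majority-value attack itself is simple to simulate; the following R sketch uses random binary codewords (not actual Boneh-Shaw codewords) to show how colluders forge a copy position by position:

```r
set.seed(7)

# Hypothetical binary fingerprint codewords for 5 colluders over a
# 16-bit code (random bits, not real Boneh-Shaw codewords)
colluders <- matrix(rbinom(5 * 16, 1, 0.5), nrow = 5)

# Majority-value attack: in every code position the colluders emit
# the bit carried by the majority of their copies
pirated <- as.integer(colMeans(colluders) > 0.5)

pirated  # forged fingerprint that the tracing algorithm must cope with
```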
International Conference on Computational Intelligence and Communication Networks | 2016
Komal Verma; Rajiv Pandey; Arpit Gupta
The Make in India initiative by the Govt. of India is an endeavor to enable the transfer of technology and boost production across India. It is desirable to employ a workforce that geographically maps to the industry location of a product. This paper focuses on analyzing Make in India Big Data through the k-means algorithm using RStudio. Analyzing this dataset shall enable decision makers to identify the workforce and deploy it so as to optimize the cost incurred by the setup. The proposed analytics shall capture, store and analyze the dataset to form region-wise clusters based on the skill sets possessed. The R tool is used for organizing the data and giving a statistical and tabular description of how an optimal skill-set-based allotment can be done. Through R, the entire workforce is grouped into clusters represented region-wise, making skill-set-based job allocation in the prescribed region financially viable.
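A minimal sketch of the region-wise clustering idea on synthetic workforce records (the Make in India dataset is not reproduced here):

```r
set.seed(99)

# Hypothetical workforce records: skill scores plus home region
workforce <- data.frame(
  manufacturing = runif(40, 0, 100),
  electronics   = runif(40, 0, 100),
  region        = sample(c("North", "South", "East", "West"), 40, TRUE)
)

# Cluster on the skill columns only
fit <- kmeans(workforce[, c("manufacturing", "electronics")], centers = 3)

# Region-wise tabulation of the skill clusters, a basis for allocation
table(workforce$region, fit$cluster)
```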
International Conference on Computational Intelligence and Communication Networks | 2015
Mrinal Pandey; Rajiv Pandey
The Semantic Web, considered an intelligent web, is an effective way to retrieve data. Though semantic data is returned by the Semantic Web, it may lack trust and may not be suitable for consumption by man or machine without an evaluation of its provenance. Integrating provenance into the trust layer of the Semantic Web not only allows for intelligent knowledge retrieval but also leads to retrieving trustworthy data. This can be achieved efficiently by bringing to use the various layers of the provenance stack, thereby creating valid provenance instances. This paper explores the need for information trustworthiness in OWL ontologies and deliberates the rules for incorporating trustworthiness so as to develop valid provenance instances by means of the constraints provided in the PROV-CONSTRAINTS layer. The resultant ontology can thus be used by machines for learning and making semantic inferences.
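By way of illustration, one of the PROV-CONSTRAINTS ordering rules, that an entity's generation must precede its usage, can be checked as follows in R over hypothetical timestamped events:

```r
# Hypothetical event times for one entity in a provenance instance
generated_at <- as.POSIXct("2015-03-01 10:00:00")  # prov:wasGeneratedBy
used_at      <- as.POSIXct("2015-03-01 09:30:00")  # prov:used

# A provenance instance is invalid if an entity is used before it exists
valid <- generated_at <= used_at
if (!valid) warning("constraint violated: entity used before generation")
```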
Grid Computing | 2014
Mrinal Pandey; Rajiv Pandey
The Semantic Web is a web that allows for intelligent knowledge representation and retrieval. However, intelligent information retrieval must also cater to the credibility of the data being delivered to a seeker of information. Provenance, which tracks the lineage of data items present on the web, is an efficient way to deliver trustworthy data. Provenance is crucial as it allows users to be sure about the content being delivered to them; it also aids reproducibility, result analysis and problem diagnosis. In this paper we make an endeavor to describe the Provenance Architecture Stack and state its relevance to OWL ontologies.
Grid Computing | 2014
Mrinal Pandey; Rajiv Pandey
Intelligent information retrieval and data credibility are maintained by the use of provenance in the Semantic Web. However, there is still a need to embed provenance data in a simpler form so that it is trusted and easily available for human consumption. This objective is achieved by the use of PROV-ASN, which provides multiple expression assertions using abstract notation to represent provenance information on the Semantic Web. In this paper we make an endeavor to create valid PROV-ASN instances by embedding entity, agent and activity expressions with reference to the University People Program Ontology.
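As a rough illustration, PROV-ASN-style entity, agent and activity expressions can be assembled as text in R; the upp: identifiers below are hypothetical stand-ins for terms of the University People Program Ontology:

```r
# Assemble PROV-ASN-style expressions as text (hypothetical identifiers)
entity_expr   <- sprintf("entity(%s)", "upp:DegreeCertificate")
agent_expr    <- sprintf("agent(%s)",  "upp:Registrar")
activity_expr <- sprintf("activity(%s, %s, %s)",
                         "upp:Graduation", "2014-05-01", "2014-05-02")

# Relating the expressions, as in a provenance instance document
relation <- sprintf("wasGeneratedBy(%s, %s, -)",
                    "upp:DegreeCertificate", "upp:Graduation")

cat(entity_expr, agent_expr, activity_expr, relation, sep = "\n")
```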