Publications


Featured research published by Kalpdrum Passi.


Electronic Commerce and Web Technologies | 2002

A Model for XML Schema Integration

Kalpdrum Passi; Louise Lane; Sanjay Kumar Madria; Bipin C. Sakamuri; Mukesh K. Mohania; Sourav S. Bhowmick

We define an object-oriented data model called XSDM (XML Schema Data Model) and present a graphical representation of XML Schema integration. The integration proceeds in three layers: pre-integration, comparison, and integration. During pre-integration, the schemas, given in XML Schema notation, are read and converted into XSDM notation. During the comparison phase, correspondences as well as conflicts between elements are identified. During the integration phase, conflict resolution, restructuring, and merging of the initial schemas take place to obtain the global schema.
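
The three-phase pipeline is easy to picture in code. The Python sketch below is an illustrative reconstruction under stated assumptions, not the authors' implementation: XSDMElement, compare, and integrate are invented names, the pre-integration parsing step (XML Schema into XSDM) is elided, and conflict resolution is delegated to a caller-supplied resolve function.

```python
# Hypothetical sketch of the comparison and integration phases; the
# pre-integration parse (XML Schema -> XSDM) is elided.
from dataclasses import dataclass, field

@dataclass
class XSDMElement:
    """One node in the XSDM graph: an element with a datatype and children."""
    name: str
    datatype: str = "xs:string"
    children: dict = field(default_factory=dict)

def compare(a: XSDMElement, b: XSDMElement):
    """Comparison phase: find correspondences (same name, same datatype)
    and conflicts (same name, different datatype) between two schemas."""
    correspondences, conflicts = [], []
    for name, elem in a.children.items():
        other = b.children.get(name)
        if other is None:
            continue
        if elem.datatype == other.datatype:
            correspondences.append(name)
        else:
            conflicts.append((name, elem.datatype, other.datatype))
    return correspondences, conflicts

def integrate(a: XSDMElement, b: XSDMElement, resolve) -> XSDMElement:
    """Integration phase: merge two schemas into a global one, calling
    `resolve` to pick a datatype wherever compare() found a conflict."""
    merged = XSDMElement("global", children={**b.children, **a.children})
    for name, dt_a, dt_b in compare(a, b)[1]:
        merged.children[name] = XSDMElement(name, resolve(dt_a, dt_b))
    return merged

# Toy usage: two local schemas disagree on the datatype of "price".
a = XSDMElement("schemaA", children={"price": XSDMElement("price", "xs:decimal")})
b = XSDMElement("schemaB", children={"price": XSDMElement("price", "xs:string")})
g = integrate(a, b, resolve=lambda d1, d2: "xs:string")
print(g.children["price"].datatype)   # -> xs:string
```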


Data and Knowledge Engineering | 2008

An XML Schema integration and query mechanism system

Sanjay Kumar Madria; Kalpdrum Passi; Sourav S. Bhowmick

The availability of large amounts of heterogeneous distributed web data necessitates the integration of XML data from multiple XML sources for many reasons. For example, there are currently many e-commerce companies that offer similar products but use different XML Schemas, possibly with different ontologies. When any two such companies merge, or cooperate to serve customers, there is a need for an integrated schema and query mechanism so that their applications can interoperate. In applications like comparison shopping, there is a need for an illusory centralized homogeneous information system. In this paper, we propose an XML Schema integration and querying methodology. We define an object-oriented data model called XSDM (XML Schema Data Model) and present a graphical representation of XML Schema for the purpose of schema integration. We use a three-layered architecture for XML Schema integration, consisting of pre-integration, comparison, and integration layers, which can conceptually be regarded as three phases of the integration process. During pre-integration, the schemas, given in XML Schema notation, are read and converted into XSDM notation. During the comparison phase, correspondences as well as conflicts between elements are identified. During the integration phase, conflict resolution, restructuring, and merging of the initial schemas take place to obtain the global schema. We define integration policies for integrating element definitions as well as their datatypes and attributes. An integrated global schema forms the basis for querying a set of local XML documents. We discuss various strategies for rewriting a global query over the global schema into sub-queries over the local schemas. The sub-queries are validated against their respective local schemas and executed over the local XML documents. This requires the identification and use of mapping rules and relationships between the local schemas.
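
As one concrete example of what an integration policy for datatypes might look like, the hypothetical sketch below merges two XML Schema simple types by walking a toy supertype lattice to their least common supertype. The lattice, the fallback to xs:string, and all names are illustrative assumptions, not the policy defined in the paper.

```python
# Toy subset of a simple-type lattice: each type maps to its immediate
# supertype. This is an invented example, not the paper's actual policy.
SUPERTYPE = {
    "xs:byte": "xs:short", "xs:short": "xs:int", "xs:int": "xs:long",
    "xs:long": "xs:decimal", "xs:decimal": "xs:string",
    "xs:date": "xs:string", "xs:boolean": "xs:string",
}

def ancestors(t: str):
    """Chain from t up to the top of the toy lattice, including t itself."""
    chain = [t]
    while t in SUPERTYPE:
        t = SUPERTYPE[t]
        chain.append(t)
    return chain

def merge_datatypes(t1: str, t2: str) -> str:
    """Least common supertype of t1 and t2; xs:string as a catch-all."""
    seen = set(ancestors(t1))
    for t in ancestors(t2):
        if t in seen:
            return t
    return "xs:string"

print(merge_datatypes("xs:short", "xs:long"))   # -> xs:long
```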


Information Systems | 2007

Efficient processing of XPath queries using indexes

Sanjay Kumar Madria; Yan Chen; Kalpdrum Passi; Sourav S. Bhowmick

A number of indexing techniques have been proposed in recent times for optimizing queries on XML and other semi-structured data models. Most semi-structured models use tree-like structures, and their query languages (XPath, XQuery, etc.) make use of regular path expressions, which indexes can exploit to optimize query processing. In this paper, we propose two algorithms, the Entry-Point Algorithm (EPA) and the Two-Point Entry Algorithm, that exploit different types of indices to efficiently process XPath queries. We discuss and compare two approaches to implementing the EPA, Root-first and Bottom-first. We present experimental results of the algorithms using XML benchmark queries and data, and compare the results with those of traditional query-processing methods, with and without indexes, and with the ToXin indexing approach. Our algorithms show better performance than both the traditional methods and the ToXin approach.
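
The entry-point idea can be sketched as follows: rather than traversing from the document root, the evaluator jumps via a tag index to the nodes matching the last step of the path and verifies ancestors bottom-up. This is a loose illustration of the Bottom-first flavor, not the published EPA; the lxml usage and all function names are assumptions.

```python
from lxml import etree

def build_tag_index(root):
    """One pass over the document: map each tag name to its occurrences."""
    index = {}
    for node in root.iter():
        index.setdefault(node.tag, []).append(node)
    return index

def entry_point_eval(index, path):
    """Evaluate a simple child-axis path like 'site/people/person': jump
    to the entry point (nodes matching the last step) via the index, then
    verify each candidate's ancestor chain bottom-up."""
    steps = path.split("/")
    results = []
    for node in index.get(steps[-1], []):
        anc, ok = node, True
        for step in reversed(steps[:-1]):
            anc = anc.getparent()
            if anc is None or anc.tag != step:
                ok = False
                break
        if ok and anc.getparent() is None:   # anchor the path at the root
            results.append(node)
    return results

doc = etree.fromstring("<site><people><person/><person/></people><person/></site>")
index = build_tag_index(doc)
print(len(entry_point_eval(index, "site/people/person")))   # -> 2
```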


International Database Engineering and Applications Symposium | 2009

Maintaining integrated XML schema

Kalpdrum Passi; Don Morgan; Sanjay Kumar Madria

When e-commerce companies merge, there is a need to integrate their local schemas into a uniform global source that is easily generated and maintained. In this paper we explore the incremental maintenance of a global XML schema against updates to local schemas, using three simple operations: Add, Remove, and Change. These operations are designed as an extension to the AXIS model [12], which currently has no way to maintain the global schema once an underlying source schema is updated.
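
A minimal sketch of the flavor of these three operations, assuming the global schema is represented as a map from element names to the set of local schemas declaring them; the class and its representation are invented for illustration and are not the AXIS implementation.

```python
class GlobalSchema:
    """Toy global schema: element name -> set of local schemas declaring it."""

    def __init__(self):
        self.sources = {}

    def add(self, schema_id, element):
        """A local schema gained an element: register it globally."""
        self.sources.setdefault(element, set()).add(schema_id)

    def remove(self, schema_id, element):
        """A local schema dropped an element; drop it globally only when
        no other local schema still declares it."""
        owners = self.sources.get(element, set())
        owners.discard(schema_id)
        if not owners:
            self.sources.pop(element, None)

    def change(self, schema_id, old, new):
        """Rename/redefine, expressed as remove followed by add."""
        self.remove(schema_id, old)
        self.add(schema_id, new)

g = GlobalSchema()
g.add("storeA.xsd", "price")
g.add("storeB.xsd", "price")
g.remove("storeA.xsd", "price")
print("price" in g.sources)    # True: storeB.xsd still declares it
```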


International Conference on Conceptual Modeling | 2003

AXIS: A XML Schema Integration System

Bipin C. Sakamuri; Sanjay Kumar Madria; Kalpdrum Passi; Eric Chaudhry; Mukesh K. Mohania; Sourav S. Bhowmick

The availability of large amounts of heterogeneous distributed web data necessitates the integration and querying of XML data from multiple XML sources for many reasons. For example, many government agencies in the US, such as the IRS, INS, FBI, and CIA, are currently integrating their systems to deal with new security threats, and these departments use legacy database systems holding relational data, flat files, spreadsheets, HTML pages, and simple text data. Similarly, there are many e-commerce companies that sell similar products but represent their data using different XML schemas. When any two such companies merge, or cooperate to serve customers, there is a need for a uniform schema integration methodology [1,2]. In some applications, like comparison shopping, there is a need for an illusory centralized homogeneous information system. Such systems need a uniform data representation and access platform, which XML provides. However, XML schemas and data are still heterogeneous and represent their constraints differently. To avoid the overhead of system integration and system-specific data access mechanisms, applications should be provided with data in an integrated form. The idea is to use XML as an intermediate medium to achieve data integration across heterogeneous data sources. There are currently many efforts to generate views that present data in XML format while it remains stored internally in legacy databases. Using wrappers, applications can view the data as XML instead of moving it from its original format to XML. However, wrappers fail if the structure of the data changes dynamically. Our approach has two phases: the integration of the local XML schemas into a global schema, and the integration of the XML data produced in response to queries against the local XML data sources. A global schema eliminates data model differences by integrating local schemas. The heterogeneous XML data sources need not themselves be stored in an integrated form: integrating the XML data and storing it under the new integrated schema occupies extra resources and may result in duplication, creating problems of multiple updates and data inconsistencies. For this reason, we present a dynamic mechanism that interfaces the different XML data sources and presents an integrated view of them, rather than physically integrating the data.


International Multiconference on Computer Science and Information Technology | 2008

Assessing the properties of the World Health Organization’s Quality of Life Index

Tamar Kakiashvili; Waldemar W. Koczkodaj; Phyllis Montgomery; Kalpdrum Passi; Ryszard Tadeusiewicz

This methodological study demonstrates how to strengthen the commonly used World Health Organization's quality of life index (WHOQOL) by using the consistency-driven pairwise comparisons (CDPC) method. From a conceptual view, there is little doubt that not all 26 items have exactly equal importance or contribution to assessing quality of life. Computing new weights for the individual items is therefore a step forward, since it is unrealistic to assume that all individual questions contribute equally to the measure of quality of life. The findings indicate that incorporating the differing importance of individual questions into the model is an essential enhancement of the instrument.
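
For readers unfamiliar with pairwise comparisons, the sketch below shows how a judgment matrix yields unequal item weights and how to measure how badly a set of judgments violates consistency. It uses the generic row-geometric-mean weighting and Koczkodaj's triad-based inconsistency index as stand-ins; it is not the authors' CDPC software.

```python
import numpy as np

def weights_from_pairwise(M):
    """Weight vector via row geometric means, normalized to sum to 1.
    M[i][j] holds the judged relative importance of item i over item j."""
    gm = np.prod(M, axis=1) ** (1.0 / M.shape[0])
    return gm / gm.sum()

def koczkodaj_inconsistency(M):
    """Worst triad inconsistency: 0 means perfectly consistent judgments
    (every M[i][k] equals M[i][j] * M[j][k])."""
    n, worst = M.shape[0], 0.0
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                a, b, c = M[i, j], M[i, k], M[j, k]
                ii = min(abs(1 - b / (a * c)), abs(1 - a * c / b))
                worst = max(worst, ii)
    return worst

# Toy 3-item example: item 0 judged twice as important as item 1, etc.
M = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
print(weights_from_pairwise(M))        # unequal weights, unlike plain WHOQOL
print(koczkodaj_inconsistency(M))      # 0.0: this matrix is consistent
```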


Cooperative Information Systems | 2004

A Global-to-Local Rewriting Querying Mechanism Using Semantic Mapping for XML Schema Integration

Kalpdrum Passi; Eric Chaudhry; Sanjay Kumar Madria; Sourav S. Bhowmick

We have proposed a methodology for integrating local XML Schemas in [12]. An integrated global schema forms the basis for querying a set of local XML documents. In this paper, we discuss various strategies for rewriting a global query over the global schema into sub-queries over the local schemas. The sub-queries are validated against their respective local schemas and executed over the local XML documents. This requires the identification and use of mapping rules and relationships between the local schemas.
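
A toy illustration of such global-to-local rewriting: each mapping rule pairs a path over the global schema with the corresponding path in one local schema, so a single global query fans out into one sub-query per source. The rule table, paths, and source names are all invented for illustration.

```python
# Hypothetical mapping rules: global path -> {local schema: local path}.
MAPPING_RULES = {
    "catalog/product/price": {
        "storeA.xsd": "inventory/item/cost",
        "storeB.xsd": "products/product/price",
    },
}

def rewrite(global_query: str):
    """Split one global path query into per-source sub-queries."""
    rules = MAPPING_RULES.get(global_query)
    if rules is None:
        raise KeyError(f"no mapping rule for {global_query!r}")
    return list(rules.items())

for source, subquery in rewrite("catalog/product/price"):
    print(f"run {subquery!r} against documents validated by {source}")
```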


Bioinformatics and Biomedicine | 2016

Improved microarray data analysis using feature selection methods with machine learning methods

Jing Sun; Kalpdrum Passi; Chakresh Kumar Jain

Microarray data analysis relates gene expression profiles directly to disease state, and proceeds from feature selection through to classification. This paper studies eight feature-selection preprocessing pipelines derived from four feature selection methods: Minimum Redundancy-Maximum Relevance (mRMR), Max Relevance (MaxRel), Quadratic Programming Feature Selection (QPFS), and Partial Least Squares (PLS). Microarray datasets for colon cancer and leukemia were used to implement and test four classifiers: K-Nearest Neighbor (KNN), Random Forest (RF), Support Vector Machine (SVM), and Neural Network (NN). Performance was measured by accuracy and AUC (area under the curve). The experimental results show that discretization can improve the performance of microarray data analysis to some extent, and that mRMR gives the best performance on the colon and leukemia datasets. We also report comparative results for specific (data-ratio) numbers of features.
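
A sketch of the evaluation loop described above, using scikit-learn stand-ins: SelectKBest with a univariate score is a generic placeholder for mRMR/MaxRel/QPFS/PLS, which scikit-learn does not provide, and dataset loading is elided (X is the gene-expression matrix, y the binary class labels).

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# The four classifier families compared in the paper.
classifiers = {
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(),
    "SVM": SVC(probability=True),
    "NN": MLPClassifier(max_iter=1000),
}

def evaluate(X, y, n_features=50):
    """Report cross-validated accuracy and AUC for each classifier after
    selecting n_features genes (univariate F-score as the stand-in)."""
    for name, clf in classifiers.items():
        pipe = make_pipeline(SelectKBest(f_classif, k=n_features), clf)
        acc = cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()
        auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: accuracy={acc:.3f}  AUC={auc:.3f}")
```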


Health Information Science | 2013

Semantic web and ontology engineering for the colorectal cancer follow-up clinical practice guidelines

Hongtao Zhao; Kalpdrum Passi

Follow-up care for cancer patients is provided by oncologists at the cancer center. There are administrative and cost advantages to having family physicians or nurses provide this follow-up care instead. This paper presents a Semantic Web approach to developing a decision support system for colorectal cancer follow-up care that family physicians can use to provide that care. The decision support system requires an ontology of the follow-up care prescribed by the Clinical Practice Guidelines (CPG). We present an ontology for colorectal cancer based on the follow-up CPG. This formalized and structured CPG ontology can then be used within the Semantic Web framework to provide patient-specific recommendations. In this paper, we present the details of the design and implementation of this ontology, and of querying it to generate knowledge and recommendations for patients.
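
An illustrative rdflib sketch of querying such an ontology for patient-specific recommendations. The ontology IRI, file name, classes, and properties below are invented placeholders, not the published ontology.

```python
from rdflib import Graph

g = Graph()
g.parse("crc_followup.owl")   # assumed local copy of the CPG ontology

# Hypothetical query: which tests are recommended, and how often, for a
# stage-II patient? All names under the example.org namespace are invented.
query = """
PREFIX crc: <http://example.org/crc-followup#>
SELECT ?test ?interval WHERE {
    ?rec a crc:FollowUpRecommendation ;
         crc:appliesToStage crc:StageII ;
         crc:recommendsTest ?test ;
         crc:intervalMonths ?interval .
}
"""
for test, interval in g.query(query):
    print(f"recommend {test} every {interval} months")
```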


Atlantic Web Intelligence Conference | 2007

Attacking the Web Cancer with the Automatic Understanding Approach

Ratvinder Singh Grewal; Ryszard Janicki; Tamar Kakiashvili; K. Kielan; Waldemar W. Koczkodaj; Kalpdrum Passi; Ryszard Tadeusiewicz

A new method based on automatic understanding is proposed for fighting spam in web information exchange, especially email correspondence. The term web cancer is used to reflect the variety and sophistication of web contaminations. The notable achievements of oncology in medicine could inspire more research towards finding solutions to what can easily turn into an analogous civilizational crisis. Automatic understanding is appropriate for semantic-level content analysis and is expected to substantially reduce the user time wasted on semi-automatic analysis in massive processing, since most filters are either too strict or too permissive.

Collaboration


Dive into Kalpdrum Passi's collaboration.

Top Co-Authors

Sanjay Kumar Madria

Missouri University of Science and Technology

Sourav S. Bhowmick

Nanyang Technological University

Ryszard Tadeusiewicz

AGH University of Science and Technology

Bipin C. Sakamuri

Missouri University of Science and Technology
