
Publications


Featured research published by Karamjit Kaur.


International Conference on Big Data | 2013

Modeling and querying data in NoSQL databases

Karamjit Kaur; Rinkle Rani

Relational databases have provided storage for several decades. However, for today's interactive web and mobile applications, the importance of flexibility and scalability in the data model cannot be overstated. The term NoSQL broadly covers all non-relational databases that provide a schema-less and scalable model. NoSQL databases, also termed Internet-age databases, are currently used by Google, Amazon, Facebook and many other major organizations operating in the era of Web 2.0. Different classes of NoSQL databases, namely key-value pair, document, column-oriented and graph databases, enable programmers to model the data closer to the format used in their application. In this paper, the data modeling and query syntax of relational databases and some classes of NoSQL databases are explained with the help of a case study of a news website like Slashdot.
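The modeling contrast the abstract describes can be sketched in a few lines. The schema and field names below are hypothetical, chosen to echo the paper's news-site case study; the relational side uses SQLite purely for illustration, and the document side is shown as a plain Python dict in the shape a document store such as MongoDB would hold.

```python
import sqlite3

# Relational model: a story and its comments live in separate tables,
# joined by a foreign key at query time.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE story (id INTEGER PRIMARY KEY, title TEXT);
    CREATE TABLE comment (id INTEGER PRIMARY KEY,
                          story_id INTEGER REFERENCES story(id), body TEXT);
    INSERT INTO story VALUES (1, 'NoSQL at scale');
    INSERT INTO comment VALUES (1, 1, 'Great read');
""")
rows = db.execute(
    "SELECT s.title, c.body FROM story s JOIN comment c ON c.story_id = s.id"
).fetchall()

# Document model: comments are embedded inside the story document,
# matching the format the application itself renders -- no join needed.
story_doc = {
    "_id": 1,
    "title": "NoSQL at scale",
    "comments": [{"body": "Great read"}],
}
print(rows[0])
print(story_doc["comments"][0]["body"])
```

The document version trades joinability for locality: one read fetches everything the page needs, which is the flexibility/scalability argument the abstract makes.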


IEEE Computer | 2015

Managing Data in Healthcare Information Systems: Many Models, One Solution

Karamjit Kaur; Rinkle Rani

Because healthcare data comes from multiple, vastly different sources, databases must adopt a range of models to process and store it. A polyglot-persistent framework combines relational, graph, and document data models to accommodate information variety.


IEEE International Advance Computing Conference | 2015

SQL2Neo: Moving health-care data from relational to graph databases

Manpreet Singh; Karamjit Kaur

The de facto storage model used by health-care information systems is the Relational Database Management System (RDBMS). Although the relational storage model is mature and widely used, it is ill-suited to storing and querying data with a high degree of relationships. Health-care data is heavily annotated with relationships and is hence a suitable candidate for a specialized data model: graph databases. Graph databases empower health-care professionals to discover and manage new and useful relationships, and also provide speed when querying highly related data. To query related data, relational databases employ massive joins, which are very expensive; in contrast, graph data stores keep direct pointers to adjacent nodes, achieving the scalability needed to handle the huge amount of medical data being generated at a very high velocity. Moreover, health-care data is primarily semi-structured or unstructured, inciting the need for a schema-less database. In this proposal, a methodology to convert a relational database to a graph database by exploiting the schema and constraints of the source is proposed. The approach supports the translation of conjunctive SQL queries over the source into graph traversal operations over the target. Experimental results are provided to show the feasibility of the solution and the efficiency of query answering over the target database. Tuples are mapped to nodes and foreign keys are mapped to edges. Software has been implemented in Java to convert a sample medical relational database with 24 tables to a graph database; during transformation, constraints were preserved. MySQL was used as the relational database and the popular graph database Neo4j as the target for the implementation of the proposed system, SQL2Neo.
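The core mapping rule (tuples become nodes, foreign keys become edges) can be illustrated independently of the paper's Java/MySQL/Neo4j implementation. The sketch below uses a tiny hypothetical two-table medical schema and SQLite's schema-introspection PRAGMAs to derive the graph, which only approximates what SQL2Neo does at scale.

```python
import sqlite3

# Hypothetical medical schema; the real SQL2Neo tool handles 24 tables
# and preserves constraints, which this sketch does not attempt.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE patient (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE visit (id INTEGER PRIMARY KEY,
                        patient_id INTEGER REFERENCES patient(id), ward TEXT);
    INSERT INTO patient VALUES (1, 'Asha');
    INSERT INTO visit VALUES (10, 1, 'Cardiology');
""")

nodes, edges = [], []
for table in ("patient", "visit"):
    cols = [c[1] for c in db.execute(f"PRAGMA table_info({table})")]
    # foreign_key_list rows: (id, seq, target_table, from_col, to_col, ...)
    fks = {fk[3]: fk[2] for fk in db.execute(f"PRAGMA foreign_key_list({table})")}
    for row in db.execute(f"SELECT * FROM {table}"):
        rec = dict(zip(cols, row))
        nodes.append((table, rec["id"], rec))        # every tuple -> a node
        for col, target in fks.items():
            edges.append(((table, rec["id"]), (target, rec[col])))  # every FK -> an edge

print(nodes)
print(edges)
```

In Neo4j terms, each `nodes` entry would become a labeled node and each `edges` entry a relationship between the corresponding nodes, so traversals replace the joins the abstract criticizes.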


Journal of Optical Communications | 2017

Analysis of Single-Mode Fiber Link Performance for Attenuation in Long-Haul Optical Networks

Karamjit Kaur; Hardeep Singh

Abstract In the past decades, optical fiber has been widely used in communication systems owing to low transmission losses, large information-carrying capacity, small size, immunity to electrical interference and increased signal security. With the focus on increasing network transmission capacity, control over the quality of transmission is a field that has drawn the attention of the research community. For this reason, fiber losses and their compensation remain an important design issue. In the present work, an effort is made to design a system capable of analyzing the errors caused by power losses in the presence of attenuation. Attenuation is one of the important phenomena that determine the maximum possible distance between a transmitter and receiver, or the quantity and position of amplifiers and repeaters in optical networks. Mathematical model equations are obtained representing the variation trends of bit error rate (BER) and Q-value with varying attenuation, and these have been verified with different wavelength sources and network conditions.
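A quick numeric sketch of the quantities involved: the paper derives fitted BER/Q-versus-attenuation trends from simulation, whereas the lines below use only the standard textbook relations (attenuation additive in dB; BER from Q for an on-off-keyed receiver) with illustrative numbers, not the paper's model.

```python
import math

P_in_dBm = 0.0      # launch power (illustrative)
alpha = 0.2         # fibre attenuation in dB/km, typical for 1550 nm SMF
length_km = 80.0

# Attenuation is additive in the dB domain: P_out = P_in - alpha * L.
P_out_dBm = P_in_dBm - alpha * length_km

# For an OOK receiver, BER relates to the Q-factor as BER = 0.5*erfc(Q/sqrt(2)).
Q = 6.0
ber = 0.5 * math.erfc(Q / math.sqrt(2))

print(P_out_dBm)    # -16.0 dBm received power
print(ber)          # Q = 6 corresponds to BER near 1e-9
```

This is why attenuation bounds the transmitter-receiver distance: as the span lengthens, received power (and hence Q) falls and BER rises rapidly.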


International Conference on Advances in Computer Engineering and Applications | 2015

Integration of heterogeneous databases

Binny Garg; Karamjit Kaur

Database integration implies the integration and aggregation of data from different databases within or outside an organization, and the use of that integrated data in many real-time applications. Today, with the advent of cloud computing, there is a need to share resources while also achieving consistency. However, there are obstacles: different platforms, different query languages, different data models, and different dependencies among databases and applications. Integration addresses these problems and provides a transparent environment to the user.


Journal of Optical Communications | 2017

Performance Improvement of WDM Optical Network using Optimal Regenerator Placement Strategy

Karamjit Kaur; Anil Kumar; Hardeep Singh

Abstract As optical networks move towards transparent networks, the Optical-to-Electrical (O-E) conversion taking place within the link is reduced to a minimum. This results in the accumulation of physical-layer impairments along the light path, thereby degrading the signal quality. Due to the enormous data traffic carried by optical links, any link failure or failure to recover the data at the destination end may result in huge losses. To improve system performance, one solution is the optimal placement of regenerators, where signal regeneration takes place at certain pre-specified nodes. Three different regenerator placement strategies are discussed in the present work, and the resulting improvement in system performance is presented.
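The abstract does not detail its three strategies, but one common placement heuristic can be sketched: walk the light path and regenerate at the last node reachable before accumulated loss exceeds the receiver budget. The function, node indexing and figures below are illustrative assumptions, not the paper's algorithm.

```python
def place_regenerators(span_losses_db, budget_db):
    """span_losses_db[i] = loss (dB) of the span after node i.
    Returns the node indices where a regenerator is placed."""
    regens, accumulated = [], 0.0
    for node, loss in enumerate(span_losses_db):
        if accumulated + loss > budget_db:
            # The next span would exceed the budget, so regenerate at
            # this node, which resets the accumulated impairment.
            regens.append(node)
            accumulated = 0.0
        accumulated += loss
    return regens

print(place_regenerators([10, 8, 12, 9, 11], budget_db=20))  # -> [2, 3]
```

Greedy placement like this minimizes regenerator count on a single path; network-wide strategies must additionally trade off which nodes serve many paths at once.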


Journal of Systems and Software | 2017

Analyzing inconsistencies in software product lines using an ontological rule-based approach

Megha Bhushan; Shivani Goel; Karamjit Kaur

Abstract Software product line engineering (SPLE) is an evolving technical paradigm for generating software products. A feature model (FM) represents the commonality and variability of a group of software products within a specific domain. The quality of FMs is one of the factors that impacts the correctness of a software product line (SPL). Developing FMs may also introduce inaccurate relationships among features, which cause numerous defects in models. Inconsistency is one such defect that decreases the benefits of an SPL. Existing approaches have focused on identifying inconsistencies in FMs; however, only a few of them are able to provide their causes. In this paper, an FM is formalized from an ontological view by converting the model into a predicate-based ontology and defining a set of first-order-logic rules for identifying FM inconsistencies along with their causes in natural language, in order to assist developers with solutions to fix the defects. An FM available in the software product lines online tools repository has been used to explain the presented approach, which was validated using 24 FMs of varied sizes of up to 22,035 features. Evaluation results demonstrate that our approach is effective and accurate for FMs scaling up to thousands of features, and thus improves SPL.
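The flavour of rule-based inconsistency detection can be shown on a toy feature model. The encoding below (parent-child `mandatory` pairs and an `excludes` relation) and the single rule are simplified assumptions; the paper works over a predicate-based ontology with a full rule set.

```python
# Toy FM: a hypothetical 'mobile' product line with a deliberate defect.
mandatory = {("mobile", "screen"), ("mobile", "camera")}   # (parent, child)
excludes = {("camera", "screen")}                          # mutual exclusion

def find_inconsistencies(mandatory, excludes):
    """Report each excludes-pair whose members are both mandatory
    children of a common parent -- no valid product can then exist."""
    issues = []
    for a, b in excludes:
        parents_a = {p for p, c in mandatory if c == a}
        parents_b = {p for p, c in mandatory if c == b}
        for p in parents_a & parents_b:
            issues.append(
                f"'{a}' excludes '{b}' but both are mandatory children of '{p}'"
            )
    return issues

print(find_inconsistencies(mandatory, excludes))
```

Reporting the cause in natural language, as the returned string does, is the point the abstract stresses: identifying *why* the model is inconsistent, not merely that it is.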


International Conference on Information, Communication and Computing Technology | 2017

Method to Resolve Software Product Line Errors

Megha; Arun Negi; Karamjit Kaur

Feature models (FMs) are of utmost importance when representing variability in a Software Product Line (SPL), focusing on the set of valid combinations of features that a software product can have. FM quality is one of the factors that impacts the quality of an SPL. There are several types of errors in FMs that reduce the benefits of an SPL. Although FM error handling is a mature topic, it has not been fully solved yet. In this paper, disparate studies on FM errors in SPLs are summarized, and a rule-based method to fix these errors is proposed and explained with the help of case studies. Evaluation results with FMs of up to 1000 features show the scalability and accuracy of the given method, which improves SPL quality.


International Conference on Advanced Informatics for Computing Research | 2017

Impact of Higher Order Impairments on QoS Aware Connection Provisioning in 10 Gbps NRZ Optical Networks

Karamjit Kaur; Hardeep Singh; Anil Kumar

The advent of the optical amplifier has increased the data traffic carried by each fiber to the point that even a brief disruption could result in enormous data loss. As networks move towards transparency, connection provisioning is becoming quite a challenging task due to the additive nature of physical-layer impairments and the decreased number of O-E-O conversions taking place along the routes. In static connection provisioning, the cost function representing the signal quality must consider not only the linear impairments but also the effect of higher-order impairments, as they significantly influence the quality of transmission. The present work describes the design of a system capable of considering the effect of higher-order impairments on OSNR (optical signal-to-noise ratio) and BER (bit error rate) for connection provisioning in optical networks, with a focus on the comprehensiveness, transparency, and scalability of the system. The processing is carried out in the time domain by offline digital signal processing using eye-diagram analysis.
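The eye-diagram Q estimate used in offline DSP can be sketched with the standard definition: sample the received waveform at the eye centre, split the samples by transmitted bit, and take Q = (mu1 - mu0) / (sigma1 + sigma0). The sample values below are synthetic, and this is only the textbook estimator, not the paper's full impairment-aware processing chain.

```python
import statistics

# Synthetic decision-point samples, grouped by the bit that was sent.
ones  = [1.02, 0.97, 1.05, 0.99, 1.01]   # samples when a '1' was transmitted
zeros = [0.11, 0.08, 0.13, 0.09, 0.10]   # samples when a '0' was transmitted

mu1, mu0 = statistics.mean(ones), statistics.mean(zeros)
s1, s0 = statistics.stdev(ones), statistics.stdev(zeros)

# Textbook eye-diagram Q-factor: level separation over summed noise spreads.
q_factor = (mu1 - mu0) / (s1 + s0)
print(q_factor)
```

Higher-order impairments widen the level distributions (larger s1, s0) without moving the means much, which is precisely how they depress Q and raise BER in the cost function the abstract describes.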


2016 Third International Conference on Digital Information Processing, Data Mining, and Wireless Communications (DIPDMWC) | 2016

Application of Data Mining for high accuracy prediction of breast tissue biopsy results

Divyansh Kaushik; Karamjit Kaur

In today's world, where breast cancer awareness campaigns are carried out on a large scale, we still lack diagnostic tools that reliably indicate whether a person is suffering from breast cancer. Mammography remains the most significant method of diagnosis. However, mammograms are sometimes not definitive, in which case a radiologist cannot pronounce a decision based solely on them and has to resort to a biopsy. This paper proposes a data mining technique based on an ensemble of classifiers, following data pre-processing, to predict the outcome of the biopsy using features extracted from the mammograms. The results achieved on the Mammographic Masses dataset are highly promising, with an accuracy of 83.5% and an ROC (Receiver Operating Characteristic) area of 0.907, which is higher than existing approaches.
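The ensemble idea can be illustrated with a toy majority vote over weak classifiers. The feature names, thresholds and classifiers below are invented for illustration; the paper's actual pipeline (pre-processing plus an ensemble trained on the Mammographic Masses dataset) is not reproduced here.

```python
def threshold_clf(feature_index, threshold):
    """A weak one-feature classifier: predicts 1 (malignant) above threshold."""
    return lambda x: 1 if x[feature_index] > threshold else 0

# Hypothetical feature vector: (BI-RADS assessment, age, mass density).
ensemble = [threshold_clf(0, 4), threshold_clf(1, 55), threshold_clf(2, 2)]

def predict(ensemble, x):
    # Each weak classifier votes; simple majority decides the label.
    votes = sum(clf(x) for clf in ensemble)
    return 1 if votes > len(ensemble) / 2 else 0

print(predict(ensemble, (5, 67, 3)))   # all three members vote malignant
print(predict(ensemble, (3, 40, 1)))   # all three members vote benign
```

An ensemble tends to beat its individual members when their errors are uncorrelated, which is the rationale behind using one for a noisy prediction task like biopsy outcome.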
