Sukhjit Singh Sehra
Guru Nanak Dev Engineering College, Ludhiana
Publications
Featured research published by Sukhjit Singh Sehra.
International Conference on Information Technology: New Generations | 2014
Sukhjit Singh Sehra; Jaiteg Singh; Hardeep Singh Rai
The evolution of the web has changed the way users interact with it. Web 2.0 encouraged contributions from users with varying levels of mapping experience, an approach known as crowdsourcing. OpenStreetMap is an outcome of crowdsourcing. Since it is collecting huge volumes of data with the help of the general public, researchers have started analysing the data rather than merely collecting it. The aim of this study is to review the research work on the assessment of OpenStreetMap data. It is concluded that most research on assessing OpenStreetMap data has been carried out for countries such as Germany, the UK and the USA, yet the authenticity and accuracy of the reference data remain unanswered. Another issue identified by this review, in the context of the Indian subcontinent, is the need for a thorough analysis of OpenStreetMap data.
International Journal of Computer Applications | 2013
Sukhjit Singh Sehra; Jaiteg Singh; Hardeep Singh Rai
The meaning and purpose of the web have been changing and evolving continuously. Web 2.0 encouraged more contribution from end users. This movement provided revolutionary methods of sharing and computing data through crowdsourcing, such as OpenStreetMap, called "the wikification of maps" by some researchers. When crowdsourcing collects huge amounts of data with the help of the general public, whose mapping experience varies widely, the focus of researchers should be on analysing the data rather than collecting it. Researchers have assessed the quality of OpenStreetMap data by comparing it with proprietary data or data from governmental mapping agencies. This study reviews the research work on the assessment of OpenStreetMap data and also discusses future directions.
International Journal of Computer Applications | 2014
Ranbir Kaur; Sukhjit Singh Sehra
Crime detection is one of the essential challenges in crime mapping and analysis. Data mining can be used to explore crime detection problems, and clustering is an effective method for determining areas with high concentrations of localised events. However, detecting hotspots with mapping methods remains a demanding task because of the uncertainty in choosing a suitable number of clusters and in establishing the significance of the individual clusters identified. The fuzzy c-means clustering algorithm was used to identify hotspots in data from the Chicago Police Department's Citizen Law Enforcement Analysis and Reporting (CLEAR) system. In fuzzy clustering, each data point is assigned a membership value that indicates the strength of its relationship with a specific cluster. In this study each cluster represented a group of global positioning system data points with latitude and longitude as their coordinates. The findings are expected to make the public aware of crime hotspots, and law enforcement agencies can use the detected hotspots to take preventive steps.
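As an illustration of the clustering step only, not the authors' implementation, the sketch below runs a bare-bones fuzzy c-means over a few made-up incident coordinates; the cluster count, fuzzifier m and the sample points are assumptions chosen purely for demonstration.

```python
import numpy as np

def fuzzy_c_means(points, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and the membership matrix."""
    rng = np.random.default_rng(seed)
    n = len(points)
    # Random initial memberships, rows normalised to sum to 1.
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)

    for _ in range(max_iter):
        um = u ** m
        # Update centres as membership-weighted means of the points.
        centres = (um.T @ points) / um.sum(axis=0)[:, None]
        # Distances from every point to every centre.
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)
        # Update memberships from inverse-distance ratios.
        inv = d ** (-2.0 / (m - 1))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            return centres, u_new
        u = u_new
    return centres, u

# Illustrative incident coordinates (latitude, longitude); not real crime data.
incidents = np.array([[41.88, -87.63], [41.87, -87.65], [41.97, -87.70],
                      [41.96, -87.71], [41.75, -87.60], [41.76, -87.61]])
centres, memberships = fuzzy_c_means(incidents, n_clusters=3)
print("hotspot centres:\n", centres)
```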
Future Internet | 2017
Sukhjit Singh Sehra; Jaiteg Singh; Hardeep Singh Rai
OpenStreetMap (OSM) is a recent emerging area in computational science, and several issues in the quality assessment of OSM remain unexplored. Firstly, researchers use various established assessment methods that compare OSM with an authoritative dataset; these methods are unsuitable for assessing OSM data quality when authoritative data is unavailable, in which case intrinsic quality indicators can be used instead. Secondly, no framework exists for data assessment specific to different geographic information system (GIS) domains. In this light, the current study presents an extension of the Quantum GIS (QGIS) processing toolbox, using existing functionalities and new scripts to handle spatial data, which enables researchers to assess the completeness of spatial data using intrinsic indicators. The study also proposes a heuristic approach to test the road navigability of OSM data. The developed models are applied to the OSM data of Punjab (India). The results suggest that the OSM project in Punjab (India) is progressing at a slow pace and that contributor motivation is required to enhance the fitness of the data. It is concluded that the developed scripts provide an intuitive method to assess OSM data based on quality indicators and can easily be used to evaluate the fitness-for-use of data from any region.
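The paper's own QGIS toolbox scripts are not reproduced here; the following minimal sketch only illustrates one kind of intrinsic completeness indicator, road length per grid cell, over invented projected coordinates, with the cell size and sparsity threshold chosen arbitrarily.

```python
from collections import defaultdict
from math import hypot

# Each road is a list of (x, y) vertices in a projected CRS (metres); invented data.
roads = [
    [(1000, 1000), (1400, 1000), (1400, 1600)],
    [(5200, 5100), (5600, 5400)],
    [(9000, 9000), (9050, 9000)],
]

CELL = 1000.0  # grid-cell size in metres (arbitrary)

def cell_of(x, y):
    return (int(x // CELL), int(y // CELL))

# Accumulate road length per grid cell (each segment assigned to its start-point
# cell for simplicity; a production script would split segments at cell borders).
length_per_cell = defaultdict(float)
for road in roads:
    for (x1, y1), (x2, y2) in zip(road, road[1:]):
        length_per_cell[cell_of(x1, y1)] += hypot(x2 - x1, y2 - y1)

# Cells with unusually little mapped road length are candidates for incompleteness.
mean_len = sum(length_per_cell.values()) / len(length_per_cell)
sparse = [c for c, l in length_per_cell.items() if l < 0.25 * mean_len]
print("mean road length per cell:", round(mean_len, 1), "m; sparse cells:", sparse)
```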
International Journal of Computer Applications | 2013
Gitanjali; Sukhjit Singh Sehra; Jaiteg Singh
Cloud computing is a set of IT services delivered to customers over a network by a third-party provider who owns the infrastructure, thereby reducing the burden at the user's end. Researchers are now devoting their work to access-control methods that enhance security in the cloud. Role-Based Access Control (RBAC) is an attractive access model because the number of roles is significantly smaller than the number of users, so users can easily be classified according to their roles; it provides an efficient way to manage access to information while reducing the cost and complexity of security administration in large networked applications. This paper specifies various RBAC policies for the cloud. A migration policy helps the user migrate the database schema and roles to the cloud using XML with added security. A restriction policy enhances security by limiting the number of transactions per user: if the limit is exceeded, the administrator learns through the monitoring system that unauthorised access may have occurred and can act against it. The paper also proposes a backup and restoration policy so that, if the main cloud crashes or malfunctions, backup and restoration facilities are available, greatly reducing the chance of losing important data and further enhancing security in cloud computing.
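To make the restriction policy concrete, here is a toy sketch of role-based access checks with a per-user transaction cap; the class and method names, the limit of two transactions and the "analyst" role are all invented for illustration and do not come from the paper.

```python
class RBAC:
    """Toy role-based access control with a per-user transaction cap
    (a greatly simplified version of the restriction policy described above)."""

    def __init__(self, max_transactions=100):
        self.role_permissions = {}   # role -> set of permissions
        self.user_roles = {}         # user -> set of roles
        self.tx_count = {}           # user -> transactions performed
        self.max_transactions = max_transactions

    def grant(self, role, permission):
        self.role_permissions.setdefault(role, set()).add(permission)

    def assign(self, user, role):
        self.user_roles.setdefault(user, set()).add(role)

    def check(self, user, permission):
        # Restriction policy: flag users exceeding the transaction limit.
        if self.tx_count.get(user, 0) >= self.max_transactions:
            raise PermissionError(f"{user}: transaction limit reached, flag for review")
        allowed = any(permission in self.role_permissions.get(r, set())
                      for r in self.user_roles.get(user, ()))
        if allowed:
            self.tx_count[user] = self.tx_count.get(user, 0) + 1
        return allowed

rbac = RBAC(max_transactions=2)
rbac.grant("analyst", "read_schema")
rbac.assign("alice", "analyst")
print(rbac.check("alice", "read_schema"))   # True
print(rbac.check("alice", "read_schema"))   # True
# A third call would raise PermissionError under the restriction policy.
```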
International Journal of Computer Applications | 2013
Sumeet Kaur Sehra; Jasneet Kaur; Sukhjit Singh Sehra
Software effort estimation requires high accuracy, but accurate estimates are difficult to achieve. Increasingly, data mining is used to improve an organisation's software process quality, e.g. the accuracy of effort estimates. A large number of method combinations exist for software effort estimation, and selecting the most suitable combination is the subject of this paper. In this study, three simple preprocessors (none, norm, log) are applied and effort is measured using the COCOMO model. The results obtained from the different preprocessors are then compared, and the norm preprocessor proves to be more accurate than the others.
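The following sketch is not the study's actual experimental setup; assuming basic COCOMO organic-mode constants (a = 2.4, b = 1.05) and invented project data, it shows the three preprocessors named above and a common accuracy measure (MMRE).

```python
import numpy as np

def cocomo_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO (organic mode): effort in person-months."""
    return a * kloc ** b

# The three simple preprocessors applied to a numeric column, as in the study:
preprocessors = {
    "none": lambda x: x,
    "norm": lambda x: (x - x.min()) / (x.max() - x.min()),  # min-max scaling
    "log":  np.log,                                         # natural log
}

kloc = np.array([10.0, 32.0, 54.0, 110.0])     # illustrative project sizes
actual = np.array([24.0, 91.0, 160.0, 340.0])  # illustrative actual efforts (PM)

def mmre(predicted, actual):
    """Mean magnitude of relative error, a common estimation-accuracy metric."""
    return np.mean(np.abs(actual - predicted) / actual)

estimated = cocomo_effort(kloc)
print("COCOMO estimates (PM):", np.round(estimated, 1))
print("MMRE vs. actuals:", round(mmre(estimated, actual), 3))
for name, f in preprocessors.items():
    print(name, "preprocessed sizes:", np.round(f(kloc), 3))
```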
International Journal of Spatial, Temporal and Multimedia Information Systems | 2016
Sukhjit Singh Sehra; Jaiteg Singh; Hardeep Singh Rai
OpenStreetMap is producing huge volumes of spatial data contributed by users of different backgrounds and varying levels of mapping experience. As a result, the generated map data may be topologically incorrect; topology explicitly expresses the spatial relationships between features, and for the map data to be navigable it must be free from topological errors. The current work detects topological errors in OpenStreetMap data, using the OpenStreetMap data of Punjab (India) as test data. For cleaning the topological errors, the map data has been processed using different algorithms of open-source geographic information systems, and it is concluded that OpenStreetMap data is not free from topological errors and needs thorough preprocessing before being used for navigation purposes.
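The exact GIS algorithms used in the paper are not specified here; as a rough illustration of two common topological checks, self-intersections and dangling endpoints, the following sketch uses Shapely on invented road geometries.

```python
from collections import Counter
from shapely.geometry import LineString

# Illustrative road geometries; real input would come from parsed OSM ways.
ways = {
    "w1": LineString([(0, 0), (2, 0), (2, 2)]),
    "w2": LineString([(2, 2), (4, 2)]),
    "w3": LineString([(0, 1), (3, 1), (1, -1), (1, 3)]),  # crosses itself
}

# 1. Self-intersections: a simple linestring never crosses itself.
self_intersecting = [wid for wid, geom in ways.items() if not geom.is_simple]

# 2. Dangling endpoints: an endpoint shared by no other way may indicate an
#    undershoot/overshoot in a road network (snapping tolerance omitted here).
endpoint_count = Counter()
for geom in ways.values():
    coords = list(geom.coords)
    endpoint_count[coords[0]] += 1
    endpoint_count[coords[-1]] += 1
dangling = [pt for pt, n in endpoint_count.items() if n == 1]

print("self-intersecting ways:", self_intersecting)
print("dangling endpoints:", dangling)
```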
2016 International Conference on Innovation and Challenges in Cyber Security (ICICCS-INBUSH) | 2016
Sherry Chalotra; Sumeet Kaur Sehra; Sukhjit Singh Sehra
This paper gives an overview of Bee Colony Optimization (BCO) and of the areas in which it has been applied. BCO is based on Swarm Intelligence (SI), a branch of artificial intelligence (AI) concerned with decentralised, self-organising systems that can be either natural or artificial. It is a meta-heuristic algorithm that simulates the foraging behaviour of honey bees, in which bees interact locally with one another and with their environment, combining global explorative search with local exploitative search: the algorithm concentrates on the most promising regions of the solution space while still sampling other favourable regions. BCO thus belongs to the class of optimization algorithms that use a bottom-up modelling approach and the swarm intelligence of honeybees. The primary aim of this paper is to give an insight into the areas in which BCO can be used.
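A minimal sketch of a Bees-Algorithm-style search on a toy objective may help fix the idea of combining global scouting with local search around the best sites; all parameters and the test function below are illustrative assumptions, not drawn from the paper.

```python
import random

def bees_algorithm(f, bounds, n_scouts=20, n_best=5, n_recruits=10,
                   patch=0.1, iterations=50, seed=1):
    """Minimal Bees Algorithm sketch: global scouting plus local search
    around the best sites, minimising f over a box-bounded 2-D domain."""
    rng = random.Random(seed)
    lo, hi = bounds
    rand_point = lambda: [rng.uniform(lo, hi) for _ in range(2)]
    sites = [rand_point() for _ in range(n_scouts)]

    for _ in range(iterations):
        sites.sort(key=f)
        new_sites = []
        # Local (exploitative) search: recruit bees around the best sites.
        for site in sites[:n_best]:
            neighbours = [[min(hi, max(lo, x + rng.uniform(-patch, patch)))
                           for x in site] for _ in range(n_recruits)]
            new_sites.append(min(neighbours + [site], key=f))
        # Global (explorative) search: remaining bees scout randomly.
        new_sites += [rand_point() for _ in range(n_scouts - n_best)]
        sites = new_sites

    return min(sites, key=f)

sphere = lambda p: sum(x * x for x in p)   # simple test function, minimum at the origin
best = bees_algorithm(sphere, bounds=(-5.0, 5.0))
print("best solution found:", [round(x, 4) for x in best], "value:", round(sphere(best), 6))
```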
2017 International Conference on Inventive Systems and Control (ICISC) | 2017
Kirandeep Kaur; Sukhjit Singh Sehra; Priyanka Arora; Sumeet Kaur Sehra
Geographic and volunteered geographic information changes over time, so the extraction of spatial patterns from crowdsourced data has become valuable for service providers; these patterns represent the spatial features of co-related objects. Existing approaches use Dijkstra's algorithm and Euclidean distance to find spatial patterns, which cannot always be computed accurately, and crowdsourced data is growing daily through mobile phones, road networks and remote sensors, making such large datasets difficult to process. In this research work we propose a system that processes crowdsourced data taken from OpenStreetMap and mines useful patterns using SpatialHadoop. SpatialHadoop uses Pigeon, a spatial extension to the high-level language Pig. The extracted patterns assist service providers in offering different sites based on available facilities. In this extraction method, spatial data is loaded into the system and filtered for nodes, ways and relations; the filtered data is then mined using kNN joins, after which a multiple-resolution pruning filter is evaluated on the spatial datasets generated with different argument values. Several datasets have been checked with this methodology to extract spatial patterns. The technique is compared with PostgreSQL, and it is observed that SDM provides more efficient results; the experiments show that our system performs better than existing network-based systems in terms of efficiency, speed and accuracy.
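The kNN-join step can be illustrated on a single machine; this is not the SpatialHadoop/Pigeon implementation, just a sketch using SciPy's k-d tree on invented facility and query points.

```python
import numpy as np
from scipy.spatial import cKDTree

# Illustrative point sets: 'facilities' and 'queries' standing in for locations
# extracted from filtered OSM nodes (projected coordinates in metres).
facilities = np.array([[100.0, 200.0], [400.0, 250.0], [120.0, 900.0], [800.0, 820.0]])
queries    = np.array([[110.0, 210.0], [790.0, 800.0]])

k = 2
tree = cKDTree(facilities)
# kNN join: for every query point, find the k nearest facilities.
dists, idx = tree.query(queries, k=k)

for q, d, i in zip(queries, dists, idx):
    print(f"query {q.tolist()} -> nearest {k} facilities {i.tolist()} "
          f"at distances {np.round(d, 1).tolist()}")
```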
International Journal of Advanced Research in Computer and Communication Engineering | 2015
Rajeev Sharma; Sukhjit Singh Sehra; Sumeet Kaur Sehra
Today the Internet provides only a best-effort service: traffic is transmitted as quickly as possible, but there is no guarantee of timeliness or even of delivery of packets. With the rapid transformation of the Internet into a commercial infrastructure, demands for quality of service have grown quickly. People now depend heavily on network services such as VoIP, video conferencing and file transfer (6,7), and various categories of traffic management systems are used for those services. Queuing is one of the most important mechanisms in a traffic management system: each router in the network must implement some queuing discipline that controls how packets are buffered while waiting to be transmitted. The main aim of this paper is to present a quality of service (QoS) analysis using different queuing disciplines.
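As a toy illustration of how a queuing discipline affects delay, and not the paper's simulation setup, the sketch below serves the same invented packet trace through a single link under FIFO and strict-priority ordering.

```python
import heapq
from collections import deque

# Packets: (arrival_time_s, size_bits, priority); lower priority value = more urgent.
packets = [(0.0, 8000, 1), (0.1, 12000, 0), (0.2, 8000, 1), (0.3, 4000, 0)]
LINK_BPS = 100_000  # link capacity in bits per second (illustrative)

def simulate(packets, priority_queueing=False):
    """Serve packets one at a time on a single link; return mean queueing delay."""
    queue, delays, t = [], [], 0.0
    pending = deque(sorted(packets))
    while pending or queue:
        # Enqueue everything that has arrived by time t.
        while pending and pending[0][0] <= t:
            arr, size, prio = pending.popleft()
            key = prio if priority_queueing else arr   # priority vs. FIFO ordering
            heapq.heappush(queue, (key, arr, size))
        if not queue:                 # link idle: jump to the next arrival
            t = pending[0][0]
            continue
        _, arr, size = heapq.heappop(queue)
        delays.append(t - arr)        # time spent waiting in the queue
        t += size / LINK_BPS          # transmission time on the link
    return sum(delays) / len(delays)

print("mean delay, FIFO:    ", round(simulate(packets), 4), "s")
print("mean delay, priority:", round(simulate(packets, True), 4), "s")
```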