Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sanjeev Patel is active.

Publication


Featured research published by Sanjeev Patel.


Telecommunication Systems | 2017

Adaptive mean queue size and its rate of change: queue management with random dropping

Karmeshu; Sanjeev Patel; Shalabh Bhatnagar

The random early detection (RED) active queue management (AQM) scheme uses the average queue size to calculate the dropping probability in terms of minimum and maximum thresholds. Under heavy load, the average queue size crosses the maximum threshold more often, resulting in frequent packet drops. We propose an adaptive queue management with random dropping algorithm that incorporates not just the average queue size but also its rate of change. By introducing an adaptively changing threshold level that falls between the lower and upper thresholds, our algorithm demonstrates that these additional features significantly improve system performance in terms of throughput, average queue size, utilization, and queuing delay relative to existing AQM algorithms.
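The scheme above builds on RED's drop decision. A minimal sketch of the classic RED dropping probability, assuming the standard min/max-threshold form (the quantity that AQMRD augments with a rate-of-change term and an adaptive middle threshold, neither of which is reproduced here):

```python
def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED: no drops below min_th, a linear ramp up to max_p
    between the thresholds, and forced drops above max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)
```

Parameter names (`min_th`, `max_th`, `max_p`) follow common RED usage, not this paper's notation.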


IEEE International Conference on Image Information Processing | 2013

Function point distribution using maximum entropy principle

Sanjeev Patel

Software cost is estimated through effort and the number of functioning components, measured in person-months (p-m) and function points (FPs) respectively. In this paper we consider software cost based on FPs, because FPs are independent of the technology used. Function point analysis (FPA) was initially designed without a theoretical foundation, being based on measurements done by an expert team. Function point data for more than one hundred software development projects has been described in the literature, along with the limitations of the resulting model in estimating development effort. This paper attempts to study and quantify software cost in the case of multiple projects or a set of software products. For a single project, we study and quantify the function point counts (FPCs) for the different components of the software, or function types (FTs). The maximum entropy principle (MEP) is a popular technique for finding the maximum-entropy distribution subject to given constraints. This paper presents an application of MEP to distribute the unadjusted function point counts (UFPCs) subject to a given software cost. This application is then applied over a set of software products to allocate individual software costs when the total cost is given. We also analyze the proportions of UFPCs, the number of FPs, and the weights of the different functional components or FTs for a given software cost.
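As a rough illustration of the maximum entropy principle invoked above: the maximum-entropy distribution over components subject to a mean-cost constraint takes the exponential form p_i ∝ exp(-λ c_i), with λ fixed by the constraint. A hedged sketch with a simple bisection on λ (the costs, bounds, and function name are illustrative, not from the paper):

```python
import math

def max_entropy_weights(costs, target_mean):
    """Maximum-entropy distribution p_i ∝ exp(-lam * c_i) over
    components, with lam chosen by bisection so the expected cost
    under p equals target_mean."""
    def mean_for(lam):
        w = [math.exp(-lam * c) for c in costs]
        z = sum(w)
        return sum(wi * c for wi, c in zip(w, costs)) / z

    lo, hi = -50.0, 50.0          # lam search interval (illustrative)
    for _ in range(200):
        mid = (lo + hi) / 2
        # mean_for is decreasing in lam: too-large mean => raise lam
        if mean_for(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * c) for c in costs]
    z = sum(w)
    return [wi / z for wi in w]
```

With equal-cost spacing and the target equal to the plain average, the result reduces to the uniform distribution, as MEP predicts when the constraint carries no extra information.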


International Conference on Contemporary Computing | 2014

Performance analysis of RED for stabilized queue

Sanjeev Patel

In this paper, our aim is to stabilize an active queue management (AQM) algorithm so that it achieves a low loss rate, high throughput, and high link utilization. We study the stabilization of random early detection (RED) for different models and present a comparative performance analysis of existing stabilized models against our modified RED. The key idea is to modify the existing RED algorithm to stabilize the queue length at routers with a reduced loss rate compared to RED. The probability marking function of RED is modified according to two different functions, and the resulting effects on performance parameters such as queue length, throughput, and delay are shown. RED and modified RED are studied to achieve better stabilization of the queue size with a low loss rate and comparable throughput.
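The paper's two modified marking functions are not reproduced in this abstract; as one well-known example of modifying RED's marking function, the "gentle" variant replaces the jump to probability 1 at the upper threshold with a second linear segment:

```python
def gentle_red_mark(avg, min_th, max_th, max_p):
    """Gentle RED marking function (illustrative, not the paper's own
    modifications): linear ramp to max_p at max_th, then a second
    linear ramp to 1.0 at 2 * max_th instead of an abrupt jump."""
    if avg < min_th:
        return 0.0
    if avg < max_th:
        return max_p * (avg - min_th) / (max_th - min_th)
    if avg < 2 * max_th:
        return max_p + (1.0 - max_p) * (avg - max_th) / max_th
    return 1.0
```

Smoothing the marking function in this way is one route to a more stable queue, since small fluctuations of the average around the upper threshold no longer flip the drop probability between max_p and 1.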


International Journal of Machine Learning and Computing | 2012

Drift Analysis of Backlogged Packets in Slotted ALOHA

Sanjeev Patel; P. K. Gupta

A multiple access system (MAS) deals with a situation where multiple nodes (computers) are required to access a commonly shared channel. These users can be viewed as uncoordinated computers that compete to transmit messages in the form of packets or frames. It can be shown that the MAS is unstable due to the nonlinearity of the problem, with several nodes contending for the same channel. There are some well-known algorithms for stabilizing the throughput of slotted ALOHA, e.g., the Bayesian broadcast algorithm, the splitting algorithm, and the modified stochastic gradient algorithm. In this paper, we analyze the number of backlogged packets using a statistical approach and stabilize the expected number of backlogged packets minus the mean number of packets that are successfully transmitted. Index Terms: slotted ALOHA, throughput, backlogged packets, probabilistic distribution, diffusion approximation.
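The drift analysis above rests on the standard slotted-ALOHA success probability: with n backlogged nodes each retransmitting in a slot with probability p, a slot carries a success only when exactly one node transmits. A minimal sketch of that quantity and the resulting backlog drift (the paper's diffusion-approximation machinery is not reproduced):

```python
def success_probability(n, p):
    """Probability that exactly one of n backlogged nodes transmits
    in a slot, each independently with probability p."""
    return n * p * (1 - p) ** (n - 1)

def drift(n, p, arrival_rate):
    """Expected change in the backlog per slot: new arrivals minus
    the expected number of successful departures."""
    return arrival_rate - success_probability(n, p)
```

The system is stable in a region only where the drift is negative; as n grows at fixed p, `success_probability` decays, which is the nonlinearity behind slotted ALOHA's instability.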


Telecommunication Systems | 2018

A stochastic approximation approach to active queue management

Shalabh Bhatnagar; Sanjeev Patel; Karmeshu

Recently, a dynamic adaptive queue management with random dropping (AQMRD) scheme has been developed to capture the time-dependent variation of the average queue size by incorporating its rate of change as a parameter. A major issue with AQMRD is the choice of parameters. In this paper, a novel online stochastic-approximation-based optimization scheme is proposed to dynamically tune the parameters of AQMRD; the scheme is also applicable to other active queue management (AQM) algorithms. Our optimization scheme significantly improves throughput, average queue size, and loss rate in relation to other AQM schemes.
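Stochastic approximation tunes a parameter online from noisy performance measurements. A minimal SPSA-style sketch of one such update step (illustrative of the technique only, not the authors' exact scheme; the gain constants are assumptions):

```python
import random

def spsa_step(theta, loss, k, a=0.5, c=0.1):
    """One simultaneous-perturbation stochastic approximation step:
    estimate the gradient of loss at theta from two perturbed
    measurements, then move against it. Gains decay with k."""
    ak = a / (k + 1)            # step size
    ck = c / (k + 1) ** 0.25    # perturbation size
    delta = random.choice([-1.0, 1.0])
    g = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
    return theta - ak * g
```

Iterating this step drives theta toward a minimizer of the measured loss even when each measurement is noisy, which is what makes the approach usable online inside a router.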


IEEE Systems Journal | 2018

Performance Analysis of AQM Scheme Using Factorial Design Framework

Sanjeev Patel; Kanwar Sen; Karmeshu

Active queue management (AQM) is a router-assisted congestion control technique that improves network performance. An AQM algorithm handles congestion by dropping packets at a congested router, where the dropping probability is computed from the average queue size, loss rate, queuing delay, or other parameters. A novel approach based on design of experiments is proposed to study the performance measures of several AQM schemes, viz., random early detection (RED), random exponential marking, modified RED, adaptive RED, stabilized RED, three-section RED, and AQM with random dropping. The impact of several input factors on the performance measures, viz., throughput, queuing delay, and loss rate, is investigated using a factorial design, which captures the interactions among input factors. The relative changes in the output responses on account of changes in the input factors are evaluated. Sensitivity analysis is carried out by computing the weighted sum of relative changes of the response variables with respect to the input factors for each AQM scheme. Based on this sensitivity analysis, we find that the new AQM with random dropping is the most robust, while three-section RED is the least robust.
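A two-level factorial design estimates main effects and interactions from runs at all level combinations. A minimal two-factor sketch of that computation (the paper's design spans more factors and schemes; the naming convention here is an assumption):

```python
def two_level_effects(y_mm, y_pm, y_mp, y_pp):
    """Main effects and interaction for a 2x2 factorial design.
    y_ab is the measured response with factor A at level a and
    factor B at level b (m = low, p = high)."""
    effect_a = ((y_pm + y_pp) - (y_mm + y_mp)) / 2   # avg high-A minus avg low-A
    effect_b = ((y_mp + y_pp) - (y_mm + y_pm)) / 2   # avg high-B minus avg low-B
    interaction = ((y_mm + y_pp) - (y_pm + y_mp)) / 2
    return effect_a, effect_b, interaction
```

For a purely additive response the interaction term vanishes; a nonzero interaction means the effect of one factor depends on the level of the other, which is exactly what the factorial framework exposes for each AQM scheme.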


International Conference on Computing Communication and Automation | 2016

Documents ranking using new learning approach

Sanjeev Patel; Kriti Khanna; Vishnu Sharma

The aim of document similarity is to retrieve a ranked list of documents for a query document. In the literature, document similarity methods first compute similarity scores between the query and the documents using a score function; the documents are then ranked according to their scores. The TextTiling algorithm described in the literature proceeds in three stages: tokenization into sentence-sized units, score calculation, and detection of subtopic boundaries. In this paper, we evaluate two different approaches to document ranking and then compare the results with a machine-learning-based approach. First, documents are ranked using standard score calculation, i.e., the tf-idf concept. Second, documents are ranked using the TextTiling approach. TextTiles have already been integrated into the user interface of an information retrieval system and have been used successfully to segment Arabic newspaper texts, which have no paragraph breaks, for information retrieval. TextTiling performs far better than standard score calculation when summarizing documents, which helps improve the retrieval performance of the system. We present a comparative analysis of the two approaches and also incorporate a statistical machine learning approach into our results for handling a broad range of problems in information retrieval (IR).
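The first approach above is standard tf-idf scoring. A minimal sketch of ranking tokenized documents by cosine similarity of tf-idf vectors (the smoothing and tokenization choices are assumptions, not the paper's exact setup):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Sparse tf-idf vectors: raw term frequency times the log
    inverse document frequency over the collection."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs]

def cosine(u, v):
    dot = sum(x * v.get(t, 0.0) for t, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank(query_doc, docs):
    """Return document indices ordered by similarity to the query."""
    vecs = tf_idf_vectors(docs + [query_doc])
    q = vecs[-1]
    scores = [(cosine(q, v), i) for i, v in enumerate(vecs[:-1])]
    return [i for _, i in sorted(scores, reverse=True)]
```

TextTiling differs in that it scores lexical cohesion between adjacent blocks to find subtopic boundaries rather than scoring whole documents against a query.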


International Conference on Computing Communication and Automation | 2016

Throughput analysis of AQM schemes under low-rate denial-of-service attacks

Sanjeev Patel; Badal Gupta; Vishnu Sharma

Active queue management (AQM) schemes aim to achieve a low loss rate, high throughput, and high link utilization. We study the throughput of AQM schemes under denial-of-service (DoS) attacks. DoS is a serious threat that degrades TCP throughput in the Internet; DoS attacks consume resources such as network bandwidth, server CPU cycles, and server interrupt-processing capacity. In this paper, we present a comparative performance analysis of existing AQM schemes under DoS attacks. We find that RED achieves good performance even when DoS attacks are present.


International Journal of Communication Networks and Distributed Systems | 2016

Comparative performance analysis of TCP-based congestion control algorithms

Sanjeev Patel; Kritika Rani

Congestion control is a challenging problem. We analyse end-to-end congestion control algorithms, i.e., TCP Tahoe, TCP Reno, TCP NewReno, TCP Veno, etc. TCP implements a window-based flow control mechanism that varies the window size within a range. Older TCP variants were designed on the assumption that packet loss is always caused by congestion on a link, which degrades performance in wireless networks, where random losses occur due to transmission errors or noise. This well-known problem affects TCP performance. TCP Veno was proposed to deal with random loss efficiently, and its performance is discussed in the literature. This paper evaluates the model for different performance parameters, i.e., throughput, queuing delay, goodput, etc. In addition, we propose a new performance metric to measure network performance, and we compare the analytical results with simulated data at different loss rates.
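All the TCP variants compared here build on additive-increase/multiplicative-decrease (AIMD) of the congestion window. A minimal per-round sketch (slow start, timeouts, and Veno's random-loss discrimination are omitted; the parameter names are assumptions):

```python
def simulate_cwnd(loss_rounds, rounds, alpha=1.0, beta=0.5, cwnd=1.0):
    """Per-round AIMD congestion-window trace: grow by alpha each
    loss-free round-trip, multiply by beta when a loss is detected."""
    trace = []
    for r in range(rounds):
        cwnd = cwnd * beta if r in loss_rounds else cwnd + alpha
        trace.append(cwnd)
    return trace
```

The sawtooth this produces is what distinguishes the variants: Reno-family TCPs always halve on loss, while Veno reduces less aggressively when it judges a loss to be random rather than congestive.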


arXiv: Networking and Internet Architecture | 2012

Comparative Analysis of Congestion Control Algorithms Using ns-2

Sanjeev Patel; P. K. Gupta; Arjun Garg; Prateek Mehrotra; Manish Chhabra

Collaboration


Dive into Sanjeev Patel's collaboration.

Top Co-Authors

Karmeshu (Jawaharlal Nehru University)
P. K. Gupta (Jaypee University of Information Technology)
Shalabh Bhatnagar (Indian Institute of Science)
Abhinav Sharma (Jaypee Institute of Information Technology)
Badal Gupta (Jaypee Institute of Information Technology)
Kriti Khanna (Jaypee Institute of Information Technology)
Kritika Rani (Jaypee Institute of Information Technology)