Publication


Featured research published by Ranjan Pal.


International Conference on Computer Communications | 2014

Will cyber-insurance improve network security? A market analysis

Ranjan Pal; Leana Golubchik; Konstantinos Psounis; Pan Hui

Recent work in security has illustrated that solutions aimed at detection and elimination of security threats alone are unlikely to result in a robust cyberspace. As an orthogonal approach to mitigating security problems, some have pursued the use of cyber-insurance as a suitable risk management technique. Such an approach has the potential to jointly align the incentives of security vendors (e.g., Symantec, Microsoft, etc.), cyber-insurers (e.g., ISPs, cloud providers, security vendors, etc.), regulatory agencies (e.g., government), and network users (individuals and organizations), in turn paving the way for comprehensive and robust cyber-security mechanisms. To this end, in this work, we are motivated by the following important question: can cyber-insurance really improve the security of a network? To address this question, we adopt a market-based approach. Specifically, we analyze regulated monopolistic and competitive cyber-insurance markets, where the market elements consist of risk-averse cyber-insurers, risk-averse network users, a regulatory agency, and security vendors. Our results show that (i) without contract discrimination amongst users, there always exists a unique market equilibrium for both market types, but the equilibrium is inefficient and does not improve network security, and (ii) in monopoly markets, contract discrimination amongst users results in a unique market equilibrium that is efficient, which in turn improves network security; however, the cyber-insurer makes zero expected profit. The latter fact is often sufficient to disincentivize the insurer from participating in the market and will eventually lead to its collapse. This fact also emphasizes the need for designing mechanisms that incentivize the insurer to remain a permanent part of the market.
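
For intuition on why risk-averse users enter such a market at all, here is a minimal numerical sketch (not taken from the paper; the wealth, loss size, loss probability, and log utility are illustrative assumptions) comparing expected utility with and without full coverage at an actuarially fair premium.

```python
import math

# Illustrative parameters (assumptions, not taken from the paper).
wealth = 100.0      # initial wealth of a network user
loss = 60.0         # loss if a security incident occurs
p_incident = 0.2    # probability of the incident
premium = p_incident * loss   # actuarially fair premium

def utility(w):
    """Concave (risk-averse) utility; log utility is one common choice."""
    return math.log(w)

# Expected utility without insurance: bear the loss with probability p.
eu_uninsured = (1 - p_incident) * utility(wealth) + p_incident * utility(wealth - loss)

# Expected utility with full coverage at the fair premium: wealth is certain.
eu_insured = utility(wealth - premium)

print(f"uninsured: {eu_uninsured:.4f}, insured: {eu_insured:.4f}")
# For any concave utility, Jensen's inequality gives eu_insured >= eu_uninsured,
# which is why risk-averse users are willing to buy coverage at a fair premium.
```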


International Conference on Distributed Computing and Networking | 2012

Economic models for cloud service markets

Ranjan Pal; Pan Hui

Cloud computing is a paradigm that has the potential to transform and revolutionize the next-generation IT industry by making software available to end-users as a service. A cloud, also commonly known as a cloud network, typically comprises hardware (a network of servers) and a collection of software that is made available to end-users in a pay-as-you-go manner. Multiple public cloud providers (e.g., Amazon) co-existing in a cloud computing market provide similar services (software as a service) to their clients, both in terms of the nature of an application and in the quality of service (QoS) provided. The decision of whether a cloud hosts (or finds it profitable to host) a service in the long term depends jointly on the price it sets, the QoS guarantees it provides to its customers, and its satisfaction of the advertised guarantees. In this paper, we devise and analyze three inter-organizational economic models relevant to cloud networks. We formulate our problems as non-cooperative price and QoS games between multiple cloud providers existing in a cloud market. We prove that a unique pure-strategy Nash equilibrium (NE) exists in two of the three models. Our analysis paves the path for each cloud provider to 1) know what price and QoS level to set for end-users of a given service type so that the provider can remain in the cloud market, and 2) practically and dynamically provision appropriate capacity for satisfying advertised QoS guarantees.
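
As a rough illustration of how a pure-strategy Nash equilibrium of a price game can be located numerically, the sketch below iterates best responses for two providers under a simple linear demand model; the demand function, cost, and parameter values are assumptions for illustration and are not the paper's models.

```python
from scipy.optimize import minimize_scalar

# Toy linear demand: a provider's demand falls in its own price and rises in
# the rival's price (an illustrative assumption, not the paper's demand model).
def demand(p_own, p_rival, a=10.0, b=2.0, c=1.0):
    return max(a - b * p_own + c * p_rival, 0.0)

def profit(p_own, p_rival, cost=1.0):
    return (p_own - cost) * demand(p_own, p_rival)

def best_response(p_rival):
    res = minimize_scalar(lambda p: -profit(p, p_rival),
                          bounds=(0.0, 20.0), method="bounded")
    return res.x

# Iterate best responses; if prices stop moving, we have found an
# (approximate) pure-strategy Nash equilibrium.
p1, p2 = 5.0, 5.0
for _ in range(100):
    p1_new, p2_new = best_response(p2), best_response(p1)
    if abs(p1_new - p1) < 1e-8 and abs(p2_new - p2) < 1e-8:
        break
    p1, p2 = p1_new, p2_new

print(f"approximate equilibrium prices: p1={p1:.3f}, p2={p2:.3f}")
```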


International Conference on Distributed Computing Systems | 2010

Analyzing Self-Defense Investments in Internet Security under Cyber-Insurance Coverage

Ranjan Pal; Leana Golubchik

Internet users such as individuals and organizations are subject to different types of epidemic risks such as worms, viruses, and botnets. To reduce the probability of risk, an Internet user generally invests in self-defense mechanisms such as antivirus and antispam software. However, such software does not completely eliminate risk. Recent works have considered the problem of residual risk elimination by proposing the idea of cyber-insurance. In this regard, an important decision for Internet users is their amount of investment in self-defense mechanisms when insurance solutions are offered. In this paper, we investigate the problem of self-defense investments in the Internet under full and partial cyber-insurance coverage models. By the term 'self-defense investment', we mean the monetary-cum-precautionary cost that each user needs to invest in employing risk-mitigating self-defense mechanisms, given that it is fully or partially insured by the Internet insurance agencies. We propose a general mathematical framework by which co-operative and non-co-operative Internet users can decide whether or not to invest in self-defense for ensuring both individual and social welfare. Our results show that (1) co-operation amongst users results in more efficient self-defense investments than those in a non-cooperative setting under a full insurance coverage model, and (2) partial insurance coverage motivates non-cooperative Internet users to invest more efficiently in self-defense mechanisms when compared to full insurance coverage.
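
The efficiency gap between cooperative and non-cooperative investment can be illustrated with a toy interdependent-security game; the cost function, externality strength, and parameter values below are illustrative assumptions, not the paper's framework.

```python
import numpy as np

# Toy interdependent-security game (illustrative assumptions, not the paper's model).
L, alpha, c = 8.0, 0.6, 5.0   # loss size, externality strength, investment cost weight
grid = np.linspace(0.0, 1.0, 101)

def cost_i(x_i, x_j):
    """User i's total expected cost: residual risk plus investment cost.
    A peer's investment x_j also reduces i's risk -- the positive externality."""
    return L * (1 - x_i) * (1 - alpha * x_j) + c * x_i ** 2

def best_response(x_j):
    return grid[np.argmin([cost_i(x, x_j) for x in grid])]

# Non-cooperative (Nash) outcome: iterate best responses.
x1 = x2 = 0.0
for _ in range(50):
    x1, x2 = best_response(x2), best_response(x1)

# Cooperative (socially optimal) outcome: minimize the sum of both users' costs,
# assuming a symmetric investment level for simplicity.
social = [2 * cost_i(x, x) for x in grid]
x_social = grid[np.argmin(social)]

print(f"Nash investment per user:  {x1:.2f}")
print(f"Social-optimum investment: {x_social:.2f}  (higher: Nash users ignore the externality)")
```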


Applied Sciences on Biomedical and Communication Technologies | 2008

Characterizing reliability in cognitive radio networks

Ranjan Pal; Dua Mohammed Idris; Kirti Pasari; Neeli R. Prasad

Future wireless communications will require increasing opportunistic use of the licensed radio frequency spectrum. The cognitive radio (CR) paradigm provides a suitable framework for this purpose. However, the phenomena of channel fading and primary as well as secondary interference in cognitive radio networks do not guarantee that application demands are achieved continuously over time. In this paper, we consider the problem of analytically evaluating the reliability of a general multi-hop, multi-channel, multi-radio, multi-rate cognitive radio network serving a given multi-application demand vector. We define reliability as the probability that a given demand vector is achieved, where each element in the vector is the desired flow rate of an application. We give a detailed simulation study to support our theory, and analyze the effect of the number of channels, radios, and simultaneous flows on the reliability of a CR network. Our quantitative measure of reliability is indicative of the effectiveness of a CR network, and will help network engineers tune the network for better performance.
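
The reliability metric defined above can be estimated by Monte Carlo simulation once a model of the random channel state is fixed. The sketch below assumes a lognormal fading model and a single aggregate capacity constraint purely for illustration; it is not the paper's analytical evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Desired flow rates (Mbps) of the multi-application demand vector (assumed values).
demand = np.array([2.0, 3.0, 1.5])

def sample_capacity(n_channels=4, mean_capacity=2.5, sigma=0.5):
    """One random network state: per-channel capacities under lognormal fading
    (an assumed model standing in for fading plus primary/secondary interference)."""
    return rng.lognormal(mean=np.log(mean_capacity), sigma=sigma, size=n_channels).sum()

def reliability(n_trials=20_000):
    """Probability that the demand vector is achieved, i.e., that the sampled
    network state can carry the total demanded rate."""
    needed = demand.sum()
    hits = sum(sample_capacity() >= needed for _ in range(n_trials))
    return hits / n_trials

print(f"estimated reliability: {reliability():.3f}")
# More channels or radios raise the sampled capacity, so reliability increases;
# adding flows raises the required rate, so reliability decreases.
```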


Decision and Game Theory for Security | 2011

Aegis: A novel cyber-insurance model

Ranjan Pal; Leana Golubchik; Konstantinos Psounis

Recent works on Internet risk management have proposed the idea of cyber-insurance to eliminate risks due to security threats that cannot be tackled through traditional means such as antivirus and antispam software. In reality, an Internet user faces risks due to security attacks as well as risks due to non-security-related failures (e.g., reliability faults in the form of hardware crashes, buffer overflows, etc.). These risk types are often indistinguishable to a naive user. However, a cyber-insurance agency would most likely insure risks only due to security attacks. In this case, it becomes a challenge for an Internet user to choose the right type of cyber-insurance contract, as traditional optimal contracts, i.e., contracts for security attacks only, might prove to be sub-optimal for him. In this paper, we address the problem of analyzing cyber-insurance solutions when a user faces risks due to both security and non-security-related failures. We propose Aegis, a simple and novel cyber-insurance model in which the user accepts a strictly positive fraction of loss recovery on himself and transfers the rest of the loss recovery to the cyber-insurance agency. We mathematically show that, only when buying cyber-insurance is mandatory, risk-averse Internet users given an option would prefer Aegis contracts to traditional cyber-insurance contracts, under all premium types. This result firmly establishes the non-existence of traditional cyber-insurance markets when Aegis contracts are offered to users. We also derive an interesting counterintuitive result related to the Aegis framework: we show that an increase (decrease) in the premium of an Aegis contract may not always lead to a decrease (increase) in its user demand. In the process, we also state the conditions under which the latter trend and its converse emerge. Our work proposes a new model of cyber-insurance for Internet security that extends all previous related models by accounting for the extra dimension of non-insurable risks. Aegis also incentivizes Internet users to take up more personal responsibility for protecting their systems.
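
A quick numerical sketch of the contract comparison: under an Aegis-style contract the user retains a fraction theta of each security loss, while a traditional contract covers security losses in full; both face the same non-insurable losses. The utility function, loss distributions, premium rule, and value of theta below are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

wealth = 100.0
theta = 0.2          # fraction of the security loss the user keeps under the Aegis-style contract (assumed)
n = 200_000

# Total loss = insurable security loss + non-insurable, non-security loss (assumed mix).
security_loss = rng.exponential(scale=8.0, size=n)
other_loss = rng.exponential(scale=3.0, size=n)

def utility(w):
    return np.log(np.maximum(w, 1e-6))   # concave utility => risk aversion

# Traditional contract: insurer covers security losses in full, at their fair premium.
premium_trad = security_loss.mean()
w_trad = wealth - premium_trad - other_loss

# Aegis-style contract: insurer covers (1 - theta) of the security loss;
# the premium scales down accordingly (a simplifying assumption).
premium_aegis = (1 - theta) * security_loss.mean()
w_aegis = wealth - premium_aegis - theta * security_loss - other_loss

print(f"E[U] traditional: {utility(w_trad).mean():.4f}")
print(f"E[U] Aegis-style: {utility(w_aegis).mean():.4f}")
```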


Measurement and Modeling of Computer Systems | 2012

Cyber-insurance for cybersecurity: A topological take on modulating insurance premiums

Ranjan Pal; Pan Hui

A recent conjecture in cyber-insurance research states that, for compulsory monopolistic insurance scenarios, charging fines and rebates on fair premiums will incentivize network users to make self-defense investments, thereby making cyberspace more robust. Assuming the validity of this conjecture, in this paper we adopt a topological perspective in proposing a mechanism that accounts for (i) the positive externalities posed (through self-defense investments) by network users on their peers, and (ii) the network location (based on centrality measures) of users, and provides an appropriate way to proportionally allocate fines/rebates on user premiums. We mathematically justify (via a game-theoretic analysis) that the optimal fine/rebate per user should be allocated in proportion to the Bonacich, or eigenvector, centrality value of the user.
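
A minimal sketch of this allocation rule: compute each user's eigenvector centrality in the peer network by power iteration and split a total fine/rebate budget in proportion to it. The example topology and the budget amount are assumptions for illustration.

```python
import numpy as np

# Adjacency matrix of a small peer network (an assumed example topology).
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

def eigenvector_centrality(adj, iters=1000, tol=1e-10):
    """Power iteration: the leading eigenvector of the adjacency matrix,
    normalized so the entries sum to one."""
    x = np.ones(adj.shape[0])
    for _ in range(iters):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    return x / x.sum()

centrality = eigenvector_centrality(A)

# Allocate a total fine/rebate amount in proportion to centrality, so that
# better-connected users, whose self-defense choices exert the largest
# externalities on their peers, carry proportionally larger premium adjustments.
total_adjustment = 50.0   # assumed total amount of fines/rebates to distribute
per_user = total_adjustment * centrality
print(np.round(per_user, 2))
```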


Proceedings of the Third Workshop on Hot Topics in Software Defined Networking | 2014

A secure computation framework for SDNs

Nachikethas A. Jagadeesan; Ranjan Pal; Kaushik Nadikuditi; Yan Huang; Elaine Shi; Minlan Yu

Software Defined Networking (SDN) introduces a logically centralized control plane to run diverse management applications. In practice, a logically centralized control plane is realized using multiple controllers for scalability, reliability, and availability reasons. In fact, for various current and future networks of interest, it is practically infeasible to attempt a physically centralized SDN system. As SDN gains popularity, it is important to secure the SDN infrastructure to be resilient to potential attacks. In SDN, controllers can become high-value and attractive targets for an adversary for the following reasons. First, controllers are sinks of information collected from different switches, including network topology and flow-counter values. Such information can be privacy-sensitive. For example, an organization may wish to protect its internal network topology or hide what type of traffic is being routed through its network. In addition, privacy policies may prohibit information from flowing from one part of the organizational network to another. Second, controllers run full-fledged software stacks including an operating system and management applications; therefore, they may expose a much larger attack surface than switches. Moreover, threats may arise from multiple sources. In addition to software vulnerabilities that may exist in the controller software stack, malicious insiders who have privileged access to the controllers may leak sensitive information or sabotage network operations. For example, the network operator wants to make sure that traffic flow counters in the controllers stay untouched by an adversary; manipulation of these counters could, for instance, allow a DDoS attack to go undetected.


Measurement and Modeling of Computer Systems | 2011

Settling for less: a QoS compromise mechanism for opportunistic mobile networks

Ranjan Pal; Sokol Kosta; Pan Hui

In recent years smartphones have become increasingly popular. In April 2011, Google claimed that around 350,000 Android smartphones were being activated daily. A smartphone is a device equipped with a range of sensors, a gigahertz-range CPU, and high-bandwidth wireless networking capabilities. The power and increasing prevalence of smartphones, in combination with current research on opportunistic mobile networking, have (1) increased the range of applications that could be supported on an opportunistic mobile network and (2) given birth to new fields of research such as mobile crowd computing [5] that are geared towards large-scale distributed computations. An opportunistic network is created between mobile phones using local peer-to-peer connections. The nodes in such a network are mobile phones carried by human users on the move, and a link between two phones represents the fact that the corresponding phone users are within each other's wireless communication range. Opportunistic networks are usually intermittently connected and are characterized by social-based mobility and heterogeneous contact rates. Their basic principle of operation is the store-and-forward strategy [2]. Keeping in mind that opportunistic networks in the near future will primarily comprise smartphones as nodes and will be geared towards servicing numerous applications of varied QoS demands, the opportunistic network research community today still faces three basic hurdles to achieving good performance on most applications. User mobility is one such hurdle: in a relatively sparse network, user mobility might lead to network disconnectivity at times, which in turn increases the response time of a user application. The second hurdle is the uncertainty in the quality of the wireless transmission channel; effects like fading, shadowing, and interference might result in data packets being lost during transmission or being transmitted at low speeds. Finally, individual user selfishness is a psychological hurdle that users in an opportunistic network face: a mobile user would be unwilling to forward packets for someone it does not know due to (1) individual security concerns and (2) the unnecessary expenditure of battery power and computation resources for an application it has no relation with. Under the above-mentioned hurdles, it is not guaranteed that user QoS demands can be satisfied to a certain degree at all times, let alone guaranteeing complete user satisfaction.


Applied Sciences on Biomedical and Communication Technologies | 2009

Efficient data processing in ultra low power wireless networks: Ideas from compressed sensing

Ranjan Pal; Bharat Gupta; Neeli R. Prasad; Ramjee Prasad

In this paper, we propose a novel idea to perform efficient data processing in low-power wireless networks under noisy environments. We provide intuitions from compressed sensing on how to process information accurately or near-accurately in lossy environments. We point future wireless network researchers towards harnessing the power of compressed sensing in various applications concerning wireless ad-hoc networks, sensor networks, body area networks, and cognitive radio networks.
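
As a concrete instance of the compressed-sensing machinery the paper draws on, the sketch below recovers a sparse signal from far fewer noisy linear measurements than its length using orthogonal matching pursuit, a standard greedy recovery algorithm; the dimensions, sensing matrix, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 200, 60, 5                      # signal length, measurements, sparsity (assumed)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(0, 1, size=k)   # k-sparse signal

Phi = rng.normal(0, 1 / np.sqrt(m), size=(m, n))   # random Gaussian sensing matrix
y = Phi @ x + 0.01 * rng.normal(size=m)            # noisy measurements

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit on the selected support by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
print(f"relative recovery error: {np.linalg.norm(x_hat - x) / np.linalg.norm(x):.3e}")
```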


IEEE Transactions on Parallel and Distributed Systems | 2016

MATCH for the Prosumer Smart Grid: The Algorithmics of Real-Time Power Balance

Ranjan Pal; Charalampos Chelmis; Marc Frîncu; Viktor K. Prasanna

Prosumers, or proactive consumers, are steadily on the rise in emerging Smart Grid systems. These consumers, apart from their traditional role of using energy from the grid, are also actively involved in individually transferring energy stored from renewable sources such as wind and solar to the grid. The large-scale integration of renewable generation in the emerging grid will re-define ways of meeting consumer energy demands and, more importantly, drive greener and more cost-effective utility operations. In this paper, we investigate the problem of matching consumer demand with grid supply in real time and in the presence of renewables. We formulate this problem as a stochastic optimization problem and propose MATCH, a fast distributed real-time algorithm that accounts for the uncertainties in (i) renewable generation, (ii) the latter's transmission through the grid network, (iii) loads, and (iv) energy prices, and balances power in the Smart Grid at all times. MATCH is based on the Lyapunov stochastic optimization framework and scales to localities with a large number of networked renewable generation sources. We validate the efficacy of MATCH through experiments conducted using data modelled on proprietary data obtained from two public utilities. As part of the main results of this work, we show that (a) MATCH outputs unique approximate-optimal grid parameter configuration vectors in real time that ensure perennial supply-demand balance in the grid at minimum cost, and (b) mesh transmission network topologies lead to better MATCH outputs than other existing transmission network topologies.
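
MATCH builds on the Lyapunov drift-plus-penalty framework. As a rough, self-contained illustration of that framework only (not the MATCH algorithm itself; the single battery queue, cost model, and parameter values below are assumptions), each time slot one greedily minimizes a weighted sum of energy cost and queue drift:

```python
import numpy as np

rng = np.random.default_rng(2)

V = 20.0               # cost-vs-queue-stability tradeoff weight (assumed)
B_MAX = 50.0           # battery (energy queue) capacity in kWh (assumed)
battery = 25.0
T = 48                 # number of time slots simulated

total_cost = 0.0
for t in range(T):
    demand = 5.0 + 2.0 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5)   # load (kW)
    renewable = max(rng.normal(3.0, 1.5), 0.0)                             # solar/wind (kW)
    price = 0.10 + 0.05 * rng.random()                                     # $/kWh

    # Drift-plus-penalty step: choose a battery action u (u > 0 charges,
    # u < 0 discharges) minimizing V * price * grid_purchase plus the drift
    # term (battery - B_MAX / 2) * u, subject to the battery staying in range.
    best_u, best_obj = 0.0, np.inf
    for u in np.linspace(-5.0, 5.0, 101):
        if not (0.0 <= battery + u <= B_MAX):
            continue
        grid = max(demand + u - renewable, 0.0)    # power drawn from the grid
        obj = V * price * grid + (battery - B_MAX / 2) * u
        if obj < best_obj:
            best_u, best_obj = u, obj

    grid = max(demand + best_u - renewable, 0.0)
    total_cost += price * grid
    battery += best_u

print(f"total cost over {T} slots: ${total_cost:.2f}, final battery: {battery:.1f} kWh")
```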

Collaboration


Dive into Ranjan Pal's collaborations.

Top Co-Authors

Leana Golubchik, University of Southern California
Pan Hui, Hong Kong University of Science and Technology
Konstantinos Psounis, University of Southern California
Sung-Han Lin, University of Southern California
Viktor K. Prasanna, University of Southern California
Aravind Kailas, University of North Carolina at Charlotte
Charalampos Chelmis, University of Southern California
Marc Frîncu, University of Southern California