Publications


Featured research published by Apu Kapadia.


Pervasive and Mobile Computing | 2011

AnonySense: A system for anonymous opportunistic sensing

Minho Shin; Cory Cornelius; Daniel Peebles; Apu Kapadia; David Kotz; Nikos Triandopoulos

We describe AnonySense, a privacy-aware system for realizing pervasive applications based on collaborative, opportunistic sensing by personal mobile devices. AnonySense allows applications to submit sensing tasks to be distributed across participating mobile devices, later receiving verified, yet anonymized, sensor data reports back from the field, thus providing the first secure implementation of this participatory sensing model. We describe our security goals, threat model, and the architecture and protocols of AnonySense. We also describe how AnonySense can support extended security features that can be useful for different applications. We evaluate the security and feasibility of AnonySense through security analysis and prototype implementation. We show the feasibility of our approach through two plausible applications: a Wi-Fi rogue access point detector and a lost-object finder.
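
To make the tasking-and-reporting flow concrete, here is a minimal sketch of how a report might avoid carrying a device identifier. Every name in it (make_task, build_report, report_tag) and the shared-key tag are illustrative stand-ins invented for this sketch; the actual AnonySense system uses group signatures and an anonymizing network, not a shared MAC key.

```python
# Illustrative sketch of an AnonySense-style report flow. The token
# scheme below is a simplification: the real system verifies reports
# with group signatures so no single device is identified.
import hashlib
import json
import time


def make_task(task_id: str, sensor: str, interval_s: int) -> dict:
    """A sensing task as distributed to participating devices."""
    return {"task_id": task_id, "sensor": sensor, "interval_s": interval_s}


def build_report(task: dict, reading: float, epoch_s: int = 300) -> dict:
    """Build a report that carries no device identifier.

    The timestamp is coarsened to an epoch so reports cannot be
    linked to a device by fine-grained timing.
    """
    coarse_time = int(time.time()) // epoch_s * epoch_s
    return {
        "task_id": task["task_id"],
        "sensor": task["sensor"],
        "value": reading,
        "epoch": coarse_time,
    }


def report_tag(report: dict, group_secret: bytes) -> str:
    """Stand-in for verification: a MAC over the report shows it came
    from *some* registered device without naming which one."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hashlib.sha256(group_secret + payload).hexdigest()


task = make_task("wifi-rogue-ap-scan", "wifi", interval_s=60)
report = build_report(task, reading=1.0)
print(report, report_tag(report, b"group-key"))
```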


Communication Systems and Networks | 2009

Opportunistic sensing: Security challenges for the new paradigm

Apu Kapadia; David Kotz; Nikos Triandopoulos

We study the security challenges that arise in opportunistic people-centric sensing, a new sensing paradigm leveraging humans as part of the sensing infrastructure. Most prior sensor-network research has focused on collecting and processing environmental data using a static topology and an application-aware infrastructure, whereas opportunistic sensing involves collecting, storing, processing and fusing large volumes of data related to everyday human activities. This highly dynamic and mobile setting, where humans are the central focus, presents new challenges for information security, because data originates from sensors carried by people, not tiny sensors thrown in the forest or attached to animals. In this paper we aim to instigate discussion of this critical issue, because opportunistic people-centric sensing will never succeed without adequate provisions for security and privacy. To that end, we outline several important challenges and suggest general solutions that hold promise in this new sensing paradigm.


Pervasive Computing and Communications | 2012

DECENT: A decentralized architecture for enforcing privacy in online social networks

Sonia Jahid; Shirin Nilizadeh; Prateek Mittal; Nikita Borisov; Apu Kapadia

A multitude of privacy breaches, both accidental and malicious, have prompted users to distrust centralized providers of online social networks (OSNs) and investigate decentralized solutions. We examine the design of a fully decentralized (peer-to-peer) OSN, with a special focus on privacy and security. In particular, we wish to protect the confidentiality, integrity, and availability of user content and the privacy of user relationships. We propose DECENT, an architecture for OSNs that uses a distributed hash table to store user data, and features cryptographic protections for confidentiality and integrity, as well as support for flexible attribute policies and fast revocation. DECENT ensures that neither data nor social relationships are visible to unauthorized users and provides availability through replication and authentication of updates. We evaluate DECENT through simulation and experiments on the PlanetLab network and show that DECENT is able to replicate the main functionality of current centralized OSNs with manageable overhead.
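
A toy sketch of the storage idea follows: user content is encrypted and integrity-protected before it enters the DHT, so storage nodes only ever hold opaque blobs. The dict-backed DHT and SHA-256 keystream cipher are stand-ins invented for illustration; DECENT itself uses attribute-based encryption for its flexible policies and fast revocation, and a real distributed hash table, neither of which is modeled here.

```python
# Toy model of DECENT-style encrypted storage in a DHT. The "DHT" is a
# local dict and the cipher is a SHA-256 keystream; both are stand-ins.
import hashlib
import hmac


def keystream(key: bytes, n: int) -> bytes:
    """Derive n bytes of keystream from a key (toy cipher, not for real use)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))


class ToyDHT:
    """Distributed hash table reduced to a local dict for illustration."""
    def __init__(self):
        self.store = {}

    def put(self, key: str, value: bytes):
        self.store[key] = value

    def get(self, key: str) -> bytes:
        return self.store[key]


def publish(dht: ToyDHT, enc_key: bytes, mac_key: bytes, obj: bytes) -> str:
    """Encrypt, tag, and store an object; return its content-addressed ref."""
    ct = xor(obj, keystream(enc_key, len(obj)))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    ref = hashlib.sha256(ct).hexdigest()
    dht.put(ref, tag + ct)
    return ref


def retrieve(dht: ToyDHT, enc_key: bytes, mac_key: bytes, ref: str) -> bytes:
    """Fetch, verify integrity, and decrypt an object."""
    blob = dht.get(ref)
    tag, ct = blob[:32], blob[32:]
    assert hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest())
    return xor(ct, keystream(enc_key, len(ct)))


dht = ToyDHT()
ref = publish(dht, b"ek", b"mk", b"status: at the coffee shop")
print(retrieve(dht, b"ek", b"mk", ref))
```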


Computer and Communications Security | 2004

KNOW Why your access was denied: regulating feedback for usable security

Apu Kapadia; Geetanjali Sampemane; Roy H. Campbell

We examine the problem of providing useful feedback about access control decisions to users while controlling the disclosure of the system's security policies. Relevant feedback enhances system usability, especially in systems where permissions change in unpredictable ways depending on contextual information. However, providing feedback indiscriminately can violate the confidentiality of system policy. To achieve a balance between system usability and the protection of security policies, we present Know, a framework that uses cost functions to provide feedback to users about access control decisions. Know honors the policy protection requirements, which are represented as a meta-policy, and generates permissible and relevant feedback to users on how to obtain access to a resource. To the best of our knowledge, our work is the first to address the need for useful access control feedback while honoring the privacy and confidentiality requirements of a system's security policy.
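
A hedged sketch of the cost-function idea: when access is denied, return only those hints that the meta-policy marks disclosable and whose combined cost fits a disclosure budget. The policy encoding, cost values, and budget below are invented for illustration and are not the paper's formalism.

```python
# Sketch of meta-policy-regulated feedback. Conditions that would grant
# access carry an invented "disclosable" flag and disclosure cost.
POLICY = [
    {"hint": "access is allowed only during work hours (9am-5pm)",
     "disclosable": True, "cost": 1},
    {"hint": "access requires membership in group 'staff'",
     "disclosable": True, "cost": 2},
    {"hint": "access is denied while the room is in secure mode",
     "disclosable": False, "cost": 5},  # meta-policy: never reveal
]


def feedback(budget: int) -> list[str]:
    """Return the cheapest permissible hints within the disclosure budget."""
    hints, spent = [], 0
    for rule in sorted(POLICY, key=lambda r: r["cost"]):
        if rule["disclosable"] and spent + rule["cost"] <= budget:
            hints.append(rule["hint"])
            spent += rule["cost"]
    return hints or ["access denied"]  # fall back to no explanation


print(feedback(budget=3))
```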


ACM Transactions on Information and System Security | 2010

BLAC: Revoking Repeatedly Misbehaving Anonymous Users without Relying on TTPs

Patrick P. Tsang; Man Ho Allen Au; Apu Kapadia; Sean W. Smith

Several credential systems have been proposed in which users can authenticate to service providers anonymously. Since anonymity can give users the license to misbehave, some variants allow the selective deanonymization (or linking) of misbehaving users upon a complaint to a Trusted Third Party (TTP). The ability of the TTP to revoke a user's privacy at any time, however, is too strong a punishment for misbehavior. To limit the scope of deanonymization, some systems have been proposed in which users can be deanonymized only if they authenticate "too many times," such as "double spending" with electronic cash. While useful in some applications, such techniques cannot be generalized to more subjective definitions of misbehavior; for example, using such schemes it is not possible to block anonymous users who "deface too many Web pages" on a Web site. We present BLAC, the first anonymous credential system in which service providers can revoke the credentials of misbehaving users without relying on a TTP. Since revoked users remain anonymous, misbehaviors can be judged subjectively without users fearing arbitrary deanonymization by a TTP. Additionally, our construction supports a d-strikes-out revocation policy, whereby users who have been subjectively judged to have repeatedly misbehaved at least d times are revoked from the system. Thus, for the first time, it is indeed possible to block anonymous users who have "defaced too many Web pages" using our scheme.
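
To make the d-strikes-out policy concrete, here is a toy illustration of the accounting only. In BLAC the user proves in zero knowledge that fewer than d blacklist entries belong to them, so the server never links tickets to users; this sketch computes the count directly and elides all of the cryptography, and its names are invented.

```python
# Toy d-strikes-out accounting. Real BLAC replaces the direct count
# below with a zero-knowledge proof, preserving user anonymity.
import hashlib


def ticket(user_secret: bytes, session_nonce: bytes) -> str:
    """A per-session ticket; unlinkable across sessions without the secret."""
    return hashlib.sha256(user_secret + session_nonce).hexdigest()


def strikes(user_secret: bytes, blacklist: list[tuple[str, bytes]]) -> int:
    """Count blacklist entries (ticket, nonce) that this user produced."""
    return sum(1 for t, nonce in blacklist if ticket(user_secret, nonce) == t)


def may_authenticate(user_secret: bytes, blacklist, d: int) -> bool:
    """d-strikes-out: admit the user only if they have fewer than d strikes."""
    return strikes(user_secret, blacklist) < d


alice = b"alice-secret"
bl = [(ticket(alice, b"n1"), b"n1"), (ticket(alice, b"n2"), b"n2")]
print(may_authenticate(alice, bl, d=3))  # True: 2 strikes < 3
print(may_authenticate(alice, bl, d=2))  # False: revoked
```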


Symposium on Usable Privacy and Security | 2011

Eyeing your exposure: quantifying and controlling information sharing for improved privacy

Roman Schlegel; Apu Kapadia; Adam J. Lee

A large body of research has focused on disclosure policies for controlling information release in social sharing (e.g., location-based) applications. However, less work has considered how exposed these policies actually leave users; i.e., to what extent are disclosures in compliance with these policies actually being made? For instance, consider a disclosure policy granting Alice's coworkers access to her location during work hours. Alice might feel that this policy appropriately controls her exposure, but may feel differently if she learned that her boss was accessing her location every 5 minutes. In addition to specifying who has access to personal information, users need a way to quantify, interpret, and control the extent to which this data is shared. We propose and evaluate an intuitive mechanism for summarizing and controlling a user's exposure on smartphone-based platforms. Our approach uses the visual metaphor of eyes appearing and growing in size on the home screen; the rate at which these eyes grow depends on the number of accesses granted for a user's location and the type of person (e.g., family vs. friend) making these accesses. This approach gives users an accurate and ambient sense of their exposure and helps them take actions to limit it, all without explicitly identifying the social contacts making requests. Through two systematic user studies (N = 43, 41) we show that our interface is effective at summarizing complex exposure information and provides information comparable to a more cumbersome interface that presents more detailed information.
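
One way to picture the growing-eyes mechanic is a function from recent accesses to an icon size. The requester-type weights and logarithmic growth curve below are invented for illustration; the paper evaluates the interface itself rather than prescribing a formula.

```python
# Sketch: map weighted location-access counts to an eye-icon size.
import math

# Hypothetical weights: accesses by distant contacts feel more exposing.
WEIGHTS = {"family": 0.5, "friend": 1.0, "coworker": 2.0, "stranger": 4.0}


def eye_size(accesses: list[str], min_px: int = 16, max_px: int = 96) -> int:
    """Return an icon size in pixels for the given recent accesses."""
    score = sum(WEIGHTS.get(kind, 1.0) for kind in accesses)
    # Log scaling keeps a flurry of accesses from instantly maxing out.
    grown = min_px + 20 * math.log1p(score)
    return min(max_px, round(grown))


print(eye_size([]))                 # baseline eye
print(eye_size(["friend"] * 3))     # a few friend accesses
print(eye_size(["coworker"] * 12))  # a boss polling every 5 minutes
```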


Global Communications Conference | 2002

GREEN: proactive queue management over a best-effort network

Wu-chun Feng; Apu Kapadia; Sunil Thulasidasan

We present a proactive queue-management (PQM) algorithm called GREEN (generalized random early evasion network) that applies knowledge of the steady-state behavior of TCP connections to drop packets intelligently and proactively, thus preventing congestion from ever occurring and ensuring a higher degree of fairness between flows. This congestion-prevention approach is in contrast to the congestion-avoidance approach of traditional active queue-management (AQM) schemes where congestion is actively detected early and then reacted to. In addition to enhancing fairness, GREEN keeps packet-queue lengths relatively low and reduces bandwidth and latency jitter. These characteristics are particularly beneficial to real-time multimedia applications. Further, GREEN achieves the above while maintaining high link utilization and low packet loss.
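
The "knowledge of the steady-state behavior of TCP connections" refers to the standard throughput model, throughput ≈ (MSS · c) / (RTT · √p). Below is a hedged sketch of how a router might invert that model to choose a drop probability that steers each flow toward its fair share of the link; the parameter values and per-flow state handling are simplified relative to the paper's actual drop function.

```python
# Sketch: invert the steady-state TCP throughput model to pick a
# proactive drop probability, in the spirit of GREEN.
import math

C = math.sqrt(3.0 / 2.0)  # constant from the TCP throughput model


def green_drop_probability(link_bps: float, n_flows: int,
                           mss_bytes: int, rtt_s: float) -> float:
    """Drop probability making a flow's model throughput equal its fair share."""
    fair_share_bps = link_bps / n_flows  # target rate per flow
    # Solve fair_share = (mss*8*C) / (rtt*sqrt(p)) for p:
    p = (mss_bytes * 8 * C / (fair_share_bps * rtt_s)) ** 2
    return min(1.0, p)


# 10 Mb/s link, 20 TCP flows, 1460-byte segments, 100 ms RTT.
print(green_drop_probability(10e6, 20, 1460, 0.1))  # ~0.08
```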


Human Factors in Computing Systems | 2014

Reflection or action?: how feedback and control affect location sharing decisions

Sameer Patil; Roman Schlegel; Apu Kapadia; Adam J. Lee

Owing to the ever-expanding size of social and professional networks, it is becoming cumbersome for individuals to configure information disclosure settings. We used location sharing systems to unpack the nature of discrepancies between a person's disclosure settings and contextual choices. We conducted an experience sampling study (N = 35) to examine various factors contributing to such divergence. We found that immediate feedback about disclosures, without any ability to control the disclosures, evoked feelings of oversharing. Moreover, deviation from specified settings did not always signal privacy violation; it was just as likely that settings prevented information disclosure considered permissible in situ. We suggest making feedback more actionable or delaying it sufficiently to avoid a knee-jerk reaction. Our findings also make the case for proactive techniques for detecting potential mismatches and recommending adjustments to disclosure settings, as well as for selective control when sharing location with socially distant recipients or visiting atypical locations.


The Journal of Supercomputing | 2002

Packet Spacing: An Enabling Mechanism for Delivering Multimedia Content in Computational Grids

Annette C. Feng; Apu Kapadia; Wu-chun Feng; Geneva G. Belford

Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating congestion-control mechanism, and most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980s due to the use of non-congestion-controlled TCP. To address this problem, we introduce the counter-intuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss by over 50% without adversely affecting delivered throughput.
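
A small sketch of the inter-packet spacing idea: size the gap between packets to the target rate, and use receiver loss feedback to adjust that rate. The back-off and probe constants below are invented for illustration and are not taken from the paper.

```python
# Sketch: inter-packet spacing with control feedback for UDP streaming.

def packet_gap_s(packet_bytes: int, target_bps: float) -> float:
    """Inter-packet gap that spaces packets to the target bit rate."""
    return packet_bytes * 8 / target_bps


def adjust_rate(target_bps: float, loss_rate: float,
                floor_bps: float = 64e3) -> float:
    """Control feedback: back off multiplicatively on reported loss,
    probe upward gently when the path is clean."""
    if loss_rate > 0.01:
        return max(floor_bps, target_bps * 0.75)  # widen the spacing
    return target_bps * 1.05                      # narrow it slightly


rate = 2e6  # 2 Mb/s video stream
for reported_loss in [0.0, 0.0, 0.05, 0.02, 0.0]:
    rate = adjust_rate(rate, reported_loss)
    print(f"rate={rate/1e6:.2f} Mb/s, gap={packet_gap_s(1400, rate)*1e3:.2f} ms")
```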


International Symposium on Technology and Society | 2013

Peer-produced privacy protection

Vaibhav Garg; Sameer Patil; Apu Kapadia; L. Jean Camp

Privacy risks have been addressed through technical solutions such as Privacy-Enhancing Technologies (PETs) as well as regulatory measures including Do Not Track. These approaches are inherently limited as they are grounded in the paradigm of a rational end user who can determine, articulate, and manage consistent privacy preferences. This assumes that self-serving efforts to enact privacy preferences lead to socially optimal outcomes with regard to information sharing. We argue that this assumption typically does not hold true. Consequently, solutions to specific risks are developed, even mandated, without effective reduction in the overall harm of privacy breaches. We present a systematic framework to examine these limitations of current technical and policy solutions. To address the shortcomings of existing privacy solutions, we argue for considering information sharing to be transactions within a community. Outcomes of privacy management can be improved at a lower overall cost if peers, as a community, are empowered by appropriate technical and policy mechanisms. Designing for a community requires encouraging dialogue, enabling transparency, and supporting enforcement of community norms. We describe how peer production of privacy is possible through PETs that are grounded in the notion of information as a common-pool resource subject to community governance.

Collaboration


Dive into Apu Kapadia's collaborations.

Top Co-Authors

David J. Crandall, Indiana University Bloomington
Adam J. Lee, University of Pittsburgh
Robert Templeman, Naval Surface Warfare Center
Roberto Hoyle, Indiana University Bloomington
Tousif Ahmed, Indiana University Bloomington