Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sara Motahari is active.

Publication


Featured research published by Sara Motahari.


Symposium on Usable Privacy and Security | 2007

Seven privacy worries in ubiquitous social computing

Sara Motahari; Constantine N. Manikopoulos; Roxanne Hiltz; Quentin Jones

A review of the literature suggests seven fundamental privacy challenges in the domain of ubiquitous social computing. To date, most research in this area has focused on the features associated with the revelation of personal location data. However, a more holistic view of privacy concerns that acknowledges these seven risks is required if we are to deploy privacy-respecting, next-generation social computing applications. We highlight the threat associated with user inferences made possible by knowledge of the context and the use of social ties. We also describe work in progress to both understand user perceptions and build a privacy-sensitive urban-enclave social computing system.


Computational Science and Engineering | 2009

Social Inference Risk Modeling in Mobile and Social Applications

Sara Motahari; Sotirios G. Ziavras; Mor Naaman; Mohamed Ismail; Quentin Jones

The emphasis of emerging mobile and Web 2.0 applications on collaboration and communication increases threats to user privacy. A serious, yet under-researched privacy risk results from social inferences about user identity, location, and other personal information. In this paper, after analyzing the social inference problem theoretically, we assess the extent of the risk to users of computer-mediated communication and location-based applications through 1) laboratory experimentation, 2) a mobile-phone field study, and 3) simulation. Our experimentation involved the use of 530 user-created profiles and a 292-subject laboratory chat study between strangers. The field study explored the patterns of collocation and anonymity of 165 users using a location-aware mobile-phone survey tool. The empirical data were then used to populate large-scale simulations of the social inference risk. The work validates the theoretical model, highlights the seriousness of the social inference risk, and shows how the extent and nature of the risk differ for different classes of social computing applications. We conclude with a discussion of the system design implications.
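
For readers unfamiliar with the term, the sketch below illustrates the basic mechanics of a social inference about identity. The people, context, and attributes are hypothetical and the code is not the paper's risk model; it only shows how knowledge of who is collocated, combined with a few revealed attributes, can shrink the set of plausible identities to one.

```python
# Illustrative sketch (hypothetical data, not the paper's model): an observer who
# knows who is collocated in a context can intersect that knowledge with the
# attributes a user reveals; if only a few candidates remain, the nominally
# anonymous user is effectively identifiable.
def candidate_set(collocated_profiles, revealed_attributes):
    """Return the people in the context whose known profile matches everything revealed."""
    return [
        person for person, profile in collocated_profiles.items()
        if all(profile.get(key) == value for key, value in revealed_attributes.items())
    ]

# Hypothetical context: four people known to be in the same campus lab.
lab = {
    "Alice": {"gender": "F", "department": "CS"},
    "Bob":   {"gender": "M", "department": "CS"},
    "Carol": {"gender": "F", "department": "EE"},
    "Dave":  {"gender": "M", "department": "EE"},
}

# An "anonymous" chat partner reveals two seemingly innocuous attributes.
print(candidate_set(lab, {"gender": "F", "department": "CS"}))  # ['Alice'] -> identified
```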


Conference on Recommender Systems | 2010

Common attributes in an unusual context: predicting the desirability of a social match

Julia M. Mayer; Sara Motahari; Richard P. Schuler; Quentin Jones

Social matching systems recommend people to other people. With the widespread adoption of smartphones, mobile social matching systems could potentially transform our social landscape. However, we have a limited understanding of what makes a good social match in the mobile context. We present a theoretical framework that outlines how a user's context and the rarity of different affinity measures in various contexts (match rarity) can be used to provide valuable social matches. We suggest that if a user attribute is very rare in a particular context, users will generally be more interested in an affinity match. We conducted a survey study with 117 respondents to assess this framework. We found that both context and match rarity significantly influence interest in a social match. These results validate the key aspects of the framework. We discuss the results in terms of implications for social matching system design.
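
As a rough illustration of the match-rarity idea, the sketch below treats rarity as the relative frequency of a shared attribute value among the people present in a context. The function name, example data, and interpretation are assumptions made for illustration, not the paper's operationalization.

```python
# Illustrative sketch (not the paper's implementation): "match rarity" approximated
# as the relative frequency of a shared attribute value among people in a context.
# Under the framework's hypothesis, rarer shared attributes make more desirable matches.
from collections import Counter

def match_rarity(shared_value, attribute_values_in_context):
    """Fraction of people in the context who share the value; lower means rarer."""
    counts = Counter(attribute_values_in_context)
    total = len(attribute_values_in_context)
    return counts[shared_value] / total if total else 0.0

# Hypothetical example: sharing a native language at a cafe vs. at a language meetup.
cafe = ["English"] * 40 + ["Farsi"] * 2
meetup = ["Farsi"] * 50 + ["English"] * 3
print(round(match_rarity("Farsi", cafe), 2))    # 0.05 -> rare here, likely an interesting match
print(round(match_rarity("Farsi", meetup), 2))  # 0.94 -> common here, likely uninteresting
```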


IEEE Transactions on Information Forensics and Security | 2010

Online Anonymity Protection in Computer-Mediated Communication

Sara Motahari; Sotirios G. Ziavras; Quentin Jones

In any situation where a set of personal attributes is revealed, there is a chance that the revealed data can be linked back to its owner. Examples of such situations are publishing user profile micro-data or information about social ties, sharing profile information on social networking sites, or revealing personal information in computer-mediated communication (CMC). Measuring user anonymity is the first step to ensuring that the identity of the owner of revealed information cannot be inferred. Most current measures of anonymity ignore important factors such as the probabilistic nature of identity inference, the inferrer's outside knowledge, and the correlation between user attributes. Furthermore, in the social computing domain, variations in personal information and varying levels of information exchange among users make the problem more complicated. We present an information-entropy-based, realistic estimation of the user anonymity level to deal with these issues in social computing, in an effort to help predict identity inference risks. We then address implementation issues of online protection by proposing complexity-reduction methods that take advantage of basic information entropy properties. Our analysis and delay estimation based on experimental data show that our methods are viable, effective, and efficient in facilitating privacy in social computing and synchronous CMC.
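
To make the entropy-based idea concrete, the sketch below computes the Shannon entropy of an inferrer's probability distribution over candidate identities; higher entropy means more anonymity. The distributions are made up for illustration and this is not the paper's estimator, which additionally models outside knowledge and attribute correlations.

```python
# Minimal sketch of an entropy-based anonymity measure in the spirit of the paper:
# anonymity is quantified as the Shannon entropy of the inferrer's probability
# distribution over candidate identities, rather than a simple count of
# indistinguishable users. The distributions below are illustrative only.
import math

def anonymity_entropy(candidate_probs):
    """Shannon entropy (in bits) of a distribution over candidate identities."""
    return -sum(p * math.log2(p) for p in candidate_probs if p > 0)

# Uniform uncertainty over 8 equally likely candidates: log2(8) = 3 bits of anonymity.
print(anonymity_entropy([1 / 8] * 8))                    # 3.0

# Same 8 candidates, but outside knowledge and correlated attributes make one person
# far more likely: entropy collapses, signalling a high identity-inference risk.
print(round(anonymity_entropy([0.86] + [0.02] * 7), 2))  # ~0.98 bits
```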


International Conference on Tools with Artificial Intelligence | 2009

Preventing Unwanted Social Inferences with Classification Tree Analysis

Sara Motahari; Sotirios G. Ziavras; Quentin Jones

A serious threat to user privacy in new mobile and Web 2.0 applications stems from ‘social inferences’. These unwanted inferences are related to users’ identity, current location, and other personal information. We have previously introduced ‘inference functions’ to estimate the social inference risk based on information entropy. In this paper, after analyzing the problem and reviewing our risk estimation method, we create a decision tree to distinguish between high-risk and normal situations. To evaluate our methodology, test and training datasets were collected during a large mobile-phone field study for a location-aware application. The classification tree employs our two inference functions, for the current and past situations, as internal nodes. Our results show that the achieved true classification rates are significantly better than those of approaches that employ other available features for the internal nodes of the trees. The results also suggest that common classification tools cannot accurately capture the information entropy for social applications, mostly due to the lack of sufficient training data for high-risk, low-entropy situations and outliers. Thus, we conclude that estimating the information entropy and the relevant inference risk with a pre-processor can yield a simpler and more accurate classification tree.
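
The sketch below shows the general shape of the classification step described above: a shallow decision tree whose internal nodes test two precomputed inference-function values (for the current and the past situation). The feature values and labels are synthetic placeholders, and scikit-learn stands in for whatever tooling the authors used; only the structure of the approach is intended.

```python
# Hedged sketch: a decision tree whose internal nodes test two precomputed
# "inference function" values to separate high-risk from normal situations.
# The numbers below are synthetic placeholders, not the field-study data.
from sklearn.tree import DecisionTreeClassifier

# Each row: [inference_function_current, inference_function_past], e.g. entropy-based
# scores produced by a pre-processor; label 1 = high social-inference risk.
X = [
    [0.2, 0.3], [0.4, 0.5], [2.5, 2.8], [3.0, 2.9],
    [0.3, 0.1], [2.7, 3.1], [0.5, 0.4], [2.9, 2.6],
]
y = [1, 1, 0, 0, 1, 0, 1, 0]  # low entropy -> few plausible candidates -> high risk

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# A new situation with low entropy in both the current and past views is flagged.
print(clf.predict([[0.25, 0.35]]))  # [1] -> high risk
print(clf.predict([[2.80, 2.70]]))  # [0] -> normal
```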


EAI Endorsed Transactions on Security and Safety | 2011

How did you know that about me? Protecting users against unwanted inferences

Sara Motahari; Julia M. Mayer; Quentin Jones

The widespread adoption of social computing applications is transforming our world. It has changed the way we routinely communicate and navigate our environment, and it has enabled political revolutions. However, despite these applications’ ability to support social action, their use puts individual privacy at considerable risk. This is in large part because the public sharing of personal information through social computing applications enables potentially unwanted inferences about users’ identity, location, or other related personal information. This paper provides a systematic overview of the social inference problem. It highlights the public’s and the research community’s general lack of awareness of the problem and the associated risks to user privacy. A social inference risk prediction framework is presented, along with associated empirical studies that attest to its validity. This framework is then used to outline the major research and practical challenges that need to be addressed if we are to deploy effective social inference protection systems. The challenges examined include how to address the computational complexity of social inference risk modeling and how to design user interfaces that inform users about social inference opportunities.


Archive | 2009

System and method for protecting user privacy using social inference protection techniques

Sara Motahari; Quentin Jones


Archive | 2010

Socially- And Context-Aware People-Matching Systems and Methods Relating Thereto

Quentin Jones; Julia M. Mayer; Sara Motahari


Archive | 2010

Systems and Methods For Anonymity Protection

Sara Motahari; Sotirios G. Ziavras


Hawaii International Conference on System Sciences | 2009

Identity Inference as a Privacy Risk in Computer-Mediated Communication

Sara Motahari; Sotirios G. Ziavras; Richard P. Schuler; Quentin Jones

Collaboration


Dive into Sara Motahari's collaborations.

Top Co-Authors

Quentin Jones (New Jersey Institute of Technology)
Sotirios G. Ziavras (New Jersey Institute of Technology)
Julia M. Mayer (New Jersey Institute of Technology)
Richard P. Schuler (New Jersey Institute of Technology)
Mohamed Ismail (New Jersey Institute of Technology)
Constantine N. Manikopoulos (New Jersey Institute of Technology)
David J. Rosenbaum (New Jersey Institute of Technology)
Ketan Patel (New Jersey Institute of Technology)
Roxanne Hiltz (New Jersey Institute of Technology)