Publication


Featured research published by Christopher S. Gates.


IEEE Transactions on Dependable and Secure Computing | 2014

Effective Risk Communication for Android Apps

Christopher S. Gates; Jing Chen; Ninghui Li; Robert W. Proctor

The popularity and advanced functionality of mobile devices have made them attractive targets for malicious and intrusive applications (apps). Although strong security measures are in place for most mobile systems, the area where these systems often fail is their reliance on the user to make decisions that impact the security of a device. As our prime example, Android relies on users to understand the permissions that an app is requesting and to base the installation decision on the list of permissions. Previous research has shown that this reliance on users is ineffective, as most users do not understand or consider the permission information. We propose a solution that leverages a method to assign a risk score to each app and display a summary of that information to users. Results from four experiments are reported in which we examine the effects of introducing summary risk information and how best to convey such information to a user. Our results show that the inclusion of risk-score information has significant positive effects on the selection process and can also lead to more curiosity about security-related information.


IEEE Transactions on Dependable and Secure Computing | 2015

A Probabilistic Discriminative Model for Android Malware Detection with Decompiled Source Code

Lei Cen; Christopher S. Gates; Luo Si; Ninghui Li

Mobile devices are an important part of our everyday lives, and the Android platform has become a market leader. In recent years a number of approaches for Android malware detection have been proposed, using permissions, source code analysis, or dynamic analysis. In this paper, we propose to use a probabilistic discriminative model based on regularized logistic regression for Android malware detection. Through extensive experimental evaluation, we demonstrate that it can generate probabilistic outputs with highly accurate classification results. In particular, we propose to use Android API calls as features extracted from decompiled source code, and we analyze and explore issues in feature granularity, feature representation, feature selection, and regularization. We show that the probabilistic discriminative model also works well with permissions, and substantially outperforms the state-of-the-art methods for Android malware detection based on application permissions. Furthermore, the discriminative learning model achieves the best detection results by combining both decompiled source code and application permissions. To the best of our knowledge, this is the first work that proposes a probabilistic discriminative model for Android malware detection with a thorough study of the desired representation of decompiled source code, and the first work on Android malware detection that combines analysis of decompiled source code with application permissions.
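
The classifier described above can be pictured with a short, self-contained sketch. This is an illustration only, not the authors' implementation: it assumes scikit-learn, a toy corpus, and that API calls have already been extracted from decompiled code as whitespace-separated tokens; all names and data are hypothetical.

```python
# Minimal sketch (not the authors' code): regularized logistic regression over
# binary Android API-call features, in the spirit of the probabilistic
# discriminative model described above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical toy data: each app is the set of API calls found in its
# decompiled code, with a label (1 = malware, 0 = benign).
apps = [
    "android.telephony.SmsManager.sendTextMessage android.telephony.TelephonyManager.getDeviceId",
    "android.app.Activity.onCreate android.widget.TextView.setText",
    "android.telephony.SmsManager.sendTextMessage java.lang.Runtime.exec",
    "android.app.Activity.onCreate android.view.View.setOnClickListener",
]
labels = [1, 0, 1, 0]

# Binary presence/absence of each API call (one possible feature representation).
vectorizer = CountVectorizer(binary=True, token_pattern=r"\S+")
X = vectorizer.fit_transform(apps)

# L1 regularization doubles as feature selection over a large API vocabulary;
# predict_proba gives the probabilistic output the abstract emphasizes.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
clf.fit(X, labels)

new_app = ["android.telephony.SmsManager.sendTextMessage android.telephony.TelephonyManager.getDeviceId"]
print(clf.predict_proba(vectorizer.transform(new_app))[:, 1])  # estimated P(malware)
```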


IEEE Transactions on Dependable and Secure Computing | 2014

Generating Summary Risk Scores for Mobile Applications

Christopher S. Gates; Ninghui Li; Hao Peng; Bhaskar Pratim Sarma; Yuan Qi; Rahul Potharaju; Cristina Nita-Rotaru; Ian Molloy

One of Android's main defense mechanisms against malicious apps is a risk communication mechanism which, before a user installs an app, warns the user about the permissions the app requires, trusting that the user will make the right decision. This approach has been shown to be ineffective, as it presents the risk information of each app in a “stand-alone” fashion and in a way that requires too much technical knowledge and time to distill useful information. We discuss the desired properties of risk signals and relative risk scores for Android apps in order to generate another metric that users can utilize when choosing apps. We present a wide range of techniques to generate both risk signals and risk scores, based on heuristics as well as principled machine learning techniques. Experiments conducted on real-world data sets show that these methods can effectively identify malware as very risky, are simple to understand, and are easy to use.
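
One simple way to ground the idea of a machine-learned risk score is a generative model over permission requests, in the spirit of the probabilistic generative models used in the companion CCS 2012 paper listed further below. The sketch that follows is an illustration under that assumption, not the authors' exact scoring function: permissions are modeled as independent Bernoulli variables fit on a (mostly benign) corpus, and an app's risk score is the negative log-likelihood of its permission vector, so rare permission combinations score as riskier.

```python
# Illustrative generative risk score over permission bit vectors (an assumption,
# not the authors' exact method): rare permission combinations get higher scores.
import numpy as np

# Hypothetical corpus: rows = apps, columns = permissions (1 = requested).
corpus = np.array([
    [1, 0, 0, 0],   # e.g. INTERNET only
    [1, 1, 0, 0],   # INTERNET + ACCESS_FINE_LOCATION
    [1, 0, 0, 0],
    [1, 0, 1, 0],   # INTERNET + READ_CONTACTS
    [1, 0, 0, 0],
])

# Laplace-smoothed per-permission request probabilities.
theta = (corpus.sum(axis=0) + 1.0) / (corpus.shape[0] + 2.0)

def risk_score(perm_vector: np.ndarray) -> float:
    """Negative log-likelihood of the permission vector under the model."""
    p = np.where(perm_vector == 1, theta, 1.0 - theta)
    return float(-np.log(p).sum())

# An app requesting an unusual combination gets a higher (riskier) score.
print(risk_score(np.array([1, 0, 0, 0])))   # common profile -> low score
print(risk_score(np.array([1, 1, 1, 1])))   # rare profile   -> high score
```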


Journal of Cognitive Engineering and Decision Making | 2015

Influence of Risk/Safety Information Framing on Android App-Installation Decisions

Jing Chen; Christopher S. Gates; Ninghui Li; Robert W. Proctor

We conducted three experiments with participants recruited on Amazon’s Mechanical Turk to examine the influence of summary risk information, derived from app permissions, on app-installation decisions. This information can be framed negatively as amount of risk or positively as amount of safety, which was varied in all the experiments. In Experiments 1 and 2, the participants performed tasks in which they selected two Android apps from a list of six; in Experiment 3, the tasks were to reject two apps from the list. This summary information influenced the participants to choose less risky alternatives, particularly when it was framed in terms of safety and the app had high user ratings. Participants in the safety condition reported that they attended more to the summary score than did those in the risk condition. They also showed better comprehension of what the score was conveying, regardless of whether the task was to select or reject. The results imply that development of a valid risk/safety index for apps has the potential to improve users’ app-installation decisions, especially if that information is framed as amount of safety.


Symposium on Access Control Models and Technologies | 2010

Towards analyzing complex operating system access control configurations

Hong Chen; Ninghui Li; Christopher S. Gates; Ziqing Mao

An operating system relies heavily on its access control mechanisms to defend against local and remote attacks. The complexity of modern access control mechanisms and the scale of possible configurations are often overwhelming to system administrators and software developers. Therefore, misconfigurations are very common and their security consequences are serious. Given the popularity and uniqueness of Microsoft Windows systems, it is critical to have a tool to comprehensively examine the access control configurations. However, current studies on Windows access control mechanisms are mostly based on known attack patterns. We propose a tool, WACCA, to systematically analyze Windows configurations. Given the attacker's initial abilities and goals, WACCA generates an attack graph based on interaction rules. The tool then automatically generates attack patterns from the attack graph. Each attack pattern represents attacks of the same nature. The attack subgraphs and instances are also generated for each pattern. Compared to existing solutions, WACCA is more comprehensive and does not rely on manually defined attack patterns. It also has a unique feature in that it models software vulnerabilities and can therefore find attacks that rely on exploiting these vulnerabilities. We study two attack cases on a Windows Vista host and discuss the analysis results.
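
The attack-graph generation step can be pictured with a small forward-chaining sketch. This is a hypothetical illustration of the general technique, not WACCA itself: the facts and interaction rules below are invented, and WACCA's actual rule language and graph representation are richer.

```python
# Hypothetical illustration of attack-graph construction by forward chaining
# (not WACCA itself). Starting from the attacker's initial abilities, interaction
# rules are applied until no new facts appear; every rule application becomes an
# edge, and reaching the goal fact means an attack exists.

# Each rule: (name, precondition facts, resulting fact). All facts are invented.
RULES = [
    ("write_startup_dir", {"user_session"}, "file_in_startup_dir"),
    ("autorun_at_login", {"file_in_startup_dir"}, "code_runs_as_admin"),
    ("read_secret", {"code_runs_as_admin"}, "goal_read_protected_file"),
]

def build_attack_graph(initial_facts, goal):
    facts = set(initial_facts)
    edges = []  # (preconditions, rule name, new fact)
    changed = True
    while changed:
        changed = False
        for name, pre, post in RULES:
            if pre <= facts and post not in facts:
                facts.add(post)
                edges.append((tuple(sorted(pre)), name, post))
                changed = True
    return edges, goal in facts

edges, reachable = build_attack_graph({"user_session"}, "goal_read_protected_file")
for pre, rule, post in edges:
    print(f"{pre} --{rule}--> {post}")
print("goal reachable:", reachable)
```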


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Framing of Summary Risk/Safety Information and App Selection

Jing Chen; Christopher S. Gates; Robert W. Proctor; Ninghui Li

Participants recruited on Amazon’s Mechanical Turk performed a series of tasks in which they chose two Android apps from a list of six. Summary risk/safety information for each app was displayed in the form of one to five filled circles: The number of filled circles specified increasing risk for half of the participants and increasing safety for the other half. This summary information influenced the participants’ decisions, particularly when the app had high user ratings and when the decision was framed in terms of safety rather than risk. Participants indicated that they attended more to the risk/safety information when it was conveyed as amount of safety, and they showed better comprehension of what the index was conveying for safety as opposed to risk. The results imply that development of a valid risk/safety index for apps will improve users’ app-selection decisions, particularly if that information is displayed as amount of safety.
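
The display format described above (one to five filled circles, framed either as risk or as safety) can be sketched in a few lines. The function below is purely illustrative, not the experiment software; the name, the [0, 1] score range, and the rounding rule are assumptions.

```python
# Illustrative sketch of the summary display described above: a score in [0, 1]
# rendered as one to five filled circles, framed either as amount of risk or as
# amount of safety. Not the experiment software; names and scaling are assumed.
def summary_circles(risk_score: float, framing: str = "safety") -> str:
    """risk_score: estimated risk in [0, 1]; framing: 'risk' or 'safety'."""
    value = risk_score if framing == "risk" else 1.0 - risk_score
    filled = max(1, min(5, round(value * 5)))  # always one to five filled circles
    return framing + ": " + "●" * filled + "○" * (5 - filled)

print(summary_circles(0.2, framing="risk"))    # risk: 1 of 5 circles filled
print(summary_circles(0.2, framing="safety"))  # safety: 4 of 5 circles filled
```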


Annual Computer Security Applications Conference | 2012

CodeShield: towards personalized application whitelisting

Christopher S. Gates; Ninghui Li; Jing Chen; Robert W. Proctor

Malware has been a major security problem both in organizations and homes for more than a decade. One common feature of most malware attacks is that at a certain point early in the attack, an executable is dropped on the system which, when executed, enables the attacker to achieve their goals and maintain control of the compromised machine. In this paper we propose the concept of Personalized Application Whitelisting (PAW) to block all unsolicited foreign code from executing on a system. We introduce CodeShield, an approach to implementing PAW on Windows hosts. CodeShield uses a simple and novel security model and a new user interaction approach for obtaining security-critical decisions from users. We have implemented CodeShield, demonstrated its security effectiveness, and conducted a user study in which 38 participants ran CodeShield on their laptops for six weeks. The resulting data demonstrate the usability and promise of our design.
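
The core allow/deny decision in application whitelisting can be illustrated with a minimal sketch. This is an assumption-laden illustration, not CodeShield's design: it uses a simple SHA-256 allowlist and invented function names, whereas CodeShield's actual security model and user-interaction flow are described in the paper.

```python
# Minimal, hypothetical sketch of an application-whitelisting check (not
# CodeShield's implementation): an executable may run only if its hash is
# already on the user's personal allowlist; anything else is treated as
# unsolicited foreign code and deferred to a user decision.
import hashlib
import sys
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def may_execute(path: Path, allowlist: set) -> bool:
    """Allow execution only for binaries whose hash is on the personal allowlist."""
    return sha256_of(path) in allowlist

# Hypothetical usage: seed the allowlist with a binary the user already trusts,
# then gate every later execution attempt on allowlist membership.
trusted_binary = Path(sys.executable)
allowlist = {sha256_of(trusted_binary)}
print(may_execute(trusted_binary, allowlist))  # True: already trusted
```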


European Symposium on Research in Computer Security | 2015

Learning from Others: User Anomaly Detection Using Anomalous Samples from Other Users

Youngja Park; Ian Molloy; Suresh Chari; Zenglin Xu; Christopher S. Gates; Ninghui Li

Machine learning is increasingly used as a key technique for solving many security problems such as botnet detection, transaction fraud, and insider threats. One of the key challenges to the widespread application of ML in security is the lack of labeled samples from real applications. For known or common attacks, labeled samples are available, and therefore supervised techniques such as multi-class classification can be used. However, in many security applications it is difficult to obtain labeled samples, as each attack can be unique. In order to detect novel, unseen attacks, researchers have used unsupervised outlier detection or one-class classification approaches, in which existing samples are treated as benign. These methods, however, yield high false positive rates, preventing their adoption in real applications.
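
The abstract above stops at the problem statement, but the title points at the remedy: supplement a user's own (assumed benign) history with anomalous samples borrowed from other users. The sketch below illustrates that general idea as a per-user binary classifier; it is an assumption about the approach, not necessarily the paper's algorithm, and all data and names are synthetic.

```python
# Illustrative sketch of the idea suggested by the title (an assumption, not
# necessarily the paper's algorithm): instead of a pure one-class model, train a
# per-user binary classifier whose negative class is the user's own history and
# whose positive class is anomalous samples borrowed from *other* users.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic feature vectors (e.g. per-session activity statistics).
own_normal_samples = rng.normal(loc=0.0, scale=1.0, size=(200, 5))
others_anomalies = rng.normal(loc=4.0, scale=1.0, size=(20, 5))

X = np.vstack([own_normal_samples, others_anomalies])
y = np.concatenate([np.zeros(len(own_normal_samples)), np.ones(len(others_anomalies))])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score new sessions of this user; higher probability = more anomalous.
new_sessions = rng.normal(loc=0.0, scale=1.0, size=(3, 5))
print(clf.predict_proba(new_sessions)[:, 1])
```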


European Symposium on Research in Computer Security | 2013

Estimating Asset Sensitivity by Profiling Users

Youngja Park; Christopher S. Gates; Stephen C. Gates

We introduce algorithms to automatically score and rank information technology (IT) assets in an enterprise, such as computer systems or data files, by their business value and criticality to the organization. Typically, information assets are manually assigned classification labels with respect to their confidentiality, integrity, and availability. In this paper, we propose semi-automatic machine learning algorithms to estimate the sensitivity of assets by profiling their users. Our methods do not require direct access to the target assets or privileged knowledge about the assets, resulting in a more efficient, scalable, and privacy-preserving approach compared with existing data security solutions that rely on data content classification. Instead, we rely on external information such as the attributes of the users, their access patterns, and other data content published by the users. Validation with a set of 8,500 computers collected from a large company shows that all our algorithms perform significantly better than two baseline methods.
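
One plausible way to instantiate "profiling the users" is sketched below purely as an illustration: represent each asset by aggregates of its accessors' attributes and fit a model on a small manually labeled subset. The specific features, the aggregation, and the Ridge regressor are assumptions for the sketch, not the authors' algorithms.

```python
# Hypothetical sketch of asset-sensitivity estimation from user profiles (not
# the authors' algorithms): each asset gets a feature vector aggregated from
# the attributes of the users who access it, and a model trained on a few
# manually labeled assets scores the rest.
import numpy as np
from sklearn.linear_model import Ridge

# Invented per-user attributes: (seniority_level, is_executive, num_confidential_projects)
users = {
    "alice": np.array([7.0, 1.0, 5.0]),
    "bob":   np.array([2.0, 0.0, 0.0]),
    "carol": np.array([5.0, 0.0, 3.0]),
    "dave":  np.array([1.0, 0.0, 0.0]),
}

# Who accesses which asset (assets could be hosts or data files).
access_log = {
    "finance-db": ["alice", "carol"],
    "dev-wiki":   ["bob", "dave", "carol"],
    "hr-share":   ["alice"],
    "test-host":  ["bob", "dave"],
}

def asset_features(accessors):
    """Aggregate accessor attributes into a fixed-length asset profile."""
    profile = np.stack([users[u] for u in accessors])
    return np.concatenate([profile.mean(axis=0), profile.max(axis=0)])

names = list(access_log)
X = np.stack([asset_features(access_log[a]) for a in names])

# A small manually labeled subset (sensitivity on a 0-10 scale) trains the model.
labeled = {"finance-db": 9.0, "test-host": 1.0}
train_idx = [names.index(a) for a in labeled]
model = Ridge(alpha=1.0).fit(X[train_idx], list(labeled.values()))

for name, score in zip(names, model.predict(X)):
    print(f"{name}: estimated sensitivity {score:.1f}")
```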


Computer and Communications Security | 2012

Using probabilistic generative models for ranking risks of Android apps

Hao Peng; Christopher S. Gates; Bhaskar Pratim Sarma; Ninghui Li; Yuan Qi; Rahul Potharaju; Cristina Nita-Rotaru; Ian Molloy

Collaboration


Christopher S. Gates's top co-authors include Jing Chen (New Mexico State University).