Clint Feher
Ben-Gurion University of the Negev
Publications
Featured research published by Clint Feher.
Security Informatics | 2012
Asaf Shabtai; Robert Moskovitch; Clint Feher; Shlomi Dolev; Yuval Elovici
In previous studies, classification algorithms were employed successfully for the detection of unknown malicious code. Most of these studies extracted features based on byte n-gram patterns to represent the inspected files. In this study we represent the inspected files using OpCode n-gram patterns, extracted from the files after disassembly, and use them as features for the classification process. The main goal of the classification process is to detect unknown malware within a set of suspected files, which can then be included in anti-virus software as signatures. A rigorous evaluation was performed using a test collection comprising more than 30,000 files, in which OpCode n-gram patterns of various sizes and eight types of classifiers were evaluated. A typical problem in this domain is class imbalance, in which the distribution of the classes in real life varies. We investigated the imbalance problem with reference to several real-life scenarios in which malicious files are expected to constitute about 10% of the total inspected files. Lastly, we present a chronological evaluation that assesses the frequent need for updating the training set. Evaluation results indicate that the evaluated methodology achieves a level of accuracy higher than 96% (with a TPR above 0.95 and an FPR of approximately 0.1), slightly improving on the results of previous studies that used byte n-gram representation. The chronological evaluation showed a clear trend in which performance improves as the training set becomes more up to date.
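The paper does not reproduce its extraction code, but the core idea is easy to sketch. The snippet below, a minimal illustration only, derives OpCode n-gram counts from `objdump` disassembly output; the study's actual disassembler and normalization rules are not specified here, so the first-token regular-expression heuristic is an assumption.

```python
import re
import subprocess
from collections import Counter

def opcode_ngrams(path, n=2):
    """Count OpCode n-grams in an executable after disassembly.

    Sketch only: objdump and a first-token regex stand in for the
    study's unspecified disassembly and normalization pipeline.
    """
    asm = subprocess.run(["objdump", "-d", path],
                         capture_output=True, text=True).stdout
    opcodes = []
    for line in asm.splitlines():
        # Instruction lines look like "  401000:\t55  \tpush   %rbp".
        m = re.match(r"\s*[0-9a-f]+:\t(?:[0-9a-f]{2} ?)+\s*([a-z][\w.]*)", line)
        if m:
            opcodes.append(m.group(1))
    # Slide a window of size n over the opcode sequence.
    return Counter(" ".join(opcodes[i:i + n])
                   for i in range(len(opcodes) - n + 1))
```

Each file is thereby reduced to a bag of OpCode n-grams, which serve as the classification features described in the abstract.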
European Conference on Intelligence and Security Informatics | 2008
Robert Moskovitch; Clint Feher; Nir Tzachar; Eugene Berger; Marina Gitelman; Shlomi Dolev; Yuval Elovici
The recent growth in network usage has motivated the creation of new malicious code for various purposes, including economic ones. Today's signature-based anti-viruses are very accurate, but cannot detect new malicious code. Recently, classification algorithms were employed successfully for the detection of unknown malicious code. However, most of these studies use a byte-sequence n-gram representation of the binary code of the executables. We propose the use of OpCodes (operation codes), generated by disassembling the executables, and use n-grams of the OpCodes as features for the classification process. We present a full methodology for the detection of unknown malicious code, based on text categorization concepts. We performed an extensive evaluation on a test collection of more than 30,000 files, in which we extensively evaluated the OpCode n-gram representation and investigated the imbalance problem with reference to real-life scenarios, in which malicious files are expected to constitute about 10% of the total files. Our results indicate that greater than 99% accuracy can be achieved using a training set with a malicious file percentage lower than 15%, which is higher than in our previous experience with byte-sequence n-gram representation [1].
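As a rough illustration of the text-categorization framing, the sketch below turns per-file n-gram counts (for example, output of an extraction step like the one sketched above) into a term-frequency matrix and trains a classifier. The random forest is only a stand-in for the classifiers evaluated in the paper, and the counts shown are hypothetical.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-file OpCode n-gram counts and labels (0 = benign,
# 1 = malicious); in practice these come from the disassembly step.
train_counts = [{"push mov": 12, "mov call": 7},
                {"xor jmp": 21, "jmp xor": 19}]
train_labels = [0, 1]

vec = DictVectorizer()                      # maps each n-gram to a column
X = vec.fit_transform(train_counts)         # sparse term-frequency matrix
clf = RandomForestClassifier(n_estimators=100).fit(X, train_labels)

# Classify an unseen file from its own n-gram counts.
print(clf.predict(vec.transform([{"push mov": 3, "jmp xor": 2}])))
```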
Intelligence and Security Informatics | 2008
Robert Moskovitch; Dima Stopel; Clint Feher; Nir Nissim; Yuval Elovici
Today's signature-based anti-viruses are very accurate, but are limited in detecting new malicious code. Currently, dozens of new malicious codes are created every day, and this number is expected to increase in the coming years. Recently, classification algorithms were used successfully for the detection of unknown malicious code. However, these studies used a test collection of limited size, with the same malicious-to-benign file ratio in both the training and test sets, which does not reflect real-life conditions. In this paper we present a methodology for the detection of unknown malicious code, based on text categorization concepts. We performed an extensive evaluation using a test collection that contains more than 30,000 malicious and benign files, in which we investigated the imbalance problem. In real-life scenarios, the proportion of malicious files is expected to be low, about 10% of the total files. For practical purposes, it is unclear what the corresponding percentage in the training set should be. Our results indicate that greater than 95% accuracy can be achieved using a training set that contains less than 20% malicious files.
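The imbalance experiment can be sketched as follows: hold the test set at a fixed class ratio and vary the malicious-file percentage in the training set. Everything below is synthetic and illustrative; the corpus, classifier, and set sizes are stand-ins rather than the paper's.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 30,000-file corpus (class 1 = malicious).
X, y = make_classification(n_samples=30000, weights=[0.7, 0.3], random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

rng = np.random.default_rng(0)

def sample_training_set(malicious_pct, n_total=5000):
    """Draw a training set with a fixed percentage of malicious files."""
    n_mal = int(n_total * malicious_pct)
    mal = rng.choice(np.flatnonzero(y_pool == 1), n_mal, replace=False)
    ben = rng.choice(np.flatnonzero(y_pool == 0), n_total - n_mal, replace=False)
    idx = np.concatenate([mal, ben])
    return X_pool[idx], y_pool[idx]

# Sweep the malicious percentage and score on the fixed test set.
for pct in (0.05, 0.10, 0.20, 0.30, 0.50):
    X_tr, y_tr = sample_training_set(pct)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{pct:.0%} malicious in training: "
          f"accuracy {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```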
Information Sciences | 2012
Clint Feher; Yuval Elovici; Robert Moskovitch; Lior Rokach; Alon Schclar
Identity theft is a crime in which hackers perpetrate fraudulent activity under stolen identities by using credentials, such as passwords and smartcards, unlawfully obtained from legitimate users, or by using logged-on computers that are left unattended. User verification methods provide a security layer in addition to the username and password by continuously validating the identity of logged-on users based on their physiological and behavioral characteristics. We introduce a novel method that continuously verifies users according to characteristics of their interaction with the mouse. The contribution of this work is threefold: first, user verification is derived from the classification results of each individual mouse action, in contrast to methods that aggregate mouse actions. Second, we propose a hierarchy of mouse actions from which the features are extracted. Third, we introduce new features to characterize mouse activity, used in conjunction with features proposed in previous work. The proposed algorithm outperforms current state-of-the-art methods by achieving higher verification accuracy while reducing the response time of the system.
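A rough sketch of the per-action idea: extract a feature vector from each mouse action, score each action with a trained classifier, and aggregate the scores into a verification decision. The features below are illustrative guesses in the spirit of the paper, not its actual feature set, and `clf` is assumed to be a previously trained scikit-learn-style classifier.

```python
import numpy as np

def action_features(events):
    """Features for one mouse action given as a list of (t, x, y) samples.

    Illustrative only; the paper defines its own hierarchy and features.
    """
    t, x, y = (np.asarray(c, dtype=float) for c in zip(*events))
    dx, dy, dt = np.diff(x), np.diff(y), np.diff(t)
    dist = np.hypot(dx, dy)
    speed = dist / np.maximum(dt, 1e-9)
    return np.array([
        t[-1] - t[0],                  # action duration
        dist.sum(),                    # traveled distance
        speed.mean(), speed.std(),     # speed statistics
        np.hypot(x[-1] - x[0], y[-1] - y[0]) / max(dist.sum(), 1e-9),  # straightness
    ])

def verify(clf, actions, threshold=0.5):
    """Score each action individually, then average into one decision.

    Per-action classification is the paper's key idea; averaging the
    probabilities is one simple aggregation, assumed here.
    """
    scores = [clf.predict_proba(action_features(a).reshape(1, -1))[0, 1]
              for a in actions]
    return np.mean(scores) >= threshold
```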
Journal in Computer Virology | 2009
Robert Moskovitch; Dima Stopel; Clint Feher; Nir Nissim; Nathalie Japkowicz; Yuval Elovici
The recent growth in network usage has motivated the creation of new malicious code for various purposes. Today's signature-based anti-viruses are very accurate for known malicious code, but cannot detect new malicious code. Recently, classification algorithms were used successfully for the detection of unknown malicious code. However, these studies involved a test collection of limited size, with the same malicious-to-benign file ratio in both the training and test sets, a situation that does not reflect real-life conditions. We present a methodology for the detection of unknown malicious code that draws on concepts from text categorization, based on n-gram extraction from the binary code and feature selection. We performed an extensive evaluation, consisting of a test collection of more than 30,000 files, in which we investigated the class imbalance problem. In real-life scenarios, the proportion of malicious files is expected to be low, about 10% of the total files. For practical purposes, it is unclear what the corresponding percentage in the training set should be. Our results indicate that greater than 95% accuracy can be achieved using a training set with a malicious file content of less than 33.3%.
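For the byte-level representation, extraction and feature selection can be sketched as below; the hex-string keys, n-gram size, and chi-square measure are illustrative choices, not necessarily those used in the paper.

```python
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.feature_selection import SelectKBest, chi2

def byte_ngrams(path, n=4):
    """Count byte n-grams (as hex strings) in a raw binary file."""
    data = open(path, "rb").read()
    return Counter(data[i:i + n].hex() for i in range(len(data) - n + 1))

def select_features(counts, labels, k=5000):
    """Vectorize per-file n-gram counts and keep the top-k n-grams.

    chi-square is one of several selection measures one could apply;
    counts is a list of Counters, labels 0 = benign / 1 = malicious.
    """
    vec = DictVectorizer()
    X = vec.fit_transform(counts)
    sel = SelectKBest(chi2, k=min(k, X.shape[1])).fit(X, labels)
    return vec, sel, sel.transform(X)
```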
Intelligence and Security Informatics | 2007
Robert Moskovitch; Shay Pluderman; Ido Gus; Dima Stopel; Clint Feher; Yisrael Parmet; Yuval Shahar; Yuval Elovici
Detecting unknown malicious code (malcode) is a challenging task. Current common solutions, such as anti-virus tools, rely heavily on prior explicit knowledge of specific instances of malcode, in the form of binary code signatures. During the time between its appearance and an update being sent to anti-virus tools, a new worm can infect many computers and cause significant damage. We present a new host-based intrusion detection approach, based on analyzing the behavior of the computer, to detect the presence of unknown malicious code. The new approach consists of classification algorithms that learn from previously known malcode samples, enabling the detection of unknown malcode. We performed several experiments to evaluate our approach, focusing on computer worms activated on several computer configurations while several programs were running to simulate background activity. We collected 323 features in order to measure the computer's behavior. Four classification algorithms were applied to several feature subsets. The average detection accuracy that we achieved was above 90%, and for specific unknown worms even above 99%.
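A minimal sketch of host behavior monitoring, assuming the `psutil` library; the paper used its own monitoring agent, and the handful of features below only hints at the 323 actually collected.

```python
import time
import psutil  # assumed available; a stand-in for the paper's monitoring agent

def snapshot():
    """One sample of host behavior features (a small illustrative subset)."""
    cpu = psutil.cpu_times_percent(interval=1.0)
    mem = psutil.virtual_memory()
    net = psutil.net_io_counters()
    return {
        "cpu_user": cpu.user,
        "cpu_system": cpu.system,
        "mem_used_pct": mem.percent,
        "net_packets_sent": net.packets_sent,
        "net_packets_recv": net.packets_recv,
        "num_processes": len(psutil.pids()),
    }

def monitor(seconds=60):
    """Collect a time series of feature snapshots for later classification."""
    samples = []
    end = time.time() + seconds
    while time.time() < end:
        samples.append(snapshot())  # each call blocks ~1 s on the CPU sample
    return samples
```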
KI '07: Proceedings of the 30th Annual German Conference on Advances in Artificial Intelligence | 2007
Robert Moskovitch; Nir Nissim; Dima Stopel; Clint Feher; Roman Englert; Yuval Elovici
Detecting unknown worms is a challenging task. Extant solutions, such as anti-virus tools, rely mainly on prior explicit knowledge of specific worm signatures. As a result, after a new worm appears on the Web there is a significant delay until an update carrying the worm's signature is distributed to anti-virus tools. We propose an innovative technique for detecting the presence of an unknown worm based on measurements of the computer's operating system. We monitored 323 computer features and reduced them to 20 through feature selection. Support vector machines were applied using three kernel functions. In addition, we used active learning as a selective sampling method to increase the performance of the classifier, exceeding 90% mean accuracy and reaching 94% accuracy for specific unknown worms.
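The selective-sampling loop can be sketched generically: train an SVM, query the unlabeled points nearest the decision boundary, and retrain. This uncertainty-sampling variant and the synthetic data are assumptions; the paper's exact querying strategy may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Synthetic stand-in data; the paper used 20 selected host features.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled = list(range(50))                 # small initial labeled pool
unlabeled = list(range(50, len(X)))

clf = SVC(kernel="rbf")                   # one of the paper's three kernels
for _ in range(10):                       # ten selective-sampling rounds
    clf.fit(X[labeled], y[labeled])
    # Uncertainty sampling: query the points nearest the hyperplane.
    margins = np.abs(clf.decision_function(X[unlabeled]))
    query = [unlabeled[i] for i in np.argsort(margins)[:20]]
    labeled.extend(query)                 # simulate asking an oracle for labels
    unlabeled = [i for i in unlabeled if i not in set(query)]

print("labeled set size after sampling:", len(labeled))
```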
Pacific Asia Workshop on Intelligence and Security Informatics | 2009
Robert Moskovitch; Clint Feher; Yuval Elovici
Signature-based anti-viruses are very accurate, but are limited in detecting new malicious code. Dozens of new malicious codes are created every day, and the rate is expected to increase in coming years. Heuristic methods are used to extend the generalization to unknown malicious code, but these are not successful enough. Recently, classification algorithms were used successfully for the detection of unknown malicious code. In this paper we describe a methodology for detecting malicious code based on static analysis, together with a chronological evaluation in which a classifier is trained on files up to year k and tested on files from the following years. The evaluation was performed in two setups, in which the percentage of malicious files in the training set was 50% and 16%. With 16% malicious files in the training set, some classifiers showed a trend in which performance improves as the training set becomes more up to date.
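The chronological protocol itself is easy to express; the sketch below assumes hypothetical NumPy arrays of features, labels, and per-file appearance years, with a decision tree standing in for the classifiers evaluated.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def chronological_eval(X, y, years, train_until):
    """Train on files up to year `train_until`, test on each later year.

    Sketch of the protocol; X, y, years are hypothetical arrays of
    features, labels, and each file's year of appearance.
    """
    train = years <= train_until
    clf = DecisionTreeClassifier().fit(X[train], y[train])
    results = {}
    for year in sorted(set(years[years > train_until])):
        test = years == year
        results[year] = accuracy_score(y[test], clf.predict(X[test]))
    return results  # accuracy per test year, to reveal aging trends
```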
Intelligence and Security Informatics | 2008
Robert Moskovitch; Clint Feher; Yuval Elovici
Signature-based anti-viruses are very accurate, but are limited in detecting new malicious code. Dozens of new malicious codes are created every day, and the rate is expected to increase in coming years. Heuristic methods are used to extend the generalization to unknown malicious code, but these are not successful enough. Recently, classification algorithms were used successfully for the detection of unknown malicious code. We earlier investigated the conditions, in terms of the percentage of malicious files in the training set, under which the highest accuracy is achieved. In this paper we describe a methodology for detecting malicious code based on static analysis, together with a chronological evaluation in which a classifier is trained on files up to year k and tested on files from the following years. The evaluation was performed in two setups, in which the percentage of malicious files in the training set was 50% or 16%. With 16% malicious files in the training set, there was a clear trend in which performance improves as the training set becomes more up to date.
Computational Intelligence and Data Mining | 2007
Robert Moskovitch; Ido Gus; Shay Pluderman; Dima Stopel; Clint Feher; Chanan Glezer; Yuval Shahar; Yuval Elovici
Detecting unknown worms is a challenging task. Extant solutions, such as anti-virus tools, rely mainly on prior explicit knowledge of specific worm signatures. As a result, after a new worm appears on the Web there is a significant delay until an update carrying the worm's signature is distributed to anti-virus tools. During this time interval a new worm can infect many computers and cause significant damage. We propose an innovative technique for detecting the presence of an unknown worm, not necessarily by recognizing specific instances of the worm, but rather based on computer measurements. We designed an experiment to test the new technique, employing several computer configurations and background application activity. During the experiments, 323 computer features were monitored. Four feature selection techniques were used to reduce the number of features, and four classification algorithms were applied to the resulting feature subsets. Our results indicate that this approach exceeded 90% mean accuracy, and for specific unknown worms accuracy reached above 99%, using just 20 features while maintaining a low false-positive rate.
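The selection-plus-classification comparison can be approximated as below; the three scikit-learn measures and the random forest are stand-ins, since the paper's four techniques and four classifiers are not all available off the shelf, and the data is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2, f_classif, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the 323 monitored host features.
X, y = make_classification(n_samples=1000, n_features=323,
                           n_informative=20, random_state=0)
X = MinMaxScaler().fit_transform(X)       # chi2 requires non-negative input

# Reduce 323 features to 20 with each measure, then score a classifier.
for name, score_fn in [("chi2", chi2), ("ANOVA F", f_classif),
                       ("mutual info", mutual_info_classif)]:
    X20 = SelectKBest(score_fn, k=20).fit_transform(X, y)
    acc = cross_val_score(RandomForestClassifier(n_estimators=50), X20, y).mean()
    print(f"{name}: mean CV accuracy with 20 features = {acc:.3f}")
```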