
Publication


Featured research published by Andras Rozsa.


Computer Vision and Pattern Recognition | 2016

Adversarial Diversity and Hard Positive Generation

Andras Rozsa; Ethan M. Rudd; Terrance E. Boult

State-of-the-art deep neural networks suffer from a fundamental problem – they misclassify adversarial examples formed by applying small perturbations to inputs. In this paper, we present a new psychometric perceptual adversarial similarity score (PASS) measure for quantifying adversarial images, introduce the notion of hard positive generation, and use a diverse set of adversarial perturbations – not just the closest ones – for data augmentation. We introduce a novel hot/cold approach for adversarial example generation, which provides multiple possible adversarial perturbations for every single image. The perturbations generated by our novel approach often correspond to semantically meaningful image structures, and allow greater flexibility to scale perturbation-amplitudes, which yields an increased diversity of adversarial images. We present adversarial images on several network topologies and datasets, including LeNet on the MNIST dataset, and GoogLeNet and ResidualNet on the ImageNet dataset. Finally, we demonstrate on LeNet and GoogLeNet that fine-tuning with a diverse set of hard positives improves the robustness of these networks compared to training with prior methods of generating adversarial images.
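
As a rough illustration of the gradient-based idea behind the hot/cold approach described above (a hedged sketch only, not the authors' released code; the PASS measure is omitted, and `model`, `hot_class`, and `alpha` are assumed names for a PyTorch classifier, a chosen target class, and a step size):

```python
# Illustrative sketch: derive one hot/cold-style perturbation direction for a
# single image, assuming a PyTorch classifier `model` that maps a batched
# (C, H, W) tensor to raw logits. The "hot" class is pushed up while the
# "cold" (currently predicted) class is pushed down; varying `alpha` trades
# imperceptibility against misclassification, yielding diverse hard positives.
import torch

def hot_cold_candidate(model, image, hot_class, alpha=1.0):
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0)).squeeze(0)
    cold_class = logits.argmax().item()               # class to move away from
    objective = logits[hot_class] - logits[cold_class]
    objective.backward()                              # gradient w.r.t. the input
    direction = image.grad / (image.grad.norm() + 1e-12)
    return (image + alpha * direction).detach()       # candidate hard positive
```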


IEEE Communications Surveys and Tutorials | 2017

A Survey of Stealth Malware: Attacks, Mitigation Measures, and Steps Toward Autonomous Open World Solutions

Ethan M. Rudd; Andras Rozsa; Manuel Günther; Terrance E. Boult

As our professional, social, and financial existences become increasingly digitized and as our government, healthcare, and military infrastructures rely more on computer technologies, they present larger and more lucrative targets for malware. Stealth malware in particular poses an increased threat because it is specifically designed to evade detection mechanisms, spreading dormant, in the wild for extended periods of time, gathering sensitive information or positioning itself for a high-impact zero-day attack. Policing the growing attack surface requires the development of efficient anti-malware solutions with improved generalization to detect novel types of malware and resolve these occurrences with as little burden on human experts as possible. In this paper, we survey malicious stealth technologies as well as existing solutions for detecting and categorizing these countermeasures autonomously. While machine learning offers promising potential for increasingly autonomous solutions with improved generalization to new malware types, both at the network level and at the host level, our findings suggest that several flawed assumptions inherent to most recognition algorithms prevent a direct mapping between the stealth malware recognition problem and a machine learning solution. The most notable of these flawed assumptions is the closed world assumption: that no sample belonging to a class outside of a static training set will appear at query time. We present a formalized adaptive open world framework for stealth malware recognition and relate it mathematically to research from other machine learning domains.
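
For intuition, the closed-world assumption criticized above can be contrasted with open-world behavior in a few lines (a minimal sketch, not the paper's formalized framework; the centroid representation, distance measure, and threshold are illustrative assumptions):

```python
# Minimal open-world sketch: abstain ("unknown") when a query feature vector
# is far from every known class centroid, rather than always returning the
# nearest training-set label as a closed-world classifier would.
import numpy as np

def open_world_predict(query, centroids, labels, reject_distance=2.0):
    dists = np.linalg.norm(centroids - query, axis=1)  # distance to each known class
    nearest = int(np.argmin(dists))
    if dists[nearest] > reject_distance:
        return "unknown"              # novel-class rejection (open world)
    return labels[nearest]            # familiar class (closed-world behavior)
```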


International Conference on Pattern Recognition | 2016

Are facial attributes adversarially robust?

Andras Rozsa; Manuel Günther; Ethan M. Rudd; Terrance E. Boult

Facial attributes are emerging soft biometrics that have the potential to reject non-matches, for example, based on mismatching gender. To be usable in stand-alone systems, facial attributes must be extracted from images automatically and reliably. In this paper, we propose a simple yet effective solution for automatic facial attribute extraction by training a deep convolutional neural network (DCNN) for each facial attribute separately, without using any pre-training or dataset augmentation, and we obtain new state-of-the-art facial attribute classification results on the CelebA benchmark. To test the stability of the networks, we generated adversarial images - formed by adding imperceptible non-random perturbations to original inputs which result in classification errors - via a novel fast flipping attribute (FFA) technique. We show that FFA generates more adversarial examples than other related algorithms, and that DCNNs for certain attributes are generally robust to adversarial inputs, while DCNNs for other attributes are not. This result is surprising because no DCNNs tested to date have exhibited robustness to adversarial images without explicit augmentation in the training procedure to account for adversarial examples. Finally, we introduce the concept of natural adversarial samples, i.e., images that are misclassified but can be easily turned into correctly classified images by applying small perturbations. We demonstrate that natural adversarial samples commonly occur, even within the training set, and show that many of these images remain misclassified even with additional training epochs. This phenomenon is surprising because correcting the misclassification, particularly when guided by training data, should require only a small adjustment to the DCNN parameters.
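
The attribute-flipping idea can be sketched as follows (a hedged illustration, not the paper's FFA implementation; it assumes a binary-attribute PyTorch network whose single logit's sign encodes present/absent, and the names `model`, `step`, and `max_iters` are assumptions):

```python
# Illustrative sketch: take small input-gradient steps that push a binary
# attribute logit across zero until the predicted attribute flips.
import torch

def flip_attribute(model, image, step=0.002, max_iters=100):
    adv = image.clone().requires_grad_(True)
    with torch.no_grad():
        original_sign = torch.sign(model(adv.unsqueeze(0))).item()
    for _ in range(max_iters):
        logit = model(adv.unsqueeze(0)).squeeze()
        if torch.sign(logit).item() != original_sign:
            return adv.detach()                    # attribute decision flipped
        (-original_sign * logit).backward()        # push the logit toward zero
        with torch.no_grad():
            adv += step * adv.grad.sign()          # small, near-imperceptible step
        adv.grad.zero_()
    return None                                    # robust within this budget
```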


International Conference on Machine Learning and Applications | 2016

Are Accuracy and Robustness Correlated?

Andras Rozsa; Manuel Günther; Terrance E. Boult

Machine learning models are vulnerable to adversarial examples formed by applying small carefully chosen perturbations to inputs that cause unexpected classification errors. In this paper, we perform experiments on various adversarial example generation approaches with multiple deep convolutional neural networks including Residual Networks, the best performing models on ImageNet Large-Scale Visual Recognition Challenge 2015. We compare the adversarial example generation techniques with respect to the quality of the produced images, and measure the robustness of the tested machine learning models to adversarial examples. Finally, we conduct large-scale experiments on cross-model adversarial portability. We find that adversarial examples are mostly transferable across similar network topologies, and we demonstrate that better machine learning models are less vulnerable to adversarial examples.
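
The cross-model portability measurement can be illustrated with a short, hedged sketch (not the paper's experimental code; `source_model`, `target_model`, and the batch conventions are assumptions):

```python
# Illustrative sketch: fraction of adversarial images crafted against the
# source model that also fool a second, independently trained target model.
import torch

@torch.no_grad()
def transfer_rate(source_model, target_model, adv_images, true_labels):
    src_pred = source_model(adv_images).argmax(dim=1)
    tgt_pred = target_model(adv_images).argmax(dim=1)
    fools_source = src_pred != true_labels          # successful on the source
    fools_target = tgt_pred != true_labels          # also wrong on the target
    portable = (fools_source & fools_target).float().sum()
    return (portable / fools_source.float().sum().clamp(min=1)).item()
```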


Pattern Recognition Letters | 2017

Facial attributes: Accuracy and adversarial robustness

Andras Rozsa; Manuel Günther; Ethan M. Rudd; Terrance E. Boult

Facial attributes, emerging soft biometrics, must be automatically and reliably extracted from images in order to be usable in stand-alone systems. While recent methods extract facial attributes using deep neural networks (DNNs) trained on labeled facial attribute data, the robustness of deep attribute representations has not been evaluated. In this paper, we examine the representational stability of several approaches that recently advanced the state of the art on the CelebA benchmark by generating adversarial examples formed by adding small, non-random perturbations to inputs yielding altered classifications. We show that our fast flipping attribute (FFA) technique generates more adversarial examples than traditional algorithms, and that the adversarial robustness of DNNs varies highly between facial attributes. We also test the correlation of facial attributes and find that only for related attributes do the formed adversarial perturbations change the classification of others. Finally, we introduce the concept of natural adversarial samples, i.e., misclassified images where predictions can be corrected via small perturbations. We demonstrate that natural adversarial samples commonly occur and show that many of these images remain misclassified even with additional training epochs, even though their correct classification may require only a small adjustment to network parameters.
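
The natural adversarial sample concept can be made concrete with a small check (again a hedged sketch under a binary-logit assumption, not the authors' code): a misclassified image qualifies if a single small gradient step toward the true label already corrects its prediction.

```python
# Illustrative sketch: test whether a misclassified image is a "natural
# adversarial sample", i.e., correctable by one tiny input perturbation.
import torch

def is_natural_adversarial(model, image, true_sign, epsilon=0.002):
    x = image.clone().requires_grad_(True)
    logit = model(x.unsqueeze(0)).squeeze()
    if torch.sign(logit).item() == true_sign:
        return False                               # already classified correctly
    (true_sign * logit).backward()                 # move the logit toward the truth
    with torch.no_grad():
        corrected = x + epsilon * x.grad.sign()    # one small perturbation
        new_logit = model(corrected.unsqueeze(0)).squeeze()
    return torch.sign(new_logit).item() == true_sign
```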


Computer Vision and Pattern Recognition | 2015

Genetic algorithm attack on minutiae-based fingerprint authentication and protected template fingerprint systems

Andras Rozsa; Albert E. Glock; Terrance E. Boult

This paper describes a new generic attack against minutiae-based fingerprint authentication systems. The goal of the attack is to construct a fingerprint minutiae template that matches a fixed but unknown reference template. The effectiveness of our attacking system is experimentally demonstrated against multiple fingerprint authentication systems. The paper discusses this attack on two leading privacy-enhanced template schemes and shows it can easily recover high matching score templates. A more general and novel aspect of our work is showing that despite high scores of the attack, the resulting templates do not match the original fingerprint and therefore the underlying data is still privacy protected. We conjecture that the ambiguity caused by collisions from projections/hashing during the privacy-enhanced template production provides for a multitude of minima, which trap attacks in a high-score but non-authentic region.
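
The hill-climbing character of the attack can be sketched generically (a minimal illustration, not the paper's attack; the `match_score` black-box interface, the (x, y, angle) template encoding, and all GA parameters are assumptions):

```python
# Illustrative sketch: evolve candidate minutiae templates toward higher
# scores from a black-box matcher that compares against an unknown reference.
import random

def ga_attack(match_score, n_minutiae=30, pop_size=50, generations=200):
    def random_template():
        return [(random.uniform(0, 500), random.uniform(0, 500),
                 random.uniform(0, 360)) for _ in range(n_minutiae)]

    def mutate(template):
        return [(x + random.gauss(0, 5), y + random.gauss(0, 5),
                 (a + random.gauss(0, 10)) % 360) for x, y, a in template]

    population = [random_template() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=match_score, reverse=True)
        parents = ranked[: pop_size // 4]             # keep the top quarter
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=match_score)           # highest-scoring template
```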


Workshop on Applications of Computer Vision | 2018

Towards Robust Deep Neural Networks with BANG

Andras Rozsa; Manuel Günther; Terrance E. Boult


International Joint Conference on Biometrics | 2017

AFFACT: Alignment-free facial attribute classification technique

Manuel Günther; Andras Rozsa; Terrance E. Boult


British Machine Vision Conference | 2017

Adversarial Robustness: Softmax versus Openmax

Andras Rozsa; Manuel Günther; Terrance E. Boult


arXiv: Computer Vision and Pattern Recognition | 2017

Exploring LOTS in Deep Neural Networks

Andras Rozsa; Manuel Günther; Terrance E. Boult

Collaboration


Dive into Andras Rozsa's collaborations.

Top Co-Authors

Terrance E. Boult, University of Colorado Colorado Springs

Ethan M. Rudd, University of Colorado Colorado Springs

Albert E. Glock, University of Colorado Colorado Springs