Josef Allen
Harris Corporation
Publication
Featured research published by Josef Allen.
Proceedings of SPIE | 2011
Josef Allen; Nan Zhao; Jiangbo Yuan; Xiuwen Liu
Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the remaining skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thus transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for subsequent tattoo classification and retrieval.
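The bottom-up split followed by a merge can be sketched in a few lines. The toy below over-segments pixel colors with plain k-means (the split) and then merges clusters whose mean colors are close (the merge). The function name, thresholds, and the color-distance merge criterion are illustrative assumptions; the paper's actual merge step uses learned skin/tattoo priors, not raw color distance.

```python
import numpy as np

def split_merge_segment(pixels, k=8, merge_thresh=30.0, seed=0):
    """Toy split-merge: over-segment with k-means (split), then merge
    clusters whose mean colors are close (merge). Illustrative only."""
    rng = np.random.default_rng(seed)
    # -- split: plain k-means on pixel colors --
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    for _ in range(10):
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    # -- merge: union-find over clusters with nearby mean colors --
    parent = list(range(k))
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for i in range(k):
        for j in range(i + 1, k):
            if np.linalg.norm(centers[i] - centers[j]) < merge_thresh:
                parent[find(j)] = find(i)
    return np.array([find(l) for l in labels])
```

A real pipeline would run this on superpixel features rather than raw pixels and replace the distance test with a learned merge classifier.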
Modeling and Simulation for Military Operations II | 2007
Mark Rahmes; J. Harlan Yates; Josef Allen; Patrick Kelley
High-resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors or malfunctions on various collection platforms, and other factors. DSM voids are also created during bare-earth processing, where culture and vegetation features have been extracted. The Harris LiteSite™ Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equation (PDE) and exemplar-based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited to different input conditions. The PDE technique works better where the area to be filled does not have disproportionately high-frequency data in the neighborhood of the void boundary. Conversely, the exemplar-based technique is better suited to high-frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is repaired.
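As a rough illustration of the PDE (heat-equation) side of this approach, the sketch below fills a void by Jacobi iteration: each void cell is repeatedly replaced by the average of its four neighbours while known cells are held fixed, converging to a discrete harmonic (Laplace) fill. The function name and parameters are hypothetical, not the LiteSite implementation.

```python
import numpy as np

def diffusion_fill(dsm, void_mask, iters=2000):
    """Fill voids by iterating the heat equation: each void cell is
    repeatedly replaced by the mean of its 4-neighbours, while known
    cells stay fixed. A minimal stand-in for the PDE technique."""
    z = dsm.copy().astype(float)
    z[void_mask] = np.nanmean(dsm[~void_mask])  # crude initial guess
    for _ in range(iters):
        up    = np.roll(z,  1, axis=0)
        down  = np.roll(z, -1, axis=0)
        left  = np.roll(z,  1, axis=1)
        right = np.roll(z, -1, axis=1)
        avg = (up + down + left + right) / 4.0
        z[void_mask] = avg[void_mask]  # update only the void cells
    return z
```

On a smooth terrain ramp this reproduces the missing surface almost exactly; near high-frequency texture it over-smooths, which is precisely where the exemplar-based technique would be selected instead.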
Proceedings of SPIE, the International Society for Optical Engineering | 2007
Kenneth Sartor; Josef Allen; Emile Ganthier; Gnana Bhaskar Tenali
The most commonly used smoothing algorithms for complex data processing are blurring functions (e.g., Hanning, Taylor weighting, Gaussian). Unfortunately, filters so designed blur the edges in a Synthetic Aperture Radar (SAR) scene, reduce the accuracy of features, and blur the fringe lines in an interferogram. For Digital Surface Map (DSM) extraction, the blurring of these fringe lines causes inaccuracies in the height of the unwrapped terrain surface. Our goal here is to perform spatially non-uniform smoothing to overcome the above-mentioned disadvantages. This is achieved by using a Complex Anisotropic Non-Linear Diffuser (CANDI) filter that is spatially varying. In particular, an appropriate choice of the convection function in the CANDI filter accomplishes the non-uniform smoothing. This boundary-sharpening, intra-region smoothing filter acts on noisy interferometric SAR (IFSAR) data to produce an interferogram with significantly reduced noise content and desirable local smoothing. Results of CANDI filtering are discussed and compared with those obtained using standard filters on simulated data.
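CANDI operates on complex interferometric data, but its edge-preserving, intra-region smoothing behaviour can be illustrated with the classic real-valued Perona-Malik anisotropic diffusion that this family of filters builds on. The sketch below is that real-valued stand-in with illustrative parameter values, not the CANDI filter itself.

```python
import numpy as np

def perona_malik(img, iters=30, kappa=15.0, dt=0.2):
    """Edge-preserving smoothing via Perona-Malik diffusion: the
    conduction coefficient g = exp(-(|grad I| / kappa)^2) shuts
    diffusion off across strong edges, so smoothing happens only
    within regions. dt <= 0.25 keeps the explicit scheme stable."""
    u = img.astype(float).copy()
    for _ in range(iters):
        # differences toward each 4-neighbour
        dN = np.roll(u,  1, axis=0) - u
        dS = np.roll(u, -1, axis=0) - u
        dW = np.roll(u,  1, axis=1) - u
        dE = np.roll(u, -1, axis=1) - u
        # conduction: near zero across large jumps (edges)
        gN = np.exp(-(dN / kappa) ** 2)
        gS = np.exp(-(dS / kappa) ** 2)
        gW = np.exp(-(dW / kappa) ** 2)
        gE = np.exp(-(dE / kappa) ** 2)
        u += dt * (gN * dN + gS * dS + gW * dW + gE * dE)
    return u
```

On a noisy step image this reduces the in-region noise variance while leaving the step itself essentially intact, which is the qualitative behaviour the paper seeks for interferogram fringe lines.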
Cyber Security and Information Intelligence Research Workshop | 2011
Josef Allen; Sereyvathana Ty; Xiuwen Liu; Ivan Lozano
Securing the power grid is one of our nation's top priorities due to its critical importance. As critical infrastructure, the grid is vulnerable to both cyber and physical attacks; a well-coordinated, synchronized attack can lead to cascading outages, damage major physical components such as generators, and financially devastate the country. Due to the laws of physics, the time for effective countermeasures to prevent such an attack is on the order of ten power cycles; consequently, control operators may not have sufficient time to react. To mitigate such synchronized attacks, we propose a distributed approach that decomposes the grid into resilient nodes connected through a communication infrastructure (e.g., NASPInet). A key advantage of the proposed approach over existing ones is that the resulting system is scalable and deployable and provides an effective way to stop cascading events, which cannot be done using current centralized approaches. Because disturbances can propagate quickly through a large area, the system must be able to compute the grid's stability margins robustly and quickly; we propose a new method based on a polyhedral approximation of the global stable region of each resilient zone, obtained by clustering potential fault vectors for the zone. By combining multiple resilient zones, we can effectively handle practically unlimited synchronized attacks with efficient models.
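A polyhedral approximation of the stable region makes the online test extremely cheap: membership and margin reduce to a handful of dot products, fast enough for decisions within a few power cycles. Assuming the offline clustering step yields facets A x <= b, a minimal sketch (the function names and facet layout are illustrative assumptions):

```python
import numpy as np

def in_polyhedron(A, b, x):
    """True iff state vector x lies inside the polyhedral stable
    region {x : A x <= b}; each row of (A, b) is one facet,
    e.g. learned offline by clustering fault vectors."""
    return bool(np.all(A @ x <= b + 1e-9))

def stability_margin(A, b, x):
    """Signed distance from x to the nearest facet of {x : A x <= b}:
    positive inside (stable, with room to spare), negative outside."""
    norms = np.linalg.norm(A, axis=1)
    return float(np.min((b - A @ x) / norms))
```

Each resilient zone would monitor its own margin locally and share only low-rate summaries over the communication infrastructure, which is what keeps the scheme scalable.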
Cyber Security and Information Intelligence Research Workshop | 2011
Alan Michaels; Josef Allen
Communication in wide-area measurement/control systems (WAM/CS) presents the simultaneous challenges of achieving low latency, high security, and sufficient data throughput to meet overall quality-of-service requirements. High-value and time-critical information, such as that indicating transient instability or attacks on the cyber-physical infrastructure, must be delivered to its destination within 5-20 ms of identification [1], eliminating many existing commercial communications solutions that employ a traditional net-centric ISO stack-based architecture. This paper focuses on a physical-layer wireless communications technology capable of supporting real-time protection, detection, and reaction processes within the WAM/CS via physical-layer encryption. This PHY-layer encryption adapts a maximum-entropy digital chaotic spread-spectrum waveform originally developed for secure military communications [2] by employing the underlying spreading sequence as a private-key cryptosystem, thus avoiding latencies derived from MAC-layer framing or cryptographic block algorithms. The proposed approach delivers latencies on the order of 500 μs per node, allowing significant flexibility in wireless network topology. We present the proposed PHY-layer encryption system in the context of communicating a monitored "event," evaluate its behavior against the traditional information assurance (IA) objectives, and consider its integration with related communications techniques to demonstrate its utility as a building block for enhancing cyber-physical security.
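The idea of using the spreading sequence itself as the shared secret can be illustrated with a toy direct-sequence scheme, where a key-seeded pseudorandom chip sequence stands in for the paper's chaotic spreading waveform. Names, the seed-as-key construction, and parameters are illustrative assumptions; a real chaotic sequence has far better statistical properties than this sketch.

```python
import numpy as np

def spread(bits, key, chips_per_bit=32):
    """Direct-sequence spreading: multiply each data bit (+1/-1) by a
    key-seeded pseudorandom +/-1 chip sequence. Only a receiver that
    holds the same key can regenerate the chips and despread."""
    rng = np.random.default_rng(key)
    chips = rng.choice([-1, 1], size=(len(bits), chips_per_bit))
    return (np.asarray(bits)[:, None] * chips).ravel()

def despread(signal, key, chips_per_bit=32):
    """Regenerate the chip sequence from the key and correlate; the
    sign of each per-bit correlation recovers the data bit, with no
    MAC-layer framing or block-cipher latency in the data path."""
    rng = np.random.default_rng(key)
    n = len(signal) // chips_per_bit
    chips = rng.choice([-1, 1], size=(n, chips_per_bit))
    rx = signal.reshape(n, chips_per_bit)
    return np.sign((rx * chips).sum(axis=1)).astype(int)
```

Without the key, the received chips look like noise; with it, despreading is a single correlation per bit, which is the source of the low per-node latency the paper emphasizes.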
Proceedings of SPIE | 2011
Sereyvathana Ty; Josef Allen; Xiuwen Liu
In recent years, driven by the development of steganalysis methods, steganographic algorithms have evolved rapidly toward the ultimate goal of an unbreakable embedding procedure, resulting in recent algorithms with minimal distortion, exemplified by the family of Modified Matrix Encoding (MME) algorithms, which have proven among the most difficult to detect. In this paper we propose a compressed-sensing-based approach for intrinsic steganalysis to detect MME stego messages. Compressed sensing is a recently proposed mathematical framework for representing an image (in general, a signal) using a sparse representation relative to an overcomplete dictionary by minimizing the l1-norm of the resulting coefficients. Here we first learn a dictionary from a training set using the K-SVD algorithm so that performance is optimized; since JPEG images are processed in 8x8 blocks, the training examples are 8x8 patches rather than entire images, which increases the generalization of compressed sensing. For each 8x8 block, we compute its sparse representation using the orthogonal matching pursuit (OMP) algorithm. Using the computed sparse representations, we train a support vector machine (SVM) to classify 8x8 blocks into stego and non-stego classes. Given an input image, we first divide it into 8x8 blocks; for each block, we compute its sparse representation and classify it using the trained SVM. After all the blocks are classified, the entire image is classified by majority rule over the block-level results. This allows us to achieve a robust decision even when individual 8x8 blocks can be classified only with relatively low accuracy. We have tested the proposed algorithm on two datasets (the Corel-1000 dataset and a remote sensing image dataset) and achieved 100% accuracy in classifying images, even though the accuracy of classifying 8x8 blocks is only 80.89%.
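The per-block sparse coding step can be sketched with a minimal OMP implementation (the K-SVD dictionary learning and the SVM stage are omitted). This is an illustrative version, not the authors' code: greedily pick the dictionary atom most correlated with the residual, then re-fit all chosen atoms by least squares.

```python
import numpy as np

def omp(D, x, n_nonzero=4):
    """Orthogonal Matching Pursuit: D is a dictionary whose columns
    are unit-norm atoms, x a signal (e.g. a flattened 8x8 patch).
    Returns a coefficient vector with at most n_nonzero entries."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # atom most correlated with what remains unexplained
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit ALL selected atoms jointly (the "orthogonal" step)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef
```

In the paper's pipeline, each 8x8 block's coefficient vector would then be fed to the trained SVM, and the per-block labels combined by majority vote into an image-level decision.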
Cyber Security and Information Intelligence Research Workshop | 2010
Josef Allen; Xiuwen Liu; Liam M. Mayron; Washington Mio
Due to the exponentially increasing multimedia traffic on the Internet, steganography, the hiding of messages within a seemingly normal cover object (e.g., an image), is potentially a serious national security threat: images and other media files can be exploited to communicate secret messages without even being noticed, a unique advantage of steganography over encryption. Therefore, steganalysis, the detection of steganographic embedding in media files, has become an active area of research. Most state-of-the-art steganalysis routines are based on a learning strategy: a classifier is first learned from a training set of known cover and stego objects, and the learned classifier is then used to detect embedding in new media files. Among classifiers, support vector machines (SVMs) are the most commonly used due to their reported effectiveness. In this paper, based on the observation that a stego object is typically very close to its cover object (so that it will not attract attention), we argue that the generalization performance of support vector machines and other classifiers is inherently limited unless the steganographic routine has a detectable intrinsic signature. The claim is supported by a systematic investigation of the effectiveness of detecting perturbed quantization (PQ), a minimal-distortion steganographic method for JPEG images. Experimental results suggest interesting alternatives to the generic machine learning paradigm, which should lead to next-generation steganalysis methods.
Military Communications Conference | 2012
Josef Allen; Xiuwen Liu; Ivan Lozano; Xin Yuan
Unexpected occurrences of large-area cascading failures due to small disturbances in worldwide electricity grids serve as evidence of their intrinsic instability. As the grid is the most fundamental critical infrastructure in any modern society, detection and mitigation of such cascading failures, whether due to accidental failures or malicious attacks, are of vital importance to both civilian and military applications. However, due to the unique physical properties of electricity, such as its travel speed, systems must be able to react within a fraction of a second to detect and prevent cascading failures. In this paper, by modeling the grid as a cyber-physical system, we propose a decentralized, hierarchical framework to develop and implement a wide-area actionable system capable of detecting and mitigating potential cascading failures. The states of the grid and its physical constraints are modeled as manifolds, and the evolution of the grid becomes a path on the manifold. By decomposing the grid into resilience zones with minimal power flow between them, we utilize precomputed scenarios in each resilience zone to develop a parametrized model. During deployment, online phasor measurements are used to estimate the stability within each zone and the interactions among zones. The detection of cascading failures is based on detecting cascading failing paths among the K-hop trees built for each zone. We illustrate the effectiveness of the proposed approach using the 2003 Italy blackout scenarios, and we discuss the practical requirements for deploying such a system.
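The K-hop trees used for detecting cascading failing paths can be sketched as a depth-limited breadth-first search over the zone adjacency graph: each zone expands outward at most K hops and watches for failures propagating along the resulting tree. The data structures and function name here are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def k_hop_tree(adj, root, k):
    """Depth-limited BFS from resilience zone `root` over the zone
    adjacency graph `adj` (dict: zone -> list of neighbour zones).
    Returns (parent, depth) dicts covering zones within k hops;
    a cascading failing path shows up as failures marching down
    successive depths of this tree."""
    depth = {root: 0}
    parent = {root: None}
    q = deque([root])
    while q:
        u = q.popleft()
        if depth[u] == k:
            continue  # do not expand beyond the k-hop horizon
        for v in adj.get(u, ()):
            if v not in depth:
                depth[v] = depth[u] + 1
                parent[v] = u
                q.append(v)
    return parent, depth
```

Keeping K small bounds both the computation and the communication each zone needs, which fits the decentralized, sub-second reaction requirement.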
Proceedings of SPIE | 2011
Josef Allen; Jiangbo Yuan; Xiuwen Liu; Mark Rahmes
We present an innovative way to autonomously classify LiDAR points into bare earth, building, vegetation, and other categories. One desirable product of LiDAR data is the automatic classification of the points in the scene. Our algorithm automatically classifies scene points using compressed sensing methods, via an Orthogonal Matching Pursuit algorithm with a generalized K-means clustering algorithm, to extract buildings and foliage from a Digital Surface Model (DSM). This technology reduces manual editing while being cost effective for large-scale automated global scene modeling. Quantitative analyses are provided using Receiver Operating Characteristic (ROC) curves to show the probability of detection and false alarm for building vs. vegetation classification. Histograms are shown with sample-size metrics. Our inpainting algorithms then fill the voids where buildings and vegetation were removed, utilizing Computational Fluid Dynamics (CFD) techniques and partial differential equations (PDEs) to create an accurate Digital Terrain Model (DTM) [6]. Inpainting preserves building height contour consistency and the edge sharpness of identified inpainted regions. Qualitative results illustrate other benefits, such as terrain inpainting's unique ability to minimize or eliminate undesirable terrain data artifacts.
Proceedings of SPIE | 2011
Josef Allen; Sereyvathana Ty; Xiuwen Liu
Digital steganographic algorithms hide secret messages in seemingly innocent cover objects, such as images. Steganographic algorithms are rapidly evolving, reducing distortions and making detection of altered cover objects by steganalysis algorithms more challenging. The value of current steganographic and steganalysis algorithms is difficult to evaluate until they are tested on realistic datasets. We propose a systems approach to steganalysis for reliably detecting steganographic objects among a large number of images, acknowledging that most digital images are intact. The system consists of a cascade of intrinsic image formation filters (IIFFs), where the IIFFs in the early stages are designed to filter out non-stego images based on real-world constraints, and the IIFFs in the late stages are designed to detect intrinsic features of specific steganographic routines. Our approach makes full use of all available constraints, leading to robust detection performance and a low probability of false alarm. Our results, based on a large image set from Flickr.com, demonstrate the potential of our approach on large-scale real-world repositories.
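The cascade structure can be sketched generically: each stage is a predicate that may reject an image (declare it non-stego) and stop early, so cheap, high-recall filters run first and only survivors reach the expensive routine-specific detectors. The function name and stage interface below are assumptions for illustration, not the authors' IIFF API.

```python
def cascade_classify(image, filters):
    """Run a cascade of filter stages over `image`. Each stage is a
    callable returning True if the image remains suspicious; the
    first stage to return False short-circuits the cascade. Only
    images surviving every stage are flagged as suspected stego."""
    for stage in filters:
        if not stage(image):   # stage rejects => image is clean
            return "non-stego"
    return "suspected-stego"
```

Because most real-world images are intact, almost all inputs exit at an early, cheap stage; this is what keeps the false-alarm rate low and the system fast enough for large repositories.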