Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Erik Blasch is active.

Publication


Featured research published by Erik Blasch.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study

Zheng Liu; Erik Blasch; Zhiyun Xue; Jiying Zhao; Robert Laganière; Wei Wu

Comparison of image processing techniques is critically important in deciding which algorithm, method, or metric to use for enhanced image assessment. Image fusion is a popular choice for various image enhancement applications such as overlay of two image products, refinement of image resolutions for alignment, and image combination for feature extraction and target recognition. Since image fusion is used in many geospatial and night vision applications, it is important to understand these techniques and provide a comparative study of the methods. In this paper, we conduct a comparative study on 12 selected image fusion metrics over six multiresolution image fusion algorithms for two different fusion schemes and input images with distortion. The analysis can be applied to different image combination algorithms, image processing methods, and over a different choice of metrics that are of use to an image processing expert. The paper relates the results to an image quality measurement based on power spectrum and correlation analysis and serves as a summary of many contemporary techniques for objective assessment of image fusion algorithms.
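
The paper relates fusion quality to power-spectrum and correlation analysis. To give a flavor of what an objective, reference-free fusion metric looks like, here is a minimal correlation-based quality index; it is an illustrative stand-in, not one of the 12 metrics evaluated in the paper.

```python
import numpy as np

def correlation_fusion_quality(src_a, src_b, fused):
    """Average Pearson correlation between the fused image and each source.

    Values near 1 indicate the fused image preserves the intensity
    structure of both inputs. Illustrative only; not one of the 12
    metrics studied in the paper.
    """
    def corr(x, y):
        return np.corrcoef(x.ravel(), y.ravel())[0, 1]
    return 0.5 * (corr(fused, src_a) + corr(fused, src_b))

# Toy example: fuse two synthetic bands by averaging and score the result.
rng = np.random.default_rng(0)
visible = rng.random((64, 64))
infrared = rng.random((64, 64))
fused = 0.5 * (visible + infrared)
print(correlation_fusion_quality(visible, infrared, fused))
```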


IEEE Transactions on Image Processing | 2013

Efficient Minimum Error Bounded Particle Resampling L1 Tracker With Occlusion Detection

Xue Mei; Haibin Ling; Yi Wu; Erik Blasch; Li Bai

Recently, sparse representation has been applied to visual tracking to find the target with the minimum reconstruction error from a target template subspace. Though effective, these L1 trackers require high computational costs due to numerous calculations for l1 minimization. In addition, the inherent occlusion insensitivity of the l1 minimization has not been fully characterized. In this paper, we propose an efficient L1 tracker, named the bounded particle resampling (BPR)-L1 tracker, with a minimum error bound and occlusion detection. First, the minimum error bound is calculated from a linear least squares equation and serves as a guide for particle resampling in a particle filter (PF) framework. Most of the insignificant samples are removed before solving the computationally expensive l1 minimization in a two-step testing procedure. The first step, named τ testing, compares the sample observation likelihood to an ordered set of thresholds to remove insignificant samples without loss of resampling precision. The second step, named max testing, identifies the largest sample probability relative to the target to further remove insignificant samples without altering the tracking result of the current frame. Though sacrificing minimal precision during resampling, max testing achieves a significant speed-up on top of τ testing. The BPR-L1 technique can also be beneficial to other trackers that have minimum error bounds in a PF framework, especially for trackers based on sparse representations. After the error-bound calculation, BPR-L1 performs occlusion detection by investigating the trivial coefficients in the l1 minimization. These coefficients, by design, contain rich information about image corruptions, including occlusion. Detected occlusions are then used to enhance the template updating. For evaluation, we conduct experiments on three video applications: biometrics (head movement, hand holding object, singers on stage), pedestrians (urban travel, hallway monitoring), and cars in traffic (wide area motion imagery, ground-mounted perspectives). The proposed BPR-L1 method demonstrates excellent performance compared with nine state-of-the-art trackers on eleven challenging benchmark sequences.
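
The τ-testing idea, a cheap least-squares error bound used to discard particles before the expensive l1 solve, can be sketched as follows. This is a simplified illustration assuming a Gaussian observation likelihood and a single fixed threshold tau, whereas the paper compares against an ordered set of thresholds.

```python
import numpy as np

def ls_error_bound(y, T):
    """Unconstrained least-squares residual over the template subspace T.

    Dropping the l1 constraint can only lower the reconstruction error,
    so this residual lower-bounds the l1 tracker's error and thus
    upper-bounds the sample's observation likelihood.
    """
    coef, *_ = np.linalg.lstsq(T, y, rcond=None)
    return np.linalg.norm(y - T @ coef)

def tau_test(samples, T, sigma, tau):
    """Discard samples whose likelihood upper bound falls below tau.

    samples: iterable of flattened candidate patches. A fixed tau is a
    simplification of the paper's ordered, data-driven threshold set.
    """
    survivors = []
    for y in samples:
        bound = ls_error_bound(y, T)
        if np.exp(-bound**2 / sigma**2) >= tau:
            survivors.append(y)  # only these pay for the full l1 solve
    return survivors
```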


IEEE Transactions on Image Processing | 2012

Real-Time Probabilistic Covariance Tracking With Efficient Model Update

Yi Wu; Jian Cheng; Jinqiao Wang; Hanqing Lu; Jun Wang; Haibin Ling; Erik Blasch; Li Bai

The recently proposed covariance region descriptor has been proven robust and versatile at a modest computational cost. The covariance matrix enables efficient fusion of different types of features, where the spatial and statistical properties, as well as their correlation, are characterized. The similarity between two covariance descriptors is measured on Riemannian manifolds. Based on the same metric but with a probabilistic framework, we propose a novel tracking approach on Riemannian manifolds with novel incremental covariance tensor learning (ICTL). To address the appearance variations, ICTL incrementally learns a low-dimensional covariance tensor representation and efficiently adapts online to appearance changes of the target with low computational complexity, resulting in real-time performance. The covariance-based representation and the ICTL are then combined with the particle filter framework to allow better handling of background clutter, as well as temporary occlusions. We test the proposed probabilistic ICTL tracker on numerous benchmark sequences involving different types of challenges, including occlusions and variations in illumination, scale, and pose. The proposed approach demonstrates excellent real-time performance, both qualitatively and quantitatively, in comparison with several previously proposed trackers.
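
The covariance descriptor and its Riemannian comparison can be sketched compactly. The feature set (pixel coordinates, intensity, gradient magnitudes) and the log-Euclidean metric below are common choices in the covariance-tracking literature, assumed here for illustration; the paper builds its probabilistic ICTL machinery on top of this kind of representation.

```python
import numpy as np

def covariance_descriptor(patch):
    """Region covariance of per-pixel features (x, y, I, |Ix|, |Iy|).

    A common feature set for covariance tracking, assumed here for
    illustration. A small ridge keeps the matrix positive definite.
    """
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    gy, gx = np.gradient(patch.astype(float))
    feats = np.stack([xs.ravel(), ys.ravel(), patch.astype(float).ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()])
    return np.cov(feats) + 1e-6 * np.eye(5)

def spd_log(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_distance(c1, c2):
    """Log-Euclidean distance, one standard Riemannian metric on SPD
    matrices, used to compare covariance descriptors."""
    return np.linalg.norm(spd_log(c1) - spd_log(c2), ord="fro")
```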


IEEE Transactions on Image Processing | 2015

Encoding Color Information for Visual Tracking: Algorithms and Benchmark

Pengpeng Liang; Erik Blasch; Haibin Ling

While color information is known to provide rich discriminative clues for visual inference, most modern visual trackers limit themselves to the grayscale realm. Despite recent efforts to integrate color in tracking, there is a lack of comprehensive understanding of the role color information can play. In this paper, we attack this problem by conducting a systematic study from both the algorithm and benchmark perspectives. On the algorithm side, we comprehensively encode 10 chromatic models into 16 carefully selected state-of-the-art visual trackers. On the benchmark side, we compile a large set of 128 color sequences with ground truth and challenge factor annotations (e.g., occlusion). A thorough evaluation is conducted by running all the color-encoded trackers, together with two recently proposed color trackers. A further validation is conducted on an RGBD tracking benchmark. The results clearly show the benefit of encoding color information for tracking. We also perform detailed analysis on several issues, including the behavior of various combinations between color model and visual tracker, the degree of difficulty of each sequence for tracking, and how different challenge factors affect the tracking performance. We expect the study to provide the guidance, motivation, and benchmark for future work on encoding color in visual tracking.
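
Encoding a chromatic model typically means replacing a tracker's grayscale feature with a color-space representation such as a joint color histogram. The sketch below shows two illustrative encodings (raw RGB and normalized rg chromaticity); the function names and bin counts are assumptions, and the paper evaluates 10 color models across 16 trackers.

```python
import numpy as np

def color_feature(patch, model="rgb", bins=8):
    """Joint color histogram of an RGB patch (H x W x 3, floats in [0, 1]).

    Two illustrative chromatic models; the paper studies 10. The histogram
    stands in for the grayscale feature a tracker would otherwise use.
    """
    p = patch.reshape(-1, 3)
    if model == "rg":  # normalized rg chromaticity: illumination-robust
        p = p / (p.sum(axis=1, keepdims=True) + 1e-9)
        p = p[:, :2]
    hist, _ = np.histogramdd(p, bins=bins, range=[(0.0, 1.0)] * p.shape[1])
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

# Example: compare target and candidate patches via histogram intersection.
rng = np.random.default_rng(1)
target, candidate = rng.random((2, 32, 32, 3))
score = np.minimum(color_feature(target, "rg"),
                   color_feature(candidate, "rg")).sum()
print(score)
```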


International Conference on Information Fusion | 2010

Measures of effectiveness for high-level fusion

Erik Blasch; Pierre Valin; Eloi Bosse

Current advances in technology, sensor collection, data storage, and data distribution have afforded more complex, distributed, and operational information fusion systems (IFSs). IFSs notionally consist of low-level (data collection, registration, and association in time and space) and high-level fusion (user coordination, situational awareness, and mission control). Low-level IFSs typically rely on standard metrics for evaluation, such as timeliness, accuracy, and confidence. Given the broader use of IFSs, it is also important to look at high-level fusion processes and determine a set of metrics to test IFSs, such as workload, throughput, and cost. Three types of measures (measures of performance, MOPs; measures of effectiveness, MOEs; and measures of merit, MOMs) are summarized. In this paper, we seek to describe MOEs for High-Level Fusion (HLF) based on developments in Quality of Service (QOS) and Quality of Information (QOI) that support the user and the machine, respectively. We define an HLF MOE based on (1) information quality, (2) robustness, and (3) information gain. We demonstrate the HLF MOE on a maritime domain situation awareness example.
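
Under the paper's definition, the HLF MOE combines information quality, robustness, and information gain. A minimal numeric sketch, assuming each component is normalized to [0, 1] and combined as a weighted sum (the paper does not commit to this exact functional form):

```python
def hlf_moe(info_quality, robustness, info_gain, weights=(1/3, 1/3, 1/3)):
    """Scalar HLF MOE as a convex combination of the three components
    named in the paper. The weights and the linear form are assumptions
    for illustration, not the paper's prescription."""
    w_q, w_r, w_g = weights
    return w_q * info_quality + w_r * robustness + w_g * info_gain

# E.g., a maritime SA system with high quality but modest information gain:
print(hlf_moe(info_quality=0.9, robustness=0.7, info_gain=0.5))
```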


International Conference on Information Fusion | 2006

Level 5 (User Refinement) issues supporting Information Fusion Management

Erik Blasch

Subsequent revisions to the Joint Directors of Laboratories (JDL) model emphasize the differentiation between fusion (estimation) and sensor management (control). Two diverging groups include one pressing for fusion automation (JDL revisions) and one advocating the role of the user (user-fusion model). The center of debate is real-world delivery of fusion systems, which requires presenting fusion results for knowledge representation (fusion estimation) and knowledge reasoning (control management). The purpose of the paper is to highlight the need for users, with individual differences, facilitated by knowledge representations to reason about user situational awareness (SA). This paper includes: (1) addressing the user in system management/control, (2) assessing information quality (metrics) to support SA, (3) evaluating fusion systems to deliver user information needs, (4) planning knowledge delivery for dynamic updating, and (5) designing SA interfaces to support user reasoning.


IEEE Aerospace and Electronic Systems Magazine | 2012

High Level Information Fusion (HLIF): Survey of models, issues, and grand challenges

Erik Blasch; Dale A. Lambert; Pierre Valin; Mieczyslaw M. Kokar; James Llinas; Subrata Das; Chee Chong; Elisa Shahbazian

High-level information fusion (situation and threat assessment, process and user refinement) requires novel solutions for the operational transition of information fusion designs. Low-level information fusion (signal processing, object state estimation and characterization) is well vetted in the community compared with high-level information fusion (control and relationships to the environment). Specific areas of interest include modeling (situations, environments), representations (semantic, knowledge, and complex), information management (ontologies, protocols), systems design (scenario-based, user-based, distributed-agent), and evaluation (measures of performance/effectiveness and empirical case studies).


International Conference on Information Fusion | 2005

DFIG Level 5 (User Refinement) issues supporting Situational Assessment Reasoning

Erik Blasch; Susan Plano

Subsequent revisions to the JDL model modified definitions for model usefulness, stressing the differentiation between fusion (estimation) and sensor management (control). Two diverging groups include one pressing for fusion automation (JDL revisions) and one advocating the role of the user (user-fusion model). The center of debate is real-world delivery of fusion systems, which requires presenting information fusion results for knowledge representation (fusion estimation) and knowledge reasoning (control management). The purpose of the paper is to highlight the need for users, with individual differences, facilitated by knowledge representations to reason about user situational awareness (SA). This position paper highlights: (1) addressing the user in system management/control, (2) assessing information quality (metrics) to support SA, (3) evaluating fusion systems to deliver user information needs, (4) planning knowledge delivery for dynamic updating, and (5) designing SA interfaces to support user reasoning.


IEEE Aerospace and Electronic Systems Magazine | 2008

Resource management coordination with level 2/3 fusion issues and challenges [Panel Report]

Erik Blasch; J. Salerno; I. Kadar; K. Hintz; J. Biermann; Subrata Das

Information fusion system designs require sensor and resource management (SM) for effective and efficient data collection, processing, and dissemination. Common Level 4 fusion sensor management (or process refinement) interrelations with target tracking and identification (Level 1 fusion) have been detailed in the literature. At the ISIF Fusion Conference, a panel discussion was held to examine the contemporary issues and challenges pertaining to the interaction between SM and situation and threat assessment (Level 2/3 fusion). This report summarizes the key tenets of the invited panel experts. The common themes were: (1) Addressing the user in system control, (2) Determining a standard set of metrics, (3) Evaluating fusion systems to deliver timely information needs, (4) Dynamic updating for planning mission time-horizons, (5) Joint optimization of objective functions at all levels, (6) L2/3 situation entity definitions for knowledge discovery, modeling, and information projection, and (7) Addressing constraints for resource planning and scheduling.


International Conference on Information Fusion | 2005

Nonlinear constrained tracking of targets on roads

Chun Yang; Michael Bakich; Erik Blasch

Ground targets are constrained to move on the Earth's surface and are most likely to travel along a road network. For targets on roads, their interaction with the environment and with each other, particularly at intersections, is more structured and thus useful to tracking algorithms. Indeed, knowledge of terrain databases and road maps can be used as constraints and incorporated into the tracking algorithms. In this paper, we set forth a nonlinear formulation of tracking multiple interacting targets along road networks and apply nonlinear filtering techniques (namely, the extended Kalman filter, the unscented Kalman filter, the particle filter, and a hybrid Kalman particle filter) to the problem for such scenarios as air-to-ground surveillance. Simulation results are presented to illustrate the performance of the nonlinear filters with the nonlinear formulation as compared to conventional ones.
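
One common way to incorporate a road constraint into any of these filters is to project the estimated state back onto the road after each update. The sketch below is a minimal stand-in for that idea; the straight segment endpoints and the constant-velocity state layout (x, y, vx, vy) are assumptions, and the paper embeds the constraint directly in the nonlinear filter formulations rather than as a post-hoc projection.

```python
import numpy as np

def project_onto_road(state, p0, p1):
    """Project a constant-velocity state (x, y, vx, vy) onto the road
    segment p0 -> p1: clamp the position to the segment and keep only
    the along-road velocity component. A post-update projection is one
    simple way to enforce a road constraint in an EKF/UKF/PF loop.
    """
    pos, vel = state[:2], state[2:4]
    seg = p1 - p0
    length = np.linalg.norm(seg)
    d = seg / length                       # unit road direction
    t = np.clip((pos - p0) @ d, 0.0, length)
    pos_c = p0 + t * d                     # nearest point on the segment
    vel_c = (vel @ d) * d                  # drop the off-road velocity
    return np.concatenate([pos_c, vel_c])

# Example: an off-road estimate snapped back onto a straight east-west road.
state = np.array([1.2, 0.8, 5.0, 1.0])
print(project_onto_road(state, p0=np.array([0.0, 0.0]),
                        p1=np.array([10.0, 0.0])))
```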

Collaboration


Dive into Erik Blasch's collaboration.

Top Co-Authors

Khanh Pham, Air Force Research Laboratory
Dan Shen, Ohio State University
Zhonghai Wang, Michigan Technological University
Chun Yang, Air Force Research Laboratory
Xin Tian, University of Connecticut
Soundararajan Ezekiel, Indiana University of Pennsylvania
Yu Chen, Binghamton University
Mark G. Alford, Air Force Research Laboratory