Evan Krieger
University of Dayton
Publications
Featured research published by Evan Krieger.
Signal, Image and Video Processing | 2017
Paheding Sidike; Evan Krieger; M. Zahangir Alom; Vijayan K. Asari; Tarek M. Taha
The goal of super-resolution (SR) is to increase the spatial resolution of a low-resolution (LR) image by a certain factor using either single or multiple LR input images. This paper presents a machine learning-based approach to reconstruct a high-resolution (HR) image from a single LR image. Inspired by the human visual cortex, which is sensitive to high-frequency (HF) components in an image, we aim to model this concept by training a neural network to estimate the missing HF components that contain structural details. In our method, various directional edge responses at each pixel are considered to obtain more complete HF information, and then a regularized extreme learning regression model is trained using a set of LR and HR images. Finally, the trained system is applied to an LR image to generate the HR image. The experimental results confirm the effectiveness and efficiency of the proposed scheme in comparison with state-of-the-art SR methods.
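A regularized extreme learning regression of the kind described above can be sketched as a fixed random hidden layer followed by a ridge-regularized least-squares solve for the output weights. The feature layout, hidden-layer size, and regularization value below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def elm_fit(X, Y, n_hidden=64, reg=1e-2, seed=0):
    """Fit a regularized extreme learning regressor.

    X: (n_samples, n_features) input features (e.g. LR edge responses)
    Y: (n_samples, n_targets) regression targets (e.g. HR HF details)
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    # Ridge-regularized least squares for the output weights only
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Only the output weights `beta` are solved for in closed form, which is what makes training an extreme learning model fast compared to backpropagation.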
national aerospace and electronics conference | 2015
Evan Krieger; Paheding Sidike; Theus H. Aspiras; Vijayan K. Asari
The tracking of vehicles in wide area motion imagery (WAMI) can be challenging due to the full and partial occlusions that occur. The proposed solution for this challenge is to use the Directional Ringlet Intensity Feature Transform (DRIFT) feature extraction method with a Kalman filter. The properties of the DRIFT feature are utilized to handle the partial occlusion challenges, while the Kalman filter is used to estimate the object location during a full occlusion. The proposed solution is tested on several vehicle sequences from the Columbus Large Image Format (CLIF) dataset.
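The occlusion-handling role of the Kalman filter can be sketched with a standard constant-velocity model; the state layout and noise covariances below are generic illustrative choices, not the paper's tuned values:

```python
import numpy as np

dt = 1.0
# State is (x, y, vx, vy); we observe position only.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], float)   # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)    # measurement matrix
Q = 0.01 * np.eye(4)                   # process noise (assumed)
R = 1.0 * np.eye(2)                    # measurement noise (assumed)

def kf_predict(x, P):
    """Propagate the state estimate one frame forward."""
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z):
    """Correct the prediction with a measured object location z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

During a full occlusion no detection is available, so the tracker would run `kf_predict` alone and resume `kf_update` once the object reappears.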
Proceedings of SPIE | 2015
Fatema A. Albalooshi; Evan Krieger; Paheding Sidike; Vijayan K. Asari
Thermal images are exploited in many areas of pattern recognition applications. Infrared thermal image segmentation can be used for object detection by extracting regions of abnormal temperatures. However, the lack of texture and color information, low signal-to-noise ratio, and blurring effect of thermal images make segmenting infrared heat patterns a challenging task. Furthermore, many segmentation methods that are used in visible imagery may not be suitable for segmenting thermal imagery, mainly due to their dissimilar intensity distributions. Thus, a new method is proposed to improve the performance of image segmentation in thermal imagery. The proposed scheme efficiently utilizes a nonlinear intensity enhancement technique and Unsupervised Active Contour Models (UACM). The nonlinear intensity enhancement improves visual quality by combining dynamic range compression and contrast enhancement, while the UACM incorporates an active contour evolution function and neural networks. The algorithm is tested on segmenting different objects in thermal images, and it is observed that the nonlinear enhancement significantly improves the segmentation performance.
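The contrast-enhancement stage of such a pipeline can be illustrated with a simple percentile stretch; this is a generic stand-in, since the abstract does not specify the exact enhancement formula used:

```python
import numpy as np

def stretch_contrast(img, lo_pct=2, hi_pct=98):
    """Percentile contrast stretch: map the [lo_pct, hi_pct] intensity
    range onto [0, 1] and clip the tails. A generic illustration of the
    contrast-enhancement stage, not the paper's specific technique."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    return np.clip((img - lo) / max(hi - lo, 1e-9), 0.0, 1.0)
```
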
Proceedings of SPIE | 2014
Evan Krieger; Vijayan K. Asari; Saibabu Arigela
Security and surveillance videos, due to usage in open environments, are likely subjected to low resolution, underexposed, and overexposed conditions that reduce the amount of useful details available in the collected images. We propose an approach to improve the image quality of low resolution images captured in extreme lighting conditions to obtain useful details for various security applications. This technique is composed of a combination of a nonlinear intensity enhancement process and a single image super resolution process that will provide higher resolution and better visibility. The nonlinear intensity enhancement process consists of dynamic range compression, contrast enhancement, and color restoration processes. The dynamic range compression is performed by a locally tuned inverse sine nonlinear function to provide various nonlinear curves based on neighborhood information. A contrast enhancement technique is used to obtain sufficient contrast and a nonlinear color restoration process is used to restore color from the enhanced intensity image. The single image super resolution process is performed in the phase space, and consists of defining neighborhood characteristics of each pixel to estimate the interpolated pixels in the high resolution image. The combination of these approaches shows promising experimental results that indicate an improvement in visibility and an increase in usable details. In addition, the process is demonstrated to improve tracking applications. A quantitative evaluation is performed to show an increase in image features from Harris corner detection and improved statistics of visual representation. A quantitative evaluation is also performed on Kalman tracking results.
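The inverse-sine dynamic range compression described above can be sketched in its simplest form; the paper tunes the curve locally from neighborhood information, whereas the version below uses a single global exponent `q` as an illustrative simplification:

```python
import numpy as np

def inverse_sine_enhance(img, q=0.5):
    """Inverse-sine dynamic range compression on an image in [0, 1].

    Raising the input to q < 1 lifts dark regions, and arcsin maps
    [0, 1] onto [0, pi/2]; dividing by pi/2 keeps the output in [0, 1].
    In the locally tuned variant, q would vary per pixel with
    neighborhood statistics.
    """
    img = np.clip(img, 0.0, 1.0)
    return np.arcsin(img ** q) / (np.pi / 2)
```

The curve fixes the endpoints (0 stays 0, 1 stays 1) while boosting dark and mid tones, which is the behavior needed for underexposed surveillance frames.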
Pattern Recognition and Tracking XXIX | 2018
Evan Krieger; Theus H. Aspiras; Vijayan K. Asari; Kevin Krucki; Bryce Wauligman; Yakov Diskin; Karl Salva
Object trackers for full-motion video (FMV) need to handle object occlusions (partial and short-term full), rotation, scaling, illumination changes, complex background variations, and perspective variations. Unlike traditional deep learning trackers that require extensive training time, the proposed Progressively Expanded Neural Network (PENNet) tracker methodology utilizes a modified variant of the extreme learning machine, which encompasses polynomial expansion and state-preserving methodologies. This reduces the training time significantly for online training of the object. The proposed algorithm is evaluated on the DARPA Video Verification of Identity (VIVID) dataset, wherein the selected high-value targets (HVTs) are vehicles.
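The polynomial-expansion idea can be sketched as appending element-wise powers of the input features before the learning stage; this is only the expansion step, since PENNet's progressive and state-preserving mechanics are not detailed in the abstract:

```python
import numpy as np

def polynomial_expand(X, degree=2):
    """Append element-wise polynomial terms to the feature matrix X,
    giving a richer feature space for the downstream extreme-learning
    stage. (Illustrative sketch of the expansion idea only.)"""
    return np.hstack([X ** d for d in range(1, degree + 1)])
```
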
Infrared Remote Sensing and Instrumentation XXV | 2017
Vijayan K. Asari; Theus H. Aspiras; Evan Krieger
Current object tracking implementations utilize different feature extraction techniques to obtain salient features to track objects of interest, which change with different types of imaging modalities and environmental conditions. Challenges in infrared imagery for object tracking include object deformation, occlusion, background variations, and smearing, which demand high-performance algorithms. We propose the directional ringlet intensity feature transform to encompass significant levels of detail while being able to track low-resolution targets. The algorithm utilizes a weighted circularly partitioned histogram distribution method which outperforms regular histogram distribution matching by localizing information and utilizing the rotation invariance of the circular rings. The method also utilizes directional edge information created by a Frei-Chen edge detector to improve the ability of the algorithm in different lighting conditions. We find the matching features using a weighted Earth Mover's Distance (EMD), which results in the specific location of the target object. The algorithm is fused with image registration, motion detection from background subtraction, and motion estimation from Kalman filtering to create robustness against camera jitter and occlusions. It is found that the DRIFT algorithm performs very well under different operating conditions in IR imagery and yields better results compared to other state-of-the-art feature-based object trackers. The testing is done on two IR databases: a collected database of vehicle and pedestrian sequences and the Visual Object Tracking (VOT) IR database.
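The circularly partitioned histogram idea, and why it is rotation invariant, can be sketched with plain concentric rings; DRIFT additionally applies Gaussian ring weighting and Frei-Chen edge responses, which are omitted here:

```python
import numpy as np

def ring_masks(size, n_rings):
    """Partition a square patch into concentric circular rings by
    distance from the patch center (a sketch of the ringlet layout)."""
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    r = np.hypot(yy - c, xx - c)
    edges = np.linspace(0, r.max() + 1e-9, n_rings + 1)
    return [(r >= lo) & (r < hi) for lo, hi in zip(edges[:-1], edges[1:])]

def ringlet_histograms(patch, masks, bins=8):
    """One intensity histogram per ring, concatenated. Rotating the
    patch about its center only permutes pixels within each ring, so
    the concatenated feature is unchanged."""
    feats = []
    for m in masks:
        h, _ = np.histogram(patch[m], bins=bins, range=(0, 256))
        feats.append(h / max(h.sum(), 1))  # normalize per ring
    return np.concatenate(feats)
```
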
applied imagery pattern recognition workshop | 2016
Evan Krieger; Almabrok Essa; Sidike Paheding; Theus H. Aspiras; Vijayan K. Asari
Accurate and efficient object tracking is an important aspect of various security and surveillance applications. In object tracking solutions that utilize intensity-based histogram feature methods on wide area motion imagery (WAMI), there currently exist tracking challenges due to object structural information distortions and pavement/background variations. The inclusion of structural target information, including edge features, in addition to the intensity features allows for more robust object tracking. To achieve this, we propose a feature extraction method that utilizes the Frei-Chen edge detector and Gaussian ringlet feature mapping. The Frei-Chen edge detector extracts edge, line, and mean features that can be used to represent the structural features of the target. Gaussian ringlet feature mapping is used to obtain rotation-invariant features that are robust to target and viewpoint rotation. These aspects are combined to create an efficient and robust tracking scheme. The proposed scheme is evaluated against state-of-the-art feature tracking methods using both temporal and spatial robustness metrics. The evaluations yield more accurate results for the proposed method on challenging WAMI sequences.
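The Frei-Chen decomposition into edge, line, and mean components can be sketched directly from its nine orthonormal 3x3 basis masks; the edge response below is the standard fraction of patch energy in the edge subspace (masks g1-g4):

```python
import numpy as np

s = np.sqrt(2.0)
# The nine orthonormal Frei-Chen masks: g1-g4 span the edge subspace,
# g5-g8 the line subspace, g9 the average.
G = [np.array(m, float) * w for m, w in [
    ([[ 1,  s,  1], [ 0, 0,  0], [-1, -s, -1]], 1/(2*s)),
    ([[ 1,  0, -1], [ s, 0, -s], [ 1,  0, -1]], 1/(2*s)),
    ([[ 0, -1,  s], [ 1, 0, -1], [-s,  1,  0]], 1/(2*s)),
    ([[ s, -1,  0], [-1, 0,  1], [ 0,  1, -s]], 1/(2*s)),
    ([[ 0,  1,  0], [-1, 0, -1], [ 0,  1,  0]], 0.5),
    ([[-1,  0,  1], [ 0, 0,  0], [ 1,  0, -1]], 0.5),
    ([[ 1, -2,  1], [-2, 4, -2], [ 1, -2,  1]], 1/6),
    ([[-2,  1, -2], [ 1, 4,  1], [-2,  1, -2]], 1/6),
    ([[ 1,  1,  1], [ 1, 1,  1], [ 1,  1,  1]], 1/3),
]]

def frei_chen_edge(patch):
    """Fraction of a 3x3 patch's energy lying in the edge subspace."""
    proj = np.array([np.sum(g * patch) for g in G])  # projections onto each mask
    total = np.sum(proj ** 2)
    return 0.0 if total == 0 else np.sqrt(np.sum(proj[:4] ** 2) / total)
```

A flat patch projects only onto the mean mask and scores 0, while a patch aligned with an edge mask scores close to 1.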
Optical Pattern Recognition XXVII | 2016
Sidike Paheding; Almabrok Essa; Evan Krieger; Vijayan K. Asari
Challenges in object tracking such as object deformation, occlusion, and background variations require a robust tracker to ensure accurate object location estimation. To address these issues, we present a Pyramidal Rotation Invariant Features (PRIF) method that integrates Gaussian Ringlet Intensity Distribution (GRID) and Fourier Magnitude of Histogram of Oriented Gradients (FMHOG) methods for tracking objects from videos in challenging environments. In this model, we initially partition a reference object region into increasingly fine rectangular grid regions to construct a pyramid. Histograms of local features are then extracted for each level of the pyramid. This allows the appearance of a local patch to be captured at multiple levels of detail to make the algorithm insensitive to partial occlusion. Then GRID and the magnitude of the discrete Fourier transform of the oriented gradient are utilized to achieve a robust rotation invariant feature. The GRID feature creates a weighting scheme to emphasize the object center. In the tracking stage, a Kalman filter is employed to estimate the center of the object search regions in successive frames. Within the search regions, we use a sliding window technique to extract the PRIF of candidate objects, and then Earth Mover's Distance (EMD) is used to classify the best matched candidate features with respect to the reference. Our PRIF object tracking algorithm is tested on two challenging Wide Area Motion Imagery (WAMI) datasets, namely Columbus Large Image Format (CLIF) and Large Area Image Recorder (LAIR), to evaluate its robustness. Experimental results show that the proposed PRIF approach yields superior results compared to state-of-the-art feature based object trackers.
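The EMD matching step between candidate and reference histograms is simple in the one-dimensional case, where it reduces to the L1 distance between cumulative distributions; this is a minimal sketch of the matching metric, without the weighting the papers apply:

```python
import numpy as np

def emd_1d(h1, h2):
    """Earth Mover's Distance between two normalized 1-D histograms
    over the same bins; in 1-D it equals the L1 distance of the CDFs."""
    c1, c2 = np.cumsum(h1), np.cumsum(h2)
    return np.sum(np.abs(c1 - c2))
```

Unlike a bin-wise comparison, EMD penalizes mass by how far it must move, so a histogram shifted by one bin scores closer than one shifted across the whole range.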
Proceedings of SPIE | 2015
Evan Krieger; Vijayan K. Asari; Saibabu Arigela; Theus H. Aspiras
Object tracking in wide area motion imagery is a complex problem that consists of object detection and target tracking over time. This challenge can be solved by human analysts, who naturally have the ability to keep track of an object in a scene. A computer vision solution for object tracking has the potential to be a much faster and more efficient solution. However, a computer vision solution faces certain challenges that do not affect a human analyst. To overcome these challenges, a tracking process is proposed that is inspired by the known advantages of a human analyst. First, the focus of a human analyst is emulated by processing only the local object search area. Second, it is proposed that an intensity enhancement process be applied to the local area to allow features to be detected in poor lighting conditions. This simulates the ability of the human eye to discern objects in complex lighting conditions. Third, it is proposed that the spatial resolution of the local search area be increased to extract better features and provide more accurate feature matching. A quantitative evaluation is performed to show tracking improvement using the proposed method. The three databases used for these evaluations, each consisting of grayscale sequences obtained from aircraft, are the Columbus Large Image Format database, the Large Area Image Recorder database, and the Sussex database.
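The first step above, restricting processing to a local search area, can be sketched as a clamped crop around the last known object center; the function name and radius parameter are illustrative, and the enhancement and super-resolution stages would then run on this crop:

```python
import numpy as np

def local_search_window(frame, center, radius):
    """Crop a (2*radius+1)-square search area around the last known
    object center, clamped to the frame bounds so corner and border
    objects still yield a valid (smaller) window."""
    r, c = center
    h, w = frame.shape[:2]
    r0, r1 = max(0, r - radius), min(h, r + radius + 1)
    c0, c1 = max(0, c - radius), min(w, c + radius + 1)
    return frame[r0:r1, c0:c1]
```
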
Optics and Laser Technology | 2017
Evan Krieger; Paheding Sidike; Theus H. Aspiras; Vijayan K. Asari