Publication


Featured research published by Alex Aved.


IEEE Aerospace and Electronic Systems Magazine | 2014

Information fusion in a cloud computing era: A systems-level perspective

Bingwei Liu; Yu Chen; Ari Hadiks; Erik Blasch; Alex Aved; Dan Shen; Genshe Chen

Information fusion utilizes a collection of data sources for uncertainty reduction, coverage extension, and situation awareness. Future information fusion solutions require systems design [1], coordination with users [2], metrics of performance [3], and methods of multilevel security [4]. A current trend that can enable all of these services is cloud computing. NIST defines cloud computing as "a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" [5]. Cloud computing provides five essential capabilities (on-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service) over different types of clouds (private, community, public, and hybrid).
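As a small aide-memoire for the NIST model quoted above, the sketch below records the five essential characteristics and four deployment models as a Python structure; the dictionary keys and helper function are our own, purely illustrative.

```python
# Minimal sketch: the NIST SP 800-145 cloud taxonomy referenced in the abstract.
# Key names and the helper below are illustrative, not from the paper.
NIST_CLOUD_MODEL = {
    "essential_characteristics": [
        "on-demand self-service", "broad network access", "resource pooling",
        "rapid elasticity", "measured service",
    ],
    "deployment_models": ["private", "community", "public", "hybrid"],
}

def is_valid_deployment(model: str) -> bool:
    """Check a deployment label against the NIST deployment models."""
    return model.lower() in NIST_CLOUD_MODEL["deployment_models"]

print(is_valid_deployment("hybrid"))  # True
```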


Applied Imagery Pattern Recognition Workshop | 2013

Video-based activity analysis using the L1 tracker on VIRAT data

Erik Blasch; Zhonghai Wang; Haibin Ling; Kannappan Palaniappan; Genshe Chen; Dan Shen; Alex Aved

Developments in video tracking have addressed various aspects such as target detection, tracking accuracy, algorithm comparison, and implementation methods, which are briefly reviewed. However, other attributes of full motion video (FMV) tracking require further investigation for situation awareness of event and activity analysis. Key aspects of activity and behavior analysis include interactions between individuals, groups, and crowds, as well as with objects in the environment such as vehicles and buildings, over a specified time duration, since it is typically assumed that the activities of interest involve people. In this paper, we explore activity analysis using the L1 tracker over various scenarios in the VIRAT data. Activity analysis extends event detection from tracking accuracy to characterizing the number, types, and relationships of actors when analyzing human activities of interest. Relationships include correlation in space and time of actors with other people, objects, vehicles, and facilities (POVF). Event detection is more mature (e.g., based on image exploitation and tracking techniques), while activity analysis, as a higher-level fusion function, requires innovative techniques for relationship understanding.
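For readers unfamiliar with the L1 tracker used here, the sketch below illustrates its core idea under simplifying assumptions of ours: each candidate image patch is coded as a sparse combination of target templates, and the candidate with the lowest reconstruction error is selected. The use of scikit-learn's Lasso and the toy data are our choices, not the paper's implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_tracker_step(templates: np.ndarray, candidates: np.ndarray, alpha: float = 0.01) -> int:
    """Pick the candidate patch best explained by a sparse combination of templates.

    templates:  (d, n_templates) matrix, each column a vectorized target template.
    candidates: (d, n_candidates) matrix, each column a vectorized candidate patch.
    Returns the index of the candidate with the smallest L2 reconstruction error.
    """
    errors = []
    for j in range(candidates.shape[1]):
        y = candidates[:, j]
        # Sparse coding: min ||y - T c||^2 + alpha * ||c||_1  (non-negative coefficients)
        coder = Lasso(alpha=alpha, positive=True, max_iter=5000)
        coder.fit(templates, y)
        reconstruction = templates @ coder.coef_
        errors.append(np.linalg.norm(y - reconstruction))
    return int(np.argmin(errors))

# Toy usage with random arrays standing in for real image patches.
rng = np.random.default_rng(0)
T = rng.random((64, 10))   # 10 templates of an 8x8 patch
C = rng.random((64, 5))    # 5 candidate patches from the new frame
print("best candidate:", l1_tracker_step(T, C))
```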


International Conference on Conceptual Structures | 2015

Dynamic Data-driven Application System (DDDAS) for Video Surveillance User Support

Erik Blasch; Alex Aved

Mixed-initiative human-machine interaction requires pragmatic coordination between different systems. Context understanding is established from the content, analysis, and guidance of query-based coordination between users and machines. Inspired by Level 5 Information Fusion (user refinement), a live-video computing (LVC) structure is presented for user-based query access to a database management system for information. Information access includes multimedia fusion of query-based text, images, and exploited tracks, which can be utilized for context assessment, content-based information retrieval (CBIR), and situation awareness. In this paper, we explore new developments in dynamic data-driven application systems (DDDAS) for context analysis in user support. Using a common image processing data set, a system-level time savings is demonstrated using a query-based approach in a context-, control-, and semantic-aware information fusion design.
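To make the CBIR component concrete, here is a minimal sketch of content-based retrieval under our own simplifying assumptions (color-histogram features and Euclidean distance); the paper's LVC structure is not limited to, and may not use, this particular feature.

```python
import numpy as np

def color_histogram(frame: np.ndarray, bins: int = 8) -> np.ndarray:
    """Per-channel intensity histogram, normalized, for an HxWx3 uint8 frame."""
    hists = [np.histogram(frame[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def retrieve(query: np.ndarray, archive: list, k: int = 3) -> list:
    """Return indices of the k archived frames closest to the query in feature space."""
    qf = color_histogram(query)
    dists = [np.linalg.norm(qf - color_histogram(f)) for f in archive]
    return list(np.argsort(dists)[:k])

# Toy usage with random frames standing in for video imagery.
rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, size=(48, 64, 3), dtype=np.uint8) for _ in range(10)]
print(retrieve(frames[0], frames, k=3))  # frame 0 should rank first
```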


International Conference on Conceptual Structures | 2015

Multi-INT Query Language for DDDAS Designs

Alex Aved; Erik Blasch

Context understanding is established from coordination of content, analysis, and interaction between users and machines. In this manuscript, a live-video computing (LVC) approach is presented for access to data, comprehension of context, and analysis for context assessment. Context assessment includes multimedia fusion of query-based text, images, and exploited tracks, which are utilized for information retrieval. We explore developments in database systems that enable context to be extracted by user-based queries. Using a common image processing data set, we demonstrate activity analysis with context, privacy, and semantic awareness in a Dynamic Data-Driven Applications System (DDDAS).
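The abstract does not spell out the query syntax itself, so the snippet below is only a hypothetical illustration of a multi-INT style query: associating video tracks with text-report entities by type, region, and time. The schema and the 10-tick association window are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE video_track (track_id INTEGER, t INTEGER, label TEXT, region TEXT);
    CREATE TABLE text_report (report_id INTEGER, t INTEGER, entity TEXT, region TEXT);
    INSERT INTO video_track VALUES (1, 100, 'vehicle', 'gate_A'), (2, 105, 'person', 'lot_B');
    INSERT INTO text_report VALUES (7, 102, 'vehicle', 'gate_A'), (8, 300, 'person', 'lot_B');
""")

# Hypothetical multi-INT association: video tracks and text reports that agree
# on entity type and region within a 10-tick time window.
rows = conn.execute("""
    SELECT v.track_id, r.report_id
    FROM video_track v JOIN text_report r
      ON v.label = r.entity AND v.region = r.region AND ABS(v.t - r.t) <= 10
""").fetchall()
print(rows)  # [(1, 7)]
```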


Proceedings of SPIE | 2014

Visualization of graphical information fusion results

Erik Blasch; Georgiy Levchuk; Gennady Staskevich; Dustin Burke; Alex Aved

Graphical fusion methods are popular for describing distributed sensor applications such as target tracking and pattern recognition. Additional graphical methods include network analysis for social, communications, and sensor management applications. With the growing availability of various data modalities, graphical fusion methods are widely used to combine data from multiple sensors and modalities. To better understand the usefulness of graph fusion approaches, we address visualization to increase user comprehension of multi-modal data. The paper demonstrates a use case that combines graphs from text reports and target tracks to associate events and activities of interest, with visualization for testing Measures of Performance (MOP) and Measures of Effectiveness (MOE). The analysis includes presentation of the separate graphs and then a graph-fusion visualization that links network graphs for tracking and classification.
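As a minimal sketch of the graph-fusion step (our own toy code, not the authors' tooling), the snippet below builds one graph from target tracks and one from text reports, then composes them on shared entity nodes, which is one simple way to link the two sources for visualization.

```python
import networkx as nx

# Graph from target tracks: nodes are track/entity identifiers, edges are associations.
track_graph = nx.Graph()
track_graph.add_edge("track_12", "vehicle_A", relation="classified_as")
track_graph.add_edge("track_12", "gate_A", relation="observed_at")

# Graph from text reports: shared entity names make the two graphs overlap.
report_graph = nx.Graph()
report_graph.add_edge("report_7", "vehicle_A", relation="mentions")
report_graph.add_edge("report_7", "gate_A", relation="mentions")

# Fuse by composing on shared nodes; shared entities link the two sources.
fused = nx.compose(track_graph, report_graph)
shared = set(track_graph.nodes) & set(report_graph.nodes)
print(sorted(shared))           # ['gate_A', 'vehicle_A']
print(fused.number_of_edges())  # 4

# For a quick visual check, one could call nx.draw(fused, with_labels=True)
# and then matplotlib's plt.show().
```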


IEEE Transactions on Neural Networks | 2018

Multiview Boosting With Information Propagation for Classification

Jing Peng; Alex Aved; Kannappan Palaniappan

Multiview learning has shown promising potential in many applications. However, most techniques focus on either view consistency or view diversity. In this paper, we introduce a novel multiview boosting algorithm, called Boost.SH, that computes weak classifiers independently for each view but uses a shared weight distribution to propagate information among the multiple views and ensure consistency. To encourage diversity, we introduce randomized Boost.SH and show its convergence to the greedy Boost.SH solution, in the sense of minimizing regret, using the framework of adversarial multiarmed bandits. We also introduce a variant of Boost.SH that combines decisions from multiple experts for recommending views for classification. We propose an expert strategy for multiview learning based on inverse variance, which explores both consistency and diversity. Experiments on biometric recognition, document categorization, multilingual text, and yeast genomic multiview data sets demonstrate the advantage of Boost.SH (85%) compared with other boosting algorithms such as AdaBoost (82%) using concatenated views, and substantially better performance than a multiview kernel learning algorithm (74%).
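Since the abstract only summarizes Boost.SH, the code below is a simplified sketch of its central idea in our own terms: weak learners are fit per view, but a single shared sample-weight distribution is updated AdaBoost-style so that information propagates across views. The decision-stump weak learner and the pick-the-best-view rule are our simplifications, not necessarily the paper's exact procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def boost_shared_weights(views, y, rounds=10):
    """Toy multiview boosting with a shared sample-weight distribution.

    views: list of (n_samples, d_v) feature matrices, one per view.
    y:     labels in {-1, +1}.
    Each round, a depth-1 tree is fit on every view with the SAME weights,
    the best view's stump is kept, and the shared weights are updated AdaBoost-style.
    """
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []                                   # (alpha, view_index, stump)
    for _ in range(rounds):
        best = None
        for v, X in enumerate(views):
            stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
            pred = stump.predict(X)
            err = np.sum(w * (pred != y)) / np.sum(w)
            if best is None or err < best[0]:
                best = (err, v, stump, pred)
        err, v, stump, pred = best
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)              # shared update propagates info across views
        w /= w.sum()
        ensemble.append((alpha, v, stump))
    return ensemble

def predict(ensemble, views):
    score = sum(a * stump.predict(views[v]) for a, v, stump in ensemble)
    return np.sign(score)

# Toy usage: two random views of the same 40 samples.
rng = np.random.default_rng(0)
y = np.where(rng.random(40) > 0.5, 1, -1)
views = [rng.random((40, 5)) + 0.3 * y[:, None], rng.random((40, 3)) + 0.3 * y[:, None]]
model = boost_shared_weights(views, y)
print("train accuracy:", np.mean(predict(model, views) == y))
```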


International Conference on Conceptual Structures | 2015

Adapting Stream Processing Framework for Video Analysis

Sharma Chakravarthy; Alex Aved; S. Shirvani; Manish Annappa; Erik Blasch

Stream processing (SP) became relevant mainly due to the inexpensive, and hence ubiquitous, deployment of sensors in many domains (e.g., environmental monitoring, battlefield monitoring). Other continuous data generators (surveillance, traffic data) have also prompted processing and analysis of these streams for applications such as traffic congestion/accident monitoring and personalized marketing. Image processing has been researched for several decades. Recently, there has been emphasis on video stream analysis for situation monitoring, owing to the ubiquitous deployment of video cameras and unmanned aerial vehicles for security and other applications. This paper elaborates on the research and development issues that need to be addressed to extend the traditional stream processing framework for video analysis, especially for situation awareness. This entails extensions to the data model, to the operators and language for expressing complex situations, and to the QoS (quality of service) specifications and algorithms needed for their satisfaction. Specifically, this paper demonstrates the inadequacy of current data representations (e.g., relation and arrable) and querying capabilities, in order to infer long-term research and development issues.
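To make concrete what a video-oriented extension of stream processing could look like, here is a toy continuous query over a stream of per-frame detection tuples with a sliding frame window; the tuple schema and window semantics are our own illustration, not the data-model extensions proposed in the paper.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Detection:
    frame: int     # frame number (acts as the stream timestamp)
    label: str     # e.g. 'person', 'vehicle'
    count: int     # number of objects of this label in the frame

def sliding_count(stream, label: str, window: int = 30):
    """Continuous query: total count of `label` over the last `window` frames."""
    buf = deque()
    for det in stream:
        buf.append(det)
        while buf and buf[0].frame <= det.frame - window:
            buf.popleft()  # expire tuples that fell out of the window
        yield det.frame, sum(d.count for d in buf if d.label == label)

# Toy usage: a synthetic detection stream.
stream = [Detection(f, "person", f % 3) for f in range(1, 100)]
for frame, total in sliding_count(stream, "person", window=30):
    if frame % 30 == 0:
        print(frame, total)
```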


National Aerospace and Electronics Conference | 2014

Video observations for cloud activity-based intelligence (VOCABI)

Erik Blasch; Phillip DiBona; Michael Czajkowski; Kevin Barry; Ray Rimey; Jeff Freeman; Kevin Newman; Alex Aved; Mike Hinman

The availability of video imagery through reduction in sensor size, cost, and power has enabled an explosion of collection opportunities. With the increased amount of imagery there is a need to understand the usefulness of video for applications such as Activity-Based Intelligence (ABI), situation understanding, and event-based processing. In this paper, we explore some of the emerging developments in video observations with a focus on cloud technology. Cloud technology supports integration of multiple algorithms, storage of large data sets, indexing over multimedia, and workflow opportunities between humans and machines. We highlight multiple tools such as GeoFlix™, Application Knowledge Interface To Algorithms (AKITA™), and Intelligence Preparation of the Operational Environment (IPOE) using the Ozone Widget Framework (OWF) for permissive surveillance, data to decisions, and information fusion. These tools enable data analytics, algorithm comparison, and user-defined visualizations. An example is presented for target localization and tracking through planned video observations.


IEEE Transactions on Aerospace and Electronic Systems | 2017

Regularized Difference Criterion for Computing Discriminants for Dimensionality Reduction

Alex Aved; Erik Blasch; Jing Peng

Hyperspectral data classification has shown potential in many applications. However, the large number of spectral bands can cause overfitting. Methods for reducing spectral bands, e.g., linear discriminant analysis, require matrix inversion. We propose a semidefinite programming approach based on a regularized difference criterion for linear discriminants (SLRD) that does not require matrix inversion. The paper establishes a classification error bound and provides experimental results with ten methods over six hyperspectral datasets, demonstrating the efficacy of the proposed SLRD technique.
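For context, one common inversion-free alternative to Fisher's criterion is the trace-difference form shown below; this is a plausible reading of the regularized difference idea, not necessarily the exact SLRD objective or its semidefinite relaxation.

```latex
% Fisher's criterion requires inverting the within-class scatter S_w:
J_{\mathrm{LDA}}(W) \;=\; \operatorname{tr}\!\left[(W^{\top} S_w W)^{-1}\, W^{\top} S_b W\right].

% A regularized difference criterion avoids the inversion:
J_{\Delta}(W) \;=\; \operatorname{tr}\!\left[W^{\top} (S_b - \rho\, S_w)\, W\right],
\qquad W^{\top} W = I,\; \rho > 0,

% and is maximized by taking the columns of W to be the leading eigenvectors of S_b - \rho S_w.
```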


Proceedings of SPIE | 2014

Visualization of multi-INT fusion data using Java Viewer (JVIEW)

Erik Blasch; Alex Aved; James Nagy; Stephen Scott

Visualization is important for multi-intelligence fusion, and we demonstrate issues in presenting physics-derived (i.e., hard) and human-derived (i.e., soft) fusion results. Physics-derived solutions (e.g., imagery) typically involve sensor measurements that are objective, while human-derived solutions (e.g., text) typically involve language processing. Both kinds of results can be geographically displayed for user-machine fusion. Attributes of an effective and efficient display are not well understood, so we demonstrate issues and results for filtering, correlation, and association of data for users, be they operators or analysts. Operators require near-real-time solutions, while analysts can use non-real-time solutions for forensic analysis. In a use case, we demonstrate examples using the JVIEW concept, which has been applied to piloting, space situation awareness, and cyber analysis. Using the open-source JVIEW software, we showcase a big data solution for a multi-intelligence fusion application with context-enhanced information fusion.

Collaboration


Dive into Alex Aved's collaborations.

Top Co-Authors

Erik Blasch (Air Force Research Laboratory)
Dan Shen (Ohio State University)
James Nagy (Air Force Research Laboratory)
Jing Peng (Montclair State University)