Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Alexander Filonenko is active.

Publication


Featured research published by Alexander Filonenko.


IEEE Transactions on Industrial Informatics | 2016

Unattended Object Identification for Intelligent Surveillance Systems Using Sequence of Dual Background Difference

Wahyono; Alexander Filonenko; Kang-Hyun Jo

Image-based surveillance systems are widely employed for safety and security applications in many fields. Cameras connected over an IP network for monitoring public areas can produce large quantities of video footage, and it is tedious for humans to simultaneously observe every type of event on several cameras. Thus, it is necessary to build a user-friendly intelligent system that enables the analysis of video to detect suspicious events. One of the most important tasks of such a system is to identify unattended objects in order to prevent unexpected incidents such as the bombing of a public space. This paper presents a novel technique for this task. The method is based on a sequence of dual background differences, obtained by computing the intensity difference between the current and reference background models within a time period. A clustering step and an object detector are then integrated to identify the unattended objects. The effectiveness of the method was verified using public databases and our own database. The results confirm that the method detects unattended objects efficiently and is suitable for implementation in video surveillance systems.
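The dual-background idea can be illustrated with a short sketch. The snippet below is not the authors' implementation: it approximates the current and reference background models with two OpenCV MOG2 subtractors running at different learning rates and flags blobs that appear in the slowly adapting model but not in the quickly adapting one, i.e., objects that have become stationary. The file name, learning rates, and area threshold are placeholder assumptions.

```python
import cv2
import numpy as np

# Minimal sketch of a dual-background static-object detector (illustrative only).
# Two MOG2 models with different learning rates stand in for the "current" and
# "reference" background models described in the abstract.
short_term = cv2.createBackgroundSubtractorMOG2(history=100, detectShadows=False)
long_term = cv2.createBackgroundSubtractorMOG2(history=2000, detectShadows=False)

cap = cv2.VideoCapture("surveillance.avi")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_short = short_term.apply(frame, learningRate=0.02)  # adapts quickly
    fg_long = long_term.apply(frame, learningRate=0.001)   # adapts slowly
    # Static candidates: foreground in the slow model, background in the fast one.
    static_mask = cv2.bitwise_and(fg_long, cv2.bitwise_not(fg_short))
    static_mask = cv2.morphologyEx(static_mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(static_mask)
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if area > 500:  # ignore tiny blobs (placeholder threshold)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("unattended candidates", frame)
    if cv2.waitKey(1) == 27:
        break
```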


international conference on human system interactions | 2015

Detecting abandoned objects in crowded scenes of surveillance videos using adaptive dual background model

Wahyono; Alexander Filonenko; Kang-Hyun Jo

Detecting abandoned objects in crowded scenes of surveillance videos is a complex task due to occlusions, lighting changes, and other factors. In this paper, a new framework to detect abandoned objects using dual background model subtraction is presented. In our system, an adaptive background model is generated from statistical information about pixel intensities, which makes it robust against lighting changes. Foreground analysis using geometrical properties is then applied in order to filter out false regions. Human and vehicle detection are then integrated to verify whether a region is a static object, a human, or a vehicle. The robustness and efficiency of the proposed method are tested on several public databases, such as the i-LIDS and PETS2006 datasets, as well as on our own ISLab dataset. The test and evaluation results show that our method is efficient and robust in detecting abandoned objects in crowded scenes.
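As an illustration of the geometric foreground analysis step, the hedged sketch below filters foreground blobs by area, aspect ratio, and solidity; the specific properties and thresholds are assumptions made for demonstration, not values from the paper.

```python
import cv2

def filter_foreground_regions(mask, min_area=400, max_aspect=4.0, min_solidity=0.5):
    """Illustrative geometric filtering of foreground blobs (area, aspect ratio,
    solidity); all thresholds are placeholders, not the paper's values."""
    kept = []
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        aspect = max(w, h) / max(1, min(w, h))
        hull_area = cv2.contourArea(cv2.convexHull(c))
        solidity = area / hull_area if hull_area > 0 else 0.0
        if aspect <= max_aspect and solidity >= min_solidity:
            kept.append((x, y, w, h))
    return kept
```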


conference of the industrial electronics society | 2015

Illegally parked vehicle detection using adaptive dual background model

Wahyono; Alexander Filonenko; Kang-Hyun Jo

Detecting an illegally parked vehicle in urban scenes of a traffic monitoring system is a complex task due to occlusions, lighting changes, and other factors. In this paper, a new framework to detect illegally parked vehicles using dual background model subtraction is presented. In our system, an adaptive background model is generated from statistical information about pixel intensities, which makes it robust against lighting changes. Foreground analysis using geometrical properties is then applied in order to filter out false regions. Vehicle detection is then integrated to verify whether a region contains a vehicle or not. The vehicle detection method is based on the Scalable Histogram of Oriented Gradient feature and is trained using a Support Vector Machine. The robustness and efficiency of the proposed method are tested on the i-LIDS datasets as well as on our own ISLab dataset. The test and evaluation results show that our method is efficient and robust in detecting illegally parked vehicles in traffic scenes, making it very useful for traffic monitoring systems.
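The verification stage can be sketched roughly as follows. This is not the paper's Scalable HOG implementation; it uses OpenCV's standard HOGDescriptor and scikit-learn's LinearSVC as stand-ins, with placeholder window and training parameters.

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

# Illustrative HOG + SVM vehicle verifier (standard HOG, not the scalable variant).
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def hog_features(patch):
    patch = cv2.resize(patch, (64, 64))  # fixed window size (assumption)
    return hog.compute(patch).ravel()

def train_vehicle_classifier(vehicle_patches, background_patches):
    X = [hog_features(p) for p in vehicle_patches + background_patches]
    y = [1] * len(vehicle_patches) + [0] * len(background_patches)
    clf = LinearSVC(C=1.0)
    clf.fit(np.array(X), np.array(y))
    return clf

def is_vehicle(clf, candidate_patch):
    return clf.predict([hog_features(candidate_patch)])[0] == 1
```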


conference of the industrial electronics society | 2014

Smoke detection on roads for autonomous vehicles

Alexander Filonenko; Van-Dung Hoang; Kang-Hyun Jo

This paper describes a smoke detection algorithm for autonomous vehicles equipped with a camera and a lidar. Its main feature is the ability to detect smoke despite the ego-motion of the camera. Color characteristics of smoke are used to detect regions of interest by the similarity of pixels between the current frame and the training data. The following metrics are used: the red, green, blue, cyan, and saturation channels, and spatial entropy. Each region of interest is then refined by removing small objects and filling holes. The sky region is removed by checking the edge density of the region, and other rigid objects are rejected using a boundary roughness feature. Since smoke tends to change its shape over a sequence of frames, an angle-radius shape descriptor is introduced; cross-correlating this descriptor between regions in consecutive frames reveals objects whose behavior is not consistent with smoke. Data from the camera and the lidar are fused to make the final decision.
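One possible reading of the angle-radius shape descriptor and its cross-correlation check is sketched below; the binning scheme and normalization are assumptions made for illustration rather than the exact formulation used in the paper.

```python
import numpy as np

def angle_radius_descriptor(contour, n_bins=36):
    """Illustrative angle-radius shape descriptor: mean distance from the blob
    centroid to its contour, binned by polar angle. Bin count is a placeholder."""
    pts = contour.reshape(-1, 2).astype(float)
    centroid = pts.mean(axis=0)
    d = pts - centroid
    angles = np.arctan2(d[:, 1], d[:, 0])            # range [-pi, pi]
    radii = np.hypot(d[:, 0], d[:, 1])
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    desc = np.zeros(n_bins)
    for b in range(n_bins):
        sel = radii[bins == b]
        desc[b] = sel.mean() if sel.size else 0.0
    return desc / (desc.max() + 1e-9)                # rough scale normalization

def shape_similarity(desc_prev, desc_curr):
    """Normalized cross-correlation between descriptors of the same region in
    consecutive frames; smoke regions tend to score low (shape keeps changing)."""
    a = desc_prev - desc_prev.mean()
    b = desc_curr - desc_curr.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
```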


Sensors | 2016

Real-Time Lane Region Detection Using a Combination of Geometrical and Image Features

Danilo Cáceres Hernández; Laksono Kurnianggoro; Alexander Filonenko; Kang-Hyun Jo

Over the past few decades, pavement markings have played a key role in intelligent vehicle applications such as guidance, navigation, and control. However, lane marking detection still faces serious issues, for example excessive processing time and false detections due to similarities in color and edges between lane markings and other traffic signs (channeling lines, stop lines, crosswalks, arrows, etc.). This paper proposes a strategy to extract lane marking information taking into consideration features such as color, edge, and width, as well as the vehicle speed. Firstly, defining the region of interest is a critical task for achieving real-time performance; in this sense, the region of interest depends on the vehicle speed. Secondly, the lane markings are detected using a hybrid color-edge feature method along with a probabilistic method based on distance-color dependence and a hierarchical fitting model. Thirdly, the following lane marking information is extracted: the number of lane markings to both sides of the vehicle, the respective fitting models, and the centroid information of the lane. Using these parameters, the lane region is computed with a road geometric model. To evaluate the proposed method, a set of consecutive frames was used to validate its performance.
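The speed-dependent region of interest and the hybrid color-edge candidate mask could look roughly like the sketch below; the speed-to-ROI mapping, HSV thresholds, and Canny parameters are placeholder assumptions, not the values used in the paper.

```python
import cv2
import numpy as np

def speed_dependent_roi(frame, speed_kmh, min_frac=0.35, max_frac=0.6):
    """Illustrative speed-dependent ROI: keep a taller bottom strip (look further
    ahead) as speed grows. The fractions and 120 km/h cap are placeholders."""
    h, w = frame.shape[:2]
    frac = np.clip(min_frac + (max_frac - min_frac) * speed_kmh / 120.0,
                   min_frac, max_frac)
    top = int(h * (1.0 - frac))
    return frame[top:, :], top

def lane_marking_candidates(roi_bgr):
    """Hybrid color-edge mask: bright/yellow pixels AND strong gradients."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
    yellow = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))
    color_mask = cv2.bitwise_or(white, yellow)
    gray = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.dilate(cv2.Canny(gray, 50, 150), np.ones((3, 3), np.uint8))
    return cv2.bitwise_and(color_mask, edges)
```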


korea japan joint workshop on frontiers of computer vision | 2015

Smoke detection for static cameras

Alexander Filonenko; Danilo Cáceres Hernández; Kang-Hyun Jo

This paper describes smoke detection for static cameras. Background subtraction is used to determine moving objects, and color characteristics are utilized to distinguish smoke regions from other scene members. Separate pixels are merged into blobs by morphological operations and connected component labeling. The result is then refined using boundary roughness and edge density to decrease the number of false detections. Results for the current frame are compared to those for the previous frame in order to check the behavior of objects in the time domain.
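Two of the refinement cues, boundary roughness and edge density, can be sketched as simple per-region measures; the exact definitions in the paper may differ, so the formulas below are illustrative assumptions.

```python
import cv2

def boundary_roughness(contour):
    """Illustrative boundary roughness: contour perimeter divided by the
    perimeter of its convex hull (close to 1 for smooth, rigid objects)."""
    hull = cv2.convexHull(contour)
    hull_perimeter = cv2.arcLength(hull, True)
    return cv2.arcLength(contour, True) / (hull_perimeter + 1e-9)

def edge_density(gray, mask):
    """Illustrative edge density: fraction of region pixels that are Canny
    edges; smoke regions are typically blurry and score low."""
    edges = cv2.Canny(gray, 50, 150)
    region_edges = cv2.bitwise_and(edges, edges, mask=mask)
    area = cv2.countNonZero(mask)
    return cv2.countNonZero(region_edges) / (area + 1e-9)
```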


international symposium on industrial electronics | 2014

Vision-based heading angle estimation for an autonomous mobile robots navigation

Danilo Cáceres Hernández; Van-Dung Hoang; Alexander Filonenko; Kang-Hyun Jo

Navigation and control systems for autonomous mobile robots remain hugely important in real-time robotic applications, and on the way towards fully autonomous navigation, guidance plays a vital role. In this paper, the authors propose a real-time fuzzy-logic guidance application based on edge and color information surrounding the road surface, obtained using omnidirectional cameras. Autonomous navigation systems must be able to recognize feature descriptors from both edge and color information. Firstly, the longest line segments are extracted from the edge and color information mentioned above. Secondly, the RANdom SAmple Consensus (RANSAC) curve fitting method is applied to find the best curve fit for the set of points of each line segment. Thirdly, the intersection points for each pair of curves are extracted. Fourthly, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method is used to estimate the vanishing point (VP). Finally, to control the mobile robot in an unknown environment, a fuzzy logic controller guided by the VP is implemented. Preliminary results were gathered and tested on a group of consecutive frames captured at the University of Ulsan (UoU) to demonstrate the method's effectiveness.
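The vanishing-point stage, intersecting fitted lines and clustering the intersections with DBSCAN, might be sketched as follows; the segment representation, DBSCAN parameters, and the choice of the largest cluster are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def line_intersection(l1, l2):
    """Intersection of two lines given as endpoint tuples (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = l1
    x3, y3, x4, y4 = l2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(denom) < 1e-9:  # parallel lines
        return None
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / denom
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / denom
    return px, py

def estimate_vanishing_point(segments, eps=20.0, min_samples=4):
    """Illustrative VP estimation: intersect all segment pairs, cluster the
    intersections with DBSCAN, return the centroid of the largest cluster."""
    pts = []
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            p = line_intersection(segments[i], segments[j])
            if p is not None:
                pts.append(p)
    if not pts:
        return None
    pts = np.array(pts)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    valid = labels[labels >= 0]
    if valid.size == 0:
        return None
    best = np.bincount(valid).argmax()
    return pts[labels == best].mean(axis=0)
```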


conference of the industrial electronics society | 2013

Visual surveillance with sensor network for accident detection

Alexander Filonenko; Kang-Hyun Jo

This paper describes an autonomous monitoring system to detect environmental accidents such as fires and gas leaks. The system is designed as a set of sensor nodes mounted on static and dynamic objects and connected via a wireless network. Each sensor node measures important environmental parameters such as temperature, humidity, and poisonous gas concentrations. Video surveillance is used to increase the probability that an accident is detected. A vision-based fire detection technique, based on color information and flame behavior properties, is used. The video stream is distributed via the Internet and can be viewed on a personal computer (PC) or a mobile device. A graphical user interface (GUI) based on the sensor network and vision data helps an operator make correct inferences about the threat level.
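The color-based part of the fire detection could be approximated by a simple rule of the kind sketched below; the channel relations and thresholds are placeholder assumptions, not the rule used in the paper.

```python
import cv2
import numpy as np

def flame_color_mask(frame_bgr, r_thresh=190, saturation_thresh=40):
    """Illustrative flame-color rule (thresholds are placeholders): flame pixels
    tend to have R > G > B, a high red channel, and moderate saturation."""
    b, g, r = cv2.split(frame_bgr)
    s = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)[:, :, 1]
    mask = (r > r_thresh) & (r > g) & (g > b) & (s > saturation_thresh)
    return mask.astype(np.uint8) * 255
```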


international conference on ubiquitous robots and ambient intelligence | 2012

Self-configuration for surveillance sensor network

Alexander Filonenko; Fei Yang; Andrey Vavilin; Kang-Hyun Jo

This paper describes an autonomous monitoring system to supervise environmental parameters such as temperature, humidity, and poisonous gas or smoke concentrations. The system is designed as a set of sensor nodes connected via a wireless network. Its distinguishing feature is the ability to autonomously configure the network structure and synchronize data between nodes, which makes the network fault-tolerant. Each sensor node consists of three layers. The bottom layer includes a 5 to 12 volt battery and a stabilizer. The middle layer consists of an 8-bit microcontroller, an LCD (Liquid Crystal Display), an SD (Secure Digital) card reader, and a set of sensors. The top layer includes an RF (Radio Frequency) communication module and a GPS (Global Positioning System) module. Each sensor node is able to work both as a standalone unit and as part of a sensor network. Experiments were performed to confirm that the system is suitable for real-time use.


international symposium on industrial electronics | 2016

Unified smoke and flame detection for intelligent surveillance system

Alexander Filonenko; Danilo Cáceres Hernández; Ajmal Shahbaz; Kang-Hyun Jo

This paper explains how flame and smoke detection algorithms can be unified by merging their common steps into a single processing flow. The scenario discussed in this manuscript considers fixed surveillance cameras, which allows background subtraction to be used to detect changes in a scene. Due to imperfections of background subtraction, foreground pixels belonging to the same real object are often separated; these pixels are united by morphological operations. All pixels are then labeled by a connected component labeling algorithm, and tiny objects are removed, since only noticeable smoke and flames are to be detected. All the preceding steps are processed only once; separate smoke and flame branches then start from the same input image obtained after removing the tiny objects. Smoke detection includes color probability, boundary roughness, edge density, and area variability filtering; flame detection uses color probability, boundary roughness, and area variability filtering. Preliminary results show that unifying the smoke and flame detection algorithms makes the processing time similar to that of a single smoke detection algorithm when smoke and flame are processed in parallel. If the whole algorithm is run on a single thread, the processing time is still lower compared to running smoke and fire detection without unification. The result of the unified processing part can also be used as input for other tasks of intelligent surveillance systems.
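The overall structure, one shared preprocessing pass feeding parallel smoke and flame branches, can be sketched as follows; the branch bodies are placeholders, and the background subtractor, thread pool, and area threshold are assumptions for illustration.

```python
import cv2
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Illustrative unified pipeline: shared preprocessing runs once per frame, then
# the smoke and flame branches consume the same cleaned foreground mask.
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def shared_preprocessing(frame, min_area=300):
    fg = subtractor.apply(frame)
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)
    cleaned = np.zeros_like(fg)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:  # drop tiny objects
            cleaned[labels == i] = 255
    return cleaned

def smoke_branch(frame, mask):
    # placeholder for color probability, boundary roughness, edge density,
    # and area variability filtering
    return []

def flame_branch(frame, mask):
    # placeholder for color probability, boundary roughness, area variability
    return []

def process_frame(frame):
    mask = shared_preprocessing(frame)               # computed only once
    with ThreadPoolExecutor(max_workers=2) as pool:  # branches run in parallel
        smoke = pool.submit(smoke_branch, frame, mask)
        flame = pool.submit(flame_branch, frame, mask)
        return smoke.result(), flame.result()
```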

Collaboration


Dive into Alexander Filonenko's collaboration.
