Solmaz Javanbakhti
Eindhoven University of Technology
Publications
Featured research published by Solmaz Javanbakhti.
Journal of Electronic Imaging | 2013
Xinfeng Bao; Solmaz Javanbakhti; Sveta Zinger; Rob G. J. Wijnhoven
Abstract. In port surveillance, video-based monitoring is a valuable supplement to a radar system, helping to detect smaller ships in the shadow of a larger ship and making it possible to detect nonmetal ships. Automatic video-based ship detection is therefore an important research area for security control in port regions. An approach that automatically detects moving ships in port surveillance videos, with robustness to occlusions, is presented. In our approach, important elements of the visual, spatial, and temporal features of the scene are used to create a model of the contextual information and to perform a motion saliency analysis. We model the context of the scene by first segmenting the video frame and contextually labeling the segments, such as water, vegetation, etc. Then, based on the assumption that each object has its own motion, labeled segments are merged into individual semantic regions, even when occlusions occur. The context model is finally used to locate candidate ships by exploring semantic relations between ships and context, spatial adjacency, and size constraints of the different regions. Additionally, we assume that a ship moves with a significant speed compared to its surroundings, so ships are detected by checking the motion saliency of candidate ships against predefined criteria. We compare this approach with a conventional object classification technique based on a support vector machine. Experiments are carried out with real-life surveillance videos, where the obtained results outperform two recent algorithms and show the accuracy and robustness of the proposed ship detection approach. The inherent simplicity of our algorithmic subsystems enables real-time operation in embedded video surveillance, such as port surveillance systems based on moving, nonstatic cameras.
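For illustration, a minimal sketch of the final motion-saliency check described in this abstract, assuming candidate ship region masks and a dense optical-flow field are already available from the earlier context-modeling stages; the function name and the parameters `surround_margin` and `speed_ratio` are hypothetical choices, not values from the paper.

```python
# Hedged sketch: flag a candidate region as a moving ship when its motion clearly
# exceeds the motion of its immediate surroundings (e.g. the water). Thresholds
# and names are illustrative assumptions.
import numpy as np
import cv2

def is_salient_ship(flow, region_mask, surround_margin=15, speed_ratio=2.0):
    """flow: HxWx2 optical-flow field; region_mask: binary uint8 mask of a candidate ship."""
    magnitude = np.linalg.norm(flow, axis=2)              # per-pixel flow speed
    region_speed = np.median(magnitude[region_mask > 0])

    # Build a ring of 'surround' pixels by dilating the region mask.
    kernel = np.ones((surround_margin, surround_margin), np.uint8)
    dilated = cv2.dilate(region_mask, kernel)
    surround = (dilated > 0) & (region_mask == 0)
    surround_speed = np.median(magnitude[surround]) + 1e-6

    return region_speed > speed_ratio * surround_speed

# Usage sketch:
# flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
# ships = [m for m in candidate_masks if is_salient_ship(flow, m)]
```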
Journal of Electronic Imaging | 2013
Ivo Creusen; Solmaz Javanbakhti; Marijn Loomans; Lykele Hazelhoff; Nadejda Roubtsova; Sveta Zinger
Abstract. The use of contextual information can significantly aid scene understanding of surveillance video. Merely detecting and tracking people does not provide sufficient information to detect situations that require operator attention. We propose a proof-of-concept system that uses several sources of contextual information to improve scene understanding in surveillance video. The focus is on two scenarios that represent common video surveillance situations: parking lot surveillance and crowd monitoring. In the first scenario, a pan-tilt-zoom (PTZ) camera tracking system is developed for parking lot surveillance. Context is provided by a traffic sign recognition system that localizes regular and handicapped parking spot signs as well as license plates. The PTZ algorithm has the ability to selectively detect and track persons based on scene context. In the second scenario, a group analysis algorithm is introduced to detect groups of people. Contextual information is provided by traffic sign recognition and region labeling algorithms and is exploited for behavior understanding. In both scenarios, decision engines are used to interpret and classify the output of the subsystems and, if necessary, raise operator alerts. We show that using context information enables the automated analysis of complicated scenarios that were previously not possible with conventional moving object classification techniques.
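As an illustration of what such a decision-engine rule could look like in the parking lot scenario, the toy sketch below raises an operator alert when a vehicle occupies a handicapped spot without a permitted license plate; the data structures and the rule itself are assumptions for illustration, not the authors' implementation.

```python
# Toy decision-engine rule combining context (handicapped-spot signs) with
# license-plate recognition output. All names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParkingSpot:
    spot_id: int
    handicapped: bool                 # from the traffic-sign recognition subsystem
    occupied_by_plate: Optional[str]  # from the license-plate recognition subsystem

def alerts_for(spots, permit_plates):
    """Return alert messages for rule violations found in the current frame."""
    alerts = []
    for spot in spots:
        if spot.handicapped and spot.occupied_by_plate and spot.occupied_by_plate not in permit_plates:
            alerts.append(f"Spot {spot.spot_id}: vehicle {spot.occupied_by_plate} "
                          "parked in a handicapped spot without a permit")
    return alerts

# Example:
# alerts_for([ParkingSpot(3, True, "12-ABC-4")], permit_plates={"99-XYZ-1"})
```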
advanced video and signal based surveillance | 2014
Xinfeng Bao; Solmaz Javanbakhti; Sveta Zinger; Rob G. J. Wijnhoven
We present a new traffic surveillance video analysis system, focusing on building a framework with robust and generic techniques based on both scene understanding and detection of moving objects of interest. Since traffic surveillance is widely applied, we want to design a single system that can be reused for various traffic surveillance applications. Scene understanding provides contextual information, which improves object detection and can be further used for other applications in a traffic surveillance system. Our framework consists of two main stages: Semantic Hypothesis Generation (SHG) and Context-Based Hypothesis Verification (CBHV). In the SHG stage, a semantic region labeling engine and an appearance-based detector jointly generate visual regions with specific features or of specific interest. These regions may also contain objects of interest, either moving or static. In the CBHV stage, a cascaded verification is performed to refine the results and smooth the detections by temporal filtering. We model the context by jointly considering spatial and scale constraints and motion saliency. Our proposed framework is validated on real-life road surveillance videos, in which the objects of interest are moving vehicles. The obtained vehicle detection results outperform a recent object detection algorithm in both precision (92.7%) and recall (92.0%). The framework is generic, both conceptually and in the applied techniques, and can be reused in various traffic surveillance applications that operate, for example, at a road crossing or in a harbor.
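The skeleton below sketches the two-stage control flow described here (SHG followed by a cascaded CBHV). It is not the authors' implementation: `detector` is assumed to return (x, y, width, height, score) boxes, `region_labels` is assumed to be a per-pixel label map, the motion-saliency check is omitted (see the sketch under the ship detection paper above), and all thresholds are placeholders.

```python
# Hedged skeleton of Semantic Hypothesis Generation + Context-Based Hypothesis
# Verification with spatial, scale, and temporal checks. Interfaces are assumptions.
def detect_vehicles(frame, region_labels, detector, track_history,
                    min_rel_size=0.001, max_rel_size=0.2):
    h, w = frame.shape[:2]

    # SHG: candidate boxes from an appearance-based detector.
    hypotheses = detector(frame)

    verified = []
    for (x, y, bw, bh, score) in hypotheses:
        # CBHV step 1: spatial constraint -- the box bottom should rest on 'road' pixels.
        foot_row = min(y + bh, h - 1)
        foot_col = min(x + bw // 2, w - 1)
        if region_labels[foot_row, foot_col] != "road":
            continue
        # CBHV step 2: scale constraint relative to the frame size.
        rel_size = (bw * bh) / float(h * w)
        if min_rel_size <= rel_size <= max_rel_size:
            verified.append((x, y, bw, bh, score))

    # CBHV step 3: temporal filtering -- keep boxes overlapping a box from the previous frame.
    previous = track_history[-1] if track_history else []
    stable = [b for b in verified if any(_iou(b, p) > 0.3 for p in previous)]
    track_history.append(verified)
    return stable

def _iou(a, b):
    ax, ay, aw, ah, _ = a
    bx, by, bw, bh, _ = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```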
IEEE Transactions on Consumer Electronics | 2017
Solmaz Javanbakhti; Sveta Zinger
In both professional and consumer domains, video databases are broadly applied, and quick searching is facilitated by fast region analysis, which provides an indication of the video contents. For real-time and cost-efficient implementations, it is important to develop algorithms with high accuracy and low computational complexity. In this paper, we analyze the accuracy and computational complexity of newly developed approaches for semantic region labeling and salient region detection, which aim at extracting spatial contextual information from a video. Both algorithms are analyzed in terms of their native DSP computations and memory usage to prove their practical feasibility. In the analyzed semantic region labeling approach, color and texture features are combined with their related vertical image position to label the key regions. In the salient region detection approach, a discrete cosine transform (DCT) is employed, since it provides a compact representation of the signal energy and can be computed at low cost. The techniques are applied to two complex surveillance use cases, moving ships in a harbor region and moving cars in traffic surveillance videos, to improve scene understanding in surveillance videos. Results show that our spatial contextual information methods quantitatively and qualitatively outperform other approaches, with up to a 22% gain in accuracy, while operating at several times lower complexity.
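As an illustration of the DCT-based saliency idea mentioned here, the sketch below follows the widely used "image signature" formulation (keep only the sign of the DCT spectrum, invert, square, and smooth); the paper's exact formulation may differ, so treat this as a hedged approximation rather than the authors' implementation.

```python
# Minimal DCT-based saliency sketch (image-signature style); an illustration,
# not necessarily the method analyzed in the paper.
import cv2
import numpy as np

def dct_saliency(gray, blur_sigma=5):
    """Return a saliency map in [0, 1] computed from the sign of the DCT spectrum."""
    img = gray.astype(np.float32) / 255.0
    signature = np.sign(cv2.dct(img))    # keep only the sign of each DCT coefficient
    recon = cv2.idct(signature)          # back to the spatial domain
    sal = cv2.GaussianBlur(recon * recon, (0, 0), blur_sigma)
    return sal / (sal.max() + 1e-8)

# Usage sketch (small, even-sized input keeps the DCT cheap and valid in OpenCV):
# gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# saliency = dct_saliency(cv2.resize(gray, (64, 48)))
```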
international conference on consumer electronics | 2017
Solmaz Javanbakhti; Sveta Zinger
Video databases are broadly applied in both consumer and professional domains. The importance of real-time surveillance video monitoring has increased for security reasons, while consumer video databases are also growing rapidly. Quick searching in databases is facilitated by region analysis, as it provides an indication of the contents of the video. For real-time and cost-efficient implementations, it is important to develop algorithms with low computational complexity. In this paper, we analyze the complexity of a newly developed semantic region labeling approach [2] (labeling regions such as road, sky, etc.), which aims at extracting spatial contextual information from a video. In the analyzed approach, color and texture features are combined with the vertical position to label the key regions. The algorithm is analyzed in terms of its native DSP computations and memory usage to prove its practical feasibility. The analysis results show that the system has low complexity while offering high-accuracy region labeling. A comparison with a state-of-the-art algorithm convincingly reveals that our system outperforms it with fewer computations.
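To give an idea of how such a complexity analysis can be set up, the toy model below counts arithmetic operations per frame for block-based labeling with color, texture, and vertical-position features; the per-block costs are invented placeholders for illustration, not the DSP figures reported in the paper.

```python
# Back-of-the-envelope operation-count model for block-based region labeling.
# All per-block costs below are assumed values, not measured numbers.
def estimate_ops_per_frame(width, height, block=16,
                           ops_color=48, ops_texture=256, ops_classify=64):
    """Rough arithmetic-operation count per frame for block-based labeling
    (color + texture features, plus the vertical position, plus classification)."""
    blocks = (width // block) * (height // block)
    per_block = ops_color + ops_texture + ops_classify + 1  # +1 for the vertical position
    return blocks * per_block

# Example: a 720p frame with 16x16 blocks
# estimate_ops_per_frame(1280, 720)  -> 3600 blocks * 369 ops ~= 1.3 Mops/frame
```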
international conference on information science, electronics and electrical engineering | 2014
Solmaz Javanbakhti; Sveta Zinger
Archive | 2012
Solmaz Javanbakhti; Sveta Zinger
Archive | 2014
Solmaz Javanbakhti; Xinfeng Bao; Sveta Zinger
Archive | 2012
Solmaz Javanbakhti; Sveta Zinger
Biometrics | 2016
Solmaz Javanbakhti; Xinfeng Bao; Ivo Creusen; Lykele Hazelhoff; Willem Sanberg; Denis van de Wouw; Gijs Dubbelman; Sveta Zinger