Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sadi Vural is active.

Publication


Featured research published by Sadi Vural.


Pattern Recognition Letters | 2012

Multi-view fast object detection by using extended haar filters in uncontrolled environments

Sadi Vural; Yasushi Mae; Huseyin Uvet; Tatsuo Arai

In this paper, we propose a multi-view object detection methodology that uses a specific extended class of Haar-like filters and detects objects with high accuracy in unconstrained environments. Several existing object detection techniques work well in restricted environments, where illumination is constant and the viewing angle of the object is limited. The proposed methodology successfully detects faces, cars, and logos at any size and pose with high accuracy under real-world conditions. To cope with angle variation, we train multiple cascades with the proposed filters, which improves detection further by spanning a different range of orientations in each cascade. We tested the proposed approach on still images from image databases and conducted evaluations on video from an IP camera placed outdoors, detecting faces, logos, and vehicles in different environments. The experimental results show that the proposed method yields higher classification performance than the Viola-Jones detector, which uses a single feature for each weak classifier. With fewer features, our detector detects any face, object, or vehicle at 15 fps on 4-megapixel images with 95% accuracy on an Intel i7 2.8 GHz machine.
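The multiple-cascade scheme described in the abstract can be illustrated with a short sketch: one boosted cascade per orientation range, with the pooled detections merged at the end. The Python/OpenCV snippet below is a minimal illustration under that assumption; the cascade file names and grouping parameters are hypothetical and do not reproduce the authors' extended Haar filters or released models.

import cv2

# Hypothetical pose-specific cascades, each spanning a different orientation range
CASCADE_FILES = [
    "frontal_0_30.xml",    # assumption: near-frontal views
    "profile_30_60.xml",   # assumption: half-profile views
    "profile_60_90.xml",   # assumption: full-profile views
]

def detect_multi_view(gray):
    """Run every pose-specific cascade and merge the pooled detections."""
    boxes = []
    for path in CASCADE_FILES:
        cascade = cv2.CascadeClassifier(path)
        if cascade.empty():
            continue  # skip cascades that failed to load
        hits = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                        minNeighbors=5, minSize=(24, 24))
        boxes.extend([list(map(int, b)) for b in hits])
    if not boxes:
        return []
    # Duplicate the list so isolated detections survive grouping, then
    # merge near-duplicate boxes reported by overlapping cascades.
    merged, _ = cv2.groupRectangles(boxes * 2, groupThreshold=1, eps=0.3)
    return merged

if __name__ == "__main__":
    image = cv2.imread("test.jpg", cv2.IMREAD_GRAYSCALE)
    print(detect_multi_view(image))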


Machine Vision and Applications | 2014

Face relighting using discriminative 2D spherical spaces for face recognition

Amr Almaddah; Sadi Vural; Yasushi Mae; Kenichi Ohara; Tatsuo Arai

As part of the face recognition task in a robust security system, we propose a novel approach for the illumination recovery of faces with cast shadows and specularities. Given a single 2D face image, we relight the face by extracting the nine spherical harmonic bases and the face's spherical illumination coefficients using the properties of the face spherical spaces. First, an illumination training database is generated by computing the properties of the spherical spaces from face albedo and normal values estimated from 2D training images. The training database is then discriminatively divided along two directions, the illumination quality and the light direction of each image. Based on the generated multi-level illumination discriminative training space, we analyze the target face pixels and compare them with the appropriate training subspace using pre-generated tiles. When designing the framework, practical real-time processing speed and small image size were considered. In contrast to other approaches, our technique requires neither 3D face models nor restricted illumination conditions for the training process. Furthermore, the proposed approach uses a single face image to estimate the face albedo and face spherical spaces. We also report a series of experiments performed on publicly available databases that show significant improvements in face recognition rates.
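For readers unfamiliar with the nine-basis harmonic lighting model this work builds on, the following Python/NumPy sketch shows the generic idea. It assumes per-pixel albedo and surface normals are already available (the paper estimates them from the single 2D image), forms the nine real spherical-harmonic basis images, and fits the illumination coefficients by least squares. This is an illustration of the standard harmonic model only, not the authors' discriminative training-space pipeline.

import numpy as np

def sh_basis(normals):
    """normals: (N, 3) unit vectors -> (N, 9) real spherical-harmonic bases."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    c = 1.0 / np.sqrt(4.0 * np.pi)
    return np.stack([
        np.full_like(x, c),                        # Y00
        np.sqrt(3.0) * c * y,                      # Y1,-1
        np.sqrt(3.0) * c * z,                      # Y1,0
        np.sqrt(3.0) * c * x,                      # Y1,1
        np.sqrt(15.0) * c * x * y,                 # Y2,-2
        np.sqrt(15.0) * c * y * z,                 # Y2,-1
        np.sqrt(5.0 / 4.0) * c * (3 * z * z - 1),  # Y2,0
        np.sqrt(15.0) * c * x * z,                 # Y2,1
        np.sqrt(15.0 / 4.0) * c * (x * x - y * y)  # Y2,2
    ], axis=1)

def fit_lighting(intensity, albedo, normals):
    """Least-squares fit of the 9 coefficients l in I ~ albedo * (B @ l)."""
    B = albedo[:, None] * sh_basis(normals)        # (N, 9) design matrix
    coeffs, *_ = np.linalg.lstsq(B, intensity, rcond=None)
    return coeffs

def relight(albedo, normals, new_coeffs):
    """Re-render the face under a new illumination given by 9 SH coefficients."""
    return albedo * (sh_basis(normals) @ new_coeffs)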


Journal of Pattern Recognition Research | 2011

Illumination Normalization for Outdoor Face Recognition by Using Ayofa-Filters

Sadi Vural; Yasushi Mae; Huseyin Uvet; Tatsuo Arai

In this paper, we propose an illumination normalization approach that improves face recognition accuracy in outdoor environments. The proposed approach computes the frequency variability and reflection direction on local face regions where the direction of the light source is unknown, and it effectively recovers the illumination on a face surface. The majority of conventional approaches need constant albedo coefficients as well as a known illumination source direction to recover the illumination. Our novel approach computes unknown reflection directions by using spatial frequency components on salient regions of a face. The method requires only a single image taken under an arbitrary illumination condition, without knowledge of the light source direction, strength, or the light sources. The method relies on the spatial frequencies and does not need any precompiled face model database. It references the nose tip to evaluate the reflection model, which contains six different reflection vectors. We tested the proposed approach on still images from major face databases and conducted testing on video images from an IP camera placed outdoors. The efficiency of the Ayofa-filter was tested with both holistic approaches and feature-based methods: we used principal component analysis (PCA) and linear discriminant analysis (LDA) as holistic methods, and Gabor wavelets and the active appearance model (AAM) as feature-based methods. The error rates obtained after illumination normalization show that our novel method significantly improves the recognition ratio with these recognition methods.
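The Ayofa-filter itself is not specified in the abstract, so the sketch below only marks where an illumination-normalization step sits in the evaluation pipeline described above: a stand-in normalizer that attenuates low spatial frequencies of the log-image (illumination varies slowly across a face), followed by a holistic PCA (eigenface) nearest-neighbour matcher. The cutoff value and the stand-in filter are assumptions for illustration, not the published filter.

import numpy as np

def normalize_illumination(img, cutoff=0.06):
    """Stand-in normalizer: suppress low spatial frequencies in the log domain."""
    log_img = np.log1p(img.astype(np.float64))
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)   # normalized spatial frequency
    spectrum *= np.clip(radius / cutoff, 0.2, 1.0)    # keep only 20% of the low band
    out = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
    return (out - out.min()) / (np.ptp(out) + 1e-8)   # rescale to [0, 1]

def pca_recognize(gallery, probe, n_components=50):
    """Nearest-neighbour match in a PCA (eigenface) subspace, as in the evaluation."""
    X = np.stack([g.ravel() for g in gallery]).astype(np.float64)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:n_components]                             # principal directions
    g_proj = (X - mean) @ W.T
    p_proj = (probe.ravel() - mean) @ W.T
    return int(np.argmin(np.linalg.norm(g_proj - p_proj, axis=1)))

Both gallery and probe images would be passed through normalize_illumination before matching, which is the step the Ayofa-filter replaces in the paper's experiments.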


World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering | 2007

DWT Based Robust Watermarking Embed Using CRC-32 Techniques

Sadi Vural; Hiromi Tomii; Hironori Yamauchi


World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering | 2008

Robust Digital Cinema Watermarking

Sadi Vural; Hiromi Tomii; Hironori Yamauchi


Journal of Robotics and Mechatronics | 2013

Spherical Spaces for Illumination Invariant Face Relighting

Amr Almaddah; Sadi Vural; Yasushi Mae; Kenichi Ohara; Tatsuo Arai


World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering | 2012

2D Spherical Spaces for Face Relighting under Harsh Illumination

Amr Almaddah; Sadi Vural; Yasushi Mae; Kenichi Ohara; Tatsuo Arai


Archive | 2014

Geometrical 2D Face Rotation by Using Gabor-tensor-based Active Appearance Model

Sadi Vural


信号処理 (Journal of Signal Processing) | 2010

Face perception based on Spatial Gaussian Bessel Mixture and nonlinear feature analysis

Sadi Vural; Yasushi Mae; Tatsuo Arai


信号処理 (Journal of Signal Processing) | 2010

Illumination-invariant face texture analysis by Gaussian-Histogram equalization and Hierarchical Nonlinear Principal Component Analysis (Special issue on nonlinear circuits and signal processing)

Sadi Vural; Yasushi Mae; Tatsuo Arai

Collaboration


Dive into Sadi Vural's collaboration.

Top Co-Authors

Tatsuo Arai

Japanese Ministry of International Trade and Industry
