Publications

Featured research published by Farhad Dadgostar.


Pattern Recognition Letters | 2006

An adaptive real-time skin detector based on Hue thresholding: A comparison on two motion tracking methods

Farhad Dadgostar; Abdolhossein Sarrafzadeh

Various applications such as face and hand tracking and image retrieval have made skin detection an important area of research. However, currently available algorithms are either based on static features of the skin colour or require a significant amount of computation. Moreover, skin detection algorithms are not robust enough to deal with real-world conditions such as background noise, changes of intensity and lighting effects. This situation can be improved by using dynamic features of the skin colour in a sequence of images. This article proposes a skin detection algorithm based on adaptive Hue thresholding and evaluates it using two motion detection techniques. The skin classifier is based on the Hue histogram of skin pixels and adapts itself to the skin colour of the persons in the video sequence. This algorithm has demonstrated improvement in comparison to the static skin detection method.
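The core idea of the abstract above can be sketched in a few lines: classify pixels by the probability of their Hue bin and blend the histogram towards each new frame's skin pixels. This is a minimal illustration, not the paper's implementation; the bin count, threshold and adaptation rate are assumptions.

```python
import numpy as np

# Illustrative sketch of an adaptive Hue-histogram skin classifier.
# The threshold and adaptation rate below are assumptions, not the
# values used in the paper.

N_BINS = 180  # Hue range in 8-bit OpenCV-style images is 0..179

def hue_histogram(hues, n_bins=N_BINS):
    """Normalised Hue histogram of pixels labelled as skin."""
    hist, _ = np.histogram(hues, bins=n_bins, range=(0, n_bins))
    return hist / max(hist.sum(), 1)

def classify(hues, hist, threshold=0.01):
    """A pixel is skin if its Hue bin is sufficiently probable."""
    return hist[hues] >= threshold

def adapt(hist, new_skin_hues, rate=0.1):
    """Blend the running histogram towards the current frame's skin pixels."""
    return (1 - rate) * hist + rate * hue_histogram(new_skin_hues)

# Toy example: skin hues clustered around 10, background around 90.
skin = np.full(500, 10)
bg = np.full(500, 90)
hist = hue_histogram(skin)
mask = classify(np.concatenate([skin, bg]), hist)
```

The adaptation step is what distinguishes this scheme from a static skin model: the classifier follows gradual colour drift caused by lighting changes.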


International Conference on Innovations in Information Technology | 2006

See Me, Teach Me: Facial Expression and Gesture Recognition for Intelligent Tutoring Systems

Abdolhossein Sarrafzadeh; Samuel Alexander; Farhad Dadgostar; Chao Fan; Abbas Bigdeli

Many software systems would perform significantly better if they could adapt to the emotional state of the user; for example, if intelligent tutoring systems, ATMs and ticketing machines could recognise when users were confused, frustrated or angry, they could provide remedial help and so improve the service. This paper presents research leading to the development of Easy with Eve, an affective tutoring system (ATS) for mathematics. The system detects student emotion, adapts to students, and displays emotion via a lifelike agent called Eve. Eve is guided by a case-based system which uses data generated by an observational study. This paper presents the observational study, the case-based method, and the ATS.


Affective Computing and Intelligent Interaction | 2005

Face tracking using mean-shift algorithm: a fuzzy approach for boundary detection

Farhad Dadgostar; Abdolhossein Sarrafzadeh; Scott P. Overmyer

Face and hand tracking are important areas of research related to adaptive human-computer interfaces and affective computing. In this article we introduce two new methods for boundary detection of the human face in video sequences: (1) edge density thresholding, and (2) fuzzy edge density. We have analyzed these algorithms with respect to two main factors: convergence speed and stability against white noise. The results show that the "fuzzy edge density" method has an acceptable convergence speed and significant robustness against noise. Based on these results we believe that this method of boundary detection, together with the mean-shift algorithm and its variants such as CAMShift, can achieve fast and robust tracking of the face in noisy environments, which makes it a good candidate for use with cheap cameras and real-world applications.
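For readers unfamiliar with the tracker this work builds on, the basic mean-shift step is easy to sketch: shift a window to the centroid of probability mass until it stops moving. This sketch omits the paper's fuzzy edge-density boundary estimation entirely; the window size and tolerance are assumptions.

```python
import numpy as np

# Minimal mean-shift step on a 2D skin-probability map.
# Window size and convergence tolerance are illustrative assumptions;
# the paper's fuzzy edge-density boundary estimation is omitted.

def mean_shift(prob, cx, cy, half=10, iters=20, tol=0.5):
    """Shift a (2*half+1)^2 window to the local centroid of probability mass."""
    h, w = prob.shape
    for _ in range(iters):
        x0, x1 = max(cx - half, 0), min(cx + half + 1, w)
        y0, y1 = max(cy - half, 0), min(cy + half + 1, h)
        win = prob[y0:y1, x0:x1]
        total = win.sum()
        if total == 0:
            break  # no probability mass in the window
        ys, xs = np.mgrid[y0:y1, x0:x1]
        nx = (xs * win).sum() / total
        ny = (ys * win).sum() / total
        if abs(nx - cx) < tol and abs(ny - cy) < tol:
            break  # converged
        cx, cy = int(round(nx)), int(round(ny))
    return cx, cy

# Toy map: a blob of skin probability centred at (60, 40);
# the tracker starts at (50, 30) and climbs onto the blob.
prob = np.zeros((80, 100))
prob[35:46, 55:66] = 1.0
cx, cy = mean_shift(prob, 50, 30)
```

Because each iteration only touches a small window, the cost per frame is low, which is what makes the approach attractive for cheap cameras and real-time use.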


Digital Image Computing: Techniques and Applications | 2009

Content-Based Video Retrieval (CBVR) System for CCTV Surveillance Videos

Yan Yang; Brian C. Lovell; Farhad Dadgostar

The inherent nature of image and video data and its multi-dimensional data space make its processing and interpretation a very complex task, normally requiring considerable processing power. Moreover, understanding the meaning of video content and storing it in a form that can be searched and read quickly requires image processing methods which, if run on the video stream once per query, would not be cost-effective, and in some cases would be impossible due to time restrictions. Hence, to speed up the search process, it is desirable to store the video and its extracted meta-data together. The storage model itself is one of the challenges in this context: based on current CCTV technology, it is estimated to require a petabyte-scale data management system, and this estimate is expected to grow rapidly as advances in video recording devices lead to higher-resolution sensors and larger frame sizes. At the same time, the increasing demand for object tracking in video streams has motivated research on Content-Based Image Retrieval (CBIR) and Content-Based Video Retrieval (CBVR). In this paper, we present the design and implementation of a framework and a data model for CCTV surveillance videos on an RDBMS, which provides the functions of a surveillance monitoring system with a tagging structure for event detection. On account of some recent results, we believe this is a promising direction for surveillance video search in comparison to existing solutions.
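The "store meta-data with the video, query the meta-data" idea can be illustrated with a toy relational schema. The paper's actual data model is not reproduced here; all table and column names below are hypothetical, chosen only to show how an event-tag query avoids re-processing the video stream.

```python
import sqlite3

# Hypothetical minimal schema for surveillance video meta-data with
# event tags. Table and column names are illustrative assumptions,
# not the data model from the paper.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE video (
    id INTEGER PRIMARY KEY,
    camera TEXT, started_at TEXT, path TEXT
);
CREATE TABLE frame (
    id INTEGER PRIMARY KEY,
    video_id INTEGER REFERENCES video(id),
    frame_no INTEGER, ts TEXT
);
CREATE TABLE event_tag (
    id INTEGER PRIMARY KEY,
    frame_id INTEGER REFERENCES frame(id),
    label TEXT  -- e.g. 'person-entered'
);
""")
conn.execute("INSERT INTO video VALUES (1, 'gate-3', '2009-01-01T00:00', '/v/1.avi')")
conn.execute("INSERT INTO frame VALUES (1, 1, 1500, '2009-01-01T00:01')")
conn.execute("INSERT INTO event_tag VALUES (1, 1, 'person-entered')")

# Query: frames from a given camera carrying a given event tag.
rows = conn.execute("""
    SELECT f.frame_no FROM event_tag e
    JOIN frame f ON f.id = e.frame_id
    JOIN video v ON v.id = f.video_id
    WHERE v.camera = 'gate-3' AND e.label = 'person-entered'
""").fetchall()
```

The point of the design is that a search like this touches only indexed meta-data rows, never the petabyte-scale video itself.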


Instrumentation and Measurement Technology Conference | 2005

Real-time Hand Tracking based on Non-Invariant Features

Andre L. C. Barczak; Farhad Dadgostar; Chris H. Messom

In this paper, we discuss the importance of the choice of features in digital image object recognition. Features can be classified as invariant or non-invariant. Invariant features are robust against one or more modifications such as rotation, translation, scaling and different lighting (illumination) conditions, while non-invariant features are usually very sensitive to these modifiers. Nevertheless, non-invariant features can still be used in the presence of translation, scaling and rotation, but the choice of features is in some cases more important than the training method. If the feature space is adequate, then the training process can be straightforward and good classifiers can be obtained. In the last few years, good algorithms have been developed relying on non-invariant features. In this article, we show how non-invariant features can cope with such changes, even though this requires additional computation at the detection phase. We also show preliminary results for a hand detector based on a set of cooperative Haar-like feature detectors. The results show the good potential of the method as well as the challenges to achieve real-time detection.
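Haar-like features owe their speed to the integral image, which makes any rectangle sum a four-lookup operation. The sketch below shows one two-rectangle feature; the feature geometry is an illustrative assumption, not the detector set used in the paper.

```python
import numpy as np

# Sketch of a two-rectangle Haar-like feature evaluated via an
# integral image, the standard trick behind fast Haar-feature
# detectors. The feature geometry here is an assumption.

def integral_image(img):
    """Cumulative sums so any rectangle sum costs four lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of img[y:y+h, x:x+w] using the integral image."""
    total = ii[y + h - 1, x + w - 1]
    if x > 0:
        total -= ii[y + h - 1, x - 1]
    if y > 0:
        total -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        total += ii[y - 1, x - 1]
    return total

def haar_two_rect(ii, x, y, w, h):
    """Left-minus-right two-rectangle feature (responds to vertical edges)."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy image: bright left half, dark right half -> strong positive response.
img = np.zeros((8, 8))
img[:, :4] = 1.0
ii = integral_image(img)
response = haar_two_rect(ii, 0, 0, 8, 8)
```

Because the feature is evaluated at an explicit position and scale, it is non-invariant in exactly the sense the abstract describes: coping with translation, scaling and rotation requires scanning many windows at detection time.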


Instrumentation and Measurement Technology Conference | 2009

Towards real-time sign language analysis via markerless gesture tracking

Rini Akmeliawati; Farhad Dadgostar; Serge N. Demidenko; Nuwan Gamage; Ye Chow Kuang; Chris H. Messom; Melanie Po-Leen Ooi; Abdolhossein Sarrafzadeh; G. SenGupta

This paper introduces the gesture and hand posture tracking systems for a prototype real-time New Zealand Sign Language recognition system. The novelty of this work is in the markerless tracking of 13 gestures plus an unknown-gesture category. Currently the gesture set is limited, but over time a more extensive gesture library can be developed and trained using the same technique. The hand posture system currently uses markers to obtain the high level of accuracy required for recognising the finger-spelling of words in sign language. Markerless hand posture detection has been shown to be more challenging, especially with signers whose data has not been used to train the system.


International Conference on Tools with Artificial Intelligence | 2006

Modeling and Recognition of Gesture Signals in 2D Space: A Comparison of NN and SVM Approaches

Farhad Dadgostar; Abdolhossein Sarrafzadeh; Chao Fan; Liyanage C. De Silva; Chris H. Messom

In this paper we introduce a novel technique for modeling and recognizing gesture signals in 2D space. The technique measures the direction of the gradient of the movement trajectory as the features of the gesture signal, so each gesture signal is represented as a time series of gradient-angle values. These features are then classified by applying a given classification method. In this article we compare the accuracy of a feed-forward artificial neural network with that of a support vector machine using a radial kernel, based on recorded data for 13 gesture signals used as training and testing data. The average accuracy of the ANN and SVM was 98.27% and 96.34% respectively, and the false detection ratio was 3.83% for the ANN and 8.45% for the SVM, which suggests the ANN is more suitable for gesture signal recognition.
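The feature extraction described above reduces a 2D trajectory to a sequence of direction angles, which either classifier then consumes. A minimal sketch of that step (sampling and normalisation details are assumptions):

```python
import numpy as np

# Sketch of gradient-angle features: represent a 2D movement
# trajectory as the time series of direction angles between
# successive points. Resampling/normalisation details from the
# paper are omitted.

def gradient_angles(points):
    """Angles (radians) of successive displacement vectors along a trajectory."""
    pts = np.asarray(points, dtype=float)
    dx = np.diff(pts[:, 0])
    dy = np.diff(pts[:, 1])
    return np.arctan2(dy, dx)

# A trajectory moving right, then up.
traj = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
angles = gradient_angles(traj)
```

Representing the gesture as angles rather than raw coordinates makes the feature invariant to translation and, after normalisation, largely invariant to scale.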


International Conference on Neural Information Processing | 2008

Multi-layered hand and face tracking for real-time gesture recognition

Farhad Dadgostar; Abdolhossein Sarrafzadeh; Chris H. Messom

This paper presents research leading to the development of a vision-based gesture recognition system. The system comprises three abstract layers, each with its own specific type and requirements of data. The first layer is the skin detection layer; this component provides a set of dispersed skin pixels for a tracker that forms the second layer. The second component is based on the mean-shift algorithm, which has been made more robust against noise using our novel fuzzy-based edge estimation method, making the tracker suitable for real-world applications. The third component is the gesture recognition layer, which is based on a gesture modeling technique and artificial neural networks for classification of the gesture.


International Conference on Distributed Smart Cameras | 2011

Summarisation of surveillance videos by key-frame selection

Yan Yang; Farhad Dadgostar; Conrad Sanderson; Brian C. Lovell

We propose two novel techniques for automatic summarisation of lengthy surveillance videos, based on selection of the frames containing the scenes most informative for rapid perusal and interpretation by humans. In contrast to other video summarisation methods, the proposed methods explicitly focus on foreground objects, via an edge histogram descriptor and a localised measurement of foreground information content (entropy). Frames are iteratively pruned until a preset summarisation rate is reached. Experiments on the publicly available CAVIAR dataset, as well as our own dataset focused on people walking through natural choke points (such as doors), suggest that the proposed method obtains considerably better results than methods based on optical flow, entropy differences and colour spatial distribution characteristics.
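The entropy-driven selection idea can be sketched as: score each frame by the entropy of its intensity histogram and keep the highest-scoring frames until the summarisation rate is met. This simplifies the paper's method considerably; the descriptors, the foreground localisation and the iterative pruning schedule are all replaced by assumptions here.

```python
import numpy as np

# Sketch of entropy-driven key-frame selection: score frames by the
# Shannon entropy of their intensity histograms and keep the most
# informative ones. This is a simplification; the paper's foreground
# localisation and iterative pruning are not reproduced.

def frame_entropy(frame, bins=16):
    """Shannon entropy (bits) of the frame's intensity histogram."""
    hist, _ = np.histogram(frame, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def summarise(frames, rate=0.5):
    """Keep the ceil(rate * n) highest-entropy frames, in temporal order."""
    scores = [frame_entropy(f) for f in frames]
    k = max(1, int(np.ceil(rate * len(frames))))
    keep = sorted(sorted(range(len(frames)), key=lambda i: -scores[i])[:k])
    return keep

rng = np.random.default_rng(0)
flat = [np.full((8, 8), 0.5) for _ in range(2)]  # empty scene: zero entropy
busy = [rng.random((8, 8)) for _ in range(2)]    # activity: high entropy
kept = summarise(flat + busy, rate=0.5)
```

Frames of an empty, static scene carry a near-degenerate histogram and are pruned first, which matches the intuition that a surveillance summary should retain only the frames where something happens.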


International Conference on Image Analysis and Recognition | 2005

A fast real-time skin detector for video sequences

Farhad Dadgostar; Abdolhossein Sarrafzadeh

Skin detection has been employed in various applications including face and hand tracking, and retrieving people in video databases. However, most of the currently available algorithms are either based on static features of the skin color or require a significant amount of computation. Moreover, skin detection algorithms are not robust enough to deal with real-world conditions, such as background noise, changes of intensity and lighting effects. This situation can be improved by using dynamic features of the skin color in a sequence of images. This article proposes a skin detection algorithm based on the in-motion pixels of the image. The membership function for recognizing skin/non-skin pixels is based on the Hue histogram of skin pixels, which adapts itself to the user's skin color in each frame. This algorithm has demonstrated significant improvement in comparison to static skin detection algorithms.
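The "in-motion pixels" cue mentioned above can be approximated with simple frame differencing: restrict the adaptive skin model to pixels that changed between frames. The threshold below is an illustrative assumption, not a value from the paper.

```python
import numpy as np

# Sketch of selecting "in-motion" pixels via frame differencing,
# the kind of motion cue an adaptive skin model can be restricted to.
# The difference threshold is an illustrative assumption.

def motion_mask(prev_frame, cur_frame, threshold=0.1):
    """Boolean mask of pixels whose intensity changed more than threshold."""
    return np.abs(cur_frame.astype(float) - prev_frame.astype(float)) > threshold

prev = np.zeros((4, 4))
cur = prev.copy()
cur[1:3, 1:3] = 0.8  # a small moving region
mask = motion_mask(prev, cur)
```

Feeding only the masked pixels into the Hue-histogram update keeps static background colours from contaminating the skin model.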

Collaboration

Top co-authors in Farhad Dadgostar's collaboration network:

Abbas Bigdeli

University of Queensland

Yan Yang

University of Queensland
