Sina Samangooei
University of Southampton
Publications
Featured research published by Sina Samangooei.
International Conference on Biometrics: Theory, Applications and Systems | 2008
Richard D. Seely; Sina Samangooei; Lee Middleton; John N. Carter; Mark S. Nixon
This paper presents the University of Southampton multi-biometric tunnel, a constrained environment designed with airports and other high-throughput environments in mind. It is able to acquire a variety of non-contact biometrics in a non-intrusive manner. The system uses eight synchronised IEEE 1394 cameras to capture gait and additional cameras to capture images of the face and one ear as an individual walks through the tunnel. We demonstrate that it is possible to achieve a 99.6% correct classification rate and a 4.3% equal error rate without feature selection using the gait data collected from the system, comparing well with state-of-the-art approaches. The tunnel acquires data automatically as a subject walks through it and is designed for the collection of very large gait datasets.
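The classification and equal-error-rate figures quoted above can be illustrated with a minimal evaluation sketch. This is not the paper's code: it assumes a hypothetical array of per-sample gait signatures with subject labels, scores them with a leave-one-out nearest-neighbour matcher, and sweeps a distance threshold to estimate the equal error rate.

```python
# Minimal evaluation sketch (not the paper's implementation). `signatures` is a
# hypothetical (n_samples, n_features) array of gait signatures and `labels`
# holds the subject identity of each sample.
import numpy as np

def correct_classification_rate(signatures, labels):
    d = np.linalg.norm(signatures[:, None] - signatures[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # leave-one-out: ignore self-matches
    nearest = np.argmin(d, axis=1)            # closest other sample
    return float(np.mean(labels[nearest] == labels))

def equal_error_rate(signatures, labels, n_thresholds=200):
    d = np.linalg.norm(signatures[:, None] - signatures[None, :], axis=-1)
    i, j = np.triu_indices(len(labels), k=1)  # score every pair once
    scores, same = d[i, j], labels[i] == labels[j]
    genuine, impostor = scores[same], scores[~same]
    best_gap, eer = np.inf, 1.0
    for t in np.linspace(scores.min(), scores.max(), n_thresholds):
        far = np.mean(impostor <= t)          # impostor pairs wrongly accepted
        frr = np.mean(genuine > t)            # genuine pairs wrongly rejected
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return float(eer)

# Example with random placeholder data: 10 subjects, 4 samples each.
labels = np.repeat(np.arange(10), 4)
signatures = np.random.rand(40, 64)
print(correct_classification_rate(signatures, labels), equal_error_rate(signatures, labels))
```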
Handbook of Statistics | 2013
Daniel A. Reid; Sina Samangooei; Cunjian Chen; Mark S. Nixon; Arun Ross
Biometrics is the science of automatically recognizing people based on physical or behavioral characteristics such as face, fingerprint, iris, hand, voice, gait, and signature. More recently, the use of soft biometric traits has been proposed to improve the performance of traditional biometric systems and to allow identification based on human descriptions. Soft biometric traits include characteristics such as height, weight, body geometry, scars, marks, and tattoos (SMT), gender, etc. These traits offer several advantages over traditional biometric techniques. Soft biometric traits can typically be described using human-understandable labels and measurements, allowing for retrieval and recognition based solely on verbal descriptions. Unlike many primary biometric traits, soft biometrics can be obtained at a distance, without subject cooperation, and from low-quality video footage, making them ideal for use in surveillance applications. This chapter introduces the current state of the art in the emerging field of soft biometrics.
International Conference on Biometrics: Theory, Applications and Systems | 2008
Sina Samangooei; Baofeng Guo; Mark S. Nixon
Gait as a biometric has the unique advantage that it can be used when images are acquired at a distance and other biometrics are at too low a resolution to be perceived. In such a situation there is still information which can be readily perceived by human vision, yet is difficult to extract automatically. We examine how this information can be used to enrich the recognition process. We call these descriptions semantic annotations and investigate their use in biometric scenarios. We outline a group of visually assessable physical traits formulated as mutually exclusive sets of semantic terms; we contend that these traits are usable in soft biometric fusion. An experiment to gather semantic annotations was performed and the most reliable traits were identified using ANOVA. We rate the ability to correctly identify subjects using these semantically described traits, both in isolation and in fusion with an automatically derived gait signature.
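As a rough illustration of the ANOVA step mentioned above, the sketch below tests whether annotator labels for one semantic trait vary more between subjects than between annotators of the same subject. The data and trait coding are hypothetical placeholders, not the study's materials.

```python
# Hypothetical sketch of trait-reliability testing with one-way ANOVA.
# Each entry lists numerically coded labels for one trait (e.g. 'height')
# given by several annotators describing the same subject -- placeholder data.
from scipy.stats import f_oneway

annotations = {
    "subject_01": [3, 3, 4, 3],
    "subject_02": [1, 2, 1, 1],
    "subject_03": [5, 5, 4, 5],
}

f_stat, p_value = f_oneway(*annotations.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A large F statistic (small p) indicates annotators agree within subjects while
# differing between subjects, suggesting the trait is a reliable soft biometric.
```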
ACM Multimedia | 2011
Jonathon S. Hare; Sina Samangooei; David Dupplaw
OpenIMAJ and ImageTerrier are recently released open-source libraries and tools for experimentation and development of multimedia applications using Java-compatible programming languages. OpenIMAJ (the Open toolkit for Intelligent Multimedia Analysis in Java) is a collection of libraries for multimedia analysis. The image libraries contain methods for processing images and extracting state-of-the-art features, including SIFT. The video and audio libraries support both cross-platform capture and processing. The clustering and nearest-neighbour libraries contain efficient, multi-threaded implementations of clustering algorithms. The clustering library makes it possible to easily create BoVW representations for images and videos. OpenIMAJ also incorporates a number of tools to enable extremely-large-scale multimedia analysis using distributed computing with Apache Hadoop. ImageTerrier is a scalable, high-performance search engine platform for content-based image retrieval applications using features extracted with the OpenIMAJ library and tools. The ImageTerrier platform provides a comprehensive test-bed for experimenting with image retrieval techniques. The platform incorporates a state-of-the-art implementation of the single-pass indexing technique for constructing inverted indexes and is capable of producing highly compressed index data structures.
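As a rough illustration of the bag-of-visual-words (BoVW) pipeline mentioned above, the sketch below clusters local SIFT descriptors into a codebook and quantises each image into a visual-word histogram. This is not OpenIMAJ code; OpenCV and scikit-learn are used purely as stand-ins, and the image paths are placeholders.

```python
# Illustrative bag-of-visual-words sketch (not OpenIMAJ code): OpenCV and
# scikit-learn are used as stand-ins, and the image paths are placeholders.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans

paths = ["image1.jpg", "image2.jpg"]            # hypothetical image collection
sift = cv2.SIFT_create()

# 1. Extract local SIFT descriptors from every image.
per_image = []
for p in paths:
    gray = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
    _, desc = sift.detectAndCompute(gray, None)
    per_image.append(desc)

# 2. Cluster all descriptors into a codebook of visual words.
codebook = MiniBatchKMeans(n_clusters=1000).fit(np.vstack(per_image))

# 3. Quantise each image's descriptors into a BoVW histogram.
bovw = [np.bincount(codebook.predict(d), minlength=1000) for d in per_image]
```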
Conference on Image and Video Retrieval | 2008
Jonathon S. Hare; Sina Samangooei; Paul H. Lewis; Mark S. Nixon
Semantic spaces encode similarity relationships between objects as a function of position in a mathematical space. This paper discusses three different formulations for building semantic spaces which allow the automatic annotation and semantic retrieval of images. The models discussed in this paper require that the image content be described in the form of a series of visual terms, rather than as a continuous feature vector. The paper also discusses how these term-based models compare to the latest state-of-the-art continuous feature models for auto-annotation and retrieval.
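To make the visual-term idea concrete, here is one minimal, hypothetical term-based formulation (not necessarily any of the three models discussed in the paper): a co-occurrence matrix links visual terms to annotation keywords, and an unlabelled image is annotated with the keywords most strongly associated with its visual terms.

```python
# Hypothetical term-based auto-annotation sketch; the data and keyword
# vocabulary are placeholders, and this is only one simple formulation.
import numpy as np

n_visual_terms = 500
keywords = ["beach", "forest", "city", "snow"]

# Training data: per-image visual-term counts and binary keyword annotations.
V = np.random.poisson(0.2, (100, n_visual_terms))    # images x visual terms
K = np.random.randint(0, 2, (100, len(keywords)))    # images x keywords

cooc = V.T @ K                                       # visual-term/keyword co-occurrence

def auto_annotate(visual_counts, top_k=2):
    scores = visual_counts @ cooc                     # score each keyword
    return [keywords[i] for i in np.argsort(-scores)[:top_k]]

print(auto_annotate(np.random.poisson(0.2, n_visual_terms)))
```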
Multimedia Tools and Applications | 2010
Sina Samangooei; Mark S. Nixon
In order to analyse surveillance video, we need to efficiently explore large datasets containing videos of walking humans. Effective analysis of such data relies on the retrieval of video data which has been enriched using semantic annotations. A manual annotation process is time-consuming and prone to error due to subject bias; however, at surveillance-image resolution, the human walk (gait) can be analysed automatically. We explore the content-based retrieval of videos containing walking subjects using semantic queries. We evaluate current research in gait biometrics, which is unique in its effectiveness at recognising people at a distance. We introduce a set of semantic traits discernible by humans at a distance, outlining their psychological validity. Working under the premise that similarity of the chosen gait signature implies similarity of certain semantic traits, we perform a set of semantic retrieval experiments using popular Latent Semantic Analysis techniques. We perform experiments on a dataset of 2000 videos of people walking in laboratory conditions and achieve promising retrieval results for features such as Sex (mAP = 14% above random), Age (mAP = 10% above random) and Ethnicity (mAP = 9% above random).
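A minimal sketch of this LSA-style retrieval might look like the following. It is not the paper's implementation: gait features and semantic terms are stacked into one matrix, a truncated SVD builds the latent space, and a semantic-only query ranks videos by cosine similarity; all data and dimensions are illustrative assumptions.

```python
# Illustrative LSA retrieval sketch (not the paper's code); all data here are
# random placeholders standing in for real gait signatures and annotations.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import normalize

n_videos, n_gait, n_terms = 2000, 128, 20
gait = np.random.rand(n_videos, n_gait)                   # placeholder gait signatures
terms = np.random.randint(0, 2, (n_videos, n_terms))      # placeholder semantic labels

X = np.hstack([gait, terms])                              # videos x (features + terms)
svd = TruncatedSVD(n_components=50)
latent_videos = svd.fit_transform(X)

# A purely semantic query: switch on one hypothetical term (e.g. "Sex: female").
query = np.zeros((1, n_gait + n_terms))
query[0, n_gait + 3] = 1.0
latent_query = svd.transform(query)

scores = normalize(latent_videos) @ normalize(latent_query).T
ranking = np.argsort(-scores.ravel())                     # most relevant videos first
print(ranking[:10])
```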
Proceedings of the 1st International Workshop on Multimodal Crowd Sensing | 2012
Heather S. Packer; Sina Samangooei; Jonathon S. Hare; Nicholas Gibbins; Paul H. Lewis
Twitter is a popular tool for publishing potentially interesting information about people's opinions, experiences and news. Mobile devices allow people to publish tweets during real-time events. It is often difficult to identify the subject of a tweet because Twitter users frequently write in highly unstructured language with many typographical errors. Structured data related to entities can provide additional context for tweets. We propose an approach which associates tweets with a given event using query expansion and relationships defined on the Semantic Web, thus increasing the recall whilst maintaining or improving the precision of event detection. In this work, we investigate the usage of Twitter in discussing the Rock am Ring music festival. We aim to use prior knowledge of the festival's line-up to associate tweets with the bands playing at the festival. In order to evaluate the effectiveness of our approach, we compare the lifetime of the Twitter buzz surrounding an event with the actual programmed event, using Twitter users as social sensors.
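The association step can be sketched as follows. The line-up entries and tweets are made-up placeholders, and in practice the expansion terms would come from Linked Data relationships rather than the hard-coded lists shown here.

```python
# Hypothetical sketch of query expansion for event association: each band name
# is expanded with related entities (members, song titles, aliases), and a
# tweet mentioning any expanded term is associated with that act.
lineup_kb = {
    "Coldplay": ["coldplay", "chris martin", "viva la vida"],
    "Metallica": ["metallica", "james hetfield", "master of puppets"],
}

def associate(tweet, kb):
    text = tweet.lower()
    return [band for band, terms in kb.items() if any(t in text for t in terms)]

tweets = [
    "Chris Martin just walked on stage, amazing!",
    "Queueing for food before the next act at Rock am Ring",
]
for t in tweets:
    print(t, "->", associate(t, lineup_kb))
```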
international conference on multimedia retrieval | 2013
Jonathon S. Hare; Sina Samangooei; David Dupplaw; Paul H. Lewis
Millions of images are tweeted every day, yet very little research has looked at the non-textual aspect of social media communication. In this work we have developed a system to analyse streams of image data. In particular, we explore trends in similar, related, evolving or even duplicated visual artefacts in the mass of tweeted image data; in short, we explore the visual pulse of Twitter.
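One simple way to surface duplicated or near-duplicate images in such a stream is a perceptual hash compared by Hamming distance. The sketch below uses an average hash purely as a stand-in and is not necessarily the matching technique used in this work; the file names are placeholders.

```python
# Stand-in near-duplicate detection via an average perceptual hash; not
# necessarily the matching approach used in the paper. File names are placeholders.
import numpy as np
from PIL import Image

def average_hash(path, size=8):
    pixels = np.asarray(Image.open(path).convert("L").resize((size, size)))
    return (pixels > pixels.mean()).flatten()       # 64-bit boolean fingerprint

def hamming(h1, h2):
    return int(np.sum(h1 != h2))

stream = ["tweet_img_001.jpg", "tweet_img_002.jpg"]
hashes = [average_hash(p) for p in stream]
if hamming(hashes[0], hashes[1]) <= 5:              # small distance => likely near-duplicates
    print("near-duplicate pair detected")
```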
International Conference on Multimedia Retrieval | 2011
Jonathon S. Hare; Sina Samangooei; Paul H. Lewis
The SIFT keypoint descriptor is a powerful approach to encoding local image description using edge orientation histograms. Through codebook construction via k-means clustering and quantisation of SIFT features, we can achieve image retrieval by treating images as bags of visual words. Intensity inversion of an image results in distinct SIFT features being extracted for the same local image patch in the original and inverted images; intensity inversion notwithstanding, the two patches are structurally identical. Through careful reordering of the SIFT feature vector, we can construct the SIFT feature that would have been generated from a non-inverted image patch starting with the one extracted from an inverted image patch. Furthermore, through examination of the local feature detection stage, we can estimate whether a given SIFT feature belongs to the space of inverted features or to the space of non-inverted features. We can therefore consistently separate the space of SIFT features into two distinct subspaces. With this knowledge, we demonstrate that the time taken for codebook construction via clustering can be reduced by up to a factor of four, and that the memory consumption of the clustering algorithms is also reduced, while producing equivalent retrieval results.
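A plausible reconstruction of such a reordering is sketched below. It assumes the standard 4x4-cell, 8-orientation-bin SIFT layout, and that inverting intensities rotates every gradient, and therefore the keypoint's dominant orientation, by 180 degrees, so that the descriptor of the inverted patch is the original with its spatial grid rotated by 180 degrees; the exact permutation used in the paper may differ.

```python
# Hedged sketch: permute a 128-d SIFT descriptor under the assumption that
# intensity inversion corresponds to a 180-degree rotation of the 4x4 spatial
# grid (row-major cells of 8 orientation bins). Not necessarily the paper's
# exact reordering.
import numpy as np

def reorder_for_inversion(descriptor):
    cells = np.asarray(descriptor).reshape(4, 4, 8)   # (row, col, orientation bin)
    return cells[::-1, ::-1, :].reshape(128)          # rotate the spatial grid by 180 degrees

sift = np.random.rand(128)                            # placeholder descriptor
# Applying the reordering twice recovers the original descriptor.
assert np.allclose(reorder_for_inversion(reorder_for_inversion(sift)), sift)
```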
International Conference on Multimedia Retrieval | 2014
Jonathon S. Hare; Jamie Davies; Sina Samangooei; Paul H. Lewis
Knowing the location where a photograph was taken provides data that could be useful in a wide spectrum of applications. With the advance of digital cameras, and with many users exchanging their digital cameras for GPS-enabled mobile phones, photographs annotated with geographical locations are becoming ever more common on photo-sharing websites such as Flickr. However, there is still a mass of content that is not geotagged, meaning that algorithms for efficient and accurate geographical estimation of an image's location are needed. This paper presents a general model for effectively using both textual metadata and visual features of photos to automatically place them on a world map with state-of-the-art performance. In addition, we explore how information from user modelling can be fused with our model, and investigate the effect such modelling has on performance.
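A minimal sketch of combining the two evidence sources is given below, assuming a simple late fusion over a discretised grid of world-map cells. This is not the model from the paper; the per-cell distributions are random placeholders standing in for real text-based and visual estimators.

```python
# Illustrative late-fusion sketch for geolocation (not the paper's model):
# two per-cell probability distributions over a discretised world map are
# combined with a weighted product. All inputs are random placeholders.
import numpy as np

n_cells = 10_000                                     # hypothetical world-map grid cells

def fuse(p_text, p_visual, alpha=0.7):
    fused = (p_text ** alpha) * (p_visual ** (1 - alpha))
    return fused / fused.sum()

p_text = np.random.dirichlet(np.ones(n_cells))       # stand-in text-metadata estimate
p_visual = np.random.dirichlet(np.ones(n_cells))     # stand-in visual-feature estimate
best_cell = int(np.argmax(fuse(p_text, p_visual)))   # predicted grid cell
print(best_cell)
```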