Network


Latest external collaborations at the country level.

Hotspot


Research topics where Binyam Gebrekidan Gebre is active.

Publication


Featured research published by Binyam Gebrekidan Gebre.


International Conference on Acoustics, Speech, and Signal Processing | 2013

The gesturer is the speaker

Binyam Gebrekidan Gebre; Peter Wittenburg; Tom Heskes

We present and solve the speaker diarization problem in a novel way. We hypothesize that the gesturer is the speaker and that identifying the gesturer can be taken as identifying the active speaker. We provide evidence in support of the hypothesis from gesture literature and audio-visual synchrony studies. We also present a vision-only diarization algorithm that relies on gestures (i.e. upper body movements). Experiments carried out on 8.9 hours of a publicly available dataset (the AMI meeting data) show that diarization error rates as low as 15% can be achieved.


International Conference on Image Processing | 2013

Automatic sign language identification

Binyam Gebrekidan Gebre; Peter Wittenburg; Tom Heskes

We propose a Random Forest based sign language identification system. The system uses low-level visual features and is based on the hypothesis that sign languages have varying distributions of phonemes (hand shapes, locations and movements). We evaluated the system on two sign languages, British SL and Greek SL, both taken from a publicly available corpus, the Dicta-Sign Corpus. The achieved average F1 scores are about 95%, indicating that sign languages can be identified with high accuracy using only low-level visual features.
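The core hypothesis, that sign languages differ in their distributions of visual phonemes, can be illustrated with a small synthetic sketch. The histogram features and the two distributions below are hypothetical stand-ins, not the paper's actual visual features; only the use of a Random Forest classifier mirrors the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def sample_histograms(p, n):
    # Draw n normalized 10-bin histograms from a multinomial with
    # per-bin probabilities p (200 "phoneme" tokens per video clip).
    return rng.multinomial(200, p, size=n) / 200.0

p_a = np.full(10, 0.1)                        # "language A": uniform phoneme use
p_b = np.array([0.2, 0.2, 0.2] + [0.05] * 7)  # "language B": skewed phoneme use
p_b = p_b / p_b.sum()

X = np.vstack([sample_histograms(p_a, 100), sample_histograms(p_b, 100)])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

With distributions this well separated, the classifier identifies the "language" nearly perfectly; the interesting part of the paper is that real low-level visual features behave similarly.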


International Conference on Acoustics, Speech, and Signal Processing | 2014

Motion history images for online speaker/signer diarization

Binyam Gebrekidan Gebre; Peter Wittenburg; Tom Heskes; Sebastian Drude

We present a solution to the problem of online speaker/signer diarization, the task of determining who spoke/signed when. Our solution is based on the idea that gestural activity (hand and body movement) is highly correlated with uttering activity. This correlation is necessarily true for sign languages and mostly true for spoken languages. The novel part of our solution is the use of motion history images (MHI) as a likelihood measure for probabilistically detecting uttering activity. An MHI is an efficient representation of where and how motion occurred over a fixed period of time. We conducted experiments on 4.9 hours of the AMI meeting data and 1.4 hours of a sign language dataset (Kata Kolok data). The best performance obtained is a diarization error rate (DER) of 15.70% for sign language and 31.90% for spoken language. These results show that our solution is applicable in real-world applications such as video conferencing and information retrieval.
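The motion history image at the heart of this method can be sketched in a few lines: pixels that moved recently hold high (recent) timestamps, and entries older than a history window fade to zero. The toy frames, window length, and activity score below are hypothetical illustrations, not the paper's parameters.

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    """Update a motion history image: pixels that moved at this timestamp
    take the current time; entries older than `duration` are cleared."""
    mhi = mhi.copy()
    mhi[motion_mask] = timestamp
    mhi[mhi < timestamp - duration] = 0
    return mhi

# Toy example: 4x4 frames with a 2x2 block of motion sliding downward.
mhi = np.zeros((4, 4), dtype=float)
for t in range(1, 4):
    mask = np.zeros((4, 4), dtype=bool)
    mask[t - 1 : t + 1, 0:2] = True
    mhi = update_mhi(mhi, mask, timestamp=t, duration=2)

# Recent motion has the largest values; summing the MHI over a person's
# region could serve as a (hypothetical) activity score for diarization.
score = mhi.sum()
```

After the loop, the most recently moved rows carry the highest timestamps, so a per-person sum tracks how much that person has gestured lately.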


Meeting of the Association for Computational Linguistics | 2014

Unsupervised Feature Learning for Visual Sign Language Identification

Binyam Gebrekidan Gebre; Onno Crasborn; Peter Wittenburg; Sebastian Drude; Tom Heskes

Prior research on language identification has focused primarily on text and speech. In this paper, we focus on the visual modality and present a method for identifying sign languages solely from short video samples. The method is trained on unlabelled video data (unsupervised feature learning), and using these features, it is trained to discriminate between six sign languages (supervised learning). We ran experiments on short video samples involving 30 signers (about 6 hours in total). Using leave-one-signer-out cross-validation, our evaluation shows an average best accuracy of 84%. Given that sign languages are under-resourced, unsupervised feature learning techniques are the right tools, and our results indicate that this approach is realistic for sign language identification.


Language and Technology Conference | 2011

Application of audio and video processing methods for language research and documentation: The AVATecH Project

Przemyslaw Lenkiewicz; Sebastian Drude; Anna Lenkiewicz; Binyam Gebrekidan Gebre; Stefano Masneri; Oliver Schreer; Jochen Schwenninger; Rolf Bardeli

That all modern languages evolve and change is well known. Recently, however, this change has reached a pace never seen before, resulting in the loss of vast amounts of information encoded in these languages. To preserve such rich heritage, and to carry out linguistic research, properly annotated recordings of the world's languages are necessary. Since creating these annotations is a very laborious task, taking up to 100 times longer than the length of the annotated media, innovative audio and video processing algorithms are needed to improve the efficiency and quality of the annotation process. This is the scope of the AVATecH project presented in this article.


Workshop on Innovative Use of NLP for Building Educational Applications | 2013

Improving Native Language Identification with TF-IDF Weighting

Binyam Gebrekidan Gebre; Marcos Zampieri; Peter Wittenburg; Tom Heskes
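The TF-IDF weighting named in the title can be illustrated with a minimal sketch. This uses the standard tf × log(N/df) formulation; the authors' exact variant for native language identification may differ.

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights: (term freq / doc length) * log(N / doc freq)."""
    n_docs = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        weighted.append(
            {w: (tf[w] / len(doc)) * math.log(n_docs / df[w]) for w in tf})
    return weighted

docs = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "dog", "barked"]]
weights = tfidf(docs)
# "the" appears in 2 of 3 documents, so it is down-weighted relative to
# the rarer "cat" -- the effect TF-IDF contributes over raw counts.
```

The intuition for native language identification is the same: function words shared by all essays get low weight, while terms characteristic of particular first-language groups stand out.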


Language Resources and Evaluation | 2012

Towards Automatic Gesture Stroke Detection

Binyam Gebrekidan Gebre; Peter Wittenburg; Przemyslaw Lenkiewicz


Language and Technology Conference | 2012

Classifying pluricentric languages: Extending the monolingual model

Marcos Zampieri; Binyam Gebrekidan Gebre; Sascha Diwersy


Language Resources and Evaluation | 2014

VarClass: An Open-source Language Identification Tool for Language Varieties

Marcos Zampieri; Binyam Gebrekidan Gebre


Language Resources and Evaluation | 2012

AVATecH ― automated annotation through audio and video analysis

Przemyslaw Lenkiewicz; Binyam Gebrekidan Gebre; Oliver Schreer; Stefano Masneri; Daniel Schneider; Sebastian Tschöpel

Collaboration


Binyam Gebrekidan Gebre's collaborations.

Top Co-Authors

Tom Heskes, Radboud University Nijmegen
Marijn Huijbregts, Radboud University Nijmegen
Onno Crasborn, Radboud University Nijmegen