Publication


Featured research published by Yunus Emre Kara.


International Conference on Computer Vision | 2011

Real time hand pose estimation using depth sensors

Cem Keskin; Furkan Kıraç; Yunus Emre Kara; Lale Akarun

This paper describes a depth image based real-time skeleton fitting algorithm for the hand, using an object recognition by parts approach, and the use of this hand modeler in an American Sign Language (ASL) digit recognition application. In particular, we created a realistic 3D hand model that represents the hand with 21 different parts. Random decision forests (RDF) are trained on synthetic depth images generated by animating the hand model, which are then used to perform per pixel classification and assign each pixel to a hand part. The classification results are fed into a local mode finding algorithm to estimate the joint locations for the hand skeleton. The system can process depth images retrieved from Kinect in real time at 30 fps. As an application of the system, we also describe a support vector machine (SVM) based recognition module for the ten digits of ASL, which attains a recognition rate of 99.9% on live depth images in real time.
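The pipeline above — per-pixel classification with a random forest, then locating each joint from the pixels assigned to its part — can be sketched on toy data. This is a minimal illustration, not the paper's implementation: the synthetic 2D "pixels" and two-part model stand in for depth-difference features and the 21-part hand, and a simple centroid replaces the local mode finding step.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "depth pixels": 2D coordinates drawn around two part centers.
centers = {0: (10.0, 10.0), 1: (30.0, 30.0)}
coords = np.vstack([rng.normal(c, 2.0, size=(200, 2)) for c in centers.values()])
labels = np.repeat([0, 1], 200)

# Per-pixel classifier (the paper trains RDFs on synthetic depth images).
clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(coords, labels)
pred = clf.predict(coords)

# Joint location estimate: centroid of the pixels assigned to each part
# (a crude substitute for the paper's local mode finding).
joints = {p: coords[pred == p].mean(axis=0) for p in centers}
for p in centers:
    print(p, np.round(joints[p], 1))
```

With well-separated parts, each estimated joint lands near its true center.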


European Conference on Computer Vision | 2012

Hand pose estimation and hand shape classification using multi-layered randomized decision forests

Cem Keskin; Furkan Kıraç; Yunus Emre Kara; Lale Akarun

Vision based articulated hand pose estimation and hand shape classification are challenging problems. This paper proposes novel algorithms to perform these tasks using depth sensors. In particular, we introduce a novel randomized decision forest (RDF) based hand shape classifier, and use it in a novel multi-layered RDF framework for articulated hand pose estimation. This classifier assigns the input depth pixels to hand shape classes, and directs them to the corresponding hand pose estimators trained specifically for that hand shape. We introduce two novel types of multi-layered RDFs: Global Expert Network (GEN) and Local Expert Network (LEN), which achieve significantly better hand pose estimates than a single-layered skeleton estimator and generalize better to previously unseen hand poses. The novel hand shape classifier is also shown to be accurate and fast. The methods run in real time on the CPU, and can be ported to the GPU for a further increase in speed.
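The layered idea — a first-stage shape classifier routing a sample to an expert estimator trained only on that shape class — can be sketched as follows. The data and targets are invented for illustration; the paper operates per pixel and uses GEN/LEN expert networks rather than this whole-sample gate.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(1)
n = 300
# Two hypothetical "shape classes" with different pose-regression targets.
X0 = rng.normal(0.0, 1.0, (n, 4)); y0 = X0.sum(axis=1)
X1 = rng.normal(5.0, 1.0, (n, 4)); y1 = -X1.sum(axis=1)

X = np.vstack([X0, X1]); shape = np.repeat([0, 1], n)

# First layer: shape classifier; second layer: one expert per shape class.
gate = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, shape)
experts = {
    0: RandomForestRegressor(n_estimators=20, random_state=0).fit(X0, y0),
    1: RandomForestRegressor(n_estimators=20, random_state=0).fit(X1, y1),
}

def predict(x):
    """Route a sample through the gate, then through the matching expert."""
    s = int(gate.predict(x.reshape(1, -1))[0])
    return s, float(experts[s].predict(x.reshape(1, -1))[0])

s, y = predict(np.full(4, 5.0))
print(s, round(y, 2))
```

The gate sends the class-1-like sample to expert 1, whose prediction is close to the class-1 target function.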


Computer Vision and Pattern Recognition | 2012

Randomized decision forests for static and dynamic hand shape classification

Cem Keskin; Furkan Kıraç; Yunus Emre Kara; L. Akarun

This paper proposes a novel algorithm to perform hand shape classification using depth sensors, without relying on color or temporal information. Hence, the system is independent of lighting conditions and does not need a hand registration step. The proposed method uses randomized classification forests (RDF) to assign class labels to each pixel on a depth image, and the final class label is determined by voting. This method is shown to achieve a 97.8% success rate on an American Sign Language (ASL) dataset consisting of 65k images collected from five subjects with a depth sensor. More experiments are conducted on a subset of the ChaLearn Gesture Dataset, consisting of a lexicon with static and dynamic hand shapes. The hands are found using motion cues and cropped using depth information, with a precision rate of 87.88% when there are multiple gestures, and 94.35% when there is a single gesture in the sample. The hand shape classification success rate is 94.74% on a small subset of nine gestures corresponding to a single lexicon. The success rate is 74.3% for the leave-one-subject-out scheme, and 67.14% when training is conducted on an external dataset consisting of the same gestures. The method runs on the CPU in real time, and is capable of running on the GPU for a further increase in speed.
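The image-level decision described above — per-pixel class labels aggregated by voting — reduces to a majority vote over pixel predictions. A minimal sketch with made-up pixel labels:

```python
import numpy as np

def vote(pixel_labels, n_classes):
    """Majority vote: the image-level class is the most frequent pixel label."""
    counts = np.bincount(pixel_labels, minlength=n_classes)
    return int(counts.argmax())

# Hypothetical per-pixel RDF outputs for one depth image (3 possible shapes).
pixels = np.array([2, 2, 1, 2, 0, 2, 1, 2])
winner = vote(pixels, n_classes=3)
print(winner)
```

A natural refinement, not shown here, is to weight each pixel's vote by the forest's class posterior instead of its hard label.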


Pattern Recognition Letters | 2014

Hierarchically constrained 3D hand pose estimation using regression forests from single frame depth data

Furkan Kıraç; Yunus Emre Kara; Lale Akarun

Highlights: We apply random forests for regression (RDF-R) to 3D hand pose estimation. RDF-R is generalized by hierarchical mode selection using constraints (RDF-R+). We test classification forests (RDF-C), RDF-R, and RDF-R+ on four different datasets. The proposed method, RDF-R+, outperforms RDF-C and RDF-R.

The emergence of inexpensive 2.5D depth cameras has enabled the extraction of the articulated human body pose. However, human hand skeleton extraction remains a challenging problem, since the hand contains as many joints as the human body model. The small size of the hand also makes the problem more challenging due to the resolution limits of depth cameras. Moreover, hand poses suffer from self-occlusion, which is considerably less likely in a body pose. This paper describes a scheme for extracting the hand skeleton using random regression forests in real time that is robust to self-occlusion and the low resolution of the depth camera. In addition, the proposed algorithm can estimate the joint positions even if all of the pixels related to a joint are out of the camera frame. The performance of the new method is compared to the random classification forest based method in the literature. Moreover, the performance of the joint estimation is further improved using a novel hierarchical mode selection algorithm that makes use of constraints imposed by the skeleton geometry. The performance of the proposed algorithm is tested on datasets containing synthetic and real data, where self-occlusion is frequently encountered. The new algorithm, which runs in real time using a single depth image, is shown to outperform previous methods.
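The key property claimed above — regression forests can estimate a joint even when its own pixels are occluded or outside the frame — follows from each visible pixel voting for the joint with a learned offset. A toy sketch of that voting scheme (synthetic 2D data; the paper works on depth pixels in 3D and aggregates votes with hierarchical mode selection, not a plain mean):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
joint = np.array([5.0, 7.0])                   # hidden joint position
pixels = rng.uniform(0, 10, size=(500, 2))     # visible surface pixels
offsets = joint - pixels                       # per-pixel vote targets

# Each pixel learns to predict its offset to the joint.
reg = RandomForestRegressor(n_estimators=30, random_state=0).fit(pixels, offsets)

# At test time every visible pixel casts a vote; aggregate into one estimate.
test_pixels = rng.uniform(0, 10, size=(100, 2))
votes = test_pixels + reg.predict(test_pixels)
estimate = votes.mean(axis=0)
print(np.round(estimate, 1))
```

Because the votes come from the visible pixels, the estimate does not require any pixel of the joint itself to be observed.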


Signal Processing and Communications Applications Conference | 2010

A robust multimodal fall detection method for ambient assisted living applications

Hande Özgür Alemdar; Yunus Emre Kara; Mustafa Ozan Özen; Gökhan Remzi Yavuz; Ozlem Durmaz Incel; Lale Akarun; Cem Ersoy

Accidental falls threaten the lives of people over 65 years of age, and quick intervention can save lives. Elderly people who live alone and those who have chronic diseases constitute the main risk groups. Fast and effective detection of falls will increase the quality of life of these people. In this study, a multi-modal fall detection mechanism using accelerometers together with a video sensor is proposed and its performance is evaluated. The results indicate that an accelerometer-triggered video processing method minimizes processing costs as well as privacy-related issues.
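The accelerometer-triggered design can be sketched with a simple magnitude threshold: video analysis runs only when the acceleration shows a fall-like spike, so the camera pipeline stays idle (and private) most of the time. The threshold value and sample data below are illustrative, not taken from the paper.

```python
import numpy as np

FALL_THRESHOLD = 2.5  # acceleration magnitude in g; hypothetical trigger level

def should_trigger_video(accel_xyz):
    """Return True if any sample's acceleration magnitude exceeds the threshold."""
    mags = np.linalg.norm(accel_xyz, axis=1)
    return bool((mags > FALL_THRESHOLD).any())

walking = np.array([[0.1, 0.2, 1.0], [0.0, 0.1, 1.1]])   # ~1 g, normal activity
fall    = np.array([[0.2, 0.1, 1.0], [2.5, 1.5, 2.0]])   # sharp spike
print(should_trigger_video(walking), should_trigger_video(fall))
```

In a fuller system the trigger would start the video-based verification stage rather than directly raise an alarm, which is what keeps false positives low.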


information processing in sensor networks | 2010

Multi-modal fall detection within the WeCare framework

Hande Özgür Alemdar; Gökhan Remzi Yavuz; Mustafa Ozan Özen; Yunus Emre Kara; Ozlem Durmaz Incel; Lale Akarun; Cem Ersoy

Falls are identified as a major health risk for the elderly and a major obstacle to independent living. Considering the remarkable increase in the elderly population of developed countries, fall detection methods have been an active area of research. However, existing methods often use only wearable sensors, such as accelerometers, or cameras to detect falls. In this demonstration, in contrast to state-of-the-art solutions, we focus on the use of multi-modal wireless sensor networks within the WeCare framework. The WeCare system is developed as a solution for independent living applications that remotely monitors the health and well-being of its users. We describe the general structure of WeCare and demonstrate its fall detection method. Our set-up not only includes scalar sensors to detect falls and motion, but also consists of embedded cameras and RFID tags, and uses sensor fusion techniques to improve the success of fall detection and minimize false positives.


IEEE Communications Letters | 2016

ISI-Aware Modeling and Achievable Rate Analysis of the Diffusion Channel

Gaye Genc; Yunus Emre Kara; H. Birkan Yilmaz; Tuna Tugcu

Analyzing the achievable rate of molecular communication via diffusion (MCvD) involves intricacies due to its nature: the MCvD channel has memory, and the heavy tail of the signal causes inter-symbol interference (ISI). Therefore, using Shannon's channel capacity formulation for memoryless channels is not appropriate for the MCvD channel. Instead, a more general achievable rate formulation and system model must be considered to perform this analysis accurately. In this letter, we propose an effective ISI-aware MCvD modeling technique in a 3-D medium and properly analyze the achievable rate.
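For context, the memoryless baseline the letter argues is insufficient is the single-letter mutual information I(X;Y) computed from a channel transition matrix. The sketch below evaluates it for a binary symmetric channel; the MCvD channel's memory is exactly what this calculation ignores.

```python
import numpy as np

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a discrete memoryless channel.

    p_x: input prior, shape (|X|,); p_y_given_x: transition matrix, (|X|, |Y|).
    """
    p_xy = p_x[:, None] * p_y_given_x           # joint distribution
    p_y = p_xy.sum(axis=0)                      # output marginal
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_xy * np.log2(p_xy / (p_x[:, None] * p_y[None, :]))
    return float(np.nansum(terms))              # zero-probability terms -> 0

# Binary symmetric channel with crossover 0.1 and uniform input:
p_x = np.array([0.5, 0.5])
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
print(round(mutual_information(p_x, bsc), 3))   # 1 - H(0.1) ≈ 0.531
```

An ISI-aware analysis instead conditions on previously transmitted symbols, which this memoryless formula cannot capture.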


Neurocomputing | 2015

Modeling annotator behaviors for crowd labeling

Yunus Emre Kara; Gaye Genc; Oya Aran; Lale Akarun

Machine learning applications can benefit greatly from vast amounts of data, provided that reliable labels are available. Mobilizing crowds to annotate the unlabeled data is a common solution. Although the labels provided by the crowd are subjective and noisy, the wisdom of crowds can be captured by a variety of techniques. Taking the mean or the median of a sample's annotations is a widely used approach for finding the consensus label of that sample. Improving consensus extraction from noisy labels is a very popular topic, the main focus being binary label data. In this paper, we focus on crowd consensus estimation of continuous labels, which is also adaptable to ordinal or binary labels. Our approach is designed to work in situations where there is no gold standard; it depends only on the annotations and not on the feature vectors of the instances, and does not require a training phase. For achieving a better consensus, we investigate different annotator behaviors and incorporate them into four novel Bayesian models. Moreover, we introduce a new metric to examine annotator quality, which can be used for finding good annotators to enhance consensus quality and reduce crowd labeling costs. The results show that the proposed models outperform the commonly used methods. With the use of our annotator scoring mechanism, we are able to sustain consensus quality with far fewer annotations.
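The two baseline consensus rules mentioned above are easy to contrast on made-up continuous annotations: a single spammy annotator pulls the mean far more than the median, which motivates modeling annotator behavior explicitly as the paper does.

```python
import numpy as np

# Rows are samples, columns are annotators; the last annotator is unreliable.
# All values are invented for illustration.
annotations = np.array([
    [4.1, 3.9, 4.0, 9.5],
    [2.0, 2.2, 1.9, 0.0],
])

mean_consensus = annotations.mean(axis=1)
median_consensus = np.median(annotations, axis=1)
print(np.round(mean_consensus, 3))    # skewed by the spammer
print(np.round(median_consensus, 3))  # close to the careful annotators
```

The paper's Bayesian models go further, down-weighting such annotators rather than merely ignoring the tails.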


Signal Processing and Communications Applications Conference | 2011

Human action recognition in videos using keypoint tracking

Yunus Emre Kara; Lale Akarun

In this study, a new system for computer vision-based recognition of human actions is presented. The proposed system uses videos as input. The approach is invariant to the location of the action, zoom level, the appearance of the person, partial occlusions (including self-occlusions), and some viewpoint changes, and is robust against variations in temporal length. Keypoints are tracked through time, and the trajectories of the tracked keypoints are used for interpreting the human action in the video. A group of features for describing a trajectory is proposed, and trajectories are clustered using these features. The clustered trajectories are used for describing an image sequence: image sequence descriptors are the normalized histograms of the trajectory clusters. At the final stage, the proposed system uses the descriptors of the image sequences in a supervised learning approach.
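The descriptor construction — cluster the per-trajectory feature vectors, then represent a video by the normalized histogram of the clusters its trajectories fall into — is a bag-of-trajectories model. A minimal sketch with random stand-in features (the paper's actual trajectory features and cluster count differ):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Stand-in trajectory feature vectors pooled from training videos.
all_traj_feats = rng.normal(size=(200, 8))
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(all_traj_feats)

def video_descriptor(traj_feats, km):
    """Normalized histogram of trajectory-to-cluster assignments for one video."""
    ids = km.predict(traj_feats)
    hist = np.bincount(ids, minlength=km.n_clusters).astype(float)
    return hist / hist.sum()

desc = video_descriptor(rng.normal(size=(40, 8)), kmeans)
print(np.round(desc, 2))
```

Each video then becomes a fixed-length vector regardless of its duration or trajectory count, which is what makes the final supervised learning stage straightforward.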


Signal Processing and Communications Applications Conference | 2012

3D hand pose estimation and classification using depth sensors

Cem Keskin; Furkan Kıraç; Yunus Emre Kara; Lale Akarun

This paper describes our method for fitting a 3D skeleton to the human hand using depth images. The human hand is represented by a 3D skeleton with 21 parts. This model is used to generate synthetic depth images, which are used to train random decision forests (RDF) that assign each pixel to a hand part. The mean-shift method is applied to the classification results to estimate the joint locations. The system can run in real time at 30 fps on Kinect depth images. We use this method together with support vector machines for classification, and obtain a 99.9% recognition rate on the American Sign Language (ASL) digit recognition problem.

Collaboration


Dive into Yunus Emre Kara's collaborations.

Top Co-Authors

Cem Ersoy

Boğaziçi University

Gaye Genc

Boğaziçi University

Furkan Kıraç

Boğaziçi University
