Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Koray Ozcan is active.

Publication


Featured research published by Koray Ozcan.


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2013

Automatic Fall Detection and Activity Classification by a Wearable Embedded Smart Camera

Koray Ozcan; Anvith Katte Mahabalagiri; Mauricio Casares; Senem Velipasalar

Robust detection of events and activities, such as falling, sitting, and lying down, is a key to a reliable elderly activity monitoring system. While fast and precise detection of falls is critical in providing immediate medical attention, other activities like sitting and lying down can provide valuable information for early diagnosis of potential health problems. In this paper, we present a fall detection and activity classification system using wearable cameras. Since the camera is worn by the subject, monitoring is not limited to confined areas, and extends to wherever the subject may go including indoors and outdoors. Furthermore, since the captured images are not of the subject, privacy concerns are alleviated. We present a fall detection algorithm employing histograms of edge orientations and strengths, and propose an optical flow-based method for activity classification. The first set of experiments has been performed with prerecorded video sequences from eight different subjects wearing a camera on their waist. Each subject performed around 40 trials, which included falling, sitting, and lying down. Moreover, an embedded smart camera implementation of the algorithm was also tested on a CITRIC platform with subjects wearing the CITRIC camera, and each performing 50 falls and 30 non-fall activities. Experimental results show the success of the proposed method.
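The feature at the core of this detector, a histogram of edge orientations weighted by edge strengths, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the bin count and the gradient operator are assumptions.

```python
import numpy as np

def edge_orientation_histogram(frame, n_bins=8):
    """Histogram of edge orientations, weighted by edge strength."""
    gy, gx = np.gradient(frame.astype(float))    # image gradients (rows, cols)
    strength = np.hypot(gx, gy)                  # edge strength per pixel
    angle = np.arctan2(gy, gx)                   # orientation in [-pi, pi]
    hist, _ = np.histogram(angle, bins=n_bins,
                           range=(-np.pi, np.pi), weights=strength)
    total = hist.sum()
    return hist / total if total > 0 else hist   # normalize to unit mass

# A fall abruptly changes the dominant edge orientation between frames, so a
# detector can compare consecutive histograms. Toy input: a horizontal
# intensity ramp, so all gradient energy lands in one orientation bin.
frame = np.tile(np.arange(16), (16, 1))
h = edge_orientation_histogram(frame)
```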


IEEE Sensors Journal | 2017

A Survey on Activity Detection and Classification Using Wearable Sensors

Maria Cornacchia; Koray Ozcan; Yu Zheng; Senem Velipasalar

Activity detection and classification are very important for autonomous monitoring of humans for applications including assistive living, rehabilitation, and surveillance. Wearable sensors have found widespread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring, as opposed to sensors installed at fixed locations. Since many smartphones are now equipped with a variety of sensors, such as an accelerometer, gyroscope, and camera, it has become more feasible to develop activity monitoring algorithms employing one or more of these sensors with increased accessibility. We provide a complete and comprehensive survey on activity classification with wearable sensors, covering a variety of sensing modalities, including accelerometer, gyroscope, pressure sensors, and camera- and depth-based systems. We discuss differences in the activity types tackled by this breadth of sensing modalities. For example, accelerometer, gyroscope, and magnetometer systems have a history of addressing whole-body motion or global-type activities, whereas camera systems provide the context necessary to classify local interactions, or interactions of individuals with objects. We also found that these single sensing modalities laid the foundation for hybrid works that tackle a mix of global and local interaction-type activities. In addition to the type of sensors and the type of activities classified, we provide details on each wearable system, including on-body sensor location, employed learning approach, and extent of the experimental setup. We further discuss where the processing is performed, i.e., local versus remote processing, for different systems. This is one of the first surveys to provide such breadth of coverage across different wearable sensor systems for activity classification.


IEEE Embedded Systems Letters | 2016

Wearable Camera- and Accelerometer-Based Fall Detection on Portable Devices

Koray Ozcan; Senem Velipasalar

Robust and reliable detection of falls is crucial, especially for elderly activity monitoring systems. In this letter, we present a fall detection system using wearable devices, e.g., smartphones and tablets, equipped with cameras and accelerometers. Since the portable device is worn by the subject, monitoring is not limited to confined areas, and extends to wherever the subject may travel, as opposed to static sensors installed in certain rooms. Moreover, a camera provides an abundance of information, and the results presented here show that fusing camera and accelerometer data not only increases the detection rate, but also decreases the number of false alarms compared to only accelerometer-based or only camera-based systems. We employ histograms of edge orientations together with gradient local binary patterns for the camera-based part of fall detection. We compared the performance of the proposed method with that of the original histograms of oriented gradients (HOG) as well as a modified version of HOG. Experimental results show that the proposed method outperforms both the original and the modified HOG, and provides lower false positive rates for camera-based detection. Moreover, we have employed an accelerometer-based fall detection method, and fused these two sensor modalities to obtain a robust fall detection system. Experimental results and trials with actual Samsung Galaxy phones show that the proposed method, combining two different sensor modalities, provides much higher sensitivity, and a significant decrease in the number of false positives during daily activities, compared to accelerometer-only and camera-only methods.
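The false-alarm reduction from fusing the two modalities can be illustrated with a minimal decision-level sketch: an accelerometer impact spike alone is sensitive but noisy, so a fall is declared only when the camera-based score agrees. The thresholds and score names are assumptions, not the letter's trained values.

```python
ACC_THRESH = 2.5   # g; assumed impact threshold for the accelerometer stream
CAM_THRESH = 0.6   # assumed camera-based fall-score threshold

def fuse(acc_magnitudes, cam_scores):
    """Per-sample fall decisions: both modalities must agree (AND-fusion)."""
    return [a > ACC_THRESH and c > CAM_THRESH
            for a, c in zip(acc_magnitudes, cam_scores)]

acc = [1.0, 3.1, 3.0, 1.1]       # impact spike at samples 1-2
cam = [0.1, 0.8, 0.2, 0.1]       # camera confirms only sample 1
decisions = fuse(acc, cam)
print(decisions)                 # → [False, True, False, False]
```

AND-fusion trades a little sensitivity for far fewer false alarms; sample 2 here is the kind of accelerometer-only false positive (e.g., the phone being dropped) that the camera vetoes.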


international conference on distributed smart cameras | 2015

Robust and reliable step counting by mobile phone cameras

Koray Ozcan; Senem Velipasalar

Wearable sensors are being widely used to monitor daily human activities and vital signs. Accelerometer-based step counters are commonly available, especially after being integrated into smartphones and smart watches. Moreover, accelerometer data is also used to measure step length and frequency for indoor positioning systems. Yet, accelerometer-based algorithms are prone to over-counting, since they also count other routine movements, including movements of the phone, as steps. In addition, when users walk very slowly, or when they stop and start walking again, accelerometer-based counting becomes unreliable. Since accurate step detection is very important for indoor positioning systems, more precise alternatives are needed for step detection and counting. In this paper, we present a robust and reliable method for counting footsteps using videos captured with a Samsung Galaxy® S4 smartphone. The performance of the proposed method is compared with existing accelerometer-based step counters. Experiments have been performed with different subjects carrying five mobile devices simultaneously, including smartphones and watches, at different locations on their body. The results show that camera-based step counting has the lowest average error rate across different users, and is more reliable than accelerometer-based counters. In addition, the results show the high sensitivity of accelerometer-based step counters to the location of the device, and the high variance in their performance across different users.
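The accelerometer-based baseline the paper compares against typically counts peaks in the acceleration magnitude. A minimal sketch of that scheme follows; the threshold and minimum peak spacing are assumptions, and this is exactly the kind of counter that over-counts when the phone itself moves.

```python
import math

def count_steps(magnitude, thresh=1.2, min_gap=10):
    """Count threshold-exceeding peaks at least `min_gap` samples apart."""
    steps, last_peak = 0, -min_gap
    for i in range(1, len(magnitude) - 1):
        is_peak = magnitude[i] > magnitude[i - 1] and magnitude[i] >= magnitude[i + 1]
        if is_peak and magnitude[i] > thresh and i - last_peak >= min_gap:
            steps += 1
            last_peak = i
    return steps

# Synthetic walking signal: ~1 g baseline with one bounce per 25-sample stride.
mag = [1.0 + 0.5 * max(0.0, math.sin(2 * math.pi * t / 25)) for t in range(200)]
n_steps = count_steps(mag)
print(n_steps)  # 8 stride cycles -> 8 counted steps
```

Any non-gait motion that produces a similar spike is counted too, which is the over-counting failure mode the camera-based method avoids.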


IEEE Transactions on Human-Machine Systems | 2017

Autonomous Fall Detection With Wearable Cameras by Using Relative Entropy Distance Measure

Koray Ozcan; Senem Velipasalar; Pramod K. Varshney

Timely, precise, and reliable detection of fall events is very important for systems monitoring activities of elderly people, especially the ones living independently. In this paper, we propose an autonomous fall detection system by taking a completely different view compared with existing vision-based activity monitoring systems and applying a reverse approach. In our system, in contrast with static sensors installed at fixed locations, the camera is worn by the subject, and thus, monitoring is not limited only to areas where the sensors are located and extends to wherever the subject may travel. Moreover, the camera provides a richer set of data and helps lower the false positive rates compared with accelerometer-only systems. We employ a modified version of the histograms of oriented gradients (HOG) approach together with the gradient local binary patterns (GLBP). It is shown that, with the same training set, the GLBP feature is more descriptive and discriminative than HOG, histograms of template, and semantic local binary patterns. Moreover, we autonomously compute a threshold, for the detection of fall events, from the training data based on relative entropy, which is a member of Ali-Silvey distance measures. Experiments are performed with ten different people and a total of around 300 associated fall events indoors and outdoors. Experimental results show that, with the autonomously computed threshold, the proposed method provides 93.78% and 89.8% accuracy for detecting falls with indoor and outdoor experiments, respectively.
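The relative entropy (Kullback-Leibler divergence) used here to set the detection threshold is straightforward to compute between two feature histograms. A minimal sketch; the smoothing constant is an assumption to keep the measure finite on empty bins.

```python
import math

def relative_entropy(p, q, eps=1e-8):
    """KL divergence D(p || q) between two histograms (normalized internally)."""
    p = [x + eps for x in p]                 # smooth empty bins
    q = [x + eps for x in q]
    sp, sq = sum(p), sum(q)
    return sum((x / sp) * math.log((x / sp) / (y / sq))
               for x, y in zip(p, q))

same = relative_entropy([1, 2, 3], [1, 2, 3])   # ~0 for identical histograms
diff = relative_entropy([1, 0, 0], [0, 0, 1])   # large for disjoint histograms
```

D(p || q) is non-negative and asymmetric; the paper exploits the fact that fall-frame histograms sit far (in this distance) from the training-data distribution, so a threshold can be computed from training data alone.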


advanced video and signal based surveillance | 2014

Camera motion detection for mobile smart cameras using segmented edge-based optical flow

Anvith Katte Mahabalagiri; Koray Ozcan; Senem Velipasalar

Determining camera motion is a challenging task in applications involving mobile smart cameras. With the widespread use of cameras in mobile applications, analyzing motion-based information has become important. Optical flow has been a popular technique for determining camera motion. However, traditional optical flow techniques can be computationally quite expensive and impractical for embedded smart cameras with limited processing power. The aim of this paper is to provide an effective and computationally efficient optical flow technique to determine the camera motion direction. This technique is based on the segmentation of edge features, and has been implemented on an actual embedded platform. We show that the systematic segmentation of edge features not only reduces computation time drastically, but also provides sufficient detail for determining basic camera motion patterns.
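The final step of such a pipeline, turning sparse flow vectors at edge points into a motion-direction label, can be sketched as follows. The median-vector rule and the direction labels are assumptions for illustration; computing flow only at segmented edge points is what keeps the cost low on embedded hardware.

```python
from statistics import median

def dominant_flow_direction(flow_vectors):
    """Classify dominant scene flow from (dx, dy) vectors at edge points.

    The camera itself pans opposite to the dominant scene flow.
    """
    dx = median([v[0] for v in flow_vectors])   # median rejects outliers
    dy = median([v[1] for v in flow_vectors])
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"           # image y grows downward

# Edge points mostly shifting left, with one outlier the median rejects.
flows = [(-3.0, 0.2), (-2.5, -0.1), (-3.2, 0.0), (8.0, 0.5)]
d = dominant_flow_direction(flows)
print(d)  # → left
```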


international conference on image processing | 2016

Doorway detection for autonomous indoor navigation of unmanned vehicles

Burak Kakillioglu; Koray Ozcan; Senem Velipasalar

Fully autonomous navigation of unmanned vehicles, without relying on pre-installed tags or markers, still remains a challenge in GPS-denied areas and complex indoor environments. Doors are important for navigation as entry and exit points. A novel approach is proposed to autonomously detect doorways by using the Project Tango platform. We first detect candidate door openings from the 3D point cloud, and then use a pre-trained detector on the corresponding RGB image regions to verify whether these openings are indeed doors. We employ Aggregate Channel Features for detection, which are computationally efficient for real-time applications. Since detection is only performed on candidate regions, the system is more robust against false positives. The approach can be generalized to recognize windows, some architectural structures, and obstacles. Experiments show that the proposed method can detect open doors in a robust and efficient manner.
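A toy version of the first stage, finding candidate openings in the point cloud, can be sketched by scanning a horizontal slice of wall points for door-sized gaps. The width bounds and the 1D wall model are assumptions; in the actual system these candidates are then verified by the ACF detector on the RGB image.

```python
def find_door_gaps(wall_xs, min_width=0.7, max_width=1.2):
    """Return (start, end) pairs of door-sized gaps along wall x positions (m)."""
    xs = sorted(wall_xs)
    return [(a, b) for a, b in zip(xs, xs[1:])
            if min_width <= b - a <= max_width]

# Wall points every 5 cm with a ~0.9 m opening between x = 2.0 and x = 2.9.
wall = [i * 0.05 for i in range(41)] + [2.9 + i * 0.05 for i in range(42)]
gaps = find_door_gaps(wall)
print(gaps)  # one candidate opening around x = 2.0-2.9
```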


international conference on multimedia and expo | 2013

Fall detection and activity classification using a wearable smart camera

Koray Ozcan; Anvith Katte Mahabalagiri; Senem Velipasalar

Robust detection of events and activities, such as falling, sitting and lying down, is a key to a reliable elderly activity monitoring system. While fast and precise detection of falls is critical in providing immediate medical attention, other activities like sitting and lying down can provide valuable information for early diagnosis of potential health problems. In this paper, we present a fall detection and activity classification system using wearable cameras. Since the camera is worn by the subject, monitoring extends to wherever the subject may go. Furthermore, since the captured frames are not of the subject, privacy is preserved. We present an improved fall detection algorithm employing histograms of edge orientations and strengths, and propose an optical flow-based method for activity classification. Trials were performed on five different subjects wearing a camera on their waist, each performing 40 different activities. Experimental results show the success of the proposed method.
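The optical-flow-based classification step can be illustrated with a simple rule in the spirit of the paper: a fall produces a brief burst of very large global flow, while sitting or lying down produce slower, sustained flow. The thresholds and labels here are assumptions for the sketch, not the paper's trained values.

```python
def classify_window(flow_speeds, fall_thresh=20.0, move_thresh=5.0):
    """Label a window from per-frame mean optical-flow speeds (pixels/frame)."""
    peak = max(flow_speeds)
    if peak > fall_thresh:
        return "fall"            # sudden, very large global motion
    if peak > move_thresh:
        return "sit/lie"         # slower, deliberate motion
    return "idle"

burst = classify_window([1.0, 2.0, 30.0, 4.0])   # brief large burst
slow = classify_window([1.0, 6.5, 7.0, 3.0])     # slower sustained motion
print(burst, slow)  # fall sit/lie
```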


international conference on distributed smart cameras | 2013

A robust edge-based optical flow method for elderly activity classification with wearable smart cameras

Anvith Katte Mahabalagiri; Koray Ozcan; Senem Velipasalar

Automated monitoring of the activities of the elderly is very important in the field of elderly healthcare. While fast and precise detection of falls is critical in providing immediate medical attention, other activities like walking, sitting, and lying down can provide valuable information for early diagnosis of potential health problems. Recently, wearable smart cameras have emerged as a new area of research, since they provide several advantages. First, activity monitoring is not restricted to confined environments where static sensors are installed. Second, privacy concerns of the person being monitored are alleviated. Recent works in the literature show promising results for fall detection. However, classification of activities like walking, sitting, and lying down still remains a challenge. Moreover, since most of the processing needs to be performed on board, it is imperative that the method have real-time capabilities. In this paper, we present a new and efficient method for activity classification using histograms of oriented gradients (HOG) and optical flow. Since regular global optical flow methods can be inaccurate and computationally expensive, we use an alternative edge-based optical flow technique, which provides better speed and accuracy, especially for the application and embedded platforms under consideration. A total of 195 experiments were conducted on eight subjects performing falling, sitting, and lying down activities. Results demonstrate the promise and feasibility of the proposed method, as well as the speed-up it provides.


international conference on distributed smart cameras | 2017

Traffic Sign Detection from Lower-quality and Noisy Mobile Videos

Koray Ozcan; Senem Velipasalar; Anuj Sharma

Accurate traffic sign detection from vehicle-mounted cameras is an important task for autonomous driving and driver assistance. It is especially challenging when the videos acquired from mobile cameras on portable devices are of low quality. In this paper, we focus on naturalistic videos captured from vehicle-mounted cameras. It has been shown that Region-based Convolutional Neural Networks provide high accuracy in object detection tasks. Yet, they are computationally expensive, and often require a GPU for faster training and processing. In this paper, we present a new method, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of much faster training and testing, and comparable or better performance, without requiring specialized processors. Our test videos cover a range of different weather and daytime scenarios. The experimental results show the promise of the proposed method and faster performance compared to other detectors.
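The Chain Code Histogram feature named above is a compact shape descriptor: a contour is encoded as Freeman 8-direction codes and the codes are histogrammed. A minimal sketch on a toy square contour; the direction-to-code mapping follows the common 0 = east, counterclockwise convention, and image coordinates (y grows downward) are assumed.

```python
# (dx, dy) -> Freeman code, 0 = east, counting counterclockwise (y up).
FREEMAN = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
           (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code_histogram(contour):
    """Normalized 8-bin histogram of Freeman codes along a closed contour."""
    hist = [0.0] * 8
    for (x0, y0), (x1, y1) in zip(contour, contour[1:] + contour[:1]):
        hist[FREEMAN[(x1 - x0, y1 - y0)]] += 1
    total = sum(hist)
    return [v / total for v in hist]

# Unit square traversed clockwise in image coordinates.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
h = chain_code_histogram(square)
print(h)  # equal mass in the four axis-aligned direction bins
```

Because the histogram is normalized and only 8-dimensional, it is cheap to compute and compare, which fits the paper's goal of fast training and testing without a GPU.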

Collaboration


Dive into Koray Ozcan's collaborations.

Top Co-Authors

Mauricio Casares

University of Nebraska–Lincoln
