
Publication


Featured research published by Ali Ozer Ercan.


Distributed Computing in Sensor Systems | 2006

Optimal placement and selection of camera network nodes for target localization

Ali Ozer Ercan; Danny B. Yang; Abbas El Gamal; Leonidas J. Guibas

The paper studies the optimal placement of multiple cameras and the selection of the best subset of cameras for single target localization in the framework of sensor networks. The cameras are assumed to be aimed horizontally around a room. To conserve both computation and communication energy, each camera reduces its image to a binary “scan-line” by performing simple background subtraction followed by vertical summing and thresholding, and communicates only the center of the detected foreground object. Assuming noisy camera measurements and an object prior, the minimum mean squared error of the best linear estimate of the object location in 2-D is used as a metric for placement and selection. The placement problem is shown to be equivalent to a classical inverse kinematics robotics problem, which can be solved efficiently using gradient descent techniques. The selection problem on the other hand is a combinatorial optimization problem and finding the optimal solution can be too costly to implement in an energy-constrained wireless camera network. A semi-definite programming approximation for the problem is shown to achieve close to optimal solutions with much lower computational burden. Simulation and experimental results are presented.
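The scan-line reduction described in the abstract can be sketched as follows; this is a minimal illustration assuming NumPy grayscale frames and a fixed background image, not the authors' implementation:

```python
import numpy as np

def scan_line_centroid(frame, background, threshold=30):
    """Reduce a grayscale frame to a binary scan line and return the
    horizontal center of the detected foreground object (or None)."""
    # Simple per-pixel background subtraction
    diff = np.abs(frame.astype(int) - background.astype(int))
    # Vertical summing collapses each column to one value
    column_energy = diff.sum(axis=0)
    # Thresholding yields the binary "scan line"
    scan_line = column_energy > threshold * frame.shape[0]
    cols = np.flatnonzero(scan_line)
    if cols.size == 0:
        return None  # no foreground detected
    return cols.mean()  # only this scalar is communicated
```

Communicating a single scalar per camera is what makes the subsequent placement and selection analysis tractable in an energy-constrained network.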


Information Processing in Sensor Networks | 2007

Object tracking in the presence of occlusions via a camera network

Ali Ozer Ercan; Abbas El Gamal; Leonidas J. Guibas

This paper describes a sensor network approach to tracking a single object in the presence of static and moving occluders using a network of cameras. To conserve communication bandwidth and energy, each camera first performs simple local processing to reduce each frame to a scan line. This information is then sent to a cluster head to track a point object. We assume the locations of the static occluders to be known, but only prior statistics on the positions of the moving occluders are available. A noisy perspective camera measurement model is presented, where occlusions are captured through an occlusion indicator function. An auxiliary particle filter that incorporates the occluder information is used to track the object. Using simulations, we investigate (i) the dependency of the tracker performance on the accuracy of the moving occluder priors, (ii) the tradeoff between the number of cameras and the occluder prior accuracy required to achieve a prescribed tracker performance, and (iii) the importance of having occluder priors to the tracker performance as the number of occluders increases. We generally find that computing moving occluder priors may not be worthwhile, unless they can be obtained cheaply and to reasonable accuracy. Preliminary experimental results are provided.
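One common way an occlusion indicator enters a particle weight update is as a mixture between a flat clutter likelihood and the usual Gaussian measurement likelihood. The following is a hedged sketch of that idea; `h`, `occlusion_prob` and the clutter constant are illustrative placeholders, not the paper's exact model:

```python
import math

def weight_update(particles, z, h, sigma, occlusion_prob):
    """Reweight particles given a camera measurement z.

    h(x) maps a particle state to the predicted scan-line position;
    occlusion_prob(x) is the prior probability that the object at
    state x is occluded from this camera (hypothetical helpers).
    """
    p_clutter = 1e-3  # flat likelihood when the object is occluded
    weights = []
    for x in particles:
        p_occ = occlusion_prob(x)
        e = z - h(x)
        p_vis = math.exp(-0.5 * (e / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        # Mixture: occluded -> measurement is clutter; visible -> Gaussian
        weights.append(p_occ * p_clutter + (1.0 - p_occ) * p_vis)
    s = sum(weights)
    return [w / s for w in weights]
```

With this mixture, a measurement that disagrees with a particle does not annihilate its weight when the camera is likely occluded.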


ACM Transactions on Sensor Networks | 2013

Object tracking in the presence of occlusions using multiple cameras: A sensor network approach

Ali Ozer Ercan; Abbas El Gamal; Leonidas J. Guibas

This article describes a sensor network approach to tracking a single object in the presence of static and moving occluders using a network of cameras. To conserve communication bandwidth and energy, we combine a task-driven approach with camera subset selection. In the task-driven approach, each camera first performs simple local processing to detect the horizontal position of the object in the image. This information is then sent to a cluster head to track the object. We assume the locations of the static occluders to be known, but only prior statistics on the positions of the moving occluders are available. A noisy perspective camera measurement model is introduced, where occlusions are captured through occlusion indicator functions. An auxiliary particle filter that incorporates the occluder information is used to track the object. The camera subset selection algorithm uses the minimum mean square error of the best linear estimate of the object position as a metric, and tracking is performed using only the selected subset of cameras. Using simulations and preselected subsets of cameras, we investigate (i) the dependency of the tracker performance on the accuracy of the moving occluder priors, (ii) the trade-off between the number of cameras and the occluder prior accuracy required to achieve a prescribed tracker performance, and (iii) the importance of having occluder priors to the tracker performance as the number of occluders increases. We find that computing moving occluder priors may not be worthwhile, unless they can be obtained cheaply and to high accuracy. We also investigate the effect of dynamically selecting the subset of camera nodes used in tracking on the tracking performance. We show through simulations that a greedy selection algorithm performs close to the brute-force method and outperforms other heuristics, and the performance achieved by greedily selecting a small fraction of the cameras is close to that of using all the cameras.
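The greedy subset selection compared against brute force above can be sketched generically as follows; `mmse(subset)` is assumed to be a given oracle returning the minimum mean square error of the best linear estimate for a candidate camera subset (a sketch under that assumption, not the article's implementation):

```python
def greedy_select(cameras, k, mmse):
    """Greedily pick k cameras that minimize the MMSE of the best
    linear estimate of the object position.

    mmse(subset) -> float is a hypothetical oracle; each step adds
    the camera that lowers the error the most given those already chosen.
    """
    selected = []
    remaining = list(cameras)
    for _ in range(k):
        best = min(remaining, key=lambda c: mmse(selected + [c]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Greedy selection evaluates O(k·n) subsets instead of the combinatorial number required by brute force, which is why it suits an energy-constrained camera network.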


IEEE Transactions on Image Processing | 2015

Fusing Inertial Sensor Data in an Extended Kalman Filter for 3D Camera Tracking

Arif Tanju Erdem; Ali Ozer Ercan

In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well-known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused at the EKF with the camera measurements in either the correction stage (as measurement inputs) or the prediction stage (as control inputs). In the literature, generally only one type of inertial sensor is employed in the EKF, or when both are employed, they are fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs, using the same data set collected at different motion speeds. In particular, we compare the performance of the different approaches based on 3D pose errors, in addition to the camera reprojection errors commonly reported in the literature, which provides further insight into the strengths and weaknesses of each approach. We show using both simulated and real data that it is always better to fuse both sensors in the measurement stage and that, in particular, the accelerometer helps more with 3D position tracking accuracy, whereas the gyroscope helps more with 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
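To make the two fusion options concrete, here is a minimal 1D constant-velocity EKF sketch (all matrices and noise parameters are hypothetical; the actual system is a full 3D pose filter). Fusing the accelerometer in the prediction stage treats its reading as a control input, while fusing it in the correction stage would stack it into the measurement vector `z` alongside the camera measurement:

```python
import numpy as np

def predict(x, P, a, dt, q):
    """Prediction step; accelerometer reading a used as a control input."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
    B = np.array([0.5 * dt**2, dt])         # how acceleration enters the state
    x = F @ x + B * a
    P = F @ P @ F.T + q * np.eye(2)
    return x, P

def correct(x, P, z, H, r):
    """Correction step; z is a camera position fix here, but in the
    measurement-fusion variant it would also contain inertial readings."""
    S = H @ P @ H.T + r * np.eye(H.shape[0])
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

The control-input variant keeps the measurement vector small, while the measurement-fusion variant lets the filter weigh inertial readings against their noise, which is the trade-off the paper quantifies.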


IEEE Sensors | 2002

Experimental high speed CMOS image sensor system and applications

Ali Ozer Ercan; Feng Xiao; Xinqiao Liu; SukHwan Lim; A. El Gamal; Brian A. Wandell

CMOS image sensors are capable of very high-speed non-destructive readout, enabling many novel applications. To explore such applications, we designed and prototyped an experimental high-speed imaging system based on a CMOS digital pixel sensor (DPS). The experimental system comprises a PCB with the DPS chip interfaced to a PC via three I/O cards, supported by an easy-to-use software environment. The system is capable of image acquisition at rates of up to 1,400 frames/sec. After describing the DPS chip and the experimental imaging system, we present two applications: dynamic range extension and optical flow estimation. These applications rely on the DPS's ability to perform non-destructive readout of multiple frames at high speed.
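Dynamic range extension via non-destructive readout can be sketched as follows: each pixel is sampled several times within one exposure, and its intensity is estimated from the last sample taken before saturation. This is a simplified illustration of the general idea, not the chip's actual algorithm:

```python
import numpy as np

def extend_dynamic_range(samples, times, full_well=255):
    """Estimate per-pixel light intensity from non-destructively read
    samples taken at increasing exposure times.

    samples: array-like of shape (n_reads, H, W); times: n_reads exposure
    times. Bright pixels use early (short) reads before they saturate,
    dim pixels use late (long) reads for better SNR.
    """
    samples = np.asarray(samples, dtype=float)
    intensity = np.zeros(samples.shape[1:])
    for frame, t in zip(samples, times):
        unsaturated = frame < full_well
        # Overwrite with the latest unsaturated reading (best SNR so far)
        intensity[unsaturated] = frame[unsaturated] / t
    return intensity
```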


American Control Conference | 2011

On sensor fusion for head tracking in augmented reality applications

Ali Ozer Ercan; Tanju Erdem

The paper presents a simple setup consisting of a camera and an accelerometer located on a head-mounted display, and investigates the performance of head tracking for augmented reality applications using this setup. The information from the visual and inertial sensors is fused in an extended Kalman filter (EKF) tracker. The performance of treating accelerometer readings as control inputs is compared to treating both camera and accelerometer readings as measurements, i.e., fusing them simultaneously in the measurement update stage of the EKF. It is concluded via simulations that treating accelerometer readings as control inputs performs practically as well as treating both as measurements, while yielding a lower-complexity tracker.


Software: Practice and Experience | 2014

Fault masking as a service

Koray Gülcü; Hasan Sözer; Baris Aktemur; Ali Ozer Ercan

In service-oriented architectures, composite services depend on a set of partner services to perform their tasks. These partner services may become unavailable because of system and/or network faults, leading to an increased error rate for the composite service. In this paper, we propose an approach to prevent the occurrence of errors that result from the unavailability of partner services. We introduce an external Web service, dubbed Fault Avoidance Service (FAS), to which composite services can register at will. After registration, FAS periodically checks the partner links, detects unavailable partner services, and updates the composite service with available alternatives. Thus, in case of a partner service error, the composite service will have been updated before attempting an ill-destined request. We provide mathematical analysis of the error rate and the false positive rate with respect to the monitoring frequency of FAS for two models. We obtained empirical results by conducting several tests on the Amazon Elastic Compute Cloud to evaluate our mathematical analyses. We also present an industrial case study on improving the quality of a service-oriented system from the broadcasting and content delivery domain.
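The monitor-and-rebind cycle described above can be sketched in a few lines; `is_available` and `rebind` are hypothetical helpers standing in for the actual probing and WS-BPEL update machinery:

```python
import time

def fault_avoidance_loop(partner_links, alternatives, is_available,
                         rebind, period_s=5.0, rounds=1):
    """Periodically probe partner services and rebind unavailable ones
    to an available alternative before the composite service fails.

    partner_links: {name: endpoint}; alternatives: {name: [endpoints]}.
    """
    for _ in range(rounds):
        for name, endpoint in list(partner_links.items()):
            if is_available(endpoint):
                continue  # partner healthy; nothing to do
            for candidate in alternatives.get(name, []):
                if is_available(candidate):
                    partner_links[name] = candidate
                    rebind(name, candidate)  # update the composite service
                    break
        time.sleep(period_s)
    return partner_links
```

The choice of `period_s` is exactly the monitoring-frequency knob the paper analyzes: probing more often lowers the error rate but raises overhead and false positives.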


International Conference on Software Testing, Verification and Validation Workshops | 2016

Successive Refinement of Models for Model-Based Testing to Increase System Test Effectiveness

Ceren Sahin Gebizli; Hasan Sözer; Ali Ozer Ercan

Model-based testing automatically generates test cases from models of the system under test, so the effectiveness of the tests depends on the contents of these models. We therefore introduce a novel three-step model refinement approach. We represent test models as Markov chains. First, we update the state transition probabilities in these models based on the usage profile. Second, we perform an update based on fault likelihood, estimated with static code analysis. Our third update is based on error likelihood, estimated with dynamic analysis. We generate and execute test cases after each refinement. We applied our approach to the model-based testing of a Smart TV system, and new faults were revealed after each refinement.
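The refinement idea, reweighting transition probabilities by an external likelihood estimate and then walking the chain to generate tests, can be sketched as follows. The `weight` function is a hypothetical stand-in for the usage, fault, or error likelihood estimates named above:

```python
import random

def refine(transitions, weight):
    """Reweight outgoing transition probabilities of a Markov-chain
    test model. transitions[s] maps next-state -> probability;
    weight(s, t) is a usage/fault/error likelihood estimate."""
    refined = {}
    for s, nxt in transitions.items():
        scaled = {t: p * weight(s, t) for t, p in nxt.items()}
        total = sum(scaled.values())
        refined[s] = {t: v / total for t, v in scaled.items()}
    return refined

def generate_test(transitions, start, end, rng=random.Random(0)):
    """A random walk through the (refined) model yields one test case."""
    path, state = [start], start
    while state != end:
        states, probs = zip(*transitions[state].items())
        state = rng.choices(states, weights=probs)[0]
        path.append(state)
    return path
```

Each refinement step reuses the same two operations with a different `weight`, so suspicious states are visited more often by the generated tests.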


Signal Processing and Communications Applications Conference | 2012

Architecture for a distributed OpenFlow controller

Volkan Yazici; M. Oguz Sunay; Ali Ozer Ercan

Given modern Internet traffic rates, the network architecture is as important as the services running over it. However, due to the increasing complexity and black-box structure of available networking hardware (switches, routers, etc.), the network innovation required by these services has become infeasible in practice. The software-defined networking paradigm was introduced to solve this problem, and one of its emerging and powerful implementations, the OpenFlow protocol, advocates separating the control and data paths into distinct planes. A network operating system running on the control plane is expected to provide the scalability and reliability needed to withstand the enormous traffic carried by the network. In this paper, we propose a distributed OpenFlow network operating system that achieves the necessary scalability and reliability without requiring any changes to the existing OpenFlow protocol or networking equipment.


Advanced Video and Signal Based Surveillance | 2014

Occlusion-aware 3D multiple object tracker with two cameras for visual surveillance

Osman Topcu; A. Aydin Alatan; Ali Ozer Ercan

An occlusion-aware tracker for multiple deformable objects, for visual surveillance with two cameras, is presented. Each object is tracked by a separate particle filter, which is initiated upon detection of a new person and terminated when he or she leaves the scene. Objects are modeled as 3D points at their centers of mass, as if their mass density were uniform. Relating these point objects to the corresponding silhouette centroids in the two views, together with the epipolar geometry they satisfy, yields a practical tracking methodology. An occlusion filter is described that provides the particle filters with conditional occlusion probabilities of the objects, given their estimated positions. Advances over previous work, in the computation of conditional occlusion probabilities, in the incorporation of these probabilities into the particle filter, and in maintaining tracks of separating objects after long periods of moving close by, are demonstrated on the PETS 2006, PETS 2009 and EPFL datasets.
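Recovering a 3D point from its two silhouette centroids, given calibrated cameras, is commonly done with linear (DLT) triangulation. The sketch below illustrates that standard construction, not this paper's specific pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from its projections
    x1, x2 (image coordinates) in two cameras with 3x4 matrices P1, P2."""
    # Each image point contributes two linear constraints on the
    # homogeneous 3D point X, via x * (P[2] @ X) = P[row] @ X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null vector of A (last right-singular vector)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # back to inhomogeneous 3D coordinates
```

With noisy centroids, the SVD solution minimizes the algebraic error, which is usually good enough to seed a particle filter's 3D state.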

Collaboration


Dive into Ali Ozer Ercan's collaborations.

Top Co-Authors

A. Aydin Alatan
Middle East Technical University

Osman Topcu
Scientific and Technological Research Council of Turkey

Fred Burghardt
University of California