Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Nevrez Imamoglu is active.

Publication


Featured research published by Nevrez Imamoglu.


IEEE Transactions on Multimedia | 2013

A Saliency Detection Model Using Low-Level Features Based on Wavelet Transform

Nevrez Imamoglu; Weisi Lin; Yuming Fang

Researchers have been taking advantage of visual attention in various image processing applications such as image retargeting and video coding. Recently, many saliency detection algorithms have been proposed that extract features in the spatial or transform domain. In this paper, a novel saliency detection model is introduced that utilizes low-level features obtained from the wavelet transform domain. First, the wavelet transform is employed to create multi-scale feature maps that can represent different features, from edges to texture. Then, we propose a computational model for deriving the saliency map from these features. The proposed model modulates local contrast at a location with its global saliency, computed from the likelihood of the features, and considers both local center-surround differences and global contrast in the final saliency map. Experimental evaluation shows promising results, with the proposed model outperforming relevant state-of-the-art saliency detection models.
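The abstract does not include an implementation, but the pipeline it describes (wavelet decomposition into multi-scale feature maps, then a likelihood-based global weighting of local contrast) can be sketched roughly as below. This is a minimal illustration, not the authors' code; the `db4` wavelet, the 64-bin histogram used for the likelihood, and the smoothing sigma are assumed values.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom, gaussian_filter

def wavelet_saliency(gray, wavelet="db4", levels=4):
    """gray: 2-D float array in [0, 1]; returns a normalized saliency map."""
    h, w = gray.shape
    coeffs = pywt.wavedec2(gray, wavelet, level=levels)
    saliency = np.zeros((h, w))
    # coeffs[0] is the approximation; coeffs[1:] hold the (cH, cV, cD)
    # detail bands per level, ordered from coarsest to finest.
    for cH, cV, cD in coeffs[1:]:
        feat = np.sqrt(cH**2 + cV**2 + cD**2)  # edge/texture energy
        feat = zoom(feat, (h / feat.shape[0], w / feat.shape[1]), order=1)
        saliency += feat
    # Global modulation (simplified): rare feature values are weighted
    # higher via self-information, -log p(feature).
    hist, edges = np.histogram(saliency, bins=64)
    p = hist / hist.sum()
    idx = np.clip(np.digitize(saliency, edges[1:-1]), 0, 63)
    saliency = saliency * -np.log(p[idx] + 1e-8)
    saliency = gaussian_filter(saliency, sigma=0.02 * max(h, w))
    smin, smax = saliency.min(), saliency.max()
    return (saliency - smin) / (smax - smin + 1e-8)
```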


Expert Systems with Applications | 2012

Autonomous quadrotor flight with vision-based obstacle avoidance in virtual environment

Aydın Eresen; Nevrez Imamoglu; Mehmet Önder Efe

In this paper, vision-based autonomous flight with a quadrotor-type unmanned aerial vehicle (UAV) is presented. Automatic detection of obstacles and junctions is achieved by the use of optical flow velocities. Variation in the optical flow is used to determine the reference yaw angle. The path to be followed is generated autonomously, and path following is achieved via a PID controller operating as the low-level control scheme. The proposed method is tested in the Google Earth(R) virtual environment for four different destination points. In each case, autonomous UAV flight is successfully simulated without observing collisions. The results show that the proposed method is a powerful candidate for vision-based navigation in an urban environment. The claims are justified with a set of experiments, and it is concluded that proper thresholding of the variance of the gradient of the optical flow difference has a critical effect on the detectability of roads of different widths.
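As a rough illustration of the optical-flow idea (not the paper's implementation), the sketch below derives a reference yaw angle from the imbalance between left-half and right-half flow magnitudes and tracks it with a PID controller. The Farneback parameters, the steering convention, and the gains are assumptions.

```python
import cv2
import numpy as np

class PID:
    """Textbook PID controller, used here as the low-level yaw tracker."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def yaw_reference(prev_gray, gray, gain=0.5):
    """Reference yaw from dense optical flow between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    half = mag.shape[1] // 2
    # Larger flow on one side suggests closer obstacles there, so the
    # reference yaw steers toward the side with less flow.
    imbalance = mag[:, :half].mean() - mag[:, half:].mean()
    return gain * imbalance

# Per-frame glue (illustrative): err = yaw_reference(prev, cur) - current_yaw
# would feed PID(...).step(err) to produce the yaw-rate command.
```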


Signal Processing: Image Communication | 2018

A novel superpixel-based saliency detection model for 360-degree images

Yuming Fang; Xiaoqiang Zhang; Nevrez Imamoglu

Effective visual attention modeling is a key factor in enhancing the overall Quality of Experience (QoE) of VR/AR data. Although a huge number of algorithms have been developed in recent years to detect salient regions in flat 2D images, research on 360-degree image saliency is limited. In this study, we propose a superpixel-level saliency detection model for 360-degree images based on the figure-ground law of Gestalt theory. First, the input image is segmented into superpixels. The CIE Lab color space is then used to extract perceptual features: luminance and texture features are extracted from the L channel, while color features are extracted from the a and b channels. We compute two components for saliency prediction following the figure-ground law: feature contrast and boundary connectivity. The feature contrast is computed at the superpixel level from the luminance and color features. The boundary connectivity serves as a background measure and describes the spatial layout of an image region with respect to the two image boundaries (upper and lower); the left and right edges of a 360-degree image wrap around and are therefore not true boundaries. The final saliency map of the 360-degree image is calculated by fusing the feature contrast map and the boundary connectivity map. Experimental results on a public eye-tracking database of 360-degree images show the promising performance of the proposed method for saliency prediction.
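A compact sketch of the described pipeline (SLIC superpixels, CIE Lab features, feature contrast, and a boundary-connectivity background cue restricted to the upper/lower boundaries) might look as follows; the segment count, distance scales, and fusion rule are simplified assumptions rather than the paper's exact formulation.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic

def superpixel_saliency_360(rgb, n_segments=400):
    """rgb: H x W x 3 image; returns a per-pixel saliency map in [0, 1]."""
    lab = rgb2lab(rgb)
    seg = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    n = seg.max() + 1
    h, w = seg.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.zeros((n, 3))  # mean (L, a, b) per superpixel
    pos = np.zeros((n, 2))    # mean (y, x), normalized to [0, 1]
    for i in range(n):
        m = seg == i
        feats[i] = lab[m].mean(axis=0)
        pos[i] = ys[m].mean() / h, xs[m].mean() / w
    # Feature contrast: Lab distance to all other superpixels,
    # down-weighted by spatial distance.
    fdist = np.linalg.norm(feats[None] - feats[:, None], axis=2)
    sdist = np.linalg.norm(pos[None] - pos[:, None], axis=2)
    contrast = (fdist * np.exp(-sdist / 0.25)).mean(axis=1)
    # Boundary connectivity (simplified): superpixels similar to the
    # upper/lower boundary are treated as background; left/right edges
    # wrap around in equirectangular images, so they are ignored.
    boundary = np.unique(np.concatenate([seg[0], seg[-1]]))
    bdist = fdist[:, boundary].min(axis=1)
    saliency = contrast * (1.0 - np.exp(-bdist / 10.0))
    saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)
    return saliency[seg]  # paint each superpixel's score back onto pixels
```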


The Scientific World Journal | 2014

Development of Robust Behaviour Recognition for an at-Home Biomonitoring Robot with Assistance of Subject Localization and Enhanced Visual Tracking

Nevrez Imamoglu; Enrique Dorronzoro; Zhixuan Wei; Huangjun Shi; Masashi Sekine; Jose Gonzalez; Dongyun Gu; Weidong Chen; Wenwei Yu

Our research is focused on the development of an at-home health care biomonitoring mobile robot for people in need. The main task of the robot is to detect and track a designated subject while recognizing his/her activity for analysis, and to provide a warning in an emergency. In order to push the system towards real application, in this study we tested the robustness of the robot system against several major environmental changes, control parameter changes, and subject variation. First, an improved color tracker was analyzed to find the limitations and constraints of the robot's visual tracking, considering suitable illumination values and tracking distance intervals. Then, regarding subject safety and continuous robot-based subject tracking, various control parameters were tested on different layouts in a room. Finally, since the main objective of the system is to detect walking activities of different patterns for further analysis, we proposed a fast, simple, and person-specific activity recognition model that makes full use of localization information and is robust to partial occlusion. The proposed activity recognition algorithm was tested on different walking patterns with different subjects, and the results showed high recognition accuracy.


arXiv: Computer Vision and Pattern Recognition | 2013

An Improved Saliency for RGB-D Visual Tracking and Control Strategies for a Bio-monitoring Mobile Robot

Nevrez Imamoglu; Zhixuan Wei; Huangjun Shi; Yuki Yoshida; Myagmarbayar Nergui; Jose Gonzalez; Dongyun Gu; Weidong Chen; Kenzo Nonami; Wenwei Yu

Our previous studies demonstrated that the idea of bio-monitoring home healthcare mobile robots is feasible. By developing algorithms for mobile robot based tracking, measurement, and activity recognition of human subjects, we would be able to help motor-function impaired people (MIPs) spend more time focusing on their motor-function rehabilitation process at home. In this study we aimed at improving two important modules in these kinds of systems: the control of the robot and visual tracking of the human subject. For this purpose: 1) tracking strategies for different types of home environments were tested in a simulator to investigate their effect on robot behavior; 2) a multi-channel saliency fusion model with high perceptual quality was proposed and integrated into RGB-D based visual tracking. Regarding the control strategies, the results showed that different tracking strategies should be employed depending on the type of room environment. For visual tracking, the proposed saliency fusion model yielded good results by improving the saliency output, and its integration resulted in better performance of the RGB-D based visual tracking application.

Saliency computation has become a popular research field for many applications due to the useful information provided by saliency maps. For a saliency map, local relations around the salient regions should be taken into consideration from a multi-channel perspective, aiming for uniformity over the region of interest as an internal approach, and irrelevant salient regions have to be avoided as much as possible. Most works achieve these criteria with external processing modules; however, they can be accomplished during the conspicuity map fusion process. Therefore, in this paper, a new model is proposed for saliency/conspicuity map fusion based on two concepts: a) input image transformation relying on principal component analysis (PCA), and b) conspicuity map fusion with a multi-channel pulse-coupled neural network (m-PCNN). Experimental results, evaluated by precision, recall, F-measure, and area under the curve (AUC), support the reliability of the proposed method in enhancing the saliency computation.
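The two fusion concepts named in the second part of the abstract can be sketched in a much-simplified form: PCA to decorrelate the input channels, and a basic pulse-coupled neural network (PCNN) pass whose firing times fuse the conspicuity maps. The constants and the 3x3 linking scheme below are illustrative, not the paper's m-PCNN formulation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def pca_channels(img):
    """Project an H x W x C image onto its principal components."""
    h, w, c = img.shape
    flat = img.reshape(-1, c)
    flat = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return (flat @ vt.T).reshape(h, w, c)

def pcnn_fuse(maps, iters=30, beta=0.2, decay=0.7):
    """maps: list of H x W conspicuity maps scaled to [0, 1]."""
    S = np.mean(maps, axis=0)              # combined feeding input
    Y = np.zeros_like(S)                   # pulse output
    theta = np.ones_like(S)                # dynamic threshold
    fire_time = np.full(S.shape, float(iters))
    for t in range(iters):
        L = uniform_filter(Y, size=3)      # linking from 3x3 neighbours
        U = S * (1.0 + beta * L)           # internal activity
        Y = (U > theta).astype(float)
        newly = (Y > 0) & (fire_time == iters)
        fire_time[newly] = t               # record first firing iteration
        theta = decay * theta + 5.0 * Y    # refractory jump where fired
    # Earlier firing means a stronger response, i.e. higher saliency.
    return 1.0 - fire_time / iters
```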


IPSJ Transactions on Computer Vision and Applications | 2015

Spatial Visual Attention for Novelty Detection: A Space-based Saliency Model in 3D Using Spatial Memory

Nevrez Imamoglu; Enrique Dorronzoro; Masashi Sekine; Kahori Kita; Wenwei Yu

Saliency maps, as visual attention computational models, can reveal novel regions within a scene (as in the human visual system), which can decrease the amount of data to be processed in task-specific computer vision applications. Most saliency computation models do not take advantage of prior spatial memory when giving priority to spatial or object-based features to obtain bottom-up or top-down saliency maps. In our previous experiments, we demonstrated that spatial memory, regardless of object features, can aid detection and tracking tasks with a mobile robot by using a 2D global environment memory of the robot and local Kinect data in 2D to compute a space-based saliency map. However, in complex scenes where 2D space-based saliency is not enough (e.g., a subject lying on a bed), 3D scene analysis is necessary to extract novelty within the scene using spatial memory. Therefore, in this work, to improve the detection of novelty in a known environment, we propose a space-based spatial saliency with 3D local information, improving the 2D space-based saliency with height as prior information about specific locations. Moreover, the algorithm can be integrated with other bottom-up or top-down saliency computational models to improve their detection results. Experimental results demonstrate that high accuracy for novelty detection can be obtained, and that computational time can be reduced for existing state-of-the-art detection and tracking models with the proposed algorithm.
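A minimal sketch of the height-prior idea under stated assumptions: the spatial memory is taken to be a stored top-down height map of the known environment, and novelty is scored where the current depth-derived height deviates from memory. The grid representation and tolerance are hypothetical.

```python
import numpy as np

def height_novelty(memory_height, current_height, tol=0.15):
    """
    memory_height, current_height: 2-D arrays (meters) on the same
    top-down grid of the environment; NaN marks unobserved cells.
    Returns a [0, 1] novelty (space-based saliency) map.
    """
    diff = np.abs(current_height - memory_height)
    observed = ~np.isnan(diff)
    novelty = np.zeros_like(memory_height)
    # Deviation beyond the tolerance is novel; dividing by tol gives a
    # soft score instead of a hard binary mask.
    novelty[observed] = np.clip((diff[observed] - tol) / tol, 0.0, 1.0)
    return novelty
```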


International Conference of the IEEE Engineering in Medicine and Biology Society | 2014

Ultrasound imaging and semi-automatic analysis of active muscle features in electrical stimulation by optical flow.

Shota Kawamoto; Nevrez Imamoglu; Jose Gomez-Tames; Kahori Kita; Wenwei Yu

Ultrasound imaging is an effective way to measure muscle activity in electrical stimulation studies. However, it is time-consuming to manually measure the pennation angle and muscle thickness, which are the benchmark features for analyzing muscle activity from ultrasound images. In previous studies, these muscle features were measured by calculating the optical flow of the pennation angle using only the muscle fibers visible in the ultrasound, without carefully considering the moving muscle edges during active and passive contraction. Therefore, this study aimed to measure the pennation angle and muscle thickness quantitatively, using both the edges and the fibers of a muscle, in a semi-automatic optical-flow-based approach. The results of the semi-automatic analysis were compared with those of manual measurement. The comparison makes clear that the proposed algorithm achieves higher accuracy in tracking the thickness and pennation angle over a sequence of ultrasound images.
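A hedged sketch of the semi-automatic measurement: points seeded (manually, hence "semi-automatic") on the aponeurosis edges and on a fascicle are tracked frame-to-frame with pyramidal Lucas-Kanade optical flow, and the pennation angle is re-derived from line fits. The window size and the line-fit choice are assumptions.

```python
import cv2
import numpy as np

def track_points(prev_gray, gray, pts):
    """Track N x 2 points between two grayscale frames with pyramidal LK."""
    pts = pts.reshape(-1, 1, 2).astype(np.float32)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    return nxt.reshape(-1, 2), status.ravel().astype(bool)

def line_angle(pts):
    """Angle (radians) of the least-squares line through N x 2 points."""
    vx, vy, _, _ = cv2.fitLine(pts.astype(np.float32), cv2.DIST_L2,
                               0, 0.01, 0.01)
    return float(np.arctan2(vy.item(), vx.item()))

def pennation_angle(fascicle_pts, aponeurosis_pts):
    """Pennation angle as the angle between fascicle and aponeurosis lines."""
    return abs(line_angle(fascicle_pts) - line_angle(aponeurosis_pts))
```

Muscle thickness would follow analogously, as the perpendicular distance between the two fitted aponeurosis lines.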


International Journal of Intelligent Unmanned Systems | 2013

Human motion tracking and recognition using HMM by a mobile robot

Myagmarbayar Nergui; Yuki Yoshida; Nevrez Imamoglu; Jose Gonzalez; Masashi Sekine; Wenwei Yu

Purpose – The aim of this paper is to develop autonomous mobile home healthcare robots that are capable of observing patients' motions, recognizing the patients' behaviours based on observation data, and automatically calling for medical personnel in emergency situations. The robots to be developed will bring cost-effective, safe, and easier at-home rehabilitation to most motor-function impaired patients (MIPs).

Design/methodology/approach – The paper developed the following programs/control algorithms: control algorithms for a mobile robot to track and follow human motions, to measure human joint trajectories, and to calculate angles of lower limb joints; and algorithms for recognizing human gait behaviours based on the calculated joint angle data.

Findings – A Hidden Markov Model (HMM) based human gait behaviour recognition taking lower limb joint angles and body angle as input was proposed. The proposed HMM based gait behaviour recognition is compared with the Nearest Neighbour (NN) cla...
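A hedged sketch of HMM-based gait behaviour classification as described: one Gaussian HMM is trained per behaviour on sequences of joint/body angles, and a new sequence is labelled by the model with the highest log-likelihood. The hmmlearn library is an assumption, as is the feature layout (e.g., [hip, knee, ankle, body] angles per frame).

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_models(sequences_by_label, n_states=4):
    """sequences_by_label: {label: [T_i x D arrays of joint/body angles]}."""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)                 # concatenated observations
        lengths = [len(s) for s in seqs]    # per-sequence lengths
        m = GaussianHMM(n_components=n_states, covariance_type="diag",
                        n_iter=100)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Label a T x D angle sequence by maximum log-likelihood."""
    return max(models, key=lambda lbl: models[lbl].score(seq))
```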


Signal, Image and Video Processing | 2018

An integration of bottom-up and top-down salient cues on RGB-D data: saliency from objectness versus non-objectness

Nevrez Imamoglu; Wataru Shimoda; Chi Zhang; Yuming Fang; Asako Kanezaki; Keiji Yanai; Yoshifumi Nishida

Bottom-up and top-down visual cues are two types of information that help visual saliency models. These salient cues can come from the spatial distribution of features (space-based saliency) or from contextual/task-dependent features (object-based saliency). Saliency models generally incorporate salient cues in either a bottom-up or a top-down manner, separately. In this work, we combine bottom-up and top-down cues from both space-based and object-based salient features on RGB-D data. In addition, we investigate the ability of various pre-trained convolutional neural networks to extract top-down saliency on color images based on object-dependent feature activation. We demonstrate that combining salient features from color and depth through bottom-up and top-down methods gives a significant improvement in salient object detection with space-based and object-based salient cues. The RGB-D saliency integration framework yields promising results compared with several state-of-the-art models.
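A hedged sketch of the integration idea: a crude bottom-up map (center-surround contrast on color and depth) fused with a top-down map taken from the activations of a pre-trained CNN. Averaging the last convolutional feature maps of ResNet-18 is a stand-in for the paper's object-dependent activation approach; the network choice and the fusion weight are assumptions.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from scipy.ndimage import gaussian_filter

def bottom_up(rgb, depth, sigma=15):
    """Center-surround contrast; rgb H x W x 3 and depth H x W, both in [0, 1]."""
    feats = np.dstack([rgb, depth[..., None]])
    sal = sum(np.abs(feats[..., c] - gaussian_filter(feats[..., c], sigma))
              for c in range(feats.shape[2]))
    return sal / (sal.max() + 1e-8)

def top_down(rgb):
    """Objectness proxy: mean of the last conv activations of ResNet-18."""
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    x = TF.to_tensor(rgb.astype(np.float32))  # float input: no /255 rescale
    x = TF.normalize(x, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])[None]
    feat = {}
    net.layer4.register_forward_hook(lambda m, i, o: feat.update(a=o))
    with torch.no_grad():
        net(x)
    act = feat["a"][0].mean(0)                # average over channels
    act = torch.nn.functional.interpolate(act[None, None], rgb.shape[:2],
                                          mode="bilinear")[0, 0].numpy()
    return (act - act.min()) / (act.max() - act.min() + 1e-8)

def fuse(rgb, depth, w=0.5):
    """Weighted fusion of the bottom-up and top-down maps."""
    return w * bottom_up(rgb, depth) + (1 - w) * top_down(rgb)
```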


International Conference on Intelligent Autonomous Systems | 2016

Active Sensing for Human Activity Recognition by a Home Bio-monitoring Robot in a Home Living Environment

Keigo Nakahata; Enrique Dorronzoro; Nevrez Imamoglu; Masashi Sekine; Kahori Kita; Wenwei Yu

It has been shown that mobile robots could be a potential solution to home bio-monitoring for the elderly. In our previous studies, a mobile robot system that is able to recognize the daily living activities of a target person was developed. However, a home environment contains several sources of uncertainty, such as confusion with surrounding objects and occlusion by furniture, so the extracted features cannot guarantee correct recognition. To solve this problem, we applied an active sensing strategy to the robot, specifically to the body-contour-based behavior recognition part, by implementing three algorithms in a row that enable (1) judging the irregularity of feature extraction, (2) adjusting robot viewpoints accordingly, and (3) avoiding excessive viewpoint adjustment based on a short-term memory mechanism. In an experiment in a home living scenario, higher activity recognition accuracy was achieved with the proposed active sensing algorithms.
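The three-step loop reads naturally as a small controller; the sketch below is an illustrative reconstruction, with the confidence threshold, memory length, and move_robot callback all hypothetical.

```python
from collections import deque

class ActiveSensing:
    """Illustrative active-sensing loop: (1) judge irregularity,
    (2) adjust viewpoint, (3) damp repeated moves via short-term memory."""
    def __init__(self, move_robot, conf_threshold=0.6,
                 memory_len=10, max_recent_moves=2):
        self.move_robot = move_robot              # callback: move_robot(angle)
        self.conf_threshold = conf_threshold
        self.max_recent_moves = max_recent_moves
        self.memory = deque(maxlen=memory_len)    # 1 = moved, 0 = stayed

    def step(self, feature_confidence, suggested_angle):
        irregular = feature_confidence < self.conf_threshold      # step (1)
        if irregular and sum(self.memory) < self.max_recent_moves:
            self.move_robot(suggested_angle)                      # step (2)
            self.memory.append(1)
        else:
            self.memory.append(0)                                 # step (3)
        return irregular
```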

Collaboration


Dive into Nevrez Imamoglu's collaboration.

Top Co-Authors

Ryosuke Nakamura

National Institute of Advanced Industrial Science and Technology


Yuming Fang

Jiangxi University of Finance and Economics


Dongyun Gu

Shanghai Jiao Tong University
