Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Mariko Isogawa is active.

Publication


Featured research published by Mariko Isogawa.


International Conference on Image Processing (ICIP) | 2016

Eye gaze analysis and learning-to-rank to obtain the most preferred result in image inpainting

Mariko Isogawa; Dan Mikami; Kosuke Takahashi; Akira Kojima

This paper proposes a method that blindly predicts the preference order between inpainted images, aiming to select the best result from multiple candidates. Image inpainting, which removes unwanted regions and restores them, has attracted recent attention. However, inpainting results are known to vary greatly with the method used and its parameter settings. Thus, in a typical use case, users must manually select the inpainting method and the parameters that yield the best result. This manual selection takes a great deal of time, so there is a strong need for a way to estimate the best result automatically. Although some methods, such as estimating a perceptual preference score from image features, have been proposed in recent years, none of them has proven to be a reliable approach. Our method focuses on two points: (1) what is essentially needed is a preference order relation rather than an absolute score, and (2) image features for order estimation can be designed effectively by using actually measured human visual attention. Comparison with other image quality assessment methods shows that our method estimates the preference order with high accuracy.
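The pairwise ordering idea at the heart of this work can be illustrated with the standard reduction of ranking to binary classification on feature differences. The Python sketch below is a minimal RankSVM-style example, assuming generic random feature vectors in place of the paper's gaze-derived features; all names and data are hypothetical, not the paper's implementation.

```python
# Minimal sketch of pairwise preference-order estimation (RankSVM-style).
# The gaze-based feature extraction from the paper is NOT reproduced;
# feature vectors here are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC

def pairwise_transform(feats_a, feats_b, prefs):
    """Reduce 'which of (a, b) is preferred?' to binary classification
    on feature differences: label +1 means a is preferred over b."""
    X = feats_a - feats_b
    y = np.asarray(prefs)
    # Symmetrize so the learned scorer is antisymmetric in its inputs.
    return np.vstack([X, -X]), np.concatenate([y, -y])

rng = np.random.default_rng(0)
feats_a = rng.normal(size=(200, 16))    # features of inpainted image A
feats_b = rng.normal(size=(200, 16))    # features of inpainted image B
prefs = rng.choice([-1, 1], size=200)   # human preference labels

X, y = pairwise_transform(feats_a, feats_b, prefs)
ranker = LinearSVC(C=1.0).fit(X, y)

# Rank new candidate results by the learned scoring function w . f(img).
candidates = rng.normal(size=(5, 16))   # features of 5 inpainted results
scores = candidates @ ranker.coef_.ravel()
best = int(np.argmax(scores))           # index of the preferred result
```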


Multimedia Tools and Applications | 2017

Image and video completion via feature reduction and compensation

Mariko Isogawa; Dan Mikami; Kosuke Takahashi; Akira Kojima

This paper proposes a novel framework for image and video completion that removes unwanted regions and restores them. Most existing methods fail when no similar regions exist in the undamaged areas. To overcome this, our approach creates similar regions by projecting the original space onto a lower-dimensional feature space. The approach comprises three stages. First, input images/videos are converted to a lower-dimensional feature space. Second, the damaged region is restored in the converted feature space. Finally, an inverse conversion is performed from the lower-dimensional space back to the original space. This yields two advantages: (1) it enhances the possibility of applying patches that are dissimilar in the original color space, and (2) it enables the use of many existing restoration methods, each with its own advantages, because the only extension is the feature space in which similar patches are retrieved. The framework's effectiveness was verified in experiments with various conversion methods, restoration methods for the second stage, and inverse conversion methods.
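The three-stage pipeline can be sketched as follows, using grayscale conversion as a stand-in for the paper's lower-dimensional feature space and OpenCV's built-in inpainting for the restoration stage. This is only an illustration under those assumptions; the paper's actual conversion and compensation steps are more elaborate.

```python
# Minimal sketch of the three-stage completion framework.
# Stand-ins: grayscale = "lower-dimensional feature space",
# cv2.inpaint = "restoration in the reduced space".
import cv2
import numpy as np

def complete(image_bgr, mask):
    # Stage 1: convert to a lower-dimensional feature space (here: gray).
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Stage 2: restore the damaged region in the reduced space.
    restored_gray = cv2.inpaint(gray, mask, 5, cv2.INPAINT_TELEA)

    # Stage 3: convert back to the original space. Here we simply copy
    # the restored intensities into all three channels inside the hole;
    # the paper instead performs a learned inverse conversion/compensation.
    out = image_bgr.copy()
    hole = mask.astype(bool)
    out[hole] = np.repeat(restored_gray[hole][:, None], 3, axis=1)
    return out

img = np.full((64, 64, 3), 180, np.uint8)   # toy image
mask = np.zeros((64, 64), np.uint8)
mask[20:40, 20:40] = 255                    # damaged region to remove
result = complete(img, mask)
```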


International Symposium on Mixed and Augmented Reality (ISMAR) | 2015

[POSTER] Content Completion in Lower Dimensional Feature Space through Feature Reduction and Compensation

Mariko Isogawa; Dan Mikami; Kosuke Takahashi; Akira Kojima

A novel three-stage framework for image/video content completion is proposed. First, input images/videos are converted to a lower-dimensional feature space, which achieves effective restoration even when a damaged region includes complex structure and changes in color. Second, the damaged region is restored in the converted feature space. Finally, an inverse conversion from the lower-dimensional feature space to the original space generates the completed image. This three-step solution yields two advantages. First, it enhances the possibility of applying patches that are dissimilar in the original color space. Second, it enables the use of many existing restoration methods, each with its own advantages, because the only extension is the feature space in which similar patches are retrieved. Experiments verify the effectiveness of the proposed framework.


International Symposium on Mixed and Augmented Reality (ISMAR) | 2015

[POSTER] Toward Enhancing Robustness of DR System: Ranking Model for Background Inpainting

Mariko Isogawa; Dan Mikami; Kosuke Takahashi; Akira Kojima

A method for blindly predicting inpainted image quality is proposed for enhancing the robustness of diminished reality (DR), which uses inpainting to remove unwanted objects by replacing them with background textures in real time. The method maps from inpainted image features to subjective image quality scores without the need for reference images. It enables more complex background textures to be applied to DR.
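A no-reference quality predictor of this kind can be approximated with an off-the-shelf regressor that maps image features to subjective scores. The sketch below assumes placeholder features and synthetic scores; the paper's actual feature design is not reproduced.

```python
# Minimal sketch of blind (no-reference) quality prediction for inpainted
# images: regress subjective quality scores from image features.
# Features and scores here are random placeholders.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 32))       # features of inpainted images
mos = rng.uniform(1.0, 5.0, size=100)    # subjective quality scores

model = SVR(kernel="rbf", C=10.0).fit(feats, mos)
# Predict quality for new inpainted frames without any reference image.
predicted = model.predict(rng.normal(size=(3, 32)))
```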


Multimedia Tools and Applications | 2018

Image quality assessment for inpainted images via learning to rank

Mariko Isogawa; Dan Mikami; Kosuke Takahashi; Hideaki Kimata

This paper proposes an image quality assessment (IQA) method for image inpainting, aiming to select the best result from multiple candidates. Inpainting results are known to vary greatly with the method used and its parameter settings. Thus, in a typical use case, users must manually select the inpainting method and the parameters that yield the best result. This manual selection takes a great deal of time, so there is a strong need for a way to estimate the best result automatically. Unlike existing IQA methods for inpainting, our method solves this problem as a learning-based ordering task between inpainted images. This approach makes it possible to introduce auto-generated training sets for more effective learning, which has been difficult for existing methods because judging inpainting quality is quite subjective. Our method focuses on three points: (1) the problem can be divided into a set of "pairwise preference order estimation" subproblems, (2) this pairwise ordering approach enables a training set to be generated automatically, and (3) effective features can be designed by investigating actually measured human gazes for order estimation.
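One plausible reading of the automatic training-set generation is that an untouched original is, by construction, at least as good as any inpainted version of it, so (original, inpainted) pairs carry free preference labels. The sketch below illustrates that idea under this assumption; the helper functions are hypothetical placeholders, not the paper's implementation.

```python
# Minimal sketch of auto-generating ordered training pairs, assuming the
# original image is always the preferred side of each pair.
import numpy as np

def make_training_pairs(originals, inpaint_methods, features):
    """Build (preferred, non-preferred) feature pairs with no human labels."""
    pref, nonpref = [], []
    for img in originals:
        f_orig = features(img)
        for inpaint in inpaint_methods:
            pref.append(f_orig)                      # original: preferred
            nonpref.append(features(inpaint(img)))   # inpainted: non-preferred
    return np.vstack(pref), np.vstack(nonpref)

# Toy usage with hypothetical stand-ins for the inpainter and features.
toy_images = [np.zeros((32, 32)), np.ones((32, 32))]
fake_inpaint = lambda im: 0.9 * im        # stand-in for a real inpainter
feat = lambda im: im.reshape(1, -1)
preferred, nonpreferred = make_training_pairs(toy_images, [fake_inpaint], feat)
# These pairs feed a pairwise ranker (as sketched earlier) with labels
# fixed to +1 for the preferred side.
```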


Computer Vision and Pattern Recognition (CVPR) | 2017

Ball 3D Trajectory Reconstruction without Preliminary Temporal and Geometrical Camera Calibration

Shogo Miyata; Hideo Saito; Kosuke Takahashi; Dan Mikami; Mariko Isogawa; Hideaki Kimata

This paper proposes a method for reconstructing 3D ball trajectories using multiple cameras that are neither temporally synchronized nor geometrically calibrated. To measure the trajectory of a fast-moving object, such as a ball thrown by a pitcher, the cameras must be temporally synchronized and their positions and orientations must be calibrated. In some cases these conditions cannot be met; for example, cameras cannot be geometrically calibrated when one cannot step onto the field of a baseball stadium. The basic idea of the proposed method is to use the ball captured by multiple cameras as corresponding points. The method first detects the ball, then estimates the temporal offset between cameras, and finally uses the ball positions as corresponding points for geometrically calibrating the cameras. Experiments using actual pitching videos verify the effectiveness of our method.
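One way to realize this idea is to search over candidate frame offsets and, for each offset, fit epipolar geometry to the resulting ball correspondences, keeping the offset with the lowest epipolar error. The sketch below assumes 2D ball tracks are already available and uses OpenCV's fundamental-matrix estimation; it is an illustration of the idea, not the paper's exact algorithm.

```python
# Minimal sketch: jointly estimate the temporal offset and the epipolar
# geometry between two cameras from per-frame ball positions.
import cv2
import numpy as np

def sampson_error(F, pts1, pts2):
    """Mean Sampson (first-order epipolar) error of point correspondences."""
    p1 = cv2.convertPointsToHomogeneous(pts1).reshape(-1, 3)
    p2 = cv2.convertPointsToHomogeneous(pts2).reshape(-1, 3)
    Fp1 = p1 @ F.T                       # epipolar lines F x1
    Ftp2 = p2 @ F                        # epipolar lines F^T x2
    num = np.sum(p2 * Fp1, axis=1) ** 2
    den = Fp1[:, 0]**2 + Fp1[:, 1]**2 + Ftp2[:, 0]**2 + Ftp2[:, 1]**2
    return float(np.mean(num / den))

def sync_and_calibrate(track1, track2, max_offset=30):
    """track1, track2: (T, 2) arrays of per-frame ball image positions."""
    best_err, best_d, best_F = np.inf, None, None
    for d in range(-max_offset, max_offset + 1):
        t = np.arange(len(track1))
        valid = (t + d >= 0) & (t + d < len(track2))
        if valid.sum() < 8:              # need >= 8 points to fit F
            continue
        p1 = track1[valid].astype(np.float32)
        p2 = track2[t[valid] + d].astype(np.float32)
        F, _ = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 3.0)
        if F is None:
            continue
        err = sampson_error(F[:3], p1, p2)
        if err < best_err:
            best_err, best_d, best_F = err, d, F[:3]
    return best_d, best_F                # frame offset + epipolar geometry
```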


International Conference on Computer Vision Theory and Applications (VISAPP) | 2016

Cornea-reflection-based Extrinsic Camera Calibration without a Direct View

Kosuke Takahashi; Dan Mikami; Mariko Isogawa; Akira Kojima

In this paper, we propose a novel method for extrinsically calibrating a camera with respect to a 3D reference object that is not directly visible from the camera. We use the spherical human cornea as a mirror and calibrate the extrinsic parameters from the reflections of the reference points. The main contribution of this paper is a cornea-reflection-based calibration algorithm with a minimal configuration: five reference points on a single plane and one mirror pose. We derive a linear equation and obtain a closed-form solution for extrinsic calibration by introducing two key ideas. The first is to model the cornea as a virtual sphere, which enables us to estimate the center of the corneal sphere from its projection. The second is to represent the positions of the reference points with basis vectors, which enables us to handle the 3D information of the reference points compactly. In addition, to make our method robust to observation noise, we minimize the reprojection error while maintaining the valid 3D geometry of the solution based on the derived linear equation. We demonstrate the advantages of the proposed method through qualitative and quantitative evaluations using synthesized and real data.
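The cornea-as-spherical-mirror model at the core of the method can be illustrated by computing where a camera ray hits the corneal sphere and how it reflects: the corresponding reference point must lie along the reflected ray. The sketch below covers only this geometric building block; the numbers (including the standard eye-model corneal radius of about 7.8 mm) are illustrative, and the paper's closed-form solution and reprojection-error refinement are omitted.

```python
# Minimal sketch of reflecting a camera ray off the corneal sphere.
# Camera is at the origin; units are millimeters; values are illustrative.
import numpy as np

def reflect_off_sphere(ray_dir, center, radius):
    """Return the reflection point on the sphere and the reflected ray
    direction for a viewing ray from the origin (assumes the ray hits)."""
    d = ray_dir / np.linalg.norm(ray_dir)
    # Solve |t*d - center|^2 = radius^2 for the nearest intersection t.
    b = -2.0 * d @ center
    c = center @ center - radius**2
    t = (-b - np.sqrt(b * b - 4.0 * c)) / 2.0
    p = t * d                            # reflection point on the cornea
    n = (p - center) / radius            # outward surface normal
    r_dir = d - 2.0 * (d @ n) * n        # mirror reflection of the ray
    return p, r_dir

# A reference point reflected at the image center of a cornea 500 mm away.
p, r_dir = reflect_off_sphere(np.array([0.0, 0.0, 1.0]),
                              center=np.array([0.0, 0.0, 500.0]),
                              radius=7.8)   # standard corneal radius (mm)
```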


International Symposium on Mixed and Augmented Reality (ISMAR) | 2015

[POSTER] Automatic Visual Feedback from Multiple Views for Motor Learning

Dan Mikami; Mariko Isogawa; Kosuke Takahashi; Akira Kojima

A system providing visual feedback of a trainee's motions for effectively enhancing motor learning is presented. It automatically provides feedback synchronized with a reference motion, from multiple viewing angles, with a delay of only a few seconds. Because the feedback is provided automatically, a trainee can obtain it without performing any operations, while the memory of the motion is still fresh. By employing features with low computational cost, the system achieves synchronized video feedback with four cameras connected to a consumer tablet PC.
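The low-computational-cost synchronization can be illustrated by summarizing each frame with a cheap motion-activity scalar and cross-correlating the resulting signals to find the temporal offset between the reference and trainee videos. This is a minimal sketch under those assumptions, not the system's actual feature set or multi-camera pipeline.

```python
# Minimal sketch of cheap temporal alignment between two videos via
# frame-difference energy and normalized cross-correlation.
import numpy as np

def motion_signature(frames):
    """frames: (T, H, W) grayscale video; returns a (T-1,) activity signal."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.mean(axis=(1, 2))

def best_lag(sig_ref, sig_live, max_lag=60):
    """Lag d maximizing correlation of sig_ref[t] with sig_live[t + d]."""
    a = (sig_ref - sig_ref.mean()) / (sig_ref.std() + 1e-8)
    b = (sig_live - sig_live.mean()) / (sig_live.std() + 1e-8)
    def score(d):
        n = min(len(a), len(b) - d) if d >= 0 else min(len(a) + d, len(b))
        if n <= 1:
            return -np.inf
        return float(a[max(0, -d):max(0, -d) + n] @ b[max(0, d):max(0, d) + n]) / n
    return max(range(-max_lag, max_lag + 1), key=score)

# Toy usage: the "live" video starts 30 frames into the reference.
rng = np.random.default_rng(1)
ref = rng.random((120, 32, 32))
live = ref[30:]
lag = best_lag(motion_signature(ref), motion_signature(live))  # recovers -30
```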


Computer Vision and Pattern Recognition (CVPR) | 2018

Estimation of Center of Mass for Sports Scene Using Weighted Visual Hull

Tomoya Kaichi; Shohei Mori; Hideo Saito; Kosuke Takahashi; Dan Mikami; Mariko Isogawa; Hideaki Kimata


IPSJ Transactions on Computer Vision and Applications | 2016

Extrinsic Camera Calibration with Minimal Configuration Using Cornea Model and Equidistance Constraint

Kosuke Takahashi; Dan Mikami; Mariko Isogawa; Akira Kojima

Collaboration


Dive into Mariko Isogawa's collaborations.

Top Co-Authors

Akira Kojima

Nippon Telegraph and Telephone


Shiro Ozawa

Nippon Telegraph and Telephone
