
Publication


Featured research published by Yosuke Bando.


International Conference on Computer Graphics and Interactive Techniques | 2013

Compressive light field photography using overcomplete dictionaries and optimized projections

Kshitij Marwah; Gordon Wetzstein; Yosuke Bando; Ramesh Raskar

Light field photography has gained significant research interest in the last two decades; today, commercial light field cameras are widely available. Nevertheless, most existing acquisition approaches either multiplex a low-resolution light field into a single 2D sensor image or require multiple photographs to be taken for acquiring a high-resolution light field. We propose a compressive light field camera architecture that allows for higher-resolution light fields to be recovered than previously possible from a single image. The proposed architecture comprises three key components: light field atoms as a sparse representation of natural light fields, an optical design that allows for capturing optimized 2D light field projections, and robust sparse reconstruction methods to recover a 4D light field from a single coded 2D projection. In addition, we demonstrate a variety of other applications for light field atoms and sparse coding, including 4D light field compression and denoising.
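The reconstruction step lends itself to a compact sketch. Below is a minimal orthogonal matching pursuit in Python for recovering sparse coefficients from a single coded measurement y = (Phi D) alpha; the random matrix A stands in for the product of the paper's optimized projection and learned light field dictionary, so it illustrates the recovery principle rather than the actual system.

```python
# Minimal sketch of the sparse recovery step: given a coded measurement
# y = A @ alpha with alpha sparse, recover alpha by orthogonal matching
# pursuit. A is a random placeholder for the paper's optimized optical
# projection times its learned light field dictionary.
import numpy as np

def omp(A, y, n_nonzero):
    """Orthogonal matching pursuit: greedily select columns of A."""
    residual = y.copy()
    support = []
    for _ in range(n_nonzero):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    alpha = np.zeros(A.shape[1])
    alpha[support] = coef
    return alpha

rng = np.random.default_rng(0)
m, n, k = 64, 256, 5              # measurements, atoms, sparsity
A = rng.standard_normal((m, n))   # stand-in for projection @ dictionary
A /= np.linalg.norm(A, axis=0)
alpha_true = np.zeros(n)
alpha_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ alpha_true                # the single coded measurement

alpha_hat = omp(A, y, k)
print("recovery error:", np.linalg.norm(alpha_hat - alpha_true))
```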


International Conference on Computer Graphics and Interactive Techniques | 2008

Extracting depth and matte using a color-filtered aperture

Yosuke Bando; Bing-Yu Chen; Tomoyuki Nishita

This paper presents a method for automatically extracting a scene depth map and the alpha matte of a foreground object by capturing a scene through RGB color filters placed in the camera lens aperture. By dividing the aperture into three regions through which only light in one of the RGB color bands can pass, we can acquire three shifted views of a scene in the RGB planes of an image in a single exposure. In other words, a captured image has depth-dependent color misalignment. We develop a color alignment measure to estimate disparities between the RGB planes for depth reconstruction. We also exploit color misalignment cues in our matting algorithm in order to disambiguate between the foreground and background regions even where their colors are similar. Based on the extracted depth and matte, the color misalignment in the captured image can be canceled, and various image editing operations can be applied to the reconstructed image, including novel view synthesis, post-exposure refocusing, and composition over different backgrounds.
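As an illustration of the disparity search over shifted color planes, here is a minimal Python sketch. The window-averaged squared difference used as the matching cost is a simple stand-in for the paper's color alignment measure, which is more robust when the RGB intensities of a surface differ.

```python
# Sketch of the disparity search over shifted RGB planes. For each candidate
# disparity d, the R and B planes are shifted in opposite directions and
# scored with a window-averaged squared difference (a simple stand-in for
# the paper's color alignment measure).
import numpy as np
from scipy.ndimage import shift, uniform_filter

def estimate_disparity(img, max_disp=8, window=9):
    """img: HxWx3 float array with depth-dependent RGB misalignment."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    best_cost = np.full(g.shape, np.inf)
    best_disp = np.zeros(g.shape)
    for d in range(-max_disp, max_disp + 1):
        # Undo a horizontal misalignment of +/- d pixels on the R and B planes.
        r_s = shift(r, (0, d), order=1, mode='nearest')
        b_s = shift(b, (0, -d), order=1, mode='nearest')
        # Window-averaged misalignment cost at this candidate disparity.
        cost = uniform_filter((r_s - g) ** 2 + (b_s - g) ** 2, size=window)
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp
```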


Computer Graphics Forum | 2003

Animating Hair with Loosely Connected Particles

Yosuke Bando; Bing-Yu Chen; Tomoyuki Nishita

This paper presents a practical approach to the animation of hair at an interactive frame rate. In our approach, we model the hair as a set of particles that serve as sampling points for the volume of the hair, which covers the whole region where hair is present. The dynamics of the hair, including hair-hair interactions, is simulated using the interacting particles. The novelty of this approach is that, as opposed to the traditional way of modeling hair, we release the particles from tight structures that are usually used to represent hair strands or clusters. Therefore, by making the connections between the particles loose while maintaining their overall stiffness, the hair can be dynamically split and merged during lateral motion without losing its lengthwise coherence.
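A toy sketch of the particle view, assuming illustrative constants: consecutive particles keep lengthwise coherence with stiff springs, while the loose inter-strand connections that allow splitting and merging are left out for brevity.

```python
# Toy mass-spring sketch in the spirit of the particle-based hair model:
# particles along a strand keep lengthwise coherence with stiff springs.
# Weak links to nearby particles of other strands (not shown) would let
# hair split and merge. Constants and integration scheme are illustrative.
import numpy as np

N, REST, K, DAMP, DT = 20, 0.05, 400.0, 0.9, 0.005
GRAVITY = np.array([0.0, -9.8])

pos = np.stack([np.zeros(N), -REST * np.arange(N)], axis=1)  # hang downward
vel = np.zeros_like(pos)

for step in range(500):
    force = np.tile(GRAVITY, (N, 1))
    # Lengthwise springs between consecutive particles.
    seg = pos[1:] - pos[:-1]
    length = np.linalg.norm(seg, axis=1, keepdims=True)
    f = K * (length - REST) * seg / np.maximum(length, 1e-9)
    force[:-1] += f
    force[1:] -= f
    vel = DAMP * (vel + DT * force)
    vel[0] = 0.0                      # root particle pinned to the scalp
    pos += DT * vel

print("strand tip position:", pos[-1])
```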


Pacific Conference on Computer Graphics and Applications | 2007

Towards Digital Refocusing from a Single Photograph

Yosuke Bando; Tomoyuki Nishita

This paper explores an image processing method for synthesizing refocused images from a single input photograph containing some defocus blur. First, we restore a sharp image by estimating and removing spatially-variant defocus blur in the input photograph. To do this, we propose a local blur estimation method that can handle abrupt blur changes at depth discontinuities in a scene, and we also present an efficient blur removal method that significantly speeds up an existing deconvolution algorithm. Once a sharp image is restored, refocused images can be interactively created by adding different defocus blur to it based on the estimated blur, so that users can intuitively change the focus and depth of field of the input photograph. Although the information available from a single photograph is highly insufficient for fully correct refocusing, the results show that visually plausible refocused images can be obtained.
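The deblur-then-reblur pipeline can be sketched with a spatially uniform Gaussian PSF standing in for the paper's spatially-variant blur estimate; the Wiener filter below is a generic deconvolution choice, not the paper's accelerated algorithm.

```python
# Sketch of the deblur-then-reblur idea behind single-image refocusing:
# remove an estimated defocus blur with a Wiener filter, then apply a new,
# user-chosen blur. A uniform Gaussian PSF stands in for the paper's
# spatially-variant blur estimate.
import numpy as np
from scipy.ndimage import gaussian_filter

def wiener_deblur(blurred, sigma, nsr=1e-2):
    """Wiener deconvolution with a Gaussian PSF of width sigma."""
    h, w = blurred.shape
    psf = np.zeros((h, w))
    psf[h // 2, w // 2] = 1.0
    psf = gaussian_filter(psf, sigma)
    psf /= psf.sum()
    H = np.fft.fft2(np.fft.ifftshift(psf))
    B = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + NSR) regularizes near-zero frequencies.
    X = B * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(1)
sharp = rng.random((128, 128))
captured = gaussian_filter(sharp, 2.0)       # simulated defocused input
restored = wiener_deblur(captured, 2.0)      # deblur with estimated sigma
refocused = gaussian_filter(restored, 5.0)   # re-blur to a new focus setting
print("restoration error:", np.abs(restored - sharp).mean())
```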


ACM Transactions on Graphics | 2013

Near-invariant blur for depth and 2D motion via time-varying light field analysis

Yosuke Bando; Henry Holtzman; Ramesh Raskar

Recently, several camera designs have been proposed for either making defocus blur invariant to scene depth or making motion blur invariant to object motion. The benefit of such invariant capture is that no depth or motion estimation is required to remove the resultant spatially uniform blur. So far, the techniques have been studied separately for defocus and motion blur, and object motion has been assumed to be 1D (e.g., horizontal). This article explores a more general capture method that makes both defocus blur and motion blur nearly invariant to scene depth and in-plane 2D object motion. We formulate the problem as capturing a time-varying light field through a time-varying light field modulator at the lens aperture, and perform 5D (4D light field + 1D time) analysis of all the existing computational cameras for defocus/motion-only deblurring and their hybrids. This leads to the surprising conclusion that focus sweep, previously known as a depth-invariant capture method that moves the plane of focus through a range of scene depths during exposure, is near-optimal both in terms of depth and 2D motion invariance and in terms of high-frequency preservation for certain combinations of depth and motion ranges. Using our prototype camera, we demonstrate joint defocus and motion deblurring for moving scenes with depth variation.
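The depth-invariance of focus sweep can be checked numerically. In the sketch below, the effective PSF is modeled as a time average of disk PSFs whose radius passes through zero at a depth-dependent moment during exposure; all radii and step counts are illustrative.

```python
# Numeric sketch of why focus sweep yields near depth-invariant blur: the
# effective PSF is a time average of disk PSFs whose radius crosses zero
# during the exposure, so objects at different depths (different zero
# crossings) integrate to nearly the same kernel.
import numpy as np

def disk(radius, size=65):
    y, x = np.mgrid[:size, :size] - size // 2
    d = (x**2 + y**2 <= max(radius, 0.5)**2).astype(float)
    return d / d.sum()

def focus_sweep_psf(r_start, r_end, steps=200, size=65):
    # Sweep blur radius linearly; the depth in question is in focus
    # at the moment the signed radius passes through zero.
    radii = np.linspace(r_start, r_end, steps)
    return np.mean([disk(abs(r), size) for r in radii], axis=0)

# Two scene depths: one in focus early in the sweep, one late.
psf_near = focus_sweep_psf(-3.0, 12.0)
psf_far = focus_sweep_psf(-12.0, 3.0)
print("PSF difference:", np.abs(psf_near - psf_far).sum())  # nearly zero
```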


Foundations of Software Engineering | 2013

ShAir: extensible middleware for mobile peer-to-peer resource sharing

Daniel J. Dubois; Yosuke Bando; Konosuke Watanabe; Henry Holtzman

ShAir is a middleware infrastructure that allows mobile applications to share the resources of their devices (e.g., data, storage, connectivity, computation) in a transparent way. The goals of ShAir are: (i) abstracting the creation and maintenance of opportunistic delay-tolerant peer-to-peer networks; (ii) being decoupled from the actual hardware and network platform; (iii) extensibility in terms of supported hardware, protocols, and types of shareable resources; (iv) being capable of self-adapting at run-time; (v) enabling the development of applications that are easier to design, test, and simulate. In this paper we discuss the design, extensibility, and maintainability of the ShAir middleware, and how to use it as a platform for collaborative resource-sharing applications. Finally, we report our experience in designing and testing a file-sharing application.
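As a hypothetical sketch of this kind of plugin architecture (the class and method names below are illustrative, not ShAir's actual API), the core can be decoupled from hardware and resource types through two registration seams:

```python
# Hypothetical sketch of plugin seams such middleware might expose; names
# are illustrative, not ShAir's actual API. Network drivers and resource
# handlers plug into a core that stays hardware- and protocol-agnostic.
from abc import ABC, abstractmethod

class NetworkDriver(ABC):
    """Decouples the core from the actual radio/protocol (goals ii, iii)."""
    @abstractmethod
    def discover_peers(self) -> list[str]: ...
    @abstractmethod
    def send(self, peer_id: str, payload: bytes) -> None: ...

class ResourceHandler(ABC):
    """One handler per shareable resource type: data, storage, compute..."""
    resource_type: str
    @abstractmethod
    def serve(self, request: bytes) -> bytes: ...

class Middleware:
    def __init__(self):
        self.drivers: list[NetworkDriver] = []
        self.handlers: dict[str, ResourceHandler] = {}

    def register_driver(self, driver: NetworkDriver) -> None:
        self.drivers.append(driver)        # extensible hardware/protocols

    def register_handler(self, handler: ResourceHandler) -> None:
        self.handlers[handler.resource_type] = handler  # extensible resources
```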


Self-Adaptive and Self-Organizing Systems | 2013

Lightweight Self-organizing Reconfiguration of Opportunistic Infrastructure-mode WiFi Networks

Daniel J. Dubois; Yosuke Bando; Konosuke Watanabe; Henry Holtzman

The purpose of this work is to provide a method for exploiting pervasive wireless communication capabilities that are often underutilized on smart devices (e.g., phones, tablets, cameras, and TVs) in an opportunistic and collaborative way. This goal can be accomplished by sharing device resources through their built-in WiFi adapters. In this paper we explain why the standard ad-hoc mode for building mobile peer-to-peer networks is not always the best choice, and we propose an alternative self-organizing approach that builds an opportunistic infrastructure-mode WiFi network. The distinguishing feature of this network is that each device can act as either an access point or a client and can change its role and wireless channel over time. This contribution advances the state of the art by using a context-aware approach that considers the actual frequency allocation of other devices and monitored traffic. Finally, we show that our approach increases the average message delivery speed to a level that in several situations outperforms previous work in the area, as well as a simple single-channel ad-hoc WiFi network.
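A simplified sketch of such context-aware role and channel selection, with illustrative scoring weights that are not taken from the paper:

```python
# Simplified sketch of context-aware role/channel selection: a device
# scores each (role, channel) option against observed access points and
# monitored traffic, then adopts the best option. Weights are illustrative.
CHANNELS = [1, 6, 11]

def choose_role_and_channel(observed_aps, traffic_per_channel):
    """observed_aps: {channel: #APs seen}; traffic_per_channel: {channel: load}."""
    best, best_score = None, float("-inf")
    for ch in CHANNELS:
        congestion = observed_aps.get(ch, 0) + traffic_per_channel.get(ch, 0.0)
        # Joining an existing AP on a busy channel beats adding another AP.
        score_client = observed_aps.get(ch, 0) * 1.0 - congestion * 0.5
        score_ap = -congestion          # become an AP where the air is quiet
        for role, score in (("client", score_client), ("ap", score_ap)):
            if score > best_score:
                best, best_score = (role, ch), score
    return best

print(choose_role_and_channel({1: 3, 6: 0}, {1: 0.8, 6: 0.1, 11: 0.4}))
```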


Computer Graphics Forum | 2011

Motion Deblurring from a Single Image using Circular Sensor Motion

Yosuke Bando; Bing-Yu Chen; Tomoyuki Nishita

Image blur caused by object motion attenuates high frequency content of images, making post-capture deblurring an ill-posed problem. The recoverable frequency band quickly becomes narrower for faster object motion as high frequencies are severely attenuated and virtually lost. This paper proposes to translate a camera sensor circularly about the optical axis during exposure, so that high frequencies can be preserved for a wide range of in-plane linear object motion in any direction up to some predetermined speed. That is, although no object may be photographed sharply at capture time, differently moving objects captured in a single image can be deconvolved with similar quality. In addition, circular sensor motion is shown to facilitate blur estimation thanks to the distinct frequency zero patterns of the resulting motion blur point-spread functions. An analysis of the frequency characteristics of circular sensor motion in relation to linear object motion is presented, along with deconvolution results for photographs captured with a prototype camera.
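The resulting blur kernel is easy to visualize: it is the trajectory a scene point traces on the circularly moving sensor, rasterized over the exposure. The sketch below uses illustrative radius, speed, and grid values.

```python
# Sketch of the blur kernel produced by circular sensor motion: the PSF is
# the sensor's circular path composed with the object's linear motion,
# accumulated over the exposure. All parameter values are illustrative.
import numpy as np

def circular_motion_psf(radius_px, object_vel, size=65, steps=2000):
    """object_vel: in-plane linear velocity (vx, vy) in pixels per exposure."""
    t = np.linspace(0.0, 1.0, steps)               # normalized exposure time
    x = radius_px * np.cos(2 * np.pi * t) + object_vel[0] * t
    y = radius_px * np.sin(2 * np.pi * t) + object_vel[1] * t
    psf = np.zeros((size, size))
    ix = np.clip(np.round(x + size // 2).astype(int), 0, size - 1)
    iy = np.clip(np.round(y + size // 2).astype(int), 0, size - 1)
    np.add.at(psf, (iy, ix), 1.0)                  # accumulate dwell time
    return psf / psf.sum()

# Kernels for a static object and two differently moving objects share the
# same circular backbone, which is what keeps deconvolution quality similar.
for v in [(0, 0), (10, 0), (0, -10)]:
    print(v, circular_motion_psf(8.0, v).max())
```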


Computer Graphics Forum | 2009

Simulation of Tearing Cloth with Frayed Edges

Napaporn Metaaphanon; Yosuke Bando; Bing-Yu Chen; Tomoyuki Nishita

Woven cloth can commonly be seen in daily life and also in animation. Unless prevented in some way, woven cloth usually frays at the edges. However, in computer graphics, woven cloth is typically modeled as a continuum sheet, which is not suitable for representing frays. This paper proposes a model that allows yarn movement and slippage during cloth tearing. Drawing upon techniques from the textile and mechanical engineering fields, we model cloth as woven yarn crossings where each yarn can be independently torn when its strain limit is reached. To make the model practical for graphics applications, we simulate only the tearing part of the cloth with a yarn-level model, using a simple constrained mass-spring system for computational efficiency. We designed conditions for switching from a standard continuum sheet model to our yarn-level model, so that frays can be initiated and propagated along the tear lines. Results show that our method can achieve plausible cloth-tearing animation with frayed edges.
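A minimal sketch of strain-limited tearing in a constrained mass-spring system, with illustrative constants: each spring stands for a yarn segment between crossings and is removed once its strain exceeds a limit.

```python
# Minimal sketch of strain-limited tearing in a mass-spring system: each
# spring (a yarn segment between crossings) snaps once its strain exceeds
# a limit, letting a tear form. Grid size and constants are illustrative.
import numpy as np

REST, K, STRAIN_LIMIT, DT = 1.0, 50.0, 0.35, 0.01

# A single horizontal yarn of particles; both ends are pulled apart.
N = 10
pos = np.stack([REST * np.arange(N, dtype=float), np.zeros(N)], axis=1)
vel = np.zeros_like(pos)
springs = [(i, i + 1) for i in range(N - 1)]

for step in range(400):
    force = np.zeros_like(pos)
    for (i, j) in list(springs):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        strain = (length - REST) / REST
        if strain > STRAIN_LIMIT:       # yarn segment snaps: tear initiates
            springs.remove((i, j))
            continue
        f = K * (length - REST) * d / max(length, 1e-9)
        force[i] += f
        force[j] -= f
    vel += DT * force
    vel[0] = np.array([-0.5, 0.0])      # pull left end leftward
    vel[-1] = np.array([0.5, 0.0])      # pull right end rightward
    pos += DT * vel

print("remaining intact segments:", len(springs), "of", N - 1)
```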


Consumer Communications and Networking Conference | 2015

Supporting heterogeneous networks and pervasive storage in mobile content-sharing middleware

Daniel J. Dubois; Yosuke Bando; Konosuke Watanabe; Arata Miyamoto; Munehiko Sato; William Papper; V. Michael Bove

Sharing digital content with others is now an important part of human social activities. Despite the increasing need to share, most sharing operations are not simple. Many applications are not interoperable with others, require an Internet connection, or require cumbersome configuration and coordination efforts. Our idea is to simplify digital content sharing on mobile devices by providing support for self-organizing heterogeneous networks and pervasive storage. That is, mobile devices can spontaneously connect to each other over a mixture of different available networks (e.g., 3G/4G, WiFi, Bluetooth) without requiring an explicit user action of network selection or mandatory Internet access. Moreover, indirect communication can be further augmented by pervasive storage: mobile devices can store shared content there, and other devices in proximity can later download it automatically, thus allowing location-based sharing with minimal coordination even when devices are not in the same location at the same time. This paper shows how these technologies can be incorporated into mobile content-sharing middleware to simplify sharing operations among mobile devices without any modification to commercially available devices or applications. In particular, (i) we provide an implementation of our approach as extension modules for existing content-sharing middleware, (ii) we present two example applications built on top of it, and (iii) we demonstrate our approach through experiments in representative situations.
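A hypothetical sketch of the pervasive-storage idea (class and method names are illustrative, not the middleware's actual API): content deposited at a storage node is collected later by a recipient device in proximity, decoupling sender and receiver in time.

```python
# Hypothetical sketch of pervasive storage: a device drops content at a
# nearby storage node, and other devices later fetch whatever was left for
# them when they come into proximity. Names are illustrative only.
class PervasiveStorageNode:
    """A fixed storage point (e.g., a WiFi storage device at a location)."""
    def __init__(self):
        self._content: dict[str, list[bytes]] = {}

    def deposit(self, recipient: str, payload: bytes) -> None:
        self._content.setdefault(recipient, []).append(payload)

    def collect(self, recipient: str) -> list[bytes]:
        # Delivery is asynchronous: sender and receiver never need to be
        # at this location at the same time.
        return self._content.pop(recipient, [])

node = PervasiveStorageNode()
node.deposit("alice", b"photo.jpg bytes")
print(node.collect("alice"))   # later, when alice's device is in proximity
```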

Collaboration


Dive into Yosuke Bando's collaborations.

Top Co-Authors

Tomoyuki Nishita (Hiroshima Shudo University)
Bing-Yu Chen (National Taiwan University)
Daniel J. Dubois (Massachusetts Institute of Technology)
Takuya Saito (National Institute for Environmental Studies)
Henry Holtzman (Massachusetts Institute of Technology)