Publication


Featured research published by Hideo Otsuna.


Nature | 2012

Crowding induces live cell extrusion to maintain homeostatic cell numbers in epithelia

George T. Eisenhoffer; Patrick D. Loftus; Masaaki Yoshigi; Hideo Otsuna; Chi Bin Chien; Paul A. Morcos; Jody Rosenblatt

For an epithelium to provide a protective barrier, it must maintain homeostatic cell numbers by matching the number of dividing cells with the number of dying cells. Although compensatory cell division can be triggered by dying cells, it is unknown how cell death might relieve overcrowding due to proliferation. When we trigger apoptosis in epithelia, dying cells are extruded to preserve a functional barrier. Extrusion occurs by cells destined to die signalling to surrounding epithelial cells to contract an actomyosin ring that squeezes the dying cell out. However, it is not clear what drives cell death during normal homeostasis. Here we show in human, canine and zebrafish cells that overcrowding due to proliferation and migration induces extrusion of live cells to control epithelial cell numbers. Extrusion of live cells occurs at sites where the highest crowding occurs in vivo and can be induced by experimentally overcrowding monolayers in vitro. Like apoptotic cell extrusion, live cell extrusion resulting from overcrowding also requires sphingosine 1-phosphate signalling and Rho-kinase-dependent myosin contraction, but is distinguished by signalling through stretch-activated channels. Moreover, disruption of a stretch-activated channel, Piezo1, in zebrafish prevents extrusion and leads to the formation of epithelial cell masses. Our findings reveal that during homeostatic turnover, growth and division of epithelial cells on a confined substratum cause overcrowding that leads to their extrusion and consequent death owing to the loss of survival factors. These results suggest that live cell extrusion could be a tumour-suppressive mechanism that prevents the accumulation of excess epithelial cells.


The Journal of Neuroscience | 2010

Synaptic Activity and Activity-Dependent Competition Regulates Axon Arbor Maturation, Growth Arrest, and Territory in the Retinotectal Projection

Naila Ben Fredj; Sarah Hammond; Hideo Otsuna; Chi-Bin Chien; Juan Burrone; Martin P. Meyer

In the retinotectal projection, synapses guide retinal ganglion cell (RGC) axon arbor growth by promoting branch formation and by selectively stabilizing branches. To ask whether presynaptic function is required for this dual role of synapses, we have suppressed presynaptic function in single RGCs using targeted expression of tetanus toxin light-chain fused to enhanced green fluorescent protein (TeNT-Lc:EGFP). Time-lapse imaging of singly silenced axons as they arborize in the tectum of zebrafish larvae shows that presynaptic function is not required for stabilizing branches or for generating an arbor of appropriate complexity. However, synaptic activity does regulate two distinct aspects of arbor development. First, single silenced axons fail to arrest formation of highly dynamic but short-lived filopodia that are a feature of immature axons. Second, single silenced axons fail to arrest growth of established branches and so occupy significantly larger territories in the tectum than active axons. However, if activity-suppressed axons had neighbors that were also silent, axonal arbors appeared normal in size. A similar reversal in phenotype was observed when single TeNT-Lc:EGFP axons were grown in the presence of the NMDA receptor antagonist MK801 [(+)-5-methyl-10,11-dihydro-5H-dibenzo[a,d]cyclohepten-5,10-imine maleate]. Although expansion of arbor territory is prevented when neighbors are silent, formation of transient filopodia is not. These results suggest that synaptic activity by itself regulates filopodia formation regardless of activity in neighboring cells but that the ability to arrest growth and focusing of axonal arbors in the target is an activity-dependent, competitive process.


IEEE Transactions on Visualization and Computer Graphics | 2009

An interactive visualization tool for multi-channel confocal microscopy data in neurobiology research

Yong Wan; Hideo Otsuna; Chi-Bin Chien; Charles D. Hansen

Confocal microscopy is widely used in neurobiology for studying the three-dimensional structure of the nervous system. Confocal image data are often multi-channel, with each channel resulting from a different fluorescent dye or fluorescent protein; one channel may have dense data while another is sparse; and there are often structures at several spatial scales: subneuronal domains, neurons, and large groups of neurons (brain regions). Even qualitative analysis can therefore require visualization using techniques and parameters fine-tuned to a particular dataset. Despite the plethora of volume rendering techniques that have been available for many years, the techniques commonly used in neurobiological research are somewhat rudimentary, such as looking at image slices or maximal intensity projections. Thus there is a real demand from neurobiologists, and biologists in general, for a flexible visualization tool that allows interactive visualization of multi-channel confocal data, with rapid fine-tuning of parameters to reveal the three-dimensional relationships of structures of interest. Together with neurobiologists, we have designed such a tool, choosing visualization methods to suit the characteristics of confocal data and a typical biologist's workflow. We use interactive volume rendering with intuitive settings for multidimensional transfer functions, multiple render modes and multi-views for multi-channel volume data, and embedding of polygon data into volume data for rendering and editing. As an example, we apply this tool to visualize confocal microscopy datasets of the developing zebrafish visual system.
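To make the contrast with the rudimentary-but-common approach concrete, here is a minimal Python sketch written for this summary rather than taken from the paper: it applies a simple window/level transfer function and a tint color to each channel of a multi-channel volume, then composites the channels as a maximum-intensity projection. The function names, parameters, and NumPy-only setup are illustrative assumptions; the tool described in the abstract uses interactive GPU volume rendering with far richer multidimensional transfer functions.

```python
# Illustrative sketch only (not the paper's tool): per-channel window/level
# transfer function plus a tint color, composited as a maximum-intensity
# projection. Assumes each channel is a 3D NumPy array of shape (z, y, x).
import numpy as np

def apply_transfer(channel, low, high, rgb):
    """Map intensities in [low, high] to opacity in [0, 1], tinted by rgb."""
    alpha = np.clip((channel.astype(float) - low) / (high - low), 0.0, 1.0)
    return alpha[..., None] * np.asarray(rgb, dtype=float)   # (z, y, x, 3)

def composite_mip(channels, settings):
    """channels: list of (z, y, x) arrays; settings: list of (low, high, rgb)."""
    colored = sum(apply_transfer(c, low, high, rgb)
                  for c, (low, high, rgb) in zip(channels, settings))
    return colored.max(axis=0)                                # 2D RGB image

# e.g. composite_mip([gfp, rfp], [(20, 200, (0, 1, 0)), (30, 220, (1, 0, 0))])
```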


Development | 2012

A complex choreography of cell movements shapes the vertebrate eye

Kristen M. Kwan; Hideo Otsuna; Hinako Kidokoro; Keith R. Carney; Yukio Saijoh; Chi Bin Chien

Optic cup morphogenesis (OCM) generates the basic structure of the vertebrate eye. Although it is commonly depicted as a series of epithelial sheet folding events, this does not represent an empirically supported model. Here, we combine four-dimensional imaging with custom cell tracking software and photoactivatable fluorophore labeling to determine the cellular dynamics underlying OCM in zebrafish. Although cell division contributes to growth, we find it dispensable for eye formation. OCM depends instead on a complex set of cell movements coordinated between the prospective neural retina, retinal pigmented epithelium (RPE) and lens. Optic vesicle evagination persists for longer than expected; cells move in a pinwheel pattern during optic vesicle elongation and retinal precursors involute around the rim of the invaginating optic cup. We identify unanticipated movements, particularly of central and peripheral retina, RPE and lens. From cell tracking data, we generate retina, RPE and lens subdomain fate maps, which reveal novel adjacencies that might determine corresponding developmental signaling events. Finally, we find that similar movements also occur during chick eye morphogenesis, suggesting that the underlying choreography is conserved among vertebrates.


Developmental Dynamics | 2015

High-resolution analysis of central nervous system expression patterns in zebrafish Gal4 enhancer-trap lines

Hideo Otsuna; David A. Hutcheson; Robert N. Duncan; Adam D. McPherson; Aaron N. Scoresby; Brooke F. Gaynes; Zongzong Tong; Esther Fujimoto; Kristen M. Kwan; Chi Bin Chien; Richard I. Dorsky

Background: The application of the Gal4/UAS system to enhancer and gene trapping screens in zebrafish has greatly increased the ability to label and manipulate cell populations in multiple tissues, including the central nervous system (CNS). However, the ability to select existing lines for specific applications has been limited by the lack of detailed expression analysis. Results: We describe a Gal4 enhancer trap screen in which we used advanced image analysis, including three-dimensional confocal reconstructions and documentation of expression patterns at multiple developmental time points. In all, we have created and annotated 98 lines exhibiting a wide range of expression patterns, most of which include CNS expression. Expression was also observed in nonneural tissues such as muscle, skin epithelium, vasculature, and neural crest derivatives. All lines and data are publicly available from the Zebrafish International Resource Center (ZIRC) and the Zebrafish Model Organism Database (ZFIN). Conclusions: Our detailed documentation of expression patterns, combined with the public availability of images and fish lines, provides a valuable resource for researchers wishing to study CNS development and function in zebrafish. Our data also suggest that many existing enhancer trap lines may have previously uncharacterized expression in multiple tissues and cell types. Developmental Dynamics 244:785–796, 2015.


PLOS ONE | 2015

The RNA Binding Protein Igf2bp1 Is Required for Zebrafish RGC Axon Outgrowth In Vivo

John A. Gaynes; Hideo Otsuna; Douglas S. Campbell; John Manfredi; Edward M. Levine; Chi Bin Chien

Attractive growth cone turning requires Igf2bp1-dependent local translation of β-actin mRNA in response to external cues in vitro. While in vivo studies have shown that Igf2bp1 is required for cell migration and axon terminal branching, a requirement for Igf2bp1 function during axon outgrowth has not been demonstrated. Using a time-lapse assay in the zebrafish retinotectal system, we demonstrate that the β-actin 3’UTR is sufficient to target local translation of the photoconvertible fluorescent protein Kaede in growth cones of pathfinding retinal ganglion cells (RGCs) in vivo. Igf2bp1 knockdown reduced RGC axonal outgrowth, tectal coverage, and retinal cell survival. RGC-specific expression of a phosphomimetic Igf2bp1 reduced the density of axonal projections in the optic tract while sparing RGCs, demonstrating for the first time that Igf2bp1 is required during axon outgrowth in vivo. Therefore, regulation of local translation mediated by Igf2bp proteins may be required at all stages of axon development.


2012 IEEE Symposium on Biological Data Visualization (BioVis) | 2012

Interactive extraction of neural structures with user-guided morphological diffusion

Yong Wan; Hideo Otsuna; Chi-Bin Chien; Charles D. Hansen

Extracting neural structures with their fine details from confocal volumes is essential to quantitative analysis in neurobiology research. Despite the abundance of segmentation methods and tools, for complex neural structures both manual and semi-automatic methods are ineffective, whether working in full 3D or with user interactions restricted to 2D slices. Neurobiologists need novel interaction techniques and fast algorithms to extract neural structures from confocal data interactively and intuitively. In this paper, we present such an algorithm-technique combination, which lets users interactively select desired structures from visualization results instead of 2D slices. By integrating the segmentation functions with a confocal visualization tool, neurobiologists can easily extract complex neural structures within their typical visualization workflow.
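As a rough illustration of the "morphological diffusion" idea, the sketch below is my own assumption of how such a selection could work, not the paper's algorithm or code: a user-seeded mask is grown by repeated grayscale dilation, clamped against the data volume so the selection spreads only through voxels the data supports. The seed mask stands in for the user's brush stroke; the structuring-element size and iteration cap are arbitrary choices.

```python
# Hypothetical sketch of selection by iterative dilation clamped to the data
# (grayscale reconstruction by dilation); not the published implementation.
import numpy as np
from scipy import ndimage

def diffuse_selection(volume, seed_mask, max_iters=100):
    """volume: 3D intensity array; seed_mask: boolean array marking the brush stroke."""
    marker = np.where(seed_mask, volume, 0.0).astype(float)
    for _ in range(max_iters):
        # grow the marker outward, but never above the underlying data values
        grown = np.minimum(ndimage.grey_dilation(marker, size=(3, 3, 3)), volume)
        if np.array_equal(grown, marker):   # converged: nothing new was reached
            break
        marker = grown
    return marker > 0                       # boolean selection mask
```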


Eurographics | 2013

Synthetic brainbows

Yong Wan; Hideo Otsuna; Charles D. Hansen

Brainbow is a genetic engineering technique that randomly colorizes cells. Biological samples processed with this technique and imaged with confocal microscopy have distinctive colors for individual cells. Complex cellular structures can then be easily visualized. However, the complexity of the Brainbow technique limits its applications. In practice, most confocal microscopy scans use different fluorescence stainings, typically with at most three distinct cellular structures. These structures are often packed and obscure each other in rendered images, making analysis difficult. In this paper, we leverage a process known as GPU framebuffer feedback loops to synthesize Brainbow-like images. In addition, we incorporate ID shuffling and Monte-Carlo sampling into our technique, so that it can be applied to single-channel confocal microscopy data. We presented the synthesized Brainbow images to domain experts, who gave positive feedback. A user survey demonstrates that our synthetic Brainbow technique improves visualizations of volume data with complex structures for biologists.
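The "ID shuffling" step can be pictured with a small sketch, written for this summary under the assumption that a labeled segmentation of the single-channel volume already exists; the GPU framebuffer feedback loops and Monte-Carlo sampling from the paper are not reproduced. Each integer label is mapped through a randomly shuffled color lookup table, so neighboring structures receive distinct, stable colors.

```python
# Illustrative only: assign each labeled structure a pseudo-random RGB color
# via a lookup table, the flavor of "ID shuffling" described in the abstract.
import numpy as np

def shuffled_id_colors(label_volume, seed=0):
    """label_volume: integer array of per-voxel structure IDs (0 = background)."""
    rng = np.random.default_rng(seed)
    n_labels = int(label_volume.max()) + 1
    lut = rng.uniform(0.2, 1.0, size=(n_labels, 3))   # avoid near-black colors
    lut[0] = 0.0                                       # keep background black
    return lut[label_volume]                           # (..., 3) float RGB volume
```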


Eurographics | 2014

Real-time dense nucleus selection from confocal data

Yong Wan; Hideo Otsuna; Kristen M. Kwan; Charles D. Hansen

Selecting structures from volume data using direct, over-the-visualization interactions, such as a paint brush, is perhaps the most intuitive method in a variety of application scenarios. Unfortunately, it seems difficult to design a universal tool that is effective for all the different structures in biology research. In [WOCH12b], an interactive technique was proposed for extracting neural structures from confocal microscopy data. It uses a dual-stroke paint brush to select desired structures directly from volume visualizations. However, the technique breaks down when applied to selecting densely packed structures with condensed shapes, such as nuclei in zebrafish eye development research. We collaborated with biologists studying zebrafish eye development and adapted the paint brush tool for real-time nucleus selection from volume data. The morphological diffusion algorithm used in the previous paint brush is restricted to gradient-descending directions for improved nucleus boundary definition. Occluded seeds are removed using backward ray-casting. The adapted paint brush is then used to track cell movements in a time-sequence dataset of a developing zebrafish eye.
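A toy version of the gradient-descending restriction is sketched below; it is an assumption of how such a constraint could be realized in plain Python, not the paper's implementation, and it omits the backward ray-casting used to remove occluded seeds. Region growing from a seed voxel is only allowed to step from brighter voxels to equally bright or dimmer neighbors, which keeps the selection from climbing back up into an adjacent nucleus.

```python
# Hypothetical sketch: seeded growth that only steps "downhill" in intensity,
# so the selection stays inside one bright nucleus instead of leaking into
# its neighbors. Parameter names and values are illustrative.
from collections import deque
import numpy as np

def grow_nucleus(volume, seed, min_intensity):
    """volume: 3D array; seed: (z, y, x) inside a nucleus; min_intensity: stop level."""
    selected = np.zeros(volume.shape, dtype=bool)
    selected[seed] = True
    queue = deque([seed])
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if not (0 <= nz < volume.shape[0] and
                    0 <= ny < volume.shape[1] and
                    0 <= nx < volume.shape[2]) or selected[nz, ny, nx]:
                continue
            # accept only neighbors no brighter than the current voxel
            if min_intensity <= volume[nz, ny, nx] <= volume[z, y, x]:
                selected[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return selected
```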


International Conference on Image Processing | 2013

Three-dimensional alignment and merging of confocal microscopy stacks

Nisha Ramesh; Hideo Otsuna; Tolga Tasdizen

We describe an efficient, robust, automated method for image alignment and merging of translated, rotated, and flipped confocal microscopy stacks. The samples are captured in both directions (top and bottom) to increase the SNR of the individual slices. We identify the overlapping region of the two stacks by using a variable-depth Maximum Intensity Projection (MIP) in the z dimension. For each depth tested, the MIP images give an estimate of the angle of rotation between the stacks and the shifts in the x and y directions, using the Fourier shift property in 2D. We then apply the estimated rotation angle and x and y shifts, and align the images in the z direction. A linear blending technique based on a sigmoidal function is used to maximize the information from the stacks and combine them. Combining stacks obtained from both directions yields the maximum information gain.
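Two of the steps described here, estimating the x and y shift from MIP images via the Fourier shift property and blending the overlap with a sigmoidal weight, can be sketched as follows. This is an illustrative NumPy version written for this summary, not the authors' code; rotation and flip estimation are omitted, and the assumption that the last slices of one stack overlap the first slices of the other is mine.

```python
# Illustrative sketch (not the published method): phase correlation between two
# MIP images gives the x/y shift, and a sigmoid along z blends the overlap of
# two aligned stacks captured from opposite directions.
import numpy as np

def estimate_shift(mip_a, mip_b):
    """Peak of the normalized cross-power spectrum gives the 2D translation."""
    cross = np.fft.fft2(mip_a) * np.conj(np.fft.fft2(mip_b))
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > mip_a.shape[0] // 2: dy -= mip_a.shape[0]   # wrap to signed offsets
    if dx > mip_a.shape[1] // 2: dx -= mip_a.shape[1]
    return dy, dx

def sigmoid_blend(top_stack, bottom_stack, overlap, steepness=0.5):
    """Blend the overlapping z-range so each stack dominates near its own side."""
    z = np.arange(overlap)
    w = 1.0 / (1.0 + np.exp(-steepness * (z - overlap / 2.0)))   # 0 -> 1 weight
    blended = ((1.0 - w)[:, None, None] * top_stack[-overlap:].astype(float)
               + w[:, None, None] * bottom_stack[:overlap].astype(float))
    return np.concatenate([top_stack[:-overlap].astype(float), blended,
                           bottom_stack[overlap:].astype(float)], axis=0)
```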
