Publication


Featured research published by Jorji Nonaka.


International Asia Pacific Symposium on Visualization | 2007

Particle-based volume rendering

Naohisa Sakamoto; Jorji Nonaka; Koji Koyamada; S. Tanaka

In this paper, we introduce a novel point-based volume rendering technique based on tiny particles. In the proposed technique, a set of tiny opaque particles is generated from a given 3D scalar field based on a user-specified transfer function and the rejection method. The final image is then generated by projecting these particles onto the image plane. The particles need not be projected in any particular order, since their transparency values are not taken into account; during the projection stage, only a simple depth-order comparison is required to eliminate the occluded particles. This is the main characteristic of the proposed technique and should greatly facilitate distributed processing. Semi-transparency is one of the main characteristics of volume rendering, and in the proposed technique the quantity of projected particles greatly influences the degree of semi-transparency. Sub-pixel processing is used to obtain the semi-transparent effect by controlling the projection of multiple particles onto each pixel area; the final pixel value is then obtained by averaging the contributions of the projected particles. To verify its usefulness, we applied the proposed technique to volume rendering of multiple volume data sets as well as irregular volume data.
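
As an illustration of the particle generation step described above, here is a minimal Python sketch of the rejection method (the vectorized `opacity_tf` transfer function and the nearest-neighbor field lookup are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_particles(field, opacity_tf, n_trials):
    """Rejection method: keep a candidate point with probability equal to
    the transfer-function opacity at the scalar value sampled there."""
    zdim, ydim, xdim = field.shape
    # Uniform candidate positions in voxel index space (nearest-neighbor lookup).
    pts = rng.random((n_trials, 3)) * np.array([xdim - 1, ydim - 1, zdim - 1])
    ix, iy, iz = pts.astype(int).T
    alpha = opacity_tf(field[iz, iy, ix])   # opacity in [0, 1] per candidate
    keep = rng.random(n_trials) < alpha     # the rejection step
    return pts[keep]                        # positions of accepted opaque particles
```

For example, `generate_particles(volume, lambda s: np.clip(s, 0.0, 1.0), 1_000_000)` returns the accepted particle positions in voxel coordinates.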


Symposium on Volume Visualization | 2004

Hybrid hardware-accelerated image composition for sort-last parallel rendering on graphics clusters with commodity image compositor

Jorji Nonaka; Nobuyuki Kukimoto; Naohisa Sakamoto; Hiroshi Hazama; Yasuhiro Watashiba; Xuezhen Liu; Masato Ogata; Masanori Kanazawa; Koji Koyamada

Hardware-accelerated image composition for sort-last parallel rendering has received increasing attention as an effective solution to the increased performance demands brought about by recent advances in commodity graphics accelerators. Several hardware solutions for alpha and depth compositing have been proposed, and a few of them have become commercially available; they share impressive compositing speed and high scalability, but their cost makes building a large visualization system prohibitively expensive. In this paper, we used a hardware image compositor marketed by Mitsubishi Precision Co., Ltd. (MPC), which is now available as an independent device, enabling us to build our own visualization cluster. This device is based on a binary compositing-tree architecture, and its scalable cascade interconnection makes it possible to build a large visualization system. Taking cost into consideration, however, we focused on a minimal-configuration PC cluster using only one compositing device. To emulate the cascade interconnection of MPC compositors, we propose and evaluate a hybrid hardware-assisted image composition method that uses the OpenGL alpha-blending capability of the graphics boards to assist the hardware image composition process. Preliminary experiments show that the use of the graphics boards diminished the performance degradation incurred when emulating the cascade via image feedback through the available interconnection network. We found that the proposed method is an important alternative for providing high-performance image composition at a reasonable cost.
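
The alpha-blending assist that the graphics boards provide amounts to the standard "over" operator. A minimal NumPy sketch, assuming premultiplied-alpha RGBA images in floating point (this illustrates only the blending math, not the MPC device interface):

```python
import numpy as np

def over(front, back):
    """Composite premultiplied-alpha RGBA images: front OVER back."""
    a = front[..., 3:4]               # alpha channel of the front image
    return front + (1.0 - a) * back   # standard 'over' operator
```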


Future Generation Computer Systems | 2017

234Compositor: A flexible parallel image compositing framework for massively parallel visualization environments

Jorji Nonaka; Kenji Ono; Masahiro Fujita

Leading-edge HPC systems already generate vast amounts of time-varying, complex data sets, and future-generation HPC systems are expected to produce far larger amounts of such data, making their visualization and analysis a much more challenging task. In such a scenario, the in-situ visualization approach, where the same HPC system is used for both numerical simulation and visualization, is expected to become more a necessity than an option. On massively parallel environments, the sort-last approach, which requires final image compositing, has become the de facto standard for parallel rendering. In this work, we present 234Compositor, a scalable and flexible parallel image compositing framework for massively parallel rendering applications. It is composed of a single-stage power-of-two conversion mechanism based on 234 scheduling of 3-2 and 2-1 eliminations, and a final image gathering mechanism based on data padding and MPI rank reordering, which enables the use of the MPI_Gather collective operation. In addition, hybrid MPI/OpenMP parallelism can be applied to take advantage of the multi-node, multi-core architecture of modern HPC systems. We confirmed the scalability of the proposed approach by evaluating a Binary-Swap implementation of 234Compositor on the K computer, a Japanese leading-edge supercomputer installed at RIKEN AICS. We also evaluated an integration with HIVE (Heterogeneously Integrated Visual-analytic Environment) to verify real-world usage. Given the encouraging scalability results, we expect this approach to remain useful on next-generation HPC systems, which may demand an even higher level of parallelism.
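
As a hedged sketch of the final gathering mechanism, the code below computes the padding that makes every rank's strip equal-sized (so a single MPI_Gather suffices) and a bit-reversed rank order; the bit-reversal convention is an assumption about the Binary-Swap variant, not a detail taken from the paper:

```python
def gather_layout(image_rows, n_ranks):
    """Rows of padding needed so MPI_Gather can collect equal-size strips,
    plus the strip index each rank ends up owning after binary swap
    (assuming the stage-k keep/send choice follows rank bit k)."""
    rows_per_rank = -(-image_rows // n_ranks)          # ceiling division
    padding = rows_per_rank * n_ranks - image_rows
    bits = n_ranks.bit_length() - 1                    # n_ranks is 2**bits
    def bit_reverse(r):
        out = 0
        for _ in range(bits):
            out, r = (out << 1) | (r & 1), r >> 1
        return out
    order = [bit_reverse(r) for r in range(n_ranks)]   # rank r owns strip order[r]
    return padding, order
```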


Eurographics Workshop on Parallel Graphics and Visualization | 2009

A decomposition approach for optimizing large-scale parallel image composition on multi-core MPP systems

Jorji Nonaka; Kenji Ono

In recent years, multi-core processor architecture has emerged as the predominant hardware architecture for high performance computing (HPC) systems, and computational nodes based on SMP (symmetric multiprocessing) and NUMA (non-uniform memory architecture) have become increasingly common. Traditional parallel image composition algorithms were not designed to take advantage of the combined message-passing and shared-address-space parallelism provided by modern massively parallel processing (MPP) systems, which can result in undesirable performance loss. In this study, we investigated a simple decomposition approach that exploits these different hardware characteristics to optimize the parallel image composition process. Performance evaluation was carried out on a T2K Open Supercomputer, which has a multi-core, multi-processor architecture, and we obtained encouraging results showing the effectiveness of the proposed approach. The approach also seems promising for tackling the large-scale image composition problem on next-generation HPC systems, where an ever-increasing number of processing cores is expected.
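
A serial Python illustration of the decomposition idea, assuming premultiplied-alpha RGBA images: composite first within each node (done via shared memory in practice), then across nodes (done via message passing); the grouping by `cores_per_node` is the only structural assumption:

```python
import numpy as np
from functools import reduce

def over(front, back):
    """'Over' compositing of premultiplied-alpha RGBA float images."""
    return front + (1.0 - front[..., 3:4]) * back

def two_level_composite(images, cores_per_node):
    """Serial illustration of the decomposition: composite within each
    node first (shared memory in practice), then across nodes (MPI)."""
    per_node = [images[i:i + cores_per_node]
                for i in range(0, len(images), cores_per_node)]
    node_results = [reduce(over, group) for group in per_node]   # intra-node
    return reduce(over, node_results)                            # inter-node
```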


International Symposium on Multimedia | 2006

Volume Rendering Using Tiny Particles

Naohisa Sakamoto; Jorji Nonaka; Koji Koyamada; S. Tanaka

In the present paper, we introduce a novel point-based volume rendering technique based on particle generation from a user-specified transfer function. In the proposed technique, a set of tiny particles is generated from a given 3D scalar field; this particle generation process is based on a user-specified transfer function and the rejection method. These particles are then projected onto the image plane to generate the final image. The main characteristic of the proposed technique is that the particles can be projected in any order, because the transparency values of the particles are not taken into account. Therefore, only a depth-order comparison between the particles is required during the projection stage, which can greatly facilitate distributed processing. When the quantity of projected particles is small, for instance a maximum of one per pixel area, it becomes difficult to achieve semi-transparency, which is the main characteristic of volume rendering. To overcome this problem, sub-pixel processing is applied to allow the projection of multiple particles onto each pixel area; the final pixel value is then obtained by averaging the contributions of these projected particles. The use of the Metropolis method for particle generation is also investigated as an alternative for further improving the image quality.
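
To make the projection stage concrete, here is a hedged NumPy sketch of sub-pixel projection with the depth-order comparison, assuming particle positions already projected into normalized [0, 1) image coordinates:

```python
import numpy as np

def project_particles(pos, rgb, depth, img_w, img_h, sub=4):
    """Project opaque particles onto a sub-pixel grid with a depth test,
    then average the sub-pixels down to the final pixel values."""
    W, H = img_w * sub, img_h * sub                # sub-pixel resolution
    zbuf = np.full((H, W), np.inf)
    cbuf = np.zeros((H, W, 3))
    px = np.clip((pos[:, 0] * W).astype(int), 0, W - 1)
    py = np.clip((pos[:, 1] * H).astype(int), 0, H - 1)
    for x, y, z, c in zip(px, py, depth, rgb):
        if z < zbuf[y, x]:          # only the depth-order comparison is needed
            zbuf[y, x] = z
            cbuf[y, x] = c
    # Averaging the sub-pixel samples produces the semi-transparent appearance.
    return cbuf.reshape(img_h, sub, img_w, sub, 3).mean(axis=(1, 3))
```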


International Conference on Systems | 2014

2-3-4 Decomposition Method for Large-Scale Parallel Image Composition with Arbitrary Number of Nodes

Jorji Nonaka; Chongke Bi; Kenji Ono; Masahiro Fujita

Visual data exploration helps users gain better insight into their data and has become an indispensable tool for computational scientists. Sort-last parallel rendering is a proven approach for large-scale scientific visualization; however, it requires a costly parallel image composition at the final stage. Since this stage requires interprocess communication among all nodes, it usually dominates the total cost of the parallel rendering process. Efficient image composition algorithms for a power-of-two number of nodes have already been proposed; however, handling a non-power-of-two number of nodes requires additional processing, which causes a performance penalty. The simplest way is to execute this additional processing in the initial stage, or in parts throughout the entire parallel image composition process. The latter approach causes a smaller penalty per stage, but since it adds overhead at every stage, it can suffer in large-scale image composition where tens, or even hundreds, of thousands of nodes can be involved. In this paper, we propose a decomposition approach for a non-power-of-two number of nodes, named 2-3-4 Decomposition. It works by generating exactly a power-of-two number of groups of 2, 3, or 4 nodes. By compositing each of these groups independently, we obtain a power-of-two number of composited images, making the result easy to combine with any existing image composition algorithm for a power-of-two number of nodes. The decomposition works as a pre-processing step, and the performance penalty is limited to the overhead of compositing three or four images; this penalty can be further reduced depending on the image compositing algorithm applied in the next stage. Our experiments have shown promising results, making this method a potential candidate for large-scale image composition with an arbitrary number of nodes.
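
The grouping rule is simple enough to state in code. A sketch that follows the abstract directly, choosing m so that 2^(m+1) <= n <= 2^(m+2) and splitting n node IDs into exactly 2^m groups of 2, 3, or 4:

```python
def decompose_234(n):
    """Split n (>= 2) nodes into exactly 2**m groups of 2, 3, or 4 nodes,
    where m is chosen so that 2**(m+1) <= n <= 2**(m+2)."""
    m = n.bit_length() - 2          # 2**(m+1) <= n < 2**(m+2)
    g = 1 << m                      # number of groups (a power of two)
    base, extra = divmod(n, g)      # base is 2 or 3; 'extra' groups get +1
    sizes = [base + 1] * extra + [base] * (g - extra)
    groups, start = [], 0
    for s in sizes:
        groups.append(list(range(start, start + s)))
        start += s
    return groups
```

For example, `decompose_234(6)` yields two groups of three nodes, and `decompose_234(5)` yields one group of three and one group of two; either result feeds directly into a power-of-two compositing algorithm.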


IEEE International Conference on High Performance Computing, Data, and Analytics | 2015

Top computational visualization R&D problems 2015: panel

Issei Fujishiro; Bing-Yu Chen; Wei Chen; Seok-Hee Hong; Takayuki Itoh; Koji Koyamada; Kenji Ono; Jorji Nonaka

In this panel discussion, I want to share some opinions about Data Visualization (or Information Visualization, i.e., InfoVis) without a display and/or a viewer, to make the audience think about challenging cases in providing information to users. Some examples are also introduced, which may inspire new directions for research in Data Visualization as well as Information Visualization.


International Conference of the IEEE Engineering in Medicine and Biology Society | 2006

Framework for Creating New Discriminants for Detecting DTI Properties: DTI Mapper

Koji Sakai; Naohisa Sakamoto; Jorji Nonaka; Yukio Yasuhara; Koji Koyamada

Medical images other than plain X-ray photography are commonly enhanced by taking advantage of certain characteristics of the human body in order to reveal valuable medical information. In general, medical imaging modalities employ an image acquisition method that enhances a certain feature of the seat of a disease for later processing of the acquired images. Typical examples of feature enhancement for medical images are active-area mapping on fMRI using statistical parametric mapping (SPM), and diffusivity mapping on diffusion tensor imaging (DTI) using fractional anisotropy (FA), the apparent diffusion coefficient (ADC), or relative anisotropy (RA). Especially in DTI, many researchers have been trying to reveal the current state of a disease non-invasively by using various discriminants. In this paper, we propose a framework that supports and promotes the creation of new discriminants for DTI. The proposed system enables users to create new discriminants from the eigenvalues of the voxels of DTI data, and to search for important clinical information by applying discriminant mapping to the DTI slice images.
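
For concreteness, the three measures named above have standard definitions over the per-voxel eigenvalues, which is exactly the kind of discriminant the framework lets users define. A NumPy sketch (the function name is illustrative):

```python
import numpy as np

def dti_measures(l1, l2, l3):
    """Standard DTI scalars from the diffusion-tensor eigenvalues:
    mean diffusivity (the ADC), fractional anisotropy (FA), and
    relative anisotropy (RA), computed voxel-wise."""
    lam = np.stack([l1, l2, l3], axis=-1)
    md = lam.mean(axis=-1)                      # mean diffusivity / ADC
    dev = lam - md[..., None]                   # deviation from the mean
    fa = np.sqrt(1.5 * (dev ** 2).sum(-1) / (lam ** 2).sum(-1))
    ra = np.sqrt((dev ** 2).sum(-1) / 3.0) / md
    return md, fa, ra
```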


International Conference on High Performance Computing and Simulation | 2017

Distributed Particle-Based Rendering Framework for Large Data Visualization on HPC Environments

Jorji Nonaka; Naohisa Sakamoto; Takashi Shimizu; Masahiro Fujita; Kenji Ono; Koji Koyamada

In this paper, we present a distributed data visualization framework for HPC environments based on the PBVR (Particle-Based Volume Rendering) method. PBVR is a point-based rendering approach in which the volumetric data to be visualized is represented as a set of small, opaque particles. The method has object-space and image-space variants, defined by the space (object or image) in which the particle data sets are generated. We focused on the object-space approach, which has an advantage when handling large-scale simulation data sets such as those generated by modern HPC systems. In the object-space approach, the particle generation and the subsequent rendering processes can be easily decoupled, and in this work we took advantage of this separability to implement the proposed distributed rendering framework. The particle generation process uses the functionality provided by KVS (Kyoto Visualization System), and the particle rendering process uses the functionality provided by HIVE (Heterogeneously Integrated Visual-analytics Environment). The proposed framework is designed to work even on systems without hardware graphics acceleration, which are common in modern HPC operational environments. We evaluated this PBVR-based distributed visualization infrastructure in the K computer operational environment, using a CPU-only processing server for particle data generation and rendering. In this preliminary evaluation, using several CFD (Computational Fluid Dynamics) simulation data sets, we obtained encouraging results that motivate further development toward making the system available as an effective visualization alternative for HPC users.
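
A minimal sketch of the decoupling the framework exploits: the generation side produces a self-contained particle set that the rendering side can consume independently. The file-based handoff shown here is an illustrative assumption; the abstract does not specify the transport between KVS and HIVE:

```python
import numpy as np

# Generation side (KVS in the paper): write the particle set to a
# portable file that the rendering side can pick up independently.
def export_particles(path, positions, colors):
    np.savez(path, positions=positions, colors=colors)

# Rendering side (HIVE in the paper): load the particles and hand them
# to whatever renderer is available, including a CPU-only one.
def import_particles(path):
    data = np.load(path)
    return data["positions"], data["colors"]
```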


International Conference on Computer Graphics and Interactive Techniques | 2017

Parallel particle-based volume rendering using adaptive particle size adjustment technique

Kengo Hayashi; Takashi Shimizu; Naohisa Sakamoto; Jorji Nonaka

Numerical simulations run in high performance computing (HPC) environments have become massively concurrent with recent advances in computer simulation technology, and there is increasing demand for extreme-scale visualization techniques. In this paper, we propose a parallel particle-based volume rendering method based on an adaptive particle size adjustment technique, which is suitable for handling large-scale and complex distributed volume datasets in HPC environments. In our experiment, the proposed technique is applied to a large-scale unstructured thermal fluid simulation, and a performance model is constructed to confirm the effectiveness of the proposed technique.
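
For background on the lever such an adjustment operates on: in the PBVR literature the particle density rho, particle radius r, and target opacity alpha over a sampling step dt are commonly related by alpha = 1 - exp(-pi * r^2 * rho * dt). Under that (assumed) relation, doubling the radius cuts the required particle count by a factor of four, which is what makes per-region size adjustment an effective control on rendering cost; the paper's actual adjustment rule is not described in the abstract. A small sketch:

```python
import numpy as np

def particle_density(alpha, radius, step):
    """Particle density that reproduces opacity alpha over one sampling
    step, assuming alpha = 1 - exp(-pi * radius**2 * rho * step)."""
    return -np.log(1.0 - alpha) / (np.pi * radius ** 2 * step)
```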
