Charilaos Papadopoulos
Stony Brook University
Publication
Featured research published by Charilaos Papadopoulos.
IEEE Computer Graphics and Applications | 2015
Charilaos Papadopoulos; Kaloian Petkov; Arie E. Kaufman; Klaus Mueller
The Reality Deck is a visualization facility offering state-of-the-art aggregate resolution and immersion. It is a 1.5-Gpixel immersive tiled display with a full 360-degree horizontal field of view. Comprising 416 high-density LED-backlit LCD displays, it visualizes gigapixel-resolution data while providing 20/20 visual acuity for most of the visualization space.
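The 20/20 acuity claim can be made concrete with a short back-of-the-envelope check: 20/20 vision corresponds to resolving roughly one arcminute, so a display becomes indistinguishable from "retinal" once each pixel subtends less than that angle. A minimal sketch in Python, assuming an illustrative 0.233 mm pixel pitch (typical of a 27-inch 2560x1440 panel; the exact Reality Deck panel specs are not given in the abstract):

```python
import math

# Assumed panel parameters (illustrative, not confirmed by the abstract).
pixel_pitch_mm = 0.233                  # distance between adjacent pixels
one_arcmin_rad = math.radians(1 / 60)   # 20/20 acuity ~ 1 arcminute per pixel

# Minimum viewing distance at which one pixel subtends <= 1 arcminute,
# i.e. the display reaches 20/20 acuity for the viewer.
min_distance_m = (pixel_pitch_mm / 1000) / one_arcmin_rad
print(f"20/20 acuity reached beyond ~{min_distance_m:.2f} m")  # ~0.80 m
```

Under these assumptions, any viewer standing more than about 0.8 m from the display surface perceives it at full acuity, which is consistent with the "most of the visualization space" phrasing.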
IEEE Virtual Reality Conference | 2012
Charilaos Papadopoulos; Daniel Sugarman; Arie E. Kaufman
We introduce NuNav3D, a body-driven 3D navigation interface for large displays and immersive scenarios. While 3D navigation is a core component of VR applications, certain situations, such as remote displays in public spaces or large visualization environments, do not allow for a navigation controller or prop. NuNav3D maps hand motions, obtained from a depth-sensor-driven pose recognition framework, to a virtual camera manipulator, allowing direct control of 4 DOFs of navigation. We present the NuNav3D navigation scheme and preliminary user study results for two scenarios: a path-following case with tight geometric constraints and an open-space exploration case, comparing our method against a traditional joypad controller.
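A minimal sketch of how body-driven navigation of this kind can work, assuming hand displacements from a calibrated rest pose drive the camera; the specific DOF assignment below is illustrative, not the published NuNav3D mapping:

```python
import numpy as np

def hands_to_camera_delta(left, right, rest_left, rest_right, gain=0.5):
    """Map tracked 3D hand positions (meters) to a 4-DOF camera update.

    Illustrative sketch: common-mode motion of both hands drives
    translation, differential motion drives rotation. The actual
    NuNav3D mapping may differ.
    """
    dl, dr = left - rest_left, right - rest_right
    mean = (dl + dr) / 2            # common-mode motion -> translation
    diff = dr - dl                  # differential motion -> rotation
    forward = gain * mean[2]        # both hands pushed forward/back
    strafe  = gain * mean[0]        # both hands moved left/right
    lift    = gain * mean[1]        # both hands raised/lowered
    yaw     = gain * np.arctan2(diff[2], diff[0] + 1e-9)  # "steering wheel"
    return forward, strafe, lift, yaw
```

Applying the returned deltas to the virtual camera each frame yields controller-free navigation of the four DOFs named above.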
IEEE Transactions on Visualization and Computer Graphics | 2016
Charilaos Papadopoulos; Ievgeniia Gutenko; Arie E. Kaufman
Empirical, hypothesis-driven experimentation is at the heart of the scientific discovery process and has become commonplace in human-factors-related fields. To enable the integration of visual analytics in such experiments, we introduce VEEVVIE, the Visual Explorer for Empirical Visualization, VR and Interaction Experiments. VEEVVIE comprises a back-end ontology that can model several experimental designs encountered in these fields. This formalization allows VEEVVIE to capture experimental data in a queryable form and make it accessible through a front-end interface. This front-end offers several multi-dimensional visualization widgets with built-in filtering and highlighting functionality. VEEVVIE is also extensible, supporting custom experimental measurements and data types through a plug-in visualization widget architecture. We demonstrate VEEVVIE through several case studies of visual analysis performed on the design and data collected during an experiment on the scalability of high-resolution, immersive, tiled-display walls.
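The plug-in widget architecture suggests a contract along the following lines; this is a hypothetical sketch with illustrative names, not VEEVVIE's actual API:

```python
from abc import ABC, abstractmethod

class VisualizationWidget(ABC):
    """Sketch of a plug-in widget contract for custom measurement types.

    Names and signatures are assumptions for illustration only.
    """

    @abstractmethod
    def accepts(self, measurement_type: str) -> bool:
        """Whether this widget can render the given experimental measurement."""

    @abstractmethod
    def render(self, records: list[dict], selection: set[int]) -> None:
        """Draw the records, highlighting the currently selected trial IDs."""

WIDGET_REGISTRY: list[type[VisualizationWidget]] = []

def register(widget_cls):
    """Decorator so a custom measurement type can ship its own widget."""
    WIDGET_REGISTRY.append(widget_cls)
    return widget_cls
```

With such a registry, the front-end can dispatch each measurement type to whichever registered widget accepts it, which is one way to realize the extensibility the abstract describes.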
Proceedings of SPIE | 2014
Ievgeniia Gutenko; Kaloian Petkov; Charilaos Papadopoulos; Xin Zhao; Ji Hwan Park; Arie E. Kaufman; Ronald Cha
We introduce a novel remote volume rendering pipeline for medical visualization targeted at mHealth (mobile health) applications. The need for such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners, coupled with the complexity of volumetric rendering algorithms. For example, the resolution of typical CT Angiography (CTA) data easily reaches 512^3 voxels and can exceed 6 gigabytes when the data spans the time domain to capture a beating heart. This explosion in data size makes transfers to mobile devices challenging, and even when the transfer problem is solved, the rendering performance of the device remains a bottleneck. To address this, we propose a thin-client architecture in which the data resides entirely on a remote server, where the image is rendered and then streamed to the client mobile device. We utilize the display and interaction capabilities of the mobile device while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware-accelerated, resulting in faster compression and lower power requirements. The choice of low-latency CPU- and GPU-based encoders is particularly important to the interactive nature of our system. We demonstrate a prototype of our framework using various medical datasets on commodity tablet devices.
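For scale, 512^3 voxels at 16 bits per voxel is already about 268 MB for a single timestep, so a few dozen cardiac phases reach the quoted 6 gigabytes. The server side of such a thin-client pipeline can be sketched as a render-encode-stream loop; the snippet below uses the ffmpeg CLI as a stand-in low-latency encoder and a hypothetical renderer callback, and the MPEG-TS-over-TCP transport is an assumption, not the paper's protocol:

```python
import subprocess

W, H = 1024, 768  # illustrative stream resolution

def start_encoder(port=8554):
    """Pipe raw rendered frames into ffmpeg for low-latency H.264 encoding."""
    return subprocess.Popen(
        ["ffmpeg", "-f", "rawvideo", "-pix_fmt", "rgb24",
         "-s", f"{W}x{H}", "-i", "-",              # raw RGB frames on stdin
         "-c:v", "libx264", "-preset", "ultrafast",
         "-tune", "zerolatency",                   # minimize encode latency
         "-f", "mpegts", f"tcp://0.0.0.0:{port}?listen"],
        stdin=subprocess.PIPE)

def serve(render_volume, camera_stream):
    """render_volume is a hypothetical stand-in for a GPU volume renderer;
    camera_stream yields one camera state per client interaction."""
    enc = start_encoder()
    for camera in camera_stream:
        frame = render_volume(camera)   # H * W * 3 bytes of RGB
        enc.stdin.write(frame)
```

The client then only decodes and displays video, which is exactly where commodity tablets are strongest.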
IEEE Transactions on Visualization and Computer Graphics | 2013
Charilaos Papadopoulos; Arie E. Kaufman
We present a framework for acuity-driven visualization of super-high-resolution image data on gigapixel displays. Tiled display walls offer a large workspace that can be navigated physically by the user. Based on head tracking information, the physical characteristics of the tiled display and the formulation of visual acuity, we guide an out-of-core gigapixel rendering scheme by delivering high levels of detail only where they are perceivable to the user. We apply this principle to gigapixel image rendering through adaptive level-of-detail selection. Additionally, we have developed an acuity-driven tessellation scheme for high-quality Focus-and-Context (F+C) lenses that significantly reduces visual artifacts while accurately capturing the underlying lens function. We demonstrate this framework on the Reality Deck, an immersive gigapixel display. We present the results of a user study designed to quantify the impact of our acuity-driven rendering optimizations in the visual exploration process. We discovered no evidence suggesting a difference in search task performance between our framework and naive rendering of gigapixel resolution data, while realizing significant benefits in terms of data transfer overhead. Additionally, we show that our acuity-driven tessellation scheme offers substantially increased frame rates when compared to naive pre-tessellation, while providing indistinguishable image quality.
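The core acuity-driven idea can be sketched as a level-of-detail selector: pick the coarsest mip level whose texels still fall below the one-arcminute acuity limit at the tracked viewer distance. The heuristic below is an illustrative approximation, not the paper's exact formulation:

```python
import math

ONE_ARCMIN = math.radians(1 / 60)   # 20/20 visual acuity threshold

def select_lod(distance_m, pixel_pitch_m, num_levels):
    """Pick the coarsest mip level whose texels stay below the acuity
    limit at the viewer's distance from the display surface."""
    # Smallest feature (meters on the display) the viewer can resolve.
    resolvable = distance_m * ONE_ARCMIN
    # Each mip level doubles a texel's footprint on the display.
    level = max(0, math.floor(math.log2(max(resolvable / pixel_pitch_m, 1))))
    return min(level, num_levels - 1)

# A viewer 4 m from the wall can be served a much coarser level (here 2)
# than one standing 0.5 m away (level 0), saving data transfer bandwidth.
print(select_lod(4.0, 0.000233, 10), select_lod(0.5, 0.000233, 10))
```

Serving coarser tiles to distant viewers is what yields the reported data-transfer savings without a measurable loss in search performance.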
IEEE Virtual Reality Conference | 2013
Kaloian Petkov; Charilaos Papadopoulos; Arie E. Kaufman
We introduce the concept of the infinite canvas as a metaphor for the immersive visual exploration of very large image datasets using a natural walking interface. The interface allows the user to move along the display surface and be continuously exposed to new data, essentially exploring the horizontal axis of an arbitrarily long canvas. Our system provides a spiral navigation interface that shows a compressed immersive overview of the data and facilitates rapid, fluid transitions to distant points within the infinite canvas. We demonstrate the implementation of the infinite canvas on the world's first 1.5-billion-pixel tiled immersive display.
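One way to realize the metaphor is to unwrap the user's tracked heading so it accumulates across full revolutions of the facility, then map revolutions to horizontal canvas distance; the sketch below uses illustrative parameters, not the Reality Deck's actual geometry:

```python
import math

CIRCUMFERENCE_M = 33.0  # assumed length of the display loop (illustrative)

def accumulate_angle(total, prev_theta, theta):
    """Unwrap a tracked heading sample (radians in [-pi, pi)) so the running
    total keeps growing across the +/-pi seam as the user circles the room."""
    d = theta - prev_theta
    if d > math.pi:
        d -= 2 * math.pi
    elif d < -math.pi:
        d += 2 * math.pi
    return total + d, theta

def canvas_offset(total_angle):
    """Each full revolution advances the view by one display circumference,
    so walking indefinitely keeps exposing new regions of the canvas."""
    return (total_angle / (2 * math.pi)) * CIRCUMFERENCE_M
```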
Symposium on 3D User Interfaces | 2015
Qi Sun; Seyedkoosha Mirhosseini; Ievgeniia Gutenko; Ji Hwan Park; Charilaos Papadopoulos; Bireswar Laha; Arie E. Kaufman
With the rapid development and widespread availability of hand-held, market-level 3D scanners, character modeling has recently gained attention in both academia and industry. Virtual shopping applications are widely used in e-business. We present our parameter-based human avatar generation system and ongoing work on extending virtual shopping to immersive virtual reality platforms employing natural user interfaces. We discuss ideas for evaluating buyer satisfaction using our system.
Symposium on 3D User Interfaces | 2015
Charilaos Papadopoulos; H. Choi; J. Sinha; Kiwon Yun; Arie E. Kaufman; Dimitris Samaras; Bireswar Laha
Chirocentric 3D user interfaces are sometimes hailed as the “holy grail” of human-computer interaction. However, implementations of these UIs can require cumbersome devices (such as tethered wearable datagloves), be limited in functionality, or obscure the algorithms used for hand pose and gesture recognition. These limitations inhibit designing, deploying, and formally evaluating such interfaces. To ameliorate this situation, we describe the implementation of a practical chirocentric UI platform targeted at immersive virtual environments with infrared tracking systems. Our main contributions are two machine learning techniques for the recognition of hand gestures (trajectories of the user's hands over time) and hand poses (configurations of the user's fingers) based on marker clouds and rigid-body data. We report on the preliminary use of our system for the implementation of a bimanual 3DUI for a large immersive tiled display. We conclude with plans for using our system as a platform for the design and evaluation of bimanual chirocentric UIs, based on the Framework for Interaction Fidelity Analysis (FIFA).
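As a point of reference for gesture recognition over hand trajectories, a simple nearest-neighbor baseline with dynamic time warping can be sketched as follows; the paper's actual recognizers are learned models over marker-cloud and rigid-body features, so this is only an assumption-laden stand-in:

```python
import numpy as np

def dtw(a, b):
    """Dynamic-time-warping distance between two hand trajectories,
    each an array of shape (T, 3) of tracked positions over time."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(trajectory, templates):
    """Nearest-neighbor gesture label over a dict of recorded templates,
    e.g. {"swipe": traj1, "circle": traj2}."""
    return min(templates, key=lambda label: dtw(trajectory, templates[label]))
```

DTW tolerates differences in gesture speed between users, which is why it is a common baseline before moving to the kind of learned recognizers the paper contributes.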
IEEE Transactions on Visualization and Computer Graphics | 2012
Kaloian Petkov; Charilaos Papadopoulos; Min Zhang; Arie E. Kaufman; Xianfeng David Gu
IEEE Virtual Reality Conference | 2011
Kaloian Petkov; Charilaos Papadopoulos; Min Zhang; Arie E. Kaufman; Xianfeng Gu