Lingyun Yu
Hangzhou Dianzi University
Publication
Featured research published by Lingyun Yu.
IEEE Transactions on Visualization and Computer Graphics | 2010
Lingyun Yu; Pjotr Svetachov; Petra Isenberg; Maarten H. Everts; Tobias Isenberg
We present the design and evaluation of FI3D, a direct-touch data exploration technique for 3D visualization spaces. The exploration of three-dimensional data is core to many tasks and domains involving scientific visualizations. Thus, effective data navigation techniques are essential to enable comprehension, understanding, and analysis of the information space. While evidence exists that touch can provide higher-bandwidth input, somesthetic information that is valuable when interacting with virtual worlds, and awareness when working in collaboration, scientific data exploration in 3D poses unique challenges to the development of effective data manipulations. We present a technique that provides touch interaction with 3D scientific data spaces in 7 DOF. This interaction does not require the presence of dedicated objects to constrain the mapping, a design decision important for many scientific datasets such as particle simulations in astronomy or physics. We report on an evaluation that compares the technique to conventional mouse-based interaction. Our results show that touch interaction is competitive in interaction speed for translation and integrated interaction, is easy to learn and use, and is preferred for exploration and wayfinding tasks. To further explore the applicability of our basic technique for other types of scientific visualizations we present a second case study, adjusting the interaction to the illustrative visualization of fiber tracts of the brain and the manipulation of cutting planes in this context.
IEEE Transactions on Visualization and Computer Graphics | 2012
Lingyun Yu; Konstantinos Efstathiou; Petra Isenberg; Tobias Isenberg
Data selection is a fundamental task in visualization because it serves as a pre-requisite to many follow-up interactions. Efficient spatial selection in 3D point cloud datasets consisting of thousands or millions of particles can be particularly challenging. We present two new techniques, TeddySelection and CloudLasso, that support the selection of subsets in large particle 3D datasets in an interactive and visually intuitive manner. Specifically, we describe how to spatially select a subset of a 3D particle cloud by simply encircling the target particles on screen using either the mouse or direct-touch input. Based on the drawn lasso, our techniques automatically determine a bounding selection surface around the encircled particles based on their density. This kind of selection technique can be applied to particle datasets in several application domains. TeddySelection and CloudLasso reduce, and in some cases even eliminate, the need for complex multi-step selection processes involving Boolean operations. This was confirmed in a formal, controlled user study in which we compared the more flexible CloudLasso technique to the standard cylinder-based selection technique. This study showed that the former is consistently more efficient than the latter - in several cases the CloudLasso selection time was half that of the corresponding cylinder-based selection.
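The core idea described above, encircling target particles on screen and then filtering the encircled candidates by density, can be illustrated with a minimal sketch. This is not the published CloudLasso algorithm: the paper computes a bounding selection surface from a density estimate, whereas the toy version below uses a simple neighbor-count density threshold, and all function names and parameters are illustrative assumptions.

```python
# Illustrative sketch of lasso-plus-density selection (not the paper's
# actual CloudLasso implementation): project 3D particles to screen space,
# keep those inside the drawn lasso polygon, then retain only particles
# whose local 3D density (neighbor count within a radius) is high enough.
import math

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is screen point (x, y) inside the lasso polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def cloud_lasso(points, project, lasso, radius, min_neighbors):
    """Return indices of 3D points whose screen projection lies inside the
    lasso and that have at least `min_neighbors` other candidates within
    `radius` in 3D (a crude stand-in for a density threshold)."""
    candidates = [i for i, p in enumerate(points)
                  if point_in_polygon(*project(p), lasso)]
    selected = []
    for i in candidates:
        neighbors = sum(1 for j in candidates if j != i and
                        math.dist(points[i], points[j]) <= radius)
        if neighbors >= min_neighbors:
            selected.append(i)
    return selected
```

For example, with an orthographic projection `lambda p: (p[0], p[1])`, a tight cluster inside the lasso is selected while an isolated particle inside the same lasso is rejected, which mirrors the abstract's point that density filtering removes the need for multi-step Boolean refinement.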
IEEE Transactions on Visualization and Computer Graphics | 2016
Lingyun Yu; Konstantinos Efstathiou; Petra Isenberg; Tobias Isenberg
We present a family of three interactive Context-Aware Selection Techniques (CAST) for the analysis of large 3D particle datasets. For these datasets, spatial selection is an essential prerequisite to many other analysis tasks. Traditionally, such interactive target selection has been particularly challenging when the data subsets of interest were implicitly defined in the form of complicated structures of thousands of particles. Our new techniques SpaceCast, TraceCast, and PointCast improve usability and speed of spatial selection in point clouds through novel context-aware algorithms. They are able to infer a user's subtle selection intention from gestural input, can deal with complex situations such as partially occluded point clusters or multiple cluster layers, and can all be fine-tuned after the selection interaction has been completed. Together, they provide an effective and efficient tool set for the fast exploratory analysis of large datasets. In addition to presenting CAST, we report on a formal user study that compares our new techniques not only to each other but also to existing state-of-the-art selection methods. Our results show that CAST family members are virtually always faster than existing methods without tradeoffs in accuracy. In addition, qualitative feedback shows that PointCast and TraceCast were strongly favored by our participants for intuitiveness and efficiency.
Computer Graphics Forum | 2018
X. Cai; Konstantinos Efstathiou; Xiao-Yu Xie; Yingcai Wu; Y. Shi; Lingyun Yu
Pie and doughnut charts nicely convey the part–whole relationship and they have become the most recognizable chart types for representing proportions in business and data statistics. Many experiments have been carried out to study human perception of the pie chart, while the corresponding aspects of the doughnut chart have seldom been tested, even though the doughnut chart and the pie chart share several similarities. In this paper, we report on a series of experiments in which we explored the effect of a few fundamental design parameters of doughnut charts, and additional visual cues, on the accuracy of such charts for proportion estimates. Since mobile devices are becoming the primary devices for casual reading, we performed all our experiments on such devices. Moreover, the screen size of mobile devices is limited and it is therefore important to know how such a size constraint affects the proportion accuracy. For this reason, in our first experiment we tested the chart size and we found that it has no significant effect on proportion accuracy. In our second experiment, we focused on the effect of the doughnut chart inner radius and we found that the proportion accuracy is insensitive to the inner radius, except in the case of the thinnest doughnut chart. In the third experiment, we studied the effect of visual cues and found that marking the centre of the doughnut chart or adding tick marks at 25% intervals improves the proportion accuracy. Based on the results of the three experiments, we discuss the design of doughnut charts and offer suggestions for improving the accuracy of proportion estimates.
International Workshop on Next Generation Computer Animation Techniques | 2017
Jiechang Guo; Yigang Wang; Peng Du; Lingyun Yu
In the field of scientific visualization, 3D manipulation is a fundamental task for many different scientific datasets, such as particle data in physics and astronomy, fluid data in aerography, and structured data in medical science. Current research shows that large multi-touch interactive displays are promising devices, providing numerous significant advantages for displaying and manipulating scientific data. These benefits of direct-touch devices motivate us to use touch-based interaction techniques to explore scientific 3D data. However, manipulating objects in 3D space via 2D touch input devices is challenging for precise control. Therefore, we present a novel multi-touch approach for manipulating structured objects in 3D visualization space, based on multi-touch gestures and an extra axis for assistance. Our method supports 7-DOF manipulations. Moreover, with the help of the extra axis and depth hints, users can have better control of the interactions. We report on a user study comparing our method to a standard mouse-based 2D interface. We show in this work that touch-based interactive displays can be more effective when applied to complex problems if the interactive visualizations and interactions are designed appropriately.
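The "7-DOF" in the abstract above is commonly understood as 3 translation + 3 rotation + 1 uniform scale. As a rough sketch of what such a manipulation amounts to mathematically (this is generic transform composition, not the paper's gesture mapping; all names are illustrative):

```python
# Generic 7-DOF transform: uniform scale, then rotation (Euler angles,
# applied as Rz @ Ry @ Rx), then translation. Illustrative only; the
# paper's gesture-to-transform mapping is not reproduced here.
import math

def rotation_matrix(rx, ry, rz):
    """Row-major 3x3 rotation matrix R = Rz @ Ry @ Rx (radians)."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]

def apply_7dof(p, translate, rotate, scale):
    """Apply scale, then rotation, then translation to 3D point p."""
    R = rotation_matrix(*rotate)
    x, y, z = (scale * c for c in p)
    out = (R[0][0] * x + R[0][1] * y + R[0][2] * z,
           R[1][0] * x + R[1][1] * y + R[1][2] * z,
           R[2][0] * x + R[2][1] * y + R[2][2] * z)
    return tuple(o + t for o, t in zip(out, translate))
```

In a touch interface, each gesture (drag, pinch, two-finger twist) contributes a delta to one subset of these seven parameters per frame; the abstract's "extra axis" serves to disambiguate which subset a given gesture controls.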
Astronomy and Computing | 2017
Davide Punzo; J. M. van der Hulst; Jos B. T. M. Roerdink; J. C. Fillion-Robin; Lingyun Yu
IEEE Transactions on Visualization and Computer Graphics | 2018
Tan Tang; Sadia Rubab; Jiewen Lai; Weiwei Cui; Lingyun Yu; Yingcai Wu
EuroVis 2018 | 2018
L. Amabili; Jiri Kosinka; M.A.J. van Meersbergen; P. M. A. van Ooijen; Jos B. T. M. Roerdink; P. Svetachov; Lingyun Yu; Jimmy Johansson; Filip Sadlo; Tobias Schreck
Archive | 2016
Lingyun Yu; Konstantinos Efstathiou; Petra Isenberg; Tobias Isenberg