Gerwin de Haan
Delft University of Technology
Publications
Featured research published by Gerwin de Haan.
Eurographics | 2005
Gerwin de Haan; Michal Koutek; Frits H. Post
We present IntenSelect, a novel selection technique that dynamically assists the user in the selection of 3D objects in Virtual Environments. Ray-casting selection is commonly used, although it has limited accuracy and can be problematic in more difficult situations where the intended object is occluded or moving. Selection-by-volume techniques, which extend normal ray-casting, provide error tolerance to cope with the limited accuracy. However, these extensions are generally not usable in the more complex selection situations. We have devised a new selection-by-volume technique that is flexible enough to be used in these situations. To achieve this, we use a new scoring function to calculate the score of objects that fall within a user-controlled, conic selection volume. By accumulating these scores, we obtain a dynamic, time-dependent object ranking. The highest-ranking object, or active object, is indicated by bending the otherwise straight selection ray towards it. As the selection ray is effectively snapped to the object, the user can select the object more easily. Our user tests indicate that IntenSelect can improve selection performance over ray-casting, especially in the more difficult cases of small objects. Furthermore, the time-dependent object ranking proves especially useful when objects are moving, occluded and/or cluttered. Our simple scoring scheme can easily be extended for special-purpose interaction, such as widget- or application-specific interaction functionality, which creates new possibilities for complex interaction behavior.
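As a rough sketch of the kind of conic-volume scoring and time-dependent accumulation the abstract describes (not the published formulation), the Python below ranks objects by an angular score that grows while an object stays near the ray axis and decays once it leaves the cone. The cone angle and the growth and decay constants are illustrative assumptions.

```python
import math

def contribution(ray_origin, ray_dir, obj_pos, cone_angle_deg=15.0):
    """Angular contribution of one object inside the conic selection
    volume, in [0, 1]; objects outside the cone contribute nothing.
    ray_dir is assumed to be unit length."""
    to_obj = [p - o for p, o in zip(obj_pos, ray_origin)]
    dist = math.sqrt(sum(c * c for c in to_obj)) or 1e-9
    cos_a = sum(d * t for d, t in zip(ray_dir, to_obj)) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    if angle > cone_angle_deg:
        return 0.0
    return 1.0 - angle / cone_angle_deg   # closer to the ray axis -> higher

def update_ranking(scores, objects, ray_origin, ray_dir,
                   growth=0.6, decay=0.9):
    """Accumulate per-object scores over successive frames; the running
    totals form a time-dependent ranking, and the top-ranked object is
    the 'active' one the bent ray snaps to."""
    for name, pos in objects.items():
        scores[name] = scores.get(name, 0.0) * decay \
                       + growth * contribution(ray_origin, ray_dir, pos)
    return max(scores, key=scores.get) if scores else None

# Hypothetical usage: two candidate objects, ray pointing down -z.
scores = {}
objects = {"valve": (0.2, 1.1, -2.0), "pump": (0.4, 1.0, -2.5)}
active = update_ranking(scores, objects, (0.0, 1.0, 0.0), (0.0, 0.0, -1.0))
```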
Virtual Reality Software and Technology | 2008
Gerwin de Haan; Eric J. Griffith; Frits H. Post
We demonstrate the use of the Wii Balance Board™ as a low-cost virtual reality input device. We provide an overview of obtaining and working with the sensor input. By processing the sensor values from the balance board, we are able to use it for both discrete and continuous input, which can be used to drive a variety of VR interaction metaphors. Using continuous input, the balance board is well suited for interactions requiring two simultaneous degrees of freedom and up to three total degrees of freedom, such as navigation or rotation. The discrete input is suitable for control input, such as mode switching or object selection.
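A minimal sketch of how the board's four corner load cells might be processed into continuous and discrete input along the lines the abstract describes: the centre of pressure gives two simultaneous degrees of freedom, and a pronounced lean is thresholded into a discrete command. Sensor ordering, units and the threshold are assumptions, not the authors' calibration.

```python
def center_of_pressure(tl, tr, bl, br):
    """Map the four corner load-cell readings (kg) to a normalized
    centre-of-pressure in [-1, 1] x [-1, 1], plus the total weight."""
    total = tl + tr + bl + br
    if total < 1e-6:                      # nobody standing on the board
        return 0.0, 0.0, 0.0
    x = ((tr + br) - (tl + bl)) / total   # lean right   -> positive x
    y = ((tl + tr) - (bl + br)) / total   # lean forward -> positive y
    return x, y, total

def discrete_command(x, y, threshold=0.4):
    """Turn a strong lean into a discrete event (e.g. mode switching);
    small weight shifts below the threshold are ignored."""
    if max(abs(x), abs(y)) < threshold:
        return None
    if abs(x) > abs(y):
        return "right" if x > 0 else "left"
    return "forward" if y > 0 else "back"
```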
Virtual Reality Software and Technology | 2002
Gerwin de Haan; Michal Koutek; Frits H. Post
In this paper we present a basic set of intuitive exploration tools for data visualization in a Virtual Environment on the Responsive Workbench. First, we introduce the Plexipad, a transparent acrylic panel that allows two-handed interaction in combination with a stylus. After describing various interaction scenarios with these two devices, we present a basic set of interaction tools that support the user in the process of exploring volumetric datasets. Besides the interaction tools for navigation and selection, we present tools that are closely coupled with probing tools. These interactive probing tools are used as input for complex visualization tools and for performing virtual measurements. We illustrate the use of our tools in two applications from different research areas that use volumetric and particle data.
Symposium on 3D User Interfaces | 2009
Gerwin de Haan; Josef Scheuer; Raymond de Vries; Frits H. Post
Current surveillance systems can display many individual video streams within spatial context in a 2D map or 3D Virtual Environment (VE). The aim of this is to overcome some problems in traditional systems, e.g. to avoid the intensive mental effort needed to maintain orientation and to ease tracking of motions between different screens. However, such integrated environments introduce new challenges in navigation and comprehensive viewing, caused by imperfect video alignment and complex 3D interaction. In this paper, we propose a novel first-person viewing and navigation interface for integrated surveillance monitoring in a VE. It is currently designed for egocentric tasks, such as tracking persons or vehicles across several cameras. For these tasks, it aims to minimize the operator's 3D navigation effort while maximizing coherence between video streams and spatial context. The user can easily navigate between adjacent camera views and is guided along 3D guidance paths. To achieve visual coherence, we use dynamic video embedding: according to the viewer's position, translucent 3D video canvases are smoothly transformed and blended into the simplified 3D environment. The animated first-person view provides fluent visual flow, which makes it easier to maintain orientation and can aid spatial awareness. We discuss design considerations and the implementation of our proposed interface in our prototype surveillance system, and demonstrate its use and limitations in various surveillance environments.
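As an illustration of the dynamic video embedding idea (not the paper's implementation), the sketch below fades translucent canvases in and out with the viewer's distance to each camera viewpoint and normalizes the result into blending weights; the linear falloff and the distance-only weighting are assumptions.

```python
import math

def canvas_opacity(viewer_pos, camera_pos, falloff=10.0):
    """Fade a translucent video canvas in as the virtual viewer approaches
    the corresponding camera viewpoint, and out again on the way to the
    next one."""
    return max(0.0, 1.0 - math.dist(viewer_pos, camera_pos) / falloff)

def blend_weights(viewer_pos, camera_positions):
    """Normalized blending weights over all nearby canvases, so the
    composited first-person view stays coherent while moving along a
    guidance path between adjacent cameras."""
    raw = {cam: canvas_opacity(viewer_pos, pos)
           for cam, pos in camera_positions.items()}
    total = sum(raw.values()) or 1.0
    return {cam: w / total for cam, w in raw.items()}

# Hypothetical usage with two cameras in a simplified 3D environment.
weights = blend_weights((2.0, 1.7, 5.0),
                        {"cam_1": (0.0, 3.0, 4.0), "cam_2": (6.0, 3.0, 8.0)})
```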
Mitigation and Adaptation Strategies for Global Change | 2017
Johannes G. Leskens; Christian Kehl; Tim Tutenel; Timothy R. Kol; Gerwin de Haan; G.S. Stelling; Elmar Eisemann
Developing strategies to mitigate or adapt to the threats of floods is an important topic in the context of climate change. Many of the world's cities are endangered due to rising ocean levels and changing precipitation patterns. It is therefore crucial to develop analytical tools that allow us to evaluate the threats of floods and to investigate the influence of mitigation and adaptation measures, such as stronger dikes, adaptive spatial planning, and flood disaster plans. Until now, these analytical tools have only been accessible to domain experts, as the involved simulation processes are complex and rely on computation- and data-intensive models. Outputs of these analytical tools are presented to practitioners (i.e., policy analysts and political decision-makers) on maps or in graphical user interfaces. In practice, this output is only used to a limited extent because practitioners often have different information requirements or do not trust the direct outcome. Nonetheless, the literature indicates that a closer collaboration between domain experts and practitioners can ensure that the information requirements of practitioners are better aligned with the opportunities and limitations of analytical tools. The objective of our work is to present a step forward in the effort to make analytical tools in flood management accessible to practitioners, in support of this collaboration between domain experts and practitioners. Our system allows the user to interactively control the simulation process (addition of water sources or influence of rainfall), while a realistic visualization allows the user to mentally map the results onto the real world. We have developed several novel algorithms to present and interact with flood data. We explain the technologies, discuss their necessity alongside test cases, and introduce a user study to analyze the reactions of practitioners to our system. We conclude that, despite the complexity of flood simulation models and the size of the involved data sets, our system is accessible to practitioners of flood management, so that they can carry out flood simulations together with domain experts in interactive work sessions. Therefore, this work has the potential to significantly change the decision-making process and may become an important asset in choosing sustainable flood mitigation and adaptation strategies.
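As a hedged illustration of the kind of interactive steering input mentioned above (adding water sources or rainfall), and not the authors' flood solver, the sketch below injects uniform rainfall and user-placed point sources into a 2D water-depth grid before the next simulation step; the grid layout, units and cell size are assumptions.

```python
def apply_steering(depth, sources, rainfall_mm, cell_area_m2=25.0):
    """One interactive steering step: add uniform rainfall and user-placed
    point sources to a 2D water-depth grid (metres) before handing the grid
    back to the flow solver."""
    rain = rainfall_mm / 1000.0               # mm of rain -> m of water depth
    for row in depth:
        for j in range(len(row)):
            row[j] += rain
    for (i, j), volume_m3 in sources:         # cells the user clicked
        depth[i][j] += volume_m3 / cell_area_m2
    return depth

# Hypothetical usage: a tiny 3x3 grid, one user-placed source, 2 mm of rain.
grid = [[0.0] * 3 for _ in range(3)]
grid = apply_steering(grid, [((1, 1), 50.0)], rainfall_mm=2.0)
```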
IEEE Computer Graphics and Applications | 2010
Gerwin de Haan; Huib Piguillet; Frits H. Post
Interactive spatial navigation for video surveillance networks can be difficult. This is especially true for live tracking of complex events across many cameras, in which operators must make quick, accurate navigation decisions based on the actual situation. The proposed spatial navigation interface facilitates such video surveillance tasks.
Eurographics | 2006
Gerwin de Haan; Eric J. Griffith; Michal Koutek; Frits H. Post
Hybrid user interfaces (UIs) integrate well-known 2D user interface elements into the 3D virtual environment, providing a familiar and portable interface across a variety of VR systems. However, their usability often suffers from limited accuracy and speed, caused by tracking inaccuracies and a lack of constraints and feedback. To ease these difficulties, large widgets and bulky interface elements must often be used, which, at the same time, limit the size of the 3D workspace and restrict the space where other supplemental 2D information can be displayed. In this paper, we present two developments addressing this problem: supportive user interaction and a new implementation of a hybrid interface. First, we describe a small set of tightly integrated 2D windows we developed with the goal of providing increased flexibility in the UI and reducing UI clutter. Next, we present extensions to our supportive selection technique, IntenSelect. To better cope with a variety of VR and UI tasks, we extended the selection assistance technique to include direct selection, spring-based manipulation, and specialized snapping behavior. Finally, we relate how the effective integration of these two developments eases some of the UI restrictions and produces a more comfortable VR experience.
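A minimal sketch of what spring-based manipulation can look like, assuming a per-axis damped spring integrated with semi-implicit Euler that pulls the manipulated object towards its snapped target; the stiffness and damping constants are illustrative and this is not the IntenSelect extension itself.

```python
def spring_step(pos, vel, target, k=40.0, damping=8.0, dt=1.0 / 60.0):
    """One damped-spring integration step that pulls a manipulated object
    towards its snapped target, giving smooth, error-tolerant manipulation."""
    new_pos, new_vel = [], []
    for p, v, t in zip(pos, vel, target):
        force = k * (t - p) - damping * v    # per-axis spring + damping
        v += force * dt                      # semi-implicit Euler update
        new_pos.append(p + v * dt)
        new_vel.append(v)
    return new_pos, new_vel

# Hypothetical usage: object at the origin pulled towards a snapped widget.
position, velocity = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
for _ in range(60):                          # one second of simulation
    position, velocity = spring_step(position, velocity, [0.5, 1.0, -0.2])
```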
Archive | 2013
Christian Kehl; Gerwin de Haan
Floods are a permanent threat to urban environments and coastal regions. Due to the numerous environmental and climatological factors that cause floods, their prevention and prediction are complicated. Flood protection and prevention plans are assessed by computational models. The related risk-analysis communication demands simulations of accurate inundation models and their interactive visualisation. In our new Dutch Knowledge for Climate project we work closely with industrial partners, with whom we develop a platform that supports this communication. Our research focuses on real-time flow simulations, their interactive visualisation and steering techniques for flooding scenarios. Our goal is an interactive, realistic problem-solving environment for flooding discussions amongst decision makers, water boards, hydrologists and the general public. Most important in this research are sophisticated algorithms that promote this goal. Related work in the field is done on small-scale examples and abstract computational models; we work on large-scale, high-resolution, realistic computations while maintaining interactivity. For this we use aerial terrain LiDAR point clouds of The Netherlands and the most recent, complex Computational Fluid Dynamics (CFD) models. The rendering system will apply a combination of new point cloud compression algorithms and spatial Level-of-Detail data structures. Fast CFD simulations will be achieved by subgridding and parallel processing of non-linear calculation models. Additionally, the integration of various geo-information (e.g. precipitation) is key to educated flood decision-making. In this paper we describe in detail our project goals, our current progress and upcoming related research tracks.
Engineering Interactive Computing Systems | 2009
Gerwin de Haan; Frits H. Post
Complex and dynamic interaction behaviors in applications such as Virtual Reality (VR) systems are difficult to design and develop. Reasons for this include the complexity of and limitations in specification models, their integration with the underlying architecture, and a lack of supporting development tools. In this paper we present our StateStream approach, which uses a dynamic programming language to bridge the gap between the behavioral model descriptions, the underlying VR architecture and customized development tools. Whereas the dynamic language allows full flexibility, the interaction model adds explicit structures for interactive behavior. A dual modeling mechanism is used to capture both discrete and continuous interaction behavior. The models are described and executed in the dynamic language itself, unifying the description of interaction, its execution and the connection with external software components. We highlight the main features of StateStream and illustrate how the tight integration of interaction model and architecture enables a flexible and open-ended development environment. We demonstrate the use of StateStream in a prototype system for studying and adapting complex 3D interaction techniques for VR.
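The sketch below only illustrates the dual modeling idea in plain Python: discrete events drive state transitions, while the active state's per-frame update callback carries the continuous behaviour. The class names and API are hypothetical and are not the StateStream interface.

```python
class State:
    def __init__(self, name, on_update=None):
        self.name = name
        self.on_update = on_update or (lambda dt: None)  # continuous part
        self.transitions = {}                            # event -> next state

class InteractionMachine:
    """Discrete events switch states; the current state's update callback
    runs every frame for continuous behaviour such as dragging."""
    def __init__(self, initial):
        self.state = initial

    def handle(self, event):
        self.state = self.state.transitions.get(event, self.state)

    def tick(self, dt):
        self.state.on_update(dt)

# A minimal drag interaction: 'idle' waits, 'dragging' updates every frame.
idle = State("idle")
dragging = State("dragging", on_update=lambda dt: print(f"drag, dt={dt:.3f}"))
idle.transitions["button_down"] = dragging
dragging.transitions["button_up"] = idle

machine = InteractionMachine(idle)
machine.handle("button_down")
machine.tick(1.0 / 60.0)
machine.handle("button_up")
```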
Computer Graphics Forum | 2007
Gerwin de Haan; René Molenaar; Michal Koutek; Frits H. Post
In projection-based Virtual Reality (VR) systems, typically only one head-tracked user views stereo images rendered from the correct view position. For other users, who are presented with a distorted image that moves with the first user's head motion, it is difficult to correctly view and interact with 3D objects in the virtual environment. In close-range VR systems, such as the Virtual Workbench, distortion effects are especially large because objects are within close range and users are relatively far apart. On these systems, multi-user collaboration proves to be difficult. In this paper, we analyze the problem and describe a novel, easy-to-implement method to prevent and reduce image distortion and its negative effects on close-range interaction task performance. First, our method combines a shared camera model and view distortion compensation. It minimizes the overall distortion for each user, while important user-personal objects such as interaction cursors, rays and controls remain distortion-free. Second, our method retains co-location for interaction techniques to make interaction more consistent. We performed a user experiment on our Virtual Workbench to analyze user performance under distorted view conditions with and without the use of our method. Our findings demonstrate the negative impact of view distortion on task performance and the positive effect our method introduces. This indicates that our method can enhance the multi-user collaboration experience on close-range, projection-based VR systems.
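A minimal sketch of the shared-camera idea, assuming the compromise eye is a weighted average of the two tracked head positions and that user-personal objects are rendered from the owner's own head; the paper's actual compensation method is more involved than this.

```python
def shared_eye(head_a, head_b, weight=0.5):
    """A single compromise eye position between two tracked heads, spreading
    the view distortion over both users instead of loading it onto one."""
    return tuple(weight * a + (1.0 - weight) * b
                 for a, b in zip(head_a, head_b))

def eye_for(obj, own_head, compromise):
    """User-personal items (cursor, ray, controls) are rendered from the
    owner's own head so they stay distortion-free; shared scene geometry
    uses the compromise camera."""
    return own_head if obj.get("personal") else compromise

# Hypothetical usage: two users at a workbench, one personal cursor object.
eye = shared_eye((-0.3, 1.7, 0.5), (0.4, 1.6, 0.6))
cursor_eye = eye_for({"personal": True}, (-0.3, 1.7, 0.5), eye)
```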