Yuen C. Law
RWTH Aachen University
Publications
Featured research published by Yuen C. Law.
Symposium on 3D User Interfaces | 2011
Sebastian Ullrich; Thomas Knott; Yuen C. Law; Oliver Grottke; Torsten W. Kuhlen
In this paper, we present the results of a user study with a bimanual haptic setup. The goal of the experiment was to evaluate whether Guiard's theory of the bimanual frame of reference can be applied to interaction tasks in virtual environments (VE) with haptic rendering. This theory proposes an influence of the non-dominant hand (NDH) on the dominant hand (DH). The experiment was conducted with multiple trials under two conditions: bimanual and unimanual. The interaction task in this scenario was a sequence of pointing, alignment, and docking sub-tasks for the dominant hand. In the bimanual condition, an asynchronous pointing task was added for the non-dominant hand. This additional task was primarily designed to bring the non-dominant hand closer to the other hand and thus enable the creation of a frame of reference. Our results show the potential of this task design extension (with NDH utilization): task completion times are significantly lower in the bimanual condition than in the unimanual one, without a significant impact on overall precision. Furthermore, the bimanual condition shows better mean accuracy over several measures, e.g., lateral displacement and penetration depth. Additionally, subject performance was compared not only across all participants, but also between subgroups: medical vs. non-medical and gamer vs. non-gamer. A post-test questionnaire indicated user preference for the bimanual system over the unimanual one.
Symposium on Spatial User Interaction | 2015
Daniel Zielasko; Sebastian Freitag; Dominik Rausch; Yuen C. Law; Benjamin Weyers; Torsten W. Kuhlen
In contrast to the widespread 6-DOF pointing devices, free-hand user interfaces in Immersive Virtual Environments (IVEs) are non-intrusive. However, for gesture interfaces, the definition of trigger signals is challenging. Mechanical devices, dedicated trigger gestures, and speech recognition are commonly used options, but each comes with its own drawbacks. In this paper, we present an alternative approach that triggers events precisely and with low latency using microphone input. In contrast to speech recognition, the user only blows into the microphone. The audio signature of such blow events can be recognized quickly and reliably. The results of a user study show that the proposed method allows users to successfully complete a standard selection task and performs better than expected against a standard interaction device, the Flystick.
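The blow-trigger idea can be sketched as a simple short-time energy detector over small audio frames, which keeps latency low. This is only an illustration of the general technique, not the authors' implementation; the frame contents and threshold below are assumed values:

```python
import math

def is_blow_event(samples, threshold=0.2):
    """Detect a blow event in one audio frame (floats in [-1, 1]).

    Blowing into the microphone produces a burst of broadband energy,
    so a short-time RMS test over a small frame triggers quickly.
    The threshold is hypothetical; a real system would calibrate it.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return rms > threshold

# A loud, noisy frame triggers; near-silence does not.
loud = [0.5, -0.6, 0.55, -0.5] * 64
quiet = [0.01, -0.02, 0.015, -0.01] * 64
```

Processing fixed-size frames (here 256 samples) rather than whole utterances is what distinguishes this from speech recognition: no classification model is needed, only an energy test.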
Symposium on 3D User Interfaces | 2015
Daniel Zielasko; Dominik Rausch; Yuen C. Law; Thomas Knott; Sebastian Pick; Sven Porsche; Joachim Herber; Johannes Hummel; Torsten W. Kuhlen
Making music by blowing on bottles is fun but challenging. We introduce a novel 3D user interface to play songs on virtual bottles. For this purpose, the user blows into a microphone; the stream of air is recreated in the virtual environment and redirected to the virtual bottles she points at with her fingers. This is easy to learn and subsequently opens up opportunities for quickly switching between bottles and playing groups of them together to form complex melodies. Furthermore, our interface enables customization of the virtual environment by moving bottles or changing their type or filling level.
International Conference on Human Haptic Sensing and Touch Enabled Computer Applications | 2012
Thomas Knott; Yuen C. Law; Torsten W. Kuhlen
In this paper we present a haptic rendering algorithm for simulating the interaction of two independently controlled rigid objects with each other and with a rigid environment. Our penalty-based approach relies on a linearization model of the occurring forces and employs an iterative trust-region optimization method to compute object positions and orientations. Here, the combination of a per-step passivity condition and an adaptively controlled maximal object displacement achieves stable and transparent rendering both in free space and in contact situations.
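At its core, a penalty-based approach responds to interpenetration with a restoring force proportional to penetration depth. The sketch below shows only this basic idea, not the paper's linearization or trust-region scheme; the stiffness value is an assumed placeholder:

```python
def penalty_force(penetration_depth, normal, stiffness=500.0):
    """Linear penalty force for a single rigid-body contact.

    penetration_depth: how far the object penetrates the surface (>= 0).
    normal: unit contact normal (x, y, z), pointing out of the surface.
    stiffness: spring constant in N/m (a hypothetical value; too-high
    values cause the instability that passivity conditions guard against).
    """
    if penetration_depth <= 0.0:
        return (0.0, 0.0, 0.0)  # no contact, no force
    magnitude = stiffness * penetration_depth
    return tuple(magnitude * n for n in normal)
```

In the paper's setting this simple spring response is not applied directly; positions and orientations are instead found by optimization under a per-step passivity condition, precisely because naive penalty forces can inject energy and destabilize the haptic loop.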
Eurographics | 2015
Yuen C. Law; Thomas Knott; Sebastian Pick; Benjamin Weyers; Torsten W. Kuhlen
When learning ultrasound (US) imaging, trainees must learn how to recognize structures, interpret textures and shapes, and simultaneously register the 2D ultrasound images to their 3D anatomical mental models. Alleviating the cognitive load imposed by these tasks should free up cognitive resources and thereby improve the learning process. We argue that the cognitive load required to mentally rotate the models to match the images is too high and therefore negatively impacts the learning process. We present a 3D visualization tool that allows the user to naturally move a 2D slice and navigate around a 3D anatomical model. The slice is displayed in place to facilitate its registration within its 3D context. Two duplicates are also shown outside the model: the first is a simple rendered image showing the outlines of the structures, and the second is a simulated ultrasound image. Haptic cues are also provided to help users maneuver around the 3D model in the virtual space. With the additional display of annotations and information on the most important structures, the tool is expected to complement the didactic material used in the training of ultrasound procedures.
Proceedings of SPIE | 2014
Yuen C. Law; Daniel Tenbrinck; Xiaoyi Jiang; Torsten W. Kuhlen
Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, render the majority of standard image-analysis methods suboptimal. Furthermore, validation of adapted computer vision methods proves difficult due to missing ground-truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with a realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of their realism and accuracy.
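Fully developed speckle is commonly modeled with a Rayleigh amplitude distribution. The sketch below generates such a texture patch via inverse-CDF sampling; it illustrates the general technique only, not the phantom's specific per-tissue speckle models:

```python
import math
import random

def rayleigh_speckle(width, height, sigma=1.0, seed=42):
    """Generate a patch of fully developed speckle amplitudes.

    Fully developed speckle amplitude follows a Rayleigh distribution
    with scale sigma, sampled here via the inverse CDF,
    sigma * sqrt(-2 ln(1 - U)). Different tissue types (e.g., muscle
    vs. fat) would call for different, possibly non-Rayleigh, models.
    """
    rng = random.Random(seed)
    return [[sigma * math.sqrt(-2.0 * math.log(1.0 - rng.random()))
             for _ in range(width)]
            for _ in range(height)]
```

Because the distribution has a known mean and variance for a given sigma, statistics of simulated textures can be compared against the model's closed-form moments, which is the kind of quantitative realism check the abstract describes.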
VCBM | 2012
Yuen C. Law; Thomas Knott; Bernd Hentschel; Torsten W. Kuhlen
Brightness-modulation (B-Mode) ultrasound (US) images are used to visualize internal body structures during diagnostic and invasive procedures, such as needle insertion for regional anesthesia. Because of limited patient availability and the health risks of invasive procedures, training opportunities are often scarce, so medical training simulators become a viable solution to the problem. Simulating ultrasound images for medical training requires not only an acceptable level of realism but also interactive rendering times in order to be effective. To address these challenges, we present a generative method for simulating B-Mode ultrasound images that uses surface representations of the body structures and geometrical acoustics to model sound propagation and its interaction with soft tissue. Furthermore, physical models for backscattered, reflected, and transmitted energies, as well as for the beam profile, are used to improve realism. With the proposed methodology we are able to simulate, in real time, plausible view- and depth-dependent visual artifacts that are characteristic of B-Mode US images, achieving both realism and interactivity.
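The reflected and transmitted energies rest on standard acoustics: at an interface between media with acoustic impedances Z1 and Z2, the reflected intensity fraction at normal incidence is R = ((Z2 - Z1) / (Z2 + Z1))^2, and the remainder is transmitted. A minimal sketch (the impedance values in the example are textbook approximations, not parameters from this work):

```python
def intensity_coefficients(z1, z2):
    """Reflection/transmission intensity coefficients at normal incidence.

    z1, z2: acoustic impedances of the two media (e.g., in MRayl).
    Returns (R, T) with R + T == 1 (energy conservation).
    """
    r = ((z2 - z1) / (z2 + z1)) ** 2
    return r, 1.0 - r

# Soft tissue (~1.63 MRayl) to bone (~7.8 MRayl): strong reflection,
# which is why bone produces the characteristic acoustic shadow.
R, T = intensity_coefficients(1.63, 7.8)
```

Large impedance jumps (tissue to bone, tissue to air) reflect most of the energy, leaving little to propagate further; that depth-dependent energy loss is exactly what produces the shadowing artifacts the method reproduces.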
Engineering Interactive Computing Systems | 2017
Yuen C. Law; Wilken Wehrt; Sabine Sonnentag; Benjamin Weyers
Habitual behavior at work can be beneficial because it allows people to perform their tasks more automatically and with less cognitive load. On the downside, habitual work behavior can foster inefficient work strategies, as it is executed with lower levels of awareness and relatively rigidly. We propose the development of an interactive system that helps users develop beneficial habits and change unwanted ones to reach a target workflow. To achieve this goal, we must first collect specific information about users' current and target habits. We do this with the help of BPMN diagrams, which are then transformed into reference nets for analysis. In this form, the current and target diagrams are compared to find similarities and differences, which guide the interactive system. The workflows, described as reference nets, can then be easily embedded into the proposed interactive system, since this description is formally defined and executable.
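At its simplest, comparing a current and a target workflow for similarities and differences can be viewed as a set comparison of their activities. This is a toy stand-in only; the paper's actual comparison operates on reference nets, and the example activities are hypothetical:

```python
def compare_workflows(current, target):
    """Compare two workflows given as lists of activity names.

    Returns activities shared by both, those only in the current habit,
    and those only in the target workflow. A deliberately simplified
    stand-in for the reference-net comparison described above, which
    also accounts for ordering and control flow.
    """
    cur, tgt = set(current), set(target)
    return {
        "shared": sorted(cur & tgt),
        "only_current": sorted(cur - tgt),
        "only_target": sorted(tgt - cur),
    }

# Hypothetical email-handling habits:
diff = compare_workflows(
    ["open inbox", "read all", "reply immediately"],
    ["open inbox", "triage", "batch replies"],
)
```

The "only_current" activities are candidates for habits to unlearn, while "only_target" activities are behaviors the interactive system would prompt the user to build.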
Archive | 2017
Yuen C. Law; Stéphane Cotin; Torsten W. Kuhlen
Ultrasound (US) imaging is a low-cost, non-invasive, radiation-free technique, often preferred as a means of exploring the body's internal structures during diagnostic procedures. Furthermore, it allows physicians to view these structures interactively, making it ideal for guiding invasive procedures such as regional anesthesia (RA) and biopsies, where a needle must be inserted into the patient's body and guided to a specific point. However, obtaining enough training for these procedures is not easy due to the low availability of patients with whom to practice the necessary skills; moreover, patient safety and comfort are major concerns. Other traditional training methods include phantoms made of gels or meat and practicing on fellow trainees. Ideally, a training phantom for US procedures would offer a variety of scenarios, repeatability, and anatomical correctness. In this sense, Virtual Reality (VR) can fulfill these requirements and play an important role in addressing the current challenges in US training. This work presents a US simulation framework that aims to improve the current training situation by providing the tools needed to build the aforementioned VR-based training phantoms. In designing the simulation methodology, attention was focused on reproducing the characteristic features that identify a US image without overly compromising the output frame rate required for real-time interaction, which is essential for interactive applications and training purposes. It was also important to provide flexible software interfaces to support the creation of multiple training scenarios, ranging from changing the transducer properties, through creating new patient anatomies, to developing different training tools. The major contribution of this work is the US simulation method and the corresponding software framework.
The simulation method emulates the functionality of real ultrasound machines to produce the ultrasonic beam, detect echoes, and construct the final images. The framework facilitates rapid integration of US simulation in a variety of use cases thanks to its flexible design. An additional contribution, a prerequisite for the simulation, is the review and analysis of the requirements for soft-tissue modeling. This work resulted not only in a list of guidelines for improving existing anatomical models, but also in a set of tested acoustic properties for the most common tissue types. Furthermore, a description, an implementation, and the corresponding results of the validation and verification process are provided; the inherent complexity of this process, caused by the lack of ground-truth data against which to compare the results, is discussed in depth. Finally, the work presents two use-case examples in which the simulation framework was used to integrate synthetic US images into different training tools.
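One acoustic property that any per-tissue parameter set must cover is attenuation, conventionally modeled as exponential (dB-linear) decay with depth and frequency; roughly 0.5 dB/(cm·MHz) is a common textbook figure for average soft tissue. A sketch under that assumption (the coefficient is illustrative, not taken from this work's tissue tables):

```python
def attenuated_intensity(i0, depth_cm, freq_mhz, alpha=0.5):
    """One-way attenuated intensity of an ultrasound beam.

    i0: intensity at the transducer face.
    alpha: attenuation coefficient in dB/(cm*MHz); 0.5 is a common
    textbook value for average soft tissue, assumed here for illustration.
    """
    loss_db = alpha * freq_mhz * depth_cm  # total loss in dB
    return i0 * 10.0 ** (-loss_db / 10.0)
```

The frequency dependence is why higher-frequency transducers give better resolution but shallower penetration, a trade-off any simulated transducer configuration has to respect.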
EuroRv^3 '16 Proceedings of the EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization | 2016
Yuen C. Law; Benjamin Weyers; Torsten W. Kuhlen
In the simulation of multi-component systems, we often face a lack of ground-truth data, which makes validating our simulation methods and models difficult. In this work we present a guideline for designing validation methodologies applicable to multi-component simulations that lack ground-truth data. Additionally, we present an example applied to an ultrasound image simulation for medical training and give an overview of the considerations made and the results of each validation method. With these guidelines we expect to obtain more comparable and reproducible validation results from which similar work can benefit.