Manfred Lau
Lancaster University
Publications
Featured research published by Manfred Lau.
International Conference on Robotics and Automation | 2005
Joel E. Chestnutt; Manfred Lau; German K. M. Cheung; James J. Kuffner; Jessica K. Hodgins; Takeo Kanade
Despite the recent achievements in stable dynamic walking for many humanoid robots, relatively little navigation autonomy has been achieved. In particular, the ability to autonomously select foot placement positions to avoid obstacles while walking is an important step towards improved navigation autonomy for humanoids. We present a footstep planner for the Honda ASIMO humanoid robot that plans a sequence of footstep positions to navigate toward a goal location while avoiding obstacles. The possible future foot placement positions are dependent on the current state of the robot. Given a finite set of state-dependent actions, we use an A* search to compute optimal sequences of footstep locations up to a time-limited planning horizon. We present experimental results demonstrating the robot navigating through both static and dynamic known environments that include obstacles moving on predictable trajectories.
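The abstract describes an A* search over a finite, state-dependent action set with a time-limited planning horizon. The sketch below illustrates that search structure in Python; the action set, step costs, heuristic, and collision check are hypothetical placeholders, not the actual ASIMO planner.

```python
# A minimal, hedged sketch of time-horizon-limited A* footstep search.
# Actions, costs, and the collision test are invented for illustration.
import heapq
import math

# Hypothetical state: (x, y, heading); each action displaces and turns the body.
ACTIONS = [(0.25, 0.0, 0.0), (0.20, 0.10, 0.3), (0.20, -0.10, -0.3), (0.05, 0.0, 0.0)]

def footstep_plan(start, goal, collides, horizon=20):
    """Return a footstep sequence from start toward goal, or the best partial plan."""
    def h(s):  # admissible heuristic: straight-line distance to the goal
        return math.hypot(goal[0] - s[0], goal[1] - s[1])

    frontier = [(h(start), 0.0, start, [])]
    visited = set()
    best = (h(start), [])
    while frontier:
        f, g, s, path = heapq.heappop(frontier)
        if len(path) >= horizon:               # time-limited planning horizon
            best = min(best, (h(s), path))
            continue
        key = (round(s[0], 2), round(s[1], 2), round(s[2], 1))
        if key in visited:
            continue
        visited.add(key)
        if h(s) < 0.1:                         # close enough to the goal
            return path
        best = min(best, (h(s), path))
        for dx, dy, dth in ACTIONS:            # simplified state-dependent action set
            x = s[0] + dx * math.cos(s[2]) - dy * math.sin(s[2])
            y = s[1] + dx * math.sin(s[2]) + dy * math.cos(s[2])
            nxt = (x, y, s[2] + dth)
            if collides(nxt):                  # skip footholds that overlap obstacles
                continue
            step_cost = math.hypot(dx, dy) + 0.1
            heapq.heappush(frontier, (g + step_cost + h(nxt), g + step_cost, nxt, path + [nxt]))
    return best[1]

# Usage: plan toward a goal two meters ahead in an obstacle-free toy world.
path = footstep_plan((0.0, 0.0, 0.0), (2.0, 0.0), collides=lambda s: False)
```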
Symposium on Computer Animation | 2005
Manfred Lau; James J. Kuffner
This paper explores a behavior planning approach to automatically generate realistic motions for animated characters. Motion clips are abstracted as high-level behaviors and associated with a behavior finite-state machine (FSM) that defines the movement capabilities of a virtual character. During runtime, motion is generated automatically by a planning algorithm that performs a global search of the FSM and computes a sequence of behaviors for the character to reach a user-designated goal position. Our technique can generate interesting animations using a relatively small amount of data, making it attractive for resource-limited game platforms. It also scales efficiently to large motion databases, because the search performance is primarily dependent on the complexity of the behavior FSM rather than on the amount of data. Heuristic cost functions that the planner uses to evaluate candidate motions provide a flexible framework from which an animator can control character preferences for certain types of behavior. We show results of synthesized animations involving up to one hundred human and animal characters planning simultaneously in both static and dynamic environments.
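As a rough illustration of the planning idea, the sketch below runs a best-first search over a toy behavior FSM: each behavior advances the character and constrains which behaviors may follow, and the search returns a behavior sequence that reaches a goal. The behavior names, displacements, and costs are invented for the example and are not the paper's data.

```python
# Hedged sketch of global search over a behavior finite-state machine (FSM).
import heapq

# Hypothetical FSM: behavior -> (forward displacement in meters, allowed successors)
FSM = {
    "walk": (1.0, ["walk", "jog", "stop"]),
    "jog":  (2.0, ["jog", "walk"]),
    "stop": (0.0, ["walk"]),
}

def plan_behaviors(start_pos, goal_pos, start_behavior="walk", max_steps=50):
    """Search the FSM for a behavior sequence that moves the character to the goal (1D toy)."""
    def h(pos):
        return abs(goal_pos - pos)

    frontier = [(h(start_pos), 0.0, start_pos, start_behavior, [])]
    seen = set()
    while frontier:
        f, g, pos, beh, seq = heapq.heappop(frontier)
        if h(pos) < 0.5:
            return seq
        key = (round(pos, 1), beh)
        if key in seen or len(seq) > max_steps:
            continue
        seen.add(key)
        for nxt in FSM[beh][1]:                  # only transitions the FSM allows
            npos = pos + FSM[nxt][0]
            cost = 1.0                           # heuristic cost; could encode animator preferences
            heapq.heappush(frontier, (g + cost + h(npos), g + cost, npos, nxt, seq + [nxt]))
    return []

# Usage: find a behavior sequence that covers five meters.
print(plan_behaviors(0.0, 5.0))
```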
Tangible and Embedded Interaction | 2011
Greg Saul; Manfred Lau; Jun Mitani; Takeo Igarashi
SketchChair is an application that allows novice users to control the entire process of designing and building their own chairs. Chairs are designed using a simple 2D sketch-based interface and design validation tools, and are then fabricated from sheet materials cut by a laser cutter or CNC milling machine. This paper presents the concepts and implementation details of SketchChair, along with miniature and full-sized chairs designed using the application. We conclude with results and insights from a workshop in which novice users designed their own model chairs.
Symposium on Computer Animation | 2006
Manfred Lau; James J. Kuffner
We present a novel approach for interactively synthesizing motions for characters navigating in complex environments. We focus on the runtime efficiency for motion generation, thereby enabling the interactive animation of a large number of characters simultaneously. The key idea is to precompute search trees of motion clips that can be applied to arbitrary environments. Given a navigation goal relative to a current body position, the best available solution paths and motion sequences can be efficiently extracted during runtime through a series of table lookups. For distant start and goal positions, we first use a fast coarse-level planner to generate a rough path of intermediate sub-goals to guide each iteration of the runtime lookup phase. We demonstrate the efficiency of our technique across a range of examples in an interactive application with multiple autonomous characters navigating in dynamic environments. Each character responds in real-time to arbitrary user changes to the environment obstacles or navigation goals. The runtime phase is more than two orders of magnitude faster than existing planning methods or traditional motion synthesis techniques. Our technique is not only useful for autonomous motion generation in games, virtual reality, and interactive simulations, but also for animating massive crowds of characters offline for special effects in movies.
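A minimal sketch of the precompute-then-look-up idea, under assumed toy motion clips: offline, short clip sequences are expanded from the origin and indexed by the cell their end position falls in; at runtime, a relative goal is answered by a single table lookup. The clips and grid resolution are invented for illustration.

```python
# Hedged sketch: offline expansion of clip sequences, runtime table lookup.
import itertools

# Hypothetical clips: name -> (forward, lateral) displacement per clip.
CLIPS = {"step_fwd": (0.5, 0.0), "turn_left": (0.3, 0.3), "turn_right": (0.3, -0.3)}

def precompute_tree(depth=4, cell=0.25):
    """Offline: enumerate clip sequences and index them by the cell they end in."""
    table = {}
    for seq in itertools.product(CLIPS, repeat=depth):
        x = y = 0.0
        for clip in seq:
            dx, dy = CLIPS[clip]
            x, y = x + dx, y + dy
        key = (round(x / cell), round(y / cell))
        table.setdefault(key, list(seq))   # keep one sequence per reachable cell
    return table

def lookup(table, goal, cell=0.25):
    """Runtime: constant-time retrieval of a motion sequence for a goal relative to the body."""
    key = (round(goal[0] / cell), round(goal[1] / cell))
    return table.get(key)

# Usage: build once offline, then answer many runtime queries cheaply.
table = precompute_tree()
print(lookup(table, (1.6, 0.6)))
```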
International Conference on Computer Graphics and Interactive Techniques | 2011
Manfred Lau; Akira Ohgawara; Jun Mitani; Takeo Igarashi
Although there is an abundance of 3D models available, most of them exist only in virtual simulation and are not immediately usable as physical objects in the real world. We solve the problem of taking as input a 3D model of a man-made object, and automatically generating the parts and connectors needed to build the corresponding physical object. We focus on furniture models, and we define formal grammars for IKEA cabinets and tables. We perform lexical analysis to identify the primitive parts of the 3D model. Structural analysis then gives structural information to these parts, and generates the connectors (e.g., nails, screws) needed to attach the parts together. We demonstrate our approach with arbitrary 3D models of cabinets and tables available online.
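The following toy sketch illustrates the two stages named above: a lexical pass that labels primitive parts and a structural pass that emits connectors between touching parts. The labeling rules, thresholds, and fastener choices are invented for illustration; the actual system uses formal grammars for IKEA cabinets and tables.

```python
# Hedged, toy illustration of lexical and structural analysis for fabrication.

def lexical_analysis(boxes):
    """Label each axis-aligned box (w, h, d) as a primitive furniture part."""
    parts = []
    for i, (w, h, d) in enumerate(boxes):
        if h > 5 * min(w, d):
            kind = "leg"                     # tall and thin
        elif min(w, h, d) < 0.05 * max(w, h, d):
            kind = "panel"                   # one very thin dimension
        else:
            kind = "block"
        parts.append({"id": i, "kind": kind, "size": (w, h, d)})
    return parts

def structural_analysis(parts, contacts):
    """Emit a connector (screw or nail) for every pair of parts in contact."""
    connectors = []
    for a, b in contacts:
        kinds = {parts[a]["kind"], parts[b]["kind"]}
        fastener = "screw" if "leg" in kinds else "nail"
        connectors.append({"between": (a, b), "type": fastener})
    return connectors

# Usage on a toy table: four legs and one top panel, each leg touching the top.
boxes = [(0.05, 0.7, 0.05)] * 4 + [(1.2, 0.03, 0.8)]
parts = lexical_analysis(boxes)
print(structural_analysis(parts, [(i, 4) for i in range(4)]))
```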
Human Factors in Computing Systems | 2014
Christian Weichel; Manfred Lau; David Kim; Nicolas Villar; Hans-Werner Gellersen
Personal fabrication machines, such as 3D printers and laser cutters, are becoming increasingly ubiquitous. However, designing objects for fabrication still requires 3D modeling skills, thereby rendering such technologies inaccessible to a wide user-group. In this paper, we introduce MixFab, a mixed-reality environment for personal fabrication that lowers the barrier for users to engage in personal fabrication. Users design objects in an immersive augmented reality environment, interact with virtual objects in a direct gestural manner and can introduce existing physical objects effortlessly into their designs. We describe the design and implementation of MixFab, a user-defined gesture study that informed this design, show artifacts designed with the system and describe a user study evaluating the system's prototype.
International Conference on Computer Graphics and Interactive Techniques | 2009
Manfred Lau; Ziv Bar-Joseph; James J. Kuffner
We present a novel method to model and synthesize variation in motion data. Given a few examples of a particular type of motion as input, we learn a generative model that is able to synthesize a family of spatial and temporal variants that are statistically similar to the input examples. The new variants retain the features of the original examples, but are not exact copies of them. We learn a Dynamic Bayesian Network model from the input examples that enables us to capture properties of conditional independence in the data, and model it using a multivariate probability distribution. We present results for a variety of human motions and 2D handwritten characters. We perform a user study to show that our new variants are less repetitive than the typical game and crowd-simulation approach of replaying a small number of existing motion clips. Our technique can synthesize new variants efficiently and has a small memory requirement.
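As a hedged stand-in for the generative-model idea, the sketch below fits a first-order linear-Gaussian model (one of the simplest dynamic Bayesian networks) to a few example trajectories and samples noisy rollouts as variants. It assumes numpy and uses synthetic sine-wave "motions"; the paper's DBN and motion data are richer.

```python
# Minimal sketch: learn simple temporal dynamics from examples, sample new variants.
import numpy as np

def fit_linear_gaussian(examples):
    """Fit x_{t+1} ~= A x_t + b with Gaussian residual noise, pooled over examples."""
    X = np.vstack([ex[:-1] for ex in examples])      # states at time t
    Y = np.vstack([ex[1:] for ex in examples])       # states at time t+1
    Xh = np.hstack([X, np.ones((len(X), 1))])        # append bias column
    W, *_ = np.linalg.lstsq(Xh, Y, rcond=None)       # least-squares dynamics
    resid = Y - Xh @ W
    return W, resid.std(axis=0)

def sample_variant(W, noise_std, x0, length):
    """Roll the learned dynamics forward, injecting noise to create a new variant."""
    rng = np.random.default_rng()
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(length - 1):
        xh = np.append(xs[-1], 1.0)
        xs.append(xh @ W + rng.normal(0.0, noise_std))
    return np.vstack(xs)

# Usage: two noisy sine-like trajectories stand in for motion clips.
t = np.linspace(0, 2 * np.pi, 60)
examples = [np.column_stack([np.sin(t + p), np.cos(t + p)]) for p in (0.0, 0.1)]
W, noise = fit_linear_gaussian(examples)
variant = sample_variant(W, noise, examples[0][0], 60)
```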
Sketch-Based Interfaces and Modeling | 2010
Manfred Lau; Greg Saul; Jun Mitani; Takeo Igarashi
The products that we use everyday are typically designed and produced for mass consumption. However, it is difficult for such products to satisfy the needs of individual users. We present a framework that allows the end-user to participate in the entire process of designing their own objects, from the initial concept stage to the production of a new real-world object that fits well with the existing complementary objects. We advocate using a single photo as a rough guide for the user to sketch a new customized object that does not exist in the photo. Our system provides a 2D interface for sketching the outline of the new object and annotating certain geometric properties of it directly on the photo. We introduce a Modified Lipson optimization method for generating the 3D shape. We design a variety of real-world everyday objects that are complementary to the existing objects and environment in the photo. We show that novice users can learn and create new objects with our system within minutes.
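The paper's 3D shape generation relies on a modified Lipson-style optimization over the sketched outline and its geometric annotations; that method is not reproduced here. As a purely illustrative stand-in for the step's input and output, the toy sketch below simply extrudes a sketched 2D outline by an annotated depth.

```python
# Toy, clearly hypothetical stand-in for turning an annotated 2D sketch into 3D geometry.

def extrude_outline(outline_2d, depth):
    """Build vertices and quad faces for a prism from a closed 2D outline."""
    n = len(outline_2d)
    front = [(x, y, 0.0) for x, y in outline_2d]     # sketched outline at z = 0
    back = [(x, y, depth) for x, y in outline_2d]    # offset copy at the annotated depth
    vertices = front + back
    faces = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, faces

# Usage: a sketched rectangle outline annotated with a 0.3 m depth.
verts, faces = extrude_outline([(0, 0), (1, 0), (1, 0.6), (0, 0.6)], 0.3)
```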
ACM Transactions on Graphics | 2009
Manfred Lau; Jinxiang Chai; Ying-Qing Xu; Heung-Yeung Shum
This article presents an intuitive and easy-to-use system for interactively posing 3D facial expressions. The user can model and edit facial expressions by drawing freeform strokes, by specifying distances between facial points, by incrementally editing curves on the face, or by directly dragging facial points in 2D screen space. Designing such an interface for 3D facial modeling and editing is challenging because many unnatural facial expressions might be consistent with the user's input. We formulate the problem in a maximum a posteriori framework by combining the user's input with priors embedded in a large set of facial expression data. Maximizing the posterior allows us to generate an optimal and natural facial expression that achieves the goal specified by the user. We evaluate the performance of our system by conducting a thorough comparison of our method with alternative facial modeling techniques. To demonstrate the usability of our system, we also perform a user study of our system and compare with state-of-the-art facial expression modeling software (Poser 7).
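A hedged sketch of the maximum a posteriori formulation the abstract describes, assuming a Gaussian constraint likelihood and an energy-based prior (illustrative forms, not the paper's exact terms): the posed expression x maximizes the posterior given the user's constraints c, which is equivalent to minimizing a constraint-fit term plus a prior energy learned from the expression data.

```latex
x^{*} \;=\; \arg\max_{x}\, p(x \mid c)
      \;=\; \arg\max_{x}\, p(c \mid x)\, p(x)
      \;\equiv\; \arg\min_{x}\, \| f(x) - c \|^{2} \;+\; \lambda\, E_{\mathrm{prior}}(x)
```

Here f(x) maps an expression x to the quantities the user constrained (stroke positions, point distances, dragged points), and the weight lambda trades off constraint satisfaction against naturalness.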
Tangible and Embedded Interaction | 2013
Christian Weichel; Manfred Lau; Hans Gellersen
This paper explores the problem of designing enclosures (or physical cases) that are needed for prototyping electronic devices. We present a novel interface that uses electronic components as handles for designing the 3D shape of the enclosure. We use the .NET Gadgeteer platform as a case study of this problem, and implemented a proof-of-concept system for designing enclosures for Gadgeteer components. We show examples of enclosures designed and fabricated with our system.