Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shihui Guo is active.

Publication


Featured research published by Shihui Guo.


The Visual Computer | 2015

Adaptive motion synthesis for virtual characters: a survey

Shihui Guo; Richard Southern; Jian Chang; David Greer; Jian J. Zhang

Character motion synthesis is the process of artificially generating natural motion for a virtual character. In film, motion synthesis can be used to generate difficult or dangerous stunts without putting performers at risk. In computer games and virtual reality, motion synthesis enriches the player or participant experience by allowing for unscripted and emergent character behavior. In each of these applications, the ability to adapt to changes in environmental conditions or to the character in a smooth and natural manner, while still conforming to user-specified constraints, determines the utility of a method to animators and industry practitioners. This focus on adaptation capability distinguishes our survey from other reviews which focus on general technology developments. Three main methodologies (example-based, simulation-based and hybrid) are summarised and evaluated using compound metrics: adaptivity, naturalness and controllability. By assessing existing techniques according to this classification we are able to determine how well a method corresponds to users’ expectations. We discuss optimization strategies commonly used in the motion synthesis literature, and also contemporary perspectives from biology which give us a deeper insight into this problem. We also present observations and reflections from industry practitioners to reveal the operational constraints of character motion synthesis techniques. Our discussion and review present a unique insight into the subject and provide essential guidance when selecting appropriate methods to design an adaptive motion controller.


IEEE Transactions on Systems, Man, and Cybernetics | 2013

Motion Adaptation With Motor Invariant Theory

Fangde Liu; Richard Southern; Shihui Guo; Xiaosong Yang; Jian J. Zhang

Bipedal walking is not fully understood. Motion generated from methods employed in robotics literature is stiff and is not nearly as energy efficient as what we observe in nature. In this paper, we propose validity conditions for motion adaptation from biological principles in terms of the topology of the dynamic system. This allows us to provide a closed-form solution to the problem of motion adaptation to environmental perturbations. We define both global and local controllers that improve structural and state stability, respectively. Global control is achieved by coupling the dynamic system with a neural oscillator, which preserves the periodic structure of the motion primitive and ensures stability by entrainment. A group action derived from Lie group symmetry is introduced as a local control that transforms the underlying state space while preserving certain motor invariants. We verify our method by evaluating the stability and energy consumption of a synthetic passive dynamic walker and compare this with motion data of a real walker. We also demonstrate that our method can be applied to a variety of systems.
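
A minimal sketch of the entrainment idea described above, assuming a Hopf oscillator as the neural oscillator (a common choice; the paper's exact oscillator and coupling are not reproduced here): a limit-cycle oscillator coupled to an external rhythm phase-locks to it, so the periodic structure of the motion survives a small frequency perturbation.

```python
# Hedged sketch: a Hopf oscillator entrained to a slightly detuned drive signal.
import numpy as np

def hopf_step(x, y, drive, mu=1.0, omega=2.0 * np.pi, k=2.0, dt=0.001):
    """One Euler step of a Hopf oscillator with coupling strength k to `drive`."""
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y + k * drive   # the drive enters the x equation
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

# Natural frequency is 1 Hz; the drive runs 5% faster. For sufficiently strong
# coupling the oscillator entrains, i.e. it adopts the drive's period instead
# of its own, which is the stability-by-entrainment mechanism described above.
dt, T = 0.001, 20.0
t = np.arange(0.0, T, dt)
drive = np.sin(2.0 * np.pi * 1.05 * t)
x, y = 1.0, 0.0
xs = np.empty_like(t)
for i, d in enumerate(drive):
    x, y = hopf_step(x, y, d, dt=dt)
    xs[i] = x

# Estimate the locked period from upward zero crossings in the second half of the run.
half = len(t) // 2
z = np.where((xs[half:-1] < 0.0) & (xs[half + 1:] >= 0.0))[0]
print(f"locked period ~ {np.diff(z).mean() * dt:.3f} s, drive period = {1 / 1.05:.3f} s")
```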


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2017

Understanding the impact of multimodal interaction using gaze informed mid-air gesture control in 3D virtual objects manipulation

Shujie Deng; Nan Jiang; Jian Chang; Shihui Guo; Jian J. Zhang

Multimodal interactions provide users with more natural ways to manipulate virtual 3D objects than traditional input methods. An emerging approach is gaze modulated pointing, which enables users to perform object selection and manipulation in a virtual space conveniently through a combination of gaze and other interaction techniques (e.g., mid-air gestures). As gaze modulated pointing uses different sensors to track and detect user behaviours, its performance relies on the user's perception of the exact spatial mapping between the virtual space and the physical space. An underexplored issue is that, when the spatial mapping differs from the user's perception, manipulation errors (e.g., out-of-boundary errors, proximity errors) may occur. Therefore, in gaze modulated pointing, as gaze can introduce misalignment of the spatial mapping, it may lead to the user's misperception of the virtual environment and consequently to manipulation errors. This paper provides a clear definition of the problem through a thorough investigation of its causes and specifies the conditions under which it occurs, which is further validated in the experiment. It also proposes three methods (Scaling, Magnet and Dual-gaze) to address the problem and examines them in a comparative study involving 20 participants and 1040 runs. The results show that all three methods improved the manipulation performance with regard to the defined problem, with Magnet and Dual-gaze delivering better performance than Scaling. This finding could be used to inform a more robust multimodal interface design supported by both eye tracking and mid-air gesture control without losing efficiency and stability.
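
The abstract does not detail the three correction methods, so the following is only a hypothetical illustration of a Magnet-style correction: a gaze-derived cursor that lands within a snap radius of an object is pulled towards that object's centre, absorbing small misalignments of the spatial mapping. The function name and parameters are illustrative, not taken from the paper.

```python
# Hypothetical Magnet-style correction for a gaze-modulated cursor.
import numpy as np

def magnet_correct(cursor, object_centres, snap_radius=0.05, strength=0.8):
    """Return a corrected cursor position.

    cursor         : (3,) gaze-modulated cursor position in scene units
    object_centres : (N, 3) centres of selectable objects
    snap_radius    : distance within which the magnet takes effect
    strength       : 0 = no correction, 1 = snap fully onto the centre
    """
    cursor = np.asarray(cursor, dtype=float)
    centres = np.asarray(object_centres, dtype=float)
    dists = np.linalg.norm(centres - cursor, axis=1)
    nearest = np.argmin(dists)
    if dists[nearest] > snap_radius:
        return cursor                      # too far from everything: unchanged
    # Blend towards the nearest centre; proximity errors shrink by `strength`.
    return cursor + strength * (centres[nearest] - cursor)

# Example: a cursor 3 cm off an object centre is pulled to ~0.6 cm off.
centres = np.array([[0.0, 0.0, 0.5], [0.3, 0.1, 0.5]])
print(magnet_correct([0.03, 0.0, 0.5], centres))
```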


Computer Graphics Forum | 2014

Locomotion Skills for Insects with Sample-based Controller

Shihui Guo; Jian Chang; Xiaosong Yang; Wencheng Wang; Jian J. Zhang

Natural-looking insect animation is very difficult to simulate. The fast movement and small scale of insects often challenge standard motion capture techniques. As for manual key-framing or physics-driven methods, a significant amount of time and effort is necessary due to the delicate structure of the insect, which prevents practical applications. In this paper, we address this challenge by presenting a two-level control framework to efficiently automate the modeling and authoring of insects' locomotion. At the top level, we design a Triangle Placement Engine to automatically determine the location and orientation of insects' foot contacts, given the user-defined trajectory and settings, including speed, load, path and terrain. At the low level, we relate the Central Pattern Generator to the triangle profiles with the assistance of a Controller Look-Up Table to rapidly simulate the physically-based movement of insects. With our approach, animators can directly author insects' behavior across a wide locomotion repertoire, including walking along a specified path or on uneven terrain, dynamically adjusting to external perturbations and collectively transporting prey back to the nest.
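
As a rough sketch of the low-level idea, the following models a Central Pattern Generator for a six-legged tripod gait as coupled phase oscillators. This is a generic CPG illustration under assumed coupling and gait parameters, not the paper's controller or its look-up table.

```python
# Hedged sketch: a Kuramoto-style CPG whose attractor is the insect tripod gait.
import numpy as np

LEGS = ["L1", "R1", "L2", "R2", "L3", "R3"]
# Tripod gait: {L1, R2, L3} move together; {R1, L2, R3} move half a cycle later.
TARGET_PHASE = np.array([0.0, np.pi, np.pi, 0.0, 0.0, np.pi])

def cpg_step(phase, freq=4.0, k=8.0, dt=0.002):
    """Advance all leg oscillators by one Euler step.

    Each leg is driven at `freq` Hz and pulled towards its target offset
    relative to every other leg, making the gait pattern a stable attractor.
    """
    dphi = 2.0 * np.pi * freq * np.ones_like(phase)
    for i in range(len(phase)):
        for j in range(len(phase)):
            desired = TARGET_PHASE[i] - TARGET_PHASE[j]
            dphi[i] += k * np.sin(phase[j] - phase[i] + desired)
    return phase + dt * dphi

phase = np.random.uniform(0.0, 2.0 * np.pi, size=6)   # arbitrary initial phases
for _ in range(3000):                                  # ~6 s of simulated time
    phase = cpg_step(phase)

# A leg is in stance when sin(phase) < 0, in swing otherwise (a common convention).
stance = np.sin(phase) < 0.0
print(dict(zip(LEGS, stance)))   # the two tripods end up in opposite states
```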


Computer Graphics International | 2017

Pose selection for animated scenes and a case study of bas-relief generation

Meili Wang; Shihui Guo; Minghong Liao; Dongjian He; Jian Chang; Jian Zhang; Zhiyi Zhang

This paper aims to automate the process of generating a meaningful single still image from a temporal input of scene sequences. The success of our extraction relies on selecting the optimal character poses, which should maximize the information conveyed. We define the information entropy of the still-image candidates as the evaluation criterion. To validate our method and to demonstrate its effectiveness, we generated a relief (as a unique form of art creation) to narrate given temporal action scenes. A user study was conducted to experimentally compare the computer-selected poses with those selected by human participants. The results show that the proposed method can effectively assist the selection of informative character poses.
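
A minimal sketch of entropy-based pose selection, assuming the information measure is the Shannon entropy of each candidate frame's intensity histogram; this is a stand-in, as the paper's entropy is defined over the still-image candidates and may differ in detail.

```python
# Hedged sketch: score rendered pose candidates by image-intensity entropy.
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy (in bits) of an image's intensity distribution."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_pose(rendered_frames):
    """Pick the index of the most informative frame from a (T, H, W) array."""
    scores = [image_entropy(f) for f in rendered_frames]
    return int(np.argmax(scores)), scores

# Toy example: 10 random "renderings"; a real pipeline would render each
# candidate character pose with the same camera before scoring.
frames = np.random.rand(10, 120, 160)
best, scores = select_pose(frames)
print(f"selected frame {best} with entropy {scores[best]:.2f} bits")
```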


IETE Technical Review | 2013

Saliency-based relief generation

Meili Wang; Shihui Guo; Hongming Zhang; Dongjian He; Jian Chang; Jian J. Zhang

Relief is a special art form that differs from painting and other round sculpture, and is traditionally created by laborious hand carving. Existing methods for digital relief generation focus on direct geometric compression, which transforms a three-dimensional (3D) mesh into a detail-preserving surface with a shallow depth, indicating the presence of 3D figures. We propose to add saliency information into digital relief generation. Novel saliency extraction methods are introduced to preserve relief features, and then a non-linear boosting of details is adopted to generate the final relief models. This work seamlessly combines visual perception and geometrical processing.
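
A minimal sketch of the saliency-guided boosting idea, assuming a height map and a saliency map in [0, 1] are already available; the detail separation, boosting curve, and parameters below are illustrative rather than the paper's operators.

```python
# Hedged sketch: compress a height field into a shallow relief and boost
# details more strongly where saliency is high.
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_relief(height, saliency, depth_ratio=0.05, sigma=3.0,
                    base_boost=1.0, saliency_boost=3.0, gamma=0.7):
    """Compress `height` into a shallow relief and boost salient details."""
    base = gaussian_filter(height, sigma)          # large-scale shape
    detail = height - base                         # fine detail layer
    # Non-linear boost: compress detail magnitude with a gamma curve, then
    # amplify it more where saliency is high.
    boosted = np.sign(detail) * np.abs(detail) ** gamma
    gain = base_boost + saliency_boost * saliency
    relief = base + gain * boosted
    # Flatten the whole result into a shallow depth range.
    relief -= relief.min()
    relief *= depth_ratio * (height.max() - height.min()) / max(relief.max(), 1e-8)
    return relief

# Toy usage with random data; real inputs would come from a rendered 3D mesh.
h = np.random.rand(128, 128)
s = gaussian_filter(np.random.rand(128, 128), 8)
s = (s - s.min()) / (s.max() - s.min())
print(saliency_relief(h, s).shape)
```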


Vehicle System Dynamics | 2017

Data-driven train set crash dynamics simulation

Zhao Tang; Yunrui Zhu; Yinyu Nie; Shihui Guo; Fengjia Liu; Jian Chang; Jian J. Zhang

Traditional finite element (FE) methods are computationally expensive for simulating train crashes. The high computational cost limits their direct application in investigating the dynamic behaviour of an entire train set for crashworthiness design and structural optimisation. On the contrary, multi-body modelling is widely used because of its low computational cost, with a trade-off in accuracy. In this study, a data-driven train crash modelling method is proposed to improve the performance of a multi-body dynamics simulation of a train set crash without increasing the computational burden. This is achieved by the parallel random forest algorithm, a machine learning approach that extracts useful patterns from force-displacement curves and predicts a force-displacement relation for a given collision condition from a collection of offline FE simulation data on various collision conditions, namely different crash velocities in our analysis. Using the FE simulation results as a benchmark, we compared our method with traditional multi-body modelling methods; the results show that our data-driven method improves the accuracy over traditional multi-body models in train crash simulation and runs at the same level of efficiency.
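
A minimal sketch of the data-driven step, using scikit-learn's RandomForestRegressor as a stand-in for the parallel random forest and synthetic curves in place of the offline FE simulation data.

```python
# Hedged sketch: learn force as a function of (crash velocity, displacement)
# from offline force-displacement curves, then query an unseen velocity.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def fake_fe_curve(velocity, n=200):
    """Synthetic stand-in for one offline FE simulation at a given crash velocity."""
    disp = np.linspace(0.0, 1.0, n)                      # m
    force = velocity * (1.0 - np.exp(-5.0 * disp)) + 0.2 * np.sin(20.0 * disp)
    return disp, force + rng.normal(0.0, 0.02, n)        # MN, with noise

# Build a training set from several simulated crash velocities (m/s).
X, y = [], []
for v in [5.0, 10.0, 15.0, 20.0]:
    disp, force = fake_fe_curve(v)
    X.append(np.column_stack([np.full_like(disp, v), disp]))
    y.append(force)
X, y = np.vstack(X), np.concatenate(y)

model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
model.fit(X, y)

# Predict the force-displacement relation for an unseen crash velocity, which
# a multi-body simulation could then use in place of a hand-tuned force element.
disp_q = np.linspace(0.0, 1.0, 50)
force_q = model.predict(np.column_stack([np.full_like(disp_q, 12.5), disp_q]))
print(force_q[:5])
```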


The Visual Computer | 2018

Action snapshot with single pose and viewpoint

Meili Wang; Shihui Guo; Minghong Liao; Dongjian He; Jian Chang; Jian J. Zhang

Many art forms present visual content as a single image captured from a particular viewpoint. How to select a meaningful representative moment from an action performance is difficult, even for an experienced artist. Often, a well-picked image can tell a story properly. This is important for a range of narrative scenarios, such as journalists reporting breaking news, scholars presenting their research, or artists crafting artworks. We address the underlying structures and mechanisms of a pictorial narrative with a new concept, called the action snapshot, which automates the process of generating a meaningful snapshot (a single still image) from an input of scene sequences. The input of dynamic scenes could include several interactive characters who are fully animated. We propose a novel method based on information theory to quantitatively evaluate the information contained in a pose. Taking the selected top postures as input, a convolutional neural network is constructed and trained with the method of deep reinforcement learning to select a single viewpoint, which maximally conveys the information of the sequence. User studies are conducted to experimentally compare the computer-selected poses and viewpoints with those selected by human participants. The results show that the proposed method can assist the selection of the most informative snapshot effectively from animation-intensive scenarios.
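
As a deliberately simplified stand-in for the learned viewpoint selection (no CNN or reinforcement learning here), the following exhaustively scores a discretized view sphere with a user-supplied render-and-score function; `render_and_score` and the dummy scorer are hypothetical placeholders, not the paper's method.

```python
# Hedged sketch: brute-force viewpoint search over a discretized view sphere.
import numpy as np

def view_sphere(n_azimuth=12, n_elevation=5, radius=3.0):
    """Candidate camera positions on a sphere around the scene origin."""
    cams = []
    for el in np.linspace(-np.pi / 3, np.pi / 3, n_elevation):
        for az in np.linspace(0.0, 2.0 * np.pi, n_azimuth, endpoint=False):
            cams.append(radius * np.array([np.cos(el) * np.cos(az),
                                           np.cos(el) * np.sin(az),
                                           np.sin(el)]))
    return np.array(cams)

def best_viewpoint(poses, render_and_score):
    """Return the camera position that maximises the score over all top poses."""
    cams = view_sphere()
    totals = [sum(render_and_score(pose, cam) for pose in poses) for cam in cams]
    return cams[int(np.argmax(totals))]

# Toy usage with a dummy scorer that prefers frontal, slightly elevated views.
def dummy_score(pose, cam):
    return float(cam[0] + 0.5 * cam[2])

print(best_viewpoint(poses=[None], render_and_score=dummy_score))
```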


Multimedia Tools and Applications | 2018

3D sunken relief generation from a single image by feature line enhancement

Meili Wang; Liying Yang; Tingting Li; Shihui Guo; Jincen Jiang; Hongming Zhang; Jian Chang

Sunken relief is an art form whereby the depicted shapes are sunk into a given flat plane with a shallow overall depth. In this paper, we propose an efficient sunken relief generation algorithm based on a single image, using the technique of feature line enhancement. Our method starts from a single image. First, we smooth the image with morphological operations such as opening and closing and extract the feature lines by comparing the values of adjacent pixels. Then we apply unsharp masking to sharpen the feature lines. After that, we enhance and smooth the local detail to obtain an image with fewer burrs and jagged edges. Differential operations are applied to produce perceptive relief-like images. Finally, we construct the sunken relief surface by triangulation, which transforms the two-dimensional information into a three-dimensional model. The experimental results demonstrate that our method is simple and efficient.
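
A minimal sketch of the 2D stages of the pipeline (morphological smoothing, feature-line extraction, unsharp masking, a differential operation), using OpenCV operators as generic stand-ins for the paper's exact filters; the final triangulation into a 3D mesh is omitted.

```python
# Hedged sketch: single image -> relief-like height map, assuming OpenCV is available.
import cv2
import numpy as np

def relief_height_map(gray):
    """Turn a single grayscale image (uint8) into a sunken-relief-like height map."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # 1. Morphological opening and closing suppress small-scale noise.
    smooth = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    smooth = cv2.morphologyEx(smooth, cv2.MORPH_CLOSE, kernel)
    smooth = smooth.astype(np.float32) / 255.0
    # 2. Feature lines from differences between adjacent pixels (gradient magnitude).
    gx = cv2.Sobel(smooth, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(smooth, cv2.CV_32F, 0, 1, ksize=3)
    lines = np.sqrt(gx * gx + gy * gy)
    # 3. Unsharp masking sharpens those lines in the smoothed image.
    blur = cv2.GaussianBlur(smooth, (0, 0), 2.0)
    sharp = cv2.addWeighted(smooth, 1.5, blur, -0.5, 0.0)
    # 4. Combine and normalise into a shallow height field; a sunken relief
    #    carves into the plane, hence the negative sign.
    height = -(0.7 * sharp + 0.3 * lines)
    height -= height.min()
    return height / max(float(height.max()), 1e-8)

# Usage (path is a placeholder):
# h = relief_height_map(cv2.imread("input.png", cv2.IMREAD_GRAYSCALE))
```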


Computer Animation and Virtual Worlds | 2018

Semantic modeling of indoor scenes with support inference from a single photograph

Yinyu Nie; Jian Chang; Ehtzaz Chaudhry; Shihui Guo; Andi Smart; Jian J. Zhang

We present an automatic approach for the semantic modeling of indoor scenes based on a single photograph, instead of relying on depth sensors. Without using handcrafted features, we guide indoor scene modeling with feature maps extracted by fully convolutional networks. Three parallel fully convolutional networks are adopted to generate object instance masks, a depth map, and an edge map of the room layout. Based on these high-level features, support relationships between indoor objects can be efficiently inferred in a data-driven manner. Constrained by the support context, a global-to-local model matching strategy is followed to retrieve the whole indoor scene. We demonstrate that the proposed method can efficiently retrieve indoor objects, even in situations where the objects are badly occluded. This approach enables efficient semantic-based scene editing.
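
A deliberately simplified heuristic for support inference from the extracted instance masks and depth map; the paper infers support relations in a data-driven manner, so the rule below (an object is supported by whatever lies directly beneath its mask at a consistent depth) is only an assumption-laden illustration.

```python
# Hedged sketch: per-object support guess from instance masks and a depth map.
import numpy as np

def infer_support(masks, depth, depth_tol=0.15, band=3):
    """masks: dict name -> bool (H, W) array; depth: float (H, W) array.

    Returns dict name -> supporting object name, or "floor/wall" if none found.
    """
    support = {}
    for name, m in masks.items():
        ys, xs = np.nonzero(m)
        bottom = ys.max()                          # lowest image row of the object
        cols = xs[ys >= bottom - band]             # columns near that bottom edge
        obj_depth = float(np.median(depth[m]))
        best, votes = "floor/wall", 0
        for other, om in masks.items():
            if other == name:
                continue
            # Pixels of the other object in a thin band just below this object.
            below = om[bottom:min(bottom + 2 * band, om.shape[0]), :][:, cols]
            close = abs(float(np.median(depth[om])) - obj_depth) < depth_tol * obj_depth
            if below.sum() > votes and close:
                best, votes = other, int(below.sum())
        support[name] = best
    return support

# Toy scene: a "book" mask sitting directly on top of a "table" mask.
H, W = 100, 100
table = np.zeros((H, W), bool); table[60:70, 20:80] = True
book = np.zeros((H, W), bool); book[50:60, 40:55] = True
depth = np.full((H, W), 2.0)
print(infer_support({"table": table, "book": book}, depth))   # book -> table
```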

Collaboration


Dive into Shihui Guo's collaborations.

Top Co-Authors

Jian Chang (Bournemouth University)

Yinyu Nie (Southwest Jiaotong University)