Siu-Hang Or
The Chinese University of Hong Kong
Publication
Featured research published by Siu-Hang Or.
eurographics symposium on rendering techniques | 1997
Tien-Tsin Wong; Pheng-Ann Heng; Siu-Hang Or; Wai-Yin Ng
A new image-based rendering method, based on the light field and Lumigraph systems, allows illumination to be changed interactively. It does not try to recover or use any geometrical information (e.g., depth or surface normals) to calculate the illumination, yet the resulting images are physically correct. The scene is first sampled from different viewpoints and under different illuminations. Treating each pixel on the back plane of the light slab as a surface element, the sampled images are used to find an apparent BRDF of each surface element. The tabular BRDF data of each pixel is further transformed to the spherical harmonic domain for efficient storage. Whenever the user changes the illumination setting, a certain number of views are reconstructed. The correct user perspective view is then displayed using the texture mapping technique of the Lumigraph system. Hence, the intensity, type and number of the light sources can be manipulated interactively.
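The spherical-harmonic storage step above amounts to fitting a small set of basis coefficients to each pixel's tabulated apparent BRDF and evaluating that basis under a new light direction. A minimal sketch with a first-order real SH basis (the function names and the least-squares fit are illustrative assumptions, not the paper's code):

```python
import numpy as np

def sh_basis(dirs):
    """First four real spherical harmonics (l = 0, 1) for unit direction vectors."""
    x, y, z = dirs[:, 0], dirs[:, 1], dirs[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def project_brdf(light_dirs, intensities):
    """Least-squares fit of SH coefficients to one pixel's sampled intensities."""
    B = sh_basis(light_dirs)
    coeffs, *_ = np.linalg.lstsq(B, intensities, rcond=None)
    return coeffs

def relight(coeffs, new_dir):
    """Evaluate the compressed per-pixel BRDF under a new light direction."""
    return (sh_basis(new_dir[None, :]) @ coeffs)[0]
```

A pure cosine lobe lies exactly in the l = 1 span, so such a pixel relights exactly; real tabulated BRDFs would need more bands.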
Image and Vision Computing | 1998
Siu-Hang Or; W. S. Luk; Kin Hong Wong; Irwin King
A novel model-based pose estimation algorithm is presented which estimates the motion of a three-dimensional object from an image sequence. The nonlinear estimation process within each iteration is divided into two linear estimation stages, namely the depth approximation and the pose calculation. In the depth approximation stage, the depths of the feature points in three-dimensional space are estimated. In the pose calculation stage, the rotation and translation parameters between the estimated feature points and the model point set are calculated by a fast singular value decomposition method. The whole process is executed recursively until the result is stable. Since both stages can be solved efficiently, the computational cost is low. As a result, the algorithm is well suited for real-world computer vision applications. We demonstrate the capability of this algorithm by applying it to a real-time head-tracking problem. The results are satisfactory.
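The pose-calculation stage described above, a least-squares fit of rotation and translation between two 3-D point sets via singular value decomposition, corresponds to the classical Kabsch-style alignment. A minimal sketch of that stage (function and variable names are mine, not the paper's):

```python
import numpy as np

def pose_from_correspondences(model, observed):
    """Least-squares R, t with observed ~= R @ model + t, via SVD of the
    centered cross-covariance matrix (the pose-calculation stage)."""
    mc, oc = model.mean(axis=0), observed.mean(axis=0)
    H = (model - mc).T @ (observed - oc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])          # guard against reflections
    R = Vt.T @ D @ U.T
    t = oc - R @ mc
    return R, t
```

In the full algorithm this solve would alternate with the depth-approximation stage until the estimate stabilizes.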
computer vision and pattern recognition | 2006
Ying Kin Yu; Kin Hong Wong; Siu-Hang Or; Michael Ming-Yuen Chang
Traditional vision-based 3-D motion estimation algorithms for robots require a given or computed 3-D model while the motion is being tracked. We propose a high-speed extended-Kalman-filter-based approach that recovers position and orientation from stereo image sequences without prior knowledge of the scene and without a 3-D structure reconstruction procedure. By exploiting the trifocal tensor, the 3-D model computation step can be eliminated. The algorithm is thus more flexible and can be applied to a wide range of domains. The twist motion model is also adopted to parameterize the 3-D motion, so that the motion representation in the proposed algorithm is robust and minimal. As the number of parameters to be estimated is reduced, our algorithm is more efficient, stable and accurate than traditional approaches. The proposed method has been verified using a real image sequence with ground truth.
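The twist motion model mentioned above parameterizes a rigid motion by a minimal 6-D vector that maps to a 4x4 transform through the exponential map. A sketch of that standard se(3) mapping, offered as background rather than the paper's code:

```python
import numpy as np

def hat(w):
    """Skew-symmetric (cross-product) matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def twist_to_se3(xi):
    """Map a 6-D twist (v, w) to a 4x4 rigid transform via the exponential map."""
    v, w = xi[:3], xi[3:]
    th = np.linalg.norm(w)
    W = hat(w)
    if th < 1e-12:                       # pure translation
        R, V = np.eye(3), np.eye(3)
    else:                                # Rodrigues' formula and its integral
        A = np.sin(th) / th
        B = (1.0 - np.cos(th)) / th**2
        C = (th - np.sin(th)) / th**3
        R = np.eye(3) + A * W + B * W @ W
        V = np.eye(3) + B * W + C * W @ W
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T
```

Six parameters per frame is the minimal representation, which is what reduces the EKF state size in the approach above.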
Real-time Imaging | 2001
Kam-sum Lee; Kin Hong Wong; Siu-Hang Or; Yiu-fai Fung
Three-dimensional human head modeling is useful in video-conferencing and other virtual reality applications. However, manual construction of 3D models using CAD tools is often expensive and time-consuming. Here we present a robust and efficient method for constructing a 3D human head model from perspective images taken from different angles. In our system, a generic head model is first used; three images of the head are then required to adjust the deformable contours on the generic model to bring it closer to the target head. Our contributions are as follows. Our system uses perspective images, which are more realistic than the orthographic projection approximations used in earlier works. For shaping and positioning face organs, we present a method for estimating the camera focal length and the 3D coordinates of facial landmarks when the camera transformation is known. We also provide an alternative 3D coordinate estimation using epipolar geometry when the extrinsic parameters are unknown. Our experiments demonstrate that our approach produces realistic results.
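When the camera transformation between two views is known, a facial landmark's 3D coordinates can be recovered by linear triangulation. A minimal DLT-style sketch under that assumption (not the authors' implementation; the camera matrices and names are illustrative):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2-D image points."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                  # null vector of A, in homogeneous form
    return X[:3] / X[3]
```

Each observation contributes two rows to the homogeneous system, and the landmark is the (least-squares) null vector of the stacked matrix.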
asian conference on computer vision | 1998
Siu-Hang Or; W. S. Luk; Kin Hong Wong; Irwin King
We propose a novel model-based algorithm which finds the 3D pose of an object from an image by breaking the estimation process into two linear stages, namely the depth recovery and the pose calculation. The depth recovery stage determines the new positions of the model point set in 3D space, whereas the pose calculation step is a least-squares estimation of the transformation parameters between the point set formed in the previous stage and the model set. The estimates are iteratively refined until convergence. The advantage of our algorithm is that the computational cost is much reduced. We test our algorithm by applying it to both synthetic data and a real-time head-tracking problem, with satisfactory results.
Journal of Visualization and Computer Animation | 1998
Tien-Tsin Wong; Pheng-Ann Heng; Siu-Hang Or; Wai-Yin Ng
A new data representation of image-based objects is presented. With this representation, the user can change the illumination as well as the viewpoint of an image-based scene. Physically correct imagery can be generated without knowing any geometrical information (e.g. depth or surface normals) of the scene. By treating each pixel on the image plane as a surface element, we can measure its apparent BRDF (bidirectional reflectance distribution function) by collecting information from the sampled images. These BRDFs allow us to calculate the correct pixel colour under a new illumination set-up by fitting the intensity, direction and number of the light sources. We demonstrate that the proposed representation allows re-rendering of the scene illuminated by different types of light sources. Moreover, two compression schemes, spherical harmonics and the discrete cosine transform, are proposed to compress the huge amount of tabular BRDF data.
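The discrete-cosine-transform compression mentioned above amounts to transforming each tabulated BRDF row and discarding high-frequency coefficients, keeping only the low-frequency ones. A minimal sketch with an orthonormal DCT-II basis (a generic illustration, not the paper's codec):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    M = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    M[0] /= np.sqrt(2.0)
    return M

def compress(row, keep):
    """Transform one tabulated BRDF row and zero all but the `keep`
    lowest-frequency coefficients."""
    c = dct_matrix(len(row)) @ row
    c[keep:] = 0.0
    return c

def decompress(coeffs):
    """Inverse transform; the basis is orthonormal, so inverse = transpose."""
    return dct_matrix(len(coeffs)).T @ coeffs
```

Smooth BRDF rows concentrate their energy in the first few coefficients, which is what makes truncation an effective compression scheme.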
pacific conference on computer graphics and applications | 1997
Tien-Tsin Wong; Pheng-Ann Heng; Siu-Hang Or; Wai-Yin Ng
We present a new scheme of data representation for image-based objects. It allows the illumination to be changed interactively without knowing any geometrical information (e.g. depth or surface normals) of the scene, yet the resulting images are physically correct. The scene is first sampled from different viewpoints and under different illuminations. By treating each pixel on the image plane as a surface element, the sampled images are used to measure the apparent BRDF of each surface element. Two compression schemes, spherical harmonics and the discrete cosine transform, are proposed to compress the tabular BRDF data. Whenever the user changes the illumination, a certain number of views are reconstructed. The correct user perspective view is then displayed using standard texture mapping hardware. Hence, the intensity, type and number of the light sources can be manipulated interactively.
international conference on pattern recognition | 2000
Tze-kin Lao; Kin Hong Wong; Kam-sum Lee; Siu-Hang Or
In this paper, a novel algorithm for creating virtual indoor environments is described. First, a panoramic mosaic is generated from a series of photos taken with a camera rotating about a horizontal axis. Then, from the panoramic mosaic image, a non-fixed-viewpoint virtual walkthrough system can be created by manually defining the corners in the vertical panoramic mosaic. The side ratio of the virtual walkthrough environment can then be obtained from the panning angle. By applying the cylindrical projection technique, the texture for the sides of the virtual walkthrough environment can be projected in a more realistic way. Real images have been used to verify our proposed algorithm, with satisfactory results.
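The cylindrical projection step can be illustrated by mapping a 3-D point, with the camera on the cylinder axis, to pixel coordinates on the panoramic mosaic. A hypothetical sketch (the conventions here, y up, z forward, a panorama spanning the full 360 degrees, are my assumptions, not the paper's):

```python
import math

def point_to_panorama(x, y, z, width, height, f):
    """Map a 3-D point (camera at the cylinder axis) to pixel coordinates
    (u, v) on a cylindrical panorama of size width x height, focal length f."""
    # Azimuth measured from the z axis determines the panorama column.
    u = (math.atan2(x, z) / (2.0 * math.pi) % 1.0) * width
    # Elevation maps to the row via the cylinder's unrolled height.
    v = height / 2.0 - f * y / math.hypot(x, z)
    return u, v
```

Sampling wall textures through this mapping, rather than treating the mosaic as a flat image, is what keeps the projected sides of the walkthrough environment looking realistic.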
international conference on pattern recognition | 2004
Siu-Hang Or; Kin Hong Wong; Michael Ming-Yuen Chang; C. Y. Ip
We propose a method to recover the global structure, with local details, around a point. To handle a large range of motion, i.e. 360 degrees around the point, we use an optimization-based algorithm to estimate the structure from the panorama around the fixed camera point. The estimated global structure can then be used to initialize a structure-from-motion algorithm that recovers the local details through simple camera motion such as panning. Synthetic as well as real data are used to test the validity of the algorithm. Our method can be used in applications such as authoring virtual environments from a real scene.
Archive | 2005
Siu-Hang Or; Kin Hong Wong; Ying-kin Yu; Michael Ming-yuan Chang