Simon Prince
National University of Singapore
Publications
Featured research published by Simon Prince.
Physics in Medicine and Biology | 2003
Simon Prince; Ville Kolehmainen; Jari P. Kaipio; Maria Angela Franceschini; David A. Boas; Simon R. Arridge
We apply state space estimation techniques to the time-varying reconstruction problem in optical tomography. We develop a stochastic model for describing the evolution of quasi-sinusoidal medical signals such as the heartbeat, assuming these are represented as a known frequency with randomly varying amplitude and phase. We use the extended Kalman filter in combination with spatial regularization techniques to reconstruct images from highly under-determined time-series data. This system also naturally segments activity belonging to different biological processes. We present reconstructions of simulated data and of real data recorded from the human motor cortex (Franceschini et al 2000 Optics Express 6 49-57). It is argued that the application of these time-series techniques improves both the fidelity and temporal resolution of reconstruction in optical tomography.
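The signal model above can be made concrete with a toy state-space form: a sinusoid of known frequency whose in-phase and quadrature amplitudes drift as random walks, tracked by a linear Kalman filter. This is a minimal sketch under those assumptions, not the paper's full extended Kalman filter with spatial regularization; the function name and noise parameters are illustrative.

```python
import numpy as np

def kalman_quasi_sinusoid(y, omega, dt, q=1e-4, r=1e-2):
    """Track the in-phase/quadrature amplitudes (a, b) of a quasi-sinusoid
    y_t ~ a_t*cos(omega*t) + b_t*sin(omega*t) + noise, with (a, b) modelled
    as random walks (process variance q). Toy version for illustration."""
    x = np.zeros(2)              # state: [a, b]
    P = np.eye(2)                # state covariance
    Q = q * np.eye(2)            # random-walk process noise
    states = []
    for t, yt in enumerate(y):
        # Predict: a random walk leaves the mean unchanged.
        P = P + Q
        # Update with the time-varying observation row H_t.
        H = np.array([np.cos(omega * t * dt), np.sin(omega * t * dt)])
        S = H @ P @ H + r        # innovation variance (scalar)
        K = P @ H / S            # Kalman gain
        x = x + K * (yt - H @ x)
        P = P - np.outer(K, H) @ P
        states.append(x.copy())
    return np.array(states)
```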
IEEE Computer Graphics and Applications | 2002
Mark Billinghurst; Adrian David Cheok; Simon Prince; Hirokazu Kato
We've been exploring how augmented reality (AR) technology can create fundamentally new forms of remote collaboration for mobile devices. AR involves the overlay of virtual graphics and audio on reality. Typically, the user views the world through a handheld or head-mounted display (HMD) that's either see-through or overlays graphics on video of the surrounding environment. Unlike other computer interfaces that draw users away from the real world and onto the screen, AR interfaces enhance the real world experience. For example, with this technology doctors could see virtual ultrasound information superimposed on a patient's body.
international symposium on mixed and augmented reality | 2002
Simon Prince; Adrian David Cheok; Farzam Farbiz; Todd Williamson; N Johnson; Mark Billinghurst; Hirokazu Kato
We present a complete system for live capture of 3D content and simultaneous presentation in augmented reality. The user sees the real world from his viewpoint, but modified so that the image of a remote collaborator is rendered into the scene. Fifteen cameras surround the collaborator, and the resulting video streams are used to construct a three-dimensional model of the subject using a shape-from-silhouette algorithm. Users view a two-dimensional fiducial marker using a video-see-through augmented reality interface. The geometric relationship between the marker and head-mounted camera is calculated, and the equivalent view of the subject is computed and drawn into the scene. Our system can generate 384 × 288 pixel images of the models at 25 fps, with a latency of < 100 ms. The result gives the strong impression that the subject is a real part of the 3D scene. We demonstrate applications of this system in 3D videoconferencing and entertainment.
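A minimal sketch of the shape-from-silhouette idea described above: a voxel survives only if it projects inside the foreground silhouette of every camera. Array shapes, the camera-matrix convention, and the `carve_visual_hull` name are assumptions for illustration, not the system's real-time implementation.

```python
import numpy as np

def carve_visual_hull(voxels, cameras, silhouettes):
    """Naive shape-from-silhouette: keep a voxel only if its projection
    lands inside the foreground silhouette of every camera.
    voxels: (N, 3) world points; cameras: list of 3x4 projection
    matrices; silhouettes: matching list of binary HxW masks."""
    keep = np.ones(len(voxels), dtype=bool)
    hom = np.hstack([voxels, np.ones((len(voxels), 1))])  # homogeneous coords
    for P, mask in zip(cameras, silhouettes):
        uvw = hom @ P.T                                   # project all voxels
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        h, w = mask.shape
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        keep &= inside                                    # off-image is carved
        keep[inside] &= mask[v[inside], u[inside]].astype(bool)
    return voxels[keep]
```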
IEEE Computer Graphics and Applications | 2002
Simon Prince; Ke Xu; Adrian David Cheok
To realistically integrate 3D graphics into an unprepared environment, camera position must be estimated by tracking natural image features. We apply our technique to cases where feature positions in adjacent frames of an image sequence are related by a homography, or projective transformation. We describe this transformation's computation and demonstrate several applications. First, we use an augmented notice board to explain how a homography, between two images of a planar scene, completely determines the relative camera positions. Second, we show that the homography can also recover pure camera rotations, and we use this to develop an outdoor AR tracking system. Third, we use the system to measure head rotation and form a simple low-cost virtual reality (VR) tracking solution.
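The homography between two views of a plane can be estimated from four or more point correspondences with the standard direct linear transform (DLT); the sketch below shows that textbook computation and is not necessarily the exact method used in the paper.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H with dst ~ H @ src from >= 4 point
    correspondences using the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]           # fix the scale ambiguity
```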
international symposium on wearable computers | 2002
Adrian David Cheok; K. Ganesh Kumar; Simon Prince
Human interaction with wearable computers is an important research issue, especially when combined with mixed reality (MR) applications. Natural and non-obtrusive means of interaction call for new devices, which should be simple to use. This paper considers the design of new interaction hardware, such as a wearable computer pen, a tilt pad, a wand, and a gesture pad designed using accelerometers for such scenarios. The very difficult problem of noise in small hardware accelerometers (in the form of random bias drifts), which seriously impedes their application in position measurement, is also examined in detail. Kalman filtering yields improved results, which are presented here. The application of accelerometers to design interfaces for use in mixed reality environments is also explained.
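As a concrete illustration of the bias-drift problem, the sketch below runs a one-dimensional Kalman filter that models the accelerometer bias as a random walk, assuming the device is momentarily stationary so each reading is bias plus noise. This is a toy simplification, not the authors' filter; the noise parameters are placeholders.

```python
import numpy as np

def track_bias(accel, q=1e-6, r=1e-3):
    """Estimate a slowly drifting accelerometer bias with a 1-D Kalman
    filter, assuming a stationary device so each reading = bias + noise."""
    b, P = 0.0, 1.0                  # bias estimate and its variance
    out = []
    for a in accel:
        P += q                       # random-walk bias drifts over time
        K = P / (P + r)              # Kalman gain
        b += K * (a - b)             # measurement update
        P *= (1.0 - K)
        out.append(b)
    return np.array(out)
```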
international symposium on mixed and augmented reality | 2002
Kar Wee Chia; Adrian David Cheok; Simon Prince
We present a complete scalable system for 6 DOF camera tracking based on natural features. Crucially, the calculation is based only on pre-captured reference images and previous estimates of the camera pose and is hence suitable for online applications. We match natural features in the current frame to two spatially separated reference images. We overcome the wide baseline matching problem by matching to the previous frame and transferring point positions to the reference images. We then minimize deviations from the two-view and three-view constraints between the reference images and the current frame as a function of camera position parameters. We stabilize this calculation using a recursive form of temporal regularization that is similar in spirit to the Kalman filter. We can track camera pose over hundreds of frames and realistically integrate virtual objects with only slight jitter.
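One way to picture the per-frame optimization is as a cost that sums epipolar residuals against a reference image and adds a quadratic pull toward the previous pose estimate. The sketch below assumes a hypothetical helper `fundamental_from_pose` mapping a 6-DOF parameter vector to a fundamental matrix; it illustrates the structure of such a regularized cost, not the paper's exact two- and three-view formulation.

```python
import numpy as np

def regularized_cost(pose, pts_cur, pts_ref, fundamental_from_pose,
                     pose_prev, lam=0.1):
    """Cost for one frame: epipolar (two-view) residuals between the current
    frame and a reference image, plus a temporal regularizer pulling the
    pose toward the previous estimate. `fundamental_from_pose` is an
    assumed helper, not a real library function."""
    F = fundamental_from_pose(pose)
    res = 0.0
    for x, x_ref in zip(pts_cur, pts_ref):
        l = F @ np.append(x_ref, 1.0)              # epipolar line in current frame
        d = abs(l @ np.append(x, 1.0)) / np.hypot(l[0], l[1])
        res += d ** 2                              # squared point-to-line distance
    return res + lam * np.sum((pose - pose_prev) ** 2)
```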
international symposium on mixed and augmented reality | 2002
Adrian David Cheok; Wang Weihua; Xubo Yang; Simon Prince; Fong Siew Wan; Mark Billinghurst; Hirokazu Kato
This paper presents an interactive theatre based on an embodied mixed reality space and wearable computers. Embodied computing mixed reality spaces integrate ubiquitous computing, tangible interaction and social computing within a mixed reality space, which enables intuitive interaction with the physical and virtual worlds. We believe this has potential advantages for supporting novel interactive theatre experiences. We therefore explored the novel interactive theatre experience supported in the embodied mixed reality space, and implemented live 3D characters that interact with users in such a system.
IEEE Transactions on Multimedia | 2005
Farzam Farbiz; Adrian David Cheok; Liu Wei; Zhou Zhiying; Xu Ke; Simon Prince; Mark Billinghurst; Hirokazu Kato
We describe an augmented reality system for superimposing three-dimensional (3-D) live content onto two-dimensional fiducial markers in the scene. In each frame, the Euclidean transformation between the marker and the camera is estimated. The equivalent virtual view of the live model is then generated and rendered into the scene at interactive speeds. The 3-D structure of the model is calculated using a fast shape-from-silhouette algorithm based on the outputs of 15 cameras surrounding the subject. The novel view is generated by projecting rays through each pixel of the desired image and intersecting them with the 3-D structure. Pixel color is estimated by taking a weighted sum of the colors of the projections of this 3-D point in nearby real camera images. Using this system, we capture live human models and present them via the augmented reality interface at a remote location. We can generate 384 × 288 pixel images of the models at 25 fps, with a latency of <100 ms. The result gives the strong impression that the model is a real 3-D part of the scene.
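The ray-based rendering step can be sketched as follows: once a novel-view ray meets the reconstructed surface at a 3-D point, that point is projected into the real cameras and the sampled colors are blended, weighting cameras whose viewing direction best matches the ray. The shapes, the cosine weighting, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def blend_pixel_color(X, cameras, images, ray_dir):
    """Color a novel-view pixel whose ray meets the model at 3-D point X:
    project X into the real cameras and take a weighted sum of the sampled
    colors, favouring cameras whose view direction matches the novel ray.
    cameras: list of (P, center) pairs with P a 3x4 projection matrix."""
    colors, weights = [], []
    for (P, center), img in zip(cameras, images):
        u, v, w = P @ np.append(X, 1.0)
        px, py = int(u / w), int(v / w)
        h, wid = img.shape[:2]
        if not (0 <= px < wid and 0 <= py < h):
            continue                               # X not visible here
        view_dir = (X - center) / np.linalg.norm(X - center)
        weights.append(max(view_dir @ ray_dir, 0.0))
        colors.append(img[py, px].astype(float))
    if not weights or sum(weights) == 0.0:
        return None                                # no camera sees this point
    w = np.array(weights) / sum(weights)
    return np.einsum('i,ij->j', w, np.array(colors))
```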
ubiquitous computing | 2003
Ke Xu; Simon Prince; Adrian David Cheok; Yan Qiu; Krishnamoorthy Ganesh Kumar
Despite the increasing sophistication of augmented reality (AR) tracking technology, tracking in unprepared environments still remains an enormous challenge according to a recent survey. Most current systems are based on a calculation of the optical flow between the current and previous frames to adjust the label position. Here we present two alternative algorithms based on geometrical image constraints. The first is based on epipolar geometry and provides a general description of the constraints on image flow between two static scenes. The second is based on the calculation of a homography relationship between the current frame and a stored representation of the scene. A homography can exactly describe the image motion when the scene is planar, or when the camera movement is a pure rotation, and provides a good approximation when these conditions are nearly met. We assess all three styles of algorithms with a number of criteria including robustness, speed and accuracy. We demonstrate two real-time AR systems here, which are based on the estimation of homography. One is an outdoor geographical labelling/overlaying system, and the other is an AR Pacman game application.
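For the labelling application, the core operation is simply to push a label's anchor point through the estimated homography from the stored frame to the current frame, as in this minimal sketch (names are illustrative).

```python
import numpy as np

def transfer_label(H, label_xy):
    """Move an AR label from the stored reference frame into the current
    frame by applying the estimated homography H to its anchor point."""
    x, y, w = H @ np.array([label_xy[0], label_xy[1], 1.0])
    return x / w, y / w              # dehomogenize to pixel coordinates
```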
international conference on computer graphics and interactive techniques | 2002
Simon Prince; Adrian David Cheok; Farzam Farbiz; Todd Williamson; N Johnson; Mark Billinghurst; Hirokazu Kato
We demonstrate a real-time 3-D augmented reality video-conferencing system. The observer sees the real world from his viewpoint, but modified so that the image of a remote collaborator is rendered into the scene. For each frame, we estimate the transformation between the camera and a fiducial marker using techniques developed in Kato and Billinghurst [1999]. We use a shape-from-silhouette algorithm to generate the appropriate view of the collaborator in real time. This is based on simultaneous measurements from fifteen calibrated cameras that surround the collaborator. The novel view is then superimposed upon the real world image and appropriate directional audio is added. The result gives the strong impression that the virtual collaborator is a real part of the scene.
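Per frame, the marker-to-camera transform fixes the viewpoint from which the collaborator model must be rendered; inverting that rigid transform gives the virtual camera pose in marker coordinates. A small sketch of that step, with names assumed for illustration:

```python
import numpy as np

def virtual_camera_pose(T_marker_to_cam):
    """Given the per-frame rigid transform from marker to camera (as
    estimated by fiducial-marker tracking), return the camera pose in
    marker coordinates, i.e. the viewpoint from which the collaborator
    model should be rendered."""
    R, t = T_marker_to_cam[:3, :3], T_marker_to_cam[:3, 3]
    # Invert a rigid transform: rotation R^T, translation -R^T t.
    cam_rot = R.T                    # camera orientation in marker frame
    cam_pos = -R.T @ t               # camera centre in marker frame
    return cam_rot, cam_pos
```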