Raymond Chun Hing Lo
University of Toronto
Publications
Featured research published by Raymond Chun Hing Lo.
acm multimedia | 2011
Steve Mann; Jason Huang; Ryan E. Janzen; Raymond Chun Hing Lo; Valmiki Rampersad; Alexander Chen; Taqveer Doha
We present a wayfinding system that uses a range camera and an array of vibrotactile elements, which we built into a helmet. The range camera is a Kinect 3D sensor from Microsoft that is meant to be kept stationary and used to watch the user (i.e., to detect the person's gestures). Rather than using the camera to look at the user, we reverse the situation by putting the Kinect range camera on a helmet to be worn by the user. In our case, the Kinect is in motion rather than stationary. Whereas stationary cameras have previously been used for gesture recognition, which the Kinect does very well, in our new modality we take advantage of the Kinect's resilience against rapidly changing background scenery, since the background in our case is now in motion (i.e., a conventional wearable camera would be presented with a constantly changing background that is difficult to manage by mere background subtraction). The goal of our project is collision avoidance for blind or visually impaired individuals, for workers in harsh environments such as industrial settings with significant three-dimensional obstacles, and for use in low-light environments.
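The core depth-to-tactile mapping can be illustrated with a minimal sketch, assuming a horizontal array of vibrotactile elements and a simple linear intensity ramp; the element count, depth range, and mapping curve below are illustrative assumptions, not the authors' parameters.

```python
# Minimal sketch (not the authors' implementation): map a Kinect depth frame to
# vibration intensities for a horizontal array of vibrotactile elements.
# Element count, depth range, and intensity curve are illustrative assumptions.
import numpy as np

N_ELEMENTS = 8                 # assumed number of vibrotactile elements across the helmet
NEAR_MM, FAR_MM = 500, 4000    # assumed depth range of interest, in millimetres

def depth_to_vibration(depth_mm: np.ndarray) -> np.ndarray:
    """Return one intensity in [0, 1] per element; closer obstacles vibrate harder."""
    _, w = depth_mm.shape
    intensities = np.zeros(N_ELEMENTS)
    for i in range(N_ELEMENTS):
        column = depth_mm[:, i * w // N_ELEMENTS : (i + 1) * w // N_ELEMENTS]
        valid = column[(column > NEAR_MM) & (column < FAR_MM)]   # drop invalid/zero returns
        if valid.size == 0:
            continue
        nearest = valid.min()
        # Linear ramp: an obstacle at NEAR_MM gives full intensity, at FAR_MM gives none.
        intensities[i] = 1.0 - (nearest - NEAR_MM) / (FAR_MM - NEAR_MM)
    return intensities
```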
canadian conference on electrical and computer engineering | 2012
Steve Mann; Raymond Chun Hing Lo; Kalin Ovtcharov; Shixiang Gu; David Dai; Calvin Ngan; Tao Ai
Realtime video HDR (High Dynamic Range) is presented in the context of a seeing aid designed originally for task-specific use (e.g. electric arc welding). It can also be built into regular eyeglasses to help people see better in everyday life. Our prototype consists of an EyeTap (electric glasses) welding helmet with a wearable computer, upon which are implemented a set of image processing algorithms that perform realtime HDR image processing together with applications such as mediated reality, augmediated™ reality, and augmented reality. The HDR video system runs in realtime and processes 120 frames per second, in groups of three or four frames (e.g. a set of four differently exposed images captured every thirtieth of a second). The processing method, designed for implementation on FPGAs (Field Programmable Gate Arrays), achieves realtime performance for creating HDR video using our novel compositing methods, and runs on a miniature self-contained battery-operated head-worn circuit board, without the need for a host computer. The result is an essentially self-contained, miniaturizable hardware HDR camera system that could be built into smaller eyeglass frames, for use in various wearable computing and mediated/augmediated reality applications, as well as to help people see better in their everyday lives.
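For orientation, the sketch below shows a generic way to composite a group of differently exposed frames into one radiance-like image using hat weighting. It is an assumption-laden illustration of exposure-bracket compositing in general, not the paper's novel compositing method or its FPGA pipeline.

```python
# Generic exposure-bracket compositing sketch (not the paper's FPGA pipeline):
# combine three or four differently exposed frames into a radiance-like image.
import numpy as np

def composite_hdr(frames, exposure_times):
    """frames: list of float arrays scaled to [0, 1]; exposure_times: relative exposures."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # weight mid-tones most, clipped pixels least
        acc += w * (img / t)                # divide out exposure to estimate radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-6)     # weighted average, avoiding division by zero
```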
Toxicology and Applied Pharmacology | 2011
Raymond Chun Hing Lo; Trine Celius; Agnes L. Forgacs; Edward Dere; Laura MacPherson; Patricia A. Harper; Timothy R. Zacharewski; Jason Matthews
Genome-wide, promoter-focused ChIP-chip analysis of hepatic aryl hydrocarbon receptor (AHR) binding sites was conducted in 8-week-old female C57BL/6 mice treated with 30 μg/kg body weight 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) for 2 h and 24 h. These studies identified 1642 and 508 AHR-bound regions at 2 h and 24 h, respectively. A total of 430 AHR-bound regions were common between the two time points, corresponding to 403 unique genes. Comparison with previous AHR ChIP-chip studies in mouse hepatoma cells revealed that only 62 of the putative target genes overlapped with the 2 h AHR-bound regions in vivo. Transcription factor binding site analysis revealed an over-representation of aryl hydrocarbon response elements (AHREs) in AHR-bound regions, with 53% (2 h) and 68% (24 h) of them containing at least one AHRE. In addition to AHREs, E2f-Myc activator motifs previously implicated in AHR function, as well as a number of other motifs, including Sp1, nuclear receptor subfamily 2 factor, and early growth response factor motifs, were also identified. Expression microarray studies identified 133 unique genes differentially regulated after 4 h of treatment with TCDD, 39 of which were identified as AHR-bound genes at 2 h. Ingenuity Pathway Analysis of the 39 AHR-bound, TCDD-responsive genes identified potential perturbation of biological processes such as lipid metabolism, drug metabolism, and endocrine system development as a result of TCDD-mediated AHR activation. Our findings identify direct AHR target genes in vivo, highlight in vitro and in vivo differences in AHR signaling, and show that AHR recruitment does not necessarily result in changes in target gene expression.
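The AHRE enrichment statistic quoted above (the fraction of bound regions containing at least one response element) can be illustrated with a short sketch that scans region sequences for the AHRE core 5'-GCGTG-3' on either strand. This is only an illustration of the counting step; the study itself used dedicated transcription factor binding site analysis tools, and the input format here is assumed.

```python
# Illustrative sketch: fraction of AHR-bound regions containing at least one AHRE core
# (5'-GCGTG-3', scanned on both strands). Input format is assumed.
import re

AHRE_CORE = re.compile(r"GCGTG")
AHRE_CORE_RC = re.compile(r"CACGC")   # reverse complement of the core

def fraction_with_ahre(region_sequences):
    """region_sequences: list of uppercase DNA strings, one per AHR-bound region."""
    hits = sum(1 for seq in region_sequences
               if AHRE_CORE.search(seq) or AHRE_CORE_RC.search(seq))
    return hits / max(len(region_sequences), 1)

# Example: fraction_with_ahre(["ATGCGTGCAT", "ATATATATAT"]) -> 0.5
```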
Biochemical and Biophysical Research Communications | 2009
Laura MacPherson; Raymond Chun Hing Lo; Shaimaa Ahmed; Andrea Pansoy; Jason Matthews
We investigated the role of the activation function 1 (AF1) and AF2 domains of estrogen receptor alpha (ERalpha) in mediating dioxin-dependent recruitment of ERalpha to cytochrome P4501A1 (CYP1A1) and CYP1B1 in HuH-7 human hepatoma cells. Dioxin induced recruitment of wild-type ERalpha (ERalpha-WT) and an ERalpha AF1 deletion mutant (ERalpha-DeltaAF1), but not a transcriptionally inactive AF2 mutant (ERalpha-AF2mut), to CYP1A1 and CYP1B1. Direct interactions between AHR and the AF1 and AF2 domains of ERalpha were observed, and were independent of mutations in the AF2 domain. Expression of ERalpha-WT increased dioxin-induced CYP1A1- and CYP1B1-regulated reporter activity, as well as CYP1A1 and CYP1B1 mRNA levels. However, no increases in gene expression above vector controls were observed in cells transfected with ERalpha-DeltaAF1 or ERalpha-AF2mut. Our data show that the AF2 domain contributes to dioxin-induced recruitment of ERalpha to AHR target genes, but that both the AF1 and AF2 domains are required for ERalpha-dependent increases in AHR activity.
international symposium on technology and society | 2013
Raymond Chun Hing Lo; Alexander Chen; Valmiki Rampersad; Jason Huang; Hang Wu; Steve Mann
Three-dimensional (3D) range cameras have recently appeared in the marketplace for use in surveillance applications (e.g. cameras affixed to inanimate objects). We present FreeGlass™ as a wearable, hands-free, 3D gesture-sensing Digital Eye Glass system. FreeGlass comprises a head-mounted display with an infrared range camera, both connected to a wearable computer. It is based on the MannGlas™ computerized welding glass, which embodies HDR (High Dynamic Range) and AR (Augmented/Augmediated Reality). FreeGlass recontextualizes the 3D range camera as a sousveillance (e.g. cameras attached to people) camera. In this sousveillance context, the range camera is worn by the user and shares the same point of view as the user. Computer vision algorithms therefore benefit from the range camera, which enables image segmentation using both the infrared and depth information from the device for the 3D hand gesture recognition system. The gesture recognition is then accomplished by applying a neural network to the segmented hand. Recognized gestures are used to provide the user with interactions in an augmediated reality environment. Additionally, we present applications of FreeGlass for serendipitous gesture recognition in everyday life, as well as for interaction with real-world objects (with and without gesture recognition). A plurality of FreeGlass units can be used together, each sensor having a different spreading sequence, or the like, so that a number of people can collaborate and share the same or similar Augmediated Reality space(s).
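A minimal sketch of the segmentation step, under stated assumptions rather than the FreeGlass implementation itself: keep the largest connected depth region within arm's reach of the head-mounted sensor and treat it as the hand, then pass the resulting mask to a gesture classifier. The distance threshold and the use of scipy's connected-component labelling are illustrative choices.

```python
# Sketch of depth-based hand segmentation from a head-worn range camera (assumptions,
# not the FreeGlass implementation). The binary mask would then feed a neural-network
# gesture classifier, as described in the abstract.
import numpy as np
from scipy import ndimage

def segment_hand(depth_mm: np.ndarray, max_hand_distance_mm: int = 800) -> np.ndarray:
    """Return a boolean mask of the assumed hand region in front of the wearer."""
    near = (depth_mm > 0) & (depth_mm < max_hand_distance_mm)   # drop invalid zero returns
    labels, n = ndimage.label(near)                             # connected components
    if n == 0:
        return np.zeros_like(near, dtype=bool)
    sizes = ndimage.sum(near, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)                     # keep the largest nearby blob
```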
acm multimedia | 2012
Raymond Chun Hing Lo; Steve Mann; Jason Huang; Valmiki Rampersad; Tao Ai
We present highly parallelizable and computationally efficient High Dynamic Range (HDR) image compositing, reconstruction, and spatiotonal mapping algorithms for processing HDR video. We implemented our algorithms in the EyeTap Digital Glass electric seeing aid, for use in everyday life. We also tested the algorithms in extreme dynamic range situations, such as electric arc welding. Our system runs in realtime, requires no user intervention, and needs no fine-tuning of parameters after a one-time calibration, even under a wide variety of very difficult lighting conditions (e.g. electric arc welding, including detailed inspection of the arc, weld puddle, and shielding gas in TIG welding). Our approach can render video at 1920x1080 pixel resolution at interactive frame rates that vary from 24 to 60 frames per second with GPU acceleration. We also implemented our system on FPGAs (Field Programmable Gate Arrays) so that the system can be miniaturized and built into eyeglass frames.
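To make the tone-mapping stage concrete, the sketch below shows a simple global Reinhard-style operator that compresses a floating-point HDR radiance image into display range. It is a generic stand-in, not the paper's spatiotonal mapping algorithm, and the key value is an assumed default.

```python
# Simple global tone-mapping sketch (Reinhard-style), not the paper's spatiotonal
# mapping: compress a floating-point HDR radiance image into [0, 1) for display.
import numpy as np

def tonemap_global(hdr: np.ndarray, key: float = 0.18, eps: float = 1e-6) -> np.ndarray:
    """hdr: float array of scene radiance at arbitrary scale; returns values in [0, 1)."""
    lum = hdr.mean(axis=-1) if hdr.ndim == 3 else hdr   # rough per-pixel luminance
    log_avg = np.exp(np.mean(np.log(lum + eps)))        # log-average luminance of the scene
    scaled = hdr * (key / log_avg)                      # expose the scene to a mid-grey key
    return scaled / (1.0 + scaled)                      # compress highlights smoothly
```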
international symposium on technology and society | 2013
Raymond Chun Hing Lo; Valmiki Rampersad; Jason Huang; Steve Mann
This paper presents the invention and implementation of 3D (Three-Dimensional) HDR (High Dynamic Range) sensing, along with examples. We propose a method of 3D HDR veillance (sensing, computer vision, video capture, or the like) by integrating tonal and spatial information obtained from multiple HDR exposures for use in conjunction with one or more 3D cameras. In one embodiment, we construct a 3D HDR camera from multiple 3D cameras such as Kinect sensors. In this embodiment, the 3D cameras are arranged in a fixed array, such that the geometric relationships between them remain constant over time. Only a single camera calibration step is required when the cameras are first assembled and fixed into the array. Preferably the cameras either view from the same position through beam splitters, or are fixed close to one another, so that they capture approximately the same subject matter. The system is designed so that each camera captures a differently exposed image or video of approximately the same subject matter. In one embodiment, two Kinect cameras are attached together facing in the same direction, with an ND (Neutral Density) filter over one of them so as to obtain a darker exposure. The dark and light exposures are combined to obtain more accurate 3D sensing in high-contrast scenes. In another embodiment, a single 3D camera is exposure-sequenced (alternating light and dark exposures). 3D HDR might, more generally, be incorporated into existing 3D cameras, resulting in a new kind of 3D sensor that can work in nearly any environment, including high-contrast scenes such as outdoor scenes, or scenes where a bright light is shining directly into the sensor.
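A conceptual sketch of the two-Kinect embodiment described above, under assumptions: the "light" and ND-filtered "dark" depth frames are already registered to each other (the one-time calibration), and merging simply prefers whichever sensor returned a valid reading, averaging where both did. The exact combining rule in the paper may differ.

```python
# Conceptual sketch of merging depth from a light-exposed and an ND-filtered (dark)
# Kinect, assuming the frames are already registered. Zero values denote missing depth.
import numpy as np

def merge_depth_hdr(depth_light: np.ndarray, depth_dark: np.ndarray) -> np.ndarray:
    valid_l = depth_light > 0
    valid_d = depth_dark > 0
    merged = np.zeros_like(depth_light, dtype=np.float32)
    both = valid_l & valid_d
    merged[both] = (depth_light[both] + depth_dark[both]) / 2.0   # both valid: average
    only_l = valid_l & ~valid_d
    only_d = valid_d & ~valid_l
    merged[only_l] = depth_light[only_l]   # fill holes from whichever camera saw the scene
    merged[only_d] = depth_dark[only_d]
    return merged
```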
international conference on computer graphics and interactive techniques | 2016
Raymond Chun Hing Lo
Overlaying and registering interactive virtual content on the real world requires sophisticated hardware and advanced software: cutting-edge optics, state-of-the-art sensors, and the latest in computer vision and computer graphics algorithms. Meta 2 provides an easy-to-use platform for creating augmented reality applications that abstracts these complexities.
BMC Genomics | 2011
Edward Dere; Raymond Chun Hing Lo; Trine Celius; Jason Matthews; Timothy R. Zacharewski
Toxicological Sciences | 2012
Raymond Chun Hing Lo; Jason Matthews