Adrian Ilie
University of North Carolina at Chapel Hill
Publications
Featured research published by Adrian Ilie.
international conference on computer vision | 2005
Adrian Ilie; Greg Welch
Most multi-camera vision applications assume a single common color response for all cameras. However, different cameras - even of the same type - can exhibit radically different color responses, and the differences can cause significant errors in scene interpretation. To address this problem we have developed a robust system aimed at inter-camera color consistency. Our method consists of two phases: an iterative closed-loop calibration phase that searches for the per-camera hardware register settings that best balance linearity and dynamic range, followed by a refinement phase that computes the per-camera parametric values for an additional software-based color mapping.
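The abstract does not specify the parametric form of the software-based color mapping; the sketch below assumes a simple per-camera affine RGB transform fitted by least squares to shared color-chart samples, as one illustration of what the refinement phase could compute.

```python
import numpy as np

def fit_color_transform(observed, reference):
    """Fit a per-camera affine color mapping (3x3 matrix plus offset) that maps
    this camera's observed RGB chart samples onto shared reference values.

    observed, reference: (N, 3) arrays of corresponding RGB samples (e.g.,
    averaged color-checker patches). Returns (M, b) such that
    reference ~= observed @ M.T + b in the least-squares sense.
    """
    X = np.hstack([observed, np.ones((observed.shape[0], 1))])  # (N, 4)
    W, *_ = np.linalg.lstsq(X, reference, rcond=None)           # (4, 3)
    M, b = W[:3].T, W[3]
    return M, b

def apply_color_transform(image, M, b):
    """Apply the fitted mapping to an (H, W, 3) float image."""
    return (image.reshape(-1, 3) @ M.T + b).reshape(image.shape)
```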
computer vision and pattern recognition | 2008
Ram Kumar; Adrian Ilie; Jan Michael Frahm; Marc Pollefeys
Calibrating a network of cameras with non-overlapping views is an important and challenging problem in computer vision. In this paper, we present a novel technique for camera calibration using a planar mirror. We overcome the need for all cameras to see a common calibration object directly by allowing them to see it through a mirror. We use the fact that the mirrored views generate a family of mirrored camera poses that uniquely describe the real camera pose. Our method consists of the following two steps: (1) using standard calibration methods to find the internal and external parameters of a set of mirrored camera poses, (2) estimating the external parameters of the real cameras from their mirrored poses by formulating constraints between them. We demonstrate our method on real and synthetic data for camera clusters with small overlap between the views and non-overlapping views.
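As an illustration of the mirror relationship the method builds on (not the paper's constraint-based estimation, which recovers the real poses from the mirrored poses jointly), the sketch below assumes the mirror plane is already known and reflects a mirrored camera pose back across it.

```python
import numpy as np

def reflect_camera(axes_virt, C_virt, n, d):
    """Reflect a mirrored (virtual) camera across the mirror plane n . x = d.

    axes_virt: 3x3 matrix whose columns are the virtual camera's axes in world
    coordinates; C_virt: virtual camera center. The plane (n, d) is assumed
    known here purely for illustration. Note that a planar reflection flips
    handedness (the returned axes have determinant -1), which must be
    accounted for before using them as a rotation.
    """
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    D = np.eye(3) - 2.0 * np.outer(n, n)            # reflects directions
    C_real = C_virt - 2.0 * (n @ C_virt - d) * n    # reflects a point
    axes_real = D @ axes_virt                        # reflect each camera axis
    return axes_real, C_real
```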
non-photorealistic animation and rendering | 2004
Ramesh Raskar; Adrian Ilie; Jingyi Yu
We present a class of image fusion techniques to automatically combine images of a scene captured under different illumination. Beyond providing digital tools for artists for creating surrealist images and videos, the methods can also be used for practical applications. For example, the non-realistic appearance can be used to enhance the context of nighttime traffic videos so that they are easier to understand. The context is automatically captured from a fixed camera and inserted from a day-time image (of the same scene). Our approach is based on a gradient domain technique that preserves important local perceptual cues while avoiding traditional problems such as aliasing, ghosting and haloing. We present several results in generating surrealistic videos and in increasing the information density of low-quality nighttime videos.
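A minimal single-channel sketch of the gradient-domain idea: keep, at each pixel, the stronger of the two images' gradients and reintegrate by solving a Poisson equation. The max-gradient rule and the Jacobi solver are simplifying assumptions, not the paper's importance-based formulation.

```python
import numpy as np

def fuse_gradient_domain(day, night, iters=500):
    """Toy gradient-domain fusion of two registered float images in [0, 1]."""
    def grads(img):
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        return gx, gy

    dgx, dgy = grads(day)
    ngx, ngy = grads(night)
    pick = (ngx**2 + ngy**2) > (dgx**2 + dgy**2)     # keep the stronger gradient
    gx = np.where(pick, ngx, dgx)
    gy = np.where(pick, ngy, dgy)

    # Divergence of the fused gradient field.
    div = (np.diff(gx, axis=1, prepend=gx[:, :1]) +
           np.diff(gy, axis=0, prepend=gy[:1, :]))

    # Jacobi iterations for lap(f) = div, initialized from the night image.
    f = night.copy()
    for _ in range(iters):
        f_pad = np.pad(f, 1, mode='edge')
        neighbors = (f_pad[:-2, 1:-1] + f_pad[2:, 1:-1] +
                     f_pad[1:-1, :-2] + f_pad[1:-1, 2:])
        f = (neighbors - div) / 4.0
    return np.clip(f, 0.0, 1.0)
```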
Virtual Reality | 2011
Peter Lincoln; Greg Welch; Andrew Nashel; Andrei State; Adrian Ilie; Henry Fuchs
Applications such as telepresence and training involve the display of real or synthetic humans to multiple viewers. When attempting to render the humans with conventional displays, non-verbal cues such as head pose, gaze direction, body posture, and facial expression are difficult to convey correctly to all viewers. In addition, a framed image of a human conveys only a limited physical sense of presence—primarily through the display’s location. While progress continues on articulated robots that mimic humans, the focus has been on the motion and behavior of the robots rather than on their appearance. We introduce a new approach for robotic avatars of real people: the use of cameras and projectors to capture and map both the dynamic motion and the appearance of a real person onto a humanoid animatronic model. We call these devices animatronic Shader Lamps Avatars (SLA). We present a proof-of-concept prototype comprised of a camera, a tracking system, a digital projector, and a life-sized styrofoam head mounted on a pan-tilt unit. The system captures imagery of a moving, talking user and maps the appearance and motion onto the animatronic SLA, delivering a dynamic, real-time representation of the user to multiple viewers.
international symposium on mixed and augmented reality | 2009
Peter Lincoln; Greg Welch; Andrew Nashel; Adrian Ilie; Andrei State; Henry Fuchs
Applications such as telepresence and training involve the display of real or synthetic humans to multiple viewers. When attempting to render the humans with conventional displays, non-verbal cues such as head pose, gaze direction, body posture, and facial expression are difficult to convey correctly to all viewers. In addition, a framed image of a human conveys only a limited physical sense of presence, primarily through the display's location. While progress continues on articulated robots that mimic humans, the focus has been on the motion and behavior of the robots. We introduce a new approach for robotic avatars of real people: the use of cameras and projectors to capture and map the dynamic motion and appearance of a real person onto a humanoid animatronic model. We call these devices animatronic Shader Lamps Avatars (SLA). We present a proof-of-concept prototype comprised of a camera, a tracking system, a digital projector, and a life-sized styrofoam head mounted on a pan-tilt unit. The system captures imagery of a moving, talking user and maps the appearance and motion onto the animatronic SLA, delivering a dynamic, real-time representation of the user to multiple viewers.
International Journal of Pattern Recognition and Artificial Intelligence | 2005
Adrian Ilie; Ramesh Raskar; Jingyi Yu
We propose a class of enhancement techniques suitable for scenes captured by fixed cameras. The basic idea is to increase the information density in a set of low quality images by extracting the context from a higher-quality image captured under different illuminations from the same viewpoint. For example, a night-time surveillance video can be enriched with information available in daytime images. We also propose a new image fusion approach to combine images with sufficiently different appearance into a seamless rendering. Our method ensures the fidelity of important features and robustly incorporates background contexts, while avoiding traditional problems such as aliasing, ghosting and haloing. We show results on indoor as well as outdoor scenes.
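A toy illustration of the context-enhancement idea at the pixel level (the paper's fusion is gradient-based; the change-detection weight used here is an assumption): regions with activity keep the night-time appearance, while static regions take their context from the daytime image.

```python
import numpy as np

def enhance_with_context(night, night_bg, day, sharpness=8.0):
    """Blend a low-quality night frame with daytime context (float HxW arrays).

    night_bg is a static night-time background frame; pixels that differ from
    it (likely activity) keep the night appearance, elsewhere the daytime
    image supplies context. A soft per-pixel weight avoids hard seams.
    """
    activity = np.abs(night - night_bg)          # simple change detection
    w = 1.0 - np.exp(-sharpness * activity)      # 0 = static, 1 = active
    return w * night + (1.0 - w) * day
```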
IEEE MultiMedia | 2005
Greg Welch; Andrei State; Adrian Ilie; Kok-Lim Low; Anselmo Lastra; Bruce A. Cairns; Herman Towles; Henry Fuchs; Ruigang Yang; Sascha Becker; Daniel Russo; Jesse Funaro; A. van Dam
Immersive electronic books (IEBooks) for surgical training will let surgeons explore previous surgical procedures in 3D. The authors describe the techniques and tools for creating a preliminary IEBook, embodying some of the basic concepts.
Presence: Teleoperators & Virtual Environments | 2004
Adrian Ilie; Kok-Lim Low; Greg Welch; Anselmo Lastra; Henry Fuchs; Bruce A. Cairns
We introduce and present preliminary results for a hybrid display system combining head-mounted and projector-based displays. Our work is motivated by a surgical training application where it is necessary to simultaneously provide both a high-fidelity view of a central close-up task (the surgery) and visual awareness of objects and events in the surrounding environment. In this article, we motivate the use of a hybrid display system, discuss previous work, describe a prototype along with methods for geometric calibration, and present results from a controlled human subject experiment. This article is an invited resubmission of work presented at IEEE Virtual Reality 2003. The article has been updated and expanded to include (among other things) additional related work and more details about the calibration process.
computer vision and pattern recognition | 2007
Hua Yang; Marc Pollefeys; Greg Welch; Jan Michael Frahm; Adrian Ilie
The appearance of a scene is a function of the scene contents, the lighting, and the camera pose. A set of n-pixel images of a non-degenerate scene captured from different perspectives lie on a 6D nonlinear manifold in R^n. In general, this nonlinear manifold is complicated and numerous samples are required to learn it globally. In this paper, we present a novel method and some preliminary results for incrementally tracking camera motion through sampling and linearizing the local appearance manifold. At each frame time, we use a cluster of calibrated and synchronized small baseline cameras to capture scene appearance samples at different camera poses. We compute a first-order approximation of the appearance manifold around the current camera pose. Then, as new cluster samples are captured at the next frame time, we estimate the incremental camera motion using a linear solver. By using intensity measurements and directly sampling the appearance manifold, our method avoids the commonly-used feature extraction and matching processes, and does not require 3D correspondences across frames. Thus it can be used for scenes with complicated surface materials, geometries, and view-dependent appearance properties, situations where many other camera tracking methods would fail.
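A rough sketch of the linearization step under simplifying assumptions (flattened grayscale images, known small pose offsets for the cluster cameras): build a finite-difference Jacobian of appearance with respect to pose, then solve a linear least-squares problem for the incremental motion.

```python
import numpy as np

def estimate_motion(center_img, offset_imgs, pose_deltas, new_img):
    """Estimate incremental camera motion by linearizing the local appearance
    manifold (a sketch of the idea, not the paper's full method).

    center_img:  flattened intensities from the cluster's reference camera
    offset_imgs: flattened intensities from the offset cameras (at least 6)
    pose_deltas: known small 6-DoF pose offsets of those cameras, shape (k, 6)
    new_img:     flattened intensities of the next frame (reference camera)

    Builds a first-order model I(x) ~= I(0) + J x from the cluster samples and
    solves J x = new_img - center_img for the 6-DoF incremental motion x.
    """
    diffs = np.stack([im - center_img for im in offset_imgs], axis=1)  # (n_pix, k)
    deltas = np.asarray(pose_deltas, dtype=float)                      # (k, 6)
    # Finite-difference Jacobian J (n_pix x 6): diffs ~= J @ deltas.T
    J = np.linalg.lstsq(deltas, diffs.T, rcond=None)[0].T              # (n_pix, 6)
    x = np.linalg.lstsq(J, new_img - center_img, rcond=None)[0]        # (6,)
    return x
```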
ACM Transactions on Sensor Networks | 2014
Adrian Ilie; Greg Welch
Large networks of cameras have been increasingly employed to capture dynamic events for tasks such as surveillance and training. When using active cameras to capture events distributed throughout a large area, human control becomes impractical and unreliable. This has led to the development of automated approaches for on-line camera control. We introduce a new automated camera control approach that consists of a stochastic performance metric and a constrained optimization method. The metric quantifies the uncertainty in the state of multiple points on each target. It uses state-space methods with stochastic models of the target dynamics and camera measurements. It can account for static and dynamic occlusions, accommodate requirements specific to the algorithm used to process the images, and incorporate other factors that can affect its results. The optimization explores the space of camera configurations over time under constraints associated with the cameras, the predicted target trajectories, and the image processing algorithm. The approach can be applied to conventional surveillance tasks (e.g., tracking or face recognition), as well as tasks employing more complex computer vision methods (e.g., markerless motion capture or 3D reconstruction).
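The abstract describes the metric only at a high level; the sketch below assumes a Kalman-style covariance update per target point, with an occluded camera modeled by simply dropping its measurement, and a brute-force search over candidate camera configurations standing in for the paper's constrained optimization.

```python
import numpy as np

def predicted_uncertainty(P_prior, Hs, R_meas, Q):
    """Trace of the posterior covariance for one target point after fusing
    measurements from the cameras that can see it (lower is better).

    P_prior: prior state covariance; Hs: list of measurement Jacobians, one per
    unoccluded camera (empty if the point is fully occluded); R_meas:
    measurement noise covariance; Q: process noise.
    """
    P = P_prior + Q                                   # predict
    for H in Hs:                                      # fuse each unoccluded view
        S = H @ P @ H.T + R_meas
        K = P @ H.T @ np.linalg.inv(S)
        P = (np.eye(P.shape[0]) - K @ H) @ P
    return np.trace(P)

def choose_configuration(candidate_sets, P_prior, R_meas, Q):
    """Pick the candidate camera configuration (each given as its list of
    measurement Jacobians) that minimizes the predicted uncertainty."""
    scores = [predicted_uncertainty(P_prior, Hs, R_meas, Q)
              for Hs in candidate_sets]
    return int(np.argmin(scores))
```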