
Publication


Featured research published by Clifford Lindsay.


international symposium on visual computing | 2006

Physically-based real-time diffraction using spherical harmonics

Clifford Lindsay; Emmanuel Agu

Diffraction, interference, dispersive refraction and scattering are four wavelength-dependent mechanisms that produce iridescent colors. Wavelength-dependent functions need to be sampled at discrete wavelengths in the visible spectrum, which increases the computational cost of rendering iridescence. Furthermore, diffraction requires careful sampling since its response function varies at a higher frequency, with sharper peaks, than interference or dispersive refraction. Consequently, physically accurate diffraction has previously either been approximated using simplified color curves or been limited to offline rendering techniques such as ray tracing. We propose a technique for real-time rendering of physically accurate diffraction on programmable hardware. Our technique adaptively samples the diffraction BRDF and precomputes it in a Spherical Harmonic (SH) basis that preserves the peak intensity of the reflected light. While previous work on diffraction used low dynamic range lights, we preserve the full dynamic range of the incident illumination and the diffractive response over the entire hemisphere of incoming light directions. We defer conversion from a wavelength representation to a tone-mapped RGB triplet until display.
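
To make the SH precomputation concrete, here is a minimal Python sketch of projecting a single-wavelength reflectance lobe onto spherical harmonics by fixed-grid quadrature. The lobe, band limit, and grid resolution are illustrative assumptions; the paper's technique samples the diffraction BRDF adaptively to preserve its sharp peaks, which this uniform grid does not attempt.

```python
import numpy as np
from scipy.special import sph_harm

def project_to_sh(f, l_max=8, n_theta=64, n_phi=128):
    """Project a function f(theta, phi) on the sphere onto complex SH
    coefficients up to band l_max by brute-force quadrature."""
    theta = np.linspace(0, np.pi, n_theta)        # polar angle
    phi = np.linspace(0, 2 * np.pi, n_phi)        # azimuth
    T, P = np.meshgrid(theta, phi, indexing="ij")
    vals = f(T, P)
    # Quadrature weights: sin(theta) dtheta dphi
    w = np.sin(T) * (np.pi / n_theta) * (2 * np.pi / n_phi)
    coeffs = {}
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # scipy's sph_harm takes (m, l, azimuth, polar)
            Y = sph_harm(m, l, P, T)
            coeffs[(l, m)] = np.sum(vals * np.conj(Y) * w)
    return coeffs

# Hypothetical single-wavelength diffraction lobe: a sharp peak near the
# mirror direction, standing in for the real grating response.
lobe = lambda t, p: np.exp(-((t - 0.5) ** 2 + (p - np.pi) ** 2) / 0.01)
c = project_to_sh(lobe, l_max=8)
```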


Physics in Medicine and Biology | 2014

Digital anthropomorphic phantoms of non-rigid human respiratory and voluntary body motion for investigating motion correction in emission imaging

Arda Konik; Caitlin M. Connolly; Karen Johnson; Paul Dasari; Paul Segars; P. H. Pretorius; Clifford Lindsay; Joyoni Dey; Michael A. King

The development of methods for correcting patient motion in emission tomography has been receiving increased attention. Often the performance of these methods is evaluated through simulations using digital anthropomorphic phantoms, such as the commonly used extended cardiac torso (XCAT) phantom, which models both respiratory and cardiac motion based on human studies. However, non-rigid body motion, which is frequently seen in clinical studies, is not present in the standard XCAT phantom. In addition, respiratory motion in the standard phantom is limited to a single generic trend. In this work, to obtain a more realistic representation of motion, we developed a series of individual-specific XCAT phantoms modeling non-rigid respiratory and non-rigid body motions derived from magnetic resonance imaging (MRI) acquisitions of volunteers. Acquisitions were performed in the sagittal orientation using the Navigator methodology. Baseline (no motion) acquisitions at end-expiration were obtained at the beginning of each imaging session for each volunteer. For the body motion studies, MRI was again acquired only at end-expiration for five body motion poses (shoulder stretch, shoulder twist, lateral bend, side roll, and axial slide). For the respiratory motion studies, MRI was acquired during free/regular breathing. The magnetic resonance slices were then retrospectively sorted into 14 amplitude-binned respiratory states (end-expiration, end-inspiration, six intermediary states during inspiration, and six during expiration) using the recorded Navigator signal. XCAT phantoms were then generated based on these MRI data by interactive alignment of the organ contours of the XCAT with the MRI slices using a graphical user interface. Thus far we have created five body motion and five respiratory motion XCAT phantoms from the MRI acquisitions of six healthy volunteers (three males and three females). Non-rigid motion exhibited by the volunteers was reflected in both the respiratory and body motion phantoms, with varying extent and character for each individual. In addition to these phantoms, we recorded the positions of markers placed on the chests of the volunteers for the body motion studies, which can be used as an external motion measurement. Using these phantoms and external motion data, investigators will be able to test their motion correction approaches on realistic motion obtained from different individuals. The non-uniform rational B-spline data and the parameter files for these phantoms are freely available for download and can be used with the XCAT license.
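
The amplitude binning of the Navigator signal can be illustrated with a short Python sketch. The synthetic trace and robust-percentile bin edges are assumptions for illustration; the study's sorting also distinguishes inspiratory from expiratory intermediate states, indicated here only by the slope sign.

```python
import numpy as np

def amplitude_bin(navigator, n_bins=14):
    """Assign each time point of a respiratory Navigator trace to an
    amplitude bin between end-expiration and end-inspiration."""
    lo, hi = np.percentile(navigator, [2, 98])   # robust extremes
    edges = np.linspace(lo, hi, n_bins + 1)
    return np.clip(np.digitize(navigator, edges) - 1, 0, n_bins - 1)

# Hypothetical free-breathing trace with a ~5 s cycle
t = np.linspace(0, 60, 600)
nav = np.sin(2 * np.pi * t / 5)
states = amplitude_bin(nav)
# Slope sign separates inspiratory from expiratory intermediate states
inspiring = np.gradient(nav) > 0
```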


international symposium on visual computing | 2008

Adaptive CPU Scheduling to Conserve Energy in Real-Time Mobile Graphics Applications

Fan Wu; Emmanuel Agu; Clifford Lindsay

Graphics rendering on mobile devices is severely restricted by available battery energy. The frame rate of real-time graphics applications fluctuates due to continual changes in the level of detail (LoD), visibility and distance of scene objects, user interactivity, complexity of lighting and animation, and many other factors. Such frame rate spikes waste precious battery energy. We introduce an adaptive CPU scheduler that predicts the application's workload from frame to frame and allocates just enough CPU cycles to render the scene at a target rate of 25 FPS. Since the application's workload needs to be re-estimated whenever the scene's LoD changes, we integrate our CPU scheduler with LoD management. To further save energy, we try to render scenes at the lowest LoD at which the user does not see visual artifacts on a given screen. Our integrated Energy-efficient Adaptive Real-time Rendering (EARR) heuristic reduces energy consumption by up to 60% while maintaining acceptable image quality at interactive frame rates.
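
A minimal sketch of the scheduling idea, assuming a DVFS-style CPU with discrete frequency levels and an exponential moving average as the workload predictor; both are assumptions, and the paper's EARR heuristic additionally couples prediction to LoD changes.

```python
# Hypothetical frequency levels (Hz) of a DVFS-capable mobile CPU
FREQS = [200e6, 400e6, 600e6, 800e6, 1000e6]
TARGET_FRAME_TIME = 1.0 / 25          # 25 FPS budget

class FrameScheduler:
    """Predict the next frame's cycle cost with an exponential moving
    average and pick the lowest frequency that meets the frame budget."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.predicted_cycles = FREQS[-1] * TARGET_FRAME_TIME

    def next_frequency(self):
        for f in FREQS:               # lowest adequate frequency saves energy
            if self.predicted_cycles / f <= TARGET_FRAME_TIME:
                return f
        return FREQS[-1]              # workload exceeds budget; run flat out

    def observe(self, measured_cycles):
        # EMA smooths spikes from LoD/visibility changes between frames
        self.predicted_cycles = (self.alpha * measured_cycles +
                                 (1 - self.alpha) * self.predicted_cycles)
```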


international symposium on visual computing | 2014

Automatic Multi-light White Balance Using Illumination Gradients and Color Space Projection

Clifford Lindsay; Emmanuel Agu

White balance algorithms try to remove color casts in images caused by non-white scene illuminants, transforming the images to appear as if they were taken under a canonical light source. We propose a new white balance algorithm for scenes with multiple lights, which requires determining the colors of all scene illuminants and their relative contributions to each image pixel. Prior work on multi-illuminant white balance either required user input or made restrictive assumptions. We identify light colors as areas of maximum gradients in the indirect lighting component. The colors of these maximal points are clustered in RGB space in order to estimate the distinct global light colors. Once the light colors are determined, we project each image pixel in RGB space to determine the relative contribution of each distinct light color to that pixel. Our white balance method for images with multiple light sources is fully automated and amenable to hardware implementation.
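
A rough sketch of the two estimation steps, light-color clustering and per-pixel projection, using k-means and nonnegative least squares as stand-ins for the paper's procedures; the sample colors and two-light setup are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import nnls

def estimate_light_colors(maxima_rgb, n_lights=2):
    """Cluster RGB samples taken at illumination-gradient maxima; the
    cluster centers stand in for the distinct global light colors."""
    km = KMeans(n_clusters=n_lights, n_init=10).fit(maxima_rgb)
    return km.cluster_centers_

def light_contributions(pixel_rgb, light_colors):
    """Project one pixel onto the light colors: solve for nonnegative
    weights w such that light_colors.T @ w approximates the pixel."""
    w, _ = nnls(light_colors.T, pixel_rgb)
    return w / max(w.sum(), 1e-8)     # relative contribution per light

# Hypothetical gradient-maxima samples under a warm and a cool light
samples = np.array([[0.90, 0.60, 0.30], [0.88, 0.62, 0.28],
                    [0.30, 0.50, 0.90], [0.28, 0.52, 0.91]])
lights = estimate_light_colors(samples, n_lights=2)
print(light_contributions(np.array([0.6, 0.55, 0.6]), lights))
```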


network computing and applications | 2009

Unequal Error Protection (UEP) for Wavelet-Based Wireless 3D Mesh Transmission

Fan Wu; Emmanuel Agu; Clifford Lindsay; Chung-Han Chen

The recent popularity of networked graphics applications, such as distributed military simulators and online games, has increased the need to transmit large 3D meshes and textures over wireless networks. To speed up large mesh transmission over low-bandwidth wireless links, we use a wavelet-based technique that aggressively compresses large meshes and enables progressive (piece-wise) transmission. Using wavelets, a server only needs to send the full connectivity information of a small base mesh along with wavelet coefficients that refine it, saving memory and bandwidth. To mitigate packet losses caused by high wireless error rates, we propose a novel Forward Error Correction (FEC) scheme based on Unequal Error Protection (UEP). UEP adds more error correction bits to regions of the mesh that have more detail. Our work uses UEP to make wavelet-encoded meshes more resilient to wireless errors. Experimental results show that our proposed UEP scheme is more error-resilient than No Error Protection (NEP) and Equal Error Protection (EEP) as the packet loss rate increases, achieving 50% lower relative error and maintaining the decoded mesh structure. Our scheme can be integrated into future mobile devices and will be useful in application areas such as military simulators on mobile devices.
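
The core of UEP is how the parity budget is split. Below is a toy Python sketch, assuming parity bits are allocated in proportion to each region's wavelet-coefficient energy; the energies, budget, and minimum-protection floor are illustrative, not from the paper.

```python
import numpy as np

def uep_allocate(region_energy, total_parity_bits, min_bits=8):
    """Split a parity-bit budget across mesh regions in proportion to
    their wavelet-coefficient energy, so detailed regions get stronger
    protection (the essence of UEP versus equal protection)."""
    e = np.asarray(region_energy, dtype=float)
    share = e / e.sum()
    bits = np.maximum(min_bits, np.round(share * total_parity_bits))
    # A real codec would renormalize so the total stays within budget
    # after the floor and rounding are applied.
    return bits.astype(int)

# Hypothetical energies: region 2 carries the most geometric detail
print(uep_allocate([1.0, 0.5, 4.0, 0.2], total_parity_bits=1024))
```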


Proceedings of SPIE | 2012

Interactive generation of digital anthropomorphic phantoms from XCAT shape priors

Clifford Lindsay; M. A. Gennert; Caitlin M. Connolly; Arda Konik; Paul Dasari; W. P. Segars; M.A. King

In SPECT imaging, patient respiratory and body motion can cause artifacts that degrade image quality. Developing and evaluating motion correction algorithms is facilitated by simulation studies in which a numerical phantom and its motion are precisely known and from which image data can be produced. Previous techniques to test motion correction methods generated XCAT phantoms modeled from MRI studies and motion tracking, but required manually segmenting the major structures within the whole upper torso, which can take 8 hours to perform. Additionally, segmenting in two-dimensional MRI slices and interpolating into three-dimensional shapes can lead to appreciable interpolation artifacts, as well as requiring expert knowledge of human anatomy in order to identify the regions to be segmented within each slice. We propose a new method that mitigates the long manual segmentation times for the upper torso. Our interactive method requires that a user provide only an approximate alignment of the base anatomical shapes from the XCAT model with the MRI data. Organ boundaries from the aligned XCAT model are warped with displacement fields generated by registering a baseline MR image to MR images acquired during pre-determined motions, which amounts to automated segmentation of each organ of interest. We show that the quality of segmentation equals that of expert manual segmentation, does not require a user who is an expert in anatomy, and can be completed in minutes rather than hours. In some instances, due to interpolation artifacts, our method can generate higher-quality models than manual segmentation.
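
A small sketch of the warping step, assuming a dense 2D displacement field obtained from registration and organ-boundary points in voxel coordinates; the field, contour, and array shapes are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_contour(points_yx, disp_y, disp_x):
    """Warp 2D organ-boundary points with a dense displacement field
    (e.g., from registering the baseline MR slice to a motion state).
    disp_y/disp_x give the per-voxel displacement along each axis."""
    ys, xs = points_yx[:, 0], points_yx[:, 1]
    # Sample the field at (possibly sub-voxel) contour locations
    dy = map_coordinates(disp_y, [ys, xs], order=1)
    dx = map_coordinates(disp_x, [ys, xs], order=1)
    return points_yx + np.stack([dy, dx], axis=1)

# Hypothetical field: uniform 2-voxel shift in x, as from a side roll
field_y = np.zeros((128, 128))
field_x = np.full((128, 128), 2.0)
contour = np.array([[40.5, 60.2], [41.0, 61.0], [42.3, 61.8]])
print(warp_contour(contour, field_y, field_x))
```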


International Journal of Handheld Computing Research | 2012

Imperceptible Simplification on Mobile Displays

Fan Wu; Emmanuel Agu; Clifford Lindsay; Chung-Han Chen

Graphics on mobile devices is becoming popular because untethered computing is convenient and makes workers more productive. Mobile displays have a wide range of resolutions that affect the scene Level-of-Detail (LoD) that users can perceive: smaller displays show less detail, so lower-resolution meshes and textures are acceptable. Mobile devices frequently have limited battery energy, low memory and little disk space. To minimize wasting limited system resources, the authors render mobile graphics scenes at the lowest LoD at which users do not perceive distortion due to simplification. This LoD is called the Point of Imperceptibility (PoI). Increasing the mesh or texture resolution beyond the PoI wastes valuable system resources without increasing perceivable visual realism. The authors propose a perceptual metric that can easily be evaluated to identify the LoD corresponding to a target mobile display's PoI and accounts for object geometry, lighting and shading. Previous work did not directly compute changes in the PoI due to target screen resolution. The perceptual metric generates a screen-dependent Pareto distribution with a knee point that corresponds to the PoI. We employ wavelets for simplification, which gives direct access to the mesh undulation frequency that we then use to parameterize the Contrast Sensitivity Function (CSF) curve.
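
One common way to locate the knee of such a curve, used below as a hedged stand-in for the paper's procedure, is the point of maximum distance to the chord joining the curve's endpoints; the LoD levels and metric values are fabricated for illustration.

```python
import numpy as np

def knee_point(lod_levels, metric_values):
    """Find the knee of a monotone quality-vs-LoD curve as the point
    farthest from the chord between its endpoints; the knee marks the
    Point of Imperceptibility (PoI) for the target display."""
    x = np.asarray(lod_levels, float)
    y = np.asarray(metric_values, float)
    # Normalize so both axes contribute equally
    xn = (x - x[0]) / (x[-1] - x[0])
    yn = (y - y[0]) / (y[-1] - y[0])
    # Distance of each point to the line from (0, 0) to (1, 1)
    d = np.abs(xn - yn) / np.sqrt(2)
    return int(np.argmax(d))

# Hypothetical perceptual-metric values over 8 mesh LoDs
lods = np.arange(8)
quality = np.array([0.2, 0.55, 0.75, 0.87, 0.93, 0.95, 0.96, 0.965])
print(knee_point(lods, quality))   # index of the PoI LoD
```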


international symposium on visual computing | 2014

3D Previsualization Using a Computational Photography Camera

Clifford Lindsay; Emmanuel Agu

During movie production, movie directors use previsualization tools to convey the movie visuals as they see them in their mind's eye. Traditional methods of previsualization include hand-drawn sketches, storyboards and still photographs. Recently, video game engines have been used for previsualization so that once the movie set is modeled, scene lighting, geometry, textures and various scene elements can be changed interactively and the effects of many potential changes can be previewed quickly. Using game engines for previsualization, however, requires artists to manually model the movie set to create a digital version, which is expensive. We envision that a computational photography camera can be used to capture images of a physical set from which a model of the scene can be automatically generated, so that a wide range of possible changes, including scene geometry and textures, can be explored interactively and previewed on-set. Since this vision is broad, we focus here on an initial prototype (a computational photography camera and previsualization algorithms) that enables scene lighting to be captured, inferred, manipulated and new lights applied (relighting). Evaluations of our light previsualization prototype show low photometric error rates and encouraging feedback from experts.
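
As a deliberately simplified illustration of relighting, the sketch below divides out an inferred dominant light color and applies a new one, assuming a diffuse scene; the prototype's actual capture and inference pipeline is far richer than this.

```python
import numpy as np

def relight(image, old_light_rgb, new_light_rgb):
    """Toy diffuse relighting: divide out the inferred scene light and
    re-apply a new one. A stand-in for the richer light capture and
    manipulation in the previsualization prototype."""
    old = np.asarray(old_light_rgb, float)
    new = np.asarray(new_light_rgb, float)
    return np.clip(image / np.maximum(old, 1e-6) * new, 0.0, 1.0)

# Hypothetical set photo lit by a warm tungsten source, previewed
# under a cooler daylight source
img = np.random.rand(4, 4, 3)
preview = relight(img, [1.0, 0.85, 0.6], [0.8, 0.9, 1.0])
```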


Proceedings of SPIE | 2014

Separating complex compound patient motion tracking data using independent component analysis

Clifford Lindsay; Karen Johnson; M.A. King

In SPECT imaging, motion from respiration and body movement can reduce image quality by introducing motion-related artifacts. A minimally invasive way to track patient motion is to attach external markers to the patient's body and record their locations throughout the imaging study. If a patient exhibits multiple movements simultaneously, such as respiration and body movement, each marker's location data will contain a mixture of these motions. Decomposing this complex compound motion into separate, simpler motions allows a more robust motion correction targeted at each specific type of motion. Most motion tracking and correction techniques target a single type of motion and either ignore compound motion or treat it as noise. The few methods that account for compound motion fail to disambiguate superposition within it (e.g., inspiration combined with body movement in the anterior/posterior direction). We propose a new method for decomposing complex compound patient motion using an unsupervised learning technique called Independent Component Analysis (ICA). Our method can automatically detect and separate different motions while preserving nuanced features of the motion, without the drawbacks of previous methods. Our main contributions are a method for addressing multiple compound motions, the novel use of ICA to detect and separate mixed independent motions, and the generation of motion transforms with 12 degrees of freedom to account for twisting and shearing. We show that our method works with clinical datasets and can be employed to improve motion correction in single photon emission computed tomography (SPECT) images.
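
A minimal sketch of the ICA step, using scikit-learn's FastICA on synthetic marker traces that mix a respiratory signal with a step-like body shift; the traces and mixing weights are fabricated for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

def separate_motions(marker_traces, n_components=2):
    """Unmix compound marker motion into independent sources, e.g. a
    quasi-periodic respiratory component and an aperiodic body shift."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(marker_traces)  # (n_samples, n_sources)
    return sources, ica.mixing_                 # per-marker mixing weights

# Hypothetical traces: 3 markers observing respiration + a body shift
t = np.linspace(0, 120, 1200)
resp = np.sin(2 * np.pi * t / 5)                # ~5 s breathing cycle
body = (t > 60).astype(float)                   # step-like body movement
X = np.column_stack([0.8 * resp + 0.2 * body,
                     0.5 * resp + 0.9 * body,
                     0.9 * resp + 0.1 * body])
S, A = separate_motions(X, n_components=2)
```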


Proceedings of SPIE | 2013

Automatic generation of digital anthropomorphic phantoms from simulated MRI acquisitions

Clifford Lindsay; Michael A. Gennert; A. Kӧnik; Paul Dasari; M.A. King

In SPECT imaging, motion from patient respiration and body movement can introduce image artifacts that may reduce the diagnostic quality of the images. Simulation studies using numerical phantoms with precisely known motion can help to develop and evaluate motion correction algorithms. Previous methods for evaluating motion correction algorithms used either manual or semi-automated segmentation of MRI studies to produce patient models in the form of XCAT phantoms, from which one calculates the transformation and deformation between the MRI study and the patient model. Both manual and semi-automated methods of XCAT phantom generation require expertise in human anatomy, with the semi-automated method requiring up to 30 minutes and the manual method up to eight hours. Although faster than manual segmentation, the semi-automated method still requires a significant amount of time, is not replicable, and is subject to errors due to the difficulty of aligning and deforming anatomical shapes in 3D. We propose a new method for matching patient models to MRI that extends the previous semi-automated method by eliminating the manual non-rigid transformation. Our method requires no user supervision and therefore does not require expert knowledge of human anatomy to align the NURBS to anatomical structures in the MR image. Our contribution is employing the SIMRI MRI simulator to convert the XCAT NURBS to a voxel-based representation that is amenable to automatic non-rigid registration; registration is then used to transform and deform the NURBS to match the anatomy in the MR image. We show that our automated method generates XCAT phantoms more robustly and significantly faster than the previous semi-automated method.
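
A toy stand-in for the registration stage: intensity-based alignment of a voxelized shape to a target image, reduced here to pure translation optimized with Powell's method. The volumes and cost function are illustrative; the paper performs full non-rigid registration against SIMRI-simulated MR volumes.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def register_translation(fixed, moving):
    """Find the translation that best aligns a voxelized phantom to a
    target image by minimizing mean squared intensity difference."""
    def cost(t):
        warped = nd_shift(moving, t, order=1, mode="nearest")
        return np.mean((fixed - warped) ** 2)
    res = minimize(cost, x0=np.zeros(fixed.ndim), method="Powell")
    return res.x

# Hypothetical 2D slices: the phantom is offset by (3, -2) voxels
fixed = np.zeros((64, 64))
fixed[20:40, 25:45] = 1.0
moving = nd_shift(fixed, (-3, 2), order=0)
print(register_translation(fixed, moving))   # ~ (3, -2)
```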

Collaboration


Dive into Clifford Lindsay's collaborations.

Top Co-Authors

Emmanuel Agu (Worcester Polytechnic Institute)
Fan Wu (Tuskegee University)
Michael A. King (University of Massachusetts Medical School)
Paul Dasari (University of Massachusetts Medical School)
Arda Konik (University of Massachusetts Medical School)
Caitlin M. Connolly (Beth Israel Deaconess Medical Center)
Karen Johnson (University of Massachusetts Medical School)
Michael A. Gennert (Worcester Polytechnic Institute)