Carmen J. Carrano
Lawrence Livermore National Laboratory
Publications
Featured research published by Carmen J. Carrano.
High-Resolution Wavefront Control: Methods, Devices, and Applications IV | 2002
Carmen J. Carrano
Atmospheric aberrations reduce the resolution and contrast in surveillance images recorded over horizontal or slant paths. This paper describes our recent horizontal and slant-path imaging experiments of extended scenes as well as the results obtained using speckle imaging. The experiments were performed with an 8-inch diameter telescope placed on either a rooftop or hillside and cover ranges of interest from 0.5 km up to 10 km. The scenery includes resolution targets, people, vehicles, and other structures. The improvement in image quality using speckle imaging is dramatic in many cases, and depends significantly upon the atmospheric conditions. We quantify resolution improvement through modulation transfer function measurement comparisons.
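As a rough illustration of the kind of modulation transfer function comparison described above, the sketch below estimates a relative MTF as the radially averaged magnitude of an image's Fourier spectrum; this is a simplified stand-in (function names and the normalization are illustrative assumptions, not the paper's measurement procedure, which used resolution targets).

```python
import numpy as np

def radial_mtf(image):
    """Relative MTF estimate: radially averaged magnitude of the
    image's 2-D Fourier spectrum (illustrative proxy; a real MTF
    measurement would use a known resolution target)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    spec /= spec.max()                       # normalize the DC peak to 1
    ny, nx = spec.shape
    y, x = np.indices(spec.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=spec.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)      # mean magnitude per radius bin

# Comparing raw vs. speckle-processed frames of the same scene:
# mtf_gain = radial_mtf(processed) / radial_mtf(raw)
```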
ACM Multimedia | 2014
Jaeyoung Choi; Bart Thomee; Gerald Friedland; Liangliang Cao; Karl Ni; Damian Borth; Benjamin Elizalde; Luke R. Gottlieb; Carmen J. Carrano; Roger A. Pearce; Douglas N. Poland
The Placing Task is a yearly challenge offered by the MediaEval Multimedia Benchmarking Initiative that requires participants to develop algorithms that automatically predict the geo-location of social media videos and images. We introduce the new standardized web-scale geo-tagged dataset developed for Placing Task 2014, which contains 5.5 million photos and 35,000 videos. This standardized benchmark with a large persistent dataset allows the research community to easily evaluate new algorithms and to analyze their performance with respect to state-of-the-art approaches. We discuss the characteristics of this year's Placing Task along with a description of the new dataset components and how they were collected.
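The Placing Task scores each prediction by its geodesic distance from the ground-truth coordinates. As a small, hedged sketch (the benchmark's exact scoring thresholds are not reproduced here), the haversine great-circle distance below is the standard way to compute such an error:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon)
    points given in degrees, via the haversine formula."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Error between a predicted and a true geo-tag, e.g. San Francisco vs. Los Angeles:
# haversine_km(37.77, -122.42, 34.05, -118.24)  # ~559 km
```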
Advanced Wavefront Control: Methods, Devices, and Applications | 2003
Carmen J. Carrano
We have previously demonstrated and reported on the use of sub-field speckle processing for the enhancement of both near and far-range surveillance imagery of people and vehicles that have been degraded by atmospheric turbulence. We have obtained near diffraction-limited imagery in many cases and have shown dramatic image quality improvement in other cases. As it is possible to perform only a limited number of experiments in a limited number of conditions, we have developed a computer simulation capability to aid in the prediction of imaging performance in a wider variation of conditions. Our simulation capability includes the ability to model extended scenes in distributed turbulence. Of great interest is the effect of the isoplanatic angle on speckle imaging performance as well as on single deformable mirror and multiconjugate adaptive optics system performance. These angles are typically quite small over horizontal and slant paths. This paper will begin to explore these issues which are important for predicting the performance of both passive and active horizontal and slant-path imaging systems.
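As a hedged illustration of one building block such a turbulence simulation typically rests on (not the paper's actual simulation code), the sketch below generates a single Kolmogorov phase screen with the common FFT method; stacking several screens along the path is one way to model distributed turbulence. The normalization convention and parameter values are assumptions.

```python
import numpy as np

def kolmogorov_phase_screen(n, pixel_scale, r0, seed=None):
    """Single n x n Kolmogorov phase screen (radians) via the FFT method.

    pixel_scale : metres per pixel
    r0          : Fried parameter in metres
    White complex noise is shaped by the Kolmogorov phase power
    spectrum ~ 0.023 r0^(-5/3) f^(-11/3).  Normalization conventions
    vary between references; this one is illustrative.
    """
    rng = np.random.default_rng(seed)
    fx = np.fft.fftfreq(n, d=pixel_scale)
    f = np.hypot(*np.meshgrid(fx, fx))
    f[0, 0] = 1.0 / (n * pixel_scale)          # avoid the singularity at f = 0
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    df = 1.0 / (n * pixel_scale)               # frequency-grid spacing
    noise = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return screen.real

# e.g. a 512 x 512 screen at 2 cm/pixel with r0 = 5 cm (plausible for a horizontal path):
# phase = kolmogorov_phase_screen(512, 0.02, 0.05, seed=0)
```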
High-power lasers and applications | 2003
Carmen J. Carrano
The difficulty in terrestrial imaging over long horizontal or slant paths is that atmospheric aberrations and distortions reduce the resolution and contrast of images recorded at high resolution. This paper describes the problem of horizontal-path imaging, briefly covers various methods for imaging over horizontal paths, and then describes the speckle imaging method actively being pursued at LLNL. We review some closer-range (1-3 km) imagery of people that we have already published, as well as new results for vehicles obtained over longer slant-range paths greater than 20 km.
ACM Multimedia | 2015
Julia Bernd; Damian Borth; Carmen J. Carrano; Jaeyoung Choi; Benjamin Elizalde; Gerald Friedland; Luke R. Gottlieb; Karl Ni; Roger A. Pearce; Douglas N. Poland; Khalid Ashraf; David A. Shamma; Bart Thomee
The publication of the Yahoo Flickr Creative Commons 100 Million dataset (YFCC100M)--to date the largest open-access collection of photos and videos--has provided a unique opportunity to stimulate new research in multimedia analysis and retrieval. To make the YFCC100M even more valuable, we have started working towards supplementing it with a comprehensive set of precomputed features and high-quality ground truth annotations. As part of our efforts, we are releasing the YLI feature corpus, as well as the YLI-GEO and YLI-MED annotation subsets. Under the Multimedia Commons Project (MMCP), we are currently laying the groundwork for a common platform and framework around the YFCC100M that (i) facilitates researchers in contributing additional features and annotations, (ii) supports experimentation on the dataset, and (iii) enables sharing of obtained results. This paper describes the YLI features and annotations released thus far, and sketches our vision for the MMCP.
Proceedings of SPIE | 2009
Petersen F. Curt; Michael R. Bodnar; Fernando E. Ortiz; Carmen J. Carrano; Eric J. Kelmelis
While imaging over long distances is critical to a number of security and defense applications, such as homeland security and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent atmosphere in the path between the region under observation and the imaging system, which can severely degrade captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image information; however, the computational complexity of such approaches has prohibited real-time deployment and hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and manufactured an embedded image processing system based on commodity hardware which can compensate for these atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop PC, in a form-factor that is compact, low-power, and field-deployable.
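The complete average-bispectrum reconstruction is involved; as a hedged sketch of the frame-averaging at its core (not the embedded implementation described above), the code below accumulates the mean power spectrum over short-exposure frames to estimate object Fourier amplitudes in the Labeyrie speckle-interferometry style. Phase recovery from the averaged bispectrum, and the FPGA mapping, are omitted; the reference-PSF calibration shown is an illustrative assumption.

```python
import numpy as np

def average_power_spectrum(frames):
    """Mean power spectrum of a stack of short-exposure frames
    (frames has shape [n_frames, ny, nx])."""
    acc = np.zeros(frames.shape[1:], dtype=float)
    for frame in frames:
        acc += np.abs(np.fft.fft2(frame)) ** 2
    return acc / len(frames)

def speckle_amplitudes(object_frames, reference_frames, eps=1e-12):
    """Labeyrie-style estimate of object Fourier amplitudes: divide the
    object's mean power spectrum by that of a reference point source
    observed under similar seeing, then take the square root."""
    obj = average_power_spectrum(object_frames)
    ref = average_power_spectrum(reference_frames)
    return np.sqrt(np.maximum(obj / (ref + eps), 0.0))

# Fourier phases would come from the averaged bispectrum; combining them
# with these amplitudes and inverse-transforming yields the restored image.
```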
Proceedings of SPIE | 2009
Carmen J. Carrano
Overhead persistent surveillance systems are becoming more capable at acquiring wide-field image sequences over long time-spans. The need to exploit this data is becoming ever greater. The ability to track a single vehicle of interest, or to track all the observable vehicles, which may number in the thousands, over large, cluttered regions for as long as they persist in the imagery, either in real time or quickly on demand, is very desirable. With this ability we can begin to answer a number of interesting questions, such as: what are the normal traffic patterns in a particular region, or where did that truck come from? There are many challenges associated with processing this type of data, some of which we address in this paper. Wide-field image sequences are very large, with many thousands of pixels on a side, and are characterized by lower resolutions (e.g. worse than 0.5 meters/pixel) and lower frame rates (e.g. a few Hz or less). The objects in the scenery vary in size, density, and contrast with respect to the background, while the background itself provides a number of clutter sources, both man-made and natural. We describe our current implementation of an ultrascale-capable multiple-vehicle tracking algorithm for overhead persistent surveillance imagery, and discuss the tracking and timing performance of the currently implemented algorithm, which uses grayscale electro-optical image sequences alone for track-segment generation.
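As a minimal, hedged sketch of one step a track-segment generator of this kind builds on (not the algorithm implemented in this paper), the code below does greedy nearest-neighbour association of per-frame detections to existing tracks; real trackers add motion prediction, gating, and track initiation/termination logic.

```python
import numpy as np

def associate(tracks, detections, max_dist=5.0):
    """Greedily attach each track to its nearest unmatched detection.

    tracks     : list of dicts, each with a 'positions' list of (x, y)
    detections : list of (x, y) pixel positions from the current frame
    max_dist   : association gate in pixels (illustrative value)
    """
    unmatched = list(range(len(detections)))
    for trk in tracks:
        if not unmatched:
            break
        last = np.asarray(trk["positions"][-1])
        dists = [np.linalg.norm(last - np.asarray(detections[j])) for j in unmatched]
        k = int(np.argmin(dists))
        if dists[k] <= max_dist:
            trk["positions"].append(detections[unmatched.pop(k)])
    # any detection left unmatched starts a new track segment
    tracks.extend({"positions": [detections[j]]} for j in unmatched)
    return tracks
```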
International Symposium on Optical Science and Technology | 2002
Bruce A. Macintosh; Scot S. Olivier; Brian J. Bauman; James M. Brase; Emily Carr; Carmen J. Carrano; Donald T. Gavel; Claire E. Max; Jennifer Patience
Direct detection of photons emitted or reflected by an extrasolar planet is an extremely difficult but extremely exciting application of adaptive optics. Typical contrast levels for an extrasolar planet would be 10^9 - Jupiter is a billion times fainter than the Sun. Current adaptive optics systems can only achieve contrast levels of 10^6, but so-called extreme adaptive optics systems with 10^4 - 10^5 degrees of freedom could potentially detect extrasolar planets. We explore the scaling laws defining the performance of these systems, first set out by Angel (1994), and derive a different definition of an optimal system. Our sensitivity predictions are somewhat more pessimistic than the original paper, due largely to slow decorrelation timescales for some noise sources, though choosing to site an ExAO system at a location with exceptional r0 (e.g. Mauna Kea) can offset this. We also explore the effects of segment aberrations in a Keck-like telescope on ExAO; although the effects are significant, they can be mitigated through Lyot coronagraphy.
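A back-of-the-envelope check of the quoted 10^9 contrast figure, using the standard reflected-light approximation contrast ≈ albedo x (R_planet / a)^2 (a simplification for illustration, not the scaling-law analysis in the paper):

```python
# Reflected-light contrast for a Jupiter analogue (illustrative values).
albedo = 0.5          # Jupiter's geometric albedo, roughly
r_planet = 7.15e7     # Jupiter radius, m
a_orbit = 7.78e11     # Jupiter-Sun separation, m

contrast = albedo * (r_planet / a_orbit) ** 2
print(f"planet/star flux ratio ~ {contrast:.1e}")   # ~4e-9, i.e. of order 10^-9
```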
Astronomical Telescopes and Instrumentation | 2000
Erik M. Johansson; D. Scott Acton; Jong R. An; Kenneth Avicola; Barton V. Beeman; James M. Brase; Carmen J. Carrano; J. Gathright; Donald T. Gavel; Randall L. Hurd; Olivier Lai; William Lupton; Bruce A. Macintosh; Claire E. Max; Scot S. Olivier; J. C. Shelton; Paul J. Stomski; Kevin Tsubota; Kenneth E. Waltjen; J. Watson; Peter L. Wizinowich
The wavefront controller for the Keck Observatory AO system consists of two separate real-time control loops: a tip-tilt control loop to remove tilt from the incoming wavefront, and a deformable mirror control loop to remove higher-order aberrations. In this paper, we describe these control loops and analyze their performance using diagnostic data acquired during the integration and testing of the AO system on the telescope. Disturbance rejection curves for the controllers are calculated from the experimental data and compared to theory. The residual wavefront errors due to control loop bandwidth are also calculated from the data, and possible improvements to the controller performance are discussed.
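As a hedged sketch of a disturbance-rejection calculation of this kind (an idealized integral controller with a pure loop delay, not the Keck wavefront controller itself; the sample rate, gain, and delay are assumed values):

```python
import numpy as np

fs = 670.0        # loop sample rate, Hz (assumed)
gain = 0.5        # integrator gain (assumed)
delay = 1.5       # total loop delay in frames (assumed)

f = np.logspace(-1, np.log10(fs / 2), 500)        # temporal frequencies, Hz
z = np.exp(2j * np.pi * f / fs)
G = gain * z ** (-delay) / (1.0 - z ** (-1))      # open-loop transfer function
rejection = np.abs(1.0 / (1.0 + G))               # disturbance rejection |1/(1+G)|

# Disturbances below the 0 dB crossover are attenuated by the loop; comparing a
# curve like this to rejection measured from telemetry is the comparison to theory.
```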
SPIE's International Symposium on Optical Science, Engineering, and Instrumentation | 1999
Donald T. Gavel; Brian J. Bauman; Eugene Warren Campbell; Carmen J. Carrano; Scot S. Olivier
Any adaptive optics system must be calibrated with respect to internal aberrations in order for it to properly correct the starlight before it enters the science camera. Typical internal calibration consists of using a point-source stimulus at the input to the AO system and recording the wavefront at the output. Two methods for such calibration have been implemented on the adaptive optics system at Lick Observatory. The first technique, Phase Diversity, consists of taking out-of-focus images with the science camera and using an iterative algorithm to estimate the system wavefront. The second technique uses a newly installed instrument, the Phase-Shifting Diffraction Interferometer, which promises very high-accuracy wavefront measurements. During observing campaigns in 1998, both of these methods were used for initial calibrations. In this paper we present results and compare the two methods with regard to accuracy and practical aspects.
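Phase diversity rests on a simple forward model: the in-focus and deliberately defocused images share one unknown pupil phase, and the iterative algorithm searches for the phase that reproduces both. A minimal, hedged sketch of that forward model (function names and the defocus parameterization are illustrative, not the Lick implementation):

```python
import numpy as np

def psf_from_phase(pupil, phase):
    """PSF from a pupil amplitude mask and a phase map (radians):
    |FFT{ pupil * exp(i * phase) }|^2, normalized to unit sum."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
    return psf / psf.sum()

def defocus_map(n, waves_pv):
    """Quadratic defocus phase on an n x n grid with the given
    peak-to-valley amplitude in waves (illustrative parameterization)."""
    y, x = np.indices((n, n)) / (n - 1) - 0.5
    r2 = x ** 2 + y ** 2
    return 2 * np.pi * waves_pv * r2 / r2.max()

# Phase diversity fits an unknown 'phase' such that psf_from_phase(pupil, phase)
# and psf_from_phase(pupil, phase + defocus_map(n, waves_pv)) jointly match the
# measured in-focus and out-of-focus science-camera images.
```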