Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where William Ross is active.

Publication


Featured research published by William Ross.


Attention Perception & Psychophysics | 2000

Lightness from contrast: A selective integration model

William Ross; Luiz Pessoa

As has been observed by Wallach (1948), perceived lightness is proportional to the ratio between the luminances of adjacent regions in simple disk-annulus or bipartite scenes. This psychophysical finding resonates with neurophysiological evidence that retinal mechanisms of receptor adaptation and lateral inhibition transform the incoming illuminance array into local measures of luminance contrast. In many scenic configurations, however, the perceived lightness of a region is not proportional to its ratio with immediately adjacent regions. In a particularly striking example of this phenomenon, called White’s illusion, the relationship between the perceived lightnesses of two gray regions is the opposite of what is predicted by local edge ratios or contrasts. This paper offers a new treatment of how local measures of luminance contrast can be selectively integrated to simulate lightness percepts in a wide range of image configurations. Our approach builds on a tradition of edge integration models (Horn, 1974; Land & McCann, 1971) and contrast/filling-in models (Cohen & Grossberg, 1984; Gerrits & Vendrik, 1970; Grossberg & Mingolla, 1985a, 1985b). Our selective integration model (SIM) extends the explanatory power of previous models, allowing simulation of a number of phenomena, including White’s effect, the Benary cross, and shading and transparency effects reported by Adelson (1993), as well as aspects of motion, depth, haploscopic, and Gelb induced contrast effects. We also include an independently derived variant of a recent depthful version of White’s illusion, showing that our model can inspire new stimuli.
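The ratio rule and the edge-integration tradition this abstract builds on can be illustrated with a toy sketch (the function and all values are illustrative assumptions, not from the paper): lightness differences are taken as log luminance ratios at edges and accumulated along a path, so relative lightness depends on ratios rather than absolute luminance.

```python
import math

def edge_integrated_lightness(luminances, anchor=1.0):
    """Toy edge-integration (in the spirit of Land & McCann, 1971):
    each edge contributes the log luminance ratio of the regions it
    separates, summed along a path from an anchored first region."""
    lightness = [anchor]
    for prev, curr in zip(luminances, luminances[1:]):
        # remote edges influence a region's lightness via the running sum
        lightness.append(lightness[-1] + math.log(curr / prev))
    return lightness

# disk-annulus example: doubling both luminances preserves the ratio,
# and hence the relative lightness (Wallach's ratio rule)
a = edge_integrated_lightness([10.0, 40.0])
b = edge_integrated_lightness([20.0, 80.0])
```

Here the lightness difference within each pair is identical, since only the 4:1 luminance ratio matters; the selective integration in the paper goes further by weighting which edges get integrated.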


Proceedings of SPIE | 1998

Real-time fusion of low-light CCD and uncooled IR imagery for color night vision

Mario Aguilar; David A. Fay; William Ross; Allen M. Waxman; David B. Ireland; Joseph P. Racamato

We present an approach to color night vision through fusion of information derived from visible and thermal infrared sensors. Building on the work reported at SPIE in 1996 and 1997, we show how opponent-color processing and center-surround shunting neural networks can achieve informative multi-band image fusion. In particular, by emulating spatial and color processing in the retina, we demonstrate an effective strategy for multi-sensor color night vision. We have developed a real-time visible/IR fusion processor from multiple C80 DSP chips using commercially available Matrox Genesis boards, which we use in conjunction with the Lincoln Lab low-light CCD and a Raytheon TI Systems uncooled IR camera. Limited human factors testing of visible/IR fusion is presented showing improvements in human performance using our color fused imagery relative to alternative fusion strategies or either single image modality alone. We conclude that fusion architectures that match opponent-sensor contrast to human opponent-color processing will yield fused image products of high image quality and utility.
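The center-surround shunting networks referred to here follow a standard membrane (shunting) equation whose steady state normalizes an excitatory center input against an inhibitory surround, producing a bounded contrast signal. A minimal sketch, with parameter values assumed for illustration rather than taken from the paper:

```python
def shunting_steady_state(center, surround, A=1.0, B=1.0, D=1.0):
    """Steady state of the center-surround shunting equation
        dx/dt = -A*x + (B - x)*center - (D + x)*surround.
    Setting dx/dt = 0 gives x = (B*center - D*surround) / (A + center + surround):
    a ratio-normalized opponent-contrast response bounded in [-D, B]."""
    return (B * center - D * surround) / (A + center + surround)

# opponent contrast between two sensor channels, e.g. visible vs. IR
response = shunting_steady_state(center=10.0, surround=2.0)
```

Because the inputs appear in the denominator, the response adapts to overall input level, which is what makes this dynamic useful for fusing sensors with very different dynamic ranges.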


International Conference on Information Fusion | 2000

Fused multi-sensor image mining for feature foundation data

William W. Streilein; Allen M. Waxman; William Ross; Fang Liu; Michael Braun; David A. Fay; Paul Harmon; Chung Hye Read

Presents work on methods and user interfaces developed for interactive mining for feature foundation data (e.g. roads, rivers, orchards, forests) in fused multi-sensor imagery. A suite of client/server-based tools, including the Site Mining Tool and Image Map Interface, enable image analysts (IAs) to mine multi-sensor imagery for feature foundation data and to share trainable search agents, search results and image annotations with other IAs connected via a computer network. We discuss extensions to the fuzzy ARTMAP neural network which enable the Site Mining Tool to report confidence measures for detected search targets and to automatically select the critical features in the input vector which are most relevant for particular searches. Examples of the use of the Site Mining Tool and Image Map Interface are shown for an electro-optical (EO), IR and SAR data set derived from Landsat and Radarsat imagery, as well as multispectral (4-band) and hyperspectral (224-band) data sets. In addition, we present an architecture for the enhancement of hyperspectral fused imagery that utilizes internal category activity maps of a trained fuzzy ARTMAP network to enhance the visualization of targets in the color-fused imagery.
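The fuzzy ARTMAP search agents mentioned here rate candidate categories with a choice function and accept them only if a vigilance (match) test passes. A minimal sketch of those two standard fuzzy ART computations, with made-up weights and inputs; the paper's confidence-measure and feature-selection extensions are not shown:

```python
import numpy as np

def fuzzy_art_choice(inputs, weights, alpha=0.001):
    """Fuzzy ART choice function T_j = |I ^ w_j| / (alpha + |w_j|),
    where ^ is the component-wise minimum (fuzzy AND) and |.| the L1 norm."""
    matches = np.minimum(inputs, weights)          # fuzzy AND with each category
    return matches.sum(axis=1) / (alpha + weights.sum(axis=1))

def vigilance_passes(inputs, weight, rho=0.8):
    """Match criterion: accept the chosen category only if |I ^ w| / |I| >= rho."""
    return np.minimum(inputs, weight).sum() / inputs.sum() >= rho

# complement-coded pixel signature (a, 1 - a), as in standard fuzzy ARTMAP
I = np.array([0.7, 0.3, 0.2, 0.8])
W = np.array([[0.6, 0.3, 0.1, 0.7],   # hypothetical category 0
              [0.1, 0.1, 0.9, 0.1]])  # hypothetical category 1
T = fuzzy_art_choice(I, W)             # category 0 wins the competition
```

In the mining tools, each interactively trained search agent is such a network, applied pixel-by-pixel to the fused multi-sensor signature vector.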


International Conference on Information Fusion | 2000

Multi-sensor 3D image fusion and interactive search

William Ross; Allen M. Waxman; William W. Streilein; M. Aguilar; Jacques Verly; Fang Liu; Michael Braun; Paul Harmon; Steve J. Rak

Describes a system under development for the 3D fusion of multi-sensor surface surveillance imagery, including electro-optical (EO), IR, SAR, multispectral and hyperspectral sources. Our approach is founded on biologically-inspired image processing algorithms. We have developed an image processing architecture enabling the unified interactive visualization of fused multi-sensor site data which utilizes a color image fusion algorithm based on retinal and cortical processing of color. We have also developed interactive Web-based tools for training neural network search agents that are capable of automatically scanning site data for the fused multi-sensor signatures of targets and/or surface features of interest. Each search agent is an interactively trained instance of a neural network model of cortical pattern recognition called a fuzzy ARTMAP. The utilization of 3D site models is central to our approach because it enables the accurate multi-platform image registration that is necessary for color image fusion and the designation, learning and searching for multi-sensor fused pixel signatures. Interactive stereo 3D viewing and fly-through tools enable efficient and intuitive site exploration and analysis. Web-based remote visualization and search agent training and utilization tools facilitate rapid, distributed and collaborative site exploitation and dissemination of results.


Enhanced and Synthetic Vision 1999 | 1999

Field evaluations of dual-band fusion for color night vision

Mario Aguilar; David A. Fay; David B. Ireland; Joseph P. Racamato; William Ross; Allen M. Waxman

As part of an advanced night vision program sponsored by DARPA, a method for real-time color night vision based on the fusion of visible and infrared sensors has been developed and demonstrated. The work, based on principles of color vision in humans and primates, achieves an effective strategy for combining the complementary information present in the two sensors. Our sensor platform consists of a 640 × 480 low-light CCD camera developed at MIT Lincoln Laboratory and a 320 × 240 uncooled microbolometer thermal infrared camera from Lockheed Martin Infrared. Image capture, data processing, and display are implemented in real-time (30 fps) on commercial hardware. Recent results from field tests at Lincoln Laboratory and in collaboration with U.S. Army Special Forces at Fort Campbell will be presented. During the tests, we evaluated the performance of the system for ground surveillance and as a driving aid. Here, we report on results using both wide field-of-view (42 deg.) and narrow field-of-view (7 deg.) platforms.


Enhanced and Synthetic Vision 2000 | 2000

Fusion of 2-/3-/4-sensor imagery for visualization, target learning, and search

David A. Fay; Allen M. Waxman; Mario Aguilar; David B. Ireland; Joseph P. Racamato; William Ross; William W. Streilein; Michael Braun

We present recent work on methods for fusion of imagery from multiple sensors for night vision capability. The fusion system architectures are based on biological models of the spatial and opponent-color processes in the human retina and visual cortex. The real-time implementation of the dual-sensor fusion system combines imagery from either a low-light CCD camera (developed at MIT Lincoln Laboratory) or a short-wave infrared camera (from Sensors Unlimited, Inc.) with thermal long-wave infrared imagery (from a Lockheed Martin microbolometer camera). Example results are shown for an extension of the fusion architecture to include imagery from all three of these sensors as well as imagery from a mid-wave infrared imager (from Raytheon Amber Corp.). We also demonstrate how the results from these multi-sensor fusion systems can be used as inputs to an interactive tool for target designation, learning, and search based on a fuzzy ARTMAP neural network.


2011 8th International Conference & Expo on Emerging Technologies for a Smarter World | 2011

Bi-directional power architectures for electric vehicles

Christopher W. Hinkle; Alan Millner; William Ross

Although electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs) yield significant gains in driving efficiency and CO2 reduction, the value of these systems is diminished by the cost of the battery subsystem. To offset the cost of electric vehicles, this paper presents two bi-directional charging architectures that use the batteries from electric vehicles for grid ancillary and facility-level services. Grid-scale frequency regulation with Vehicle to Grid (V2G) technology and facility peak demand reduction with Vehicle to Base (V2B) technology for military bases, commonly known as Vehicle to Building for commercial applications, are shown as promising potential avenues to make the economics of electric vehicles viable. To effectively analyze the benefit of either technology, the cost of battery cycle life lost and bi-directional inverter purchase are quantified as the main costs of the services. The revenues or savings from using the battery in either scenario are calculated using real utility usage data and rates for three utilities and typical military installations. The optimum V2B vehicle fleet sizes for cars and trucks are derived from these models. V2B operations are sized to offset the transient peak demands in facility electrical loads, while V2G operations make use of contracted frequency regulation with the local grid. The results show that both V2B and V2G can offset a large part of the cost premium of electric vehicles. These services, combined with the enhanced driving efficiency of electricity over liquid fuel, make the life cycle cost of the electric vehicles attractive compared to conventional ones. V2B and V2G can be implemented separately or in combination to greatly improve the economics of electric vehicles in the near future. The magnitude of benefits of either makes electric vehicles more attractive to early buyers and presents an opportunity to push the technology into production quantities sufficient to drive the price of electric vehicles down to a level that is economically feasible for the general population.
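The cost/benefit bookkeeping described in this abstract, demand-charge savings set against battery wear and inverter cost, can be sketched as below. The function and every number in it are illustrative assumptions for the structure of the trade-off, not figures or models from the paper:

```python
def v2b_annual_net_benefit(peak_kw_shaved, demand_charge_per_kw_month,
                           cycles_per_year, wear_cost_per_cycle,
                           inverter_cost, inverter_life_years):
    """Net annual benefit of V2B peak shaving: demand-charge savings
    minus battery cycle-life cost and amortized bi-directional
    inverter cost. All inputs are hypothetical."""
    savings = peak_kw_shaved * demand_charge_per_kw_month * 12  # $/year saved
    wear = cycles_per_year * wear_cost_per_cycle                # battery wear cost
    inverter = inverter_cost / inverter_life_years              # amortized hardware
    return savings - wear - inverter

# hypothetical fleet vehicle shaving 10 kW of facility peak demand
net = v2b_annual_net_benefit(peak_kw_shaved=10, demand_charge_per_kw_month=15.0,
                             cycles_per_year=250, wear_cost_per_cycle=2.0,
                             inverter_cost=3000.0, inverter_life_years=10)
```

The sign of the result is the whole question: the paper's contribution is quantifying the wear and inverter terms against real utility rate data so the comparison is not guesswork.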


2010 IEEE Conference on Innovative Technologies for an Efficient and Reliable Electricity Supply | 2010

Enhanced plug-in hybrid electric vehicles

Alan Millner; Nicholas Judson; Bobby Ren; Ellen Johnson; William Ross

Plug-in hybrid electric vehicles (PHEVs) have the potential to reduce fossil fuel use, decrease pollution, and allow renewable energy sources for transportation, but their lithium-ion battery subsystems are presently too expensive. Three enhancements to PHEVs are proposed here that can improve the economics. First, the incorporation of location information into the car's energy management algorithm allows predictive control to reduce fuel consumption through prior knowledge of the upcoming route and energy required. Second, the use of the vehicle battery while parked, offsetting the short peaks in commercial-scale facility electrical demand to reduce demand charges, can provide additional revenue to pay for the battery. Third, the battery cycle life must be maximized to avoid high replacement costs; a model of battery wear-out for lithium-ion batteries is presented and is used to confirm that the above strategies are compatible with long battery life.
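A much-simplified stand-in for the kind of cycle-counting wear economics discussed here (the paper's lithium-ion wear-out model is more detailed; the function and values below are illustrative assumptions only):

```python
def wear_cost_per_kwh_cycled(pack_cost, usable_kwh, cycle_life):
    """Naive cycle-counting wear model: each full-depth cycle consumes
    1/cycle_life of the pack, so moving one kWh through the battery
    costs pack_cost / (cycle_life * usable_kwh). Hypothetical inputs."""
    return pack_cost / (cycle_life * usable_kwh)

# hypothetical $8000, 16 kWh pack rated for 3000 full cycles:
cost = wear_cost_per_kwh_cycled(pack_cost=8000.0, usable_kwh=16.0,
                                cycle_life=3000)
```

Under this toy model, demand-charge offsetting only pays if the per-kWh savings exceed this wear figure, which is why confirming long battery life under the proposed strategies matters.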


Behavioral and Brain Sciences | 1998

Filling-in while finding out: Guiding behavior by representing information

William Ross

Discriminating behavior depends on neural representations in which the sensory activity patterns guiding different responses are decorrelated from one another. Visual information can often be parsimoniously transformed into these behavioral bridge-locus representations within neuro-computational visuo-spatial maps. Isomorphic inverse-optical world representation is not the goal. Nevertheless, such useful transformations can involve neural filling-in. Such a subpersonal representation of information is consistent with personal-level vision theory.


Archive | 2011

Imaging systems and methods for immersive surveillance

Daniel B. Chuang; Lawrence M. Candell; William Ross; Mark E. Beattie; Cindy Y. Fang; Bobby Ren; Jonathan P. Blanchard; Gary M. Long; Lauren L. White; Svetlana V. Panasyuk; Mark Bury

Collaboration


Dive into William Ross's collaborations.

Top Co-Authors

Allen M. Waxman, Massachusetts Institute of Technology
William W. Streilein, Massachusetts Institute of Technology
David A. Fay, Massachusetts Institute of Technology
Michael Braun, Massachusetts Institute of Technology
Fang Liu, Massachusetts Institute of Technology
David B. Ireland, Massachusetts Institute of Technology
Joseph P. Racamato, Massachusetts Institute of Technology
Mario Aguilar, Massachusetts Institute of Technology
Alan Millner, Massachusetts Institute of Technology