Parametric Design of Underwater Optical Systems
Gideon Billings, Eduardo Iscar, Matthew Johnson-Roberson

Abstract — The design of optical systems for underwater vehicles is a complex process where the selection of cameras, lenses, housings, and operational parameters greatly influences the performance of the complete system. Determining the correct combination of components and parameters for a given set of operational requirements is currently a process based on trial and error, as well as the specialized knowledge and experience of the designer. In this paper, we introduce an open-source tool for the parametric exploration of the design space of underwater optical systems and review the most significant underwater light effects, with the corresponding models to estimate the response and performance of the complete imaging system.

E. Iscar, G. Billings and M. Johnson-Roberson are with the Department of Naval Architecture and Marine Engineering, University of Michigan, Ann Arbor, MI 48109 USA {eiscar, gidobot, mattjr}@umich.edu
I. INTRODUCTION
Optical cameras are increasingly being applied in the underwater domain for a range of applications including inspection tasks [1], ecosystem monitoring [2], and vehicle navigation [3]. Cameras represent low cost, low power sensors that provide rich information about the underwater scene and frequently complement other sensors deployed on autonomous underwater vehicles (AUVs) or remotely operated vehicles (ROVs). However, the design of an underwater camera system presents a very large space of possible design choices and system configurations, with many interdependencies. Additionally, field tuning of the camera settings is frequently cumbersome and time consuming due to reduced equipment accessibility when deploying underwater.

In this paper we review a simplified underwater image formation model that allows the estimation of the average camera sensor response given different lens, light, water, and seafloor characteristics. The sensor response is the average intensity of pixels in a camera image and is a metric that can be used to determine correct image exposure. A user-friendly interface for the model is developed that will allow researchers and scientists to narrow down the equipment requirements and operational settings for an underwater imaging system by parametrically exploring the design space.

In order to estimate the camera response, a model of underwater image formation is required. One of the main drivers for the study of the underwater image formation process and the development of models has been the need to correct underwater image degradation such as haze, low contrast, and color cast due to water impurities and wavelength dependent attenuation. Early efforts by Duntley [4] laid the foundation for modelling underwater light propagation. Computer models developed by McGlamery [5], [6] were extended by Jaffe in 1990 [7], leveraging advances in computational processing capabilities to create the UNCLES computer simulation system, which is capable of analyzing the performance of underwater camera systems.

Fig. 1: Schematic of underwater light propagation from light source to camera sensor, where the light signal is affected by scattering and absorption through the water column and the reflection characteristics of the seafloor.

The UNCLES simulator helped guide the design of the video equipment for the ARGO underwater imaging platform [8], but the tool was not released for public use. The theory for underwater light propagation has previously been developed, but this knowledge has not been consolidated into a framework broadly usable by the science and engineering communities for the design of underwater camera systems. The tool introduced in this paper incorporates the model developed through these prior works with an interface focused on user friendliness and minimal complexity. Some assumptions are made to simplify the model, based on common characteristics of underwater imaging systems, and the validity of this model is demonstrated through experimentation. The contributions presented in this work are: 1) a review of the underwater image formation model with a procedure to characterize underwater camera systems; 2) an open source tool to aid the design process for an underwater camera system through exploration of the parameter space;
3) validation experiments supporting the presented model as a good characterization of an underwater camera system.

The rest of the paper is structured as follows: Section II introduces the underwater image formation model used in the software toolbox to compute sensor responses underwater; Section III presents the developed software toolbox, with an overview of the intended design use and user interface; Section IV presents the experiments validating the proposed image formation model; and Section V outlines our conclusions and future work.

II. UNDERWATER IMAGE FORMATION
In this section we introduce the underwater image formation model. As light travels from a source through the water column, it is attenuated through absorption and scattering. The light that reaches the seafloor or other obstacle is reflected by a fractional amount, dependent on the albedo of the surface. The reflected light is further attenuated in the water column as it travels back towards the camera. Light is refracted at the water interface of the camera housing viewport before reaching the camera lens. Photons passing through the lens generate electrical signals on the camera sensor that are amplified and digitized to form the final image. This process is illustrated in Figure 1, and Figure 2 provides an overview of how the model equations describe the image formation pipeline.

A. Artificial Light Systems
Natural light is attenuated exponentially in the oceans and frequently does not penetrate deeper than a few hundred meters. Our model assumes all light in the scene is generated from artificial light sources mounted on the vehicle. This situation represents the worst case scenario, as constraints on camera systems are relaxed if natural light is present. The presented model describes a light source by three main parameters (a sketch of how they combine follows the list):

1) Luminous flux emitted by the light source, measured in lumens: this can be obtained for most underwater lights, strobes, or LED modules in custom designs.

2) Normalized light spectrum: the spectrum of the light source describes how the luminous flux is spread over the different wavelengths. When the spectrum is not available, it can be approximated based on known spectra for common light sources. Figure 3 shows spectrum characteristics of common light types such as LED, fluorescent, or natural sunlight.

3) Beam pattern: the beam pattern describes how the light spreads as it travels away from the source. We assume a simple conical beam pattern defined by its aperture half-angle β, which is typical for most underwater strobes.
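To make the light-source model concrete, the following Python sketch converts a rated luminous flux and normalized spectrum into an on-axis spectral irradiance at the scene, assuming an ideal uniform conical beam. The function names, wavelength-grid conventions, and uniform-beam assumption are illustrative and not part of the released tool.

```python
import numpy as np

KCD = 683.0  # luminous efficacy of monochromatic 555 nm light [lm/W]

def radiant_spectrum(lumens, s_norm, v_photopic, dlam_nm):
    """Distribute a luminous flux [lm] over a normalized source spectrum
    s_norm (unit area on the wavelength grid), using the photopic
    luminosity function v_photopic sampled on the same grid with spacing
    dlam_nm [nm]. Returns the radiant flux spectrum [W/nm]."""
    total_watts = lumens / (KCD * np.sum(s_norm * v_photopic) * dlam_nm)
    return total_watts * s_norm

def scene_irradiance(flux_spectrum, beta_rad, d):
    """On-axis spectral irradiance [W/(m^2 nm)] at range d [m], assuming
    the flux is spread uniformly over a cone of half-angle beta_rad."""
    omega = 2.0 * np.pi * (1.0 - np.cos(beta_rad))  # cone solid angle [sr]
    return flux_spectrum / (omega * d**2)
```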
B. Underwater Light Propagation

Light traveling underwater from the strobe to the camera sensor is modified through absorption, scattering, reflection, and refraction at optical interfaces. We describe how each of these effects is modeled in our system.
1) Attenuation:
The Jaffe-McGlamery model describes the propagation of light underwater as the sum of direct, backscattered, and forward-scattered light. Attenuation of the light signal is modeled as an exponential decay, with function parameters depending on the water type and clarity. Coefficients describing the attenuation effects for different classes of water, known as Jerlov water bodies, have been cataloged [9]. The exponential decay modeling attenuation of the light signal in water is given as

L = R e^{-b(\lambda) d}    (1)

where R is the initial irradiance, b(λ) the wavelength dependent attenuation coefficient, and d the distance of propagation. Absorption and scattering coefficients are mostly dependent on chlorophyll and dissolved organic matter in the water column [9]. Experiments performed by Jerlov [10] established a set of attenuation profiles for different types of water bodies, both coastal and oceanic, with varying clarity levels. These profiles are provided with our model as default selections. The user also has the option to load custom profiles.

Fig. 2: Image formation pipeline describing the different steps to which light is subjected to form the underwater digital image: Light Source → Underwater Attenuation (Eq. 1) → Scene Reflectance (Eq. 2) → Underwater Attenuation (Eq. 1) → Lens (Eq. 4) → Image Sensor (Eq. 5) → Digitization (Eq. 6) → Digital Image.
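A minimal sketch of Eq. (1), assuming the attenuation coefficients are available as an array sampled on the same wavelength grid as the spectrum (for example, the tabulated Jerlov water-type values of [9]):

```python
import numpy as np

def attenuate(spectrum, b_lambda, d):
    """Eq. (1): exponential decay of a spectral irradiance over a path of
    length d [m], with wavelength-dependent attenuation coefficient
    b_lambda [1/m] sampled on the same wavelength grid."""
    return spectrum * np.exp(-b_lambda * d)
```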
2) Object reflectivity:
The reflectance of light by a surface is modeled by the Bidirectional Reflectance Distribution Function (BRDF) [11] that relates the outgoing radiance L of the surface with the incoming irradiance E. Assuming diffuse reflection in our model, where θ_i is the light incident angle and M(λ) is the material and wavelength dependent reflection coefficient, the BRDF is simplified to:

L = E \frac{M(\lambda)}{\pi} \cos(\theta_i)    (2)
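A one-line sketch of Eq. (2), using the same hypothetical per-wavelength array conventions as above:

```python
import numpy as np

def reflect_diffuse(E, M_lambda, theta_i):
    """Eq. (2): outgoing radiance of a diffuse (Lambertian) surface with
    wavelength-dependent albedo M_lambda, under incident irradiance E
    at incidence angle theta_i [rad]."""
    return E * M_lambda / np.pi * np.cos(theta_i)
```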
3) Light refraction:
Underwater cameras are housed inside enclosures that protect the electronic systems from water damage and pressure. In order for light to reach the sensor, these enclosures employ an optical port made of translucent material such as glass or acrylic, most frequently in either a spherical or flat geometry. As light travels through the port, it is refracted at each optical interface as a function of the change in index of refraction and the direction of the incident ray relative to the surface normal. In effect, the optical port of the housing must be considered as part of the camera lens system.

In the case of a domed viewport, the dome is treated as a thick lens formed by two concentric hemispherical surfaces. Analysis of the thick lens equations shows that objects at infinity are mapped to a virtual image in front of the dome that is curved concentrically with the dome [12], [13]. A camera housed with a dome viewport must be focused at the distance of the virtual image when immersed in water, rather than the distance to the imaging target in air. The distance of the virtual image from the front of the dome is derived in [12], [13], and we incorporate these equations into the camera system design tool.

When the camera lens principal point is aligned with the dome center of curvature, the field of view of the camera remains unchanged [12], [13]. A common method to verify the camera is correctly aligned with the center of the dome is to look at an image of a checkerboard taken with the camera in the housing while only half immersed in water. There should be no magnification difference between the part of the image below the water and the part above the water if the camera is centered.

For the case of flat viewports, the effects of refraction result in a change in the effective lens focal length [14], given as

f_{uw} = 1.33 f_{air}    (3)

where f_{uw} is the effective focal length in water, f_{air} the focal length in air, and 1.33 the refractive index of water. This increase in the effective focal length of the system reduces the camera field of view and must be accounted for when computing the lens aperture number.
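A sketch of the flat-port correction of Eq. (3), treating the scale factor as the refractive index of water and assuming a thin port in the paraxial regime (our assumptions):

```python
N_WATER = 1.33  # refractive index of water (assumed thin flat port)

def flat_port_focal_length(f_air):
    """Eq. (3): effective in-water focal length behind a flat viewport."""
    return N_WATER * f_air

def flat_port_aperture_number(N_air):
    """The physical aperture diameter D is unchanged underwater, so the
    effective f-number N = f/D scales by the same factor as the focal
    length."""
    return N_WATER * N_air
```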
C. Lensing Effects

The fundamental radiometric relation expresses the amount of light incident on the lens that reaches a pixel at the sensor surface [15]:

E_I = L \frac{\pi}{4 N^2} \cos^4(\alpha)    (4)

where L is the scene radiance, N is the lens aperture number, and α is the angle between the principal ray and the ray through the pixel. The cos^4(α) term models natural vignetting, a process by which illumination decays towards the sensor edges. Additionally, some light is lost as it travels through the lens. This transmission loss depends on the quality and construction of the lens [16].
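A sketch of Eq. (4), with an optional lens transmission factor T folded in (our addition; the paper treats transmission loss as a separate lens property):

```python
import numpy as np

def pixel_irradiance(L, N, alpha, T=1.0):
    """Eq. (4): irradiance at the sensor plane from scene radiance L,
    lens aperture number N, and off-axis angle alpha [rad]; the cos^4
    term models natural vignetting, T models lens transmission loss."""
    return T * L * np.pi / (4.0 * N**2) * np.cos(alpha)**4
```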
D. Camera Response

Light that reaches the camera sensor is converted into an electrical signal. In our model, we assume the use of machine vision cameras with linear sensor response functions, though we note some consumer cameras have non-linear camera response functions, designed to mimic the chemical response of analog film. Grossberg et al. [17] studied the space of camera response functions. Debevec et al. [18] presented experimental methods to determine the camera response function from a set of images. Jiang et al. [19] further modelled spectral sensitivity functions of color camera sensors and proposed experimental methods to obtain them from color board images.

Our model assumes the sensor response is linearly dependent on the light intensity, with varying sensitivity to different wavelengths. The dependency of the sensor response on wavelength is described by the quantum efficiency curve. The total number of absorbed photons can be computed by dividing the spectrum energy, weighted with the quantum efficiency curve, by the energy of a photon:

\mu_e = \frac{A t_{exp}}{h c} \int_{\lambda_a}^{\lambda_b} \Phi(\lambda) \cdot \lambda \cdot \eta(\lambda) \, d\lambda    (5)

where A is the pixel area [m^2], Φ is the irradiance spectrum [W/(m^2 nm)], t_exp [s] is the exposure time, h is Planck's constant, c is the speed of light in air [m/s], λ is the wavelength [m], and η(λ) is the sensor quantum efficiency as a function of wavelength.

Fig. 3: Radiance spectrum for different light types.

Following the EMVA1288 standard [20], the digital sensor response signal μ_y can be computed as:

\mu_y = \mu_{y.dark} + K \mu_e    (6)

where μ_{y.dark} is the sensor mean dark signal, and K is the system gain. The physical parameters for each sensor are published by camera manufacturers (e.g., [21]) or can be obtained experimentally.
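A sketch of Eqs. (5)–(6) on a discrete wavelength grid; the grid handling and function names are our assumptions:

```python
import numpy as np

H = 6.626e-34  # Planck's constant [J s]
C = 2.998e8    # speed of light [m/s]

def mean_electrons(phi, lam_nm, eta, A, t_exp):
    """Eq. (5): mean photo-electrons collected by a pixel of area A [m^2]
    over exposure t_exp [s], from an irradiance spectrum phi
    [W/(m^2 nm)] and quantum efficiency eta sampled on lam_nm [nm]."""
    lam_m = lam_nm * 1e-9        # photon energy is h*c/lambda, lambda in m
    dlam = np.gradient(lam_nm)   # wavelength bin widths [nm]
    return A * t_exp / (H * C) * np.sum(phi * lam_m * eta * dlam)

def digital_signal(mu_e, K, mu_dark):
    """Eq. (6): EMVA1288 mean digital response from mean electrons."""
    return mu_dark + K * mu_e
```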
E. Gain and Signal to Noise Ratio

Similar to changing the ISO for film cameras, digital machine vision cameras can have a gain applied to the sensor response signal. This decreases the amount of scene light necessary to expose the image. However, the image noise is also amplified when a gain is applied, resulting in a reduction of the image Signal to Noise Ratio (SNR). SNR is an important consideration, especially for image tasks requiring feature matching [22], and should be a parameter decided by the camera system designer. There are three sources of image noise: dark current noise, described by the normally distributed variance σ_d^2; quantization noise from the analog-to-digital conversion, described by the normally distributed variance σ_q^2 and the overall system gain K; and shot noise inherent to light, described by the number of absorbed photons in the sensor μ_p and the sensor quantum efficiency η. The image SNR is calculated as

SNR = \frac{\eta \mu_p}{\sqrt{\sigma_d^2 + \sigma_q^2 / K^2 + \eta \mu_p}}    (7)

The camera system design tool allows setting a gain value and will display the calculated image SNR for the target average exposure value.
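A sketch of Eq. (7), noting that η μ_p equals the mean photo-electron count μ_e from Eq. (5):

```python
import numpy as np

def image_snr(mu_e, sigma_d, sigma_q, K):
    """Eq. (7): SNR from mean photo-electrons mu_e (= eta * mu_p),
    dark noise std sigma_d [e-], quantization noise std sigma_q [DN],
    and system gain K [DN/e-]; the shot noise variance equals mu_e."""
    return mu_e / np.sqrt(sigma_d**2 + (sigma_q / K)**2 + mu_e)
```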
F. Operational Considerations

Besides the physical characteristics of the water and selected equipment (camera, lens, and lights), the operational requirements also highly influence the design space. The most significant of these requirements include (a combined sketch of Eqs. (8)–(11) follows this list):

1) Minimum overlap between images: Overlap between consecutive images is required in order to perform photomosaics, 3D reconstructions, or visual navigation. The amount of required overlap, together with the vehicle speed and working distance, will determine the image acquisition frequency f:

f = \frac{v}{FOV_{x/y} (1 - OVR)}    (8)

where v is the vehicle speed [m/s], FOV_{x/y} is the spatial field of view of the image in the direction of motion [m], and OVR is the fraction of consecutive image overlap.

2) Focal depth of field (DoF): When running AUV imaging surveys over rocky bottoms or coral reefs, it is frequent for the terrain height to vary significantly. It is desirable that the entire image remains in focus, so the required focal DoF must be selected accordingly. Whether a pixel is in focus or not is determined by the circle of confusion, which describes the area of the sensor across which a point source of light is spread. Light rays originating within the focal range will project a circle of confusion on the sensor under an acceptable area threshold. The DoF is controlled by an inverse relationship with the camera aperture. However, there is a trade off, as decreasing the size of the camera aperture decreases the amount of light that reaches the lens and therefore increases the required amount of light in the scene. The DoF can be computed as:

DoF = \frac{2 N c f^2 s^2}{f^4 - N^2 c^2 s^2}    (9)

where N is the lens aperture number, c is the diameter of the circle of confusion, f is the focal length, and s is the distance at which the camera is focused.

3) Motion blur: Motion blur is a great concern for underwater imaging platforms operating in low light. The amount of blur is dependent on the speed of the vehicle v [m/s], the camera field of view in the direction of motion FOV_{x/y}, the sensor resolution in the direction of motion RES_{x/y}, and the exposure time. The maximum exposure time t_exp [s] to keep motion blur less than a set number of pixels PIX_{blur} is given as:

t_{exp} = \frac{PIX_{blur} \cdot FOV_{x/y}}{v \cdot RES_{x/y}}    (10)

4) Spatial field of view (FOV): The camera spatial FOV, or area covered by the image, is influenced by lens selection and distance to the target D [m]. It can be computed as:

FOV_{x/y} = \frac{D \cdot SS_{x/y}}{f}    (11)

where f is the lens focal length [mm], and SS_{x/y} is the physical dimension of the sensor in x or y [mm].
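A combined sketch of the operational relations (8)–(11); units follow the text and the function names are hypothetical:

```python
def frame_rate(v, fov_motion, ovr):
    """Eq. (8): minimum acquisition frequency [Hz] for overlap fraction
    ovr, vehicle speed v [m/s], and along-track footprint fov_motion [m]."""
    return v / (fov_motion * (1.0 - ovr))

def depth_of_field(N, c, f, s):
    """Eq. (9): DoF for aperture number N, circle of confusion c, focal
    length f, and focus distance s (c, f, s in the same units)."""
    return 2.0 * N * c * f**2 * s**2 / (f**4 - N**2 * c**2 * s**2)

def max_exposure(pix_blur, fov_motion, v, res_motion):
    """Eq. (10): longest exposure [s] keeping motion blur below pix_blur
    pixels, at resolution res_motion [px] along the motion axis."""
    return pix_blur * fov_motion / (v * res_motion)

def spatial_fov(D, sensor_dim_mm, f_mm):
    """Eq. (11): image footprint [m] at target distance D [m], with
    sensor dimension and focal length both in mm."""
    return D * sensor_dim_mm / f_mm
```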
III. SOFTWARE

Taking the previously defined relations between sensors, lenses, light sources, and underwater light propagation into account, users and designers of underwater camera systems may wish to answer questions like: what sensor is best for a given operational profile? What are the lighting requirements for a specific camera? Or what aperture and shutter speed should be used for a given deployment scenario? In order to quickly answer questions like these, we have developed an open source software design tool that performs parametric analysis of an underwater camera system.

The tool allows the user to either input the light type and lumen intensity or load a custom light spectrum if available. Three Jerlov oceanic water types and five coastal water profiles are provided to analyze different attenuation rates, with an option to also load custom attenuation profiles. Lenses are defined by their focal length and their transmission loss, which may be specified either as a constant or by loading a custom wavelength dependent attenuation profile. Profiles are included with the program for five different camera sensors, and new sensors can easily be added if EMVA specifications are available from the manufacturer. The operational requirements of the camera system are specified in terms of the maximum acceptable motion blur, the minimum acceptable DoF, the expected vehicle altitude and speed above the seafloor, and the desired percentage overlap of consecutive images. Other selectable parameters include the camera orientation with respect to the direction of vehicle motion, and the geometry of the camera housing viewport.

With a given set of these parameters, the software computes the average camera response, minimum operational frame rate, minimum exposure time, and minimum aperture number. In addition to the average camera response, the software can also generate visualizations of the parameter space for the given configuration. Figure 4 shows an example plot over a set of parameters, where the dependence of the DoF on aperture and the distance to the imaged target is visualized. Figure 5 shows an example plot of how the light spectrum decays as it propagates from the light source to the camera, helping contextualize the main sources of light reduction for a specified water environment. Similar plots may be generated by the software for the camera frame rate, exposure time, or water attenuation profiles.

Fig. 4: Depth of field as a function of focus distance and aperture.

Fig. 5: Spectrum of light as it propagates through the water, attenuates, reflects, and travels through the lens onto the sensor.

IV. VALIDATION EXPERIMENTS
The camera response simulation pipeline is validated experimentally in a lab environment. We tested with two monochrome cameras, a Blackfly BFS-U3-51S5M from FLIR with a Sony IMX250 sensor and a Prosilica GT-1380 from Allied Vision with a Sony ICX285 sensor. The cameras were mounted on the outside of a 46 cm x 46 cm x 46 cm freshwater tank, with the camera axial direction perpendicular to the clear acrylic tank wall. A diffuse white target board was placed on the opposite side of the tank. Figure 6 illustrates the experimental setup. Measurements were taken in dark ambient light conditions, with scene light being provided by a FixNeo25000DX 25 klm diving light positioned above the camera and against the outside tank wall. The light spectrum incident on the camera sensor was measured using a Sekonic SpectroMaster C-7000 light meter. The spectrometer was placed in a waterproof enclosure to perform spectrum measurements inside the tank.

Fig. 6: Experimental setup for verifying the image formation model.

Figure 7 shows the measured light spectra versus those predicted by the model for a generic LED light source. The spectra are plotted for the light that was incident on the target surface, in red, and the light reflected back to the camera lens, in green. The model source light spectrum was calculated with the nominal luminous intensity provided by the manufacturer and a half beam angle adjusted to account for the change in beam angle from the manufacturer stated value due to refraction. The predicted model spectra, both at the target surface and at the camera lens, are very similar to the measured spectra in shape and size.

Fig. 7: Comparison of measured and estimated light spectra at both the target board and the camera position.

Figure 8 shows the response of the two different cameras to the light spectrum shown in Figure 7. Both cameras had the same 30 mm lens mounted, with the aperture set to the same fixed f-number. The predicted responses from the model closely follow the measured values.

Fig. 8: Measured and model predicted camera response curves for two different sensors under the same experimental conditions.

We also compared the response of one camera with different lens and aperture configurations, including no lens, a 30 mm lens, and a 12 mm lens, each at a fixed aperture. Figure 9 shows the measured versus the model predicted average camera responses for this experiment.

Fig. 9: Camera response for two different lenses and without a lens.

For all camera experiments, the predicted responses from the model closely follow the measured responses, demonstrating the model is a good approximation of the real system and will give reliable predictions over the design space.

V. CONCLUSION AND FUTURE WORK
In this paper we have shown how underwater optical systems can be coarsely simulated by a set of simple equations, and we have developed a user-friendly interface to guide the component and parameter selections of such systems. The presented tool will enable researchers and engineers tasked with the development of underwater camera systems to better understand the available design space, analyze trade-offs in light, sensor, and lens selection, and guide early design choices.

Future work will include extending the model to consider systems with varying and multiple light source configurations, and the addition of program features to aid in the focusing of domed and flat viewport camera systems.

REFERENCES
[1] O. Calvo, A. Rozenfeld, A. Souza, F. Valenciaga, P. F. Puleston, and G. Acosta, "Experimental results on smooth path tracking with application to pipe surveying on inexpensive AUV," in Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, IEEE, 2008, pp. 3647–3653.
[2] S. B. Williams, O. R. Pizarro, M. V. Jakuba, C. R. Johnson, N. S. Barrett, R. C. Babcock, G. A. Kendrick, P. D. Steinberg, A. J. Heyward, P. J. Doherty, et al., "Monitoring of benthic reference sites: Using an autonomous underwater vehicle," IEEE Robotics & Automation Magazine, vol. 19, no. 1, pp. 73–84, 2012.
[3] R. M. Eustice, O. Pizarro, and H. Singh, "Visually augmented navigation for autonomous underwater vehicles," IEEE Journal of Oceanic Engineering, vol. 33, no. 2, pp. 103–122, 2008.
[4] S. Q. Duntley, "Light in the sea," JOSA, vol. 53, no. 2, pp. 214–233, Feb. 1963.
[5] B. McGlamery, "Computer analysis and simulation of underwater camera system performance," SIO ref, vol. 75, p. 2, 1975.
[6] ——, "A computer model for underwater camera systems," in Ocean Optics VI, International Society for Optics and Photonics, vol. 208, 1980, pp. 221–232.
[7] J. S. Jaffe, "Computer modeling and the design of optimal underwater imaging systems," IEEE Journal of Oceanic Engineering, vol. 15, no. 2, pp. 101–111, 1990.
[8] ——, "To sea and to see: That is the answer," Methods in Oceanography, vol. 15, pp. 3–20, 2016.
[9] M. G. Solonenko and C. D. Mobley, "Inherent optical properties of Jerlov water types," Appl. Opt., vol. 54, no. 17, pp. 5392–5401, May 2015.
[10] N. G. Jerlov and F. F. Koczy, Photographic Measurements of Daylight in Deep Water. Elanders boktr., 1951.
[11] F. E. Nicodemus, "Directional reflectance and emissivity of an opaque surface," Appl. Opt., vol. 4, no. 7, pp. 767–775, Jun. 1965.
[12] F. A. Jenkins and H. E. White, Fundamentals of Optics, 4th ed. McGraw-Hill, Inc., 1976, ch. 5.
[13] Optics of dome ports.
[14] In European Conference on Computer Vision, Springer, 2000, pp. 654–668.
[15] R. Szeliski, Computer Vision: Algorithms and Applications. Springer Science & Business Media, 2010.
[16] M. Oelund, Photons Missing in Action: Part 1: Lens T-stop.
[17] M. D. Grossberg and S. K. Nayar, "Modeling the space of camera response functions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 10, pp. 1272–1282, Oct. 2004.
[18] P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of SIGGRAPH '97, 1997.
[19] J. Jiang, D. Liu, J. Gu, and S. Süsstrunk, "What is the space of spectral sensitivity functions for digital color cameras?" in IEEE Workshop on Applications of Computer Vision (WACV), Jan. 2013, pp. 168–179.
[20] B. Jähne, "EMVA 1288 standard for machine vision," Optik & Photonik, vol. 5, no. 1, pp. 53–54, 2010.
[21] FLIR, FLIR Blackfly USB3 Imaging Performance Specification.
[22] 12th International Conference on Image Analysis and Processing, 2003, Proceedings.