Object 3D Reconstruction based on Photometric Stereo and Inverted Rendering
Anish R. Khadka
Kingston University London
London, [email protected]

Paolo Remagnino
Kingston University London
London, [email protected]

Vasileios Argyriou
Kingston University London
London, [email protected]
Abstract—Methods for 3D reconstruction such as photometric stereo recover the shape and reflectance properties of an object using multiple images taken under variable lighting conditions from a fixed viewpoint. Photometric stereo assumes that a scene is illuminated only directly by the illumination source. As a result, indirect illumination effects due to inter-reflections introduce strong biases in the recovered shape. Our suggested approach recovers scene properties in the presence of indirect illumination. To this end, we propose an iterative PS method combined with a reverted Monte-Carlo ray tracing algorithm that overcomes the inter-reflection effects by separating the direct and the indirect lighting. This approach iteratively reconstructs a surface considering both the environment around the object and its concavities. We demonstrate and evaluate our approach using three datasets, and the overall results illustrate an improvement over the classic PS approaches.
Index Terms—Photometric Stereo, 3D Reconstruction, Ray Tracing
I. INTRODUCTION
Scene and object 3D reconstruction is the process of capturing their shape and appearance using various methods and approaches such as stereo, structure from motion, shape from shading, and many more [35]. Reconstruction is highly applicable in a number of fields, as it provides the ability to understand 3D scenes and objects on the basis of 2D images. The applications range from robotics and automated industrial quality inspection, over human-machine interaction [6] (for example action, gesture and face recognition) and satellite 3D data analysis [25], to movies and architectural applications [17]. Additionally, the method is commonly used to analyse the surfaces of a celestial object, such as the Moon [18].

Photometric stereo (PS) is a well-established technique used for 3D surface reconstruction [10]. The approach generally inherits the principle of appearance analysis of a 3D object from its 2D images. Based on the intensity information, these approaches attempt to infer the shape of the depicted object [17]. PS estimates shape and recovers the surface normals of a scene by utilising several intensity images obtained under varying lighting conditions from an identical viewpoint [16], [41]. By default, PS assumes Lambertian surface reflectance, a standard reflectance model which defines a linear dependency between the normal vectors and the image intensities. The definition of the model can then be used to determine the 3D space in the image [5]. However, a single Lambertian image is not adequate to correctly determine the surface shape. Therefore, PS uses several images whose pixels correspond to a single point on the object, and is thus able to recover surface normals and albedos [40].

Light displays complicated behaviour while interacting with objects, resulting in direct and indirect illumination, as shown in figure 1. However, classical PS naively assumes that a scene is illuminated only directly by the emitting source.
In the presence of indirect illumination, it produces erroneous results with reduced reconstruction accuracy [21]. For example, indirect illumination such as inter-reflections makes concave objects appear shallower [30].

In this paper, we present an iterative 3D reconstruction method considering inter-reflections due to both the concavities and the environment. We propose a novel method that accounts for inter-reflections in a calibrated photometric stereo environment. This approach utilises a reverted Monte Carlo ray tracing method to extract the environmental colour, aiming to minimise the inter-reflections within the images used for photometric stereo. It not only accommodates concave surfaces but also applies to any object in a scene with inter-reflections. The proposed method, Iterative Ray Tracing Photometric Stereo (IRT-PS), iteratively applies Photometric Stereo (PS) and a reverted ray tracing algorithm based on a Monte-Carlo implementation to reconstruct the observed surfaces with higher accuracy. The approach iteratively reconstructs the surface and separates the indirect from the direct lighting, considering also the environment around the object. Likewise, the proposed IRT-PS method can be integrated into any PS technique, removing the effects of inter-reflections and improving the overall reconstruction accuracy.

Our approach is extensively evaluated on three datasets and the overall results demonstrate an improvement over the classic approaches. The main contributions of our work are:
• a reverted Monte Carlo ray tracing algorithm to estimate the indirect lighting both from the environment and from the object's concavities;
• an iterative surface reconstruction method that utilises the reverted Monte Carlo ray tracing;
• a methodology that allows IRT-PS to be combined with any other PS algorithm, improving the overall performance.

The paper is organised as follows: Section II provides background material on Photometric Stereo, followed by inverse light transport, their properties and related works. In Section III, we introduce the mathematical definitions of the necessary terms. In Section IV, we propose a novel iterative PS method and discuss the suggested reverted Monte Carlo ray tracing algorithm. The performance of this approach is investigated in Section V, with Section VI concluding the work.

II. PHOTOMETRIC STEREO
Photometric stereo (PS) is an approach for estimating the surface normals and reflectance (i.e. albedo) of an object based on three or more intensity images with a fixed view under varying lighting conditions [16]. A number of solutions have been proposed to address this problem. Woodham [45] was the first to introduce the PS method. He proposed a simple and effective approach; however, he only considered Lambertian surfaces, and the method suffers from noise. In his method, it is assumed that the surface albedo is known a priori for each point on the surface, and the surface gradient can be obtained by using three point light sources. Onn and Bruckstein [33] developed a two-image PS method. Their work was based on the assumption that the objects are smooth and no self-shadows are present. PS was further extended by Coleman and Jain [9], who utilised four light sources, discarded the specular reflections and estimated the surface shape by taking the mean of the diffuse reflections and using the Lambertian reflection model. Nayar et al. [29] proposed a PS method which used a linear combination of an impulse specular component and the Lambertian model to recover the shape and reflectance of a surface. Similarly, an algorithm for estimating the local surface gradient and real albedo from four sources in the presence of highlights and shadows was proposed by Barsky and Petrou [3]. Chandraker et al. [7] proposed an algorithm that requires at least four light sources and images to reconstruct a surface in the presence of shadow. It is also worth mentioning the related work presented in [1], [11], [28], [34], following similar architectures and approaches. Furthermore, over the previous years, methods have been developed that consider images produced by more general lighting conditions not known a priori. Basri et al. [4] proposed a PS method where no prior knowledge of the light source and its type is required; however, the emitting source should be distant or unconstrained.
They utilised low-order spherical harmonics, optimised in a low-dimensional space, to represent Lambertian objects. Likewise, Shi et al. [36] used colour and intensity profiles, obtained from registered pixels across images, to propose a self-calibrating PS method. They automatically determine a radiometric response function and resolve the generalised bas-relief ambiguity to estimate surface normals and albedos. While the lighting conditions could be unknown, they required a fixed viewpoint.

Nevertheless, a majority of the methods and models, while working well with matte objects, under-perform when the reconstructed objects are specular, transparent or affected by inter-reflections. Non-Lambertian reflection, and specifically inter-reflection, may be difficult to solve in photometric stereo. Solomon and Ikeuchi [37] developed a method using four lights to extract the surface shape and roughness of an object with a specular lobe. They used a simplified version of the Torrance-Sparrow reflectance model to determine the surface roughness. Bajcsy et al. [2] presented an algorithm for detecting diffuse and specular interface reflections and some inter-reflections. They used brightness, hue, and saturation values instead of RGB, as they point out that these values have a direct correspondence to body colours and to diffuse and specular reflection, shading, shadows and inter-reflections. However, the algorithm requires uniformly coloured dielectric surfaces under single-coloured scene illumination. Tozza et al. [43] proposed a PS method that is independent of the albedo values and uses an image-ratio formulation. However, their method requires an initial separation of the diffuse and specular components.

In addition, because of the nature of light, inter-reflection is unavoidable even in a controlled environment. It may vary in magnitude depending on the environment itself, the structure, and the material of the object. Moreover, it may not be uniform over the whole surface.
As a result, the images are blurred locally in shade. Most photometric methods do not consider inter-reflection from an environment and concave surfaces, and those that do have considered only one of the two cues. One of the first attempts at scene recovery under inter-reflection was proposed by Nayar et al. [30]. They presented an iterative algorithm which recovers shape from a concave surface: it first estimates the shape from intensity data; this shape is then used as input, and the radiosity method is applied to estimate a corrected, inter-reflection-free image intensity distribution. These steps are carried out iteratively until convergence. Nevertheless, the algorithm only examines inter-reflection in concave shapes under a Lambertian reflectance model, and does not take into account the colour of the inter-reflected light. Funt and Drew [14] proposed an algorithm based on singular value decomposition of the colour for a convex surface. They proposed a "one-bounce" model which measured inter-reflection between two matte convex surfaces with a uniform colour, under illumination that can vary spatially in its intensity but not in its spectral composition. Again, the algorithm is specific to convex surfaces, assuming a uniform colour and spatially varying illumination. Langer [27] studied shadows that become inter-reflections. They proposed a method for inferring surface colour in a uni-chromatic scene, based on the relative contrast of the scene in different colour channels. Again, the method is highly specific and only deals with inter-reflection related to shadow.

Most existing shape-from-intensity techniques account for only the direct component of light transport. Nayar et al. [31] proposed using high-frequency illumination patterns to separate direct and indirect illumination in more general scenes. Gupta et al. [15] studied the relation between illumination defocus and global light transport. Again, Chen et al.
[8] used modulated structured light with high-frequency patterns to mitigate the effects of indirect illumination. Lamond et al. [26] used high-frequency light patterns to separate the diffuse and specular components of the BRDF. Holroyd et al. [19] constructed a high-accuracy imaging system for measuring the surface shape and BRDF. All these techniques either are active methods or assume that the indirect illumination in each of the acquired images is caused by a single source. In contrast, we consider the separation of the indirect component by simulating the inter-reflections and removing them from the source images.

III. FORWARD LIGHT PROPAGATION
An image captured by a camera is the result of a complex sequence of reflections and inter-reflections. When light is emitted from the source, it bounces off the scene's surfaces one or more times before reaching the camera.

Fig. 1: (Left) Direct and (Middle, Right) indirect light bouncing around the environment.

In theory, every image can be expressed as an infinite sum, I = I_1 + I_2 + I_3 + ... + I_n + ..., where I_n denotes the total contribution of light that bounces n times before reaching the camera, as shown in figure 1. For example, I_1 is the image that would be captured if it were possible to prevent all indirect illumination from reaching the camera sensor, while the sum I_2 + I_3 + ... + I_n + ... describes the total contribution of indirect illumination. Although we can capture the final image I using a camera, the individual n-bounce images are not directly measurable in a real-world scenario.

Nevertheless, techniques for simulating inter-reflections and other light transport effects are not new in computer vision and graphics. The algorithm that simulates the forward light transport was formalised by Kajiya [24] and is known as the rendering equation. The rendering equation is an integral in which the radiance leaving a point is given as the sum of the emitted plus the reflected radiance, under a geometric optics approximation:

I(x, x') = g(x, x') [ e(x, x') + ∫_S p(x, x', x'') I(x', x'') dx'' ]   (1)

where I(x, x') is related to the intensity of light passing from x' to point x.
g(x, x') is a "geometry" term, e(x, x') is related to the intensity of light emitted from x' to x, and p(x, x', x'') is related to the intensity of light scattered from x'' to x by a patch of surface at x'. Algorithms such as ray tracing [12], [23] solve equation (1) using Monte-Carlo methods, whereas radiosity [12], [22] uses the finite element method to produce near photorealistic images.

For a Lambertian object illuminated by a light source of parallel rays, the observed image intensity a at each pixel is given by the product of the albedo ρ and the cosine of the incidence angle θ_i (the angle between the direction of the incident light and the surface normal) [20]. The incidence angle can be expressed through the dot product of two unit vectors, the light direction l and the surface normal n:

a = ρ cos(θ_i) = ρ (l · n).

Let us now consider a Lambertian surface patch with albedo ρ and normal n, illuminated in turn by several fixed and known illumination sources with directions l_1, l_2, ..., l_Q̃. In this case we can express the intensities of the obtained pixels as:

a_k = ρ (l_k · n),  where k = 1, 2, ..., Q̃.   (2)

We stack the pixel intensities to obtain the pixel intensity vector a = (a_1, a_2, ..., a_Q̃)^T. Also, the illumination vectors are stacked row-wise to form the illumination matrix L = (l_1, l_2, ..., l_Q̃)^T. Equation (2) can then be rewritten in matrix form:

a = ρ L n   (3)

If there are at least three illumination vectors which are not coplanar, we can calculate ρ and n using the Least Squares Error technique, which uses the transpose of L, given that L is not a square matrix:

L^T a = ρ L^T L n  ⇒  (L^T L)^{-1} L^T a = ρ n   (4)

Since n has unit length, we can estimate both the surface normal (as the direction of the obtained vector) and the albedo (as its length). Extra images allow one to recover the surface parameters more robustly.
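The least-squares recovery in equation (4) is straightforward to implement per pixel. The following sketch (our own illustrative code, not the authors' implementation) recovers ρ and n for a single pixel from a stack of intensities and the known light directions:

```python
import numpy as np

def photometric_stereo_pixel(intensities, lights):
    """Solve Eq. (4): (L^T L)^{-1} L^T a = rho * n, with |n| = 1.

    intensities: (Q,) pixel intensities a_k under Q light directions
    lights:      (Q, 3) rows are the unit illumination vectors l_k
    """
    a = np.asarray(intensities, dtype=float)
    L = np.asarray(lights, dtype=float)
    g, *_ = np.linalg.lstsq(L, a, rcond=None)  # g = rho * n (least squares)
    rho = np.linalg.norm(g)                    # albedo: length of g
    n = g / rho                                # normal: direction of g
    return rho, n

# Synthetic check: a flat patch (normal along z) lit by four known sources.
rho_true = 0.7
n_true = np.array([0.0, 0.0, 1.0])
L = np.array([[0.5, 0.0, 0.866],
              [0.0, 0.5, 0.866],
              [-0.5, 0.0, 0.866],
              [0.0, -0.5, 0.866]])
a = rho_true * L @ n_true          # Eq. (3): a = rho * L n
rho, n = photometric_stereo_pixel(a, L)
```

With noise-free synthetic intensities the recovered albedo and normal match the simulated values; with more than three lights the extra rows of L make the solve overdetermined and more robust, as noted above.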
IV. PROPOSED ITERATIVE RAY TRACING PHOTOMETRIC STEREO METHOD (IRT-PS)

In nature, when we illuminate a surface, light not only reflects towards the viewer but also among all surfaces in the environment. This is always true, with the exception of scenes that consist only of a single convex surface. In general, scenes include concave surfaces where points reflect light between themselves. Furthermore, inter-reflections can occur due to the environment and can appreciably alter a scene's appearance. In figure 2, to simulate the inter-reflections, a sphere is placed within the Cornell box [32], highlighting the inter-reflections, i.e. the sphere receives colours from its environment.

(a) (Left) Image with no inter-reflection, (Middle) image with inter-reflection from the environment only, (Right) combined image. (b) (Left) Image with no inter-reflection, (Middle) image with inter-reflection from concavity only, (Right) combined image.
Fig. 2: Example images of inter-reflection from the environment and from concavity.

Existing computer vision algorithms do not account for the effects of inter-reflections and hence often produce erroneous results. The algorithms directly affected by inter-reflections are the shape-from-intensity algorithms, including Photometric Stereo. Due to the common assumption of single surface reflections (direct illumination) and the disregard of higher-order terms (inter-reflections, a subset of global illumination), photometric methods produce erroneous results when applied to open scenes.

The first stage of this approach (stage 0) is performed only once throughout the process and involves the acquisition of the initial input images. It is assumed that inter-reflections are present and that the captured surface is within a known environment; in our case, a Cornell box.

Moving to the following stage, PS is applied to the images acquired at stage 0 using equation (4) to obtain the initial albedo ρ_t and normals n_t. Integrating over the obtained normals, a 3D surface H_t is obtained using the M-estimator technique. This initial surface, which is affected by the presence of the inter-reflections, becomes the input to the following stage, which involves the proposed reverted ray tracing algorithm.

As the environment information is known prior to reconstruction, we can model the environment. The Cornell box was set up as the environment in the following stage 3. More realistic textures can be used for the walls without affecting the proposed methodology.

In stage 4, we simulate the environment, assuming the Cornell box is given or estimated. This approach can be extended to other realistic environmental projections, such as Hemispherical Dome Projection [39], without affecting the proposed methodology. Then we place the generated surface H_t within this environment.

Fig. 3: An overview of the proposed IRT-PS algorithm.

In the following stage, based on equation (7), the reverted ray tracing algorithm is applied.
Since we are only interested in inter-reflections, only the indirect illumination is calculated. To implement the ray tracer for Lambertian surfaces, we solve the rendering equation using a Monte Carlo estimator:

L(p, w_o) = ∫_Ω f(p, w_o, w_i) L_i(p, w_i) cos θ_i dw_i   (5)

where L(p, w_o) is the total outgoing radiance reflected at p along direction w_o, L_i(p, w_i) is the radiance incident at p along direction w_i, and f(p, w_o, w_i) determines how much radiance is reflected at p in direction w_o due to irradiance incident at p along direction w_i. The cos θ_i term comes from Lambert's cosine law: diffuse reflection is directly proportional to the cosine of the angle between the normal and the incident illumination. Finally, ∫_Ω dw_i is an integral over the hemisphere.

Fig. 4: Sample images from Stage 0 with inter-reflections due to the environment.

The Monte-Carlo approximation estimates the expectation of a random variable using samples:

E(X) ≈ (1/N) Σ_{i=1}^{N} X_i   (6)

where E(X) is the approximated average value of the random variable X and N is the sample size. Applying this to equation (5), we solve the rendering equation with:

⟨L(p, w_o)⟩ = (1/N) Σ_{i=1}^{N} f(p, w_o, w_i) L_i(p, w_i) cos θ_i / p(w_i)   (7)

However, the Monte-Carlo estimator is affected by noise, and the ray tracing algorithm inherits this problem. For example, to halve the noise in an image rendered by ray tracing, we need to quadruple the number of samples.

To estimate the environmental colour, we first hit the surface H_t with rays from each pixel, consider techniques such as hemisphere sampling, and randomly reflect the rays towards the environment. As a result, images of the environment are captured for the various levels/depths of ray reflection. In this study, we only use up to 3 reflection rays (1 to 3) with just a single sample, as shown in figure 5.
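The estimator in equation (7) can be illustrated with a minimal sketch for a Lambertian BRDF f = ρ/π under a constant incident radiance field, using uniform hemisphere sampling with pdf p(w_i) = 1/(2π); the function names and the toy radiance field are our own illustrative choices, not the paper's code:

```python
import numpy as np

def sample_hemisphere(rng):
    """Uniform direction on the unit hemisphere about +z; pdf p(w) = 1/(2*pi)."""
    z = rng.random()                     # cos(theta), uniform in [0, 1)
    r = np.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * np.pi * rng.random()
    return np.array([r * np.cos(phi), r * np.sin(phi), z])

def estimate_outgoing_radiance(albedo, incident, n_samples, rng):
    """Monte Carlo estimator of Eq. (7): average f * L_i * cos(theta) / p(w_i)."""
    total = 0.0
    f = albedo / np.pi                   # Lambertian BRDF is constant
    pdf = 1.0 / (2.0 * np.pi)            # uniform hemisphere sampling
    for _ in range(n_samples):
        w_i = sample_hemisphere(rng)
        total += f * incident * w_i[2] / pdf   # w_i[2] = cos(theta_i)
    return total / n_samples

# For constant incident radiance c, Eq. (5) integrates to albedo * c,
# so the estimate should approach 0.6 for albedo = 0.6, c = 1.
rng = np.random.default_rng(0)
est = estimate_outgoing_radiance(0.6, 1.0, 200_000, rng)
```

The slow 1/√N error decay of this estimator is exactly the noise behaviour mentioned above: halving the error requires quadrupling the sample count.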
Because we are not calculating all the ray reflections within the environment, there will be pixel locations without intensity values; an example can be seen in figure 6. Therefore, we use a non-uniform interpolation algorithm [42] to approximate the missing values in the obtained environmental intensity images E_rt, where r corresponds to the number of ray reflections.

Fig. 5: Extraction of environment intensities in 3 different ways: (a) only extract the colour (c1), (b) reflect the ray one time and combine the intensities (c1·c2), and (c) reflect one more time and combine all the colours (c3·c2·c1).

Fig. 6: Sample images of the environment colour captured by rays R1-R3 and their interpolated images: (a) environment colour extracted for the sphere, (b) the interpolated images of the environment colour.

In figure 6, we see that the more a ray reflects, the less bright the pixels become. The main reason for this phenomenon is that, in the ray tracing algorithm, the first ray r_1 has more influence on the final pixel intensity than the ray r_2. Therefore, when we have more ray reflections, the intensity of the pixels is reduced accordingly.

In stage 5, we generate the new input images A_{t+1} = A_t − E_rt by subtracting the environmental intensity, reducing the inter-reflections in the original input images. There are three different sets of images, one for each ray reflection r_1, r_2 and r_3. Finally, the obtained images, which contain fewer inter-reflections (an example difference image is shown in figure 7), are used as input to photometric stereo, generating a new surface H_{t+1}. The whole process can be applied iteratively for a certain number of iterations, or until the difference D_H = H_{t+1} − H_t between a new 3D surface and the previous one is less than a given threshold.
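The iterative correction described above can be sketched as the following outer loop. The stand-ins for the photometric stereo and reverted ray tracing stages are illustrative toys (not the authors' code), chosen only so that the subtraction and convergence test can be exercised end to end:

```python
import numpy as np

def irt_ps_iterate(images, estimate_surface, estimate_environment,
                   threshold=1e-3, max_iters=10):
    """Sketch of the iterative scheme: subtract the estimated environmental
    intensity E_rt from the inputs (A_{t+1} = A_t - E_rt), re-run photometric
    stereo, and stop when the surface change D_H = H_{t+1} - H_t drops below
    a threshold."""
    A_t = np.asarray(images, dtype=float)
    H_t = estimate_surface(A_t)
    for _ in range(max_iters):
        E_rt = estimate_environment(H_t, A_t)   # reverted ray tracing stage
        A_t = np.clip(A_t - E_rt, 0.0, None)    # corrected input images
        H_next = estimate_surface(A_t)          # photometric stereo stage
        if np.max(np.abs(H_next - H_t)) < threshold:
            H_t = H_next
            break
        H_t = H_next
    return H_t, A_t

# Toy stand-ins: the "surface" is the mean image, and the environment term
# is a leak proportional to the remaining excess intensity, so the loop
# converges geometrically towards 0.4.
surf = lambda A: A.mean(axis=0)
env = lambda H, A: 0.5 * np.maximum(A - 0.4, 0.0)
images = np.full((4, 8, 8), 0.8)
H, A = irt_ps_iterate(images, surf, env)
```

In the real pipeline the two callables would be a full PS solver (equation (4) plus normal integration) and the reverted Monte-Carlo ray tracer; the loop structure is unchanged.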
V. EXPERIMENTS AND RESULTS

In our comparative evaluation study, three different datasets with ground truth were used: scan data from the Harvard PS dataset [13], a dataset with faces [44], and synthetic data generated from simulated objects (see figures 8 and 11).

We used the photometric stereo approach to reconstruct the sets of acquired surfaces H_t, with and without inter-reflections, considering different numbers (1 to 3) of ray reflections in the proposed reverted Monte-Carlo ray tracing algorithm. We then estimated the height, albedo and normal errors in comparison to the classic PS method [38], using the available ground truth.

Fig. 7: (Left) Image with inter-reflections, (Right) estimated environmental intensity image and (Bottom) obtained image without inter-reflections.

Fig. 8: Ground truth used for rendering and evaluation purposes: synthetic Matlab sphere, Harvard photometric data, scan data.

Fig. 9: Image samples with rendered inter-reflections.

To calculate the height error we used the equation

H_err = (1/n) Σ_{i=1}^{n} |H_GT − H_t|_i   (8)

where H_err is the mean height error, H_GT is the height value of the ground truth surface, and H_t is the height value of the reconstructed surface. Regarding the albedo error, we use the equations below:

P_r_err = |P_r_GT − P_r_H|,  P_g_err = |P_g_GT − P_g_H|,  P_b_err = |P_b_GT − P_b_H|,
P_rgb_err = P_r_err + P_g_err + P_b_err   (9)

where P_rgb_err is the albedo error, obtained as the sum of the mean errors of the individual colour channels: red P_r_err, green P_g_err, and blue P_b_err. Likewise, to calculate the normal error we utilise the following equations:

N_x_err = |N_x_GT − N_x_H|,  N_y_err = |N_y_GT − N_y_H|,  N_z_err = |N_z_GT − N_z_H|,
N_xyz_err = N_x_err + N_y_err + N_z_err   (10)

where N_xyz_err denotes the mean normal error over the x, y and z axes, with N_x_err, N_y_err and N_z_err the mean errors for the X, Y and Z components of the reconstructed surface normals, respectively.
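Equations (8)-(10) amount to mean absolute errors, summed over colour channels or normal components; a minimal sketch (our own illustrative code, with a small synthetic example using constant per-channel offsets):

```python
import numpy as np

def height_error(H_gt, H_rec):
    """Eq. (8): mean absolute height error over all n pixels."""
    return float(np.mean(np.abs(H_gt - H_rec)))

def albedo_error(P_gt, P_rec):
    """Eq. (9): sum of the mean absolute errors of the R, G, B channels."""
    return float(sum(np.mean(np.abs(P_gt[..., c] - P_rec[..., c]))
                     for c in range(3)))

def normal_error(N_gt, N_rec):
    """Eq. (10): sum of the mean absolute errors of the x, y, z components."""
    return float(sum(np.mean(np.abs(N_gt[..., c] - N_rec[..., c]))
                     for c in range(3)))

# Small worked example with constant offsets per channel/axis.
H_gt, H_rec = np.zeros((4, 4)), np.full((4, 4), 0.25)
P_gt, P_rec = np.zeros((4, 4, 3)), np.full((4, 4, 3), 0.1)
N_gt = np.zeros((4, 4, 3))
N_rec = np.tile(np.array([0.1, 0.2, 0.3]), (4, 4, 1))
h_err = height_error(H_gt, H_rec)    # 0.25
a_err = albedo_error(P_gt, P_rec)    # 0.1 + 0.1 + 0.1 = 0.3
n_err = normal_error(N_gt, N_rec)    # 0.1 + 0.2 + 0.3 = 0.6
```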
Fig. 10: Example of the estimated albedo using classic PS [38], and the proposed IRT-PS method using 1-, 2- and 3-ray reflections.

From table I and the charts in figure 11, we can see that the overall trend of the mean height, albedo, and normal errors is that they are reduced with our approach compared to the classic photometric stereo one. In table I, the text highlighted in red gives the average overall results of the photometric stereo method of [38], whereas the best results from our IRT-PS approach are highlighted in green. From the charts in figure 11, we can see the general trend of the height error: results improve with each additional ray, and the best result is achieved by ray 3. Likewise, the best results for the albedo and normal errors are given by ray 2.

TABLE I: Obtained results for the synthetic data, the Harvard and the face PS databases, comparing the [38] method with the 3 variations of the proposed IRT-PS approach (IRT-PS r1, r2, r3), reporting height, albedo and normal errors per dataset.

The indirect illumination captured by rays R3 and R2 of the environment was able to reduce the inter-reflection effect in the original images. Furthermore, looking at the overall table and comparing to PS [38], we again see that our method improves all the estimates. The greatest improvement can be seen in the height error, followed by the normal error, and finally the albedo error. This shows that improving the captured indirect illumination should result in more accurate and detailed reconstructed surfaces.

VI. CONCLUSIONS
In this work, a novel iterative method considering inter-reflections both due to concavities and due to the environment was proposed. The IRT-PS approach iteratively applies Photometric Stereo and a reverted Monte-Carlo ray tracing algorithm, reconstructing the observed surface and separating the indirect from the direct lighting. A comparative study was performed evaluating the reconstruction accuracy of the proposed solution on three different datasets, and the overall results demonstrate an improvement over the classic approaches that do not consider environmental inter-reflections.

ACKNOWLEDGEMENTS

This work is co-funded by NATO within the WITNESS project under grant agreement number G5437. The Titan X Pascal used for this research was donated by the NVIDIA Corporation.
Fig. 11: Overall results of the performed experiments, demonstrating that r1 and r3 are the best methods for albedo and height estimation, respectively. (a) Overall results for the synthetic database: (left) height error, (middle) albedo error, (right) normal error. (b) Overall results for the face database [44]: (left) height error, (middle) albedo error, (right) normal error. (c) Overall results for the Harvard database [13]: (left) height error, (middle) albedo error, (right) normal error.

REFERENCES
[1] V. Argyriou and M. Petrou. Photometric stereo: an overview. Adv. Imaging Electron. Phys., 156:1-54, 2009.
[2] R. Bajcsy, S. W. Lee, and A. Leonardis. Detection of diffuse and specular interface reflections and inter-reflections by color image segmentation. International Journal of Computer Vision, 17:241-272, 1996.
[3] S. Barsky and M. Petrou. Shadows and highlights detection in 4-source colour photometric stereo. ICIP, 3:967-970, October 2001.
[4] R. Basri, D. Jacobs, and I. Kemelmacher. Photometric stereo with general, unknown lighting. IJCV, 72(3):239-257, 2007.
[5] P. N. Belhumeur and D. J. Kriegman. What is the set of images of an object under all possible lighting conditions? In CVPR, 1996.
[6] V. Bloom, D. Makris, and V. Argyriou. Clustered spatio-temporal manifolds for online action recognition. Pages 3963-3968, Aug 2014.
[7] M. Chandraker, S. Agarwal, and D. Kriegman. ShadowCuts: Photometric stereo with shadows. CVPR, June 2007.
[8] T. Chen, H.-P. Seidel, and H. P. A. Lensch. Modulated phase-shifting for 3D scanning. Pages 1-8, 2008.
[9] E. Coleman and R. Jain. Obtaining 3-dimensional shape of textured and specular surfaces using four-source photometry. CVGIP, 18:309, 1982.
[10] C. H. Esteban, G. Vogiatzis, and R. Cipolla. Multiview photometric stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30:548-554, 2008.
[11] G. Finlayson, M. Drew, and C. Lu. Intrinsic images by entropy minimisation. In Proc. ECCV, pages 582-595, 2004.
[12] J. D. Foley, A. van Dam, S. K. Feiner, and J. F. Hughes. Computer Graphics: Principles and Practice, 2nd edition. 1990.
[13] R. T. Frankot and R. Chellappa. A method for enforcing integrability in shape from shading algorithms. IEEE Trans. PAMI, 10(4):439-451, Jul 1988.
[14] B. V. Funt and M. S. Drew. Color space analysis of mutual illumination. IEEE Trans. Pattern Anal. Mach. Intell., 15:1319-1326, 1993.
[15] M. Gupta, Y. Tian, S. G. Narasimhan, and L. Zhang. (De)focusing on global light transport for active scene recovery. Pages 2969-2976, 2009.
[16] H. Hayakawa. Photometric stereo under a light source with arbitrary motion. 2002.
[17] S. Herbort and C. Wöhler. An introduction to image-based 3D surface reconstruction and a survey of photometric stereo methods. 2011.
[18] M. Hicks, B. J. Buratti, J. W. Nettles, M. Staid, J. Sunshine, C. Pieters, S. Besse, and J. M. Boardman. A photometric function for analysis of lunar images in the visual and infrared based on Moon Mineralogy Mapper observations. 2011.
[19] M. Holroyd, J. Lawrence, and T. E. Zickler. A coaxial optical scanner for synchronous acquisition of 3D geometry and surface reflectance. ACM Trans. Graph., 29:99:1-99:12, 2010.
[20] B. Horn. Understanding image intensities. Artificial Intelligence, 8(11):201-231, 1977.
[21] K. Ikeuchi. Determining surface orientations of specular surfaces by using the photometric stereo method. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-3:661-669, 1981.
[22] D. S. Immel, M. F. Cohen, and D. P. Greenberg. A radiosity method for non-diffuse environments. In SIGGRAPH, 1986.
[23] W. Jarosz, H. W. Jensen, and C. Donner. Advanced global illumination using photon mapping. In SIGGRAPH '08, 2008.
[24] J. T. Kajiya. The rendering equation. In SIGGRAPH, 1986.
[25] D. Konstantinidis, T. Stathaki, V. Argyriou, and N. Grammalidis. Building detection using enhanced HOG-LBP features and region refinement processes. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(3):888-905, March 2017.
[26] B. Lamond, P. Peers, A. Ghosh, and P. Debevec. Image-based separation of diffuse and specular reflection using environmental structured illumination. 2009 IEEE International Conference on Computational Photography.
[27] M. S. Langer. When shadows become interreflections. International Journal of Computer Vision, 34:193-204, 1999.
[28] M. Levine and J. Bhattacharyya. Removing shadows. Pattern Recognition Letters, 26(3):251-265, 2005.
[29] S. Nayar, K. Ikeuchi, and T. Kanade. Determining shape and reflectance of hybrid surfaces by photometric sampling. IEEE T. RA, 6(4):418-431, 1990.
[30] S. K. Nayar, K. Ikeuchi, and T. Kanade. Shape from interreflections. International Journal of Computer Vision, 6:173-195, 1990.
[31] S. K. Nayar, G. Krishnan, M. D. Grossberg, and R. Raskar. Fast separation of direct and global components of a scene using high frequency illumination. ACM Trans. Graph., 25:935-944, 2006.
[32] S. Niedenthal. Learning from the Cornell box. Leonardo, 35:249-254, 2002.
[33] R. Onn and A. M. Bruckstein. Integrability disambiguates surface recovery in two-image photometric stereo. International Journal of Computer Vision, 5:105-113, 1990.
[34] H. Ragheb and E. Hancock. A probabilistic framework for specular shape-from-shading. Pattern Recognition, 36(2):407-427, 2003.
[35] F. Remondino. Image-based 3D modelling: a review. 2006.
[36] B. Shi, Y. Matsushita, Y. Wei, C. Xu, and P. Tan. Self-calibrating photometric stereo. Pages 1118-1125, 2010.
[37] F. Solomon and K. Ikeuchi. Extracting the shape and roughness of specular lobe objects using four light photometric stereo. IEEE Trans. PAMI, 18(4):449-454, 1996.
[38] J. Sun, M. Smith, L. Smith, S. Midha, and J. Bamber. Object surface recovery using a multi-light photometric stereo technique for non-Lambertian surfaces subject to shadows and specularities. Image and Vision Computing, 25(7):1050-1057, July 2007.
[39] P. B. Swinburne. Spherical mirror: a new approach to hemispherical dome projection. 2005.
[40] P. Tan, S. Lin, and L. Quan. Subpixel photometric stereo. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30:1460-1471, 2008.
[41] A. Tankus and N. Kiryati. Photometric stereo under perspective projection. Tenth IEEE International Conference on Computer Vision (ICCV'05), 1:611-616, 2005.
[42] P. Thévenaz, T. Blu, and M. Unser. Image interpolation and resampling. 1999.
[43] S. Tozza, R. Mecca, M. Duocastella, and A. D. Bue. Direct differential photometric stereo shape recovery of diffuse and specular surfaces. Journal of Mathematical Imaging and Vision, 56:57-76, 2016.
[44] V. Argyriou and M. Petrou. Recursive photometric stereo when multiple shadows and highlights are present. Proceedings of CVPR, 2008.
[45] R. Woodham. Photometric stereo: A reflectance map technique for determining surface orientation from image intensity.