Photometric Stereo by Hemispherical Metric Embedding
Journal of Mathematical Imaging and Vision manuscript No. (will be inserted by the editor)
Ofer Bartal · Nati Ofir · Yaron Lipman · Ronen Basri
Received: 10.8.16 / Accepted: 21.6.17
Abstract
Photometric stereo methods seek to reconstruct the 3D shape of an object from motionless images obtained with varying illumination. Most existing methods solve a restricted problem where the physical reflectance model, such as Lambertian reflectance, is known in advance. In contrast, we do not restrict ourselves to a specific reflectance model. Instead, we offer a method that works on a wide variety of reflectances. Our approach uses a simple yet uncommonly used property of the problem: the sought-after normals are points on a unit hemisphere. We present a novel embedding method that maps pixels to normals on the unit hemisphere. Our experiments demonstrate that this approach outperforms existing manifold-learning methods for the task of hemisphere embedding. We further show successful reconstructions of objects with a wide variety of reflectances, including smooth, rough, diffuse and specular surfaces, even in the presence of significant attached shadows. Finally, we show empirically that under these challenging settings we obtain more accurate shape reconstructions than existing methods.
Keywords
Photometric Stereo · Shape from Shading · Embedding · Manifold Learning
Ofer Bartal, Weizmann Institute of Science. E-mail: [email protected]
Nati Ofir, Weizmann Institute of Science. E-mail: yehonatan.ofi[email protected]
Yaron Lipman, Weizmann Institute of Science. E-mail: [email protected]
Ronen Basri, Weizmann Institute of Science. E-mail: [email protected]
1 Introduction

Photometric stereo (PS) methods aim to recover the shape of objects from collections of motionless images taken with varying illumination. PS is challenging for a number of reasons. In particular, object reflectance can range from Lambertian to specular with varying degrees of roughness. In addition, surrounding lights may be complex and include multiple light sources along with reflections from surrounding objects. Furthermore, in many practical cases neither the light setting nor the reflectance properties will be known. While most existing methods (see the review in Section 2) apply PS in restricted settings, for example, assuming Lambertian reflectance or point-source lighting, there is growing interest in developing approaches to PS that can handle the problem under more general settings.

This paper takes a generic view of PS. PS can be thought of as the problem of discovering a mapping (or embedding) of intensity measurements to surface normals. Each pixel is associated with a vector of its intensity values in the n input images. Our approach aims to embed these intensity vectors onto the hemisphere of forward-facing normal vectors using only local distances between the intensity vectors. We then use the surface normals to recover a depth map of the object by integration, as done in [6]. Our main contribution in this work is in introducing a novel metric embedding technique which is specifically designed to target the hemisphere. We show that our technique outperforms existing embedding techniques for the task of hemisphere embedding. We further show empirically that by using some general statistical assumptions on the lighting, our embedding technique allows the recovery of shapes under a variety of lighting conditions and reflectance properties, and is robust to attached shadows. We illustrate this on real images by showing reconstructions of Lambertian as well as non-Lambertian surfaces, acquired under unknown complex
lighting conditions that include point sources as well as varying ambient illumination.

Generic embedding methods for high-dimensional point clouds are well known in the machine learning and multidimensional scaling (MDS) literature as effective methods for dimensionality reduction (e.g., [40,36,41]). These methods map a manifold residing in a high-dimensional space into a low-dimensional Euclidean target space, which is typically chosen to be the intrinsic dimension of the manifold. Embedding to a hemisphere is inherently different: while the intrinsic dimension of a hemisphere is 2, its ambient or "target" dimension is 3, as we wish to preserve the hemisphere structure and not flatten it into a 2-dimensional space. We show that for existing methods, embedding a hemisphere manifold into 3 dimensions leads to distortions. For some algorithms, these distortions occur even in the trivial identity mapping of a hemisphere that already lies in a 3-dimensional space. Our method is specifically designed for this kind of embedding and allows direct recovery of surface normals.

To realize this embedding we use distances between intensity vectors to approximate the Laplace-Beltrami (LB) differential operator over the hemisphere by solving an optimization problem. We then make the observation that, with suitable boundary conditions on the hemisphere's equator, the eigenfunctions of the LB operator are the spherical harmonic functions. The linear spherical harmonics correspond exactly to the X, Y, and Z coordinates of the normals on the hemisphere. The idea of representing lighting with spherical harmonics is described in detail in [7]. The key observation is that since the Laplacian is a local operator, a good approximation of local distances leads to a robust global embedding over the hemisphere.
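The idea that Laplacian eigenvectors serve as coordinates can be illustrated in a one-dimensional analogue (a sketch of ours, not part of the paper): for points sampled on a circle, the first two non-constant eigenvectors of a nearest-neighbor graph Laplacian recover the circle's (x, y) coordinates up to rotation, reflection and scale, just as the hemisphere's LB eigenfunctions recover the normals' coordinates.

```python
import numpy as np

# Ring graph: each of N points on a circle is connected to its two
# immediate neighbors with unit weight.
N = 200
W = np.zeros((N, N))
for p in range(N):
    for q in ((p - 1) % N, (p + 1) % N):
        W[p, q] = 1.0
L = np.diag(W.sum(axis=1)) - W        # graph Laplacian L = D - W

vals, vecs = np.linalg.eigh(L)
emb = vecs[:, 1:3]                    # second and third eigenvectors as (x, y)

# The embedded points all lie at the same radius, i.e., on a circle.
radii = np.linalg.norm(emb, axis=1)
assert np.allclose(radii, radii[0])
```

The smallest eigenvalue corresponds to the constant vector, mirroring the constant first eigenfunction of the Neumann operator discussed later.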
2 Previous Work

Early methods of photometric stereo focused on handling Lambertian surfaces under known distant point light sources [43]. More advanced methods address the case of a known [45] or unknown [30] nearby point light source. In addition, other works solve the problem of unknown distant point light sources and Lambertian surfaces. [16] solves the problem up to a linear ambiguity. By further enforcing integrability, the ambiguity reduces to GBR [8,46]. [3,28,32,37] use a prior on surface albedo or shape to resolve the GBR, while [29] resolves it using a perspective camera model. Further work generalizes lighting to multi-point or arbitrary lighting [6,31], or generalizes to non-Lambertian reflectance with either known [17,18] or unknown lighting [13,14,23,25,27,35,44]. [1,2] extend PS to outdoor webcam images and [15] couples PS with other techniques. For integrating normals, [9,12,38] use a depth map, while [20,33,24,42] avoid integration by following a direct differential approach to PS. This idea was also applied to other computer vision problems [39]. [9,10] present robust methods for surface integration in the presence of discontinuities.

Sato et al. [35] have used a generic embedding technique for photometric stereo, employing the Isomap algorithm for the embedding [40]. In [23], Lu et al. employ an iterative approach that is based on matrix decomposition with missing data. Their method requires the estimation of a reflectance-based scaling parameter. They provide extensive experiments to show the behavior of this parameter under various reflectances. We complement their work by providing a proof of the existence of this parameter for general isotropic, band-limited and single-lobe reflectance kernels, which explains and further validates their empirical findings. These band-limited kernels are typically diffuse kernels with no specular component. In addition, we take a general hemisphere embedding approach that avoids the need to estimate any such reflectance-based parameter.
Furthermore, we show empirically that our method achieves more accurate reconstructions in a variety of simulated and real experiments.

Hemisphere and sphere embeddings have also been found useful in other contexts. Dimensionality reduction onto a (full) sphere was used in [11] for the problem of tomographic reconstruction from unknown viewing angles, and in [22] for the problem of spherical embeddings of silhouettes. They restrict the Locally Linear Embedding [36] to the sphere by formulating the problem as a hard-to-optimize quadratic program with non-convex cubic constraints. In contrast, our method embeds into the (hemi)sphere by solving an eigenvector problem and is guaranteed to find the correct embedding up to approximation errors, without the risk of finding a local minimum. Moreover, it does not suffer from challenging conditions such as attached shadows and varying albedo.
Our approach views photometric stereo as a problem of metric embedding. We are given a set of input images $I_1, I_2, \ldots, I_n$, each with N pixels, of the same object seen from the same viewing direction but illuminated with different directional light sources. For each pixel p we want to find its normal $n_p \in \hat{S}$, where $\hat{S} = \{(x, y, z) \mid x^2 + y^2 + z^2 = 1,\ z \ge 0\}$ denotes the unit hemisphere representing the set of forward-facing surface normals (below we call this the normal hemisphere). These normals $\{n_p\}$ form a discrete sampling of the normal hemisphere $\hat{S}$. For each pixel p we denote by the vector $v_p$ the set of intensities observed at that pixel over the n images.

It has been shown in [35,23] that when the images are produced with single directional light sources whose direction is distributed uniformly over the unit sphere and whose intensity is constant, normalized $\ell_2$ distances of nearby intensity vectors approximate fairly well the corresponding spherical distances between the normals, namely

$\|\hat{v}_p - \hat{v}_q\| \approx d(n_p, n_q)$,

where $\hat{v}_p = v_p / \|v_p\|$ and $\hat{v}_q = v_q / \|v_q\|$. Since the albedo of the surface is a multiplicative factor of the intensities, this normalization factors it out. Note that we use the notion of albedo to refer to the diffuse component of more complex reflectances. The albedo is an attribute of the object, not of the lighting, and can change from pixel to pixel. Note that since we approximate the spherical distance between nearby normals, the shapes that can be recovered by our approach are those with enough samples of normals from all sides of the hemisphere. We extend the analysis done in [35] to reflectances produced by an isotropic, band-limited and single-lobe reflectance kernel.
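The normalization that factors out albedo can be sketched as follows (a minimal illustration of ours; the function name and the synthetic image stack are assumptions, not part of the paper):

```python
import numpy as np

def normalized_intensity_vectors(images):
    """Stack n images of shape (n, H, W) into one intensity vector per
    pixel and normalize each vector to unit length. Since albedo is a
    multiplicative factor of the intensities, normalization cancels it."""
    n = images.shape[0]
    V = images.reshape(n, -1).T                       # (N, n): v_p per pixel
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    return V / np.maximum(norms, 1e-12)               # v_hat_p = v_p / ||v_p||

# Two pixels that differ only in albedo get identical normalized vectors.
rng = np.random.default_rng(0)
imgs = rng.random((8, 4, 4))
scaled = imgs.copy()
scaled[:, 0, 0] *= 3.0              # triple the albedo of pixel (0, 0)
vh = normalized_intensity_vectors(imgs)
vh_scaled = normalized_intensity_vectors(scaled)
assert np.allclose(vh[0], vh_scaled[0])
```

Pairwise distances between the rows of the returned matrix then serve as the approximate local spherical distances used throughout the paper.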
This extends the result to images obtained with general light-source configurations and to objects made of some non-Lambertian materials. With this setup we prove the following claim.

Claim
Under uniformly distributed directional light sources and an isotropic, band-limited and single-lobe reflectance kernel, the $\ell_2$ distances of normalized intensity vectors $\hat{v}_p$ and $\hat{v}_q$ of points p and q with nearby normals $n_p$ and $n_q$ are proportional to the spherical distance $d(n_p, n_q)$ between the corresponding surface normals, i.e.,

$\|\hat{v}_p - \hat{v}_q\| = c \cdot d(n_p, n_q)$,   (1)

where the constant c depends on the reflectance kernel.

Proof
See appendix.

Our proof relies on a spherical harmonic decomposition of the reflectance function. [7,34] showed that images of Lambertian objects can be well approximated by a convolution of a low-frequency kernel with the incoming lighting function. Formally, let $\ell(u) : S \to \mathbb{R}$ represent the incoming light intensity as a function of direction (a vector on the unit sphere S); then the image intensity $I(p)$ at a pixel p with normal $n \in S$ and albedo $\rho$ can be expressed as

$I(p) = \rho \int_{u \in S} k(u, n)\, \ell(u)\, du$,   (2)

where $k(u, n) = \max(u^T n, 0)$ is the Lambertian kernel. They further showed that k acts as a low-frequency kernel, and so all images of a Lambertian object, under arbitrarily complex lightings, are well approximated by a spherical harmonic expansion with just 9 low-order terms.

Our claim applies to Lambertian objects illuminated by arbitrary light sources when the set of light-source directions over the entire set of images is distributed uniformly over the unit sphere. It further applies to non-Lambertian objects whose reflectance can be described with an isotropic, single-lobe, band-limited kernel; see e.g. [26] for a list of such materials. Note that in either case it also accounts for attached (self) shadows.

Fig. 1 A neighborhood in the image and on the hemisphere:
Left: a pixel in the owl object marked in red and its neighborhood in blue. Right: the same pixel and neighborhood shown on the normal hemisphere. Far-away pixels in the image can map to nearby points on the hemisphere and vice versa.
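A discrete version of the image-formation model in eq. (2) replaces the integral by a sum over a set of directional lights; the sketch below (names are ours) makes explicit how the clamp at zero models attached shadows:

```python
import numpy as np

def lambertian_intensity(normal, albedo, light_dirs, light_ints):
    """Discrete analogue of eq. (2): intensity = albedo * sum over lights
    of k(u, n) * l(u), with the Lambertian kernel k(u, n) = max(u^T n, 0).
    The clamp at zero is what produces attached shadows."""
    cos = np.maximum(light_dirs @ normal, 0.0)
    return albedo * np.dot(cos, light_ints)

z = np.array([0.0, 0.0, 1.0])
lights = np.array([[0.0, 0.0, 1.0],
                   [0.0, 0.0, -1.0]])   # one light above, one below
ints = np.array([1.0, 1.0])

# Each normal sees only the light in its own half-space; the opposite
# light contributes nothing (attached shadow).
assert np.isclose(lambertian_intensity(z, 1.0, lights, ints), 1.0)
assert np.isclose(lambertian_intensity(-z, 1.0, lights, ints), 1.0)
```

This clamping is exactly why the constant c in Claim 1 turns out to be slightly below 1 for Lambertian reflectance, as discussed next.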
Our claim complements the extensive empirical experiments conducted in [23], which show that a constant as in (1) empirically exists for a wide variety of reflectances. Unlike [23], we do not need to estimate this parameter. Its effect on the hemisphere manifold structure is a uniform global scaling, effectively making it a smaller or a larger hemisphere. As we take a general hemisphere embedding approach, we are indifferent to this kind of uniform scaling. Nonetheless, we prove the following claim:
Claim
The reflectance-dependent constant c can be computed analytically for Lambertian reflectance, and is approximately 0.93.

Proof
See appendix for proof. Intuitively, one might expect c to equal 1 for Lambertian reflectance. Since Lambertian reflectance is non-smoothly clamped at zero (recall that the Lambertian reflectance kernel, $k(u, n) = \max(u^T n, 0)$, is clamped at zero due to attached shadows), intensity differences become smaller than they would have been for an unclamped Lambertian reflectance function. This effect brings the intensity distance of two pixels closer compared to their geodesic distance. The magnitude of this effect is analytically computed in our proof.

Similarly to [35], we define the neighborhood $N_p = \{q_1, \ldots, q_k\}$ of a pixel p by taking the k nearest neighbors w.r.t. the metric $\|\hat{v}_p - \hat{v}_q\|$. Locally, this neighborhood should be similar to the neighborhood of $n_p$ defined on the hemisphere using the spherical distances. Figure 1 shows an example neighborhood of pixels. As can be seen, pixels with close normals may be spread far away in terms of image coordinates. Also, it may happen that many pixels in the image occupy a small area on the normal hemisphere.

This local pixel information, like pieces of a puzzle, needs to be combined in some form to obtain the global hemisphere structure. We present a novel hemisphere embedding approach to tackle this problem. The outline of our approach is as follows. First, we use local distances between points on the manifold to build an approximation of the Laplace-Beltrami (LB) differential operator $\Delta$ over the hemisphere $\hat{S}$ while incorporating suitable boundary conditions. We then compute specific eigenfunctions of this operator. Finally, we show how to use these eigenfunctions to recover the unknown normals $n_p$. As motivation, we begin by considering our embedding based on the LB operator in the continuous case and later on, in Section 5, describe our approach in the discrete case.

The LB operator is a differential operator that is local.
This means that although we do not yet know where each $n_p$ is located on the hemisphere, knowing its neighbors and the spherical distances to them is enough to define the LB operator $\Delta$. Over the full unit sphere $S = \{(x, y, z) \mid x^2 + y^2 + z^2 = 1\}$, the first four eigenfunctions (e.g., a $\phi$ that satisfies $\Delta\phi = \lambda\phi$ with the smallest value of $\lambda$ is the first eigenfunction), also known as spherical harmonics, are (up to normalizing constants): $\phi_1(n_p) = 1$, $\phi_2(n_p) = n_p^x$, $\phi_3(n_p) = n_p^y$, and $\phi_4(n_p) = n_p^z$; see [19]. If the pixels' normals $n_p$ lay on the full sphere, we could approximate the LB operator over the sampling $\{n_p\}$ of the sphere, take the second to fourth eigenfunctions, set them as coordinates, and obtain the solution up to a global rotation and reflection. Since we are dealing with a hemisphere $\hat{S}$, we have to set the conditions on the equator of the hemisphere $\partial\hat{S} = \{(x, y, z) \mid x^2 + y^2 = 1,\ z = 0\}$ appropriately in order to retain the spherical harmonics as eigenfunctions.

Let $\Delta_D$ be the LB operator $\Delta$ with Dirichlet boundary conditions. Setting these conditions by forcing $\phi|_{\partial\hat{S}} = 0$ in $\Delta_D\phi = \lambda\phi$ will produce the spherical harmonics that vanish on the equator. In particular, the first eigenfunction would be the Z-coordinate. Similarly, let $\Delta_N$ be the LB operator with Neumann boundary conditions. Setting these conditions to force the normal derivative at the equator to vanish will produce all eigenfunctions that have this property over the equator. In particular, the first eigenfunction will be the constant function, followed by the X and Y coordinates, up to rotation and reflection. More generally, it can be shown that the set of spherical harmonics $Y_{nm}$, $n \ge 0$, $-n \le m \le n$, can be split into two sets.
The harmonics with odd $m + n$ form the eigenfunctions of the Dirichlet LB operator, and the harmonics with even $m + n$ form the eigenfunctions of the Neumann LB operator.

We calculate the first eigenfunction of $\Delta_D$ and call it $\tilde{n}_p^z$. Similarly, we calculate the second and third (first two non-constant) eigenfunctions of $\Delta_N$ and denote them by $\tilde{n}_p^x$ and $\tilde{n}_p^y$, respectively. We now claim that, up to a rotation about the Z-axis (possibly with a reflection in the XY-plane), $\tilde{n}_p = (\tilde{n}_p^x, \tilde{n}_p^y, \tilde{n}_p^z) = T n_p$. We later show how this orthogonal transformation T can be resolved using surface integrability constraints, and due to the normalization that factors out albedo.

5 The Photometric Stereo Algorithm

Using this continuous setting as motivation, we develop an algorithm that approximates the normals of pixels by first constructing a discrete version of the LB operator and then using its eigenvectors to recover the normals; see Alg. 1 for an outline. The local distances in the point cloud $\{\hat{v}_p\} \subset \mathbb{R}^n$ provide an approximation to the spherical distances between the corresponding (unknown) cloud of normal vectors $\{n_p\}$. Large distances, however, may deviate significantly from their corresponding spherical distances. We therefore think of this point cloud $\{\hat{v}_p\}$ as an (approximately) isometric (i.e., locally length preserving) embedding of the normal hemisphere vectors $\{n_p\} \subset \hat{S}$ into a higher (n-dimensional) Euclidean space.
Our goal is to embed the point cloud $\{\hat{v}_p\}$ back into its natural domain $\hat{S}$ using only the local distances.

5.1 Outline of the Photometric Stereo Algorithm

The outline of our algorithm is as follows:

1. Approximate local spherical distances and neighborhoods over the (unknown) cloud of normals $\{n_p\} \subset \hat{S}$ using local distances extracted from $\{\hat{v}_p\} \subset \mathbb{R}^n$. See Alg. 2.
2. Use these local spherical distances to build a global least-squares optimization problem whose solution is the discrete Laplace-Beltrami matrix $L \approx \Delta$ over the (unknown) cloud of normals $\{n_p\}$. See Alg. 3.
3. Apply appropriate Dirichlet and Neumann boundary conditions to the global optimization problem and solve it to obtain the matrices $L_D \approx \Delta_D$ and $L_N \approx \Delta_N$, respectively. See Alg. 4.
4. Recover the approximated normals $\tilde{n}_p = (\tilde{n}_p^x, \tilde{n}_p^y, \tilde{n}_p^z)$ for every pixel p by calculating the first eigenvector $\tilde{n}^z = \{\tilde{n}_p^z\}$ of $L_D$ and the first two non-constant eigenvectors $\tilde{n}^x$ and $\tilde{n}^y$ of $L_N$. Use the approximated normals to recover the depth map $z(x, y)$. See Alg. 5.

Next we describe each part of this algorithm in detail.

5.2 Local distances and neighborhoods

Since the LB operator is local, i.e., the value of the (continuous) LB operator at a point, $\Delta\Phi(n_p)$, is determined by the values $\Phi(n_q)$ at (infinitesimally) close points $n_q$, we only need approximations of the spherical distances between nearby pixels' normals $n_p \approx n_q$. These will be taken as the k nearest

Algorithm 1
PhotometricStereo(I_1, ..., I_n)
Require: image sequence I_1, ..., I_n, each image of the same size of N pixels, taken under different lightings.
  {N_p}, {v̂_p} ← Neighbourhoods(I_1, ..., I_n)
  L ← LaplacianApproximation({N_p}, {v̂_p})
  L_D, L_N ← BoundaryConditions(L)
  z(x, y) ← RecoverDepth(L_D, L_N)
  return z(x, y)

neighbors of $\hat{v}_p$, made symmetric by an AND relation; that is, two pixels are maintained as neighbors only if each is a neighbor of the other. Due to the reconstruction of local neighborhoods, the shapes that can be recovered by our approach are those with enough samples of normals from all sides of the hemisphere. In particular, smooth shapes whose boundaries in the image correspond to occluding contours of the surface satisfy this condition.

Algorithm 2
Neighbourhoods(I_1, ..., I_n)
Require: image sequence I_1, ..., I_n, each image of the same size of N pixels, taken under different lightings.
  for all p = (x, y) do
    v_p ← (I(x, y, 1), ..., I(x, y, n))
    v̂_p ← v_p / ‖v_p‖
  end for
  for all p, q: d_{p,q} ← ‖v̂_p − v̂_q‖
  for all p: N_p ← {q_1, ..., q_k}  {set the neighborhoods by computing the kNN of p according to the distances d_{p,q}}
  for all p, q do
    if q ∈ N_p ∧ p ∉ N_q then remove q from N_p end if
  end for
  return {N_p}, {v̂_p}

5.3 Approximating the Laplace-Beltrami operator

The outcome of this step is a discrete Laplacian matrix L of size $N \times N$, where N denotes the number of pixels in each image. If $\phi = \{\phi_p\}_{p=1}^N$ is a vector representing a function $\Phi : \hat{S} \to \mathbb{R}$ sampled over the normals, $\phi_p = \Phi(n_p)$, then $L\phi$ is the approximation of the LB operator applied to $\Phi$, that is,

$(L\phi)_p \approx \Delta\Phi(n_p)$,   (3)

where $(L\phi)_p$ is the p-th coordinate of the vector $L\phi$. At the core of our algorithm is the approximation of certain eigenfunctions of the Laplace-Beltrami operator over the point cloud of the (unknown) pixels' normals $\{n_p\}$. Since this point cloud is an irregular sampling of the hemisphere $\hat{S}$, constructing a reliable approximation to the LB operator over it is a key step in our algorithm.

Our approximation of the LB operator over the point cloud $\{n_p\}$ is motivated by the following observation (see, e.g., the lemma in [4], p. 107): in calculating the LB operator at some point $n_p$ on a sphere, one can replace the sphere by its tangent plane at $n_p$ and calculate the standard Euclidean Laplacian at $n_p$. Thus, in our construction, to approximate the LB operator at some point $n_p$ we reconstruct locally a small patch of the sphere in a neighborhood $N_p$ and parameterize it over its tangent plane. We then approximate the Euclidean planar Laplacian over that plane.
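The tangent-plane parameterization can be sketched with PCA, as in the following illustration (function names are ours; a synthetic planar point set stands in for a neighborhood of normalized intensity vectors):

```python
import numpy as np

def tangent_plane_coords(V_nbhd):
    """Approximate tangent-plane coordinates (u, v) for a neighborhood of
    normalized intensity vectors (rows of V_nbhd) by projecting the
    centered data onto its top two principal directions."""
    X = V_nbhd - V_nbhd.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)  # rows of Vt = PCs
    return X @ Vt[:2].T                                # (m, 2) coordinates

# Points lying on a 2-D plane inside R^6 are recovered isometrically
# (up to rotation/reflection), so pairwise distances are preserved.
rng = np.random.default_rng(1)
B = np.linalg.qr(rng.normal(size=(6, 2)))[0]   # orthonormal basis of a plane
P = rng.normal(size=(15, 2))
uv = tangent_plane_coords(P @ B.T)
d = lambda X: np.linalg.norm(X[:, None] - X[None, :], axis=2)
assert np.allclose(d(uv), d(P))
```

For a genuinely curved neighborhood the projection is only approximate, which is why the paper restricts it to small neighborhoods $N_p$.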
More specifically, if we denote by (u, v) the coordinates of the tangent plane at $n_p$, then our goal is to find a set of scalar weights $L_{p,q}$, $q \in N_p$, such that

$\sum_{q \in N_p} L_{p,q}\, \phi_q \approx \Phi_{uu}(n_p) + \Phi_{vv}(n_p) = \Delta\Phi(n_p)$,   (4)

where, as before, $\phi_p = \Phi(n_p)$ is a sampling vector of some smooth function over our hemisphere $\hat{S}$. Next we describe how to find the local tangent-plane approximation at $n_p$, and how to build the weights $L_{p,q}$ to approximate the planar Laplacian over that tangent plane.

Algorithm 3
LaplacianApproximation({N_p}, {v̂_p})
Require: the neighborhood N_p and the normalized intensity vector v̂_p of each pixel.
  for all p = (x, y) do
    for all q ∈ N_p: (n_q^u, n_q^v) ← PCA_{n→2}(v̂_q)  {reduce the dimension of v̂_q from n to 2 by applying PCA to the vectors v̂_q, q ∈ N_p (parameterization over the tangent plane)}
    r_p ← max_{q ∈ N_p} √((n_q^u − n_p^u)² + (n_q^v − n_p^v)²)
    if q ∉ N_p then w_{p,q} ← 0 end if
  end for
  Compute the weights w_{p,q} by solving:

    min Σ_{p, q ∈ N_p} w_{p,q}²
    s.t. Σ_{q ∈ N_p} w_{p,q} (n_q^u − n_p^u) = 0   ∀p   (5)
         Σ_{q ∈ N_p} w_{p,q} (n_q^v − n_p^v) = 0   ∀p   (6)
         Σ_{q ∈ N_p} w_{p,q} = r_p                 ∀p   (7)
         w_{p,q} ≥ 0                               ∀p, q ∈ N_p   (8)
         w_{p,q} = w_{q,p}                         ∀p, q ∈ N_p   (9)

  W ∈ ℝ^{N×N} ← (w_{p,q})  {the matrix whose entries are the weights w_{p,q}}
  D ∈ ℝ^{N×N} ← diag(d_{11} = Σ_q w_{1,q}, ..., d_{NN} = Σ_q w_{N,q})
  L ← D − W  {the discrete Laplacian matrix}
  return L

Local parameterization over tangent planes.
For each p, we would like to build a local tangent-plane parameterization to $\hat{S}$ at $n_p$. We construct this parameterization using the local neighborhood $N_p$. We project the normalized intensities $\hat{v}_q$, $q \in N_p$, onto a two-dimensional plane using Principal Component Analysis (PCA). The coordinates of the projected points approximate the coordinates on the tangent plane to $\hat{S}$.

LB weight construction.
Our next objective is to use our planar parameterization to construct the discrete Laplacian operator L. We ignore boundary conditions for now and treat them separately later. Each pixel p is associated with a row in L, and the entries of that row represent weights $L_{p,q}$, $q \in N_p$, so that a combination of function values $\Phi(n_q)$ with those weights approximates $\Delta\Phi(n_p)$, as described in equation (4). Our task therefore is to find those weights. Clearly, the weights should vary from one row to another, since the neighborhoods of the normals $n_p$ can vary. Below we describe a method to determine these sets of weights.

Let us note that our construction of the weights for a single pixel is equivalent to computing Locally Linear Embedding (LLE) [36] weights in the case that the $w_{p,q}$'s are under-determined (the number of neighbors is greater than d + 1, where d is the dimension of the points; in our case d = 2), while taking the limit of their conditioning parameter to zero. The main difference is that our formulation always maintains linear precision by reproducing the polynomials 1, u, v and their linear combinations (equations 5, 6, 7), uses non-negative weights (equation 8), minimizes the solution's norm, and is global and symmetric (equation 9). In terms of weights, locally finding the weights for one neighborhood $N_p$ is the same as using the pseudoinverse in LLE.

5.4 Boundary conditions

We incorporate both Dirichlet and Neumann boundary conditions along the hemisphere's equator $\partial\hat{S}$; both are needed to solve the embedding problem. Later in this section we explain how we identify points on the equator.

Dirichlet boundary condition.
For the Dirichlet boundary condition we only need to calculate weights for interior points $\hat{S} \setminus \partial\hat{S}$. We keep only the rows of L corresponding to interior points (not on the equator of the normal hemisphere). For every equator point p, we replace the corresponding row in L with a scaled standard basis vector that has some value $t \neq 0$ at position p and 0's elsewhere. We denote the new matrix $L_D$. If an eigenvector in the equation $L_D\phi = \lambda\phi$ satisfies $\lambda \neq t$, then necessarily $\phi(p) = 0$. In the generic case, taking an arbitrary t will lead to the desired eigenvectors, and we take t = 1. Let us denote by $\tilde{n}^z = \{\tilde{n}_p^z\}$ the eigenvector of $L_D$ corresponding to the smallest eigenvalue. To enforce this boundary condition we add the following constraints to constraints (5)-(9) in Alg. 3:

$w_{p,q} = 0 \quad \forall p \in \partial\hat{S},\ p \neq q$   (10)
$w_{p,q} = 1 \quad \forall p \in \partial\hat{S},\ p = q$   (11)

Fig. 2 Enforcing Neumann boundary conditions for a pixel: Left: a pixel on the equator (red) and its local neighborhood (blue) after projection onto 2D. Right: mirroring of points to enforce Neumann boundary conditions. Since each mirrored pixel attains the value of its original pixel, the derivative in the direction perpendicular to the equator is zero.

Neumann boundary condition.
To obtain Neumann boundary conditions we use symmetric reflection along the domain's boundary. As before, we do not alter the rows of L corresponding to interior points. For an equator pixel p, we duplicate its neighborhood $(n_q^u, n_q^v)$, $q \in N_p$, along the equator via reflection. To get the direction of the equator we use the eigenvector $\tilde{n}^z$ calculated with the Dirichlet boundary condition. Since this vector approximates the z-coordinate of the normals, its gradient is orthogonal to the boundary (equator). We therefore calculate its approximate gradient (using a linear fit) and rotate it by $\pi/2$ to obtain the equator direction. We then reflect the points $(n_q^u, n_q^v)$, $q \in N_p$, in the PCA plane along the equator line we have found, creating corresponding points $q_R$, such that the equator point $(n_p^u, n_p^v)$ lies exactly on the equator line. To enforce this boundary condition we add the following constraints to constraints (5)-(9) in Alg. 3:

$w_{p,q_R} = w_{p,q} \quad \forall p \in \partial\hat{S},\ q, q_R \in N_p$.   (12)

We denote the resulting operator $L_N$. Note that the reflection forces eigenvectors $L_N\phi = \lambda\phi$ to have zero normal derivative across the equator, since we force symmetry w.r.t. the equator. The kernel of $L_N$ contains the constant vector. The first two eigenvectors corresponding to non-zero eigenvalues are denoted by $\tilde{n}^x$ and $\tilde{n}^y$.

Identifying equator points.
In order to set our boundary conditions, we need to identify the pixels that lie on the equator of the normal hemisphere. If the object we attempt to reconstruct is smooth and convex, this set of pixels will include the points on the bounding contour of the silhouette. This assumption was used by [35]. As [23] note, this may not always be the case, as demonstrated in Figure 3. To address this issue we introduce a method for identifying the equator that is based on the structure of the manifold itself.

We identify points on the equator by applying the Isomap algorithm to embed the points into 2 dimensions. The Isomap algorithm uses geodesic distances calculated as the shortest paths between each pair of points. On real images, which exhibit noisy distances, we found that the embedding sometimes projects the hemisphere onto other planes, such as the XZ or YZ planes, instead of the XY plane. We found it helpful to modify the distances to correspond to a flatter, disc-shaped manifold, to ensure the embedding projects the hemisphere onto the XY plane. This modification is a local non-linear transformation of the distances $d_{p,q}$ that enlarges large distances: $\tan\left(\frac{d_{p,q}\,\pi}{2(d_{\max} + \varepsilon)}\right)$. We note that the boundary of the manifold is invariant to this distance transformation. The resulting planar embedding's boundary corresponds to the boundary of the original manifold. We then select the points on the convex hull, label them as boundary points, remove them from the set, and apply the convex hull again. We repeat this until 5% of the points are selected as boundary. Figure 3 shows the equator discovered by our method (left). It can be seen that our result closely matches the real equator, whereas the equator from the silhouette of the object (right) includes many more internal points.

Fig. 3 Comparing equator discovery methods.
Top: our method. Bottom: silhouette-based equator discovery. Discovered equator points are marked in red on the image and on the normal hemisphere. Observe that silhouette outline pixels (bottom) are scattered over the whole hemisphere. Points discovered by our method (top) more closely match the true equator.
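The iterative convex-hull peeling described above can be sketched as follows (a simple illustration of ours on a synthetic 2-D embedding; the function name and the 5% fraction as a parameter are assumptions):

```python
import numpy as np
from scipy.spatial import ConvexHull

def peel_boundary(pts2d, frac=0.05):
    """Repeatedly take the convex-hull vertices of a 2-D embedding,
    label them as boundary, and remove them, until roughly `frac` of
    all points have been selected."""
    idx = np.arange(len(pts2d))
    boundary = []
    while len(boundary) < frac * len(pts2d) and len(idx) > 3:
        hull = ConvexHull(pts2d[idx])
        boundary.extend(idx[hull.vertices])     # map back to global indices
        idx = np.delete(idx, hull.vertices)     # peel this layer off
    return np.array(boundary)

rng = np.random.default_rng(2)
pts = rng.normal(size=(400, 2))
b = peel_boundary(pts, frac=0.05)
assert len(b) >= 0.05 * len(pts)
```

On the flattened disc-shaped embedding, the peeled layers concentrate on the rim of the disc, which corresponds to the equator of the hemisphere.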
We note that the Dirichlet boundary conditions force $\tilde{n}^z$ to be zero on the equator. A camera cannot actually see pixels whose normal is exactly on the equator, since their surface orientation is perpendicular to the viewing direction. To correct for this, we shift the $\tilde{n}^z$ values so that the minimum value is set to 0.15.

Algorithm 4
BoundaryConditions(L)
Require: L is the discrete Laplacian matrix of size N × N.
Identify the boundary points on the equator
L_D ← apply the Dirichlet boundary condition to L {value equals zero}
L_N ← apply the Neumann boundary condition to L {derivative along the z axis equals zero}
return L_D, L_N

The final step of our algorithm recovers the normals ñ_p and uses them to build a depth map. This is achieved by computing the ñ_x, ñ_y, ñ_z eigenvectors defined above and setting ñ_p = (ñ_xp, ñ_yp, ñ_zp). As the eigenvectors are found up to scale, we normalize ñ_p to unit length. Next, note that since the two eigenvectors ñ_x, ñ_y correspond to the same eigenvalue, they are subject to an arbitrary rotation and reflection. The rotation and reflection are resolved using surface integrability constraints based on [46]. Recovering the shape of a Lambertian object can be done up to a generalized bas-relief (GBR) transformation, which is a 3-parameter linear transformation of the normals and a non-linear transformation of the albedos [8]. Since we normalize the intensity vectors to factor out albedo, our construction is independent of albedo, and the GBR ambiguity is thus restricted to a convex-concave ambiguity. To reconstruct a depth map we apply integrability constraints as in [6]; for other methods see the survey in [10]. The convex-concave ambiguity is resolved using a global assumption on depth as in [28].

Comparing recovered and expected eigenvalues.
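A minimal dense-matrix sketch of this recovery step, under our reading of the text: ñ_z is taken from the smallest Dirichlet eigenvector, ñ_x and ñ_y from the first two non-constant Neumann eigenvectors, and the 0.15 shift is applied in a simplified form. The rotation/reflection fix via integrability is omitted here:

```python
import numpy as np

def recover_normals(L_D, L_N, z_min=0.15):
    # L_D, L_N: dense symmetric Laplacians with Dirichlet / Neumann boundary
    # conditions. eigh() returns eigenvectors sorted by ascending eigenvalue.
    _, VD = np.linalg.eigh(L_D)
    _, VN = np.linalg.eigh(L_N)
    nz = np.abs(VD[:, 0])                   # smallest Dirichlet eigenvector ~ n_z
    nz = (nz - nz.min()) / (nz.max() - nz.min() + 1e-12)
    nz = z_min + (1.0 - z_min) * nz         # shift so the minimum n_z is 0.15
    nx, ny = VN[:, 1], VN[:, 2]             # skip the constant Neumann eigenvector
    n = np.column_stack([nx, ny, nz])
    return n / np.linalg.norm(n, axis=1, keepdims=True)
```

The arbitrary rotation/reflection of ñ_x, ñ_y (they share an eigenvalue) still has to be resolved by the integrability constraints discussed above.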
We know that the geometry of the desired manifold is that of a hemisphere. We can therefore compare the recovered eigenvalues of our approximation L to the expected spherical harmonic eigenvalues of ∆ (which are known analytically). If the recovered eigenvalues do not fit the expected eigenvalues, it suggests that our reconstruction will not be very precise. It can be shown that the set of spherical harmonics Y_nm can be divided into two sets of functions: Y_nm with odd n + m, which vanish along the equator, corresponding to the Dirichlet boundary conditions, and Y_nm with even n + m, whose longitudinal derivatives vanish along the equator, corresponding to the Neumann boundary conditions.

In Figure 4 we show a comparison between the recovered and expected eigenvalues of our approximation L constructed for the owl object (shown later in Figure 6) with Dirichlet or Neumann boundary conditions, using only the varying illumination images. The figure shows that the 8 smallest eigenvalues computed from L are fairly close to the expected spherical harmonic eigenvalues. As noted above, L ≈ α∆ for some constant α > 0. Therefore, the comparison in Figure 4 is done after compensating for this scale by fitting a global scale factor to the eigenvalues. Note that multiplying an operator L by a factor α scales the eigenvalues but does not alter the eigenvectors.
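This comparison can be sketched numerically. On the unit hemisphere the Laplace-Beltrami eigenvalues are n(n+1); harmonics with odd n+m satisfy the Dirichlet condition on the equator and those with even n+m the Neumann condition, and the unknown factor α is fitted by least squares. The helper names are ours:

```python
import numpy as np

def expected_hemisphere_eigs(n_max, bc):
    # Laplace-Beltrami eigenvalues n(n+1) on the unit hemisphere, keeping
    # harmonics Y_nm with odd n+m for Dirichlet and even n+m for Neumann.
    want = 1 if bc == 'dirichlet' else 0
    eigs = [n * (n + 1) for n in range(n_max + 1)
            for m in range(-n, n + 1) if (n + m) % 2 == want]
    return np.sort(np.array(eigs, dtype=float))

def fit_scale(recovered, expected):
    # Least-squares global scale alpha minimizing ||alpha*expected - recovered||;
    # rescaling by alpha changes eigenvalues only, never the eigenvectors.
    return float(expected @ recovered / (expected @ expected))
```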
Fig. 4 Comparing recovered and expected eigenvalues: The expected (blue dots) and approximated (orange circles) scaled eigenvalues of images of the rough ball object for the LB operator with Dirichlet (left) and Neumann (right) boundary conditions. For reconstruction we use only the eigenvectors whose corresponding eigenvalues are marked in red.
Algorithm 5 RecoverDepth(L_D, L_N)
Require: L_D is the Laplacian matrix with Dirichlet boundary conditions; L_N is the same matrix with Neumann boundary conditions.
ñ_zp ← first eigenvector of L_D
ñ_xp, ñ_yp ← second and third eigenvectors of L_N
rotate ñ_xp, ñ_yp to the image axes
z(x, y) ← depthImage(ñ_xp, ñ_yp, ñ_zp)
return z(x, y)

We evaluate our method as a general embedding method for a hemisphere manifold and compare it to the following existing embedding methods: Isomap, a modified version of Isomap, Locally Linear Embedding (LLE) [36], and Diffusion maps. The modified version of Isomap improves its performance for embedding a hemisphere. Isomap embeds a manifold of intrinsic dimension 2 into 3d by calculating geodesic distances on the manifold, but then embeds them using multi-dimensional scaling (MDS), which expects these distances to be Euclidean. Since we know the manifold is that of a hemisphere, we can convert these geodesic distances d_g to Euclidean distances d_e using the following formula, derived from the law of cosines:

d_e = √(2r²(1 − cos(d_g / r)))   (13)

where d_g is the geodesic distance and where we set the radius r according to

r = d_max / π   (14)

where d_max is the longest geodesic distance between any two points. We refer to this modified Isomap, based on the hemispherical chordal distances, as 'Isomap chordal'. In this experiment we use 3 models: a sphere, a doll and a rabbit. For each model, we generate 90 images, each lit by a randomly positioned point source. We repeat the experiment 30 times and show the results in Table 1. In this experiment, the distances are not ideal due to the randomness in the light distribution, which, together with the gap between the intrinsic dimension of the hemisphere and the target dimension of the embedding, is known to cause noise and instability in the LLE and Diffusion maps embeddings [21].
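The chordal conversion of equations (13)-(14) amounts to a one-line function. A sketch, where `Dg` is the matrix of pairwise geodesic distances:

```python
import numpy as np

def geodesic_to_chordal(Dg):
    # Eq. (13)-(14): estimate the hemisphere radius from the largest geodesic
    # distance, r = d_max / pi, then map geodesic to Euclidean (chordal)
    # distances via the law of cosines, d_e = sqrt(2 r^2 (1 - cos(d_g / r))).
    r = Dg.max() / np.pi
    return np.sqrt(2.0 * r**2 * (1.0 - np.cos(Dg / r)))
```

For two antipodal equator points (d_g = πr) this correctly returns the diameter 2r, and for nearby points d_e ≈ d_g.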
Isomap chordal achieves results superior to those of Isomap, which suggests that prior knowledge of the manifold shape can significantly improve embedding results. Unlike Isomap chordal, we do not rely on successful estimation of a global radius parameter r and do not suffer from topological instability [5]. We further add that in this experiment we used Procrustes superimposition to optimally fit the resulting embedding onto the unit hemisphere. In Photometric Stereo this optimal alignment is not available to us. We now conduct a similar experiment to evaluate the different embedding methods for the purpose of Photometric Stereo. We use the method of [35], based on the occluding boundary, to align the embedded hemispheres produced by the existing methods onto the unit hemisphere and generate surface normals. Table 2 shows the resulting errors in terms of the mean angle error of the normals.

Table 1 Various objects, 90 images, random single point source: The table shows the mean point-pair distance after optimal Procrustes superimposition to the unit hemisphere.

Model   Our method   Isomap   Isomap chordal   LLE   Diffusion Map
Mean

Table 2 Various objects, 90 images, random single point source: The table shows the mean normal angle error.

Model   Our method   Isomap   Isomap chordal   LLE   Diffusion Map
Mean
To test the performance of the proposed algorithm on real images, we conducted experiments with a variety of materials and lighting conditions. In all experiments the images were resized to 110 pixels in width (and 100 to 170 pixels in height) for faster computation times. On these images our algorithm takes between 5 and 20 minutes on a quad-core 2.8GHz PC, where the running time is dominated mainly by the number of pixels and considerably less by the number of input images. We note that our code is implemented in Matlab and can be further optimized; in particular, the per-pixel computations can be massively parallelized.

In all experiments we set the number of neighbors k to 5% of the pixels. We identify outlier pixels by removing pixels that are not favored by their neighbors (a fraction of their neighbors did not choose them as neighbors, using 80% as a threshold), as we anticipate such pixels to be corrupt or subject to significant noise. We remove these pixels from the computation, and then set their recovered normals by interpolation from neighboring pixels in the image. Instead of using the radius r in equation (7), the constant value of 1 was used, as it was found to produce more stable results in practice.

Where appropriate, we compare our results to those of Sato et al. [23] and to Favaro and Papadhimitri [28]. The code of both methods was obtained from the authors. As an evaluation error measure we use the mean degree error of the normals.

We begin by showing results on objects lit by uniformly distributed point sources. We use the benchmark of [18], which consists of 4 objects: a terracotta warrior, a doll and a relief sculpture with Lambertian reflectance, and a rough non-Lambertian ball. The data set includes between 46 and 57 images per object. Lacking ground truth, we used the normals recovered by their calibrated PS method as ground truth. Figure 5 and Table 3 show the results of this experiment.
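The neighbor-reciprocity outlier test used in these experiments can be sketched as follows. The 80% rule admits several readings; here we flag a pixel when at least 80% of the pixels it chose as neighbors do not choose it back:

```python
import numpy as np

def find_outliers(knn_idx, thresh=0.8):
    # knn_idx: (N, k) array, row p lists the k nearest neighbors of pixel p.
    N, k = knn_idx.shape
    chosen = [set(row.tolist()) for row in knn_idx]
    outlier = np.zeros(N, dtype=bool)
    for p in range(N):
        not_reciprocal = sum(1 for q in knn_idx[p] if p not in chosen[q])
        outlier[p] = not_reciprocal >= thresh * k
    return outlier
```

Flagged pixels are dropped from the embedding and their normals filled in afterwards by interpolation from image neighbors, as described above.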
This dataset contains many images that exhibit significant attached shadows and therefore demonstrates the robustness of our method to this phenomenon. We note that the relief sculpture exhibits significant cast shadows, which are not modeled by our method and can account for the higher error measurements for both our method and [23].

In the next experiment, the results of which are shown in Figure 6 and Table 4, we use the benchmark of [28]. The data set includes 7 objects and 5 to 12 images per object, obtained with point source lightings. The lighting directions are close to the viewing direction, with a mean angle of 23° and standard deviation of 10°. We compare our reconstruction to 'ground truth' shapes produced in [28] by applying a calibrated Photometric Stereo algorithm, which assumes Lambertian reflectance, to the images available with the dataset. Lights in this experiment cover only frontal directions, which can account for higher errors compared to the previous data set. [23] assume uniformly distributed point source lights and estimate an essential parameter based on this assumption. This experiment does not adhere to their assumptions, and we were unsuccessful in applying their method to these settings.

Finally, in Figure 7 and Table 5 we show results obtained with non-Lambertian objects under natural illumination, which consists of diffuse lighting coming from several directions as well as light reflected from walls and other objects in the room. The objects, initially diffuse, were laser scanned to produce ground truth shapes. They were then sprayed with a thin shiny coating to make them specular and photographed under ambient illumination that includes reflections from surrounding walls and objects and is roughly uniform. We use between 25 and 31 images per model.
Although we do have scanned shapes of the objects, very precise registration is required to properly report quantitative error measurements, since even a small pixel shift can cause the normals to change dramatically. Lacking such registration, we use a different error estimation method: for each pixel normal, we take the 9×9 patch in which it lies and select the closest normal from the 3d scan that lies in that patch. We apply the same procedure to our method, Sato and Favaro. Note that Favaro appears to achieve the best results on all Lambertian objects, but does not win on all non-Lambertian ones.

Table 3 Uniformly distributed distant point sources: The table shows the mean normal angle errors obtained with each algorithm. In this scenario, our method is comparable to Sato.

Experiment set       Our method   Sato   Favaro
Terracotta warrior   12.7         12.9
Doll

Table 4: The table shows the mean normal angle errors obtained with each algorithm.
Experiment set   Our method   Sato   Favaro
Redfish          14.0         44.3
Octopus          19.1         30.2
Rock             16.4         22.1
Horse            9.6          23.9
Owl              10.4         24.2
Cat              14.6         26.6
Buddha           11.4         22.8

Table 5: The table shows the mean normal angle errors obtained with each algorithm.
Experiment set   Our method   Sato   Favaro
Duck             20           77.4
Onion
Fig. 5 Objects lit by uniformly distributed directional light sources (panels: our method, Sato). The top 3 objects are Lambertian and the bottom object is a rough, non-Lambertian ball. As light comes from many directions, many of the input images exhibit significant attached shadows. These reconstructions demonstrate the robustness of our method to attached shadows.
We have presented an algorithm for Photometric Stereo that uses a generic approach in which intensity measurement vectors are embedded into the unit hemisphere of forward-facing surface normals. We have shown that the algorithm successfully reconstructs objects under quite general lighting conditions and reflectance properties that do not need to be known in advance. Further work could involve developing better distance measures for finding pixel neighborhoods. For a better approximation of the Laplacian, the local geometric analysis at a pixel can be extended beyond a tangent plane to take the local geometric curvature into account. The separate steps of our algorithm, such as the integration step [12], could be combined into a unified optimization problem. Finally, the algorithm can be further used in other scenarios that require hemispherical embedding.
Fig. 6 Lambertian objects lit by distant point sources (columns: object image, ground truth, our method, Sato).

Fig. 7 Non-Lambertian objects lit by ambient illumination (columns: object image, ground truth, our method, Sato).

Our proofs rely on a spherical harmonic representation of reflectance. Below we introduce notation, which for the most part follows the notation of [7]. Spherical harmonics are orthogonal functions from the unit sphere to the complex plane, Y_nm : S² → C, n = 0, 1, 2, ... and −n ≤ m ≤ n, defined as

Y_nm(θ, φ) = √[ ((2n+1)/(4π)) ((n − |m|)! / (n + |m|)!) ] P_n|m|(cos θ) e^{imφ},   (15)

where P_nm are the associated Legendre functions, defined as

P_nm(z) = ((1 − z²)^{m/2} / (2ⁿ n!)) dⁿ⁺ᵐ/dzⁿ⁺ᵐ (z² − 1)ⁿ.   (16)

Each harmonic function Y_nm is a polynomial of degree n in the variables (x, y, z) on the unit sphere. We will therefore interchangeably use Y(θ, φ) and Y(p) with p = (x, y, z)^T and x = sin θ cos φ, y = sin θ sin φ, and z = cos θ.

To work with real functions we use the notation Y^e_nm = Re(Y_nm) and Y^o_nm = Im(Y_nm) (1 ≤ m ≤ n), which denote the even and odd components of the harmonics. Note that the zonal harmonics (Y_n0) are real, and that the real spherical harmonics too are orthogonal. In addition, to simplify notation, we interchangeably use a single subscript notation Y_s with s = 1, 2, 3, .... For this notation we order the real harmonics by their order n and within every order by m, placing each even harmonic before its corresponding odd one, i.e., Y_00, Y_10, Y^e_11, Y^o_11, Y_20, ....

We will be interested in surfaces whose reflectance kernel is isotropic and band limited. An example of such a kernel is the Lambertian reflectance kernel k(θ) = max(cos θ, 0), which can be approximated to great accuracy with the first nine harmonics (n ≤ 2). The intensity I_p at a pixel p with surface normal n_p and albedo ρ_p can be expressed in the following form. Let ℓ(θ, φ) denote the environment lighting as a function of direction, and let ℓ(θ, φ) = Σ_{s=1}^∞ ℓ_s Y_s(θ, φ) be its harmonic decomposition; then

I_p = ρ_p Σ_{s=1}^∞ α_s ℓ_s k_s Y_s(n_p),   (17)

where α_s are constants due to the Funk-Hecke (convolution) theorem, k(θ) = Σ_{n=0}^∞ k(n) Y_n0(θ, φ) is the harmonic expansion of the kernel, and k_s = k(n) for s = (n, m), i.e., the kernel coefficients depend only on the order n.

For Lambertian surfaces we are interested in harmonics up to order 2. The kernel coefficients and the Funk-Hecke constants for these orders are

k(0) = √π / 2,    α(0) = 2√π,
k(1) = √(π/3),    α(1) = 2√(π/3),   (18)
k(2) = √(5π) / 8, α(2) = 2√(π/5),

and the harmonics are

Y_1 = Y_00 = 1/√(4π)
Y_2 = Y_10 = √(3/(4π)) z
Y_3 = Y^e_11 = √(3/(4π)) x
Y_4 = Y^o_11 = √(3/(4π)) y
Y_5 = Y_20 = (1/2)√(5/(4π)) (3z² − 1)
Y_6 = Y^e_21 = 3√(5/(12π)) xz   (19)
Y_7 = Y^o_21 = 3√(5/(12π)) yz
Y_8 = Y^e_22 = (3/2)√(5/(12π)) (x² − y²)
Y_9 = Y^o_22 = 3√(5/(12π)) xy.

Claim
Under uniformly distributed directional light sources and an isotropic, band-limited reflectance kernel, the ℓ₂ distances between the normalized intensity vectors v̂_p and v̂_q of points p and q with nearby normals n_p and n_q are proportional to the spherical distance d(n_p, n_q) between the corresponding surface normals, i.e.,

‖v̂_p − v̂_q‖ = c · d(n_p, n_q),

where the constant c depends on the reflectance kernel.

Proof
We prove the claim by evaluating the expression

‖v̂_p − v̂_q‖² = 2 − 2 v̂_p^T v̂_q.

The (un-normalized) vector v_p contains the intensities observed at pixel p over a set of images k = 1..K, each lit by a point light source from direction l_k ∈ S² whose magnitudes are equal. Using (17), the intensity I_kp of p can be written as

I_kp = ρ_p Σ_{s=1}^∞ ℓ_ks c_s Y_s(n_p),

where ρ_p and n_p respectively are the albedo and normal at p, ℓ_ks are the harmonic coefficients of the light in the k'th image, and c_s = α_s k_s are constants whose values depend on the reflectance kernel. As we will be interested in normalized intensity vectors v̂_p, we can assume w.l.o.g. that ρ_p = 1. The inner product of v_p and v_q is given by

v_p^T v_q = Σ_{k=1}^K ( Σ_{s=1}^∞ ℓ_ks c_s Y_s(n_p) ) ( Σ_{t=1}^∞ ℓ_kt c_t Y_t(n_q) ) = Σ_{s=1}^∞ Σ_{t=1}^∞ N_st L_st,

where

L_st = Σ_{k=1}^K ℓ_ks ℓ_kt,    N_st = c_s Y_s(n_p) c_t Y_t(n_q).

Under the assumption that the light in the K images is distributed uniformly, L_st vanishes for s ≠ t. This is because for a point source light at direction l_k, which is expressed as a Dirac delta function δ_{l_k}, ℓ_ks = <δ_{l_k}, Y_s> = Y_s(l_k), so that

L_st = Σ_{k=1}^K Y_s(l_k) Y_t(l_k),

and for uniform light of unit intensity we get, up to a global scale,

L_st ≈ ∫_{S²} Y_s(l) Y_t(l) dl = δ_st,

due to the orthonormality of the spherical harmonics. The inner product v_p^T v_q therefore simplifies to

v_p^T v_q = Σ_{s=1}^∞ N_s,  where  N_s = c_s² Y_s(n_p) Y_s(n_q).

We notice next that N_s can be expressed as a univariate polynomial in z = cos θ, where θ is the angle between n_p and n_q. As the inner product v_p^T v_q and the spherical harmonics are invariant to a global rotation of the normals, we can orient our coordinate system so that n_p = [0, 0, 1]^T and n_q = [sin θ, 0, cos θ]^T (and φ = 0). Then Y_s(n_p) is constant (and vanishes unless m = 0), and Y_s(n_q) is a (scaled) associated Legendre function, which is a polynomial of degree n in z = cos θ. We can therefore write

N_s = Σ_{r=0}^n a_sr z^r,

with coefficients a_sr that depend on the reflectance kernel and on the harmonic order s. Since n_p and n_q are nearby, we use the Taylor approximation cos θ = 1 − θ²/2 + O(θ⁴), which yields

N_s = Σ_{r=0}^n a_sr (1 − θ²/2)^r + O(θ⁴) = a_s − (b_s/2) θ² + O(θ⁴),

where a_s = Σ_{r=0}^n a_sr and b_s = Σ_{r=0}^n r a_sr. We now have

v_p^T v_q = Σ_{s=1}^∞ ( a_s − (b_s/2) θ² + O(θ⁴) ).

We further assume that the reflectance kernel is band limited, so that harmonic terms of orders n > N for a finite N can be omitted. Therefore,

v_p^T v_q = Σ_{s=1}^S ( a_s − (b_s/2) θ² ) + O(S θ⁴),

where S = (N + 1)². We further denote a = Σ_{s=1}^S a_s and b = Σ_{s=1}^S b_s. The quantity v_p^T v_p can be computed simply by plugging in θ = 0, giving v_p^T v_p = a, and v_q^T v_q = v_p^T v_p due to rotation invariance.

We can now evaluate the inner product between the normalized intensity vectors,

v̂_p^T v̂_q = v_p^T v_q / (‖v_p‖ ‖v_q‖) = (a − (b/2)θ²)/a + O(S θ⁴) = 1 − (cθ)²/2 + O(S θ⁴),

where c = √(b/a). Note that a is positive since a = ‖v_p‖², and b is positive for sufficiently small θ, since v_p^T v_q < a. We conclude that, up to O(S θ⁴) terms,

‖v̂_p − v̂_q‖ = √(2 − 2 v̂_p^T v̂_q) = cθ = c · d(n_p, n_q).

Claim
The reflectance-dependent constant c can be computed analytically for Lambertian reflectance, and is approximately 0.93.

Proof
As proved in the previous claim, ‖v̂_p − v̂_q‖ = c · d(n_p, n_q). We now wish to calculate the constant c for the Lambertian kernel. To this end we assume n_p = (0, 0, 1)^T, n_q = (sin θ, 0, cos θ)^T and compute N_s = (α(n) k(n))² Y_s(n_p) Y_s(n_q) as a function of θ. Using (18)-(19) we get

N_1 = π/4,    N_2 = (π/3) cos θ,    N_5 = (5π/128)(3cos²θ − 1),

and the rest of the terms vanish. Therefore,

v_p^T v_q = Σ_s N_s = π/4 + (π/3) cos θ + (5π/128)(3cos²θ − 1).

Replacing cos θ by its Taylor approximation, cos θ ≈ 1 − θ²/2, we get

v_p^T v_q ≈ 127π/192 − (109π/384) θ²,

and v_p^T v_p = v_q^T v_q = 127π/192 are obtained by plugging in θ = 0. Denote a = 127π/192 and b = 109π/192. Normalizing the inner product, we obtain

v̂_p^T v̂_q = v_p^T v_q / (‖v_p‖ ‖v_q‖) ≈ (a − (b/2)θ²)/a = 1 − (b/2a) θ².

Let c = √(b/a) ≈ 0.93. We obtain

‖v̂_p − v̂_q‖ ≈ 0.93 θ.

References
1. Abrams, A., Hawley, C., Pless, R.: Heliometric stereo: Shape from sun position. In: CVPR (2012)
2. Ackermann, J., Langguth, F., Fuhrmann, S., Goesele, M., Darmstadt, T.: Photometric stereo for outdoor webcams. In: ECCV (2012)
3. Alldrin, N., Mallick, S., Kriegman, D.: Resolving the generalized bas-relief ambiguity by entropy minimization. In: CVPR (2007)
4. Arnold, V., Cooke, R.: Lectures on Partial Differential Equations. Universitext (1979). Springer (2004). URL http://books.google.co.il/books?id=qlNJAYwmfTcC
5. Balasubramanian, M., Schwartz, E.L.: The isomap algorithm and topological stability. Science (2002)
6. Basri, R., Jacobs, D., Kemelmacher, I.: Photometric stereo with general, unknown lighting. IJCV (2007)
7. Basri, R., Jacobs, D.W.: Lambertian reflectance and linear subspaces. IEEE Trans. Pattern Anal. Mach. Intell. (2), 218–233 (2003)
8. Belhumeur, P.N., Kriegman, D.J., Yuille, A.L.: The bas-relief ambiguity. IJCV (1999)
9. Durou, J.D., Aujol, J.F., Courteille, F.: Integrating the Normal Field of a Surface in the Presence of Discontinuities, pp. 261–273. Springer Berlin Heidelberg (2009)
10. Durou, J.D., Quéau, Y., Aujol, J.F.: Normal Integration – Part I: A Survey (2016)
11. Fang, Y., Vishwanathan, S., Sun, M., Ramani, K.: sLLE: Spherical locally linear embedding with applications to tomography. In: CVPR (2011)
12. Frankot, R.T., Chellappa, R.: A method for enforcing integrability in shape from shading algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, 439–451 (1988)
13. Georghiades, A.: Incorporating the Torrance and Sparrow model of reflectance in uncalibrated photometric stereo. pp. 816–823 (2003)
14. Georghiades, A.S.: Recovering 3-d shape and reflectance from a small number of photographs. In: EGRW (2003)
15. Gotardo, P.F.U., Simon, T., Sheikh, Y., Matthews, I.: Photogeometric scene flow for high-detail dynamic 3d reconstruction. In: ICCV (2015)
16. Hayakawa, H.: Photometric stereo under a light-source with arbitrary motion. JOSA-A (1994)
17. Hertzmann, A., Seitz, S.: Example-based photometric stereo: shape reconstruction with general, varying BRDFs. PAMI (2005)
18. Higo, T., Matsushita, Y., Ikeuchi, K.: Consensus photometric stereo. In: CVPR (2010)
19. Hobson, E.W.: The theory of spherical and ellipsoidal harmonics. CUP Archive (1931)
20. Hoeltgen, L., Quéau, Y., Breuss, M., Radow, G.: Optimised photometric stereo via non-convex variational minimisation (regular paper). In: BMVC (2016)
21. Levina, E., Bickel, P.J.: Maximum likelihood estimation of intrinsic dimension. In: NIPS (2004)
22. Littwin, E., Averbuch-Elor, H., Cohen-Or, D.: Spherical embedding of inlier silhouette dissimilarities (2015)
23. Lu, F., Matsushita, Y., Sato, I., Okabe, T., Sato, Y.: Uncalibrated photometric stereo for unknown isotropic reflectances. In: CVPR (2013)
24. Mecca, R., Rodolà, E., Cremers, D.: Realistic photometric stereo using partial differential irradiance equation ratios. Computers & Graphics (2015)
25. Midorikawa, K., Yamasaki, T., Aizawa, K.: Uncalibrated photometric stereo by stepwise optimization using principal components of isotropic BRDFs. In: CVPR (2016)
26. Nillius, P., Eklundh, J.O.: Classifying materials from their reflectance properties. In: ECCV, pp. 366–376 (2004)
27. Okabe, T., Sato, I., Sato, Y.: Attached shadow coding: Estimating surface normals from shadows under unknown reflectance and lighting conditions. In: ICCV (2009)
28. Papadhimitri, T., Favaro, P.: A closed-form solution to uncalibrated photometric stereo via diffuse maxima. IEEE Computer Society, Los Alamitos, CA, USA (2012)
29. Papadhimitri, T., Favaro, P.: A new perspective on uncalibrated photometric stereo. In: CVPR, pp. 1474–1481. IEEE Computer Society (2013)
30. Papadhimitri, T., Favaro, P.: Uncalibrated near-light photometric stereo. In: BMVC. BMVA Press (2014)
31. Quéau, Y., Lauze, F., Durou, J.D.: A L1-TV algorithm for robust perspective photometric stereo with spatially-varying lightings. SSVM (2015)
32. Quéau, Y., Lauze, F., Durou, J.D.: Solving uncalibrated photometric stereo using total variation. Journal of Mathematical Imaging and Vision (JMIV) (2015)
33. Quéau, Y., Mecca, R., Durou, J.D.: Unbiased photometric stereo for colored surfaces: A variational approach. In: CVPR (2016)
34. Ramamoorthi, R., Hanrahan, P.: On the relationship between radiance and irradiance: Determining the illumination from images of a convex Lambertian object. JOSA (10) (2001)
35. Sato, I., Okabe, T., Yu, Q., Sato, Y.: Shape reconstruction based on similarity in radiance changes under varying illumination. In: ICCV (2007)
36. Saul, L.K., Roweis, S.T.: Think globally, fit locally: unsupervised learning of low dimensional manifolds. J. Mach. Learn. Res., 119–155 (2003)
37. Shi, B., Matsushita, Y., Wei, Y., Xu, C., Tan, P.: Self-calibrating photometric stereo. In: CVPR (2010)
38. Simchony, T., Chellappa, R., Shao, M.: Direct analytical methods for solving Poisson equations in computer vision problems. IEEE Trans. Pattern Anal. Mach. Intell. 12