Azimuthal Anamorphic Ray-map for Immersive Renders in Perspective
Jakub Maksymilian Fober [email protected]
Abstract
A wide choice of cinematic lenses enables motion-picture creators to adapt the visual appearance of the image to their creative vision. No such choice exists in the realm of real-time computer graphics, where only one type of perspective projection is widely used. This work provides a perspective imaging model that resembles, in an artistically convincing manner, the variety of anamorphic photography lenses. It presents an anamorphic azimuthal projection map with natural vignetting and realistic chromatic aberration. The mathematical model for this projection has been chosen such that its parameters reflect psycho-physiological aspects of visual perception. That enables use in artistic and professional environments, where specific aspects of the photographed space are to be presented.
CCS Concepts: • Computing methodologies → Perception; Ray tracing; • Human-centered computing; • Applied computing → Media arts
Keywords: Curvilinear Perspective, Panini, Cylindrical, Fish-eye, Perspective Map
© 2021 Jakub Maksymilian Fober
This work is licensed under the Creative Commons BY-NC-ND 3.0 license. https://creativecommons.org/licenses/by-nc-nd/3.0/
For all other uses including commercial, contact the owner/author(s).
1 Introduction

Perspective in real-time computer graphics hasn't changed since the dawn of CGI. It is based on a concept as old as the fifteenth-century Renaissance: linear projection [Alberti 1435; Argan and Robb 1946; McArdle 2013]. This situation is similar to the beginnings of photography in the 1840s, when only one type of lens was widely available, the Rapid Rectilinear [Kingslake 1989]. Linear perspective, even at the time of its advent 500 years ago, was criticized for distorting proportions [Da Vinci 1632], in a phenomenon known today as Leonardo's paradox [Dixon 1987]. Computer graphics has effectively skipped the artistic achievements of the last five centuries in regard to perspective. This includes the cylindrical perspective of Pannini [Sharpless et al. 2010] and Barker [Wikipedia, contributors 2019], and the anamorphic lenses used in cinematography [Giardina 2016; Sasaki 2017a,b]. The situation is all the more critical, as there is no mathematical model for generating anamorphic projection in an artistically convincing manner. Some attempts were made at alternative projections for computer graphics, with fixed cylindrical or spherical geometry [Baldwin et al. 2014; Sharpless et al. 2010]. A parametrized perspective model was also proposed as a new standard [Correia and Romão 2007], but wasn't adopted. It included interpolation states in-between rectilinear/equidistant and spherical/cylindrical projection. The cylindrical parametrization of that solution was merely an interpolation factor, where intermediate values did not correspond to any common projection type; therefore it was not suited for artistic or professional use. The notion of the sphere as a projective surface, which incorporates cartographic mapping to produce a perspective picture, became popularized [German et al. 2007; Peñaranda et al. 2015; Williams 2015]. A perspective parametrization that transitions according to the content (by the view-tilt angle) has also been developed, as a modification to the computer game Minecraft [Williams 2017]. But the results of these solutions were more of a gimmick and have not found practical use. Linear perspective projection is still the default for most digital content. One of the reasons is the fixed-function architecture of GPU rasterization. With the advent of real-time ray tracing, more exotic projections could become widely adopted.

This paper aims to provide a perspective model with a mathematical parametrization that gives artistic-style interaction with image geometry. A parametrization that works similarly to the way film directors choose lenses for each scene by their aesthetics [Giardina 2016; Sasaki 2017a], but with a greater degree of control. It also aims to provide a psycho-physiological correlation between perspective-model parameters and the perception of depicted-space attributes, like distance, size, proportions, shape or speed. That mapping enables use of the presented model in a professional environment, where image geometry is specifically suited to a task [Whittaker 1984].
This document uses the following naming convention:
• Left-handed coordinate system.
• Vectors presented natively in column.
• Matrix vectors arranged in rows (row-major order) M_{row,col}.
• Matrix–matrix multiplication by [column]_a · [row]_b = M_{ab}.
• Double-bar enclosure "∥A⃗∥" represents vector direction (normalization).
• Single-bar enclosure "|u|" represents vector length or scalar absolute value.
• Vectors with an arithmetic sign, or none, are calculated component-wise and form another vector.
• Centered dot "·" represents the vector dot product.
• Cross sign "×" represents the vector cross product.
• Square brackets with a comma "[f, c]" denote an interval.
• Square brackets with blanks "[x y]" denote a vector or matrix.
• Power of "−1" signifies the reciprocal of the value.
• QED symbol "□" marks a final result or output.
This naming convention simplifies the transformation of formulations into code.

2 Anamorphic ray-map

If we assume that projective visual space is spherical [Fleck 1994; McArdle 2013], one can define a perspective picture as an array of rays pointing to the visual-sphere surface. This is how the algorithm described below will output a projection map, aka perspective map. It is an array of three-dimensional rays, which are assigned to each screen pixel. Having the visual sphere as the image model enables a wider angle of view, beyond the 180° limit of planar projection. The ray-map can easily be converted to a cube UV-map, ST-map and other formats.

Here the procedural algorithm for the primary-ray map (aka perspective map) uses two input values from the user: a distortion parameter for each of the two power axes, and focal length or angle of view (aka FOV). Two distinct power axes define the anamorphic projection type. Each axis is expressed by the azimuthal projection factor k [Bettonvil 2005; Fleck 1994; Krause 2019]. Both power axes share the same focal length f.
Evaluation of each power axis produces a spherical angle, θ⃗_x and θ⃗_y, which are then combined into the anamorphic azimuthal-projection angle θ′. Interpolation of θ⃗ is done by the anamorphic weights φ⃗_x and φ⃗_y. The weights are derived from the spherical angle φ of azimuthal projection. Note that explicit calculation of φ is omitted here, using an optimization based on the view-coordinates vector v⃗.

\[ r = |\vec{v}| = \sqrt{\vec{v}_x^2 + \vec{v}_y^2} \tag{1a} \]

\[ \vec\theta_x = \begin{cases} \arctan\!\big(r f^{-1} \vec{k}_x\big)/\vec{k}_x, & \text{if } \vec{k}_x > 0 \\ r f^{-1}, & \text{if } \vec{k}_x = 0 \\ \arcsin\!\big(r f^{-1} \vec{k}_x\big)/\vec{k}_x, & \text{if } \vec{k}_x < 0 \end{cases} \tag{1b} \]

\[ \vec\theta_y = \begin{cases} \arctan\!\big(r f^{-1} \vec{k}_y\big)/\vec{k}_y, & \text{if } \vec{k}_y > 0 \\ r f^{-1}, & \text{if } \vec{k}_y = 0 \\ \arcsin\!\big(r f^{-1} \vec{k}_y\big)/\vec{k}_y, & \text{if } \vec{k}_y < 0 \end{cases} \tag{1c} \]

\[ \begin{bmatrix} \vec\varphi_x \\ \vec\varphi_y \end{bmatrix} = \begin{bmatrix} \vec{v}_x^2/(\vec{v}_x^2+\vec{v}_y^2) \\ \vec{v}_y^2/(\vec{v}_x^2+\vec{v}_y^2) \end{bmatrix} = \begin{bmatrix} \cos^2\varphi \\ \sin^2\varphi \end{bmatrix} = \begin{bmatrix} \tfrac{1+\cos(2\varphi)}{2} \\ \tfrac{1-\cos(2\varphi)}{2} \end{bmatrix}, \tag{1d} \]

where r ∈ ℝ≥0 is the view-coordinates radius. Vector θ⃗ ∈ [0, π]² contains the incident angles (measured from the optical axis) of the two azimuthal projections. Vector φ⃗ ∈ [0, 1]² is the anamorphic interpolation weight. It is linear, φ⃗_x + φ⃗_y = 1, but has a spherical distribution (see Figure 1). Vector k⃗ ∈ [−1, 1]² describes the two power axes of the anamorphic projection. The algorithm is evaluated per pixel of position v⃗ ∈ ℝ² in view-space coordinates, centered at the optical axis and normalized at a chosen angle of view (horizontal or vertical). The final anamorphic incident angle θ′ is obtained by interpolation using the φ⃗ weights.
\[ \theta' = \begin{bmatrix} \vec\theta_x \\ \vec\theta_y \end{bmatrix} \cdot \begin{bmatrix} \vec\varphi_x \\ \vec\varphi_y \end{bmatrix} \tag{2a} \]

\[ \begin{bmatrix} \hat{G}_x \\ \hat{G}_y \\ \hat{G}_z \end{bmatrix} = \begin{bmatrix} \sin\theta' \, \vec{v}_x / r \\ \sin\theta' \, \vec{v}_y / r \\ \cos\theta' \end{bmatrix} = \begin{bmatrix} \sin\theta' \cos\varphi \\ \sin\theta' \sin\varphi \\ \cos\theta' \end{bmatrix}, \quad\square \tag{2b} \]

here θ′ ∈ (0, π] is the anamorphic incident angle, measured from the optical axis. This measurement resembles an azimuthal projection of the globe (here, a visual sphere) [McArdle 2013]. The final incident vector Ĝ ∈ [−1, 1]³ (aka primary ray) is obtained from the anamorphic angle θ′. Parameters r, v⃗, φ⃗ are in view space, while θ⃗, θ′, φ, Ĝ are in visual-sphere space. Essentially, the anamorphic primary-ray map preserves the azimuthal angle φ, while scaling only the picture's radius.

To give more control over the picture, a mapping between angle of view Ω and focal length f can be established. Here the focal length is derived in reciprocal form, to optimize usage in Equation (1).

\[ f^{-1}(\Omega_h) = \begin{cases} \tan\!\big(\vec{k}_x \Omega_h/2\big)/\vec{k}_x, & \text{if } \vec{k}_x > 0 \\ \Omega_h/2, & \text{if } \vec{k}_x = 0 \\ \sin\!\big(\vec{k}_x \Omega_h/2\big)/\vec{k}_x, & \text{if } \vec{k}_x < 0, \end{cases} \tag{3} \]

where Ω_h ∈ (0, τ] denotes the horizontal angle of view. Similarly, the vertical Ω_v can be used, with the k⃗_y parameter instead. The resulting value 1/f ∈ ℝ>0 is the reciprocal focal length.

Remark. The focal-length value f must be the same for θ⃗_x and θ⃗_y. Therefore only one reference angle Ω can be chosen, either horizontal or vertical.

Figure 1. Correlation between the anamorphic interpolation weights φ⃗ and the spherical angle φ. (a) Graph illustrating the correlation between the angle φ and the anamorphic interpolation vector φ⃗ ∈ [0, 1]²; here the angle φ is illustrated as a 2×-scaled arc of 180°. (b) Graph mapping the angle φ to the anamorphic interpolation weights φ⃗_x and φ⃗_y, illustrating the circular distribution of φ⃗ as a periodic function.
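The ray-map construction of Equations (1)–(3) can be sketched in Python as follows (a minimal scalar sketch; function and variable names are mine, not from the paper):

```python
import math

def reciprocal_focal_length(omega, k):
    """Reciprocal focal length 1/f for a reference angle of view `omega`
    (radians) on a power axis with azimuthal factor k (Equation 3)."""
    if k > 0:
        return math.tan(k * omega / 2) / k
    if k == 0:
        return omega / 2
    return math.sin(k * omega / 2) / k

def primary_ray(vx, vy, kx, ky, rcp_f):
    """Primary-ray direction for view coordinates (vx, vy), anamorphic
    power-axis parameters (kx, ky) and reciprocal focal length 1/f
    (Equations 1-2). View coordinates are normalized so that the frame
    edge along the reference axis sits at radius 1."""
    r = math.hypot(vx, vy)
    if r == 0.0:
        return (0.0, 0.0, 1.0)  # optical axis looks straight ahead

    def theta(k):
        # Per-axis azimuthal incident angle (Equation 1b/1c).
        # For k < 0 the asin argument must stay within [-1, 1].
        if k > 0:
            return math.atan(r * rcp_f * k) / k
        if k == 0:
            return r * rcp_f
        return math.asin(r * rcp_f * k) / k

    # Anamorphic interpolation weights (Equation 1d): cos²φ and sin²φ.
    wx = vx * vx / (r * r)
    wy = vy * vy / (r * r)
    # Interpolated anamorphic incident angle (Equation 2a).
    t = theta(kx) * wx + theta(ky) * wy
    # Incident vector on the visual sphere (Equation 2b).
    return (math.sin(t) * vx / r, math.sin(t) * vy / r, math.cos(t))
```

For the rectilinear case k⃗ = [1, 1] with a 90° horizontal angle of view, the pixel at the horizontal frame edge yields a ray 45° off the optical axis, as expected of a pinhole projection.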
The inverse function to Equation (3), giving the angle of view Ω from the focal length f, is obtained as follows:

\[ \Omega_v(f) = \begin{cases} 2\arctan\!\big(\vec{k}_y f^{-1}\big)/\vec{k}_y, & \text{if } \vec{k}_y > 0 \\ 2 f^{-1}, & \text{if } \vec{k}_y = 0 \\ 2\arcsin\!\big(\vec{k}_y f^{-1}\big)/\vec{k}_y, & \text{if } \vec{k}_y < 0 \end{cases} \tag{4} \]

This can be used to obtain the actual vertical angle of view from a horizontally established focal length. Similarly, the horizontal angle Ω_h can be obtained using the k⃗_x parameter.

3 Vignetting

Vignetting is an important visual cue indicating projection stretching of the visual sphere. Incorporating a vignetting effect increases space perception.

Here the anamorphic vignetting mask Λ′ is obtained similarly to the anamorphic angle θ′. Two separate vignetting masks, Λ⃗_x and Λ⃗_y, are generated for each power axis and combined into a single anamorphic vignette Λ′ by interpolation through the φ⃗ component weights.

\[ \begin{bmatrix} \vec\Lambda_x \\ \vec\Lambda_y \end{bmatrix} = \begin{bmatrix} \big|\cos\!\big(\max\{|\vec{k}_x|, \tfrac12\}\, \vec\theta_x\big)\big|^{(\vec{k}_x+3)/2} \\ \big|\cos\!\big(\max\{|\vec{k}_y|, \tfrac12\}\, \vec\theta_y\big)\big|^{(\vec{k}_y+3)/2} \end{bmatrix} \tag{5a} \]

\[ \Lambda' = \begin{bmatrix} \vec\Lambda_x \\ \vec\Lambda_y \end{bmatrix} \cdot \begin{bmatrix} \vec\varphi_x \\ \vec\varphi_y \end{bmatrix}, \quad\square \tag{5b} \]

where Λ′ ∈ [0, 1] is the anamorphic vignetting-mask value, interpolated using the circular-function vector φ⃗.

The vignette mask of each axis is obtained using two laws of illumination falloff: the inverse-square law (for k = 1) and the cosine law of illumination (for k = −1). The vignette value for the cosine law is simply expressed as cos θ′. The inverse-square-law value can be expressed as cos² θ′. Therefore projections other than k ∈ {−1, 1} must have a vignetting value in-between cos θ′ ↔ cos² θ′. This has been empirically evaluated to a power value linearly mapped from k ∈ [−1, 1] ↦ [1, 2].
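A minimal sketch of the vignette mask of Equation (5), as reconstructed here (the ½ floor on |k| and the (k+3)/2 exponent, which maps k ∈ [−1, 1] to the power range [1, 2], follow my reading of the garbled equation; names are mine):

```python
import math

def anamorphic_vignette(theta_x, theta_y, w_x, w_y, k_x, k_y):
    """Anamorphic vignetting mask Λ' (Equation 5). theta_x, theta_y are
    the per-axis incident angles and w_x, w_y the interpolation weights
    φ from the ray-map step. The exponent (k+3)/2 blends the cosine law
    of illumination (k = -1) with the inverse-square law (k = +1)."""
    l_x = abs(math.cos(max(abs(k_x), 0.5) * theta_x)) ** ((k_x + 3) / 2)
    l_y = abs(math.cos(max(abs(k_y), 0.5) * theta_y)) ** ((k_y + 3) / 2)
    # Combine the per-axis masks through the anamorphic weights (5b).
    return l_x * w_x + l_y * w_y
```

At k = −1 the mask reduces to the pure cosine law, cos θ, and at k = +1 to cos² θ.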
Figure 2. Graph illustrating the interpolation parameters for anamorphic vignetting (vertical-axis value), driven by the k ∈ [−1, 1] parameter (horizontal axis). The power graph (k + 3)/2 interpolates between the cosine law of illumination and inverse-square-law vignetting. The θ-scale graph scales the vignetting boundary.

4 Conversion to ST-map

The ray/perspective map can easily be converted to an ST-map, given that the maximum view angle does not exceed or equal 180°.

\[ \begin{bmatrix} \vec{a}_x & \vec{a}_y \end{bmatrix} = \begin{cases} \begin{bmatrix} 1 & w/h \end{bmatrix}, & \text{if } \Omega = \Omega_h \\ \begin{bmatrix} h/w & 1 \end{bmatrix}, & \text{if } \Omega = \Omega_v \end{cases} \tag{6a} \]

\[ \begin{bmatrix} \vec{f}_s \\ \vec{f}_t \end{bmatrix} = \frac{\cot(\Omega/2)}{2\hat{G}_z} \begin{bmatrix} \hat{G}_x \vec{a}_x \\ \hat{G}_y \vec{a}_y \end{bmatrix} + \frac12, \quad\square \tag{6b} \]

where a⃗ ∈ ℝ² is the square-mapping vector for the horizontal or vertical angle of view. Values w and h represent picture width and height respectively. Ω < π is the angle of view. f⃗ ∈ [0, 1]² represents the final ST-map vector and Ĝ ∈ [−1, 1]³ is the input primary-ray map vector.

Table 1. Perspective parameters, corresponding projection type and associated perception attributes.

(a) Azimuthal projection type and corresponding k value.

Value of k     Azimuthal projection type
k_i = 1        Rectilinear (aka Gnomonic)
k_i = 1/2      Stereographic
k_i = 0        Equidistant
k_i = −1/2     Equisolid
k_i = −1       Orthographic (azimuthal)

Source: PTGui 11 fisheye factor [Krause 2019].

(b) Correctly perceived space attributes and corresponding azimuthal projection type.

Azimuthal projection type     Perception of space
Rectilinear                   straightness
Stereographic                 shape, angle
Equidistant                   speed, aim
Equisolid                     distance, size
Orthographic                  —

Source: Empirical study using various competitive video games.

Figure 3. Example of various wide-angle anamorphic azimuthal projections with vignetting, in 16:9 aspect ratio: (a) k⃗ first-person, (b) k⃗ racing, (c) k⃗ flying, (d) k⃗ panini. The checkerboard depicts a cube centered at the view position, with each face colored according to axis direction. Primary colors represent the positive axes and complementary colors their opposite equivalents (same as in a color wheel), {Cy, Mg, Yl} ↤ −{X, Y, Z}+ ↦ {R, G, B}.

5 Lens distortion

The presented perspective model can be used to simulate a real-world anamorphic lens. It can incorporate effects such as disproportionate lens breathing, which are unique to anamorphic photography [Sasaki 2017b], thanks to being focal-length based. Some additional lens correction may be added to the primary-ray map, to compensate for lens imperfections.

Presented below is an algorithm for anamorphic distortion of view coordinates, which can be used as an input for the primary-ray map algorithm.
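The ray-to-ST conversion of Equation (6) can be sketched in Python (the aspect-ratio assignment in the square-mapping vector follows my reading of the reconstructed equation; names are mine):

```python
import math

def ray_to_st(g, omega, width, height, horizontal=True):
    """Convert a primary-ray direction g = (Gx, Gy, Gz) to ST-map
    coordinates in [0, 1]² (Equation 6). `omega` is the reference angle
    of view in radians (must be < π), horizontal by default."""
    gx, gy, gz = g
    # Square-mapping vector for the chosen reference axis (Equation 6a).
    if horizontal:
        ax, ay = 1.0, width / height
    else:
        ax, ay = height / width, 1.0
    # Project onto the picture plane and remap to [0, 1] (Equation 6b).
    scale = 1.0 / (math.tan(omega / 2.0) * 2.0 * gz)
    return (scale * gx * ax + 0.5, scale * gy * ay + 0.5)
```

As a sanity check, with Ω_h = 90° a ray at 45° off-axis horizontally, (sin 45°, 0, cos 45°), lands exactly on the right frame edge, s = 1.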
Table 2. Recommended values of k⃗ for various content types.

Picture content type       Anamorphic k⃗ values
Racing simulation          k⃗ = [ / , − / ]
Flying simulation          k⃗ = [ − / ]
Stereopsis (cyclopean)     k⃗ = [ − / ]
First-person shooting      k⃗ = [ / ]
Pan motion                 k⃗_x ≠ k⃗_y
Roll motion                k⃗_y = k⃗_x
Tilt motion ᵃ              k⃗_y → k⃗_x

Source: determined empirically using various competitive video games, in accordance with the data in Table 1b.
ᵃ Mapping of vertical distortion by a tilt motion, introduced first in a Minecraft mod [Williams 2017].

It is based on the Brown–Conrady lens-distortion model [Wang et al. 2008] in a division variant [Fitzgibbon 2001]. It is executed upon the view coordinates v⃗, forming an alternative v⃗′.

\[ \begin{bmatrix} \vec{f}_x \\ \vec{f}_y \end{bmatrix} = \underbrace{\begin{bmatrix} \vec{v}_x - \vec{c}_1 \\ \vec{v}_y - \vec{c}_2 \end{bmatrix}}_{\text{cardinal offset a}} \tag{7a} \]

\[ \begin{bmatrix} \vec\varphi_x \\ \vec\varphi_y \end{bmatrix} = \begin{bmatrix} \vec{f}_x^2/(\vec{f}_x^2+\vec{f}_y^2) \\ \vec{f}_y^2/(\vec{f}_x^2+\vec{f}_y^2) \end{bmatrix}^{\mathsf{T}} \tag{7b} \]

\[ r^2 = \vec{f}_x^2 + \vec{f}_y^2 \tag{7c} \]

\[ \begin{bmatrix} \vec{v}'_x \\ \vec{v}'_y \end{bmatrix} = \underbrace{\begin{bmatrix} \vec{f}_x \\ \vec{f}_y \end{bmatrix} \left( \begin{bmatrix} 1 + \vec{k}_{x1} r^2 + \vec{k}_{x2} r^4 + \cdots \\ 1 + \vec{k}_{y1} r^2 + \vec{k}_{y2} r^4 + \cdots \end{bmatrix} \cdot \begin{bmatrix} \vec\varphi_x \\ \vec\varphi_y \end{bmatrix} \right)^{-1}}_{\text{radial anamorphic}\ \square} + \underbrace{\begin{bmatrix} \vec{f}_x \\ \vec{f}_y \end{bmatrix} \left( \begin{bmatrix} \vec{f}_x \\ \vec{f}_y \end{bmatrix} \cdot \begin{bmatrix} \vec{p}_1 \\ \vec{p}_2 \end{bmatrix} \right)}_{\text{decentering}} + \underbrace{r^2 \begin{bmatrix} \vec{q}_1 \\ \vec{q}_2 \end{bmatrix}}_{\text{thin prism}} + \underbrace{\begin{bmatrix} \vec{c}_1 \\ \vec{c}_2 \end{bmatrix}}_{\text{cardinal b}} \quad\square \tag{7d} \]

\[ \begin{bmatrix} \vec{v}'_x \\ \vec{v}'_y \end{bmatrix} \mapsto \begin{bmatrix} \hat{G}_x \\ \hat{G}_y \\ \hat{G}_z \end{bmatrix}, \quad\square \tag{7e} \]

where c⃗_1, c⃗_2 are the cardinal-offset parameters, q⃗_1, q⃗_2 are the thin-prism distortion parameters and p⃗_1, p⃗_2 are the decentering parameters. The set of k⃗ parameters defines radial distortion for each anamorphic power axis. v⃗ is the input view coordinate and v⃗′ is the view coordinate with the lens transformation applied. φ⃗ ∈ [0, 1]² is the anamorphic interpolation weight, defined in Section 2.
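The division-model distortion of Equation (7) can be sketched as follows (coefficient lists stand in for the per-axis radial polynomial; all names are mine):

```python
def distort_view(vx, vy, k_x, k_y, p=(0.0, 0.0), q=(0.0, 0.0), c=(0.0, 0.0)):
    """Anamorphic Brown-Conrady-style distortion of view coordinates in
    the division variant (Equation 7). k_x, k_y are per-power-axis lists
    of radial coefficients; p, q, c hold the decentering, thin-prism and
    cardinal-offset parameters."""
    fx, fy = vx - c[0], vy - c[1]                # cardinal offset (7a)
    r2 = fx * fx + fy * fy                       # squared radius (7c)
    if r2 == 0.0:
        return c[0], c[1]                        # optical center is a fixed point
    wx, wy = fx * fx / r2, fy * fy / r2          # anamorphic weights (7b)
    # Per-axis radial polynomials in r², blended by the weights and
    # applied as a division (radial anamorphic term of 7d).
    poly_x = 1.0 + sum(k * r2 ** (i + 1) for i, k in enumerate(k_x))
    poly_y = 1.0 + sum(k * r2 ** (i + 1) for i, k in enumerate(k_y))
    div = poly_x * wx + poly_y * wy
    dot_p = fx * p[0] + fy * p[1]                # decentering product
    return (fx / div + fx * dot_p + r2 * q[0] + c[0],
            fy / div + fy * dot_p + r2 * q[1] + c[1])
```

With all coefficients zero the transformation is the identity, and positive radial coefficients pull points toward the center (barrel-style division distortion).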
Figure 4. Graph mapping t ∈ [0, 1] to spectral color χ⃗ ∈ [0, 1]³, for simulation of chromatic aberration. This is an output of the periodic function found in Equation (8). The distribution of values ensures proper color order and a sum of samples equal to neutral white.

6 Chromatic aberration

A chromatic aberration effect can be achieved with a multi-sample blur, where each sampled layer is colored by the corresponding spectral value [Gilcher 2015]. The presented periodic function for the spectral color χ⃗ produces samples that always add up to 1 (neutral white) when the sample count is even. It also presents the correct order of spectrum colors.

\[ \begin{bmatrix} \vec\chi_r \\ \vec\chi_g \\ \vec\chi_b \end{bmatrix} = \operatorname{clamp}\left( \frac32 - \left| \left( 4 \begin{bmatrix} t + \frac13 \\ t \\ t + \frac23 \end{bmatrix} \bmod 4 \right) - 2 \right|,\ 0,\ 1 \right), \tag{8} \]

where χ⃗ ∈ [0, 1]³ is the spectral-color value for position t ∈ [0, 1]. See Figure 4 for more information.

Performing a spectral blur on an image involves a multi-sample sum of spectrum-colored layers. Here t (replaced by sample progress) never reaches 1; this preserves the picture's white balance. The number of samples n must be an even number, no less than 2, for a correct result.

\[ \begin{bmatrix} \vec{f}'_r \\ \vec{f}'_g \\ \vec{f}'_b \end{bmatrix} = \frac{2}{n} \sum_{i=0}^{n-1} \begin{bmatrix} \vec{f}_r \\ \vec{f}_g \\ \vec{f}_b \end{bmatrix} \underbrace{\operatorname{clamp}\left( \frac32 - \left| \left( 4 \begin{bmatrix} \frac{i}{n} + \frac13 \\ \frac{i}{n} \\ \frac{i}{n} + \frac23 \end{bmatrix} \bmod 4 \right) - 2 \right|,\ 0,\ 1 \right)}_{\vec\chi\ \text{periodic function}}, \tag{9} \]

where n ∈ ℕ is the even number of samples for the chromatic-aberration color split, f⃗ ∈ [0, 1]³ is the current-sample-position color value and f⃗′ ∈ [0, 1]³ is the final spectral-blurred color value.
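The spectral weighting of Equations (8)–(9) can be sketched as follows (the channel offsets and constants follow my reconstruction of the garbled formula, chosen so that opposite half-period samples pair to 1; names are mine):

```python
def spectrum(t):
    """Spectral color χ(t) ∈ [0,1]³ for t ∈ [0,1) (Equation 8): shifted
    triangle pulses for R, G, B. Each channel satisfies
    χ(x) + χ(x + 1/2) = 1, so an even number of uniform samples
    averages to neutral white."""
    def channel(offset):
        x = (t + offset) % 1.0
        return min(max(1.5 - abs(4.0 * x - 2.0), 0.0), 1.0)
    # Red peaks early, green mid-range, blue late: spectral order.
    return channel(1.0 / 3.0), channel(0.0), channel(2.0 / 3.0)

def spectral_blur(samples):
    """Multi-sample chromatic color-split (Equation 9): sum the sampled
    colors weighted by χ(i/n), normalized by 2/n; n must be even."""
    n = len(samples)
    assert n >= 2 and n % 2 == 0, "sample count must be even and >= 2"
    out = [0.0, 0.0, 0.0]
    for i, (r, g, b) in enumerate(samples):
        wr, wg, wb = spectrum(i / n)  # t never reaches 1
        out[0] += 2.0 / n * r * wr
        out[1] += 2.0 / n * g * wg
        out[2] += 2.0 / n * b * wb
    return tuple(out)
```

Blurring a uniformly white set of samples returns white, which is the white-balance property the text describes.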
Figure 5. Example of anamorphic lens distortion with chromatic aberration (negative k⃗_x, positive k⃗_y, dispersion d; 64 samples).

The equation for the spectral color χ⃗ can be rewritten in a more computationally optimized, modulo-free form.

\[ \begin{bmatrix} \vec\chi_r \\ \vec\chi_g \\ \vec\chi_b \end{bmatrix} = \begin{bmatrix} \operatorname{clamp}\big(\frac32 - |4t - \frac23|\big) + \operatorname{clamp}\big(\frac32 - |4t - \frac{14}{3}|\big) \\ \operatorname{clamp}\big(\frac32 - |4t - 2|\big) \\ \operatorname{clamp}\big(\frac32 - |4t + \frac23|\big) + \operatorname{clamp}\big(\frac32 - |4t - \frac{10}{3}|\big) \end{bmatrix}, \tag{10} \]

where χ⃗ ∈ [0, 1]³ is the spectral color at position t ∈ [0, 1]. See Figure 4 for visualization.

Chromatic aberration can be integrated into lens distortion by spectral blurring through the distortion-transformation vector v⃗_Δ = v⃗′ − v⃗. Presented below is the equation for the spectral-blur displacement vector, calculated per spectral-blur sample.

\[ \vec{v}_\Delta(t) = \Big( 1 + \big( t - \tfrac12 \big) d \Big) \vec{v}_\Delta, \tag{11} \]

where v⃗_Δ(t) ∈ ℝ² is the spectral-blur sample-offset vector at position t ∈ [0, 1). The value d ∈ ℝ denotes the lens dispersion scale.

7 Conclusion

In this article a mathematical model for designing anamorphic perspective geometry has been presented. The parametrization of this model enables adaptive picture geometry, which can be dynamically adjusted to the visible content in an artistically convincing manner. Along with the anamorphic perspective, vignetting and lens distortion with chromatic aberration have been provided, for a holistic digital-lens experience.
References
Leon B. Alberti. 1435. On Painting (1970 ed.). New Haven: Yale University Press.
Giulio C. Argan and Nesca A. Robb. 1946. The Architecture of Brunelleschi and the Origins of Perspective Theory in the Fifteenth Century. Journal of the Warburg and Courtauld Institutes. https://doi.org/10.2307/750311
Joseph Baldwin, Alistair Burleigh, and Robert Pepperell. 2014. Comparing Artistic and Geometrical Perspective Depictions of Space in the Visual Field. i-Perception 5, 6 (Jan. 2014), 536–547. https://doi.org/10.1068/i0668
Felix Bettonvil. 2005. Fisheye lenses. WGN, Journal of the International Meteor Organization 33, 1 (2005), 11–12. https://ui.adsabs.harvard.edu/link_gateway/2005JIMO...33....9B/ADS_PDF
José Correia and Luís Romão. 2007. Extended perspective system. In Proceedings of the 25th eCAADe International Conference. 185–192.
Leonardo Da Vinci. 1632. A treatise on painting (2014 ed.). Project Gutenberg, Chapter Linear Perspective, 49–59. http://gutenberg.org/ebooks/46915
Robert A. Dixon. 1987. Mathographics. Basil Blackwell Limited, Chapter Perspective Drawings, 82–83.
Andrew W. Fitzgibbon. 2001. Simultaneous linear estimation of multiple view geometry and lens distortion. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001, Vol. 1. IEEE Comput. Soc, Kauai, HI, USA. https://doi.org/10.1109/cvpr.2001.990465
Margaret M. Fleck. 1994. Perspective Projection: the Wrong Imaging Model. Tech. report 95-01. Computer Science, University of Iowa. https://mfleck.cs.illinois.edu/my-papers/stereographic-TR.pdf
Daniel M. German, Pablo D'Angelo, Michael Gross, and Bruno Postle. 2007. New Methods to Project Panoramas for Practical and Aesthetic Purposes. In Computational Aesthetics in Graphics, Visualization, and Imaging. The Eurographics Association. https://doi.org/10.2312/COMPAESTH/COMPAESTH07/015-022
Carolyn Giardina. 2016. How 'The Hateful Eight' Cinematographer Revived Lenses From the 1960s. The Hollywood Reporter (Jan. 2016). https://hollywoodreporter.com/news/how-hateful-eight-cinematographer-revived-852586 Online.
Pascal Gilcher. 2015. YACA (Yet Another Chromatic Aberration). ReShade forum. https://reshade.me/forum/shader-presentation/1133-yaca-yet-another-chromatic-aberration Forum post.
Rudolf Kingslake. 1989. A History of the Photographic Lens. Academic Press, Chapter IV, 59–62.
Erik Krause. 2019. Fisheye Projection. PanoTools wiki. https://wiki.panotools.org/index.php?title=Fisheye_Projection&oldid=16077 Online.
James M. McArdle. 2013. From the corner of your eye?... Personal blog. https://drjamesmcardle.com/2013/04/07/from-the-corner-of-your-eye/ Online.
Luis Peñaranda, Luiz Velho, and Leonardo Sacht. 2015. Real-time correction of panoramic images using hyperbolic Möbius transformations. Journal of Real-Time Image Processing 15, 4 (May 2015), 725–738. https://doi.org/10.1007/s11554-015-0502-x
Dan Sasaki. 2017a. The Aesthetics of Anamorphic in Film and Digital. Panavision Inc. https://vimeo.com/167052303 Video.
Dan Sasaki. 2017b. The Five Pillars of Anamorphic – Disproportionate Breathing. Panavision Inc. https://vimeo.com/167045643 Video.
Thomas K. Sharpless, Bruno Postle, and Daniel M. German. 2010. Pannini: A New Projection for Rendering Wide Angle Perspective Images. Computational Aesthetics in Graphics, Visualization, and Imaging (2010). https://doi.org/10.2312/COMPAESTH/COMPAESTH10/009-016
Jianhua Wang, Fanhuai Shi, Jing Zhang, and Yuncai Liu. 2008. A new calibration model of camera lens distortion. Pattern Recognition 41, 2 (Feb. 2008), 607–615. https://doi.org/10.1016/j.patcog.2007.06.012
Eric J. W. Whittaker. 1984. The Stereographic Projection (2001 ed.). University College Cardiff Press.
Wikipedia contributors. 2019. Robert Barker (painter). Wikipedia, The Free Encyclopedia (2019). http://en.wikipedia.org/w/index.php?title=Robert_Barker_(painter)&oldid=907715733
Shaun E. Williams. 2015. Blinky. GitHub, Inc. https://github.com/shaunlebron/blinky Modification, Quake.
Shaun E. Williams. 2017. Flex FOV. GitHub, Inc. https://github.com/shaunlebron/flex-fov Modification, Minecraft.
Figure 6. Example anamorphic azimuthal projections applied to panoramic images, each with its k⃗, focal length f and resulting Ω_h: (a) racing and (b) flying, panorama source: Grzegorz Wronkowski, CC-BY; (c) first-person, panorama captured from Obduction through Nvidia Ansel; (d) panorama captured from For Honor through Nvidia Ansel.