Electrical Engineering and Systems Science: Signal Processing
On Procrustes Analysis in Hyperbolic Space

Puoya Tabaghi, Ivan Dokmanić, Member, IEEE

Abstract
Congruent Procrustes analysis aims to find the best matching between two point sets through rotation, reflection and translation. We formulate the Procrustes problem for hyperbolic spaces, review the canonical definition of the center of point sets, and give a closed form solution for the optimal isometry for noise-free measurements. We also analyze the performance of the proposed method under measurement noise.

Index Terms—Hyperbolic geometry, Procrustes analysis

I. INTRODUCTION

In Greek mythology, Procrustes was a robber who lived in Attica and deformed his victims to match the size of his bed. In 1962, Hurley and Cattell used the story of Procrustes to describe a point-set matching problem in Euclidean spaces [9], stated below.

Problem 1. Let $\{z_n\}_{n=1}^N$ and $\{z'_n\}_{n=1}^N$ be two point sets in $\mathbb{R}^d$. The Procrustes problem asks to find a map $\widehat{T}$ that minimizes the sum of the mismatch norms, i.e.,
$$\widehat{T} = \arg\min_{T \in \mathcal{T}} \sum_{n=1}^{N} \left\| z_n - T(z'_n) \right\|,$$
where $\mathcal{T}$ is the set of rotation, reflection, translation, and uniform scaling maps and their compositions [8].

In computer vision, Procrustes analysis is of relevance in point cloud registration problems. The task of rigid registration is to find an isometry between two (or more) sets of points sampled from a two- or three-dimensional object. Point registration has applications in object recognition [13], medical imaging [6], and localization of mobile robots [14]. In signal processing, Procrustes analysis often involves aligning shapes or point sets by a distance-preserving bijection. Procrustes problems also naturally arise in distance geometry problems (DGPs), where one wants to find the locations of a point set that best represent a given set of incomplete point distances, i.e.,
$$z_1, \dots, z_N \in \mathbb{R}^d : \quad \|z_n - z_m\| = d_{m,n}, \quad \forall (m, n) \in \mathcal{M},$$
where $\mathcal{M} \subseteq \{1, \dots, N\}^2$ and $\{d_{m,n} : (m, n) \in \mathcal{M}\}$ is the set of measured distances [10]. If a distance geometry problem has a solution, it is an orbit of the form
$$\mathcal{O}_Z = \left\{ \{T(z_n)\}_{n=1}^N \;\text{ s.t. }\; T : \mathbb{R}^d \to \mathbb{R}^d \text{ is an isometry} \right\},$$
where $Z = \{z_n\}_{n=1}^N$ is a particular solution. In order to uniquely identify the correct solution among all the possible elements of the orbit $\mathcal{O}_Z$, we may be given the exact positions of a subset of points, called anchors. We use Procrustes analysis to pick the correct solution by finding the best match between the anchors and their corresponding points in the orbit. This technique is commonly used in localization problems [4, 19].

Procrustes analysis can be performed in any metric space. In particular, hyperbolic Procrustes analysis is of great relevance due to the recent surge of interest in hyperbolic embeddings and machine learning [18, 3]. Furthermore, hyperbolic embeddings are closely connected to the study of hierarchical or tree-like data structures, and hyperbolic Procrustes solutions may be used to align hierarchical data, e.g., ontologies [17, 5]. The goal of ontological studies is to find a (distance-preserving) map between a fixed number of entities in two tree-like structures that are best aligned to each other (see Figure 1 for an illustration). For example, in ontology matching one aims to find correspondences between semantically related entities in heterogeneous ontologies, with the goal of ontology merging, query response, or data translation [17].

In unsupervised matching problems, the first step in Procrustes-type analyses is to find the correspondence between two point clouds by using the iterative closest point algorithm [16]. Recently, Alvarez-Melis et al. [1] cast the unsupervised hierarchy matching problem in hyperbolic space. Their proposed method jointly learns the "soft" correspondence and the alignment map, characterized by a hyperbolic neural network.

In our work, we start with parametric isometries in the 'Loid model of hyperbolic spaces. It is known that one can decompose any isometry into elementary isometries, e.g., hyperbolic translations and hyperbolic rotations (and reflections). In our setting, we aim to find a joint estimate for the hyperbolic translation and rotation maps that best align two point sets. To accomplish this task, we review the definition of the center of mass, or centroid, for a set of points in hyperbolic space. This enables us to subsequently "center" each set and decouple the joint estimation problem into two steps: (1) translate the center of mass of each point set to the coordinate origin (of the Poincaré model), and (2) estimate the unknown rotation factor. While hyperbolic centering has been studied in the literature [11], our Procrustes analysis framework differs from prior work in that it parallels its Euclidean counterpart and provides an optimal estimate of the unknown rotation factor, based on the weighted mean of pairwise inner products. Moreover, we prove that our proposed method recovers the theoretically optimal isometry if the point sets match perfectly. We conclude the paper by giving numerical performance bounds for the task of matching noisy point sets.

Puoya Tabaghi is with the Coordinated Science Lab at the University of Illinois at Urbana-Champaign (email: [email protected]). Ivan Dokmanić is with the Department of Mathematics and Computer Science at the University of Basel (email: [email protected]).

Fig. 1. Tree alignment in the Poincaré disk [2]. Hyperbolic Procrustes analysis aims to align two trees, depicted in the far left and far right figures. In steps (a) and (b) we center the vertices in both trees, while in step (c) we estimate the unknown rotation map.

Summary:

Let $\{x_n\}_{n \in [N]}$ and $\{x'_n\}_{n \in [N]}$ be two sets of points in a hyperbolic space, related through an isometric map, i.e., $x'_n = T(x_n)$ for all $n \in [N]$. Then
$$T = T_{m_{x'}} \circ T_U \circ T_{-m_x},$$
where $m_x, m_{x'} \in \mathbb{R}^d$ are the point sets' centroids, $T_b$ is the translation map by a vector $b \in \mathbb{R}^d$, and $T_U$ is a rotation map by a unitary matrix $U \in O(d)$; see Section III. For noisy points, this isometry is suboptimal and can be fine-tuned via a gradient-based algorithm.

Notation.

For $N \in \mathbb{N}$, we let $[N] = \{1, \dots, N\}$. Depending on the context, $x_1$ can either be the first element of $x \in \mathbb{R}^d$ or an indexed vector. We denote the set of orthogonal matrices as $O(d) = \{R \in \mathbb{R}^{d \times d} : R^\top R = I\}$. For a function $f$ and its inputs $x_1, \dots, x_N$, we write $\overline{f(x_n)} = \frac{1}{N} \sum_{n \in [N]} f(x_n)$. For a vector $b \in \mathbb{R}^d$, we denote its $\ell_2$ norm as $\|b\|$.

II. 'LOID MODEL OF HYPERBOLIC SPACE
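The definitions in this section are easy to check numerically. The following sketch (an illustration, not from the paper) implements the Lorentzian inner product and the 'Loid distance; the helper `lift` maps $z \in \mathbb{R}^d$ onto the hyperboloid and coincides with the map later called $Q$ in Definition 1:

```python
import numpy as np

# Sketch (not from the paper): the Lorentzian inner product
# [x, x'] = x^T H x' with H = diag(-1, 1, ..., 1), and the 'Loid
# distance d(x, x') = acosh(-[x, x']).

def lorentz_inner(x, y):
    return -x[0] * y[0] + x[1:] @ y[1:]

def lift(z):
    # maps z in R^d to the hyperboloid L^d (the map Q of Definition 1)
    return np.concatenate(([np.sqrt(1.0 + z @ z)], z))

def loid_dist(x, y):
    # clip guards tiny numerical dips below 1 for nearly equal points
    return np.arccosh(np.clip(-lorentz_inner(x, y), 1.0, None))

x = lift(np.array([0.3, -0.2]))
y = lift(np.array([-1.0, 0.5]))
```

Points built with `lift` satisfy $[x, x] = -1$ with positive first coordinate, so they lie on $L^d$ by construction.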

Let $x, x' \in \mathbb{R}^{d+1}$ with $d \geq 1$. The Lorentzian inner product between $x$ and $x'$ is defined as
$$[x, x'] = x^\top H x', \qquad H = \begin{pmatrix} -1 & 0^\top \\ 0 & I_d \end{pmatrix}, \tag{1}$$
where $I_d \in \mathbb{R}^{d \times d}$ is the identity matrix. This is an indefinite inner product on $\mathbb{R}^{d+1}$. The vector space $\mathbb{R}^{d+1}$ equipped with the Lorentzian inner product is called a Lorentzian $(d+1)$-space. In a Lorentzian space, we can define notions similar to adjoint and unitary matrices in Euclidean spaces. The $H$-adjoint of a matrix $R$, denoted $R^{[*]}$, is defined via
$$[Rx, x'] = [x, R^{[*]} x'], \quad \forall x, x' \in \mathbb{R}^{d+1},$$
or simply as $R^{[*]} = H^{-1} R^\top H$. An invertible matrix $R$ is called $H$-unitary if $R^{[*]} = R^{-1}$ [7].

The 'Loid model of $d$-dimensional hyperbolic space is a Riemannian manifold $\mathbb{L}^d = (L^d, (g_x)_x)$, where
$$L^d = \left\{ x \in \mathbb{R}^{d+1} : [x, x] = -1, \ x_1 > 0 \right\}$$
and the Riemannian metric $g_x : T_x L^d \times T_x L^d \to \mathbb{R}$ is defined as $g_x(u, v) = [u, v]$. The distance function in the 'Loid model is characterized by Lorentzian inner products as
$$d(x, x') = \operatorname{acosh}(-[x, x']), \quad \forall x, x' \in L^d.$$

A. Isometries
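The two elementary isometries discussed in this subsection can be sketched numerically (an illustration, not the paper's code). The matrix square root $(I + bb^\top)^{1/2}$ is expanded in closed form, and the $H$-unitarity condition $R^\top H R = H$ is verified together with the inverse relations of Fact 2:

```python
import numpy as np

# Sketch (not from the paper): the hyperbolic rotation R_U and
# translation R_b of Fact 1, using the closed form
# (I + b b^T)^{1/2} = I + (sqrt(1 + |b|^2) - 1) / |b|^2 * b b^T.

def R_rot(U):
    R = np.eye(U.shape[0] + 1)
    R[1:, 1:] = U
    return R

def R_trans(b):
    d, nb2 = b.size, b @ b
    M = np.eye(d)
    if nb2 > 0:
        M += ((np.sqrt(1.0 + nb2) - 1.0) / nb2) * np.outer(b, b)
    R = np.empty((d + 1, d + 1))
    R[0, 0] = np.sqrt(1.0 + nb2)
    R[0, 1:], R[1:, 0], R[1:, 1:] = b, b, M
    return R

d = 3
H = np.diag([-1.0] + [1.0] * d)
b = np.array([0.4, -0.1, 0.7])
U = np.linalg.qr(np.random.default_rng(1).standard_normal((d, d)))[0]
Rb, RU = R_trans(b), R_rot(U)
```

Both matrices preserve the Lorentzian form, so any composition of them is again an isometry of $L^d$.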

A map $T : L^d \to L^d$ is an isometry if it is bijective and preserves distances, i.e., if
$$d(x, x') = d\left(T(x), T(x')\right), \quad \forall x, x' \in L^d.$$
We can represent any hyperbolic isometry as a composition of two elementary maps that are parameterized by a $d$-dimensional vector and a $d \times d$ unitary matrix, as described below.

Fact 1. [15] The function $T : L^d \to L^d$ is an isometry if and only if it can be written as $T(x) = R_U R_b x$, where
$$R_U = \begin{bmatrix} 1 & 0^\top \\ 0 & U \end{bmatrix}, \qquad R_b = \begin{bmatrix} \sqrt{1 + \|b\|^2} & b^\top \\ b & (I + b b^\top)^{1/2} \end{bmatrix}$$
for a unitary matrix $U \in O(d)$ and a vector $b \in \mathbb{R}^d$.

Fact 1 can be directly verified by finding the conditions for a real matrix $R$ to be $H$-unitary, i.e., $R^\top H R = H$, or simply $R = H^{-1} C H \in \mathbb{R}^{(d+1) \times (d+1)}$, where $C^\top C = I$ and $C \in \mathbb{C}^{(d+1) \times (d+1)}$. We use this parametric decomposition of rigid transformations to solve the Procrustes problem in $L^d$. The hyperbolic translation map $T_b : L^d \to L^d$ and hyperbolic rotation map $T_U : L^d \to L^d$ are defined as
$$T_b(x) = R_b x, \quad \text{for } b \in \mathbb{R}^d, \tag{2}$$
$$T_U(x) = R_U x, \quad \text{for } U \in O(d). \tag{3}$$

Fact 2. $T_b^{-1} = T_{-b}$ and $T_U^{-1} = T_{U^\top}$, where $b \in \mathbb{R}^d$ and $U \in O(d)$.

III. PROCRUSTES ANALYSIS
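As a reference point for the hyperbolic construction of this section, the classical Euclidean two-step recipe (centering, then an SVD for the rotation) can be sketched as follows; this is an illustration under exact, noise-free correspondences, not the paper's code:

```python
import numpy as np

# Sketch of classical Euclidean orthogonal Procrustes analysis:
# center both point sets, then recover U in O(d) from the SVD of the
# cross-covariance (von Neumann's trace inequality underlies this step).
# Columns of Z and Zp are corresponding points with Z = U Zp + b.

def euclidean_procrustes(Z, Zp):
    mz = Z.mean(axis=1, keepdims=True)
    mzp = Zp.mean(axis=1, keepdims=True)
    Ul, _, Urt = np.linalg.svd((Z - mz) @ (Zp - mzp).T)
    U = Ul @ Urt                 # optimal rotation/reflection
    b = mz - U @ mzp             # translation recovered afterwards
    return U, b

rng = np.random.default_rng(0)
d, N = 3, 10
Zp = rng.standard_normal((d, N))
U_true = np.linalg.qr(rng.standard_normal((d, d)))[0]
b_true = rng.standard_normal((d, 1))
Z = U_true @ Zp + b_true
U_hat, b_hat = euclidean_procrustes(Z, Zp)
```

With noise-free data and more than $d$ generic points, the SVD step recovers the true orthogonal map exactly; the hyperbolic method below mirrors this structure.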

Euclidean (orthogonal) Procrustes analysis proceeds in two steps:
  • Centering: moving the centers of mass of both point sets to the origin of the Cartesian coordinates, and
  • Finding the optimal rotation/reflection.
We proceed to review (and visualize) the definition of the center of mass of a point set in hyperbolic space [11, Chapter 13]. We start by projecting each point $x \in L^d$ onto the $d$-dimensional subspace
$$H^d = \left\{ x \in \mathbb{R}^{d+1} : x_1 = 0 \right\}.$$
Then we can simply neglect the first element of the projected point (which is always zero) and define a one-to-one map $P$ between $L^d$ and $\mathbb{R}^d$; see Figure 2. In Definition 1, we formalize this projection and its inverse.

Definition 1.

The projection operator $P : L^d \to \mathbb{R}^d$ and its inverse $Q$ are defined as
$$P\left(\begin{bmatrix} \sqrt{1 + \|z\|^2} \\ z \end{bmatrix}\right) = z, \qquad Q(z) = \begin{bmatrix} \sqrt{1 + \|z\|^2} \\ z \end{bmatrix}.$$
For brevity, we define $P(X) \overset{\mathrm{def}}{=} [P(x_1), \dots, P(x_N)]$, where $X = [x_1, \dots, x_N] \in (L^d)^N$; we consider the same extension for $Q$ as well.

In Section III-A, we review the hyperbolic centering process [11]. In other words, we find a map $T_b$ that moves the center of mass of the projected point set to $0 \in \mathbb{R}^d$, i.e., $\overline{P\left(T_b(x_n)\right)} = 0$. Then we show how this centering method helps simplify the hyperbolic Procrustes problem to a sub-problem similar to the famous (Euclidean) orthogonal Procrustes problem.

Fig. 2. Geometric illustration of $P$, $Q$, and the stereographic projection $h$.

A. Hyperbolic Centering

In Euclidean Procrustes analysis, we have two point sets $z_1, \dots, z_N$ and $z'_1, \dots, z'_N$ that are related via a composition of rotation, reflection, and translation maps, i.e., $z_n = U z'_n + b$, where $U \in O(d)$ and $b \in \mathbb{R}^d$. We extract translation-invariant features by moving their centers of mass to $0 \in \mathbb{R}^d$, i.e.,
$$z_n - \overline{z_n} = U\left(z'_n - \overline{z'_n}\right).$$
The main purpose of centering is to map each point set to new locations, $z_n - \overline{z_n}$ and $z'_n - \overline{z'_n}$, that are invariant with respect to the unknown translation $b$. Subsequently, we can estimate the unknown unitary matrix $\widehat{U}$, and then the translation according to $\widehat{b} = \overline{z_n} - \widehat{U}\,\overline{z'_n}$.

In hyperbolic Procrustes analysis, we have
$$x_n = R_b R_U x'_n, \quad \forall n \in [N], \tag{4}$$
where $U \in O(d)$ and $b \in \mathbb{R}^d$. In a similar way, we pre-process the point sets to extract (hyperbolic) translation-invariant locations, i.e., centered point sets. Lemma 1 gives a simple method to center a projected point set.

Lemma 1. [11] Let $x_1, x_2, \dots, x_N \in L^d$. Then, we have $\overline{P\left(R_{-m_x} x_n\right)} = 0$, where
$$m_x \overset{\mathrm{def}}{=} \frac{1}{\sqrt{-[\overline{x_n}, \overline{x_n}]}}\, \overline{P(x_n)}.$$
In Proposition 1, we show that $T_{-m_x}$ is the canonical translation map for centering the point set $X \in (L^d)^N$.

Proposition 1.

Let $x_1, \dots, x_N$ and $x'_1, \dots, x'_N$ in $L^d$ be such that
$$x_n = R_b R_U x'_n, \quad \forall n \in [N],$$
for $b \in \mathbb{R}^d$ and $U \in O(d)$. Then,
$$R_{-m_x} x_n = R_V R_{-m_{x'}} x'_n,$$
where $R_V$ is a hyperbolic rotation matrix.

Proof. From Lemma 1, we have
$$\overline{R_{-m_x} x_n} = \begin{bmatrix} a_1 \\ 0 \end{bmatrix}, \qquad \overline{R_{-m_{x'}} x'_n} = \begin{bmatrix} a_2 \\ 0 \end{bmatrix}$$
for $a_1, a_2 \in \mathbb{R}$. On the other hand, we can rewrite eq. (4) in the form
$$R_{-m_x} x_n = R' R_{-m_{x'}} x'_n, \quad \forall n \in [N],$$
where $R' = R_{-m_x} R_b R_U R_{m_{x'}}$. Since $R'$ is an $H$-unitary matrix, we can decompose it as $R' = R_c R_V$ for some $c \in \mathbb{R}^d$ and $V \in O(d)$. Therefore, we have
$$\begin{bmatrix} a_1 \\ 0 \end{bmatrix} = R_c R_V \begin{bmatrix} a_2 \\ 0 \end{bmatrix},$$
which gives $c = 0$.

The map $T_{-m_x}$ not only centers a set of points but also rotates them. This phenomenon is rooted in the noncommutativity of hyperbolic translation, or gyration. More precisely, for any two vectors $b_1, b_2 \in \mathbb{R}^d$, we have
$$R_{b_1} R_{b_2} = R_V R_{b_2} R_{b_1}$$
for a specific unitary matrix $V \in O(d)$ that accounts for the gyration factor; see the example in Figure 3 and the follow-up discussion in Section IV. This does not interfere with our analysis, since any such rotation is absorbed in $U$ and we estimate their joint unitary transformation.

Now, let us consider the following noisy case:
$$x_n = R_b R_U R_{\epsilon_n} x'_n, \quad \forall n \in [N],$$
where $\epsilon_n \in \mathbb{R}^d$ is the translation noise for the point $x'_n$. Let $z_n = R_{\epsilon_n} x'_n$. Then we have $R_{-m_x} x_n = R_V R_{-m_z} z_n$. The centroid $m_z$ is related to $m_{x'}$ and $\{\epsilon_n\}_{n \in [N]}$; therefore, we can write $m_z = m_{x'} + \epsilon$ for an $\epsilon \in \mathbb{R}^d$. This leads to
$$R_{-m_x} x_n = R_V R_{\epsilon'_n} R_{-m_{x'}} x'_n, \quad \forall n \in [N],$$
where $R_{\epsilon'_n} = R_{-m_{x'} - \epsilon}\, R_{\epsilon_n}\, R_{m_{x'}}$.
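Lemma 1 and Proposition 1 can be checked numerically. In this sketch (noise-free data, an illustration rather than the paper's code), the projected mean of each centered set vanishes, and the centered sets agree in their first coordinates and in the Gram matrix of their projections, exactly as a pure rotation block $R_V$ predicts:

```python
import numpy as np

# Sketch: center two 'Loid point sets as in Lemma 1, then observe
# Proposition 1: after centering, the sets differ only by a rotation.

def lorentz_inner(x, y):
    return -x[0] * y[0] + x[1:] @ y[1:]

def lift(z):
    return np.concatenate(([np.sqrt(1.0 + z @ z)], z))

def R_trans(b):
    d, nb2 = b.size, b @ b
    M = np.eye(d)
    if nb2 > 0:
        M += ((np.sqrt(1.0 + nb2) - 1.0) / nb2) * np.outer(b, b)
    top = np.concatenate(([np.sqrt(1.0 + nb2)], b))
    return np.vstack([top, np.column_stack([b, M])])

def centroid(X):
    """m_x of Lemma 1; columns of X are points on L^d."""
    xbar = X.mean(axis=1)
    return xbar[1:] / np.sqrt(-lorentz_inner(xbar, xbar))

rng = np.random.default_rng(2)
d, N = 3, 8
Xp = np.column_stack([lift(z) for z in rng.standard_normal((N, d))])
U = np.linalg.qr(rng.standard_normal((d, d)))[0]
RU = np.eye(d + 1); RU[1:, 1:] = U
X = R_trans(rng.standard_normal(d)) @ RU @ Xp     # x_n = R_b R_U x'_n, eq. (4)
Xc = R_trans(-centroid(X)) @ X                    # centered point sets
Xpc = R_trans(-centroid(Xp)) @ Xp
```

The Gram-matrix check works because a rotation $R_V = \mathrm{blkdiag}(1, V)$ leaves first coordinates untouched and applies the same $V \in O(d)$ to all projected points.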
If the translation noise of each point is sufficiently small, then $R_V R_{\epsilon'_n} \approx R_{V'}$ for some $V' \in O(d)$.

B. Hyperbolic Rotation & Reflection

To estimate the unknown hyperbolic rotation, we consider minimizing a weighted discrepancy between the centered point sets. More precisely,
$$\widehat{U} = \arg\min_{V \in O(d)} \sum_{n \in [N]} w_n f\left(d\left(R_{-m_x} x_n, R_V R_{-m_{x'}} x'_n\right)\right), \tag{5}$$
where $d(x, x') = \operatorname{acosh}(-x^\top H x')$, $\{w_n\}_{n \in [N]}$ are positive weights, and $f(\cdot) = \cosh(\cdot)$ is a monotonic function.

Fig. 3. (a): Red and blue are projected points related by a translation, i.e., $X = R_b X'$. (b, c): Centering each point set. (d): The centered points are related via a rotation, i.e., $R_{m_x} R_b R_{-m_{x'}} \neq I_d$.

Proposition 2.

The optimal unitary matrix that solves (5) equals $\widehat{U} = U_l U_r^\top$, where $U_l \Sigma U_r^\top$ is the singular value decomposition of $P(R_{-m_x} X)\, W\, P(R_{-m_{x'}} X')^\top$ and $W = \operatorname{diag}(w_1, \dots, w_N)$.

Proof. We can simplify (5) as follows:
$$\widehat{U} = \arg\max_{V \in O(d)} \sum_{n \in [N]} \operatorname{Tr}\left(R_{-m_{x'}} x'_n\, w_n\, (R_{-m_x} x_n)^\top H R_V\right).$$
From Fact 1, we know that $R_V$ is parameterized only by its lower-right block. The proof then follows from representing the sum in matrix form and invoking von Neumann's trace inequality [12].

IV. MÖBIUS ADDITION
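The objects of this section can be previewed numerically (an illustration with standard formulas from the hyperbolic-geometry literature [2, 20], not the paper's code): Möbius addition $\oplus$ on the unit ball, the stereographic projection $h$ from the 'Loid model, and the agreement between the Poincaré distance $2\tanh^{-1}(\|{-y} \oplus y'\|)$ and the 'Loid distance:

```python
import numpy as np

# Sketch: Moebius addition on the unit ball, the stereographic map
# h(x) = x_{2:}/(1 + x_1) from the 'Loid to the Poincare model, and a
# check that both models report the same distance.

def mobius_add(u, v):
    uv, nu2, nv2 = u @ v, u @ u, v @ v
    num = (1 + 2 * uv + nv2) * u + (1 - nu2) * v
    return num / (1 + 2 * uv + nu2 * nv2)

def lift(z):
    return np.concatenate(([np.sqrt(1.0 + z @ z)], z))

def h_stereo(x):
    return x[1:] / (1.0 + x[0])

def poincare_dist(y, yp):
    return 2.0 * np.arctanh(np.linalg.norm(mobius_add(-y, yp)))

x = lift(np.array([0.5, -0.2]))
xp = lift(np.array([-0.1, 0.8]))
y, yp = h_stereo(x), h_stereo(xp)
d_loid = np.arccosh(-(-x[0] * xp[0] + x[1:] @ xp[1:]))
```

For non-collinear arguments, `mobius_add(y, yp)` and `mobius_add(yp, y)` differ; the gyration discussed below is exactly the rotation relating the two.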

In the Poincaré model ($I^d$), the points reside in the unit $d$-dimensional Euclidean ball. The isometry between the 'Loid and Poincaré models, $h : L^d \to I^d$, is called the stereographic projection [2]. The distance between $y, y' \in I^d$ is given by $d(y, y') = 2\tanh^{-1}(\|{-y} \oplus y'\|)$, where $\oplus$ is Möbius addition, a noncommutative and nonassociative operator. Gyration measures the "deviation" of Möbius addition from commutativity, i.e., $\operatorname{gyr}[y, y'](y' \oplus y) = y \oplus y'$ [20].

Fact 3.

The maps $h \circ T_b \circ h^{-1}$ and $h \circ T_U \circ h^{-1}$ are isometries in the Poincaré model, and they can be written as
$$h \circ T_U \circ h^{-1}(y) = U y, \qquad h \circ T_b \circ h^{-1}(y) = b' \oplus y,$$
where $b' = h \circ Q(b)$, and $T_b$ and $T_U$ are defined in (2) and (3).

The translation isometry is a direct result of the Gyrotranslation Theorem equality,
$$-(c \oplus y) \oplus (c \oplus y') = \operatorname{gyr}[c, y](-y \oplus y'),$$
where $c \in I^d$ [20]. Therefore, left Möbius addition preserves the distances of point sets in the Poincaré model. We can hence perform Procrustes analysis in the Poincaré model by (1) centering each point set, i.e., subtracting their center of mass from the left-hand side of the Möbius addition, and (2) estimating the remaining rotation factor, a composition of gyrations and the initial unknown rotation between the two point sets.

V. NUMERICAL ANALYSIS

Let $x_n = R^* R_{\epsilon_n} x'_n$, $\forall n \in [N]$, where $R^*$ is an $H$-unitary matrix and $\epsilon_1, \dots, \epsilon_N$ is the set of translation noise samples. We compute the following $H$-unitary operators to match the point sets $X, X'$:
  • $R_P$: the matrix estimated by our proposed method;
  • $R_{GD}$: let $e(X, \widetilde{X}) \overset{\mathrm{def}}{=} \frac{1}{Nd} \sum_{n \in [N]} d(x_n, \widetilde{x}_n)$ denote the normalized discrepancy between $X$ and $\widetilde{X}$. The matrix $R_{GD}$ is computed by an iterative gradient descent method: we initialize $R_{GD} = I_{d+1}$ and iterate the following steps: (1) $\widehat{b} = -\alpha \frac{\partial}{\partial b} e(X, R_b R_{GD} X')\big|_{b=0}$ for a small $\alpha > 0$; (2) $\widehat{U} = \arg\max_{U \in O(d)} \sum_{n \in [N]} [x_n, R_U R_{\widehat{b}} R_{GD} x'_n]$; (3) update $R_{GD} \leftarrow R_{\widehat{U}} R_{\widehat{b}} R_{GD}$;
  • $R_{GD+P}$: we combine the aforementioned methods by (1) solving the problem with our method, and (2) fine-tuning the estimated isometry by applying the gradient method to the point sets $X$ and $R_P X'$.

For a random $H$-unitary $R^*$ and all $n \in [N]$, we sample $d$-dimensional $z_n \sim \mathcal{N}(0, I)$ and small translation noise $\epsilon_n \sim \mathcal{N}(0, \sigma^2 I)$; then we let $x'_n = Q(z_n)$ and $x_n = R^* R_{\epsilon_n} x'_n$. For random $(X, X')$ pairs, we compute the normalized discrepancy $e(X, R X')$, where $R \in \{R_P, R_{GD}, R_{GD+P}\}$.

(Möbius gyrations keep the norm that they inherit from $\mathbb{R}^d$ invariant, i.e., $\|\operatorname{gyr}[c, y](-y \oplus y')\| = \|{-y} \oplus y'\|$ [20].)

Fig. 4. (a) Normalized discrepancy for random hyperbolic point sets of varying size $N$ and dimension $d$; we report the quartiles $Q_1$, $Q_2$, and $Q_3$, since they are robust to outliers. (b) The probability of an outlier event, $P_\circ$, i.e., the fraction of trials that failed to converge or are outliers in the sense of (6).
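The estimator $R_P$ in this experiment can be sketched end to end. This is an illustration, not the paper's code: it uses uniform weights, a fixed true translation, and an assumed noise scale $\sigma = 10^{-2}$ (the paper's exact experimental values are not reproduced here):

```python
import numpy as np

# Sketch of the proposed method R_P on synthetic noisy data: lift
# Gaussian points to L^d, apply an H-unitary R*, add small hyperbolic
# translation noise, then estimate the isometry by centering (Lemma 1)
# plus the SVD rotation of Proposition 2, and report the normalized
# discrepancy e(X, R X') = (1/(N d)) sum_n d(x_n, (R X')_n).

def lorentz_inner(x, y): return -x[0] * y[0] + x[1:] @ y[1:]
def lift(z): return np.concatenate(([np.sqrt(1.0 + z @ z)], z))

def R_rot(U):
    R = np.eye(U.shape[0] + 1); R[1:, 1:] = U
    return R

def R_trans(b):
    d, nb2 = b.size, b @ b
    M = np.eye(d)
    if nb2 > 0:
        M += ((np.sqrt(1.0 + nb2) - 1.0) / nb2) * np.outer(b, b)
    top = np.concatenate(([np.sqrt(1.0 + nb2)], b))
    return np.vstack([top, np.column_stack([b, M])])

def centroid(X):
    xbar = X.mean(axis=1)
    return xbar[1:] / np.sqrt(-lorentz_inner(xbar, xbar))

def procrustes(X, Xp):
    """R_P with X ~ R_P Xp, built as R_{m_x} R_U R_{-m_x'} (uniform weights)."""
    mx, mxp = centroid(X), centroid(Xp)
    C = (R_trans(-mx) @ X)[1:] @ (R_trans(-mxp) @ Xp)[1:].T
    Ul, _, Urt = np.linalg.svd(C)
    return R_trans(mx) @ R_rot(Ul @ Urt) @ R_trans(-mxp)

def e_disc(X, Y):
    H = np.diag([-1.0] + [1.0] * (X.shape[0] - 1))
    ip = np.clip(-np.einsum('in,in->n', X, H @ Y), 1.0, None)
    return np.arccosh(ip).sum() / (X.shape[1] * (X.shape[0] - 1))

rng = np.random.default_rng(4)
d, N, sigma = 3, 20, 1e-2                      # sigma: assumed noise scale
Xp = np.column_stack([lift(z) for z in rng.standard_normal((N, d))])
R_star = R_trans(np.array([0.8, -0.5, 0.3])) @ \
         R_rot(np.linalg.qr(rng.standard_normal((d, d)))[0])
X = np.column_stack([R_star @ R_trans(sigma * rng.standard_normal(d)) @ Xp[:, n]
                     for n in range(N)])
R_P = procrustes(X, Xp)
```

At this noise level the residual discrepancy $e(X, R_P X')$ is on the order of $\sigma$, far below the discrepancy of the unaligned pair, which is the qualitative behavior reported in Figure 4(a).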
All methods successfully denoise the measurements, i.e., $e(X, R^* X') > e(X, R X')$; see Figure 4(a). We note that the gradient descent method does not necessarily converge to an acceptable solution. Therefore, we report the number of outlier trials, i.e., pairs
$$(X, X') : \; \left|e(X, R X') - Q_2\right| > k\,\left|Q_3 - Q_1\right|, \tag{6}$$
where $Q_1$, $Q_2$, and $Q_3$ are the first, second, and third quartiles of the total reported discrepancies, and $k = 5$ gives a conservative criterion for picking outliers (see Figure 4(b)). The gradient descent method has the largest number of outliers, whereas our proposed method has the smallest, comparable to the number of outliers in the measurement noise. Therefore, the proposed method robustly solves the hyperbolic Procrustes problem, and its accuracy can be moderately improved with a post-hoc fine-tuning gradient method.

VI. CONCLUSION

Inspired by its Euclidean counterpart, we introduced the Procrustes problem in hyperbolic spaces. We reviewed the (indefinite) Lorentzian inner product and described how $H$-unitary matrices represent isometries in the 'Loid model of hyperbolic spaces. Using the parameterized decomposition of hyperbolic isometries in terms of hyperbolic rotations and translations, we showed that moving the center of mass to the origin gives point sets that are invariant to hyperbolic translation (in the case of no measurement noise). We then used the centered point sets to estimate the unknown rotation factor.

VII. ACKNOWLEDGMENT

The authors would like to thank Prof. Olgica Milenkovic for helpful discussions and suggestions. For our method, we choose uniform weights, $w_n = 1$ (i.e., $W = I$).

REFERENCES

[1] David Alvarez-Melis, Youssef Mroueh, and Tommi Jaakkola. Unsupervised hierarchy matching with optimal transport over hyperbolic spaces. In International Conference on Artificial Intelligence and Statistics, pages 1606–1617. PMLR, 2020.
[2] James W. Cannon, William J. Floyd, Richard Kenyon, and Walter R. Parry. Hyperbolic geometry. Flavors of Geometry, 31:59–115, 1997.
[3] Christopher De Sa, Albert Gu, Christopher Ré, and Frederic Sala. Representation tradeoffs for hyperbolic embeddings. Proceedings of Machine Learning Research, 80:4460, 2018.
[4] Ivan Dokmanic, Reza Parhizkar, Juri Ranieri, and Martin Vetterli. Euclidean distance matrices: Essential theory, algorithms, and applications. IEEE Signal Processing Magazine, 32(6):12–30, 2015.
[5] Jérôme Euzenat and Pavel Shvaiko. Ontology Matching, volume 18. Springer, 2007.
[6] J. Michael Fitzpatrick, Jay B. West, and Calvin R. Maurer. Predicting error in rigid-body point-based registration. IEEE Transactions on Medical Imaging, 17(5):694–702, 1998.
[7] Israel Gohberg, Peter Lancaster, and Leiba Rodman. Matrices and Indefinite Scalar Products. Birkhäuser, 1983.
[8] John C. Gower. Generalized Procrustes analysis. Psychometrika, 40(1):33–51, 1975.
[9] John R. Hurley and Raymond B. Cattell. The Procrustes program: Producing direct rotation to test a hypothesized factor structure. Behavioral Science, 7(2):258, 1962.
[10] Leo Liberti, Carlile Lavor, Nelson Maculan, and Antonio Mucherino. Euclidean distance geometry and applications. SIAM Review, 56(1):3–69, 2014.
[11] Kanti V. Mardia and Peter E. Jupp. Directional Statistics, volume 494. John Wiley & Sons, 2009.
[12] Leon Mirsky. A trace inequality of John von Neumann. Monatshefte für Mathematik, 79(4):303–306, 1975.
[13] Niloy J. Mitra, Natasha Gelfand, Helmut Pottmann, and Leonidas Guibas. Registration of point cloud data from a geometric optimization perspective. In Proceedings of the 2004 Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, pages 22–31, 2004.
[14] François Pomerleau, Francis Colas, and Roland Siegwart. A review of point cloud registration algorithms for mobile robotics. Foundations and Trends in Robotics, 2015.
[15] John G. Ratcliffe. Foundations of Hyperbolic Manifolds, volume 149. Springer, 2006.
[16] Szymon Rusinkiewicz and Marc Levoy. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, pages 145–152. IEEE, 2001.
[17] Pavel Shvaiko and Jérôme Euzenat. Ontology matching: State of the art and future challenges. IEEE Transactions on Knowledge and Data Engineering, 25(1):158–176, 2011.
[18] Puoya Tabaghi and Ivan Dokmanić. Hyperbolic distance matrices. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '20, pages 1728–1738. Association for Computing Machinery, 2020. ISBN 9781450379984.
[19] Puoya Tabaghi, Ivan Dokmanić, and Martin Vetterli. Kinetic Euclidean distance matrices. IEEE Transactions on Signal Processing, 68:452–465, 2019.
[20] Abraham Albert Ungar. A Gyrovector Space Approach to Hyperbolic Geometry. Synthesis Lectures on Mathematics and Statistics, Morgan & Claypool, 2008.

Submitted on 7 Feb 2021.
