Learning image representations tied to ego-motion
Dinesh Jayaraman, The University of Texas at Austin, [email protected]
Kristen Grauman, The University of Texas at Austin, [email protected]
Abstract
Understanding how images of objects and scenes behave in response to specific ego-motions is a crucial aspect of proper visual development, yet existing visual learning methods are conspicuously disconnected from the physical source of their images. We propose to exploit proprioceptive motor signals to provide unsupervised regularization in convolutional neural networks to learn visual representations from egocentric video. Specifically, we enforce that our learned features exhibit equivariance, i.e., they respond predictably to transformations associated with distinct ego-motions. With three datasets, we show that our unsupervised feature learning approach significantly outperforms previous approaches on visual recognition and next-best-view prediction tasks. In the most challenging test, we show that features learned from video captured on an autonomous driving platform improve large-scale scene recognition in static images from a disjoint domain.
1. Introduction
How is visual learning shaped by ego-motion? In their famous "kitten carousel" experiment, psychologists Held and Hein examined this question in 1963 [11]. To analyze the role of self-produced movement in perceptual development, they designed a carousel-like apparatus in which two kittens could be harnessed. For eight weeks after birth, the kittens were kept in a dark environment, except for one hour a day on the carousel. One kitten, the "active" kitten, could move freely of its own volition while attached. The other kitten, the "passive" kitten, was carried along in a basket and could not control his own movement; rather, he was forced to move in exactly the same way as the active kitten. Thus, both kittens received the same visual experience. However, while the active kitten simultaneously experienced signals about his own motor actions, the passive kitten did not. The outcome of the experiment is remarkable. While the active kitten's visual perception was indistinguishable from kittens raised normally, the passive kitten suffered fundamental problems. The implication is clear: proper perceptual development requires leveraging self-generated movement in concert with visual feedback.

We contend that today's visual recognition algorithms are crippled much like the passive kitten. The culprit: learning from "bags of images". Ever since statistical learning methods emerged as the dominant paradigm in the recognition literature, the norm has been to treat images as i.i.d. draws from an underlying distribution. Whether learning object categories, scene classes, body poses, or features themselves, the idea is to discover patterns within a collection of snapshots, blind to their physical source. So is the answer to learn from video? Only partially. Without leveraging the accompanying motor signals initiated by the videographer, learning from video data does not escape the passive kitten's predicament.

Inspired by this concept, we propose to treat visual learning as an embodied process, where the visual experience is inextricably linked to the motor activity behind it. (Depending on the context, the motor activity could correspond to either the 6-DOF ego-motion of the observer moving in the scene or the second-hand motion of an object being actively manipulated, e.g., by a person or robot's end effectors.) In particular, our goal is to learn representations that exploit the parallel signals of ego-motion and pixels. We hypothesize that downstream processing will benefit from a feature space that preserves the connection between "how I move" and "how my visual surroundings change".

To this end, we cast the problem in terms of unsupervised equivariant feature learning. During training, the input image sequences are accompanied by a synchronized stream of ego-motor sensor readings; however, they need not possess any semantic labels. The ego-motor signal could correspond, for example, to the inertial sensor measurements received alongside video on a wearable or car-mounted camera. The objective is to learn a feature mapping from pixels in a video frame to a space that is equivariant to various motion classes. In other words, the learned features should change in predictable and systematic ways as a function of the transformation applied to the original input.

Figure 1. Our goal is to learn a feature space equivariant to ego-motion. We train with image pairs from video accompanied by their sensed ego-poses (left and center), and produce a feature mapping such that two images undergoing the same ego-pose change move similarly in the feature space (right). Left: scatter plot of motions (y_i − y_j) among pairs of frames ≤ 1s apart in video from the KITTI car-mounted camera, clustered into motion patterns p_ij. Center: frame pairs (x_i, x_j) from the "right turn", "left turn" and "zoom" motion patterns. Right: an illustration of the equivariance property we seek in the learned feature space. Pairs of frames corresponding to each ego-motion pattern ought to have predictable relative positions in the learned feature space. Best seen in color.

Figure 2. We learn visual features from egocentric video that respond predictably to observer ego-motion.
See Fig 1. We develop a convolutional neural network (CNN) approach that optimizes a feature map for the desired egomotion-based equivariance. To exploit the features for recognition, we augment the network with a classification loss when class-labeled images are available. In this way, ego-motion serves as side information to regularize the features learned, which we show facilitates category learning when labeled examples are scarce.

In sharp contrast to our idea, previous work on visual features, whether hand-designed or learned, primarily targets feature invariance. Invariance is a special case of equivariance, where transformations of the input have no effect. Typically, one seeks invariance to small transformations, e.g., the orientation binning and pooling operations in SIFT/HOG and modern CNNs both target invariance to local translations and rotations. While a powerful concept, invariant representations require a delicate balance: "too much" invariance leads to a loss of useful information or discriminability. In contrast, more general equivariant representations are intriguing for their capacity to impose structure on the output space without forcing a loss of information. Equivariance is "active" in that it exploits observer motor signals, like Hein and Held's active kitten.

Our main contribution is a novel feature learning approach that couples ego-motor signals and video. To our knowledge, ours is the first attempt to ground feature learning in physical activity. The limited prior work on unsupervised feature learning with video [22, 24, 21, 9] learns only passively from observed scene dynamics, uninformed by explicit motor sensory cues. Furthermore, while equivariance is explored in some recent work, unlike our idea, it typically focuses on 2D image transformations as opposed to 3D ego-motion [14, 26] and considers existing features [30, 17]. Finally, whereas existing methods that learn from image transformations focus on view synthesis applications [12, 15, 21], we explore recognition applications of learning jointly equivariant and discriminative feature maps.

We apply our approach to three public datasets. On pure equivariance as well as recognition tasks, our method consistently outperforms the most related techniques in feature learning. In the most challenging test of our method, we show that features learned from video captured on a vehicle can improve image recognition accuracy on a disjoint domain. In particular, we use unlabeled KITTI [6, 7] car data to regularize feature learning for the 397-class scene recognition task for the SUN dataset [34]. Our results show the promise of departing from the "bag of images" mindset, in favor of an embodied approach to feature learning.
2. Related work
Invariant features
Invariance is a special case of equivariance, wherein the representation of a transformed input remains identical to that of the original input. Invariance is known to be valuable for visual representations. Descriptors like SIFT, HOG, and aspects of CNNs like pooling and convolution, are hand-designed for invariance to small shifts and rotations. Feature learning work aims to learn invariances from data [27, 28, 31, 29, 5]. Strategies include augmenting training data by perturbing image instances with label-preserving transformations [28, 31, 5], and inserting linear transformation operators into the feature learning algorithm [29].

Most relevant to our work are feature learning methods based on temporal coherence and "slow feature analysis" [32, 10, 22]. The idea is to require that learned features vary slowly over continuous video, since visual stimuli can only gradually change between adjacent frames. Temporal coherence has been explored for unsupervised feature learning with CNNs [22, 37, 9, 3, 19], with applications to dimensionality reduction [10], object recognition [22, 37], and metric learning [9]. Temporal coherence of inferred body poses in unlabeled video is exploited for invariant recognition in [4]. These methods exploit video as a source of free supervision to achieve invariance, analogous to the image perturbations idea above. In contrast, our method exploits video coupled with ego-motor signals to achieve the more general property of equivariance.
Equivariant representations
Equivariant features can also be hand-designed or learned. For example, equivariant or "co-variant" operators are designed to detect repeatable interest points [30]. Recent work explores ways to learn descriptors with in-plane translation/rotation equivariance [14, 26]. While the latter does perform feature learning, its equivariance properties are crafted for specific 2D image transformations. In contrast, we target more complex equivariances arising from natural observer motions (3D ego-motion) that cannot easily be crafted, and our method learns them from data.

Methods to learn representations with disentangled latent factors [12, 15] aim to sort properties like pose, illumination, etc. into distinct portions of the feature space. For example, the transforming auto-encoder learns to explicitly represent instantiation parameters of object parts in equivariant hidden layer units [12]. Such methods target equivariance in the limited sense of inferring pose parameters, which are appended to a conventional feature space designed to be invariant. In contrast, our formulation encourages equivariance over the complete feature space; we show the impact as an unsupervised regularizer when training a recognition model with limited training data.

The work of [17] quantifies the invariance/equivariance of various standard representations, including CNN features, in terms of their responses to specified in-plane 2D image transformations (affine warps, flips of the image). We adopt the definition of equivariance used in that work, but our goal is entirely different. Whereas [17] quantifies the equivariance of existing descriptors, our approach learns a feature space that is equivariant.
Learning transformations
Other methods train with pairs of transformed images and infer an implicit representation for the transformation itself. In [20], bilinear models with multiplicative interactions are used to learn content-independent "motion features" that encode only the transformation between image pairs. One such model, the "gated autoencoder", is extended to perform sequence prediction for video in [21]. Recurrent neural networks combined with a grammar model of scene dynamics can also predict future frames in video [24]. Whereas these methods learn a representation for image pairs (or tuples) related by some transformation, we learn a representation for individual images in which the behavior under transformations is predictable. Furthermore, whereas these prior methods abstract away the image content, our method preserves it, making our features relevant for recognition.
Egocentric vision
There is renewed interest in egocentric computer vision methods, though none perform feature learning using motor signals and pixels in concert as we propose. Recent methods use ego-motion cues to separate foreground and background [25, 35] or infer the first-person gaze [36, 18]. While most work relies solely on apparent image motion, the method of [35] exploits a robot's motor signals to detect moving objects, and [23] uses reinforcement learning to form robot movement policies by exploiting correlations between motor commands and observed motion cues.
3. Approach
Our goal is to learn an image representation that is equivariant with respect to ego-motion transformations. Let x_i ∈ X be an image in the original pixel space, and let y_i ∈ Y be its associated ego-pose representation. The ego-pose captures the available motor signals, and could take a variety of forms. For example, Y may encode the complete observer camera pose (its position in 3D space, pitch, yaw, roll), some subset of those parameters, or any reading from a motor sensor paired with the camera.

As input to our learning algorithm, we have a training set U of N_u image pairs and their associated ego-poses, U = {⟨(x_i, x_j), (y_i, y_j)⟩}_{(i,j)=1}^{N_u}. The image pairs originate from video sequences, though they need not be adjacent frames in time. The set may contain pairs from multiple videos and cameras. Note that this training data does not have any semantic labels (object categories, etc.); they are "labeled" only in terms of the ego-motor sensor readings.

In the following, we first explain how to translate ego-pose information into pairwise "motion pattern" annotations (Sec 3.1). Then, Sec 3.2 defines the precise nature of the equivariance we seek, and Sec 3.3 defines our learning objective. Sec 3.4 shows how our equivariant feature learning scheme may be used to enhance recognition with limited training data. Finally, in Sec 3.5, we show how a feedforward neural network architecture may be trained to produce the desired equivariant feature space.

3.1. Mining discrete ego-motion patterns

First we want to organize training sample pairs into a discrete set of ego-motion patterns. For instance, one ego-motion pattern might correspond to "tilt downwards by approximately 20°". While one could collect new data explicitly controlling for the patterns (e.g., with a turntable and camera rig), we prefer a data-driven approach that can leverage video and ego-pose data collected "in the wild". To this end, we discover clusters among pose difference vectors y_i − y_j for pairs (i, j) of temporally close frames from video, using k-means to find G clusters, though other methods are possible. Let p_ij ∈ P = {1, . . . , G} denote the motion pattern ID, i.e., the cluster to which (y_i, y_j) belongs. We can now replace the ego-pose vectors in U with motion pattern IDs: ⟨(x_i, x_j), p_ij⟩.

The left panel of Fig 1 illustrates a set of motion patterns discovered from videos in the KITTI [6] dataset, which are captured from a moving car. Here Y consists of the position and yaw angle of the camera. So, we are clustering a 2D space consisting of forward distance and change in yaw. As illustrated in the center panel, the largest clusters correspond to the car's three primary ego-motions: turning left, turning right, and going forward.
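To make this clustering step concrete, the following minimal sketch (our own illustration, not the authors' released code) clusters pose-difference vectors of temporally close frame pairs into G motion-pattern IDs with k-means; the names poses, pair_idx, and mine_motion_patterns are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def mine_motion_patterns(poses, pair_idx, G=3, seed=0):
    """Cluster ego-pose differences (y_i - y_j) of temporally close
    frame pairs into G discrete motion patterns.

    poses    : (N, dim) array of ego-pose vectors, e.g. [forward position, yaw]
    pair_idx : list of (i, j) index pairs of temporally close frames
    Returns one motion-pattern ID p_ij in {0, ..., G-1} per pair, plus the
    fitted clustering model for assigning new pairs.
    """
    diffs = np.array([poses[i] - poses[j] for i, j in pair_idx])
    km = KMeans(n_clusters=G, n_init=10, random_state=seed).fit(diffs)
    return km.labels_, km

# Toy usage: 2D poses (forward position, yaw) for 100 frames.
rng = np.random.default_rng(0)
poses = np.cumsum(rng.normal(size=(100, 2)), axis=0)
pairs = [(t + 1, t) for t in range(99)]          # adjacent-frame pairs
p_ij, model = mine_motion_patterns(poses, pairs, G=3)
print(p_ij[:10])
```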
Given U, we wish to learn a feature mapping function z_θ(·) : X → R^D, parameterized by θ, that maps a single image to a D-dimensional vector space that is equivariant to ego-motion. (A bias dimension is assumed to be included in D for notational simplicity.) To be equivariant, the function z_θ must respond systematically and predictably to ego-motion:

$z_\theta(x_j) \approx f(z_\theta(x_i), y_i, y_j),$   (1)

for some function f. We consider equivariance for linear functions f(·), following [17]. In this case, z_θ is said to be equivariant with respect to some transformation g if there exists a D × D matrix M_g such that:

$\forall x \in \mathcal{X}: \; z_\theta(g\,x) \approx M_g\, z_\theta(x).$   (2)

Such an M_g is called the "equivariance map" of g on the feature space z_θ(·). It represents the affine transformation in the feature space that corresponds to transformation g in the pixel space. For example, suppose a motion pattern g corresponds to a yaw turn of 20°, and x and gx are the images observed before and after the turn, respectively. Equivariance demands that there is some matrix M_g that maps the pre-turn image to the post-turn image, once those images are expressed in the feature space z_θ. Hence, z_θ "organizes" the feature space in such a way that movement in a particular direction in the feature space (here, as computed by multiplication with M_g) has a predictable outcome. The linear case, as also studied in [17], ensures that the structure of the mapping has a simple form, and is convenient for learning since M_g can be encoded as a fully connected layer in a neural network.

While prior work [14, 26] focuses on equivariance where g is a 2D image warp, we explore the case where g ∈ P is an ego-motion pattern (cf. Sec 3.1) reflecting the observer's 3D movement in the world. (For movement with d degrees of freedom, setting G ≈ d should suffice, by the composability property described below. We chose small G for speed and did not vary it.) In theory, appearance changes of an image in response to an observer's ego-motion are not determined by the ego-motion alone. They also depend on the depth map of the scene and the motion of dynamic objects in the scene. One could easily augment either the frames x_i or the ego-pose y_i with depth maps, when available. Non-observer motion appears more difficult, especially in the face of changing occlusions and newly appearing objects. However, our experiments indicate we can learn effective representations even with dynamic objects. In our implementation, we train with pairs relatively close in time, so as to avoid some of these pitfalls.

While during training we target equivariance for the discrete set of G ego-motions, the learned feature space will not be limited to preserving equivariance for pairs originating from the same ego-motions. This is because the linear equivariance maps are composable. If we are operating in a space where every ego-motion can be composed as a sequence of "atomic" motions, equivariance to those atomic motions is sufficient to guarantee equivariance to all motions. To see this, suppose that the maps for "turn head right by 10°" (ego-motion pattern r) and "turn head up by 10°" (ego-motion pattern u) are respectively M_r and M_u, i.e., z(r x) = M_r z(x) and z(u x) = M_u z(x) for all x ∈ X. Now for a novel diagonal motion d that can be composed from these atomic motions as d = r ∘ u, we have

$z(d\,x) = z((r \circ u)\,x) = M_r\, z(u\,x) = M_r M_u\, z(x),$   (3)

so that M_d = M_r M_u is the equivariance map for the novel ego-motion d, even though d was not among the patterns 1, . . . , G. This property lets us restrict our attention to a relatively small number of discrete ego-motion patterns during training, and still learn features equivariant w.r.t. new ego-motions.
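The composability argument can be checked numerically. The sketch below (our own illustration, assuming features that satisfy the linear-map relation of Eq (2) exactly) builds maps for two atomic motions and verifies that their product acts as the map for the composed motion, per Eq (3).

```python
import numpy as np

rng = np.random.default_rng(1)
D = 8

# Equivariance maps for two atomic ego-motions (Eq (2)): z(r x) = M_r z(x), z(u x) = M_u z(x).
M_r = rng.normal(size=(D, D))
M_u = rng.normal(size=(D, D))

z_x = rng.normal(size=D)    # feature of the original image x
z_ux = M_u @ z_x            # feature after the "up" motion
z_rux = M_r @ z_ux          # feature after "up" then "right" (composite motion d = r o u)

# Eq (3): the composite map is the product of the atomic maps.
M_d = M_r @ M_u
assert np.allclose(z_rux, M_d @ z_x)
print("M_d = M_r M_u reproduces the feature of the composed motion")
```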
We now design a loss function that encourages the learned feature space z_θ to exhibit equivariance with respect to each ego-motion pattern. Specifically, we would like to learn the optimal feature space parameters θ* jointly with its equivariance maps M* = {M*_1, . . . , M*_G} for the motion pattern clusters 1 through G (cf. Sec 3.1).

To achieve this, a naive translation of the definition of equivariance in Eq (2) into a minimization problem over feature space parameters θ and the D × D equivariance map candidate matrices M would be as follows:

$(\theta^*, \mathsf{M}^*) = \arg\min_{\theta, \mathsf{M}} \sum_{g} \sum_{\{(i,j):\, p_{ij}=g\}} d\big(M_g z_\theta(x_i),\, z_\theta(x_j)\big),$   (4)

where d(·, ·) is a distance measure. This problem can be decomposed into G independent optimization problems, one for each motion, corresponding only to the inner summation above, and dealing with disjoint data. The g-th such problem requires only that training frame pairs annotated with motion pattern p_ij = g approximately satisfy Eq (2).

However, such a formulation admits problematic solutions that perfectly optimize it, e.g., for the trivial all-zero feature space z_θ(x) = 0, ∀x ∈ X, with M_g set to the all-zeros matrix for all g, the loss above evaluates to zero. To avoid such solutions, and to force the learned M_g's to be different from one another (since we would like the learned representation to respond differently to different ego-motions), we simultaneously account for the "negatives" of each motion pattern. Our learning objective is:

$(\theta^*, \mathsf{M}^*) = \arg\min_{\theta, \mathsf{M}} \sum_{g,i,j} d_g\big(M_g z_\theta(x_i),\, z_\theta(x_j),\, p_{ij}\big),$   (5)

where d_g(·, ·, ·) is a "contrastive loss" [10] specific to motion pattern g:

$d_g(\mathbf{a}, \mathbf{b}, c) = \mathbb{1}(c = g)\, d(\mathbf{a}, \mathbf{b}) + \mathbb{1}(c \neq g)\, \max\big(\delta - d(\mathbf{a}, \mathbf{b}),\, 0\big),$   (6)

where 1(·) is the indicator function. This contrastive loss penalizes distance between a and b in "positive" mode (when c = g), and pushes apart pairs in "negative" mode (when c ≠ g), up to a minimum margin distance specified by the constant δ. We use the ℓ2 norm for the distance d(·, ·).

In our objective in Eq (5), the contrastive loss operates in the latent feature space. For pairs belonging to cluster g, the contrastive loss d_g penalizes feature space distance between the first image and its transformed pair, similar to Eq (4) above. For pairs belonging to clusters other than g, d_g requires that the transformation defined by M_g must not bring the image representations close together. In this way, our objective learns the M_g's jointly. It ensures that distinct ego-motions, when applied to an input z_θ(x), map it to different locations in feature space.

We want to highlight the important distinctions between our objective and the "temporal coherence" objective of [22] for slow feature analysis. Written in our notation, the objective of [22] may be stated as:

$\theta^* = \arg\min_{\theta} \sum_{i,j} d\big(z_\theta(x_i),\, z_\theta(x_j),\, \mathbb{1}(|t_i - t_j| \le T)\big),$   (7)

where t_i, t_j are the video time indices of x_i, x_j and T is a temporal neighborhood size hyperparameter. This loss encourages the representations of nearby frames to be similar to one another. However, crucially, it does not account for the nature of the ego-motion between the frames. Accordingly, while temporal coherence helps learn invariance to small image changes, it does not target a (more general) equivariant space. Like the passive kitten from Hein and Held's experiment, the temporal coherence constraint watches video to passively learn a representation; like the active kitten, our method registers the observer motion explicitly with the video to learn more effectively, as we will demonstrate in results.
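A minimal numpy paraphrase of the motion-pattern-specific contrastive loss of Eq (6) and its accumulation over patterns as in Eq (5) is sketched below; in the paper the same quantities are optimized end-to-end inside a CNN, so this is an illustration of the objective rather than the training code.

```python
import numpy as np

def contrastive_loss_g(a, b, c, g, delta=1.0):
    """Eq (6): penalize distance between a = M_g z(x_i) and b = z(x_j) when the
    pair's motion pattern c equals g ("positive" mode), and otherwise push the
    two at least delta apart ("negative" mode)."""
    dist = np.linalg.norm(a - b)          # l2 distance d(a, b)
    if c == g:
        return dist
    return max(delta - dist, 0.0)

def equivariance_loss(z_i, z_j, p_ij, maps, delta=1.0):
    """Eq (5): sum the contrastive loss over all motion patterns g and all pairs.
    z_i, z_j: (N, D) feature arrays; p_ij: (N,) pattern IDs;
    maps: list of G candidate (D, D) equivariance map matrices."""
    total = 0.0
    for g, M_g in enumerate(maps):
        for a, b, c in zip(z_i, z_j, p_ij):
            total += contrastive_loss_g(M_g @ a, b, c, g, delta)
    return total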
While we have thus far described our formulation for generic equivariant image representation learning, it can optionally be used for visual recognition tasks. Suppose that in addition to the ego-pose annotated pairs U we are also given a small set of N_l class-labeled static images, L = {(x_k, c_k)}_{k=1}^{N_l}, where c_k ∈ {1, . . . , C}. Let L_e denote the unsupervised equivariance loss of Eq (5). We can integrate our unsupervised feature learning scheme with the recognition task by optimizing a misclassification loss together with L_e. Let W be a D × C matrix of classifier weights. We solve jointly for W and the maps M:

$(\theta^*, W^*, \mathsf{M}^*) = \arg\min_{\theta, W, \mathsf{M}} L_c(\theta, W, \mathcal{L}) + \lambda L_e(\theta, \mathsf{M}, \mathcal{U}),$   (8)

where L_c denotes the softmax loss over the learned features, $L_c(\theta, W, \mathcal{L}) = -\frac{1}{N_l}\sum_{k=1}^{N_l} \log\big(\sigma_{c_k}(W^\top z_\theta(x_k))\big)$, and σ_{c_k}(·) is the softmax probability of the correct class. The regularizer weight λ is a hyperparameter. Note that neither the supervised training data L nor the testing data for recognition are required to have any associated sensor data. Thus, our features are applicable to standard image recognition tasks.

In this use case, the unsupervised ego-motion equivariance loss encodes a prior over the feature space that can improve performance on the supervised recognition task with limited training examples. We hypothesize that a feature space that embeds knowledge of how objects change under different viewpoints / manipulations allows a recognition system to, in some sense, hallucinate new views of an object to improve performance.

For the mapping z_θ(·), we use a convolutional neural network architecture, so that the parameter vector θ now represents the layer weights. The loss L_e of Eq (5) is optimized by sharing the weight parameters θ among two identical stacks of layers in a "Siamese" network [2, 10, 22], as shown in the top two rows of Fig 3. Image pairs from U are fed into these two stacks. Both stacks are initialized with identical random weights, and identical gradients are passed through them in every training epoch, so that the weights remain tied throughout. Each stack encodes the feature map that we wish to train, z_θ.

To optimize Eq (5), an array of equivariance maps M, each represented by a fully connected layer, is connected to the top of the second stack. Each such equivariance map then feeds into a motion-pattern-specific contrastive loss function d_g, whose other inputs are the first stack output and the ego-motion pattern ID p_ij.

To optimize Eq (8), in addition to the Siamese net that minimizes L_e as above, the supervised softmax loss is minimized through a third replica of the z_θ layer stack with weights tied to the two Siamese network stacks. Labelled images from L are fed into this stack, and its output is fed into a softmax layer whose other input is the class label. The complete scheme is depicted in Fig 3. Optimization is done through mini-batch stochastic gradient descent implemented through backpropagation with the Caffe package [13] (more details in Sec 4 and Supp).

Figure 3. Training setup: (top) "Siamese network" for computing the equivariance loss of Eq (5), together with (bottom) a third tied stack for computing the supervised recognition softmax loss as in Eq (8). See Sec 4.1 and Supp for exact network specifications.
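The sketch below re-expresses the training setup of Fig 3 in PyTorch for illustration (the paper's implementation is in Caffe). The class name EquivNet and the constructor arguments encoder, D, G, C are assumptions; weight sharing across the Siamese stacks is obtained simply by reusing one encoder module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EquivNet(nn.Module):
    """One shared encoder z_theta, one linear equivariance map per motion
    pattern (the M_g of Eq (2)), and a softmax classifier (the W of Eq (8))."""
    def __init__(self, encoder, D, G, C):
        super().__init__()
        self.encoder = encoder                                           # z_theta, tied across stacks
        self.maps = nn.ModuleList([nn.Linear(D, D) for _ in range(G)])   # M_1 .. M_G as FC layers
        self.classifier = nn.Linear(D, C)

    def losses(self, x_i, x_j, p_ij, x_lab, c_lab, delta=1.0, lam=1.0):
        z_i, z_j = self.encoder(x_i), self.encoder(x_j)                  # "Siamese" shared weights
        L_e = 0.0
        for g, M_g in enumerate(self.maps):                              # Eq (5)/(6)
            dist = (M_g(z_i) - z_j).norm(dim=1)
            pos = (p_ij == g).float()
            L_e = L_e + (pos * dist + (1 - pos) * F.relu(delta - dist)).mean()
        L_c = F.cross_entropy(self.classifier(self.encoder(x_lab)), c_lab)  # softmax term of Eq (8)
        return L_c + lam * L_e
```

In a training loop, the returned scalar would simply be backpropagated with a stochastic gradient optimizer, mirroring the mini-batch SGD setup described above.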
4. Experiments
We validate our approach on 3 public datasets and compare to two existing methods, on equivariance (Sec 4.2), recognition performance (Sec 4.3) and next-best view selection (Sec 4.4). Throughout we compare the following methods:

• CLSNET: A neural network trained only from the supervised samples with a softmax loss.

• TEMPORAL: The temporal coherence approach of [22], which regularizes the classification loss with Eq (7), setting the distance measure d(·) to the ℓ1 distance. This method aims to learn invariant features by exploiting the fact that adjacent video frames should not change too much.

• DRLIM: The approach of [10], which also regularizes the classification loss with Eq (7), but setting d(·) to the ℓ2 distance.

• EQUIV: Our ego-motion equivariant feature learning approach, combined with the classification loss as in Eq (8), unless otherwise noted below.

• EQUIV+DRLIM: Our approach augmented with temporal coherence regularization ([10]).

TEMPORAL and DRLIM are the most pertinent baselines because they, like us, use contrastive loss-based formulations, but represent the popular "slowness"-based family of techniques ([37, 3, 9, 19]) for unsupervised feature learning from video, which, unlike our approach, are passive.
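For reference, a minimal paraphrase of the slowness regularizer of Eq (7) used by these two baselines is sketched below (our own illustration; the ℓ1-vs-ℓ2 split between TEMPORAL and DRLIM follows the descriptions above).

```python
import numpy as np

def slowness_loss(z_i, z_j, is_neighbor, delta=1.0, ord=2):
    """Eq (7)-style temporal coherence regularizer: pull features of temporally
    adjacent frames together and push non-neighbors at least delta apart.
    ord=1 corresponds to the TEMPORAL variant, ord=2 to the DRLIM variant."""
    dist = np.linalg.norm(z_i - z_j, ord=ord, axis=1)
    pos = is_neighbor.astype(float)
    return np.mean(pos * dist + (1 - pos) * np.maximum(delta - dist, 0.0))
```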
Recall that in the fully unsupervised mode, our method trains with pairs of video frames annotated only by their ego-poses in U. In the supervised mode, when applied to recognition, our method additionally has access to a set of class-labeled images in L. Similarly, the baselines all receive a pool of unsupervised data and supervised data. We now detail the data composing these two sets.

Unsupervised datasets
We consider two unsupervised datasets, NORB and KITTI:

(1) NORB [16]: This dataset has 24,300 96 × 96-pixel images of 25 toy objects, with ego-pose y consisting of camera elevation and azimuth. Because this dataset has discrete ego-pose variations, we consider two ego-motion patterns, i.e., G = 2 (cf. Sec 3.1): one step along elevation and one step along azimuth. For EQUIV, we use all available positive pairs for each of the two motion patterns from the training images, yielding a training set of over 45,000 pairs. For DRLIM and TEMPORAL, we create a 50,000-pair training set (positives to negatives ratio 1:3). Pairs within one step (elevation and/or azimuth) are treated as "temporal neighbors", as in the turntable results of [10, 22].

(2) KITTI [6, 7]: This dataset contains videos with registered GPS/IMU sensor streams captured on a car driving around 4 types of areas (location classes): "campus", "city", "residential", "road". We generate a random 67%-33% train-validation split and use 2D ego-pose vectors consisting of "yaw" and "forward position" (integral over "forward velocity" sensor outputs) from the sensors. We discover ego-motion patterns p_ij (cf. Sec 3.1) by clustering frame pairs at most 1s apart, and automatically retain the G = 3 clusters with the largest motions, which upon inspection correspond to "forward motion/zoom", "right turn", and "left turn" (see Fig 1, left). For EQUIV, we create a training set of over 47,000 pairs with 11,996 positives. For DRLIM and TEMPORAL, we create a 98,460-pair training set with 24,615 "temporal neighbor" positives sampled at most 1s apart.
Frames are downsampled to 32 × 32 pixels, so that we can adopt CNN architecture choices known to be effective for tiny images [1].

Table 1. (Left) Average equivariance error (Eq (9)) on NORB for ego-motions like those in the training set (atomic) and novel ego-motions (composite). (Center) Recognition results for 3 tasks, NORB-NORB [25 cls], KITTI-KITTI [4 cls], and KITTI-SUN [397 cls; top-1 and top-10], reported as mean ± standard error of accuracy % over 5 repetitions. (Right) Next-best view selection accuracy % on NORB starting from 1 view. Our method EQUIV (and augmented with slowness in EQUIV+DRLIM) clearly outperforms all baselines (CLSNET, TEMPORAL [22], DRLIM [10]).
Supervised datasets
In our recognition experiments, we consider 3 supervised datasets L: (1) NORB: We select 6 images from each of the C = 25 object training splits at random to create instance recognition training data. (2) KITTI: We select 4 images from each of the C = 4 location class training splits at random to create location recognition training data. (3) SUN [34]: We select 6 images for each of C = 397 scene categories at random to create scene recognition training data. We preprocess them identically to the KITTI images above (grayscale, crop to KITTI aspect ratio, resize to 32 × 32). We keep all the supervised datasets small, since unsupervised feature learning should be most beneficial when labeled data is scarce. Note that while the video frames of the unsupervised datasets U are associated with ego-poses, the static images of L have no such auxiliary data.

Network architectures and optimization
For KITTI, we closely follow the cuda-convnet [1] recommended CIFAR-10 architecture: 32 conv(5x5)-max(3x3)-ReLU → 32 conv(5x5)-ReLU-avg(3x3) → 64 conv(5x5)-ReLU-avg(3x3) → D = 64 full feature units. For NORB, we use a fully connected architecture: 20 full-ReLU → D = 100 full feature units. Parentheses indicate sizes of convolution or pooling kernels, and pooling layers have stride length 2.
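For illustration, a PyTorch transcription of the KITTI z_θ network described above is sketched below. The padding of 2 for the 5x5 convolutions follows the cuda-convnet CIFAR-10 defaults and is an assumption, as is the use of LazyLinear to avoid hard-coding the flattened feature size.

```python
import torch
import torch.nn as nn

kitti_z_theta = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, padding=2),   # grayscale 32x32 input
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AvgPool2d(kernel_size=3, stride=2),
    nn.Conv2d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AvgPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.LazyLinear(64),                            # D = 64 feature units
)

print(kitti_z_theta(torch.zeros(1, 1, 32, 32)).shape)  # torch.Size([1, 64])
```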
We use Nesterov-accelerated stochastic gradient descent. The base learning rate and regularization weights λ are selected with greedy cross-validation. The contrastive loss margin parameter δ in Eq (6) is set to 1.0. We report all results for all methods based on 5 repetitions. For more details on architectures and optimization, see Supp.

First, we test the learned features for equivariance. Equivariance is measured separately for each ego-motion g through the normalized error ρ_g:

$\rho_g = E\left[\, \| z_\theta(x) - M'_g z_\theta(g\,x) \| \,/\, \| z_\theta(x) - z_\theta(g\,x) \| \,\right],$   (9)

where E[·] denotes the empirical mean, M'_g is the equivariance map, and ρ_g = 0 would signify perfect equivariance. We closely follow the equivariance evaluation approach of [17] to solve for the equivariance maps of features produced by each compared method on held-out validation data, before computing ρ_g (see Supp).

We test both (1) "atomic" ego-motions matching those provided in the training pairs (i.e., "up" 5° and "right" 20°) and (2) composite ego-motions ("up+right", "up+left", "down+right"). The latter lets us verify that our method's equivariance extends beyond those motion patterns used for training (cf. Sec 3.2). First, as a sanity check, we quantify equivariance for the unsupervised loss of Eq (5) in isolation, i.e., learning with only U. Our EQUIV method's average ρ_g error is 0.0304 and 0.0394 for atomic and composite ego-motions in NORB, respectively. In comparison, DRLIM, which promotes invariance rather than equivariance, yields much higher error; EQUIV tends to learn nearly completely equivariant features, even for novel composite transformations.

Next we evaluate equivariance for all methods using features optimized for the NORB recognition task. Table 1 (left) shows the results. As expected, we find that the features learned with EQUIV regularization are again easily the most equivariant. We also see that for all methods error is lower for atomic motions than composite motions, since they are more equivariant for smaller motions (see Supp).
Next we test the unsupervised-to-supervised transfer pipeline of Sec 3.4 on 3 recognition tasks: NORB-NORB, KITTI-KITTI, and KITTI-SUN. The first dataset in each pairing is unsupervised, and the second is supervised. Table 1 (center) shows the results. On all 3 datasets, our method significantly improves classification accuracy, not just over the no-prior CLSNET baseline, but also over the closest previous unsupervised feature learning methods. (To verify the CLSNET baseline is legitimate, we also ran a Tiny Image nearest neighbor baseline on SUN as in [34]; it obtains 0.61% accuracy, worse than CLSNET, which obtains 0.70%.)

All the unsupervised feature learning methods yield large gains over CLSNET on all three tasks. However, DRLIM and TEMPORAL are significantly weaker than the proposed method. Those methods are based on the "slow feature analysis" principle [32]: nearby frames must be close to one another in the learned feature space. We observe in practice (see Supp) that temporally close frames are mapped close to each other after only a few training epochs. This points to a possible weakness in these methods: even with parameters (temporal neighborhood size, regularization λ) cross-validated for recognition, the slowness prior is too weak to regularize feature learning effectively, since strengthening it causes loss of discriminative information. In contrast, our method requires systematic feature space responses to ego-motions, and offers a stronger prior.

Figure 4. Nearest neighbor image pairs (cols 3 and 4 in each block) in pairwise equivariant feature difference space for various query image pairs (cols 1 and 2 per block). For comparison, cols 5 and 6 show pixel-wise difference-based neighbor pairs. The direction of ego-motion in query and neighbor pairs (inferred from ego-pose vector differences) is indicated above each block. See text.
EQUIV+DRLIM further improves over EQUIV, possibly because: (1) our EQUIV implementation only exploits frame pairs arising from specific motion patterns as positives, while DRLIM more broadly exploits all neighbor pairs, and (2) the DRLIM and EQUIV losses are compatible: DRLIM requires that small perturbations affect features in small ways, and EQUIV requires that they affect them systematically.

The most exciting result is KITTI-SUN. The KITTI data itself is vastly more challenging than NORB due to its noisy ego-poses from inertial sensors, dynamic scenes with moving traffic, depth variations, occlusions, and objects that enter and exit the scene. Furthermore, the fact we can transfer EQUIV features learned without class labels on KITTI (street scenes from Karlsruhe, road-facing camera with fixed pitch and field of view) to be useful for a supervised task on the very different domain of SUN ("in the wild" web images from 397 categories mostly unrelated to streets) indicates the generality of our approach. Our best recognition accuracy of 1.58% on SUN is achieved with only 6 labeled examples per class. It is ≈ 30% better than the nearest competing baseline TEMPORAL and over 6 times better than chance. Top-10 accuracy trends are similar.

While we have thus far kept supervised training sets small to simulate categorization problems in the "long tail" where training samples are scarce and priors are most useful, new preliminary tests with larger labeled training sets on SUN show that our advantage is preserved. With N = 20 samples for each of 397 classes on KITTI-SUN, EQUIV scored 3.66 ± 0.08% accuracy vs. 1.66 ± 0.18% for CLSNET.

Next, we show preliminary results of a direct application of equivariant features to "next-best view selection". Given one view of a NORB object, the task is to tell a hypothetical robot how to move next to help recognize the object, i.e., which neighboring view would best reduce object prediction uncertainty. We exploit the fact that equivariant features behave predictably under ego-motions to identify the optimal next view. Our method for this task, similar in spirit to [33], is described in detail in Supp. Table 1 (right) shows the results. On this task too, EQUIV features easily outperform the baselines.
To qualitatively evaluate the impact of equivariant feature learning, we pose a nearest neighbor task in the feature difference space to retrieve image pairs related by similar ego-motion to a query image pair (details in Supp). Fig 4 shows examples. For a variety of query pairs, we show the top neighbor pairs in the EQUIV space, as well as in pixel-difference space for comparison. Overall they visually confirm the desired equivariance property: neighbor-pairs in EQUIV's difference space exhibit a similar transformation (turning, zooming, etc.), whereas those in the original image space often do not. Consider the first azimuthal rotation NORB query in row 2, where pixel distance, perhaps dominated by the lighting, identifies a wrong ego-motion match, whereas our approach finds a correct match, despite the changed object identity, starting azimuth, lighting, etc. The red boxes show failure cases. For instance, in the KITTI failure case shown (row 1, column 3), large foreground motion of a truck in the query image causes our method to wrongly miss the rotational motion.
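The retrieval just described can be sketched in a few lines; the function name and the nearest-neighbor-by-difference formulation below are our own illustration of the procedure detailed in the Supp, not the authors' code.

```python
import numpy as np

def nearest_pair_by_motion(z, query_pair, db_pairs):
    """Retrieve the database image pair whose feature-difference vector
    d_kl = z[k] - z[l] is closest to that of the query pair (i, j).
    z: (N, D) array of learned features; pairs are (index, index) tuples."""
    i, j = query_pair
    d_query = z[i] - z[j]
    dists = [np.linalg.norm((z[k] - z[l]) - d_query) for k, l in db_pairs]
    return db_pairs[int(np.argmin(dists))]
```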
5. Conclusion
Over the last decade, visual recognition methods have focused almost exclusively on learning from "bags of images". We argue that such "disembodied" image collections, though clearly valuable when collected at scale, deprive feature learning methods of the informative physical context of the original visual experience. We presented the first "embodied" approach to feature learning that generates features equivariant to ego-motion. Our results on multiple datasets and on multiple tasks show that our approach successfully learns equivariant features, which are beneficial for many downstream tasks and hold great promise for novel future applications.
Acknowledgements:
This research is supported in part by ONR PECASE Award N00014-15-1-2291 and a gift from Intel.
References

[1] Cuda-convnet. https://code.google.com/p/cuda-convnet/.
[2] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah. Signature verification using a Siamese time delay neural network. IJPRAI, 1993.
[3] C. F. Cadieu and B. A. Olshausen. Learning intermediate-level representations of form and motion from natural movies. Neural Computation, 2012.
[4] C. Chen and K. Grauman. Watching unlabeled videos helps learn new human actions from very few labeled snapshots. In CVPR, 2013.
[5] A. Dosovitskiy, J. T. Springenberg, M. Riedmiller, and T. Brox. Discriminative unsupervised feature learning with convolutional neural networks. NIPS, 2014.
[6] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. IJRR, 2013.
[7] A. Geiger, P. Lenz, and R. Urtasun. Are we ready for autonomous driving? The KITTI vision benchmark suite. CVPR, 2012.
[8] X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. AISTATS, 2010.
[9] R. Goroshin, J. Bruna, J. Tompson, D. Eigen, and Y. LeCun. Unsupervised learning of spatiotemporally coherent metrics. arXiv, 2014.
[10] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. CVPR, 2006.
[11] R. Held and A. Hein. Movement-produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology, 1963.
[12] G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. ICANN, 2011.
[13] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv, 2014.
[14] J. J. Kivinen and C. K. Williams. Transformation equivariant Boltzmann machines. ICANN, 2011.
[15] T. D. Kulkarni, W. Whitney, P. Kohli, and J. B. Tenenbaum. Deep convolutional inverse graphics network. arXiv, 2015.
[16] Y. LeCun, F. J. Huang, and L. Bottou. Learning methods for generic object recognition with invariance to pose and lighting. CVPR, 2004.
[17] K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. CVPR, 2015.
[18] Y. Li, A. Fathi, and J. M. Rehg. Learning to predict gaze in egocentric video. In ICCV, 2013.
[19] J.-P. Lies, R. M. Häfner, and M. Bethge. Slowness and sparseness have diverging effects on complex cell learning. PLoS Computational Biology, 2014.
[20] R. Memisevic. Learning to relate images. PAMI, 2013.
[21] V. Michalski, R. Memisevic, and K. Konda. Modeling deep temporal dependencies with recurrent "grammar cells". NIPS, 2014.
[22] H. Mobahi, R. Collobert, and J. Weston. Deep learning from temporal coherence in video. ICML, 2009.
[23] T. Nakamura and M. Asada. Motion sketch: Acquisition of visual motion guided behaviors. IJCAI, 1995.
[24] M. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language) modeling: a baseline for generative models of natural videos. arXiv, 2014.
[25] X. Ren and C. Gu. Figure-ground segmentation improves handled object recognition in egocentric video. In CVPR, 2010.
[26] U. Schmidt and S. Roth. Learning rotation-aware features: From invariant priors to equivariant descriptors. CVPR, 2012.
[27] P. Simard, Y. LeCun, J. Denker, and B. Victorri. Transformation invariance in pattern recognition - tangent distance and tangent propagation. 1998.
[28] P. Y. Simard, D. Steinkraus, and J. C. Platt. Best practices for convolutional neural networks applied to visual document analysis. ICDAR, 2003.
[29] K. Sohn and H. Lee. Learning invariant representations with local transformations. ICML, 2012.
[30] T. Tuytelaars and K. Mikolajczyk. Local invariant feature detectors: a survey. Foundations and Trends in Computer Graphics and Vision, 3(3):177–280, 2008.
[31] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. ICML, 2008.
[32] L. Wiskott and T. J. Sejnowski. Slow feature analysis: unsupervised learning of invariances. Neural Computation, 2002.
[33] Z. Wu, S. Song, A. Khosla, X. Tang, and J. Xiao. 3D ShapeNets for 2.5D object recognition and next-best-view prediction. CVPR, 2015.
[34] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. CVPR, 2010.
[35] C. Xu, J. Liu, and B. Kuipers. Moving object segmentation using motor signals. ECCV, 2012.
[36] K. Yamada, Y. Sugano, T. Okabe, Y. Sato, A. Sugimoto, and K. Hiraki. Attention prediction in egocentric video using motion and visual saliency. PSIVT, 2012.
[37] W. Zou, S. Zhu, K. Yu, and A. Y. Ng. Deep learning of invariant features via simulated fixations in video. NIPS, 2012.

Figure 6. KITTI z_θ architecture producing D = 64 feature units.
6. Supplementary details
Some sample images from KITTI and SUN are shown in Fig 5. As they show, these datasets have substantial domain differences. In KITTI, the camera faces the road and has a fixed field of view and camera pitch, and the content is entirely street scenes around Karlsruhe. In SUN, the images are downloaded from the internet, and belong to 397 diverse indoor and outdoor scene categories, most of which have nothing to do with roads.

Figure 5. (top) Figure from [6] showcasing images from the 4 KITTI location classes (shown here in color; we use grayscale images), and (bottom) figure from [34] showcasing images from a subset of the 397 SUN classes (shown here in color; see text in main paper for image pre-processing details).

Network architectures and optimization - details (elaborating on the paragraph titled "Network architectures and optimization" in Sec 4.1). As mentioned in the paper, for KITTI, we closely follow the cuda-convnet [1] recommended CIFAR-10 architecture: 32 conv(5x5)-max(3x3)-ReLU → 32 conv(5x5)-ReLU-avg(3x3) → 64 conv(5x5)-ReLU-avg(3x3) → D = 64 full feature units. A schematic representation of this architecture is shown in Fig 6.

We use Nesterov-accelerated stochastic gradient descent as implemented in Caffe [13], starting from weights randomly initialized according to [8]. The base learning rate and regularization weights λ are selected with greedy cross-validation. Specifically, for each task, the optimal base learning rate (from 0.1, 0.01, 0.001, 0.0001) was identified for CLSNET. Next, with this base learning rate fixed, the optimal regularizer weight (for DRLIM, TEMPORAL and EQUIV) was selected from a logarithmic grid. For EQUIV+DRLIM, the DRLIM loss regularizer weight fixed for DRLIM was retained, and only the EQUIV loss weight was cross-validated.

The contrastive loss margin parameters δ in Eq (6) in DRLIM, TEMPORAL and EQUIV were set uniformly to 1.0. Since no other part of these objectives (including the softmax classification loss) depends on the scale of features, different choices of margins δ in these methods lead to objective functions with equivalent optima: the features are only scaled by a factor. For EQUIV+DRLIM, we set the DRLIM and EQUIV margins respectively to 1.0 and 0.1 to reflect the fact that the equivariance maps M_g of Eq (5), applied to the representation z_θ(gx) of the transformed image, must bring it closer to the original image representation z_θ(x) than it was before, i.e., ||M_g z_θ(gx) − z_θ(x)|| < ||z_θ(gx) − z_θ(x)||. (Technically, the EQUIV objective in Eq (5) may benefit from setting different margins corresponding to the different ego-motion patterns, but we overlook this in favor of scalability and fewer hyperparameters.)

In addition, to allow fast and thorough experimentation, we set the number of training epochs for each method on each dataset based on a number of initial runs assessing how long training usually took before the classification softmax loss on validation data began to rise, i.e., before overfitting began. All subsequent runs for that method on that data were run to roughly match (to the nearest 5000) the number of epochs identified above. Batch sizes (for both the classification stack and the Siamese networks) were set to 16 (found to have no major difference from 4 or 64) for NORB-NORB and KITTI-KITTI, and to 128 (selected from 4, 16, 64, 128) for KITTI-SUN, where we found it necessary to increase the batch size so that meaningful classification loss gradients were computed in each SGD iteration and training loss began to fall, despite the large number (397) of classes. On a single Tesla K-40 GPU machine, NORB-NORB training tasks took about 15 minutes and KITTI-KITTI tasks took about 30 minutes, with KITTI-SUN tasks taking the longest.

Computing ρ_g - details. In Sec 4.2 in the main paper, we proposed the following measure for equivariance. For each ego-motion g, we measure equivariance separately through the normalized error ρ_g:

$\rho_g = E\left[ \frac{\| z_\theta(x) - M'_g z_\theta(g\,x) \|}{\| z_\theta(x) - z_\theta(g\,x) \|} \right],$   (10)

where E[·] denotes the empirical mean, M'_g is the equivariance map, and ρ_g = 0 would signify perfect equivariance. We closely follow the equivariance evaluation approach of [17] to solve for the equivariance maps of features produced by each compared method on held-out validation data (cf. Sec 4.1 from the paper), before computing ρ_g. Such maps are produced explicitly by our method, but not the baselines. Thus, as in [17], we compute their maps by solving a least squares minimization problem based on the definition of equivariance in Eq (2) in the paper:

$M'_g = \arg\min_{M} \sum_{m(y_i, y_j) = g} \| z_\theta(x_i) - M z_\theta(x_j) \|^2.$   (11)

(For uniformity, we do the same recovery of M'_g for our method; our results are similar either way.) The M'_g's computed as above are used to compute the ρ_g's as in Eq (10). M'_g and ρ_g are computed on disjoint subsets of the validation data. The output features are relatively low in dimension (D = 64 for KITTI, 100 for NORB), so these least squares problems remain small.
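A minimal numpy sketch of this evaluation is given below: fit M'_g by least squares (Eq (11)) on one split and compute ρ_g (Eq (10)) on a disjoint split. The synthetic demo at the end is our own illustration; feature matrices and variable names are assumptions.

```python
import numpy as np

def fit_equivariance_map(Z, Zg):
    """Eq (11): least-squares fit of M'_g so that M'_g z(g x) ~ z(x),
    given rows Z[i] = z(x_i) and Zg[i] = z(g x_i)."""
    M_T, *_ = np.linalg.lstsq(Zg, Z, rcond=None)   # solves Zg @ M^T ~ Z
    return M_T.T

def equivariance_error(Z, Zg, M):
    """Eq (10): normalized error rho_g; 0 means perfect equivariance."""
    num = np.linalg.norm(Z - Zg @ M.T, axis=1)
    den = np.linalg.norm(Z - Zg, axis=1)
    return np.mean(num / den)

# Synthetic check: features constructed to satisfy z(x) = M_true z(g x).
rng = np.random.default_rng(0)
D, N = 16, 200
M_true = rng.normal(size=(D, D))
Z = rng.normal(size=(N, D))
Zg = Z @ np.linalg.inv(M_true).T
M_hat = fit_equivariance_map(Z[:100], Zg[:100])       # fit split
print(equivariance_error(Z[100:], Zg[100:], M_hat))   # ~0 on the disjoint split
```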
Equivariance results - details

While results in the main paper (Table 1) were reported as averages over atomic and composite motions, we present here the results for individual motions in Table 2. While relative trends among the methods remain the same as for the averages reported in the main paper, the new numbers help demonstrate that ρ_g for composite motions is no bigger than for atomic motions, as we would expect from the argument presented in Sec 3.2 in the main paper.

To see this, observe that even among the atomic motions, ρ_g for all methods is lower on the small "up" atomic ego-motion (5°) than it is for the larger "right" ego-motion (20°). Further, the errors for "right" are close to those for the composite motions ("up+right", "up+left" and "down+right"), establishing that while equivariance is diminished for larger motions, it is not affected by whether the motions were used in training or not. In other words, if trained for equivariance to a suitable discrete set of atomic ego-motions (cf. Sec 3.2 in the paper), the feature space generalizes well to new ego-motions.

Table 2. The "normalized error" equivariance measure ρ_g for individual ego-motions (Eq (10)) on NORB, organized as "atomic" (motions in the EQUIV training set) and "composite" (novel) ego-motions.

Methods ↓ / Ego-motions →   "up (u)"   "right (r)"   "u+r"    "u+l"    "d+r"
random                       1.0000     1.0000        1.0000   1.0000   1.0000
TEMPORAL [22]                0.7140     0.8033        0.8089   0.8061   0.8207
DRLIM [10]                   0.5770     0.7038        0.7281   0.7182   0.7325
Restricted slowness is a weak prior
We now present evidence supporting our claim in the paper that the principle of slowness, which penalizes feature variation within small temporal windows, provides a prior that is rather weak. In every stochastic gradient descent (SGD) training iteration for the DRLIM and TEMPORAL networks, we also computed a "slowness" measure that is independent of feature scaling (unlike the DRLIM and TEMPORAL losses of Eq 7 themselves), to better understand the shortcomings of these methods.

Given training pairs (x_i, x_j) annotated as neighbors or non-neighbors by n_ij = 1(|t_i − t_j| ≤ T) (cf. Eq (7) in the paper), we computed pairwise distances Δ_ij = d(z_θ(s)(x_i), z_θ(s)(x_j)), where θ(s) is the parameter vector at SGD training iteration s, and d(·, ·) is set to the ℓ2 distance for DRLIM and to the ℓ1 distance for TEMPORAL (cf. Sec 4). We then measured how well these pairwise distances Δ_ij predict the temporal neighborhood annotation n_ij, by measuring the Area Under the Receiver Operating Characteristic curve (AUROC) when varying a threshold on Δ_ij.

These "slowness AUROC"s are plotted as a function of training iteration number in Fig 7, for DRLIM and TEMPORAL networks trained on the KITTI-SUN task. Compared to the standard random AUROC value of 0.5, these slowness AUROCs tend to be near 0.9 already even before optimization begins, and reach peak AUROCs very close to 1.0 on both training and testing data within about 4000 iterations (batch size 128). This points to a possible weakness in these methods: even with parameters (temporal neighborhood size, regularization λ) cross-validated for recognition, the slowness prior is too weak to regularize feature learning effectively, since strengthening it causes loss of discriminative information. In contrast, our method requires systematic feature space responses to ego-motions, and offers a stronger prior.

Figure 7. Slowness AUROC on training (left) and testing (right) data for (top) DRLIM and (bottom) TEMPORAL, showing the weakness of the slowness prior.

Figure 8. (Contd. from Fig 4) More examples of nearest neighbor image pairs (cols 3 and 4 in each block) in pairwise equivariant feature difference space for various query image pairs (cols 1 and 2 per block). For comparison, cols 5 and 6 show pixel-wise difference-based neighbor pairs. The direction of ego-motion in query and neighbor pairs (inferred from ego-pose vector differences) is indicated above each block.
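For concreteness, the slowness AUROC just described can be computed as sketched below (our own illustration; feature arrays and variable names are assumptions).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def slowness_auroc(feats_i, feats_j, is_neighbor, ord=2):
    """AUROC of pairwise feature distances Delta_ij as predictors of the
    temporal-neighbor label n_ij (cf. Eq (7)). Smaller distance should mean
    "neighbor", so the negated distance is used as the score."""
    deltas = np.linalg.norm(feats_i - feats_j, ord=ord, axis=1)
    return roc_auc_score(is_neighbor.astype(int), -deltas)

# Toy usage with random features and labels (illustration only).
rng = np.random.default_rng(0)
zi, zj = rng.normal(size=(500, 64)), rng.normal(size=(500, 64))
n_ij = rng.integers(0, 2, size=500).astype(bool)
print(slowness_auroc(zi, zj, n_ij))   # ~0.5 for uninformative random features
```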
Next-best view selection - details

We now describe our method for next-best view selection for recognition on NORB. Given one view of a NORB object, the task is to tell a hypothetical robot how to move next to help recognize the object, i.e., which neighboring view would best reduce object prediction uncertainty. We exploit the fact that equivariant features behave predictably under ego-motions to identify the optimal next view.

We limit the choice of next view g to {"up", "down", "up+right", "up+left"} for simplicity in this preliminary test. We build a k-nearest neighbor (k-NN) image-pair classifier for each possible g, using only training image pairs (x, gx) related by the ego-motion g. This classifier C_g takes as input a vector of length 2D, formed by appending the features of the image pair (each image's representation is of length D), and produces the output probability of each class. So, C_g([z_θ(x), z_θ(gx)]) returns class likelihood probabilities for all 25 NORB classes. Output class probabilities for the k-NN classifier are computed from the histogram of class votes from the k nearest neighbors. We set k = 25.

At test time, we first compute features z_θ(x) on the given starting image x. Next we predict the feature z_θ(gx) corresponding to each possible surrounding view g, as M'_g z_θ(x), per the definition of equivariance (cf. Eq 2 in the paper). (Equivariance maps M'_g for all methods are computed as described above under "Computing ρ_g - details", and in Sec 4.2 in the main paper.) With these predicted transformed image features and the pair-wise nearest neighbor class probabilities C_g(·), we may now pick the next-best view as:

$g^* = \arg\min_{g} H\big(C_g([z_\theta(x),\, M'_g z_\theta(x)])\big),$   (12)

where H(·) is the information-theoretic entropy function. This selects the view that would produce the least predicted image-pair class prediction uncertainty.

Nearest neighbor image pairs - details

To qualitatively evaluate the impact of equivariant feature learning, we pose a pair-wise nearest neighbor task in the feature difference space to retrieve image pairs related by similar ego-motion to a query image pair. Given a learned feature space z(·) and a query image pair (x_i, x_j), we form the pairwise feature difference d_ij = z(x_i) − z(x_j). In an equivariant feature space, other image pairs (x_k, x_l) with similar feature difference vectors d_kl ≈ d_ij would be likely to be related by similar ego-motion to the query pair. (Note that in our model of equivariance, this is not strictly true, since the pairwise difference vector M_g z_θ(x) − z_θ(x) need not actually be fixed for a given transformation g, for all x; for small motions and the right kinds of equivariant maps M_g, it holds approximately.) This can also be viewed as an analogy completion task, x_i : x_j = x_k : ?, where the right answer should apply p_ij to x_k to obtain x_l. For the results in the paper, the closest pair to the query in the learned equivariant feature space is compared to that in the pixel space. Some more examples are shown in Fig 8.
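Returning to the next-best view rule of Eq (12), a minimal sketch is given below. The dictionary of equivariance maps and the per-motion pair classifiers are assumptions standing in for the k-NN pair classifiers described above, not the authors' implementation.

```python
import numpy as np

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def next_best_view(z_x, equiv_maps, pair_classifiers):
    """Eq (12): pick the ego-motion g whose predicted next view M'_g z(x),
    paired with the current view, yields the least uncertain class posterior.
    equiv_maps: dict g -> (D, D) map M'_g; pair_classifiers[g] maps a
    concatenated length-2D feature vector to class probabilities (a k-NN
    pair classifier in the paper)."""
    scores = {}
    for g, M_g in equiv_maps.items():
        z_pred = M_g @ z_x                                   # predicted feature of the next view
        probs = pair_classifiers[g](np.concatenate([z_x, z_pred]))
        scores[g] = entropy(probs)
    return min(scores, key=scores.get)
```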