Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tomasz Malisiewicz is active.

Publication


Featured research published by Tomasz Malisiewicz.


International Conference on Computer Vision (ICCV) | 2011

Ensemble of exemplar-SVMs for object detection and beyond

Tomasz Malisiewicz; Abhinav Gupta; Alexei A. Efros

This paper proposes a conceptually simple but surprisingly powerful method which combines the effectiveness of a discriminative object detector with the explicit correspondence offered by a nearest-neighbor approach. The method is based on training a separate linear SVM classifier for every exemplar in the training set. Each of these Exemplar-SVMs is thus defined by a single positive instance and millions of negatives. While each detector is quite specific to its exemplar, we empirically observe that an ensemble of such Exemplar-SVMs offers surprisingly good generalization. Our performance on the PASCAL VOC detection task is on par with the much more complex latent part-based model of Felzenszwalb et al., at only a modest computational cost increase. But the central benefit of our approach is that it creates an explicit association between each detection and a single training exemplar. Because most detections show good alignment to their associated exemplar, it is possible to transfer any available exemplar meta-data (segmentation, geometric structure, 3D model, etc.) directly onto the detections, which can then be used as part of overall scene understanding.
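The core training recipe, one linear SVM per positive exemplar scored as an ensemble, can be sketched as follows. This is a toy illustration with random vectors standing in for the paper's HOG descriptors; the feature dimensions, class weights, and `ensemble_score` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Toy stand-ins for HOG descriptors: 3 positive exemplars, many negatives.
exemplars = rng.normal(loc=2.0, size=(3, 16))
negatives = rng.normal(loc=0.0, size=(200, 16))

# One linear SVM per exemplar: a single positive against all negatives.
# The heavy weight on the positive class mimics the paper's asymmetric
# treatment of the lone positive instance.
detectors = []
for x in exemplars:
    X = np.vstack([x[None, :], negatives])
    y = np.array([1] + [0] * len(negatives))
    clf = LinearSVC(C=1.0, class_weight={1: 50.0, 0: 1.0}).fit(X, y)
    detectors.append(clf)

def ensemble_score(sample):
    # A candidate window is scored by its best-matching exemplar detector,
    # which also yields the explicit exemplar association described above.
    return max(clf.decision_function(sample[None, :])[0] for clf in detectors)

# Usage: score a new window; the argmax detector identifies which exemplar
# (and hence which meta-data) to transfer onto the detection.
score = ensemble_score(rng.normal(loc=2.0, size=16))
```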


British Machine Vision Conference (BMVC) | 2007

Improving Spatial Support for Objects via Multiple Segmentations

Tomasz Malisiewicz; Alexei A. Efros

Sliding window scanning is the dominant paradigm in object recognition research today. But while much success has been reported in detecting several rectangular-shaped object classes (e.g. faces, cars, pedestrians), results have been much less impressive for more general types of objects. Several researchers have advocated the use of image segmentation as a way to get better spatial support for objects. In this paper, our aim is to address this issue by studying the following two questions: 1) how important is good spatial support for recognition? 2) can segmentation provide better spatial support for objects? To answer the first, we compare recognition performance using ground-truth segmentation vs. bounding boxes. To answer the second, we use the multiple segmentation approach to evaluate how closely real segments can approach the ground truth for real objects, and at what cost. Our results demonstrate the importance of finding the right spatial support for objects, and the feasibility of doing so without excessive computational burden.
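The "spatial support" comparison above reduces to overlap scoring between candidate regions and the ground truth. A minimal sketch using the standard intersection-over-union measure on axis-aligned boxes (the ground-truth box and candidate segments here are made-up numbers for illustration):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes: the
    standard overlap score for judging spatial-support quality."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Best spatial support for one ground-truth object among candidate
# segments drawn from multiple segmentations:
gt = (10, 10, 50, 50)
candidates = [(12, 8, 48, 52), (0, 0, 30, 30), (20, 20, 60, 60)]
best = max(candidates, key=lambda b: iou(gt, b))
```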


European Conference on Computer Vision (ECCV) | 2012

Undoing the damage of dataset bias

Aditya Khosla; Tinghui Zhou; Tomasz Malisiewicz; Alexei A. Efros; Antonio Torralba

The presence of bias in existing object recognition datasets is now well-known in the computer vision community. While it remains in question whether creating an unbiased dataset is possible given limited resources, in this work we propose a discriminative framework that directly exploits dataset bias during training. In particular, our model learns two sets of weights: (1) bias vectors associated with each individual dataset, and (2) visual world weights that are common to all datasets, which are learned by undoing the associated bias from each dataset. The visual world weights are expected to be our best possible approximation to the object model trained on an unbiased dataset, and thus tend to have good generalization ability. We demonstrate the effectiveness of our model by applying the learned weights to a novel, unseen dataset, and report superior results for both classification and detection tasks compared to a classical SVM that does not account for the presence of bias. Overall, we find that it is beneficial to explicitly account for bias when combining multiple datasets.
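A hedged sketch of the two-sets-of-weights idea: a shared "visual world" weight vector plus a per-dataset bias vector, trained jointly. This simplifies the paper's framework to a hinge-loss linear model fit by gradient descent on synthetic data; the datasets, loss, and hyperparameters are stand-ins, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n = 8, 100

# Two toy "datasets" sharing a common signal plus dataset-specific bias.
w_true = rng.normal(size=dim)
biases_true = [rng.normal(scale=0.5, size=dim) for _ in range(2)]
data = []
for b in biases_true:
    X = rng.normal(size=(n, dim))
    y = np.sign(X @ (w_true + b))
    data.append((X, y))

# Jointly learn shared visual-world weights w and per-dataset bias
# vectors delta_d; dataset d is classified with (w + delta_d), while
# regularization pushes dataset-specific structure into the deltas.
w = np.zeros(dim)
deltas = [np.zeros(dim) for _ in data]
lr, lam = 0.01, 0.1
for _ in range(500):
    gw = lam * w
    for d, (X, y) in enumerate(data):
        margin = y * (X @ (w + deltas[d]))
        g = -(X * y[:, None])[margin < 1].sum(axis=0) / n  # hinge gradient
        gw += g
        deltas[d] -= lr * (g + lam * deltas[d])
    w -= lr * gw

# Alignment of the learned shared weights with the true common signal.
cos_shared = float(w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true)))
```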


Computer Vision and Pattern Recognition (CVPR) | 2008

Recognition by association via learning per-exemplar distances

Tomasz Malisiewicz; Alexei A. Efros

We pose the recognition problem as data association. In this setting, a novel object is explained solely in terms of a small set of exemplar objects to which it is visually similar. Inspired by the work of Frome et al., we learn separate distance functions for each exemplar; however, our distances are interpretable on an absolute scale and can be thresholded to detect the presence of an object. Our exemplars are represented as image regions and the learned distances capture the relative importance of shape, color, texture, and position features for that region. We use the distance functions to detect and segment objects in novel images by associating the bottom-up segments obtained from multiple image segmentations with the exemplar regions. We evaluate the detection and segmentation performance of our algorithm on real-world outdoor scenes from the LabelMe (B. Russell et al., 2007) dataset and also show some promising qualitative image parsing results.
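The per-exemplar distance idea, a learned non-negative weighting over elementary feature distances that can be thresholded on an absolute scale, might be sketched like this. The random data, hinge-style objective, and threshold of 1 are illustrative assumptions, not the paper's training procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Elementary distances between one exemplar region and 50 candidate
# regions (columns: shape, color, texture, position channels).
D = rng.uniform(size=(50, 4))
same_class = rng.random(50) < 0.3  # toy labels: which candidates match

# Learn non-negative channel weights so the combined distance separates
# matches from non-matches around an absolute threshold of 1
# (projected gradient on a hinge-style objective).
w = np.ones(4)
lr = 0.05
for _ in range(300):
    d = D @ w                                # combined distance per candidate
    viol_pos = same_class & (d > 1)          # matches that are too far
    viol_neg = ~same_class & (d < 1)         # non-matches that are too close
    grad = D[viol_pos].sum(axis=0) - D[viol_neg].sum(axis=0)
    w = np.maximum(w - lr * grad / len(D), 0)  # project onto w >= 0

# Detection by thresholding the learned distance on an absolute scale.
matches = (D @ w) < 1.0
```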


International Conference on 3-D Digital Imaging and Modeling (3DIM) | 2005

Registration of multiple range scans as a location recognition problem: hypothesis generation, refinement and verification

Bradford J. King; Tomasz Malisiewicz; Charles V. Stewart; Richard J. Radke

This paper addresses the following version of the multiple range scan registration problem. A scanner with an associated intensity camera is placed at a series of locations throughout a large environment; scans are acquired at each location. The problem is to decide automatically which scans overlap and to estimate the parameters of the transformations aligning these scans. Our technique is based on (1) detecting and matching keypoints - distinctive locations in range and intensity images, (2) generating and refining a transformation estimate from each keypoint match, and (3) deciding if a given refined estimate is correct. While these steps are familiar, we present novel approaches to each. A new range keypoint technique is presented that uses spin images to describe holes in smooth surfaces. Intensity keypoints are detected using multiscale filters, described using intensity gradient histograms, and backprojected to form 3D keypoints. A hypothesized transformation is generated by matching a single keypoint from one scan to a single keypoint from another, and is refined using a robust form of the ICP algorithm in combination with controlled region growing. Deciding whether a refined transformation is correct is based on three criteria: alignment accuracy, visibility, and a novel randomness measure. Together these three steps produce good results in test scans of the Rensselaer campus.
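Step (2), refining a hypothesized transformation with ICP, can be illustrated by a minimal point-to-point ICP in NumPy/SciPy. This sketch omits the paper's robust weighting and controlled region growing; the `icp` helper and the toy rotation check are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: refine (R, t) aligning src onto dst."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    for _ in range(iters):
        moved = src @ R.T + t
        _, idx = tree.query(moved)           # nearest-neighbor correspondences
        p, q = moved, dst[idx]
        # Closed-form rigid update (Kabsch / SVD) on centered point sets.
        pc, qc = p - p.mean(0), q - q.mean(0)
        U, _, Vt = np.linalg.svd(pc.T @ qc)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:            # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        dt = q.mean(0) - p.mean(0) @ dR.T
        R, t = dR @ R, dR @ t + dt           # compose the incremental update
    return R, t

# Toy check: recover a small known rotation of a synthetic "scan".
rng = np.random.default_rng(3)
src = rng.normal(size=(200, 3))
theta = 0.1
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
dst = src @ Rz.T
R, t = icp(src, dst)
```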


International Journal of Computer Vision | 2016

Visualizing Object Detection Features

Carl Vondrick; Aditya Khosla; Hamed Pirsiavash; Tomasz Malisiewicz; Antonio Torralba

We introduce algorithms to visualize feature spaces used by object detectors. Our method works by inverting a visual feature back to multiple natural images. We find that these visualizations allow us to analyze object detection systems in new ways and gain new insight into the detector's failures. For example, when we visualize the features for high-scoring false alarms, we discover that, although they are clearly wrong in image space, they often look deceptively similar to true positives in feature space. This result suggests that many of these false alarms are caused by our choice of feature space, and indicates that creating a better learning algorithm or building bigger datasets is unlikely to correct these errors without improving the features. By visualizing feature spaces, we can gain a more intuitive understanding of recognition systems.


International Conference on Pattern Recognition (ICPR) | 2014

Exemplar Network: A Generalized Mixture Model

Chikao Tsuchiya; Tomasz Malisiewicz; Antonio Torralba

We present a non-linear object detector called the Exemplar Network. Our model efficiently encodes the space of all possible mixture models, and offers a framework that generalizes recent exemplar-based object detection with monolithic detectors. We evaluate our method on a traffic-scene dataset collected with onboard cameras and demonstrate orientation estimation. Our model has the interpretability and accessibility necessary for industrial applications, and can easily be applied to a variety of tasks.


International Conference on Computer Graphics and Interactive Techniques | 2011

Data-driven visual similarity for cross-domain image matching

Abhinav Shrivastava; Tomasz Malisiewicz; Abhinav Gupta; Alexei A. Efros


International Conference on Computer Vision (ICCV) | 2013

HOGgles: Visualizing Object Detection Features

Carl Vondrick; Aditya Khosla; Tomasz Malisiewicz; Antonio Torralba


Neural Information Processing Systems (NIPS) | 2009

Beyond Categories: The Visual Memex Model for Reasoning About Object Relationships

Tomasz Malisiewicz; Alexei A. Efros

Collaboration


Dive into Tomasz Malisiewicz's collaborations.

Top Co-Authors

Antonio Torralba (Massachusetts Institute of Technology)
Aditya Khosla (Massachusetts Institute of Technology)
Abhinav Gupta (Carnegie Mellon University)
Carl Vondrick (Massachusetts Institute of Technology)
Bradford J. King (Rensselaer Polytechnic Institute)
Charles V. Stewart (Rensselaer Polytechnic Institute)