Oren Freifeld
Brown University
Publications
Featured research published by Oren Freifeld.
Computer Vision and Pattern Recognition | 2012
Silvia Zuffi; Oren Freifeld; Michael J. Black
Pictorial Structures (PS) define a probabilistic model of 2D articulated objects in images. Typical PS models assume an object can be represented by a set of rigid parts connected with pairwise constraints that define the prior probability of part configurations. These models are widely used to represent non-rigid articulated objects such as humans and animals, despite the fact that such objects have parts that deform non-rigidly. Here we define a new Deformable Structures (DS) model that is a natural extension of previous PS models and that captures the non-rigid shape deformation of the parts. Each part in a DS model is represented by a low-dimensional shape deformation space, and pairwise potentials between parts capture how the shape varies with pose and the shape of neighboring parts. A key advantage of such a model is that it more accurately models object boundaries. This enables image likelihood models that are more discriminative than previous PS likelihoods. These likelihoods are learned using training imagery annotated with a DS “puppet.” We focus on a human DS model learned from 2D projections of a realistic 3D human body model and use it to infer human poses in images using a form of non-parametric belief propagation.
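The low-dimensional shape deformation space per part can be illustrated with a minimal PCA sketch. This is a hypothetical toy (the function names, dimensions, and random data are assumptions, not the paper's implementation): each part contour is a flattened point vector, and a few principal directions span its deformation space.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a part's shape space is learned by PCA over
# aligned training contours (2N points flattened to a vector in R^{2N}).
def learn_part_shape_space(contours, n_basis=5):
    """contours: (num_examples, 2N) aligned, flattened part outlines."""
    mean = contours.mean(axis=0)
    centered = contours - mean
    # Principal directions via SVD; rows of vt are the basis vectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_basis]

def synthesize_part(mean, basis, coeffs):
    """A part contour is the mean shape plus a weighted sum of basis modes."""
    return mean + coeffs @ basis

# Toy data: 50 noisy contours with 30 points each (flattened to 60-D).
contours = rng.normal(size=(50, 60))
mean, basis = learn_part_shape_space(contours, n_basis=5)
shape = synthesize_part(mean, basis, np.zeros(5))  # zero coefficients -> mean shape
```

Varying the low-dimensional coefficients then sweeps the part through its learned deformations, which is what makes the boundary-aware likelihood tractable.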
European Conference on Computer Vision | 2012
Oren Freifeld; Michael J. Black
This technical report is complementary to [1] and contains proofs, formulas, and additional plots. It is identical to the supplemental material submitted to the European Conference on Computer Vision (ECCV 2012) in March 2012. Reference: [1] Freifeld, O., Black, M.J.: Lie Bodies: A Manifold Representation of 3D Human Shape. European Conference on Computer Vision (2012).
Computer Vision and Pattern Recognition | 2010
Oren Freifeld; Alexander Weiss; Silvia Zuffi; Michael J. Black
We define a new “contour person” model of the human body that has the expressive power of a detailed 3D model and the computational benefits of a simple 2D part-based model. The contour person (CP) model is learned from a 3D SCAPE model of the human body that captures natural shape and pose variations; the projected contours of this model, along with their segmentation into parts, form the training set. The CP model factors deformations of the body into three components: shape variation, viewpoint change, and part rotation. The part-rotation component also incorporates a learned non-rigid deformation model. The result is a 2D articulated model that is compact to represent, simple to compute with, and more expressive than previous models. We demonstrate the value of such a model in 2D pose estimation and segmentation. Given an initial pose from a standard pictorial-structures method, we refine the pose and shape using an objective function that segments the scene into foreground and background regions. The result is a parametric, human-specific, image segmentation.
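The factored deformation idea can be sketched as composing per-edge linear maps. This is a hypothetical simplification (the function, the per-edge 2x2 parameterization, and the toy square are assumptions for illustration): each factor contributes one small linear map per contour edge, the maps compose, and the contour is rebuilt by re-summing the deformed edges.

```python
import numpy as np

# Hypothetical sketch of factored contour deformation: each factor (shape,
# viewpoint, part rotation) supplies one 2x2 linear map per contour edge;
# the maps compose, and the deformed contour is recovered by summing edges.
def deform_contour(points, shape_maps, view_maps, pose_maps):
    edges = np.diff(points, axis=0)                 # (E, 2) edge vectors
    composed = pose_maps @ view_maps @ shape_maps   # (E, 2, 2) per-edge composition
    new_edges = np.einsum('eij,ej->ei', composed, edges)
    # Rebuild the contour from the start point and the deformed edges.
    return np.concatenate([points[:1],
                           points[:1] + np.cumsum(new_edges, axis=0)])

# Identity maps for every factor leave the contour unchanged.
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [0., 0.]])
eye = np.tile(np.eye(2), (4, 1, 1))
out = deform_contour(square, eye, eye, eye)
```

Because the factors compose multiplicatively, shape, viewpoint, and pose can be estimated or varied independently while acting on the same contour.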
Computer Vision and Pattern Recognition | 2014
Julian Straub; Guy Rosman; Oren Freifeld; John J. Leonard; John W. Fisher
Objects and structures within man-made environments typically exhibit a high degree of organization in the form of orthogonal and parallel planes. Traditional approaches to scene representation exploit this phenomenon via the somewhat restrictive assumption that every plane is perpendicular to one of the axes of a single coordinate system. Known as the Manhattan-World model, this assumption is widely used in computer vision and robotics. The complexity of many real-world scenes, however, necessitates a more flexible model. We propose a novel probabilistic model that describes the world as a mixture of Manhattan frames: each frame defines a different orthogonal coordinate system. This results in a more expressive model that still exploits the orthogonality constraints. We propose an adaptive Markov chain Monte Carlo sampling algorithm with Metropolis-Hastings split/merge moves that utilizes the geometry of the unit sphere. We demonstrate the versatility of our Mixture-of-Manhattan-Frames model by describing complex scenes using depth images of indoor scenes as well as aerial-LiDAR measurements of an urban center. Additionally, we show that the model lends itself to focal-length calibration of depth cameras and to plane segmentation.
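One building block of such a mixture can be sketched as a hard-assignment step on the unit sphere. This is a hypothetical fragment, not the paper's sampler (the function name and the max-over-axes scoring are assumptions): each surface normal is labeled with the frame whose closest signed axis explains it best.

```python
import numpy as np

# Hypothetical hard-assignment step: given candidate Manhattan frames
# (orthogonal 3x3 rotation matrices), label each unit surface normal with
# the frame whose best-aligned signed axis explains it.
def assign_normals(normals, frames):
    """normals: (N, 3) unit vectors; frames: (F, 3, 3) rotations (columns = axes)."""
    # |n . axis| for every axis of every frame.
    dots = np.abs(np.einsum('ni,fij->nfj', normals, frames))  # (N, F, 3)
    per_frame = dots.max(axis=2)                              # best axis per frame
    return per_frame.argmax(axis=1), per_frame.max(axis=1)

frames = np.stack([np.eye(3)])                   # one axis-aligned frame
normals = np.array([[0., 0., 1.], [1., 0., 0.]])
labels, scores = assign_normals(normals, frames)
```

The full model replaces this hard assignment with probabilistic inference, and the split/merge moves add or remove frames as the data demand.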
International Journal of Biomedical Imaging | 2009
Oren Freifeld; Hayit Greenspan; Jacob Goldberger
This paper focuses on the detection and segmentation of Multiple Sclerosis (MS) lesions in magnetic resonance imaging (MRI) brain images. To capture the complex tissue spatial layout, a probabilistic model termed Constrained Gaussian Mixture Model (CGMM) is proposed based on a mixture of multiple spatially oriented Gaussians per tissue. The intensity of a tissue is considered a global parameter and is constrained, by a parameter-tying scheme, to be the same value for the entire set of Gaussians that are related to the same tissue. MS lesions are identified as outlier Gaussian components and are grouped to form a new class in addition to the healthy tissue classes. A probability-based curve evolution technique is used to refine the delineation of lesion boundaries. The proposed CGMM-CE algorithm is used to segment 3D MRI brain images with an arbitrary number of channels. The CGMM-CE algorithm is automated and does not require an atlas for initialization or parameter learning. Experimental results on both standard brain MRI simulation data and real data indicate that the proposed method outperforms previously suggested approaches, especially for highly noisy data.
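The parameter-tying scheme can be illustrated with a toy M-step fragment. This is a hypothetical sketch, not the CGMM implementation (the function name, toy responsibilities, and intensities are assumptions): all spatial Gaussians of one tissue pool their responsibilities to estimate a single shared intensity mean.

```python
import numpy as np

# Hypothetical M-step fragment illustrating parameter tying: all spatial
# Gaussians belonging to one tissue share a single intensity mean, estimated
# from responsibilities pooled over that tissue's components.
def tied_intensity_means(intensities, resp, tissue_of_component):
    """intensities: (N,) voxel intensities; resp: (N, K) responsibilities;
    tissue_of_component: (K,) tissue label per Gaussian component."""
    means = {}
    for t in np.unique(tissue_of_component):
        w = resp[:, tissue_of_component == t].sum(axis=1)  # pooled per voxel
        means[t] = (w * intensities).sum() / w.sum()
    return means

# Toy example: 4 voxels, 3 components; components 0 and 1 are tied to tissue 0.
resp = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [0.5, 0.5, 0.]])
means = tied_intensity_means(np.array([10., 20., 99., 14.]), resp,
                             np.array([0, 0, 1]))
```

Tying keeps the spatial flexibility of many Gaussians per tissue while preventing their intensity estimates from drifting apart, which is what makes the outlier (lesion) components stand out.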
European Conference on Computer Vision | 2010
Peng Guan; Oren Freifeld; Michael J. Black
Detection, tracking, segmentation, and pose estimation of people in monocular images are widely studied. Two-dimensional models of the human body are extensively used; however, they are typically fairly crude, representing the body either as a rough outline or in terms of articulated geometric primitives. We describe a new 2D model of the human body contour that combines an underlying naked body with a low-dimensional clothing model. The naked body is represented as a Contour Person that can take on a wide variety of poses and body shapes. Clothing is represented as a deformation from the underlying body contour. This deformation is learned from training examples using principal component analysis to produce eigen clothing. We find that the statistics of clothing deformations are skewed and we model the a priori probability of these deformations using a Beta distribution. The resulting generative model captures realistic human forms in monocular images and is used to infer 2D body shape and pose under clothing. We also use the coefficients of the eigen clothing to recognize different categories of clothing on dressed people. The method is evaluated quantitatively on synthetic and real images and achieves better accuracy than previous methods for estimating body shape under clothing.
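The eigen-clothing pipeline can be sketched in a few lines. This is a hypothetical toy, not the paper's code (the function names, random data, and the min-max normalization before the Beta fit are assumptions): PCA over clothing-deformation vectors yields low-dimensional coefficients, and a Beta distribution is fitted to each normalized coefficient by the method of moments.

```python
import numpy as np

# Hypothetical sketch of "eigen clothing": PCA over clothing-deformation
# vectors (clothed contour minus naked contour), plus a Beta prior fitted
# to a normalized coefficient by the method of moments.
def eigen_clothing(deformations, n_modes=3):
    mean = deformations.mean(axis=0)
    _, _, vt = np.linalg.svd(deformations - mean, full_matrices=False)
    coeffs = (deformations - mean) @ vt[:n_modes].T
    return mean, vt[:n_modes], coeffs

def fit_beta(x, eps=1e-3):
    """Method-of-moments Beta fit after squashing x into (0, 1)."""
    u = np.clip((x - x.min()) / (x.max() - x.min()), eps, 1 - eps)
    m, v = u.mean(), u.var()
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common   # alpha, beta

rng = np.random.default_rng(1)
defs = rng.normal(size=(40, 80))          # 40 examples, 80-D deformation vectors
mean, basis, coeffs = eigen_clothing(defs)
alpha, beta = fit_beta(coeffs[:, 0])
```

A Beta prior is a natural choice here because, unlike a Gaussian, it can capture the skew the authors observe in real clothing deformations.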
International Symposium on Biomedical Imaging | 2007
Oren Freifeld; Hayit Greenspan; Jacob Goldberger
This paper focuses on the detection and segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. The proposed method performs healthy tissue segmentation using a probabilistic model for normal brain images. MS lesions are simultaneously identified as outlier Gaussian components. The probabilistic model, termed constrained-GMM, is based on a mixture of many spatially-oriented Gaussians per tissue. The intensity of a tissue is considered a global parameter and is constrained to be the same value for a set of related Gaussians per tissue. An active contour algorithm is used to delineate lesion boundaries. Experimental results on both standard brain MR simulation data and real data indicate that our method outperforms previously suggested approaches, especially for highly noisy data.
Journal of Neural Engineering | 2014
Justin D. Foster; Paul Nuyujukian; Oren Freifeld; Hua Gao; Ross Walker; Stephen I. Ryu; Teresa H. Meng; Boris Murmann; Michael J. Black; Krishna V. Shenoy
OBJECTIVE: Motor neuroscience and brain-machine interface (BMI) design are based on examining how the brain controls voluntary movement, typically by recording neural activity and behavior from animal models. Recording technologies used with these animal models have traditionally limited the range of behaviors that can be studied, and thus the generality of science and engineering research. We aim to design a freely-moving animal model using neural and behavioral recording technologies that do not constrain movement.
APPROACH: We have established a freely-moving rhesus monkey model employing technology that transmits neural activity from an intracortical array using a head-mounted device and records behavior through computer vision using markerless motion capture. We demonstrate the flexibility and utility of this new monkey model, including the first recordings from motor cortex while rhesus monkeys walk quadrupedally on a treadmill.
MAIN RESULTS: Using this monkey model, we show that multi-unit threshold-crossing neural activity encodes the phase of walking and that the average firing rate of the threshold crossings covaries with the speed of individual steps. On a population level, we find that neural state-space trajectories of walking at different speeds have similar rotational dynamics in some dimensions that evolve at the step rate of walking, yet robustly separate by speed in other state-space dimensions.
SIGNIFICANCE: Freely-moving animal models may allow neuroscientists to examine a wider range of behaviors and can provide a flexible experimental paradigm for examining the neural mechanisms that underlie movement generation across behaviors and environments. For BMIs, freely-moving animal models have the potential to aid prosthetic design by examining how neural encoding changes with posture, environment, and other real-world context changes. Understanding this new realm of behavior in more naturalistic settings is essential for the overall progress of basic motor neuroscience and for the successful translation of BMIs to people with paralysis.
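The phase-encoding analysis can be illustrated with a minimal binning sketch. This is a hypothetical fragment, not the study's analysis code (the function, bin count, and sinusoidal toy data are assumptions): spike counts are averaged within bins of the walking cycle to estimate a phase-tuning curve.

```python
import numpy as np

# Hypothetical analysis fragment: estimate how threshold-crossing rate varies
# with walking phase by averaging spike counts within phase bins.
def phase_tuning(phases, counts, n_bins=8):
    """phases: (T,) step phase in [0, 1); counts: (T,) spike counts per bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(phases, edges) - 1, 0, n_bins - 1)
    return np.array([counts[idx == b].mean() if np.any(idx == b) else 0.0
                     for b in range(n_bins)])

# Toy data: firing rate peaks a quarter of the way through the step cycle.
phases = np.linspace(0.0, 1.0, 200, endpoint=False)
counts = 5.0 + 4.0 * np.sin(2 * np.pi * phases)
tuning = phase_tuning(phases, counts, n_bins=8)
```

A flat tuning curve would indicate no phase coding; the modulation reported in the paper corresponds to a strongly non-flat curve of this kind.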
International IEEE/EMBS Conference on Neural Engineering | 2011
Justin D. Foster; Oren Freifeld; Paul Nuyujukian; Stephen I. Ryu; Michael J. Black; Krishna V. Shenoy
Neural control of movement is typically studied in constrained environments where there is a reduced set of possible behaviors. This constraint may unintentionally limit the applicability of findings to the generalized case of unconstrained behavior. We hypothesize that examining the unconstrained state across multiple behavioral contexts will lead to new insights into the neural control of movement and help advance the design of neural prosthetic decode algorithms. However, pursuing electrophysiological studies in this manner requires a more flexible framework for experimentation. We propose that head-mounted neural recording systems with wireless data transmission, combined with markerless computer-vision-based motion tracking, will enable new, less constrained experiments. As a proof of concept, we recorded and wirelessly transmitted broadband neural data from 32 electrodes in premotor cortex while acquiring single-camera video of a rhesus macaque walking on a treadmill. We demonstrate the ability to extract behavioral kinematics using an automated computer vision algorithm without the use of markers and to predict kinematics from the neural data. Together, these advances suggest that a new class of “freely moving monkey” experiments should be possible and should help broaden our understanding of the neural control of movement.
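Predicting kinematics from neural data is often done with a simple linear decoder; the sketch below is a hypothetical ridge-regression example on synthetic data (the function, 32-electrode dimensionality, and random data are assumptions, not the paper's decoder).

```python
import numpy as np

# Hypothetical decoding sketch: predict a kinematic variable (e.g., a joint
# angle) from binned multi-electrode firing rates with ridge regression.
def fit_ridge(X, y, lam=1.0):
    """X: (T, E) firing rates; y: (T,) kinematics. Returns weights (E,)."""
    XtX = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ y)

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 32))             # 500 time bins, 32 electrodes
true_w = rng.normal(size=32)
y = X @ true_w + 0.1 * rng.normal(size=500)

w = fit_ridge(X, y, lam=1.0)
pred = X @ w
```

The regularizer `lam` trades bias for variance, which matters when electrode counts approach the number of usable time bins.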
Computer Vision and Pattern Recognition | 2014
Randi Cabezas; Oren Freifeld; Guy Rosman; John W. Fisher
We propose an integrated probabilistic model for multi-modal fusion of aerial imagery, LiDAR data, and (optional) GPS measurements. The model allows for analysis and dense reconstruction (in terms of both geometry and appearance) of large 3D scenes. An advantage of the approach is that it explicitly models uncertainty and allows for missing data. As compared with image-based methods, dense reconstructions of complex urban scenes are feasible with fewer observations. Moreover, the proposed model allows one to estimate absolute scale and orientation and reason about other aspects of the scene, e.g., detection of moving objects. As formulated, the model lends itself to massively-parallel computing. We exploit this in an efficient inference scheme that utilizes both general purpose and domain-specific hardware components. We demonstrate results on large-scale reconstruction of urban terrain from LiDAR and aerial photography data.