Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Lee M. Seversky is active.

Publication


Featured research published by Lee M. Seversky.


Eurographics | 2014

State of the Art in Surface Reconstruction from Point Clouds

Matthew Berger; Andrea Tagliasacchi; Lee M. Seversky; Pierre Alliez; Joshua A. Levine; Andrei Sharf; Cláudio T. Silva

The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contains a wide variety of defects. While much of the earlier work has been focused on reconstructing a piece-wise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations -- not necessarily the explicit geometry. This state-of-the-art report surveys the field of surface reconstruction, providing a categorization with respect to priors, data imperfections, and reconstruction output. By considering a holistic view of surface reconstruction, this report provides a detailed characterization of the field, highlights similarities between diverse reconstruction techniques, and provides directions for future work in surface reconstruction.


Computer Graphics Forum | 2017

A Survey of Surface Reconstruction from Point Clouds

Matthew Berger; Andrea Tagliasacchi; Lee M. Seversky; Pierre Alliez; Gaël Guennebaud; Joshua A. Levine; Andrei Sharf; Cláudio T. Silva

The area of surface reconstruction has seen substantial progress in the past two decades. The traditional problem addressed by surface reconstruction is to recover the digital representation of a physical shape that has been scanned, where the scanned data contain a wide variety of defects. While much of the earlier work has been focused on reconstructing a piece‐wise smooth representation of the original shape, recent work has taken on more specialized priors to address significantly challenging data imperfections, where the reconstruction can take on different representations—not necessarily the explicit geometry. We survey the field of surface reconstruction, and provide a categorization with respect to priors, data imperfections and reconstruction output. By considering a holistic view of surface reconstruction, we show a detailed characterization of the field, highlight similarities between diverse reconstruction techniques and provide directions for future work in surface reconstruction.


ACM Multimedia | 2006

Real-time automatic 3D scene generation from natural language voice and text descriptions

Lee M. Seversky; Lijun Yin

Automatic scene generation using voice and text offers a unique multimedia approach to classic storytelling and human computer interaction with 3D graphics. In this paper, we present a newly developed system that generates 3D scenes from voice and text natural language input. Our system is intended to benefit non-graphics domain users and applications by providing advanced scene production through an automatic system. Scene descriptions are constructed in real-time using a method for depicting spatial relationships between and among different objects. Only the polygon representations of the objects are required for object placement. In addition, our system is robust: it supports polygon models of varying quality, such as those widely available on the Internet.
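
As a rough illustration of the kind of spatial-relationship placement described above, the sketch below positions one object relative to another using only axis-aligned bounding boxes computed from their polygon models. It is a minimal stand-in, not the authors' system; all function names, relation labels, and the random stand-in geometry are hypothetical.

```python
# Hedged sketch (not the authors' system): place one object relative to
# another from a parsed spatial relation, using only axis-aligned bounding
# boxes derived from the objects' polygon meshes.
import numpy as np

def bounding_box(vertices):
    """Axis-aligned bounding box of a polygon model's vertex array (N x 3)."""
    v = np.asarray(vertices, dtype=float)
    return v.min(axis=0), v.max(axis=0)

def place(subject_verts, reference_verts, relation):
    """Translation realizing a simple relation ('on', 'left_of', 'in_front_of')
    of the subject object with respect to the reference object."""
    s_min, s_max = bounding_box(subject_verts)
    r_min, r_max = bounding_box(reference_verts)
    s_center = 0.5 * (s_min + s_max)
    target = 0.5 * (r_min + r_max)
    if relation == "on":              # rest the subject's bottom on the reference's top
        target[1] = r_max[1] + (s_center[1] - s_min[1])
    elif relation == "left_of":       # offset along -x by the half extents
        target[0] = r_min[0] - (s_max[0] - s_center[0])
    elif relation == "in_front_of":   # offset along +z by the half extents
        target[2] = r_max[2] + (s_center[2] - s_min[2])
    return target - s_center          # translation to apply to the subject

# Example: "the cup is on the table" (random stand-in vertex data)
table = np.random.rand(100, 3) * [2.0, 1.0, 1.0]
cup = np.random.rand(50, 3) * 0.2
cup_placed = cup + place(cup, table, "on")
```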


Computers & Graphics | 2011

Harmonic point cloud orientation (SMI 2011)

Lee M. Seversky; Matt S. Berger; Lijun Yin

In this work we propose a new method for estimating the normal orientation of unorganized point clouds. Consistent assignment of normal orientation is a challenging task in the presence of sharp features, nearby surface sheets, noise, undersampling, and missing data. Existing approaches, which consider local geometric properties, often fail when operating on such point clouds, as local neighborhood measures inherently face issues of robustness. Our approach circumvents these issues by orienting normals based on globally smooth functions defined on point clouds with measures that depend only on single points. More specifically, we consider harmonic functions, or functions which lie in the kernel of the point cloud Laplace-Beltrami operator. Each harmonic function in the set is used to define a gradient field over the point cloud. The problem of normal orientation is then cast as an assignment of cross-product ordering between gradient fields. Global smoothness ensures a highly consistent orientation, rendering our method extremely robust in the presence of imperfect point clouds.
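
A rough sketch of the idea follows. It substitutes the smallest nonzero eigenvectors of a kNN graph Laplacian for true harmonic functions of the point cloud Laplace-Beltrami operator, and estimates gradients by local least squares; all function names and parameters are illustrative, and this is not the authors' implementation.

```python
# Hedged sketch (not the authors' implementation): orient normals of an
# unorganized point cloud using the cross product of the gradients of two
# globally smooth functions. The smallest nonzero Laplacian eigenvectors of a
# kNN graph stand in for harmonic functions of the Laplace-Beltrami operator.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import NearestNeighbors

def smooth_fields(points, k=12, n_fields=2):
    """Lowest-frequency eigenvectors of a kNN graph Laplacian."""
    dist, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    n = len(points)
    w = np.exp(-dist.ravel() ** 2 / (dist.mean() ** 2 + 1e-12))
    W = csr_matrix((w, (np.repeat(np.arange(n), k), idx.ravel())), shape=(n, n))
    W = 0.5 * (W + W.T)                              # symmetric edge weights
    L = laplacian(W, normed=True)
    vals, vecs = eigsh(L, k=n_fields + 1, sigma=-1e-5, which='LM')
    order = np.argsort(vals)
    return vecs[:, order[1:]]                        # drop the constant mode

def orient_normals(points, normals, k=12):
    """Flip unoriented (e.g. PCA) normals to agree with grad(f1) x grad(f2)."""
    f = smooth_fields(points, k=k, n_fields=2)
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    oriented = normals.copy()
    for i in range(len(points)):
        P = points[idx[i]] - points[i]               # local neighborhood offsets
        g1 = np.linalg.lstsq(P, f[idx[i], 0] - f[i, 0], rcond=None)[0]
        g2 = np.linalg.lstsq(P, f[idx[i], 1] - f[i, 1], rcond=None)[0]
        if np.dot(np.cross(g1, g2), normals[i]) < 0:  # enforce a consistent ordering
            oriented[i] = -normals[i]
    return oriented

# Toy usage with random stand-in data
pts = np.random.rand(500, 3)
nrm = np.random.randn(500, 3)
nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
oriented = orient_normals(pts, nrm)
```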


Military Communications Conference | 2009

N-CET: Network-centric exploitation and tracking

James M. Metzler; Mark Linderman; Lee M. Seversky

The Network-Centric Exploitation and Tracking (N-CET) program is a research effort to enhance intelligence exploitation in a tactical environment by cross-cueing sensors and fusing data from on-board sources with processed information from off-board platforms and sharing the resulting products in a net-centric manner. At the core of N-CET are information management services that decouple data producers and consumers, allowing reconfiguration to suit mission needs. Network-centric algorithms utilize the availability of information from both homogeneous and complementary on-board and off-board sensors. Organic capabilities facilitate the extraction of actionable information from high bandwidth sensor data and ensure the necessary information arrives at other platforms and users in a timely manner. This paper provides an overview of the N-CET architecture and the sensors and algorithms currently implemented upon it. The extent to which such algorithms are enhanced in a network-centric environment is discussed and the challenges of managing the resulting dynamic information space in a tactical publish/subscribe/query model are presented.
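
The decoupling of data producers and consumers mentioned above is the standard publish/subscribe pattern; the minimal sketch below shows that pattern only and is not the N-CET information management services. Topic names and message fields are hypothetical.

```python
# Minimal publish/subscribe sketch, only to illustrate producer/consumer
# decoupling; topic and field names are hypothetical, not N-CET's.
from collections import defaultdict
from typing import Callable, Dict, List

class Broker:
    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        """Consumers register interest in a topic, not in a specific producer."""
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        """Producers publish without knowing who (if anyone) consumes."""
        for handler in self._subscribers[topic]:
            handler(message)

broker = Broker()
broker.subscribe("tracks", lambda msg: print("fusion node received:", msg))
broker.publish("tracks", {"platform": "sensor-A", "track_id": 7})
```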


IEEE Transactions on Visualization and Computer Graphics | 2017

cite2vec: Citation-Driven Document Exploration via Word Embeddings

Matthew Berger; Katherine McDonough; Lee M. Seversky

Effectively exploring and browsing document collections is a fundamental problem in visualization. Traditionally, document visualization is based on a data model that represents each document as the set of its comprised words, effectively characterizing what the document is. In this paper we take an alternative perspective: motivated by the manner in which users search documents in the research process, we aim to visualize documents via their usage, or how documents tend to be used. We present a new visualization scheme - cite2vec - that allows the user to dynamically explore and browse documents via how other documents use them, information that we capture through citation contexts in a document collection. Starting from a usage-oriented word-document 2D projection, the user can dynamically steer document projections by prescribing semantic concepts, both in the form of phrase/document compositions and document:phrase analogies, enabling the exploration and comparison of documents by their use. The user interactions are enabled by a joint representation of words and documents in a common high-dimensional embedding space where user-specified concepts correspond to linear operations of word and document vectors. Our case studies, centered around a large document corpus of computer vision research papers, highlight the potential for usage-based document visualization.
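
The vector arithmetic the abstract alludes to can be sketched as follows: words and documents live in one embedding space, so a query composed from word and document vectors (including analogy-style queries) can be matched to documents by cosine similarity. The embeddings, vocabulary, and document names below are random placeholders, not cite2vec's trained model.

```python
# Hedged sketch of composing queries from word and document vectors in a
# shared embedding space; all vectors here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
dim = 64
word_vecs = {w: rng.normal(size=dim) for w in ["segmentation", "tracking", "dataset"]}
doc_vecs = {f"doc_{i}": rng.normal(size=dim) for i in range(5)}

def normalize(v):
    return v / (np.linalg.norm(v) + 1e-12)

def nearest_docs(query_vec, k=3):
    """Rank documents by cosine similarity to a composed query vector."""
    scored = sorted(doc_vecs.items(),
                    key=lambda kv: -np.dot(normalize(kv[1]), normalize(query_vec)))
    return [name for name, _ in scored[:k]]

# Phrase/document composition: "documents used like doc_0, but for tracking"
print(nearest_docs(doc_vecs["doc_0"] + word_vecs["tracking"]))

# Analogy-style query: doc_1 - "segmentation" + "tracking"
print(nearest_docs(doc_vecs["doc_1"] - word_vecs["segmentation"] + word_vecs["tracking"]))
```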


Face and Gesture | 2011

Reshaping 3D facial scans for facial appearance modeling and 3D facial expression analysis

Yanhui Huang; Xing Zhang; Yangyu Fan; Lijun Yin; Lee M. Seversky; Tao Lei; Weijun Dong

3D face scans have been widely used for face modeling and face analysis. Because face scans provide variable point clouds across frames, they may not capture complete facial data or may lack point-to-point correspondences across scans, making such data difficult to use for analysis. This paper presents an efficient approach to represent facial shapes from face scans through the reconstruction of face models based on regional information and a generic model. A hybrid approach using two vertex mapping algorithms, displacement mapping and point-to-surface mapping, and a regional blending algorithm is proposed to reconstruct the facial surface detail. The resulting models can represent individual facial shapes consistently and adaptively, establishing facial point correspondence across individual models. The accuracy of the generated models is evaluated quantitatively. The applicability of the models is validated through the application to 3D facial expression recognition on the static 3DFE and dynamic 4DFE databases. A comparison with the state of the art is also reported.
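
As a loose simplification of the vertex mapping step, the sketch below snaps each vertex of a generic model to its nearest scan point and blends the result with the original geometry; this point-to-point shortcut stands in for the paper's displacement and point-to-surface mappings, and all data and parameter names are placeholders.

```python
# Hedged simplification of vertex mapping: deform a generic face model toward
# a scan by snapping vertices to nearest scan points (a point-to-point stand-in
# for displacement / point-to-surface mapping), blended by a weight.
import numpy as np
from scipy.spatial import cKDTree

def reshape_generic_model(model_vertices, scan_points, weight=0.8):
    """Move each generic-model vertex toward its closest scan point.

    weight = 1.0 snaps fully to the scan; lower values blend with the
    original model, loosely mimicking regional blending."""
    tree = cKDTree(scan_points)
    _, nearest = tree.query(model_vertices)
    targets = scan_points[nearest]
    return (1.0 - weight) * model_vertices + weight * targets

# Toy usage with random stand-in geometry
model = np.random.rand(500, 3)
scan = np.random.rand(2000, 3)
fitted = reshape_generic_model(model, scan, weight=0.8)
```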


Computer Vision and Pattern Recognition | 2014

Subspace Tracking under Dynamic Dimensionality for Online Background Subtraction

Matthew Berger; Lee M. Seversky

Long-term modeling of background motion in videos is an important and challenging problem used in numerous applications such as segmentation and event recognition. A major challenge in modeling the background from point trajectories lies in dealing with the variable length duration of trajectories, which can be due to such factors as trajectories entering and leaving the frame or occlusion from different depth layers. This work proposes an online method for background modeling of dynamic point trajectories via tracking of a linear subspace describing the background motion. To cope with variability in trajectory durations, we cast subspace tracking as an instance of subspace estimation under missing data, using a least-absolute deviations formulation to robustly estimate the background in the presence of arbitrary foreground motion. Relative to previous works, our approach is very fast and scales to arbitrarily long videos as our method processes new frames sequentially as they arrive.
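
A much simplified sketch of online subspace tracking under missing data is given below: each incoming column is fit against the current basis by a least-absolute-deviations (IRLS) solve on its observed entries, and the basis takes a small subgradient step before re-orthonormalization. This is only in the spirit of the approach described above, not the authors' exact algorithm, and all names, step sizes, and data are illustrative.

```python
# Hedged sketch of online subspace tracking with missing entries; a simplified
# stand-in for the method described above, not the authors' exact algorithm.
import numpy as np

def irls_l1(A, b, iters=20, eps=1e-6):
    """Minimize ||A w - b||_1 via iteratively reweighted least squares."""
    w = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ w - b
        weights = 1.0 / np.maximum(np.abs(r), eps)
        W = A * weights[:, None]
        w = np.linalg.solve(A.T @ W + eps * np.eye(A.shape[1]), W.T @ b)
    return w

def update_subspace(U, x, omega, step=0.1):
    """One online update of an orthonormal basis U from a partial column x."""
    A = U[omega]                                  # rows of U at observed entries
    w = irls_l1(A, x[omega])                      # robust coefficients
    residual = x[omega] - A @ w
    U = U.copy()
    U[omega] += step * np.outer(np.sign(residual), w)   # L1 subgradient step
    Q, _ = np.linalg.qr(U)                        # keep the basis orthonormal
    return Q

# Toy usage: track a rank-3 subspace of 100-dimensional trajectory snapshots
rng = np.random.default_rng(0)
U = np.linalg.qr(rng.normal(size=(100, 3)))[0]
for _ in range(50):
    x = rng.normal(size=100)                          # stand-in frame data
    omega = rng.choice(100, size=60, replace=False)   # observed entries only
    U = update_subspace(U, x, omega)
```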


International Conference on Robotics and Automation | 2016

Classifying swarm behavior via compressive subspace learning

Matthew Berger; Lee M. Seversky; Daniel S. Brown

Bio-inspired robot swarms encompass a rich space of dynamics and collective behaviors. Given some agent measurements of a swarm at a particular time instance, an important problem is the classification of the swarm behavior. This is challenging in practical scenarios where information from only a small number of agents may be available, resulting in limited agent samples for classification. Another challenge is recognizing emerging behavior: the prediction of swarm behavior prior to convergence of the attracting state. In this paper we address these challenges by modeling a swarm's collective motion as a low-dimensional linear subspace. We illustrate that for both synthetic and real data, these behaviors manifest as low-dimensional subspaces, and that these subspaces are highly discriminative. We also show that these subspaces generalize well to predicting emerging behavior, highlighting that there exists low-dimensional structure in transient agent behavior. In order to learn distinct behavior subspaces, we extend previous work on subspace estimation and identification from missing data to that of compressive measurements, where compressive measurements arise due to agent positions scattered throughout the domain. We demonstrate improvement in performance over prior works with respect to limited agent samples over a wide range of agent models and scenarios.
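
The subspace view of behavior classification can be sketched as follows: fit one low-dimensional subspace per behavior from training snapshots, then label a new snapshot, possibly with only a few agents observed, by whichever subspace best explains its observed entries. This is a simplified stand-in rather than the paper's compressive estimation method; behavior labels, dimensions, and data below are synthetic placeholders.

```python
# Hedged sketch of subspace-based behavior classification with partially
# observed agents; not the authors' compressive estimation method.
import numpy as np

def fit_subspace(snapshots, rank=3):
    """Snapshots: (dim, n_samples) matrix of stacked agent states."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank]

def residual(U, x, observed):
    """Least-squares residual of the observed entries against subspace U."""
    A = U[observed]
    w = np.linalg.lstsq(A, x[observed], rcond=None)[0]
    return np.linalg.norm(x[observed] - A @ w)

def classify(x, observed, subspaces):
    return min(subspaces, key=lambda label: residual(subspaces[label], x, observed))

# Toy usage with synthetic "behaviors" as random low-rank models
rng = np.random.default_rng(1)
subspaces = {label: fit_subspace(rng.normal(size=(40, 200))) for label in ("flock", "torus")}
x = subspaces["torus"] @ rng.normal(size=3)           # snapshot drawn from one behavior
observed = rng.choice(40, size=10, replace=False)     # only a few agents sampled
print(classify(x, observed, subspaces))               # expected: "torus"
```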


Computer Graphics Forum | 2012

A Global Parity Measure for Incomplete Point Cloud Data

Lee M. Seversky; Lijun Yin

Shapes with complex geometric and topological features such as tunnels, neighboring sheets, and cavities are susceptible to undersampling and continue to challenge existing reconstruction techniques. In this work we introduce a new measure for point clouds to determine the likely interior and exterior regions of an object. Specifically, we adapt the concept of parity to point clouds with missing data and introduce the parity map, a global measure of parity over the volume. We first examine how parity changes over the volume with respect to missing data and develop a method for extracting topologically correct interior and exterior crusts for estimating a signed distance field and performing surface reconstruction. We evaluate our approach on real scan data representing complex shapes with missing data. Our parity measure is not only able to identify highly confident interior and exterior regions but also localizes regions of missing data. Our reconstruction results are compared to existing methods and we show that our method faithfully captures the topology and geometry of complex shapes in the presence of missing data.
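
To make the parity intuition concrete, the sketch below voxelizes a point cloud, casts an axis-aligned ray from a query location, and counts how many occupied runs of voxels the ray crosses; an odd count suggests an interior point. This is a coarse illustration of parity for sampled data, not the paper's global parity map, and the grid size and sphere data are arbitrary placeholders.

```python
# Hedged illustration of the parity idea for sampled data; a simplification,
# not the paper's global parity measure.
import numpy as np

def voxelize(points, grid=16):
    """Occupancy grid over the point cloud's bounding box."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    cell = (hi - lo + 1e-9) / grid
    idx = np.floor((points - lo) / cell).astype(int)
    occ = np.zeros((grid, grid, grid), dtype=bool)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ, lo, cell

def parity(query, occ, lo, cell):
    """Count crust crossings along +x from the query; odd => likely interior."""
    q = np.floor((query - lo) / cell).astype(int)
    column = occ[q[0]:, q[1], q[2]]                   # occupancy along the +x ray
    # each run of occupied voxels the ray passes through counts as one crossing
    crossings = np.count_nonzero(np.diff(column.astype(int)) == 1) + int(column[0])
    return crossings % 2                              # 1 = interior, 0 = exterior

# Toy usage: samples on a sphere shell, query at its center
theta, phi = np.random.rand(2, 20000) * [[2 * np.pi], [np.pi]]
pts = np.c_[np.sin(phi) * np.cos(theta), np.sin(phi) * np.sin(theta), np.cos(phi)]
occ, lo, cell = voxelize(pts)
print(parity(np.array([0.0, 0.0, 0.0]), occ, lo, cell))   # expected: 1 (interior)
```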

Collaboration


Dive into Lee M. Seversky's collaborations.

Top Co-Authors

Matthew Berger
Air Force Research Laboratory

Lijun Yin
Binghamton University

Eric Heim
University of Pittsburgh

Joe H. Chow
Rensselaer Polytechnic Institute

Meng Wang
Rensselaer Polytechnic Institute

Pengzhi Gao
Rensselaer Polytechnic Institute