Publications


Featured research published by Stephen DiVerdi.


International Conference on Computer Graphics and Interactive Techniques | 2012

Exploring collections of 3D models using fuzzy correspondences

Vladimir G. Kim; Wilmot Li; Niloy J. Mitra; Stephen DiVerdi; Thomas A. Funkhouser

Large collections of 3D models from the same object class (e.g., chairs, cars, animals) are now commonly available via many public repositories, but exploring the range of shape variations across such collections remains a challenging task. In this work, we present a new exploration interface that allows users to browse collections based on similarities and differences between shapes in user-specified regions of interest (ROIs). To support this interactive system, we introduce a novel analysis method for computing similarity relationships between points on 3D shapes across a collection. We encode the inherent ambiguity in these relationships using fuzzy point correspondences and propose a robust and efficient computational framework that estimates fuzzy correspondences using only a sparse set of pairwise model alignments. We evaluate our analysis method on a range of correspondence benchmarks and report substantial improvements in both speed and accuracy over existing alternatives. In addition, we demonstrate how fuzzy correspondences enable key features in our exploration tool, such as automated view alignment, ROI-based similarity search, and faceted browsing.
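
At heart, the fuzzy correspondences described above are soft point-to-point maps that can be chained through intermediate models. As a rough illustration (the row-stochastic matrix representation and the function name are assumptions for this sketch, not the authors' code), composing soft maps along a path of pairwise alignments yields a fuzzy correspondence between the path's endpoint models:

```python
import numpy as np

def compose_fuzzy_maps(path_maps):
    """Compose soft point-to-point maps along a chain of pairwise
    model alignments. Each map is a row-stochastic (n x n) matrix
    whose row i spreads point i of one model over the sample points
    of the next; the product is a fuzzy correspondence between the
    two endpoint models of the chain."""
    C = path_maps[0].copy()
    for M in path_maps[1:]:
        C = C @ M
        # Renormalize so each row stays a probability distribution.
        C /= np.maximum(C.sum(axis=1, keepdims=True), 1e-12)
    return C
```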


International Conference on Computer Graphics and Interactive Techniques | 2013

Learning part-based templates from large collections of 3D shapes

Vladimir G. Kim; Wilmot Li; Niloy J. Mitra; Siddhartha Chaudhuri; Stephen DiVerdi; Thomas A. Funkhouser

As large repositories of 3D shape collections continue to grow, understanding the data, especially encoding inter-model similarities and variations, is of central importance. For example, many data-driven approaches now rely on access to semantic segmentation information, accurate inter-model point-to-point correspondence, and deformation models that characterize the model collections. Existing approaches, however, are either supervised, requiring manual labeling, or employ super-linear matching algorithms and are thus unsuited for analyzing large collections spanning many thousands of models. We propose an automatic algorithm that starts with an initial template model and then jointly optimizes for part segmentation, point-to-point surface correspondence, and a compact deformation model to best explain the input model collection. As output, the algorithm produces a set of probabilistic part-based templates that groups the original models into clusters capturing their styles and variations. We evaluate our algorithm on several standard datasets and demonstrate its scalability by analyzing much larger collections of up to thousands of shapes.
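
To make the alternating structure of such a joint optimization concrete, here is a deliberately tiny stand-in that fits a "template" of box-shaped parts to a single point cloud by alternating segmentation and refitting. The real method couples this across an entire collection with a far richer deformation model; this is only a sketch of the alternation:

```python
import numpy as np

def fit_part_template(points, n_parts=4, n_iters=10):
    """Toy stand-in for part-based template fitting: model a shape's
    point cloud as n_parts axis-aligned boxes, alternately assigning
    points to their nearest part center (segmentation step) and
    refitting each part to its points (deformation-model step).
    Returns box (min, max) corners and per-point part labels."""
    rng = np.random.default_rng(0)
    centers = points[rng.choice(len(points), n_parts, replace=False)]
    for _ in range(n_iters):
        # Assign each point to the nearest part center.
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Refit each part to its assigned points.
        for k in range(n_parts):
            if (labels == k).any():
                centers[k] = points[labels == k].mean(axis=0)
    boxes = [(points[labels == k].min(axis=0), points[labels == k].max(axis=0))
             for k in range(n_parts) if (labels == k).any()]
    return boxes, labels
```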


Computers & Graphics | 2009

Annotation in outdoor augmented reality

Jason Wither; Stephen DiVerdi; Tobias Höllerer

Annotation, the process of adding extra virtual information to an object, is one of the most common uses for augmented reality. Although annotation is widely used in augmented reality, there is no generally agreed-upon definition of what precisely constitutes an annotation in this context. In this paper, we propose a taxonomy of annotation, describing what constitutes an annotation and outlining different dimensions along which annotation can vary. Using this taxonomy, we also highlight what styles of annotation are used in different types of applications, and areas where further work needs to be done to improve annotation. Through our taxonomy we found two primary categories into which annotations in current applications fall. Some annotations present information that is directly related to the object they are annotating, while others are only indirectly related to the object that is being annotated. We also found that there are very few applications that enable the user to either edit or create new annotations online. Instead, most applications rely on content that is created in various offline processes. There are, however, many advantages to online annotation. We summarize and formalize our recent work in this field by presenting the steps needed to build an online annotation system, looking most closely at techniques for placing annotations from a distance.
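
As a trivial example of the most basic placement step such an online system needs (illustrative only; the paper's distance-estimation techniques are considerably more involved), an annotation can be dropped along the user's view ray at an estimated distance:

```python
import numpy as np

def place_annotation(eye, view_dir, distance):
    """Place an annotation in world space along the user's view ray
    at a distance the user estimates or refines interactively; one
    basic step of an online annotation pipeline (a hypothetical
    helper, not the paper's exact technique)."""
    d = np.asarray(view_dir, dtype=float)
    return np.asarray(eye, dtype=float) + distance * (d / np.linalg.norm(d))
```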


IEEE Virtual Reality Conference | 2008

Envisor: Online Environment Map Construction for Mixed Reality

Stephen DiVerdi; Jason Wither; Tobias Höllerer

One of the main goals of anywhere augmentation is the development of automatic algorithms for scene acquisition in augmented reality systems. In this paper, we present Envisor, a system for online construction of environment maps in new locations. To accomplish this, Envisor uses vision-based frame-to-frame and landmark orientation tracking for long-term, drift-free registration. For additional robustness, a gyroscope/compass orientation unit can optionally be used for hybrid tracking. The tracked video is then projected into a cubemap frame by frame. Feedback is presented to the user to help avoid gaps in the cubemap, while any remaining gaps are filled by texture diffusion. The resulting environment map can be used for a variety of applications, including shading of virtual geometry and remote presence.
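
The per-frame projection step amounts to mapping each pixel's world-space ray onto a cube face and texel. A minimal sketch under the common OpenGL cubemap convention (Envisor's exact conventions may differ):

```python
import numpy as np

def dir_to_cubemap(d):
    """Map a world-space view direction onto a cubemap texel: pick
    the face whose axis component dominates, then project the other
    two components into that face's [0, 1]^2 texture coordinates.
    This is the core of splatting tracked camera frames into an
    environment map."""
    d = np.asarray(d, dtype=float)
    x, y, z = d / np.linalg.norm(d)
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:        # +X or -X face
        face = '+x' if x > 0 else '-x'
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:                   # +Y or -Y face
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:                            # +Z or -Z face
        face = '+z' if z > 0 else '-z'
        u, v = (x / az if z > 0 else -x / az), -y / az
    return face, (0.5 * (u + 1.0), 0.5 * (v + 1.0))
```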


International Conference on Computer Graphics and Interactive Techniques | 2013

RealBrush: painting with examples of physical media

Jingwan Lu; Connelly Barnes; Stephen DiVerdi; Adam Finkelstein

Conventional digital painting systems rely on procedural rules and physical simulation to render paint strokes. We present an interactive, data-driven painting system that uses scanned images of real natural media to synthesize both new strokes and complex stroke interactions, obviating the need for physical simulation. First, users capture images of real media, including examples of isolated strokes, pairs of overlapping strokes, and smudged strokes. Online, the user inputs a new stroke path, and our system synthesizes its 2D texture appearance with optional smearing or smudging when strokes overlap. We demonstrate high-fidelity paintings that closely resemble the captured media style, and also quantitatively evaluate our synthesis quality via user studies.
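
A minimal flavor of the data-driven idea, under assumed representations (an exemplar stroke scan as a grayscale image, a new path as a polyline); the actual system also synthesizes stroke overlaps, smearing, and smudging:

```python
import numpy as np

def sweep_stroke_texture(exemplar, path, width):
    """Example-based stroke rendering sketch: sample a scanned stroke
    image (exemplar, an HxW array) along a new 2D path by arc length,
    sweeping its cross-section perpendicular to the path. Returns
    per-sample (point, half-width normal, texture column) triples
    that a rasterizer could splat."""
    path = np.asarray(path, dtype=float)
    seg = np.diff(path, axis=0)
    arclen = np.concatenate([[0.0], np.cumsum(np.linalg.norm(seg, axis=1))])
    total = max(arclen[-1], 1e-9)
    samples = []
    for p, s, d in zip(path[:-1], arclen[:-1], seg):
        t = s / total                                        # fraction along the stroke
        col = exemplar[:, int(t * (exemplar.shape[1] - 1))]  # matching exemplar column
        n = np.array([-d[1], d[0]]) / max(np.linalg.norm(d), 1e-9)  # path normal
        samples.append((p, n * width / 2.0, col))
    return samples
```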


International Conference on Computer Graphics and Interactive Techniques | 2012

HelpingHand: example-based stroke stylization

Jingwan Lu; Fisher Yu; Adam Finkelstein; Stephen DiVerdi

Digital painters commonly use a tablet and stylus to drive software like Adobe Photoshop. A high quality stylus with 6 degrees of freedom (DOFs: 2D position, pressure, 2D tilt, and 1D rotation) coupled to a virtual brush simulation engine allows skilled users to produce expressive strokes in their own style. However, such devices are difficult for novices to control, and many people draw with less expensive (lower DOF) input devices. This paper presents a data-driven approach for synthesizing the 6D hand gesture data for users of low-quality input devices. Offline, we collect a library of strokes with 6D data created by trained artists. Online, given a query stroke as a series of 2D positions, we synthesize the 4D hand pose data at each sample based on samples from the library that locally match the query. This framework can optionally also modify the stroke trajectory to match characteristic shapes in the style of the library. Our algorithm outputs a 6D trajectory that can be fed into any virtual brush stroke engine to make expressive strokes for novices or users of limited hardware.
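
In outline, the synthesis step is a nearest-neighbor lookup: describe each query sample by its local 2D neighborhood, find the closest library sample, and copy that sample's 4D pose. A stripped-down sketch assuming NumPy arrays as inputs (the descriptor and names are illustrative; the paper additionally enforces temporal coherence and can stylize the trajectory):

```python
import numpy as np

def local_descriptor(pts, i, w=4):
    """Describe sample i of a 2D stroke by the offsets of its
    neighbors inside a +/- w window, translated so the sample sits
    at the origin (rotation handling omitted for brevity)."""
    lo, hi = max(0, i - w), min(len(pts), i + w + 1)
    d = (pts[lo:hi] - pts[i]).ravel()
    return np.pad(d, (0, 2 * (2 * w + 1) - d.size))  # fixed length

def synthesize_pose(query_xy, library):
    """For each 2D sample of a query stroke, copy the 4D pose data
    (pressure, 2D tilt, rotation) of the best-matching library
    sample. library: list of (xy array, pose array) pairs recorded
    from artists using a full 6-DOF stylus."""
    lib_desc, lib_pose = [], []
    for xy, pose in library:
        for i in range(len(xy)):
            lib_desc.append(local_descriptor(xy, i))
            lib_pose.append(pose[i])
    lib_desc, lib_pose = np.asarray(lib_desc), np.asarray(lib_pose)
    out = []
    for i in range(len(query_xy)):
        q = local_descriptor(query_xy, i)
        j = np.linalg.norm(lib_desc - q, axis=1).argmin()  # nearest library sample
        out.append(lib_pose[j])
    return np.asarray(out)  # (n, 4): pose channels to append to the 2D path
```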


International Conference on Computer Graphics and Interactive Techniques | 2015

Palette-based photo recoloring

Huiwen Chang; Ohad Fried; Yiming Liu; Stephen DiVerdi; Adam Finkelstein

Image editing applications offer a wide array of tools for color manipulation. Some of these tools are easy to understand but offer a limited range of expressiveness. Other more powerful tools are time consuming for experts and inscrutable to novices. Researchers have described a variety of more sophisticated methods, but these are typically not interactive, which is crucial for creative exploration. This paper introduces a simple, intuitive and interactive tool that allows non-experts to recolor an image by editing a color palette. This system comprises several components: a GUI that is easy to learn and understand, an efficient algorithm for creating a color palette from an image, and a novel color transfer algorithm that recolors the image based on a user-modified palette. We evaluate our approach via a user study, showing that it is faster and easier to use than two alternatives, and allows untrained users to achieve results comparable to those of experts using professional software.
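
The pipeline splits into palette extraction and color transfer. Below is a toy version of both steps, using k-means in RGB with hard nearest-entry assignment; the paper works in a perceptual color space and blends contributions smoothly across palette entries, so treat this only as a sketch of the idea:

```python
import numpy as np

def extract_palette(img, k=5, n_iters=20):
    """Build a k-color palette by k-means over the image's pixels;
    a simple stand-in for the paper's palette-extraction step."""
    pix = img.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(0)
    pal = pix[rng.choice(len(pix), k, replace=False)]
    for _ in range(n_iters):
        lab = np.linalg.norm(pix[:, None] - pal[None], axis=2).argmin(axis=1)
        for i in range(k):
            if (lab == i).any():
                pal[i] = pix[lab == i].mean(axis=0)
    return pal

def recolor(img, palette, edited_palette):
    """Naive recoloring: shift each pixel by the offset of its
    nearest palette entry after the user edits the palette. Hard
    assignment causes banding that the paper's smooth transfer
    avoids; this only illustrates the interaction model."""
    pix = img.reshape(-1, 3).astype(float)
    lab = np.linalg.norm(pix[:, None] - palette[None], axis=2).argmin(axis=1)
    out = pix + (edited_palette - palette)[lab]
    return out.clip(0, 255).reshape(img.shape)
```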


International Conference on Computer Graphics and Interactive Techniques | 2005

The interactive FogScreen

Ismo Rakkolainen; Stephen DiVerdi; Alex Olwal; Nicola Candussi; Tobias Höllerer; Markku Laitinen; Mika Piirto; Karri T. Palovuori

The FogScreen is an immaterial projection screen that consists of air and a little humidity, and enables high-quality projected images in thin air. Objects and images appear to float in mid-air, and touching or walking through them enhances the impression, as the screen feels just like air. A notable feature is the ability to project different images onto each side of the screen without the two interfering.


Location Based Services and TeleCartography | 2007

“Anywhere Augmentation”: Towards Mobile Augmented Reality in Unprepared Environments

Tobias Höllerer; Jason Wither; Stephen DiVerdi

We introduce the term “Anywhere Augmentation” to refer to the idea of linking location-specific computing services with the physical world, making them readily and directly available in any situation and location. This chapter presents a novel approach to “Anywhere Augmentation” based on efficient human input for wearable computing and augmented reality (AR). Current mobile and wearable computing technologies, as found in many industrial and governmental service applications, do not routinely integrate the services they provide with the physical world. Major limitations in the computer’s general scene understanding abilities and the infeasibility of instrumenting the whole globe with a unified sensing and computing environment prevent progress in this area. Alternative approaches must be considered.


International Symposium on Mixed and Augmented Reality | 2006

Using aerial photographs for improved mobile AR annotation

Jason Wither; Stephen DiVerdi; Tobias Höllerer

We present a mobile augmented reality system for outdoor annotation of the real world. To reduce user burden, we use aerial photographs in addition to the wearable system's usual data sources (position, orientation, camera, and user input). This allows the user to accurately annotate 3D features with only a few simple interactions from a single position, by aligning features in both their first-person viewpoint and in the aerial view. We examine three types of aerial photograph features (corners, edges, and regions) that are suitable for a wide variety of useful mobile augmented reality applications and are easily visible on aerial photographs. By using aerial photographs in combination with wearable augmented reality, we achieve much more accurate 3D annotation positions than was previously possible from a single user location.
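
The geometric core of the single-position technique can be sketched in a few lines: the aerial pick fixes the feature's ground-plane position and hence its horizontal range from the user, and the first-person elevation angle then yields its height. Names and conventions here are illustrative, not the paper's implementation:

```python
import math

def annotate_from_aerial(user_xy, user_height, feature_xy, elevation_deg):
    """Recover a 3D annotation position by combining an aerial-photo
    pick (ground-plane x, y of the feature) with a first-person
    sight angle (elevation above the horizon). The horizontal range
    comes from the aerial view; the vertical offset follows from
    simple trigonometry."""
    dx = feature_xy[0] - user_xy[0]
    dy = feature_xy[1] - user_xy[1]
    dist = math.hypot(dx, dy)  # horizontal range measured on the aerial photo
    z = user_height + dist * math.tan(math.radians(elevation_deg))
    return (feature_xy[0], feature_xy[1], z)
```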

Collaboration


An overview of Stephen DiVerdi's collaborations and top co-authors.

Top Co-Authors

Ismo Rakkolainen

Tampere University of Technology

Alex Olwal

Royal Institute of Technology

Jason Wither

University of California
