Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Austin Abrams is active.

Publication


Featured research published by Austin Abrams.


Journal of Structural Biology | 2011

Modeling protein structure at near atomic resolutions with Gorgon.

Matthew L. Baker; Sasakthi S. Abeysinghe; Stephen Schuh; Ross A. Coleman; Austin Abrams; Michael P. Marsh; Corey F. Hryc; Troy Ruths; Wah Chiu; Tao Ju

Electron cryo-microscopy (cryo-EM) has played an increasingly important role in elucidating the structure and function of macromolecular assemblies in near native solution conditions. Typically, however, only non-atomic resolution reconstructions have been obtained for these large complexes, necessitating computational tools for integrating and extracting structural details. With recent advances in cryo-EM, maps at near-atomic resolutions have been achieved for several macromolecular assemblies from which models have been manually constructed. In this work, we describe a new interactive modeling toolkit called Gorgon targeted at intermediate to near-atomic resolution density maps (10-3.5 Å), particularly from cryo-EM. Gorgon's de novo modeling procedure couples sequence-based secondary structure prediction with feature detection and geometric modeling techniques to generate initial protein backbone models. Beyond model building, Gorgon is an extensible interactive visualization platform with a variety of computational tools for annotating a wide range of 3D volumes. Examples from cryo-EM maps of Rotavirus and Rice Dwarf Virus are used to demonstrate its applicability to modeling protein structure.
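
The de novo step above hinges on placing sequence-predicted secondary structure elements in correspondence with features detected in the density map. As a minimal sketch of one such correspondence step (not Gorgon's actual matcher, which uses much richer geometric cues), the Hungarian algorithm can match predicted helix lengths to detected density rods; all lengths below are fabricated for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical helix lengths in Angstroms: predicted from sequence
# (residue count times ~1.5 A rise per residue) and measured from
# rod-like features detected in the density map. Values are made up.
predicted = np.array([18.0, 25.5, 9.0, 31.5])
detected = np.array([26.1, 17.2, 30.0, 10.4])

# Cost of assigning each predicted helix to each detected rod:
# absolute disagreement in length.
cost = np.abs(predicted[:, None] - detected[None, :])

# The Hungarian algorithm returns the globally optimal one-to-one
# assignment under this cost.
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"predicted helix {r} ({predicted[r]:.1f} A) -> "
          f"detected rod {c} ({detected[c]:.1f} A)")
```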


Advances in Geographic Information Systems | 2009

The global network of outdoor webcams: properties and applications

Nathan Jacobs; Walker Burgin; Nick Fridrich; Austin Abrams; Kylia Miskell; Bobby H. Braswell; Andrew D. Richardson; Robert Pless

There are thousands of outdoor webcams which offer live images freely over the Internet. We report on methods for discovering and organizing this already existing and massively distributed global sensor, and argue that it provides an interesting alternative to satellite imagery for global-scale remote sensing applications. In particular, we characterize the live imaging capabilities that are freely available as of the summer of 2009 in terms of the spatial distribution of the cameras, their update rate, and characteristics of the scene in view. We offer algorithms that exploit the fact that webcams are typically static to simplify the tasks of inferring relevant environmental and weather variables directly from image data. Finally, we show that organizing and exploiting the large, ad hoc set of cameras attached to the web can dramatically increase the data available for studying particular problems in phenology.
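
Because the cameras are static, a fixed image region observes the same patch of scene or sky in every frame, so simple per-region statistics become usable environmental signals. A minimal sketch of that idea, with a fabricated frame stack, sky region, and weather signal standing in for real data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stack of grayscale frames from one static webcam:
# (num_frames, height, width). Real data would be loaded from disk.
frames = rng.uniform(0, 255, size=(365, 120, 160))

# A fixed sky region, valid for every frame only because the camera
# never moves. Coordinates are illustrative.
sky = (slice(0, 40), slice(0, 160))

# Mean sky brightness per frame: a crude proxy for cloud cover and
# overall illumination that can be correlated with weather records.
sky_brightness = frames[:, sky[0], sky[1]].mean(axis=(1, 2))

# Correlate against an external daily weather signal (fabricated here).
weather_signal = rng.uniform(size=365)
r = np.corrcoef(sky_brightness, weather_signal)[0, 1]
print(f"correlation with weather signal: {r:+.3f}")
```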


ACM Multimedia | 2010

Webcams in context: web interfaces to create live 3D environments

Austin Abrams; Robert Pless

Web services supporting deep integration between video data and geographic information systems (GIS) empower a large user base to build on popular tools such as Google Earth and Google Maps. Here we extend web interfaces designed explicitly for novice users to integrate streaming video with 3D GIS, and work to dramatically simplify the task of retexturing 3D scenes from live imagery. We also derive and implement constraints to use corresponding points to calibrate popular pan-tilt-zoom webcams with respect to GIS applications, so that the calibration is automatically updated as web users adjust the camera zoom and view direction. These contributions are demonstrated in a live web application implemented on the Google Earth Plug-in, within which hundreds of users have already geo-registered streaming cameras in hundreds of scenes to create live, updating textures in 3D scenes.
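
The paper derives constraints specific to pan-tilt-zoom cameras; as a generic stand-in for calibration from corresponding points, the classic Direct Linear Transform recovers a 3x4 projection matrix from six or more 2D-3D correspondences. A sketch on synthetic data (this is the textbook DLT, not the paper's PTZ formulation):

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Direct Linear Transform: recover a 3x4 projection matrix P with
    [u, v, 1]^T ~ P [X, Y, Z, 1]^T from six or more correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The smallest right singular vector minimizes ||A p|| with ||p|| = 1.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

# Synthetic camera (focal length 500, principal point at image center)
# looking at points placed in front of it.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.c_[np.eye(3), np.array([0.0, 0.0, 5.0])]
rng = np.random.default_rng(1)
pts3d = rng.uniform(-1, 1, size=(8, 3))
hom = np.c_[pts3d, np.ones(8)] @ P_true.T
pts2d = hom[:, :2] / hom[:, 2:]

P = dlt_projection_matrix(pts3d, pts2d)
hom = np.c_[pts3d, np.ones(8)] @ P.T
print(np.abs(hom[:, :2] / hom[:, 2:] - pts2d).max())  # ~0: exact recovery
```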


Workshop on Applications of Computer Vision | 2012

LOST: Longterm Observation of Scenes (with Tracks)

Austin Abrams; Jim Tucek; Joshua Little; Nathan Jacobs; Robert Pless

We introduce the Longterm Observation of Scenes (with Tracks) dataset. This dataset comprises videos taken from streaming outdoor webcams, capturing the same half hour each day for over a year. LOST contains rich metadata, including geolocation, day-by-day weather annotation, object detections, and tracking results. We believe that sharing this dataset opens opportunities for computer vision research involving very long-term outdoor surveillance, robust anomaly detection, and scene analysis methods based on trajectories. Efficient analysis of changes in behavior in a scene at very long time scales requires features that summarize large amounts of trajectory data in an economical way. We describe a trajectory clustering algorithm, aggregate statistics about the cluster exemplars through time, and show that these statistics exhibit strong correlations with external metadata, such as weather signals and day of the week.
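
One economical way to summarize a large trajectory set, in the spirit of the clustering described above (though not necessarily the paper's exact algorithm), is to resample every track to a fixed number of points and cluster the flattened coordinates with k-means. A sketch on fabricated tracks:

```python
import numpy as np

def resample(track, n=16):
    """Resample a (T, 2) trajectory to n evenly spaced points by
    linear interpolation along its index."""
    t_old = np.linspace(0, 1, len(track))
    t_new = np.linspace(0, 1, n)
    return np.stack([np.interp(t_new, t_old, track[:, d]) for d in (0, 1)],
                    axis=1)

def kmeans(X, k=5, iters=50, seed=0):
    """Plain k-means; each row of X is one flattened trajectory."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Fabricated random-walk tracks of varying length, standing in for the
# dataset's tracking results.
rng = np.random.default_rng(0)
tracks = [np.cumsum(rng.normal(size=(rng.integers(20, 80), 2)), axis=0)
          for _ in range(200)]
X = np.stack([resample(t).ravel() for t in tracks])   # (200, 32)
labels, exemplars = kmeans(X, k=5)
print(np.bincount(labels))  # per-cluster counts: one summary statistic
```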


International Conference and Exhibition on Computing for Geospatial Research & Application | 2010

Participatory integration of live webcams into GIS

Austin Abrams; Nick Fridrich; Nathan Jacobs; Robert Pless

Global satellite imagery provides nearly ubiquitous views of the Earth's surface, and tens of thousands of webcams provide live views from near-Earth viewpoints. Combining these into a single application creates live views in the global context, where cars move through intersections, trees sway in the wind, and students walk across campus in real time. This integration requires camera registration, which takes time, effort, and expertise. Here we report on two participatory interfaces that simplify this registration by allowing anyone to use live webcam streams to create virtual overhead views or to map live texture onto 3D models. We highlight system design issues that affect the scalability of such a service, and offer a case study of how we overcame these in building a system which is publicly available and integrated with Google Maps and the Google Earth Plug-in. Imagery registered to features in GIS applications can be considered richly geotagged, and we discuss opportunities for this rich geotagging.
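
For a ground plane, registering a webcam view to an overhead map reduces to fitting a homography from a handful of user-clicked correspondences. A minimal sketch (unnormalized DLT from four invented point pairs; a production system would add normalization and outlier handling):

```python
import numpy as np

def fit_homography(src, dst):
    """Fit H with dst ~ H @ src (homogeneous) from four or more point
    pairs via the unnormalized DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, pts):
    hom = np.c_[pts, np.ones(len(pts))] @ H.T
    return hom[:, :2] / hom[:, 2:]

# Invented clicks: image pixels of four road corners, and the same
# corners in overhead map coordinates (meters).
image_pts = np.array([[100, 400], [520, 410], [450, 250], [180, 240]], float)
map_pts = np.array([[0, 0], [30, 0], [30, 20], [0, 20]], float)

H = fit_homography(image_pts, map_pts)
print(apply_homography(H, image_pts))  # reproduces map_pts
```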


Workshop on Applications of Computer Vision | 2012

Tools for richer crowd source image annotations

Joshua Little; Austin Abrams; Robert Pless

Crowd-sourcing tools such as Mechanical Turk are popular for annotation of large-scale image data sets. Typically, these annotations consist of bounding boxes or coarse outlines of objects, in order to keep the interface as simple as possible and to respect browser constraints. However, as most browsers now contain functionality to quickly process images and render shapes to the browser through JavaScript, better annotations can feasibly be generated through the browser given an easy-to-use interface. In this paper, we develop a suite of annotation tools for high-fidelity object contouring and 3D pose, working within the limitation that, to be accessible to most Mechanical Turk users, the tools must be available through browsers with no plug-ins or extra downloads. We show comparative results exploring the annotation accuracy relative to existing annotation tools.
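
Downstream of such tools, a high-fidelity contour annotation is typically consumed as a binary mask. A small sketch of that conversion using Pillow; the contour coordinates are invented, and this is a generic post-processing step rather than part of the paper's tools:

```python
import numpy as np
from PIL import Image, ImageDraw

# A hypothetical object contour collected from a browser annotation
# tool, as (x, y) pixel coordinates.
contour = [(40, 30), (120, 25), (140, 90), (85, 130), (35, 95)]

# Rasterize the polygon into a binary mask the size of the image.
mask_img = Image.new("1", (160, 160), 0)           # 1-bit image, all zeros
ImageDraw.Draw(mask_img).polygon(contour, fill=1)  # fill the interior
mask = np.array(mask_img, dtype=bool)

print(mask.shape, mask.sum(), "pixels inside the contour")
```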


Workshop on Applications of Computer Vision | 2011

Exploratory analysis of time-lapse imagery with fast subset PCA

Austin Abrams; Emily Feder; Robert Pless

In surveillance and environmental monitoring applications, it is common to have millions of images of a particular scene. While there exist tools to find particular events, anomalies, and human actions and behaviors, there has been little investigation of tools which allow more exploratory searches in the data. This paper proposes modifications to PCA that enable users to quickly recompute low-rank decompositions for select spatial and temporal subsets of the data. This process returns decompositions orders of magnitude faster than general PCA, and the results are close to optimal in terms of reconstruction error. We show examples of real exploratory data analysis across several applications, including an interactive web application.
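
The operation being accelerated is ordinary PCA restricted to a user-chosen subset of pixels and frames. A baseline sketch of that restriction on fabricated data (the paper's contribution is recomputing such decompositions far faster than the naive slice-and-SVD below):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical time-lapse data: each column is one vectorized frame.
num_pixels, num_frames = 4096, 1000
X = rng.normal(size=(num_pixels, num_frames))

# User-selected spatial and temporal subsets (e.g., one image region,
# every fourth frame). Indices here are arbitrary.
pixel_subset = np.arange(0, 1024)
frame_subset = np.arange(0, num_frames, 4)

# Naive subset PCA: slice, center, truncated SVD.
S = X[np.ix_(pixel_subset, frame_subset)]
S = S - S.mean(axis=1, keepdims=True)
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
k = 10
basis, coeffs = U[:, :k], sv[:k, None] * Vt[:k]   # rank-k decomposition

err = np.linalg.norm(S - basis @ coeffs) / np.linalg.norm(S)
print(f"rank-{k} relative reconstruction error: {err:.3f}")
```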


Computer Vision and Pattern Recognition | 2011

On analyzing video with very small motions

Michael Dixon; Austin Abrams; Nathan Jacobs; Robert Pless

We characterize a class of videos consisting of very small but potentially complicated motions. We find that in these scenes, linear appearance variations have a direct relationship to scene motions. We show how to interpret appearance variations captured through a PCA decomposition of the image set as a scene-specific non-parametric motion basis. We propose fast, robust tools for dense flow estimates that are effective in scenes with small motions and potentially large image noise. We show example results in a variety of applications, including motion segmentation and long-term point tracking.
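
The link between appearance and motion is the brightness-constancy linearization: for a small displacement (u, v), the image change is approximately -(Ix*u + Iy*v), so a PCA appearance component can be treated as a temporal derivative and solved for flow by local least squares. A sketch of that interpretation on a synthetic sub-pixel shift (a simplification of the paper's method):

```python
import numpy as np

def flow_from_component(base_img, component, win=5):
    """Treat one PCA appearance component as a temporal derivative and
    solve Ix*u + Iy*v = -component by least squares in a local window
    (Lucas-Kanade style). A sketch of the idea, not the full method."""
    Iy, Ix = np.gradient(base_img)            # spatial gradients
    h, w = base_img.shape
    r = win // 2
    u = np.zeros((h, w))
    v = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            sl = (slice(i - r, i + r + 1), slice(j - r, j + r + 1))
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            b = -component[sl].ravel()
            sol = np.linalg.lstsq(A, b, rcond=None)[0]
            u[i, j], v[i, j] = sol
    return u, v

# Synthetic check: a Gaussian blob shifted by a small known amount.
yy, xx = np.mgrid[0:64, 0:64].astype(float)
blob = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 50.0)
shift = 0.2                                   # true sub-pixel x-motion
moved = np.exp(-((xx - 32 - shift) ** 2 + (yy - 32) ** 2) / 50.0)
component = moved - blob                      # stands in for a PCA mode

u, v = flow_from_component(blob, component)
print(f"estimated x-motion near the blob: {u[28:37, 28:37].mean():+.2f}")
```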


Workshop on Applications of Computer Vision | 2015

Characterizing Feature Matching Performance over Long Time Periods

Abby Stylianou; Austin Abrams; Robert Pless

Many computer vision applications rely on matching features of a query image to reference data sets, but little work has explored how quickly those data sets become out of date. In this paper we measure feature matching performance across 5 years of time-lapse data from 20 static cameras to empirically study how feature matching is affected by changing sunlight direction, seasons, weather, and the structural changes that occur over time in outdoor settings. We identify several trends that may be relevant in real-world applications: (1) features are much more likely to match within a few days of the reference data, (2) weather and sun direction have a large effect on feature matching, and (3) there is a slow decay over time due to physical changes in a scene, but this decay is much smaller than the effects of lighting direction and weather. These trends are consistent across standard choices for feature detection (DoG, MSER) and feature description (SIFT, SURF, and DAISY). Across all choices, analysis of the feature detection and matching pipeline highlights that performance decay is mostly due to failures in keypoint detection rather than feature description.
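
The measurement underlying these trends is standard local feature matching between a query frame and a reference frame, for example SIFT with Lowe's ratio test. A sketch using OpenCV; the file names are placeholders for frames from one static camera taken far apart in time:

```python
import cv2

# Placeholder file names: two frames from the same static camera
# (reference = old, query = new).
ref = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)
qry = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
assert ref is not None and qry is not None, "supply two real frames"

sift = cv2.SIFT_create()                      # DoG detector + SIFT descriptor
kp_r, des_r = sift.detectAndCompute(ref, None)
kp_q, des_q = sift.detectAndCompute(qry, None)

# Lowe's ratio test: keep a match only when the best candidate is
# clearly better than the second best.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_q, des_r, k=2)
        if m.distance < 0.75 * n.distance]

# Fraction of query keypoints that still match the reference: the kind
# of statistic tracked across the five years of data.
print(f"{len(good)} of {len(kp_q)} query keypoints matched")
```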


International Conference on Computational Photography | 2014

Structure from shadow motion

Austin Abrams; Ian Schillebeeckx; Robert Pless

In outdoor images, cast shadows define 3D constraints between the sun, the points casting a shadow, and the surfaces onto which shadows are cast. This cast shadow structure provides a powerful cue for 3D reconstruction, but requires that shadows be tracked over time, and this is difficult as shadows have minimal texture. Thus, we develop a shadow tracking system that enforces geometric consistency for each track and then combines thousands of tracking results to create a 3D model of scene geometry. We demonstrate reconstruction results on a variety of outdoor scenes, including some that show the 3D structure of occluders never directly observed by the camera.
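
The geometry above is linear: a 3D point p casting a shadow on the ground plane z = 0 under sun direction l lands at s = p_xy - (p_z / l_z) * l_xy, so shadow positions tracked under several known sun directions determine p by least squares. A sketch on synthetic data (the paper's full system also handles tracking and geometric consistency):

```python
import numpy as np

def point_from_shadow_track(shadows, sun_dirs):
    """Recover the 3D point p = (px, py, pz) from its ground-plane
    shadow positions s_i observed under known sun directions l_i:
        s_i = (px, py) - (pz / l_iz) * (l_ix, l_iy)
    which is linear in (px, py, pz)."""
    A, b = [], []
    for (sx, sy), (lx, ly, lz) in zip(shadows, sun_dirs):
        A.append([1, 0, -lx / lz]); b.append(sx)
        A.append([0, 1, -ly / lz]); b.append(sy)
    p, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return p

# Synthetic occluder point and a few sun directions over a day.
p_true = np.array([2.0, 3.0, 5.0])
sun_dirs = np.array([[0.5, 0.2, -0.8], [0.1, 0.4, -0.9],
                     [-0.3, 0.3, -0.9], [-0.5, 0.1, -0.8]])
shadows = [p_true[:2] - (p_true[2] / lz) * np.array([lx, ly])
           for lx, ly, lz in sun_dirs]

print(point_from_shadow_track(shadows, sun_dirs))  # ~ [2, 3, 5]
```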

Collaboration


Dive into Austin Abrams's collaborations.

Top Co-Authors

Robert Pless, Washington University in St. Louis
Abby Stylianou, Washington University in St. Louis
Kylia Miskell, Washington University in St. Louis
Christopher Hawley, Washington University in St. Louis
Joseph D. O'Sullivan, Washington University in St. Louis
Joshua Little, Washington University in St. Louis
Nick Fridrich, Washington University in St. Louis
Bobby H. Braswell, University of New Hampshire