Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Jay W. Summet is active.

Publication


Featured research published by Jay W. Summet.


User Interface Software and Technology | 2005

Moveable interactive projected displays using projector based tracking

Johnny Chung Lee; Scott E. Hudson; Jay W. Summet; Paul H. Dietz

Video projectors have typically been used to display images on surfaces whose geometric relationship to the projector remains constant, such as walls or pre-calibrated surfaces. In this paper, we present a technique for projecting content onto moveable surfaces that adapts to the motion and location of the surface to simulate an active display. This is accomplished using a projector-based location-tracking technique. We use light sensors embedded into the moveable surface and project low-perceptibility Gray-coded patterns to first discover the sensor locations, and then incrementally track them at interactive rates. We describe how to reduce the perceptibility of tracking patterns, achieve interactive tracking rates, use motion modeling to improve tracking performance, and respond to sensor occlusions. A group of tracked sensors can define quadrangles for simulating moveable displays, while single sensors can be used as control inputs. By unifying the tracking and display technology into a single mechanism, we can substantially reduce the cost and complexity of implementing applications that combine motion tracking and projected imagery.
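As a rough illustration of the projector-based tracking idea, the sketch below shows how an embedded light sensor can recover its own projector-space coordinates from a sequence of Gray-coded bit-plane patterns. The projector resolution, bit counts, and helper names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of Gray-coded localization, assuming a 1024x768 projector
# that displays one bit-plane per frame and a photosensor that reports one
# bright/dark bit per frame. All names and parameters are hypothetical.

def gray_encode(n: int) -> int:
    """Binary-reflected Gray code of n (adjacent values differ by one bit)."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the Gray code by cascading XORs of right-shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

W, H = 1024, 768
BITS_X, BITS_Y = 10, 10          # ceil(log2(W)), ceil(log2(H)) bit-planes

def plane_bit(coord: int, k: int) -> int:
    """Whether bit-plane k is lit at projector coordinate `coord`."""
    return (gray_encode(coord) >> k) & 1

def decode_position(x_bits, y_bits):
    """Recover (x, y) from the bits a sensor recorded, MSB first."""
    gx = sum(b << (len(x_bits) - 1 - i) for i, b in enumerate(x_bits))
    gy = sum(b << (len(y_bits) - 1 - i) for i, b in enumerate(y_bits))
    return gray_decode(gx), gray_decode(gy)

# Simulate a sensor embedded at projector pixel (300, 200):
x_bits = [plane_bit(300, k) for k in range(BITS_X - 1, -1, -1)]
y_bits = [plane_bit(200, k) for k in range(BITS_Y - 1, -1, -1)]
assert decode_position(x_bits, y_bits) == (300, 200)
```

Gray codes are preferred over plain binary here because neighboring projector pixels differ in only one bit-plane, so a sensor sitting on a pattern boundary mis-decodes by at most one pixel.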


Technical Symposium on Computer Science Education | 2009

Personalizing CS1 with robots

Jay W. Summet; Deepak Kumar; Keith J. O'Hara; Daniel Walker; Lijun Ni; Douglas S. Blank; Tucker R. Balch

We have developed a CS1 curriculum that uses a robotics context to teach introductory programming [1]. Core to our approach is that each student has their own personal robot. Our robot and software have been specifically developed to support the needs of a CS1 curriculum. We frame traditional problems (robot control) in terms that are personal, relevant, and fun. Initial trial classes have shown that our approach is successful and adaptable.


International Conference on Computer Graphics and Interactive Techniques | 2007

Prakash: lighting aware motion capture using photosensing markers and multiplexed illuminators

Ramesh Raskar; Hideaki Nii; Bert deDecker; Yuki Hashimoto; Jay W. Summet; Dylan Moore; Yong Zhao; Jonathan Westhues; Paul H. Dietz; John C. Barnwell; Shree K. Nayar; Masahiko Inami; Philippe Bekaert; Michael Noland; Vlad Branzoi; Erich Bruns

In this paper, we present a high speed optical motion capture method that can measure three dimensional motion, orientation, and incident illumination at tagged points in a scene. We use tracking tags that work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. Our system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Our tags also provide incident illumination data which can be used to match scene lighting when inserting synthetic elements. The technique is therefore ideal for on-set motion capture or real-time broadcasting of virtual sets. Unlike previous methods that employ high speed cameras or scanning lasers, we capture the scene appearance using the simplest possible optical devices - a light-emitting diode (LED) with a passive binary mask used as the transmitter and a photosensor used as the receiver. We strategically place a set of optical transmitters to spatio-temporally encode the volume of interest. Photosensors attached to scene points demultiplex the coded optical signals from multiple transmitters, allowing us to compute not only receiver location and orientation but also their incident illumination and the reflectance of the surfaces to which the photosensors are attached. We use our untethered tag system, called Prakash, to demonstrate methods of adding special effects to captured videos that cannot be accomplished using pure vision techniques that rely on camera images.
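To make the demultiplexing idea concrete, here is a deliberately simplified 1-D sketch: a handful of time-multiplexed beacons, each with a passive binary mask, stripe the volume so that the sequence of bright/dark samples a tag records spells out the index of the cell it occupies. The beacon count, plain-binary masks, and function names are assumptions for illustration; the actual system uses IR LEDs and also recovers orientation and incident illumination.

```python
# Simplified 1-D sketch of optical space labeling with masked beacons.
# K beacons flash in turn; beacon k's passive mask lights exactly the
# cells whose k-th bit is 1, so one epoch of photosensor samples encodes
# the tag's cell index. All parameters are illustrative assumptions.

K = 6                    # beacons / time slots
N = 2 ** K               # cells resolvable along one axis

def beacon_lights_cell(k: int, cell: int) -> bool:
    """Does beacon k's binary mask pass light to this cell?"""
    return bool((cell >> k) & 1)

def tag_observes(cell: int):
    """One epoch of samples: one brightness bit per beacon slot."""
    return [beacon_lights_cell(k, cell) for k in range(K)]

def demultiplex(samples) -> int:
    """Reassemble the cell index from the recorded bits."""
    return sum(int(bit) << k for k, bit in enumerate(samples))

assert demultiplex(tag_observes(37)) == 37   # tag recovers its own cell
```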


IEEE Pervasive Computing | 2008

Designing Personal Robots for Education: Hardware, Software, and Curriculum

Tucker R. Balch; Jay W. Summet; Douglas S. Blank; Deepak Kumar; Mark Guzdial; Keith J. O'Hara; Daniel Walker; M. Sweat; C. Gupta; S. Tansley; J. Jackson; Mansi Gupta; M.N. Muhammad; S. Prashad; N. Eilbert; A. Gavin

An exciting new initiative at Georgia Tech and Bryn Mawr College is using personal robots both to motivate students and to serve as the primary programming platform for the Computer Science 1 curriculum. Here, the authors introduce the initiative and outline plans for the future.


International Conference on Pervasive Computing | 2007

TrackSense: infrastructure free precise indoor positioning using projected patterns

Moritz Köhler; Shwetak N. Patel; Jay W. Summet; Erich P. Stuntebeck; Gregory D. Abowd

While commercial solutions for precise indoor positioning exist, they are costly and require installation of additional infrastructure, which limits opportunities for widespread adoption. Inspired by robotics techniques of Simultaneous Localization and Mapping (SLAM) and computer vision approaches using structured light patterns, we propose a self-contained solution to precise indoor positioning that requires no additional environmental infrastructure. Evaluation of our prototype, called TrackSense, indicates that such a system can deliver up to 4 cm accuracy with 3 cm precision in rooms up to five meters squared, as well as 2 degree accuracy and 1 degree precision on orientation. We explain the design and performance characteristics of our prototype and demonstrate a feasible miniaturization that supports applications that require a single device localizing itself in a space. We also discuss extensions to locate multiple devices and limitations of this approach.
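The core geometry behind this style of projected-pattern positioning is plain triangulation between a projector and a camera mounted a known baseline apart. The sketch below shows only that step, under an assumed rig and angles; it is not TrackSense's full pipeline, which also recovers orientation and fuses multiple pattern features.

```python
# Sketch: range to a surface point by triangulating a projected feature,
# the geometric core of structured-light positioning. Assumed rig: a
# projector and camera rigidly mounted a known baseline apart; angles are
# measured from the baseline toward the observed pattern point.
import math

def triangulate_distance(baseline_m: float, alpha: float, beta: float) -> float:
    """Perpendicular distance (m) from the device baseline to the projected
    point, given the projection angle alpha and the camera's viewing angle
    beta (radians). Law of sines on the projector-camera-point triangle."""
    return baseline_m * math.sin(alpha) * math.sin(beta) / math.sin(alpha + beta)

# Example: 10 cm baseline, pattern cast at 80 deg, seen by the camera at 75 deg.
d = triangulate_distance(0.10, math.radians(80), math.radians(75))
print(f"distance to surface: {d:.3f} m")   # ~0.23 m for this toy rig
```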


Ubiquitous Computing | 2005

Preventing camera recording by designing a capture-resistant environment

Khai N. Truong; Shwetak N. Patel; Jay W. Summet; Gregory D. Abowd

With the ubiquity of camera phones, it is now possible to capture digital still and moving images anywhere, raising a legitimate concern for many organizations and individuals. Although legal and social boundaries can curb the capture of sensitive information, it is sometimes neither practical nor desirable to confiscate the capture device from an individual. We present the design and proof-of-concept implementation of a capture-resistant environment that prevents the recording of still and moving images without requiring any cooperation on the part of the capturing device or its operator. Our solution involves a tracking system that uses computer vision to locate the retro-reflective CCD or CMOS sensors of any number of cameras in a protected area. A pulsing light is then directed at each lens, distorting any imagery the camera records. Although the directed light interferes with the camera's operation, it can be designed to minimally impact the sight of other humans in the environment.
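A toy version of the detection step is easy to sketch: because a camera's optics retro-reflect light back toward a co-located illuminant, subtracting a frame taken with the illuminant off from one taken with it on leaves bright blobs at candidate lenses. The frame format and threshold below are assumed; the deployed system adds tracking and the directed pulsing light.

```python
# Sketch: detecting retro-reflective camera optics by differencing frames
# captured with a co-located illuminant ON and OFF. Inputs are grayscale
# NumPy arrays; the threshold is an assumed tuning parameter.
import numpy as np

def find_camera_candidates(frame_on: np.ndarray,
                           frame_off: np.ndarray,
                           threshold: int = 60):
    """Return (row, col) coordinates of pixels that brighten sharply when
    the illuminant fires, i.e. likely retro-reflections from CCD/CMOS optics."""
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    ys, xs = np.nonzero(diff > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

# Toy example: a synthetic 8x8 scene with one retro-reflective spot at (3, 5).
off = np.full((8, 8), 40, dtype=np.uint8)
on = off.copy()
on[3, 5] = 220                          # lens bounces the illuminant back
print(find_camera_candidates(on, off))  # -> [(3, 5)]
```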


International Conference on Pervasive Computing | 2005

Tracking locations of moving hand-held displays using projected light

Jay W. Summet; Rahul Sukthankar

Lee et al. have recently demonstrated display positioning using optical sensors in conjunction with temporally-coded patterns of projected light. This paper extends that concept in two important directions. First, we enable such sensors to determine their own location without using radio synchronization signals – allowing cheaper sensors and protecting location privacy. Second, we track the optical sensors over time using adaptive patterns, minimizing the extent of distracting temporal codes to small regions, thus enabling the remainder of the illuminated region to serve as a useful display while tracking. Our algorithms have been integrated into a prototype system that projects content onto a small, moving surface to create an inexpensive hand-held display for pervasive computing applications.
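The second contribution, localized tracking, can be sketched as a simple window policy: after one global Gray-code pass finds a sensor, subsequent code patterns are confined to a small region around its last known position, leaving the rest of the projection free for content. The window sizes and growth rule below are illustrative assumptions rather than the paper's exact scheme.

```python
# Sketch: incremental tracking with localized patterns. Only the window
# returned by region() carries temporal codes each frame; everything
# outside it can display ordinary content. Parameters are hypothetical.

class TrackingWindow:
    def __init__(self, x: int, y: int, size: int = 32,
                 grow: int = 2, max_size: int = 1024):
        self.x, self.y = x, y           # last known sensor position
        self.size = size                # half-width of the coded region
        self.grow, self.max_size = grow, max_size

    def region(self):
        """Projector rectangle that carries tracking codes this frame."""
        return (self.x - self.size, self.y - self.size,
                self.x + self.size, self.y + self.size)

    def update(self, found_at):
        """Recenter on a detection; widen the window after a miss
        (e.g. occlusion or fast motion) until the sensor is reacquired."""
        if found_at is not None:
            self.x, self.y = found_at
            self.size = 32              # shrink back to the tight window
        else:
            self.size = min(self.size * self.grow, self.max_size)
```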


IEEE Transactions on Visualization and Computer Graphics | 2008

Viz-A-Vis: Toward Visualizing Video through Computer Vision

Mario Romero; Jay W. Summet; John T. Stasko; Gregory D. Abowd

In the established procedural model of information visualization, the first operation is to transform raw data into data tables. The transforms typically include abstractions that aggregate and segment relevant data and are usually defined by a human, whether user or programmer. The theme of this paper is that, for video, these data transforms should be supported by low-level computer vision: high-level reasoning still resides in the human analyst, while part of the low-level perception is handled by the computer. To illustrate this approach, we present Viz-A-Vis, an overhead video capture and access system for activity analysis in natural settings over variable periods of time. Overhead video provides rich opportunities for long-term behavioral and occupancy analysis, but it poses considerable challenges. We present initial steps addressing two of them: first, overhead video generates overwhelmingly large volumes of footage that are impractical to analyze manually; second, automatic video analysis remains an open problem for computer vision.
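One concrete example of the low-level vision transforms the paper argues for is turning overhead video into a per-pixel occupancy count, sketched below. The frame format, the crude median background model, and the threshold are all assumptions for illustration.

```python
# Sketch: aggregating overhead video into an occupancy heat map, one
# plausible low-level transform for long-term activity analysis. Frames
# are same-shape grayscale NumPy arrays.
import numpy as np

def occupancy_heatmap(frames, threshold: float = 25.0) -> np.ndarray:
    """Per pixel, count the frames that deviate from a static background
    estimate; high counts mark heavily used regions of the floor plan."""
    stack = np.stack([f.astype(np.float32) for f in frames])
    background = np.median(stack, axis=0)       # crude static background
    return (np.abs(stack - background) > threshold).sum(axis=0)

# Toy example: a 4x4 "room" occupied in the lower-right corner for
# 3 of 10 frames.
frames = [np.full((4, 4), 50, dtype=np.uint8) for _ in range(10)]
for f in frames[7:]:
    f[2:, 2:] = 200                      # someone occupies that corner
print(occupancy_heatmap(frames))         # 3s in the corner, 0s elsewhere
```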


IEEE Aerospace and Electronic Systems Magazine | 2000

Privacy algorithm for cylindrical holographic weapons surveillance system

Paul E. Keller; Douglas L. McMakin; David M. Sheen; A.D. McKinnon; Jay W. Summet

A novel personnel surveillance system has been developed to detect and identify threatening objects concealed on the human body that are undetectable by metal detectors. The new system can detect threats fabricated from plastic, liquid, metal, or ceramic. It uses millimeter-wave array technology and a cylindrical holographic imaging algorithm to provide full-body, 360-degree coverage of a person in near real time. This system is ideally suited for mass-transportation centers such as airport checkpoints that require high throughput rates and full coverage. Research and development efforts are underway to produce a privacy algorithm that removes human features from the images while identifying potential threats. The algorithm locates and segments the threats and places them on a wire-frame humanoid representation. The research areas for this algorithm include artificial neural networks, image processing, edge detection, and dielectric measurements. The system is operational; test results and the privacy algorithm are discussed in this paper.
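The segmentation step can be sketched in a very loose form: find anomalous regions in a (simulated) millimeter-wave image and report their bounding boxes, which would then be rendered on the wireframe humanoid instead of the raw body image. The image, threshold, and simple brightness test below are illustrative assumptions; the actual algorithm draws on neural networks, edge detection, and dielectric measurements.

```python
# Sketch: thresholding and labeling candidate threat regions in a toy
# millimeter-wave image. All values here are illustrative.
import numpy as np
from scipy import ndimage

def segment_threats(mmw_image: np.ndarray, threshold: float):
    """Label connected regions brighter than the body baseline and return
    one bounding box (top, left, bottom, right) per candidate threat."""
    labels, count = ndimage.label(mmw_image > threshold)
    boxes = []
    for rows, cols in ndimage.find_objects(labels):
        boxes.append((rows.start, cols.start, rows.stop, cols.stop))
    return boxes

# Toy image: uniform body response with one concealed high-reflectance object.
img = np.full((20, 10), 0.2)
img[8:11, 4:7] = 0.9
print(segment_threats(img, threshold=0.5))   # -> [(8, 4, 11, 7)]
```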


IEEE Transactions on Visualization and Computer Graphics | 2007

Shadow Elimination and Blinding Light Suppression for Interactive Projected Displays

Jay W. Summet; Matthew Flagg; Tat-Jen Cham; James M. Rehg; Rahul Sukthankar

A major problem with interactive displays based on front projection is that users cast undesirable shadows on the display surface. This paper demonstrates that shadows can be muted by redundantly illuminating the display surface using multiple projectors, all mounted at different locations. However, this technique alone does not eliminate shadows: Multiple projectors create multiple dark regions on the surface (penumbral occlusions) and cast undesirable light onto the users. These problems can be solved by eliminating shadows and suppressing the light that falls on occluding users by actively modifying the projected output. This paper categorizes various methods that can be used to achieve redundant illumination, shadow elimination, and blinding light suppression and evaluates their performance.
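A single feedback iteration of the shadow-elimination idea can be sketched as follows: a camera frame is compared against a reference image of the unoccluded display, and every projector's drive is raised where the surface comes back too dark, since only the unoccluded projectors' extra light actually reaches those pixels. Blinding-light suppression additionally needs a per-projector occlusion estimate to blank the pixels striking the user, which this sketch omits; the gain and threshold are assumed tuning parameters.

```python
# Sketch: one camera-feedback iteration of multi-projector shadow
# elimination. drive_maps holds one per-projector brightness map in
# [0, 1], registered to the display surface. Parameters are assumptions.
import numpy as np

def compensate_shadows(camera: np.ndarray, reference: np.ndarray,
                       drive_maps, gain: float = 0.4,
                       threshold: float = 20.0):
    """Boost every projector's output at surface pixels the camera sees
    darker than the unoccluded reference; return maps for the next frame."""
    error = reference.astype(np.float32) - camera.astype(np.float32)
    shadow = error > threshold                     # under-lit surface pixels
    updated = []
    for m in drive_maps:
        boosted = m + gain * shadow * (1.0 - m)    # push shadowed pixels up
        updated.append(np.clip(boosted, 0.0, 1.0))
    return updated
```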

Collaboration


Dive into Jay W. Summet's collaborations.

Top Co-Authors

Gregory D. Abowd (Georgia Institute of Technology)

James M. Rehg (Georgia Institute of Technology)

Gregory M. Corso (Georgia Institute of Technology)

Matthew Flagg (Georgia Institute of Technology)

Daniel Walker (Georgia Institute of Technology)

David M. Sheen (Pacific Northwest National Laboratory)