Publication


Featured research published by Thad Starner.


IEEE Pervasive Computing | 2005

Energy scavenging for mobile and wireless electronics

Joseph A. Paradiso; Thad Starner

Energy harvesting has grown from long-established concepts into devices for powering ubiquitously deployed sensor networks and mobile electronics. Systems can scavenge power from human activity or derive limited energy from ambient heat, light, radio, or vibrations. Ongoing power management developments enable battery-powered electronics to live longer. Such advances include dynamic optimization of voltage and clock rate, hybrid analog-digital designs, and clever wake-up procedures that keep the electronics mostly inactive. Exploiting renewable energy resources in the device's environment, however, offers a power source limited by the device's physical survival rather than an adjunct energy store. Energy harvesting's true legacy dates to the water wheel and windmill, and credible approaches that scavenge energy from waste heat or vibration have been around for many decades. Nonetheless, the field has encountered renewed interest as low-power electronics, wireless standards, and miniaturization conspire to populate the world with sensor networks and mobile devices. This article presents a whirlwind survey through energy harvesting, spanning historic and current developments.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1998

Real-time American sign language recognition using desk and wearable computer based video

Thad Starner; Joshua Weaver; Alex Pentland

We present two real-time hidden Markov model-based systems for recognizing sentence-level continuous American Sign Language (ASL) using a single camera to track the user's unadorned hands. The first system observes the user from a desk-mounted camera and achieves 92 percent word accuracy. The second system mounts the camera in a cap worn by the user and achieves 98 percent accuracy (97 percent with an unrestricted grammar). Both experiments use a 40-word lexicon.


Lecture Notes in Computer Science | 1999

The Aware Home: A Living Laboratory for Ubiquitous Computing Research

Cory D. Kidd; Robert J. Orr; Gregory D. Abowd; Christopher G. Atkeson; Irfan A. Essa; Blair MacIntyre; Elizabeth D. Mynatt; Thad Starner; Wendy C. Newstetter

We are building a home, called the Aware Home, to create a living laboratory for research in ubiquitous computing for everyday activities. This paper introduces the Aware Home project and outlines some of our technology- and human-centered research objectives in creating the Aware Home.


Ubiquitous Computing | 2003

Using GPS to learn significant locations and predict movement across multiple users

Daniel Ashbrook; Thad Starner

Wearable computers have the potential to act as intelligent agents in everyday life and to assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user's task. However, another potential use of location context is the creation of a predictive model of the user's future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios.
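The cluster-then-model pipeline this abstract describes can be sketched in a few lines of Python. This is an illustrative reconstruction under simplified assumptions, not the paper's algorithm: a greedy radius-based clusterer stands in for the paper's multi-scale clustering, and raw transition counts stand in for its Markov model. All function names are hypothetical.

```python
from collections import defaultdict

def cluster_points(points, radius=0.01):
    """Assign each GPS fix (lat, lon) to the first cluster whose
    centre lies within `radius` degrees; otherwise start a new cluster.
    Returns one cluster label per input point."""
    centres = []
    labels = []
    for lat, lon in points:
        for i, (clat, clon) in enumerate(centres):
            if abs(lat - clat) <= radius and abs(lon - clon) <= radius:
                labels.append(i)
                break
        else:
            centres.append((lat, lon))
            labels.append(len(centres) - 1)
    return labels

def transition_model(visits):
    """First-order Markov model: count transitions between
    successive significant locations."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(visits, visits[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, current):
    """Most frequently observed successor of `current`, or None."""
    nxt = counts.get(current)
    return max(nxt, key=nxt.get) if nxt else None
```

For example, after observing the visit sequence home, work, gym, home, work, home, work, gym, the model predicts "gym" as the most likely location after "work".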


IBM Systems Journal | 1996

Human-powered wearable computing

Thad Starner

Batteries add size, weight, and inconvenience to present-day mobile computers. This paper explores the possibility of harnessing the energy expended during the user's everyday actions to generate power for his or her computer, thus eliminating the impediment of batteries. An analysis of power generation through leg motion is presented in depth, and a survey of other methods such as generation by breath or blood pressure, body heat, and finger and limb motion is also presented.
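The flavor of the paper's leg-motion analysis can be conveyed with a back-of-envelope estimate: the mechanical power in walking is roughly the energy lost per heel strike times the step rate. The body mass, heel-drop height, and cadence below are illustrative assumptions, not figures taken from the paper, and only a small fraction of this mechanical power is practically harvestable.

```python
# Rough estimate of mechanical power dissipated in walking.
MASS_KG = 68          # assumed body mass
G = 9.8               # gravitational acceleration, m/s^2
HEEL_DROP_M = 0.05    # assumed vertical fall of the heel per step
STEPS_PER_SEC = 2     # assumed brisk walking cadence

# Potential energy lost per heel strike (J), then power (W).
energy_per_step = MASS_KG * G * HEEL_DROP_M
power_w = energy_per_step * STEPS_PER_SEC
print(f"~{power_w:.0f} W of mechanical power in the gait")
```

With these numbers the estimate lands in the tens of watts, which is why shoe-mounted generators are a recurring theme in human-powered computing.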


International Symposium on Computer Vision | 1995

Real-time American Sign Language recognition from video using hidden Markov models

Thad Starner; Alex Pentland

Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as are found in sign language. We describe a real-time HMM-based system for recognizing sentence-level American Sign Language (ASL) which attains a word accuracy of 99.2% without explicitly modeling the fingers.
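At the core of HMM-based recognition is Viterbi decoding: finding the hidden-state sequence most likely to have produced the observations. A toy discrete-observation sketch in Python follows; the paper's system uses continuous-density HMMs over tracked hand features, and the states, observations, and probabilities below are invented purely for illustration.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state sequence for `obs`
    under a discrete-observation HMM, using log probabilities."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t - 1][p] + math.log(trans_p[p][s]))
            V[t][s] = V[t - 1][prev] + math.log(trans_p[prev][s]) + math.log(emit_p[s][obs[t]])
            back[t][s] = prev
    # Trace the best path backwards from the most likely final state.
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]
```

Given a two-state model ("REST" vs. "SIGN") emitting quantized motion observations, decoding the sequence still, moving, moving, still recovers REST, SIGN, SIGN, REST.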


IEEE Transactions on Pattern Analysis and Machine Intelligence | 1993

Visually controlled graphics

Ali Azarbayejani; Thad Starner; Bradley Horowitz; Alex Pentland

Interactive graphics systems that are driven by visual input are discussed. The underlying computer vision techniques and a theoretical formulation that addresses issues of accuracy, computational efficiency, and compensation for display latency are presented. Experimental results quantitatively compare the accuracy of the visual technique with traditional sensing. An extension to the basic technique to include structure recovery is discussed.


Presence: Teleoperators & Virtual Environments | 1997

Augmented reality through wearable computing

Thad Starner; Steve Mann; Bradley J. Rhodes; Jeffrey Steven Levine; Jennifer Healey; Dana Kirsch; Rosalind W. Picard; Alex Pentland

Wearable computing moves computation from the desktop to the user. We are forming a community of networked, wearable-computer users to explore, over a long period, the augmented realities that these systems can provide. By adapting its behavior to the user's changing environment, a body-worn computer can assist the user more intelligently, consistently, and continuously than a desktop system. A text-based augmented reality, the Remembrance Agent, is presented to illustrate this approach. Video cameras are used both to warp the visual input (mediated reality) and to sense the user's world for graphical overlay. With a camera, the computer could track the user's finger to act as the system's mouse; perform face recognition; and detect passive objects to overlay 2.5D and 3D graphics onto the real world. Additional apparatus such as audio systems, infrared beacons for sensing location, and biosensors for learning about the wearer's affect are described. With the use of input from these interface devices and sensors, a long-term goal of this project is to model the user's actions, anticipate his or her needs, and perform a seamless interaction between the virtual and physical environments.


IEEE Computer | 1999

Wearable devices: new ways to manage information

Mark Billinghurst; Thad Starner

Although the Information Age has many upsides, one of its major downsides is information overload. Indeed, the amount of information easily pushes the limit of what people can manage. This conflict drives research to seek a solution to humanity's information woes. As computers have shrunk from room size to palm size, so they have also moved from being passive accessories, such as laptops and personal digital assistants, to wearable appliances that form an integral part of our personal space. Wearable computers are always on and accessible. As the computer moves from desktop to coat pocket to the human body, its ability to help manage, sort, and filter information will become more intimately connected to our daily lives. The paper discusses the principles of wearable computers.


International Symposium on Wearable Computers | 2002

Learning significant locations and predicting user movement with GPS

Daniel Ashbrook; Thad Starner

Wearable computers have the potential to act as intelligent agents in everyday life and assist the user in a variety of tasks, using context to determine how to act. Location is the most common form of context used by these agents to determine the user's task. However, another potential use of location context is the creation of a predictive model of the user's future movements. We present a system that automatically clusters GPS data taken over an extended period of time into meaningful locations at multiple scales. These locations are then incorporated into a Markov model that can be consulted for use with a variety of applications in both single-user and collaborative scenarios.

Collaboration


Dive into Thad Starner's collaborations.

Top Co-Authors

Alex Pentland | Massachusetts Institute of Technology
Gregory D. Abowd | Georgia Institute of Technology
Scott M. Gilliland | Georgia Institute of Technology
Clint Zeagler | Georgia Institute of Technology
James Clawson | Georgia Institute of Technology
Melody Moore Jackson | Georgia Institute of Technology
Michael Patrick Johnson | Massachusetts Institute of Technology