
Publication


Featured research published by Patrick Chiu.


ubiquitous computing | 2001

A Probabilistic Room Location Service for Wireless Networked Environments

Paul Castro; Patrick Chiu; Ted Kremenek; Richard R. Muntz

The popularity of wireless networks has increased in recent years and is becoming a common addition to LANs. In this paper we investigate a novel use for a wireless network based on the IEEE 802.11 standard: inferring the location of a wireless client from signal quality measures. Similar work has been limited to prototype systems that rely on nearest-neighbor techniques to infer location. In this paper, we describe Nibble, a Wi-Fi location service that uses Bayesian networks to infer the location of a device. We explain the general theory behind the system and how to use the system, along with describing our experiences at a university campus building and at a research lab. We also discuss how probabilistic modeling can be applied to a diverse range of applications that use sensor data.
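The Bayesian inference step the abstract describes can be illustrated with a toy naive-Bayes model over discretized signal-quality readings. Everything below (the rooms, access points, quality bins, and smoothing choice) is invented for illustration and is not Nibble's actual model, which uses richer Bayesian networks.

```python
from collections import defaultdict

# Training samples: (room, {access_point: quality_bin}) pairs — hypothetical data.
training = [
    ("room_a", {"ap1": "high", "ap2": "low"}),
    ("room_a", {"ap1": "high", "ap2": "med"}),
    ("room_b", {"ap1": "low",  "ap2": "high"}),
    ("room_b", {"ap1": "med",  "ap2": "high"}),
]

BINS = ["low", "med", "high"]
rooms = {r for r, _ in training}
counts = defaultdict(int)        # (room, ap, bin) -> observation count
room_counts = defaultdict(int)   # room -> number of training samples
for room, reading in training:
    room_counts[room] += 1
    for ap, q in reading.items():
        counts[(room, ap, q)] += 1

def posterior(reading):
    """Return P(room | reading) under a naive-Bayes model with add-one smoothing."""
    scores = {}
    total = sum(room_counts.values())
    for room in rooms:
        p = room_counts[room] / total  # prior P(room)
        for ap, q in reading.items():
            # Laplace-smoothed likelihood P(quality bin | room, ap)
            p *= (counts[(room, ap, q)] + 1) / (room_counts[room] + len(BINS))
        scores[room] = p
    z = sum(scores.values())
    return {r: s / z for r, s in scores.items()}

probs = posterior({"ap1": "high", "ap2": "low"})
best = max(probs, key=probs.get)  # the most probable room for this reading
```

A reading that matches room_a's training pattern yields a high posterior for room_a, which is the essence of inferring location from signal quality.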


user interface software and technology | 2000

A semi-automatic approach to home video editing

Andreas Girgensohn; John S. Boreczky; Patrick Chiu; John Doherty; Jonathan Foote; Gene Golovchinsky; Shingo Uchihashi; Lynn Wilcox

Hitchcock is a system that allows users to easily create custom videos from raw video shot with a standard video camera. In contrast to other video editing systems, Hitchcock uses automatic analysis to determine the suitability of portions of the raw video. Unsuitable video typically has fast or erratic camera motion. Hitchcock first analyzes video to identify the type and amount of camera motion: fast pan, slow zoom, etc. Based on this analysis, a numerical unsuitability score is computed for each frame of the video. Combined with standard editing rules, this score is used to identify clips for inclusion in the final video and to select their start and end points. To create a custom video, the user drags keyframes corresponding to the desired clips into a storyboard. Users can lengthen or shorten the clip without specifying the start and end frames explicitly. Clip lengths are balanced automatically using a spring-based algorithm.
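The abstract does not detail the spring-based balancing algorithm; one common way to realize the idea, sketched below as our own assumption, is to treat each clip as a spring with a preferred (rest) length and a stiffness, then solve for the lengths that meet a target total duration while stiffer clips deviate less.

```python
def balance_clips(rest_lengths, stiffness, target_total):
    """Minimize sum k_i * (l_i - r_i)^2 subject to sum l_i = target_total.

    The closed-form solution distributes the slack in inverse
    proportion to stiffness: l_i = r_i + slack * (1/k_i) / sum_j (1/k_j).
    """
    slack = target_total - sum(rest_lengths)
    inv = [1.0 / k for k in stiffness]
    total_inv = sum(inv)
    return [r + slack * w / total_inv for r, w in zip(rest_lengths, inv)]

# Three clips preferring 4 s, 6 s, 5 s; the middle one is twice as stiff.
lengths = balance_clips([4.0, 6.0, 5.0], [1.0, 2.0, 1.0], target_total=18.0)
# Lengths sum to 18.0, and the stiffer middle clip stretches least.
```

This mirrors the behavior described: the user adjusts one clip and the others compress or stretch smoothly to keep the storyboard's total duration consistent.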


acm multimedia | 1999

NoteLook: taking notes in meetings with digital video and ink

Patrick Chiu; Ashutosh Kapuskar; Sarah Reitmeier; Lynn Wilcox

NoteLook is a client-server system designed and built to support multimedia note taking in meetings with digital video and ink. It is integrated into a conference room equipped with computer controllable video cameras, video conference camera, and a large display rear video projector. The NoteLook client application runs on wireless pen-based notebook computers. Video channels containing images of the room activity and presentation material are transmitted by the NoteLook servers to the clients, and the images can be interactively and automatically incorporated into the note pages. Users can select channels, snap in large background images and sequences of thumbnails, and write freeform ink notes. A smart video source management component enables the capture of high quality images of the presentation material from a variety of sources. For accessing and browsing the notes and recorded video, NoteLook generates Web pages with links from the images and ink strokes correlated to the video.


international world wide web conferences | 2001

LiteMinutes: an Internet-based system for multimedia meeting minutes

Patrick Chiu; John S. Boreczky; Andreas Girgensohn; Don Kimber

The Internet provides a highly suitable infrastructure for sharing multimedia meeting records, especially as multimedia technologies become more lightweight and workers more mobile. LiteMinutes is a system that uses both the Web and email for creating, revising, distributing, and accessing multimedia information captured in a meeting. Supported media include text notes taken on wireless laptops, slide images captured from presentations, and video recorded by cameras in the room. At the end of a meeting, text notes are sent by the note taking applet to the server, which formats them in HTML with links from each note item to the captured slide images and video recording. Smart link generation is achieved by capturing contextual metadata such as the on/off state of the media equipment and the room location of the laptop, and inferring whether it makes sense to supply media links to a particular note item. Note takers can easily revise meeting minutes after a meeting by modifying the email message sent to them and mailing it back to the server’s email address. We explore design issues concerning preferences for email and Web access of meeting minutes, as well as the different timeframes for access. We also describe the integration with a comic book style video summary and visualization system with text captions for browsing the video recording of a meeting.
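The "smart link generation" idea — using contextual metadata to decide whether a note item should link to a given medium — can be sketched as a simple rule over captured events. The field names, rooms, and rules below are our assumptions for illustration, not the LiteMinutes implementation.

```python
def media_links_for_note(note_time, note_room, events):
    """Decide which media to link to a note item.

    events: list of dicts with 'kind' (e.g. 'slide', 'video'), 'room',
    and 'start'/'end' times in seconds. A link makes sense only if the
    medium was active at the note's timestamp in the note taker's room.
    """
    links = []
    for e in events:
        same_room = e["room"] == note_room
        active = e["start"] <= note_time <= e["end"]
        if same_room and active:
            links.append(e["kind"])
    return links

events = [
    {"kind": "slide", "room": "kumo", "start": 0, "end": 600},
    {"kind": "video", "room": "kumo", "start": 0, "end": 3600},
    {"kind": "slide", "room": "lab2", "start": 0, "end": 600},
]
links = media_links_for_note(note_time=120, note_room="kumo", events=events)
# -> ["slide", "video"]: only media active in the note taker's room are linked.
```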


user interface software and technology | 1998

A dynamic grouping technique for ink and audio notes

Patrick Chiu; Lynn Wilcox

In this paper, we describe a technique for dynamically grouping digital ink and audio to support user interaction in freeform note-taking systems. For ink, groups of strokes might correspond to words, lines, or paragraphs of handwritten text. For audio, groups might be a complete spoken phrase or a speaker turn in a conversation. Ink and audio grouping is important for editing operations such as deleting or moving chunks of ink and audio notes. The grouping technique is based on hierarchical agglomerative clustering. This clustering algorithm yields groups of ink or audio in a range of sizes, depending on the level in the hierarchy, and thus provides structure for simple interactive selection and rapid non-linear expansion of a selection. Ink and audio grouping is also important for marking portions of notes for subsequent browsing and retrieval. Integration of the ink and audio clusters provides a flexible way to browse the notes by selecting the ink cluster and playing the corresponding audio cluster.
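The general technique can be illustrated on one dimension: cluster stroke timestamps by repeatedly merging the closest adjacent groups, so that cutting the resulting hierarchy at larger distance thresholds yields word-, line-, or paragraph-sized groups. This is a minimal single-linkage sketch on invented times, not the paper's implementation.

```python
def agglomerate(times):
    """Single-linkage agglomerative clustering of 1-D times.

    Returns the merge history as (merge_distance, clusters) snapshots,
    from all-singletons down to a single cluster.
    """
    clusters = [[t] for t in sorted(times)]
    history = [(0.0, [c[:] for c in clusters])]
    while len(clusters) > 1:
        # Find the pair of adjacent clusters with the smallest time gap.
        gaps = [(clusters[i + 1][0] - clusters[i][-1], i)
                for i in range(len(clusters) - 1)]
        gap, i = min(gaps)
        clusters[i] = clusters[i] + clusters.pop(i + 1)
        history.append((gap, [c[:] for c in clusters]))
    return history

def groups_at(history, threshold):
    """Cut the hierarchy: keep the clustering after the last merge <= threshold."""
    best = history[0][1]
    for dist, snapshot in history:
        if dist <= threshold:
            best = snapshot
    return best

hist = agglomerate([0.0, 0.1, 0.2, 1.0, 1.1, 3.0])
# A 0.5 s threshold leaves three groups; a 2.0 s threshold merges everything.
```

Selecting at different levels of this hierarchy is what enables the rapid non-linear expansion of a selection described above: one tap selects a small group, and moving up the hierarchy grows the selection to the enclosing group.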


acm conference on hypertext | 2000

Automatically linking multimedia meeting documents by image matching

Patrick Chiu; Jonathan Foote; Andreas Girgensohn; John S. Boreczky

We describe a way to make a hypermedia meeting record from multimedia meeting documents by automatically generating links through image matching. In particular, we look at video recordings and scanned paper handouts of presentation slides with ink annotations. The algorithm that we employ is the Discrete Cosine Transform (DCT). Interactions with multipath links and paper interfaces are discussed.
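The core of DCT-based matching is that two renderings of the same slide agree in their low-frequency DCT coefficients even when pixel-level details (scan noise, ink annotations) differ. The sketch below is a simplification on tiny synthetic "images"; the paper's matching pipeline is more involved.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square grayscale block (list of lists)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = s
    return out

def signature(block, k=3):
    """Keep only the k x k low-frequency corner of the DCT as a feature vector."""
    c = dct2(block)
    return [c[u][v] for u in range(k) for v in range(k)]

def distance(a, b):
    sa, sb = signature(a), signature(b)
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(sa, sb)))

img = [[float(x) for y in range(8)] for x in range(8)]          # vertical gradient
noisy = [[img[x][y] + ((x + y) % 2) for y in range(8)] for x in range(8)]
inverted = [[7.0 - img[x][y] for y in range(8)] for x in range(8)]
d_noisy = distance(img, noisy)        # high-frequency noise barely moves the signature
d_inverted = distance(img, inverted)  # different content moves it a lot
```

Matching a scanned handout page to a video frame then reduces to finding the frame whose signature distance is smallest.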


Lecture Notes in Computer Science | 1999

Meeting Capture in a Media Enriched Conference Room

Patrick Chiu; Ashutosh Kapuskar; Lynn Wilcox; Sarah Reitmeier

We describe a media enriched conference room designed for capturing meetings. Our goal is to do this in a flexible, seamless, and unobtrusive manner in a public conference room that is used for everyday work. Room activity is captured by computer controllable video cameras, video conference cameras, and ceiling microphones. Presentation material displayed on a large screen rear video projector is captured by a smart video source management component that automatically locates the highest fidelity image source. Wireless pen-based notebook computers are used to take notes, which provide indexes to the captured meeting. Images can be interactively and automatically incorporated into the notes. Captured meetings may be browsed on the Web with links to recorded video.


acm multimedia | 2010

FACT: fine-grained cross-media interaction with documents via a portable hybrid paper-laptop interface

Chunyuan Liao; Hao Tang; Qiong Liu; Patrick Chiu; Francine Chen

FACT is an interactive paper system for fine-grained interaction with documents across the boundary between paper and computers. It consists of a small camera-projector unit, a laptop, and ordinary paper documents. With the camera-projector unit pointing to a paper document, the system allows a user to issue pen gestures on the paper document for selecting fine-grained content and applying various digital functions. For example, the user can choose individual words, symbols, figures, and arbitrary regions for keyword search, copy and paste, web search, and remote sharing. FACT thus enables a computer-like user experience on paper. This paper interaction can be integrated with laptop interaction for cross-media manipulations on multiple documents and views. We present the infrastructure, supporting techniques and interaction design, and demonstrate the feasibility via a quantitative experiment. We also propose applications such as document manipulation, map navigation and remote collaboration.


acm multimedia | 2005

MediaMetro: browsing multimedia document collections with a 3D city metaphor

Patrick Chiu; Andreas Girgensohn; Surapong Lertsithichai; Wolfgang Polak; Frank M. Shipman

The MediaMetro application provides an interactive 3D visualization of multimedia document collections using a city metaphor. The directories are mapped to city layouts using algorithms similar to treemaps. Each multimedia document is represented by a building and visual summaries of the different constituent media types are rendered onto the sides of the building. From videos, Manga storyboards with keyframe images are created and shown on the façade; from slides and text, thumbnail images are produced and subsampled for display on the building sides. The images resemble windows on a building and can be selected for media playback. To support more facile navigation between high-altitude overviews and low-altitude detail views, a novel swooping technique was developed that combines altitude and tilt changes with zeroing in on a target.
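The treemap-style mapping from directories to city blocks can be sketched with the basic slice-and-dice scheme: each item gets a rectangle whose area is proportional to its size. This is an illustrative minimal variant (real treemaps alternate split direction per level), and the item names are invented; the paper only says its layouts are "similar to treemaps".

```python
def slice_and_dice(items, x, y, w, h, vertical=True):
    """Lay out items as strips of a rectangle, areas proportional to size.

    items: list of (name, size). Returns a list of (name, (x, y, w, h)).
    """
    total = sum(size for _, size in items)
    rects = []
    offset = 0.0
    for name, size in items:
        frac = size / total
        if vertical:   # split the rectangle into left-to-right strips
            rects.append((name, (x + offset, y, w * frac, h)))
            offset += w * frac
        else:          # split into top-to-bottom strips
            rects.append((name, (x, y + offset, w, h * frac)))
            offset += h * frac
    return rects

layout = slice_and_dice([("docs", 2), ("videos", 6), ("slides", 2)],
                        0.0, 0.0, 100.0, 50.0)
# Strip areas come out 1000 : 3000 : 1000, proportional to sizes 2 : 6 : 2.
```

In the city metaphor, each resulting rectangle becomes a block footprint on which a document's building is placed.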


acm multimedia | 2003

Shared interactive video for teleconferencing

Chunyuan Liao; Qiong Liu; Don Kimber; Patrick Chiu; Jonathan Foote; Lynn Wilcox

We present a system that allows remote and local participants to control devices in a meeting environment using mouse or pen based gestures through video windows. Unlike state-of-the-art device control interfaces that require interaction with text commands, buttons, or other artificial symbols, our approach allows users to interact with devices through live video of the environment. This naturally extends our video supported pan/tilt/zoom (PTZ) camera control system, by allowing gestures in video windows to control not only PTZ cameras, but also other devices visible in video images. For example, an authorized meeting participant can show a presentation on a screen by dragging the file on a personal laptop and dropping it on the video image of the presentation screen. This paper presents the system architecture, implementation tradeoffs, and various meeting control scenarios.

Collaboration


Dive into Patrick Chiu's collaborations.

Top Co-Authors

Qiong Liu
FX Palo Alto Laboratory

Don Kimber
FX Palo Alto Laboratory

Chelhwon Kim
FX Palo Alto Laboratory