
Publications


Featured research published by Michael J. Sinclair.


User Interface Software and Technology | 2000

Sensing techniques for mobile interaction

Ken Hinckley; Jeffrey S. Pierce; Michael J. Sinclair; Eric Horvitz

We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques.
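
As an illustration of the display-orientation technique, here is a minimal sketch, assuming a two-axis accelerometer that reports gravity components in the device frame; the axis convention, hysteresis value, and function name are illustrative assumptions, not the paper's implementation.

```python
import math

PORTRAIT, LANDSCAPE = "portrait", "landscape"
HYSTERESIS_DEG = 15  # dead band so the display does not flap around 45 degrees

def choose_orientation(ax, ay, current):
    """Pick a display mode from two gravity components, with hysteresis."""
    angle = abs(math.degrees(math.atan2(ax, ay)))  # 0 = upright portrait
    if current == PORTRAIT and angle > 45 + HYSTERESIS_DEG:
        return LANDSCAPE
    if current == LANDSCAPE and angle < 45 - HYSTERESIS_DEG:
        return PORTRAIT
    return current

# e.g. choose_orientation(0.9, 0.1, PORTRAIT) -> "landscape"
```

The hysteresis band is the key design point: without it, sensor noise near the 45-degree boundary would make the display flicker between modes.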


Human Factors in Computing Systems | 1999

Touch-sensing input devices

Ken Hinckley; Michael J. Sinclair

We can touch things, and our senses tell us when our hands are touching something. But most computer input devices cannot detect when the user touches or releases the device or some portion of the device. Thus, adding touch sensors to input devices offers many possibilities for novel interaction techniques. We demonstrate the TouchTrackball and the Scrolling TouchMouse, which use unobtrusive capacitance sensors to detect contact from the user's hand without requiring pressure or mechanical actuation of a switch. We further demonstrate how the capabilities of these devices can be matched to an implicit interaction technique, the On-Demand Interface, which uses the passive information captured by touch sensors to fade in or fade out portions of a display depending on what the user is doing; a second technique uses explicit, intentional interaction with touch sensors for enhanced scrolling. We present our new devices in the context of a simple taxonomy of tactile input technologies. Finally, we discuss the properties of touch-sensing as an input channel in general.
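
The On-Demand Interface idea can be sketched as a small fade controller driven by the touch state; the timing constant and class interface below are assumptions for illustration, not the paper's code.

```python
import time

class OnDemandFader:
    """Fade secondary UI in while the device is touched, out on release."""

    def __init__(self, fade_seconds=0.3):
        self.fade_seconds = fade_seconds
        self.opacity = 0.0        # 0 = hidden, 1 = fully shown
        self.start_opacity = 0.0
        self.touched = False
        self.changed_at = time.monotonic()

    def on_touch(self, touched):
        """Called from the touch sensor's contact/release events."""
        self.start_opacity = self.opacity
        self.touched = touched
        self.changed_at = time.monotonic()

    def update(self):
        """Advance the fade; call once per redraw and apply to the widgets."""
        t = min((time.monotonic() - self.changed_at) / self.fade_seconds, 1.0)
        target = 1.0 if self.touched else 0.0
        self.opacity = self.start_opacity + (target - self.start_opacity) * t
        return self.opacity
```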


User Interface Software and Technology | 1998

Interaction and modeling techniques for desktop two-handed input

Ken Hinckley; Mary Czerwinski; Michael J. Sinclair

We describe input devices and two-handed interaction techniques to support map navigation tasks. We discuss several design variations and user testing of two-handed navigation techniques, including puck and stylus input on a Wacom tablet, as well as a novel design incorporating a touchpad (for the nonpreferred hand) and a mouse (for the preferred hand). To support the latter technique, we introduce a new input device, the TouchMouse, which is a standard mouse augmented with a pair of one-bit touch sensors, one for the palm and one for the index finger. Finally, we propose several enhancements to Buxton’s three-state model of graphical input and extend this model to encompass two-handed input transactions as well.
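
Buxton's basic three-state model maps cleanly onto the TouchMouse's two one-bit signals: touch distinguishes out-of-range (state 0) from tracking (state 1), and the button distinguishes tracking from dragging (state 2). A minimal sketch with hypothetical sensor inputs follows; the paper's proposed extensions for two-handed input go beyond this.

```python
from enum import Enum

class State(Enum):
    OUT_OF_RANGE = 0  # no hand contact reported by the palm sensor
    TRACKING = 1      # hand resting on the mouse, button up
    DRAGGING = 2      # hand on the mouse, button held down

def next_state(palm_touched, button_down):
    """Map the two one-bit inputs onto Buxton's three states."""
    if not palm_touched:
        return State.OUT_OF_RANGE
    return State.DRAGGING if button_down else State.TRACKING
```

Without the touch sensor, a standard mouse can never report state 0; the one-bit palm sensor is what makes that state observable.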


User Interface Software and Technology | 1999

The VideoMouse: a camera-based multi-degree-of-freedom input device

Ken Hinckley; Michael J. Sinclair; Erik Hanson; Richard Szeliski; Matthew Conway

The VideoMouse is a mouse that uses a camera as its input sensor. A real-time vision algorithm determines the six degree-of-freedom mouse posture, consisting of 2D motion, tilt in the forward/back and left/right axes, rotation of the mouse about its vertical axis, and some limited height sensing. Thus, a familiar 2D device can be extended for three-dimensional manipulation, while remaining suitable for standard 2D GUI tasks. We describe techniques for mouse functionality, 3D manipulation, navigating large 2D spaces, and using the camera for lightweight scanning tasks.
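
One building block of such a pipeline, estimating 2D translation and rotation about the vertical axis from point features tracked between consecutive camera frames, can be sketched as a least-squares rigid fit. This is a standard technique shown here for illustration, not the paper's full six-degree-of-freedom algorithm.

```python
import numpy as np

def rigid_motion_2d(prev_pts, curr_pts):
    """Fit a rotation angle (radians) and translation mapping prev -> curr.

    Both inputs are (N, 2) arrays of matched feature positions.
    """
    prev_pts = np.asarray(prev_pts, dtype=float)
    curr_pts = np.asarray(curr_pts, dtype=float)
    p0 = prev_pts - prev_pts.mean(axis=0)   # center both point sets
    p1 = curr_pts - curr_pts.mean(axis=0)
    # Procrustes: optimal angle from the cross/dot correlations.
    angle = np.arctan2((p0[:, 0] * p1[:, 1] - p0[:, 1] * p1[:, 0]).sum(),
                       (p0 * p1).sum())
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = curr_pts.mean(axis=0) - R @ prev_pts.mean(axis=0)
    return angle, t
```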


ACM Transactions on Computer-Human Interaction | 2005

Foreground and background interaction with sensor-enhanced mobile devices

Ken Hinckley; Jeffrey S. Pierce; Eric Horvitz; Michael J. Sinclair

Building on Buxton's foreground/background model, we discuss the importance of explicitly considering both foreground interaction and background interaction, as well as transitions between foreground and background, in the design and implementation of sensing techniques for sensor-enhanced mobile devices. Our view is that the foreground concerns deliberate user activity where the user is attending to the device, while the background is the realm of inattention or split attention, using naturally occurring user activity as an input that allows the device to infer or anticipate user needs. The five questions for sensing systems of Bellotti et al. [2002], proposed as a framework for this special issue, primarily address the foreground but neglect critical issues with background sensing. To support our perspective, we discuss a variety of foreground and background sensing techniques that we have implemented for sensor-enhanced mobile devices, such as powering on the device when the user picks it up, sensing when the user is holding the device to his ear, automatically switching between portrait and landscape display orientations depending on how the user is holding the device, and scrolling the display using tilt. We also contribute system architecture issues, such as using the foreground/background model to handle cross-talk between multiple sensor-based interaction techniques, and theoretical perspectives, such as a classification of recognition errors based on explicitly considering transitions between the foreground and background. Based on our experiences, we propose design issues and lessons learned for foreground/background sensing systems.
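
A minimal sketch of one way to handle cross-talk under this model: explicit foreground events always act, and they briefly suppress background inferences that follow them. The event representation and hold-off value are assumptions, not the paper's architecture.

```python
import time

FOREGROUND_HOLDOFF_S = 1.0  # assumed suppression window after deliberate input

class SensorArbiter:
    def __init__(self):
        self.last_foreground = 0.0

    def handle(self, event, is_foreground):
        now = time.monotonic()
        if is_foreground:
            self.last_foreground = now
            return f"execute {event}"   # deliberate action: always act
        if now - self.last_foreground < FOREGROUND_HOLDOFF_S:
            return f"suppress {event}"  # likely cross-talk from the action
        return f"infer {event}"         # background: act on the inference
```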


User Interface Software and Technology | 2006

Soap: a pointing device that works in mid-air

Patrick Baudisch; Michael J. Sinclair; Andrew D. Wilson

Soap is a pointing device based on hardware found in a mouse, yet it works in mid-air. Soap consists of an optical sensor device moving freely inside a hull made of fabric. As the user applies pressure from the outside, the optical sensor moves independently of the hull. The optical sensor perceives this relative motion and reports it as position input. Soap offers many of the benefits of optical mice, such as high-accuracy sensing. We describe the design of a soap prototype and report our experiences with four application scenarios, including a wall display, Windows Media Center, slide presentation, and interactive video games.
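
Because soap reports ordinary optical-sensor deltas, turning them into pointer coordinates looks just like a mouse driver's integration step; the gain, screen size, and function below are illustrative assumptions.

```python
SCREEN_W, SCREEN_H = 1920, 1080  # assumed display size
GAIN = 2.5                       # pointer speed multiplier

def integrate(cursor, dx, dy):
    """Accumulate one (dx, dy) motion report into clamped screen coords."""
    x = min(max(cursor[0] + GAIN * dx, 0), SCREEN_W - 1)
    y = min(max(cursor[1] + GAIN * dy, 0), SCREEN_H - 1)
    return (x, y)
```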


IEEE Automatic Speech Recognition and Understanding Workshop | 2003

Air- and bone-conductive integrated microphones for robust speech detection and enhancement

Yanli Zheng; Zicheng Liu; Zhengyou Zhang; Michael J. Sinclair; Jasha Droppo; Li Deng; Alex Acero; Xuedong Huang

We present a novel hardware device that combines a regular microphone with a bone-conductive microphone. The device looks like a regular headset and can be plugged into any machine with a USB port. The bone-conductive microphone has an interesting property: it is insensitive to ambient noise and captures the low-frequency portion of the speech signal. Thanks to the signals from the bone-conductive microphone, we are able to detect very robustly whether the speaker is talking, eliminating more than 90% of background speech. Furthermore, by combining both channels, we are able to significantly reduce background speech even when the background speaker speaks at the same time as the speaker wearing the headset.
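
A minimal sketch of the detection idea, exploiting the bone channel's immunity to ambient noise via a per-frame energy threshold; the frame size, sample rate, and threshold are illustrative assumptions rather than the paper's detector.

```python
import numpy as np

FRAME = 320             # 20 ms frames at an assumed 16 kHz sample rate
ENERGY_THRESHOLD = 1e-4  # assumed; would be calibrated per device

def wearer_is_talking(bone_samples):
    """Return one boolean per 20 ms frame of the bone-mic signal."""
    n = len(bone_samples) // FRAME
    frames = np.reshape(bone_samples[:n * FRAME], (n, FRAME))
    energy = (frames.astype(np.float64) ** 2).mean(axis=1)
    return energy > ENERGY_THRESHOLD
```

Background talkers barely register on the bone channel, so frames that pass the threshold can be attributed to the headset's wearer.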


Wearable and Implantable Body Sensor Networks | 2006

A phone-centered body sensor network platform: cost, energy efficiency & user interface

Lin Zhong; Michael J. Sinclair; Ray A. Bittner

We have designed a Bluetooth-based body sensor network platform for physiological diary applications and have addressed its challenges in cost, energy efficiency, and user interface. In our platform, an Internet-capable phone serves as the center and manages every network member. We designed a Bluetooth sensor node that lets general sensing devices join the network without much alteration. Since Bluetooth imposes a large power overhead, we have taken extreme care to minimize its duty cycle. We also incorporated a wrist-worn device as the user interface. It displays information, under the instruction of the phone, in an ambient fashion, and enables the user to interact with the network conveniently. By leveraging resources on the phone, we are able to minimize the cost and energy consumption of the sensor nodes and the wrist-worn device.
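
The duty-cycling idea can be sketched as a loop that keeps the radio idle almost all of the time and wakes briefly to pull buffered readings in one burst; the radio and sensor interfaces here are hypothetical placeholders, not the platform's API.

```python
import time

WAKE_PERIOD_S = 60.0  # assumed: one short connection per minute

def store(samples):
    """Stub for persisting readings into the physiological diary."""
    print(f"stored {len(samples)} samples")

def diary_loop(radio, sensor_node):
    """radio and sensor_node are hypothetical interfaces, not a real API."""
    while True:
        radio.connect(sensor_node)       # brief, high-power radio window
        samples = radio.read_buffered()  # the node buffered data while idle
        radio.disconnect()               # back to a near-zero duty cycle
        store(samples)
        time.sleep(WAKE_PERIOD_S)
```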


IEEE/ASME Journal of Microelectromechanical Systems | 2003

Power delivery and locomotion of untethered microactuators

Bruce Randall Donald; Christopher G. Levey; Craig D. McGray; Daniela Rus; Michael J. Sinclair

The ability for a device to locomote freely on a surface requires the ability to deliver power in a way that does not restrain the device's motion. This paper presents a MEMS actuator that operates free of any physically restraining tethers. We show how a capacitive coupling can be used to deliver power to untethered MEMS devices, independently of the position and orientation of those devices. Then, we provide a simple mechanical release process for detaching these MEMS devices from the fabrication substrate once chemical processing is complete. To produce these untethered microactuators in a batch-compatible manner while leveraging existing MEMS infrastructure, we have devised a novel postprocessing sequence for a standard MEMS multiproject wafer process. Through the use of this sequence, we show how to add, post hoc, a layer of dielectric between two previously deposited polysilicon films. We have demonstrated the effectiveness of these techniques through the successful fabrication and operation of untethered scratch drive actuators. Locomotion of these actuators is controlled by frequency modulation, and the devices achieve maximum speeds of over 1.5 mm/s.
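
The frequency-modulation control follows from a simple relation: a scratch drive actuator advances a roughly fixed step per drive pulse, so velocity scales with pulse frequency. A worked example with an assumed step size (the actual step depends on the device geometry):

```python
STEP_NM = 15  # assumed step per pulse in nanometres (device-dependent)

def speed_mm_per_s(pulse_hz):
    """Velocity of a scratch drive stepping STEP_NM once per drive pulse."""
    return pulse_hz * STEP_NM * 1e-6  # nm per second -> mm per second

# At a 100 kHz pulse rate: 100000 * 15 nm = 1.5 mm/s, the scale of the
# maximum speed reported above.
print(speed_mm_per_s(100_000))  # 1.5
```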


International Conference on Acoustics, Speech, and Signal Processing | 2004

Multi-sensory microphones for robust speech detection, enhancement and recognition

Zhengyou Zhang; Zicheng Liu; Michael J. Sinclair; Alex Acero; Li Deng; Jasha Droppo; Xuedong Huang; Yanli Zheng

In this paper, we present new hardware prototypes that integrate several heterogeneous sensors into a single headset, and we describe the underlying DSP techniques for robust speech detection, enhancement, and recognition in highly non-stationary noisy environments. We also speculate on other business uses for this type of device.
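
One plausible fusion of the two channels, shown purely for illustration: trust the noise-immune bone mic at low frequencies and the air mic above a crossover. The crossover value and this spectral splice are assumptions, not the DSP pipeline described in the paper.

```python
import numpy as np

FS = 16000       # assumed sample rate (Hz)
CROSSOVER = 800  # assumed Hz below which the bone channel is trusted

def fuse(air, bone):
    """Crude spectral splice of equal-length air and bone mic signals."""
    A, B = np.fft.rfft(air), np.fft.rfft(bone)
    freqs = np.fft.rfftfreq(len(air), d=1 / FS)
    fused = np.where(freqs < CROSSOVER, B, A)  # bone below, air above
    return np.fft.irfft(fused, n=len(air))
```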
