Ryan E. Janzen
University of Toronto
Publications
Featured research published by Ryan E. Janzen.
ACM Multimedia | 2011
Steve Mann; Jason Huang; Ryan E. Janzen; Raymond Chun Hing Lo; Valmiki Rampersad; Alexander Chen; Taqveer Doha
We present a wayfinding system that uses a range camera and an array of vibrotactile elements built into a helmet. The range camera is a Kinect 3D sensor from Microsoft that is meant to be kept stationary and used to watch the user (i.e., to detect the person's gestures). Rather than using the camera to look at the user, we reverse the situation by putting the Kinect range camera on a helmet worn by the user. In our case, the Kinect is in motion rather than stationary. Whereas stationary cameras have previously been used for gesture recognition, which the Kinect does very well, in our new modality we take advantage of the Kinect's resilience against rapidly changing background scenery, since the background in our case is now in motion (a conventional wearable camera would be presented with a constantly changing background that is difficult to manage by mere background subtraction). The goal of our project is collision avoidance for blind or visually impaired individuals, for workers in harsh environments such as industrial settings with significant 3-dimensional obstacles, and for use in low-light environments.
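A minimal sketch of the kind of depth-to-vibration mapping such a system needs (this is not the authors' code; the zone layout, depth range, and actuator count are illustrative assumptions):

```python
# Sketch: map a helmet-mounted depth frame to per-actuator vibration levels.
# Zone layout, depth range, and actuator count are assumptions.
import numpy as np

N_ACTUATORS = 8           # assumed: one vibrotactile element per horizontal zone
NEAR_M, FAR_M = 0.5, 4.0  # assumed collision-relevant depth range (metres)

def depth_to_vibration(depth_m: np.ndarray) -> np.ndarray:
    """depth_m: HxW depth image in metres (0 = no reading).
    Returns per-actuator vibration intensities in [0, 1]."""
    h, w = depth_m.shape
    intensities = np.zeros(N_ACTUATORS)
    for i in range(N_ACTUATORS):
        zone = depth_m[:, i * w // N_ACTUATORS:(i + 1) * w // N_ACTUATORS]
        valid = zone[zone > 0]        # ignore pixels with no depth reading
        if valid.size == 0:
            continue
        nearest = valid.min()         # closest obstacle in this zone
        # Closer obstacles produce stronger vibration, clamped to [0, 1].
        intensities[i] = np.clip((FAR_M - nearest) / (FAR_M - NEAR_M), 0.0, 1.0)
    return intensities
```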
ACM Multimedia | 2006
Steve Mann; Ryan E. Janzen; Mark Post
We present a musical keyboard that is not only velocity-sensitive, but in fact responds to absement (presement), displacement (placement), velocity, acceleration, jerk, jounce, etc. (i.e., to all the derivatives, as well as the integral, of displacement). Moreover, unlike a piano keyboard in which the keys reach a point of maximal displacement, our keys are essentially infinite in length, and thus never reach an end to their key travel. Our infinite-length keys are achieved by using water jet streams that continue to flow past the fingers of a person playing the instrument. The instrument takes the form of a pipe with a row of holes, in which water flows out of each hole, while a user is invited to play the instrument by interfering with the flow of water coming out of the holes. The instrument resembles a large flute, but, unlike a flute, there is no complicated fingering pattern. Instead, each hole (each water jet) corresponds to one note (as with a piano or pipe organ). Therefore, unlike a flute, chords can be played by blocking more than one water jet hole at the same time. Because each note corresponds to only one hole, different fingers of the musician can be inserted into, onto, around, or near several of the instrument's many water jet holes, in a variety of different ways, resulting in an ability to independently control the way in which each note in a chord sounds. Thus the hydraulophone combines the intricate embouchure control of woodwind instruments with the polyphony of keyboard instruments. Various forms of our instrument include totally acoustic, totally electronic, as well as hybrid instruments that are acoustic but also include an interface to a multimedia computer to produce a mixture of sounds produced by the acoustic properties of water screeching through orifice plates, as well as synthesized sounds.
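To make the integral/derivative hierarchy concrete, here is a minimal numerical sketch (not the instrument's actual signal chain; the sample rate is an assumption):

```python
# Given a sampled displacement signal, compute its running integral
# (absement) and successive derivatives (velocity, acceleration, jerk).
import numpy as np

FS = 1000.0        # assumed sample rate in Hz
DT = 1.0 / FS

def displacement_kinematics(x: np.ndarray) -> dict:
    """x: 1-D array of key (water-jet obstruction) displacement samples."""
    return {
        "absement":     np.cumsum(x) * DT,   # running time-integral of displacement
        "displacement": x,
        "velocity":     np.gradient(x, DT),  # first derivative
        "acceleration": np.gradient(np.gradient(x, DT), DT),
        "jerk":         np.gradient(np.gradient(np.gradient(x, DT), DT), DT),
    }
```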
ACM Multimedia | 2011
Steve Mann; Ryan E. Janzen; Jason Huang
We propose a water-based multitouch multimedia user interface based on total internal reflection as viewed by an underwater camera. The underwater camera is arranged so that nothing above the water surface is visible until a user touches the water, at which time anything that penetrates the water's surface becomes clearly visible. Our contribution is twofold: (1) computer vision using underwater cameras aided by total internal reflection; (2) hyperacoustic signal processing (frequency shifting) to capture the natural, acoustically-originating sounds of water rather than using synthetic sounds. Using water itself as a touch screen creates a fun and playful user interface medium that captures the fluidity of the water's ebb and flow. In one application, a musical instrument is created in which acoustic disturbances in the water (received by underwater microphones, or hydrophones) are frequency-shifted to musical notes corresponding to the location in which the water is touched, as determined by the underwater computer vision system.
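Frequency shifting of this kind can be sketched with single-sideband modulation via the analytic (Hilbert) signal; the mapping from touch location to shift amount is an assumption, as the abstract does not give exact values:

```python
# Shift a real hydrophone signal up by shift_hz so the natural water sound
# lands on a musical pitch (single-sideband frequency shifting).
import numpy as np
from scipy.signal import hilbert

def frequency_shift(x: np.ndarray, shift_hz: float, fs: float) -> np.ndarray:
    """x: real signal, fs: sample rate. Returns x shifted up by shift_hz."""
    analytic = hilbert(x)                  # complex analytic signal of x
    t = np.arange(len(x)) / fs
    return np.real(analytic * np.exp(2j * np.pi * shift_hz * t))
```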
ACM Multimedia | 2007
Ryan E. Janzen; Steve Mann
The hydraulophone is a fun-to-play, self-cleaning keyboard instrument in which each key is a water jet. Many hydraulophones are already equipped with an array of underwater microphones (hydrophones) to pick up the turbulent sound from water inside the musical sounding mechanisms under each water jet. Accordingly, we propose to make greater use of the sound of the water flow, extracting more detailed information about flow and the obstruction of flow from sound alone. Beyond musical instruments, if further developed, this framework could have extensive applications in flow sensing for fuel lines in vehicles and for fresh-water lines in buildings.
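One simple way to turn sound into a flow estimate, sketched below, is to track band-limited turbulence energy; this is a hypothetical illustration, not the paper's estimator, and the band edges and calibration are assumptions:

```python
# Hypothetical flow proxy: RMS energy of the hydrophone signal in a
# turbulence-dominated band, to be calibrated against known flow rates.
import numpy as np
from scipy.signal import butter, sosfilt

def flow_proxy(x: np.ndarray, fs: float, band=(500.0, 4000.0)) -> float:
    """x: hydrophone samples, fs: sample rate. Returns a unitless proxy."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    y = sosfilt(sos, x)                      # isolate the turbulence band
    return float(np.sqrt(np.mean(y ** 2)))   # RMS energy in that band
```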
Canadian Conference on Electrical and Computer Engineering | 2014
Steve Mann; Ryan E. Janzen; Tao Ai; Seyed Nima Yasrebi; Jad Kawwa; Mir Adnan Ali
Toposculpting is the creation of virtual objects by moving real physical objects through space to extrude patterns such as beams and pipes. In one example, a method of making pipe sculptures is proposed, using a ring-shaped object moved through space. In particular, computational lightpainting is proposed as a new form of data entry, 3D object creation, or user interface. When combined with wearable computational photography, especially by way of a true 3D time-of-flight camera system such as the Meta Spaceglasses (extramissive spatial imaging glass manufactured by Meta-View), real physical objects are manipulated during an actual or simulated long-exposure 3D photography process.
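The pipe-extrusion idea can be sketched as sweeping a tracked ring through space, emitting one ring of vertices per tracked pose (the pose format, radius, and ring resolution below are illustrative assumptions, not the authors' pipeline):

```python
# Sweep a circular cross-section along tracked poses to build a pipe surface.
import numpy as np

def extrude_pipe(poses, radius=0.1, segments=16):
    """poses: list of (center 3-vector, 3x3 rotation matrix) for the tracked
    ring at each frame. Returns an (n_poses, segments, 3) vertex array."""
    theta = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    circle = np.stack([radius * np.cos(theta),
                       radius * np.sin(theta),
                       np.zeros_like(theta)], axis=1)  # ring in local frame
    rings = [c + circle @ R.T for c, R in poses]       # place ring at each pose
    return np.stack(rings)  # consecutive rings can be triangulated into a mesh
```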
Canadian Conference on Electrical and Computer Engineering | 2014
Ryan E. Janzen; Steve Mann
The word “surveillance” comes from the French word “veillance”, meaning “watching”, and the French prefix “sur”, meaning “from above”. Thus “surveillance” means “to watch from above” (e.g., guards watching over prisoners, or police watching over a city through a city-wide surveillance camera network). The closest purely English word is “oversight”. A more recent phenomenon, sousveillance (“undersight”), refers to the less hierarchical and more rhizomic veillance of social networking, distributed cloud-based computing, and body-worn technologies. Sousveillance forms a reciprocal power balance with surveillance, both being understood in the context of not just technology, but also complex human social and political relationships. In this paper we derive a precise theoretical and mathematical framework to understand, interpret, quantify, and classify “veillance” (“watching”) as to its directionality (i.e., surveillance versus sousveillance). While veillance can occur in a variety of sensory modalities, such as auditory sur/sousveillance, dataveillance, etc., we focus especially on optical (visual) veillance. We define new physical concepts: the veillon, the vixel, and the veillance vector field, to provide insight into the measurement and demarcation of surveillance and sousveillance and their interplay.
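The flavor of such a framework can be sketched with a worked equation (the notation here is our illustrative assumption, not necessarily the paper's): the veillance flux through a surface is the surface integral of the veillance vector field, and an accumulated exposure is its time integral.

```latex
% Illustrative notation: veillance vector field \vec{V}, surface S with
% unit normal \hat{n}, flux \Phi_V, and accumulated exposure D over time.
\Phi_V = \iint_S \vec{V}(\vec{x}) \cdot \hat{n}\,\mathrm{d}A,
\qquad
D = \int_{t_0}^{t_1} \Phi_V(t)\,\mathrm{d}t
```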
Canadian Conference on Electrical and Computer Engineering | 2012
Ryan E. Janzen; Steve Mann
High Dynamic Range (HDR) compositing is well established in the field of image processing, where a sequence of differently-exposed images of the same scene is combined to overcome the limited dynamic range of ordinary cameras. We extend this technique to audio. Rather than acquiring samples separated by time or space, as is done in HDR image processing, we propose to perform simultaneous sampling of the same input signal, using differently-gained versions of the same HDR signal fed into separate analog-to-digital converters (ADCs). An HDR audio signal is thus sampled by merging a set of low dynamic range (LDR) samplings of the original HDR input signal. We optimize the choice of LDR input gains to achieve as high a dynamic range as possible for a desired sampling accuracy.
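A minimal sketch of the merging step, assuming two simultaneously sampled channels (the gain ratio and clip threshold are illustrative assumptions, and the paper's actual merge is optimized rather than this simple fallback rule):

```python
# Merge two simultaneously sampled, differently-gained ADC channels:
# use the high-gain channel where it is not clipped, otherwise fall back
# to the low-gain channel, referring everything to the low-gain scale.
import numpy as np

def hdr_merge(low: np.ndarray, high: np.ndarray,
              gain_ratio: float = 16.0, clip: float = 0.99) -> np.ndarray:
    """low, high: the same signal through low/high analog gain, each
    normalized to [-1, 1]. Returns the merged HDR estimate."""
    high_est = high / gain_ratio       # refer high-gain samples to low-gain scale
    clipped = np.abs(high) >= clip     # where the high-gain ADC saturated
    return np.where(clipped, low, high_est)
```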
Tangible and Embedded Interaction | 2011
Steve Mann; Ryan E. Janzen; Jason Huang; Matthew Kelly; Lei Jimmy Ba; Alexander Chen
Water hammer, a well-known phenomenon occurring in water pipes and plumbing fixtures, is generally considered destructive and undesirable. We propose the use of water hammer for a musical instrument akin to hammered percussion instruments like the hammered dulcimer, piano, etc. In one embodiment, the instrument comprises an array of mouths, each struck with the open palm or fingers, and each connected to a separate hydraulic resonator. In another embodiment, we use a basin or pool of water as a multitouch user interface where sounds made by the water are acoustically sensed by an array of hydrophones (underwater listening devices). Using water itself as a touch surface creates a fun and playful user interface medium that captures the fluidity of the water's ebb and flow.
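One hypothetical way to locate a strike from the hydrophone array alone (the abstract does not specify a localization method) is time-difference-of-arrival via cross-correlation, sketched for a pair of hydrophones:

```python
# Estimate the relative arrival delay of an impact sound between two
# hydrophones; several pairwise delays can then be triangulated.
import numpy as np

def tdoa_seconds(h1: np.ndarray, h2: np.ndarray, fs: float) -> float:
    """h1, h2: synchronized hydrophone recordings, fs: sample rate.
    Returns the delay of h1 relative to h2 (positive: sound hit h2 first)."""
    corr = np.correlate(h1, h2, mode="full")
    lag = np.argmax(corr) - (len(h2) - 1)  # lag in samples at the peak
    return lag / fs
```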
IEEE Games Media Entertainment | 2014
Ryan E. Janzen; Steve Mann
We present a system that measures veillance flux, the ability of a camera to see, as the flux propagates through space. By wearing a veillance-integrating device, individuals are given an accumulated dosage readout based on their presence or absence in the veillance field of various cameras. The dose readout can be used as a score for players in an image-capture gaming environment, and as a means to assess camera coverage in security and cinematography applications. This veillance dosimeter detects radiation of information-bearing optical sensitivity, as opposed to radiation of light energy in the opposite direction.
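The dose-accumulation logic can be sketched in a few lines (a minimal illustration under assumed inputs; how flux and visibility are actually measured is the substance of the paper and is not reproduced here):

```python
# Accumulate veillance dose: integrate flux over time whenever the wearer
# is inside a camera's field of view. Inputs are assumed pre-computed.
from typing import Iterable, Tuple

def accumulate_dose(samples: Iterable[Tuple[bool, float]], dt: float) -> float:
    """samples: (in_view, flux) readings taken every dt seconds.
    Returns accumulated dose (flux integrated over exposed time)."""
    return sum(flux * dt for in_view, flux in samples if in_view)
```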
Tangible and Embedded Interaction | 2011
Steve Mann; Ryan E. Janzen; Tom Hobson
We propose the use of multiple sensors of different sensitivity that simultaneously sense the same signal. Outputs of these sensors are then combined in a way that allows the simultaneous sensing of large-signal and small-signal phenomena. This sensing methodology is applied to the andantephone, a musical instrument that allows a player to physically step through the notes of a song as if walking along the song's timeline. When you stop walking, the music stops; if you walk faster, the music plays faster. A new, more expressive design of andantephone was created using a wideband complementary set of geophones to detect seismic waves transmitted from human footsteps. Each tile in the andantephone has one or more high-frequency piezoelectric geophones that respond to small signals, as well as one or more low-frequency carbon geophones that respond to large signals. These sensors are connected to a real-time frequency-shifting system that shifts each geophone's output to the correct musical pitch or chord for a particular note in a song. The proposed HDR sensing principle may be applied to many different sensing scenarios.
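A minimal sketch of one way to combine the complementary sensors (not the authors' circuit; the crossover level and soft crossfade are illustrative assumptions):

```python
# Crossfade between a piezo geophone (faithful for small signals) and a
# carbon geophone (faithful for large signals) based on signal amplitude.
import numpy as np

def combine_geophones(piezo: np.ndarray, carbon: np.ndarray,
                      small_full_scale: float = 0.1) -> np.ndarray:
    """Both inputs assumed calibrated to the same physical units.
    Weights shift toward the large-signal sensor as amplitude grows."""
    w = np.clip(np.abs(piezo) / small_full_scale, 0.0, 1.0)
    return (1.0 - w) * piezo + w * carbon
```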