James L. Crowley
University of Grenoble
Publications
Featured research published by James L. Crowley.
Proceedings of the IFIP TC2/WG2.7 Working Conference on Engineering for Human-Computer Interaction | 1995
James L. Crowley; Joëlle Coutaz
Computer vision provides a powerful tool for the interaction between man and machine. The barrier between physical objects (paper, pencils, calculators) and their electronic counterparts limits both the integration of computing into human tasks and the population willing to adapt to the required input devices. Computer vision, coupled with video projection using low-cost devices, makes it possible for a human to use any convenient object, including a finger, as a digital input device. In such an “augmented reality”, information is projected onto ordinary objects and acquired by watching the way the objects are manipulated. In the first part of this paper we describe experiments with techniques for watching the hands and recognizing gestures.
Proceedings of the IFIP TC2/TC13 WG2.7/WG13.4 Seventh Working Conference on Engineering for Human-Computer Interaction | 1998
Joëlle Coutaz; François Bérard; Eric Carraux; James L. Crowley
Mediaspaces have been designed to facilitate informal communication and support group awareness while assuring privacy protection. However, low-bandwidth communication is a source of undesirable discontinuities in such systems, resulting in a loss of peripheral awareness. In addition, privacy is often implemented as an accessibility matrix coupled to an all-or-nothing exposure of personal state. In this article, we describe CoMedi, a mediaspace prototype that addresses the problems of discontinuity and privacy in an original way: computer vision and speech recognition are used in conjunction to minimize visual discontinuities while supporting free movement in a room. Publication filters maintain privacy at the desired level of transparency.
Ubiquitous Computing | 2014
Joëlle Coutaz; Alexandre Demeure; Sybille Caffiau; James L. Crowley
This paper presents early lessons from the development of SPOK, an End-User Development Environment for smart homes. SPOK (Simple PrOgramming Kit) uses a pseudo-natural language as an end-user programming language and runs on top of an extension of OSGi/iPOJO to support the dynamic and resilient management of web services and devices across a variety of protocols, including EnOcean, UPnP, and Watteco. The motivation for SPOK is to give power back to end-users so that they can shape their own smart home at will. This paper reports lessons learned from the methods we have used to validate our hypotheses, as well as a number of technical issues concerning the development of this type of EUDE. A video of SPOK in action as of October 2013 is available at: http://iihm.imag.fr/demos/appsgate/appsgate2013.mp4
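The pseudo-natural-language idea behind SPOK can be illustrated with a toy parser that turns "when <event> then <action>" sentences into trigger/action rules. The grammar, the rule format, and the function name are invented for illustration; SPOK's actual language and runtime differ.

```python
import re

def parse_rule(sentence):
    """Turn a 'when <event> then <action>' sentence into a rule dict.
    Illustrative sketch only; not SPOK's actual grammar."""
    m = re.match(r"when (.+?) then (.+)", sentence.strip(), re.IGNORECASE)
    if not m:
        raise ValueError("expected 'when <event> then <action>'")
    return {"trigger": m.group(1), "action": m.group(2)}

rule = parse_rule("when the front door opens then turn on the hall light")
print(rule["trigger"])  # the front door opens
print(rule["action"])   # turn on the hall light
```

A real end-user development environment would add device name resolution and feedback when a sentence is ambiguous, which is part of what makes such systems hard to build.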
Contexts | 2017
Julien Cumin; Grégoire Lefebvre; Fano Ramparany; James L. Crowley
This paper describes the results of experiments where information about places is used in the recognition of activities in the home. We explore the use of place-specific activity recognition trained with supervised learning, coupled with a decision fusion step, for recognition of activities in the Opportunity dataset. Our experiments show that using place information to control recognition can substantially reduce both the error rate and the computational cost of activity recognition compared to classical approaches where all sensors are used and all activities are possible. The use of place information for controlling recognition gives an F1 classification score of 92.70% ± 1.26%, requiring on average only 73 ms of computing time per instance of activity. These experiments demonstrate that organizing activity recognition with place-based context models can provide a scalable approach for building context-aware services based on activity recognition in smart home environments.
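The decision fusion step described above can be sketched as follows: each place-specific classifier scores the candidate activities, and a fused decision weights those scores by how likely each place is. The function name, score format, and weighting scheme are illustrative assumptions, not the paper's actual pipeline.

```python
from collections import defaultdict

def fuse_place_decisions(place_scores, place_prior):
    """Weight each place-specific classifier's activity scores by the
    probability of that place, then return the best fused activity.
    Illustrative sketch of decision fusion, not the paper's method."""
    fused = defaultdict(float)
    for place, scores in place_scores.items():
        for activity, score in scores.items():
            fused[activity] += place_prior.get(place, 0.0) * score
    return max(fused, key=fused.get)

# Two hypothetical place-specific classifiers vote on the activity.
scores = {
    "kitchen": {"cooking": 0.8, "cleaning": 0.2},
    "living":  {"watching_tv": 0.9, "cleaning": 0.1},
}
prior = {"kitchen": 0.7, "living": 0.3}
print(fuse_place_decisions(scores, prior))  # cooking
```

Restricting each classifier to the activities plausible in its place is what cuts both the error rate and the per-instance computation, since fewer sensors and fewer classes are considered at once.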
International Conference on 3D Imaging | 2016
Grégoire Nieto; Frédéric Devernay; James L. Crowley
Multi-view image-based rendering consists of generating a novel view of a scene from a set of source views. In general, this works by first computing a coarse 3D reconstruction of the scene, then using this reconstruction to establish correspondences between source and target views, and finally blending the warped views to obtain the final image. Unfortunately, discontinuities in the blending weights, due to scene geometry or camera placement, result in artifacts in the target view. In this paper, we show how to avoid these artifacts by imposing additional constraints on the image gradients of the novel view. We propose a variational framework in which an energy functional is derived and optimized by iteratively solving a linear system. We demonstrate this method on several structured and unstructured multi-view datasets, and show that it numerically outperforms state-of-the-art methods and eliminates artifacts that result from visibility discontinuities.
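The core idea of constraining image gradients can be sketched in one dimension: stay close to the blended colors while penalizing deviation of the output's gradients from a target gradient field, and solve the resulting linear system. The energy below is a minimal stand-in, not the paper's actual functional, and a dense solve replaces their iterative scheme.

```python
import numpy as np

def gradient_constrained_blend(blended, target_grad, lam=10.0):
    """Minimize ||x - blended||^2 + lam * ||D x - target_grad||^2,
    with D the 1D forward-difference operator, by solving the normal
    equations (I + lam * D^T D) x = blended + lam * D^T target_grad.
    1D sketch of gradient-domain blending, not the paper's energy."""
    n = len(blended)
    D = np.diff(np.eye(n), axis=0)        # (n-1) x n forward differences
    A = np.eye(n) + lam * D.T @ D
    b = blended + lam * D.T @ target_grad
    return np.linalg.solve(A, b)

# A step in the blending weights leaves a seam; asking for zero
# gradient across the seam smooths it away.
blended = np.array([1.0, 1.0, 2.0, 2.0])
smooth = gradient_constrained_blend(blended, np.zeros(3))
```

Note that the system matrix is symmetric positive definite, so in 2D the analogous (much larger, sparse) system is well suited to iterative solvers, consistent with the iterative optimization mentioned in the abstract.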
International Conference on Image Analysis and Recognition | 2014
Evanthia Mavridou; James L. Crowley; Augustin Lux
We propose a new local multiscale image descriptor of variable size. The descriptor combines Laplacian of Gaussian values at different scales with a radial Fourier transform. This descriptor provides a compact description of the appearance of a local neighborhood in a manner that is robust to changes in scale and orientation. We evaluate this descriptor by measuring repeatability and recall versus 1-precision on the Affine Covariant Features benchmark dataset, as well as on a set of textureless images from the MIRFLICKR Retrieval Evaluation dataset. Experiments reveal performance competitive with the state of the art, while providing a more compact representation.
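The rotation robustness of a radial Fourier construction can be sketched as follows: sample the neighborhood on concentric rings and keep only Fourier magnitudes along each ring, since a rotation circularly shifts the angular samples and leaves the magnitudes unchanged. The radii, sample counts, and the use of raw intensities (rather than Laplacian of Gaussian responses) are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def ring_fourier_descriptor(img, cx, cy, radii=(2, 4, 6),
                            n_theta=16, n_coeffs=4):
    """Sample concentric rings around (cx, cy) and keep the first
    n_coeffs Fourier magnitudes of each ring. Magnitudes are invariant
    to circular shifts of the samples, hence robust to rotation.
    Illustrative sketch, not the paper's descriptor."""
    thetas = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    desc = []
    for r in radii:
        xs = np.clip(np.round(cx + r * np.cos(thetas)).astype(int),
                     0, img.shape[1] - 1)
        ys = np.clip(np.round(cy + r * np.sin(thetas)).astype(int),
                     0, img.shape[0] - 1)
        ring = img[ys, xs].astype(float)
        desc.extend(np.abs(np.fft.rfft(ring))[:n_coeffs])
    return np.array(desc)

img = np.ones((15, 15))          # constant patch: only the DC term survives
d = ring_fourier_descriptor(img, 7, 7)
```

Keeping only a few low-order coefficients per ring is what makes this family of descriptors compact compared to histogram-based alternatives.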
FGR | 1995
James L. Crowley; François Bérard; Joëlle Coutaz
Archive | 1987
James L. Crowley; Fano Ramparany
Archive | 1997
James L. Crowley; François Bérard
Archive | 2018
Julien Cumin; Fano Ramparany; Grégoire Lefebvre; James L. Crowley