Publication


Featured research published by Andre Plante.


Proceedings of the Royal Society of London B: Biological Sciences | 2000

The Noh mask effect: vertical viewpoint dependence of facial expression perception

Michael J. Lyons; Ruth Campbell; Andre Plante; Mike Coleman; Miyuki Kamachi; Shigeru Akamatsu

Full-face masks, worn by skilled actors in the Noh tradition, can induce a variety of perceived expressions with changes in head orientation. Out-of-plane rotation of the head changes the two-dimensional image characteristics of the face which viewers may misinterpret as non-rigid changes due to muscle action. Three experiments with Japanese and British viewers explored this effect. Experiment 1 confirmed a systematic relationship between vertical angle of view of a Noh mask and judged affect. A forward tilted mask was more often judged happy, and one backward tilted more often judged sad. This effect was moderated by culture. Japanese viewers ascribed happiness to the mask at greater degrees of backward tilt with a reversal towards sadness at extreme forward angles. Cropping the facial image of chin and upper head contour reduced the forward-tilt reversal. Finally, the relationship between head tilt and affect was replicated with a laser-scanned human face image, but with no cultural effect. Vertical orientation of the head changes the apparent disposition of facial features and viewers respond systematically to these changes. Culture moderates this effect, and we discuss how perceptual strategies for ascribing expression to familiar and unfamiliar images may account for the differences.


ACM Multimedia | 1998

Avatar creation using automatic face processing

Michael J. Lyons; Andre Plante; Sebastien Jehan; Seiki Inoue; Shigeru Akamatsu

In the context of multimedia, an avatar is the visual representation of the self in a virtual world. It is desirable to incorporate personal information, such as an image of the user's face, into the avatar. To this end we have developed an algorithm which can automatically extract a face from an image, modify it, characterize it in terms of high-level properties, and apply it to the creation of a personalized avatar. The algorithm has been tested on several hundred facial images, including many taken under uncontrolled acquisition conditions, and found to exhibit satisfactory performance for immediate practical use.

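The paper's own extraction algorithm is not reproduced in this listing. As a rough illustration of the face-extraction step only, the sketch below uses OpenCV's stock Haar-cascade detector as a stand-in; the helper name extract_face and the margin parameter are assumptions, not the authors' method.

    # Illustrative sketch only: OpenCV's Haar-cascade detector stands in for the
    # paper's automatic face extraction; it is not the authors' algorithm.
    import cv2

    def extract_face(image_path, margin=0.2):
        """Return the largest detected face region, enlarged by `margin` on each side."""
        image = cv2.imread(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            return None
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
        dx, dy = int(w * margin), int(h * margin)
        return image[max(0, y - dy):y + h + dy, max(0, x - dx):x + w + dx]

The cropped region could then be characterized and mapped onto an avatar template, along the lines the abstract describes.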

Human Factors in Computing Systems | 1998

Evaluating the location of hot spots in interactive scenes using the 3R toolbox

Andre Plante; Shoji Tanaka; Seiki Inoue

Too often in interactive pictures, movies or VR scenes where no explicit buttons exist, the user is left to find hot spots (portals, gates, links) almost at random. In any particular scene, although semantic information is present, a user may be overwhelmed by the number of possible and perfectly logical locations in which hot spots could be embedded. In this paper, we propose a new support tool based on the Highly Attractive Region Extraction Method and aimed at helping the designer identify and enhance hot spot image regions so that they become more attractive (i.e. get the user's attention). This computer tool performs an evaluation of images based on their physical features (Hue, Saturation, Lightness, Size and Contrast) and graphically shows which regions are more likely to attract a user's gaze. Based on these results, the designer can then choose to further highlight a particular part of a picture or, alternatively, tone down regions that could cause confusion.

Keywords: User Interface, hot spots, Visual Communications, Interactive Movie, Image Processing, navigation, Support Tool, Design

INTRODUCTION

While most interfaces rely on icons, words, menus or buttons, a growing number of games, interactive movies and virtual worlds rely solely on the graphical representation of a specific environment to permit user interaction. These environments can be reality- or fantasy-based, and can depict familiar or unfamiliar objects or spaces. They can be still or in motion. They can be completely immersive, as in the case of Virtual Reality experienced through a head-mounted display, or simply window-based, as in the case of Apple QuickTime VR viewed on the screen of a computer. These navigable worlds or pictures lack the familiar labels found in other types of interfaces. The lack of words or icons contributes to the depiction of a more convincing scene, but it tends to decrease the effectiveness of the manner in which the user navigates a particular content. Now that designers can afford to create rich environments composed of many objects or use photorealistic images to create scenes, it has become more difficult for the user to find where hot spots may have been placed. The semantic information contained in a scene obviously helps the user make a decision: the representation of a door will most likely trigger the belief that there will be a hot spot at this location, although sometimes one is not available. On the other hand, a user may overlook an object which the developer used as a hot spot. As an example of this, consider a navigable photo of an office: the computer screen intended to be a hot spot to another scene may be overlooked for the more obvious door on the opposite wall. The user can easily be overwhelmed by the high number of possibilities or simply fail to find all but the most obvious hot spots.
To alleviate this problem, in applications where a cursor is present, routines have been implemented to modify various aspects of the cursor's behavior if it is moved over an area of the picture defined as a hot spot. The user is often compelled to stop moving and perform a "mine sweeping" motion over the whole picture to identify hot spots. Although this method is helpful in some ways, it does not address the real problem, which is one of graphic design and visual communications. Designers, computer artists, photographers and cameramen not only must exercise good judgement in the composition of their pictures but also need a tool to analyze the physical features of their images as they are produced. Such a tool must show in a simple and direct manner where a user is likely to look within the frame. The designer must realize that an insignificant object, if of a certain hue and size, might detract attention from more important objects.

Some previous research on the extraction of attractive regions in a picture was done by Milanese et al. [1]. They assumed that discontinuities found inside a picture attract the viewer's attention. To extract those attractive regions, they first create feature maps for several physical features of the picture, then extract the discontinuous regions of each feature map. Finally, the regions are integrated into a single map using a relaxation process. Milanese et al. used a bank of difference-of-oriented-Gaussians (DOOrG) filters of fixed size to extract discontinuous regions. However, since the sizes of attractive regions differ within and among pictures, it is difficult to properly extract attractive regions with filters limited to a fixed size. Here, we propose a method that first segments the original picture into regions of various sizes and then uses a discrimination function to select attractive regions from among them. This tool automatically identifies the regions with the highest level of attractiveness based on physical properties (Hue, Saturation, Lightness, Size and Contrast).

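The Highly Attractive Region Extraction Method itself, including its segmentation step and discrimination function, is not given here. The sketch below is only a minimal illustration of the scoring idea, assuming regions are already segmented and using made-up equal weights over saturation, lightness, contrast and relative size (hue is omitted); the names region_attractiveness and most_attractive are invented for the example.

    # Minimal sketch, not the paper's discrimination function: score pre-segmented
    # regions of an RGB image (values in [0, 1]) with simple stand-ins for the
    # physical features named in the abstract.
    import numpy as np
    from matplotlib.colors import rgb_to_hsv

    def region_attractiveness(rgb, mask):
        """Score one region (boolean mask) of an H x W x 3 image in [0, 1]."""
        hsv = rgb_to_hsv(rgb)
        sat, val = hsv[..., 1], hsv[..., 2]
        saturation = sat[mask].mean()                    # Saturation inside the region
        lightness = val[mask].mean()                     # Lightness inside the region
        contrast = abs(lightness - val[~mask].mean())    # Contrast with the surround
        size = min(1.0, 4.0 * mask.mean())               # relative Size, capped at 1
        # Illustrative equal weights; the real tool's weighting is not reproduced.
        return 0.25 * (saturation + lightness + contrast + size)

    def most_attractive(rgb, masks):
        """Return the index of the region most likely to draw the user's gaze."""
        return int(np.argmax([region_attractiveness(rgb, m) for m in masks]))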

International Conference on Multimedia Computing and Systems | 1999

Image Re-Composer: a post-production tool using composition information of pictures

Shoji Tanaka; Jun Kurumizawa; Andre Plante; Yuichi Iwadate; Seiji Inokuchi

We propose a post-production tool for refining pictures called the Image Re-Composer. This tool decomposes an original picture into figure objects and the ground, and then recomposes the figure objects according to composition information taken from a well-designed picture, such as an art masterpiece. Figures are extracted by a method that utilizes a characteristic of the V4 cortex in the human visual system. The composition information is extracted based on the idea of the golden section.

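The abstract does not spell out how the golden-section information is applied, so the following is only a minimal sketch under the simplest reading: compute a frame's golden-section guide points and snap a figure object's center to the nearest one. The helper names and the snapping rule are illustrative assumptions, not the Image Re-Composer's actual procedure.

    # Minimal sketch, assuming the simplest golden-section reading; not the
    # Image Re-Composer's actual recomposition procedure.
    PHI = (1 + 5 ** 0.5) / 2  # golden ratio, ~1.618

    def golden_points(width, height):
        """Four intersections of the golden-section lines of a width x height frame."""
        xs = (width / PHI, width - width / PHI)
        ys = (height / PHI, height - height / PHI)
        return [(x, y) for x in xs for y in ys]

    def snap_to_golden(center, width, height):
        """Move a figure object's center to the nearest golden-section point."""
        cx, cy = center
        return min(golden_points(width, height),
                   key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)

    # Example: a 1920 x 1080 frame with a figure currently centered at (900, 500).
    print(snap_to_golden((900, 500), 1920, 1080))  # -> roughly (733.4, 412.5)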

Creativity and Cognition | 1999

Composition analyzer: computer supported composition analysis of masterpieces

Shoji Tanaka; Jun Kurumisawa; Andre Plante; Yuichi Iwadate; Seiji Inokuchi

In this paper, we propose a tool for extracting composition information from pictures called the Composition Analyzer. This tool extracts such composition information as the shapes, proportions, and locations of figures, by two processes. More specifically, it first segments a picture into figures and a ground by a figure extraction method we developed. It then extracts the above composition information from the figures based on the Dynamic Symmetry principle. The extracted composition information is used to refine the picture, and as such, facilitates the production of multimedia for non-professionals.

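Dynamic Symmetry (Hambidge's root rectangles and the golden rectangle) is named but not detailed in the abstract. The sketch below illustrates only one plausible ingredient: classifying an extracted figure's bounding-box proportions against those reference ratios. The ratio table, helper name and example values are assumptions, not the Composition Analyzer's actual method.

    # Illustrative only: compare a figure's bounding-box proportion with the
    # root-rectangle ratios associated with Dynamic Symmetry.
    import math

    DYNAMIC_RATIOS = {
        "root-2": math.sqrt(2),
        "root-3": math.sqrt(3),
        "root-4": 2.0,
        "root-5": math.sqrt(5),
        "golden": (1 + math.sqrt(5)) / 2,
    }

    def closest_dynamic_ratio(bbox):
        """Classify a figure's bounding box (x0, y0, x1, y1) by the nearest ratio."""
        x0, y0, x1, y1 = bbox
        w, h = abs(x1 - x0), abs(y1 - y0)
        ratio = max(w, h) / min(w, h)      # orientation-independent proportion
        name = min(DYNAMIC_RATIOS, key=lambda k: abs(DYNAMIC_RATIOS[k] - ratio))
        return name, round(ratio, 3)

    # Example: a 300 x 210 figure box (~1.43 : 1) is closest to the root-2 rectangle.
    print(closest_dynamic_ratio((0, 0, 300, 210)))  # -> ('root-2', 1.429)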

International Conference on Multimedia Computing and Systems | 1999

Virtual Shinto shrine

Andre Plante; Shoji Tanaka; Yuichi Iwadate

The Virtual Shinto Shrine is an extensive image-based virtual tour of the Fushimi Inari Shrine located in the mountains of Kyoto. Its ascending, maze-like layout and the thousands of vermilion torii gates used to guide pilgrims to the summit make a visit to this shrine a unique experience. We have created this virtual tour to preserve and share the sacred beauty of this site with others.


IEEE International Conference on Automatic Face and Gesture Recognition | 2000

Classifying facial attributes using a 2-D Gabor wavelet representation and discriminant analysis

Michael J. Lyons; Julien Budynek; Andre Plante; Shigeru Akamatsu


Proceedings of the Annual Meeting of the Cognitive Science Society | 2000

Viewpoint Dependent Facial Expression Recognition: Japanese Noh Masks and the Human Face

Michael Lyons; Andre Plante; Miyuki Kamachi; Shigeru Akamatsu; Ruth Campbell; Mike Coleman


Archive | 1999

The effect of vertical viewpoint on expression perception: the Noh mask and the human face.

Michael Lyons; Robert L. Campbell; Andre Plante; Mike Coleman; Miyuki Kamachi


Archive | 2000

The Noh Mask Effect: Culture and View Dependent Facial Expression Perception

Michael Lyons; Andre Plante; Miyuki Kamachi; Shigeru Akamatsu; Ruth Campbell; Mike Coleman

Collaboration


Dive into Andre Plante's collaborations.

Top Co-Authors

Mike Coleman

University College London

Ruth Campbell

University College London

Michael Lyons

University College London
