Flora Ponjou Tasse
University of Cambridge
Publications
Featured research published by Flora Ponjou Tasse.
International Conference on Computer Vision | 2015
Flora Ponjou Tasse; Jiri Kosinka; Neil A. Dodgson
We propose a cluster-based approach to point set saliency detection, a challenge since point sets lack topological information. A point set is first decomposed into small clusters using fuzzy clustering. We evaluate the uniqueness and spatial distribution of each cluster and combine these values into a cluster saliency function. Finally, the probabilities of points belonging to each cluster are used to assign a saliency to each point. Our approach detects fine-scale salient features, and uninteresting regions consistently receive lower saliency values. We evaluate the proposed saliency model by testing our saliency-based keypoint detection against a 3D interest point detection benchmark. The evaluation shows that our method achieves a good balance between false positive and false negative error rates, without using any topological information.
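As a rough illustration of the pipeline described above (a hypothetical simplification, not the paper's actual implementation), the following sketch combines a cluster uniqueness term and a spatial distribution term into a cluster saliency, then distributes it to points via fuzzy membership probabilities. The descriptor and weighting choices here are assumptions for the sake of the example.

```python
import numpy as np

def point_saliency(points, memberships, descriptors):
    """Sketch: points (N,3), fuzzy memberships (N,K), per-cluster
    descriptors (K,D). Returns a saliency value per point in [0,1]."""
    K = descriptors.shape[0]
    # Fuzzy cluster centroids, weighted by membership probabilities.
    centroids = (memberships.T @ points) / memberships.sum(axis=0)[:, None]
    # Cluster uniqueness: mean descriptor distance to the other clusters.
    desc_dist = np.linalg.norm(descriptors[:, None] - descriptors[None], axis=-1)
    uniqueness = desc_dist.sum(axis=1) / (K - 1)
    # Spatial distribution: clusters far from the centroid mean are
    # down-weighted (background regions tend to be spatially spread out).
    spread = np.linalg.norm(centroids - centroids.mean(axis=0), axis=1)
    cluster_saliency = uniqueness * np.exp(-spread)
    # Point saliency: membership-weighted sum of cluster saliencies.
    s = memberships @ cluster_saliency
    return (s - s.min()) / (s.max() - s.min() + 1e-12)
```

The key idea carried over from the abstract is the last step: because clustering is fuzzy, each point inherits saliency from every cluster in proportion to its membership probability.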
Computer Graphics Forum | 2012
Flora Ponjou Tasse; James E. Gain; Patrick Marais
Curvilinear features extracted from a 2D user-sketched feature map have been used successfully to constrain patch-based texture synthesis of real landscapes. However, this map-based user interface does not give fine control over the height profile of the generated terrain. We propose a new texture-based terrain synthesis framework controllable by a terrain sketching interface. We enhance the realism of the generated landscapes with a novel patch merging method that reduces boundary artefacts caused by overlapping terrain patches. A more constrained synthesis process produces landscapes that better match user requirements. The high computational cost of texture synthesis is reduced with a parallel implementation on graphics hardware; our GPU-accelerated solution provides a significant speedup depending on the size of the example terrain. We show experimentally that our framework is more successful in generating realistic landscapes than current example-based terrain synthesis methods. We conclude that texture-based terrain synthesis combined with sketching provides an excellent solution to the user control and realism challenges of virtual landscape generation.
Computers & Graphics | 2014
Flora Ponjou Tasse; Arnaud Emilien; Marie-Paule Cani; Stefanie Hahmann; Neil A. Dodgson
We present a new method for first-person sketch-based editing of terrain models. As in typical artistic pictures, the input sketch depicts complex silhouettes with cusps and T-junctions, which generally correspond to non-planar curves in 3D. After analysing depth constraints in the sketch based on perceptual cues, our method best matches the sketched silhouettes with silhouettes or ridges of the input terrain. A deformation algorithm is then applied to the terrain, enabling it to exactly match the sketch from the given perspective view, while ensuring that none of the user-defined silhouettes is hidden by another part of the terrain. We extend this sketch-based terrain editing framework to handle a collection of multi-view sketches. As our results show, this method enables users to easily personalize an existing terrain, while preserving its plausibility and style.
Highlights: an algorithm for ordering strokes in a complex 2D sketch; a method for matching 3D terrain features with 2D user-specified silhouettes; a method for editing terrain with a collection of multi-view sketches.
Computer Graphics Forum | 2015
Henrik Lieng; Flora Ponjou Tasse; Jiri Kosinka; Neil A. Dodgson
A challenge in vector graphics is to define primitives that offer flexible manipulation of colour gradients. We propose a new primitive, called a shading curve, that supports explicit and local gradient control. This is achieved by associating shading profiles to each side of the curve. These shading profiles, which can be manually manipulated, represent the colour gradient out from their associated curves. Such explicit and local gradient control is challenging to achieve via the diffusion curve process, introduced in 2008, because it offers only implicit control of the colour gradient. We resolve this problem by using subdivision surfaces that are constructed from shading curves and their shading profiles.
Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa | 2009
Flora Ponjou Tasse; Kevin Glass; Shaun Bangay
Crowd simulation is an important area of computer graphics. Typical implementations simulate battle scenes, emergency situations, or safety scenarios, or add content to virtual environments. The problem addressed in this paper falls in the last category. We present a crowd simulation behavioural model that allows us to simulate phenomena observed in popular local African markets, such as narrow street flows and crowd formation around street performances. We propose a three-tier architecture that produces intentions, performs path planning, and controls movement. We demonstrate that this approach produces the behaviour associated with crowds in an African market, including navigation, flow formation and circle creation.
International Conference on Computer Graphics and Interactive Techniques | 2016
Flora Ponjou Tasse; Jiří Kosinka; Neil A. Dodgson
Previous saliency detection research required the reader to evaluate performance qualitatively, based on renderings of saliency maps on a few shapes. This qualitative approach meant it was unclear which saliency models were better, or how well they compared to human perception. This paper provides a quantitative evaluation framework that addresses this issue. In the first quantitative analysis of 3D computational saliency models, we evaluate four computational saliency models and two baseline models against ground-truth saliency collected in previous work.
International Conference on Computer Graphics and Interactive Techniques | 2016
Flora Ponjou Tasse; Neil A. Dodgson
Convolutional neural networks have been successfully used to compute shape descriptors, or to jointly embed shapes and sketches in a common vector space. We propose a novel approach that leverages both labeled 3D shapes and the semantic information contained in their labels to generate semantically meaningful shape descriptors. A neural network is trained to generate shape descriptors that lie close to a vector representation of the shape class, given a vector space of words. This method is easily extendable to range scans, hand-drawn sketches and images, making cross-modal retrieval possible without the need to design different methods for each query type. We show that sketch-based shape retrieval using semantic-based descriptors outperforms the state of the art by large margins, and that mesh-based retrieval generates results of higher relevance to the query than current deep shape descriptors.
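The reason this enables cross-modal retrieval is that every modality is mapped into the same word-vector space, so retrieval reduces to nearest-neighbour search there. As a minimal sketch (the network that produces the descriptors is not shown; the vectors here are stand-ins), ranking by cosine similarity is the same code regardless of whether the query came from a mesh, a sketch, or a range scan:

```python
import numpy as np

def cosine_retrieval(query_desc, shape_descs):
    """Rank database shapes by cosine similarity to the query descriptor.
    Both query and database descriptors are assumed to live in the same
    semantic (word-vector) space. Returns indices, best match first."""
    q = query_desc / np.linalg.norm(query_desc)
    d = shape_descs / np.linalg.norm(shape_descs, axis=1, keepdims=True)
    sims = d @ q                     # cosine similarity per shape
    return np.argsort(-sims)         # descending similarity
```

Because the shared space is semantic, the nearest class word vector to a descriptor also gives a classification for free.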
Eurographics | 2016
Flora Ponjou Tasse; Jiri Kosinka; Neil A. Dodgson
Local features are successfully used in 3D shape retrieval by encoding feature descriptors into global shape signatures. Previous 3D retrieval systems use different encoding methods, such as histogram encoding and Fisher encoding, making it difficult to evaluate one encoding technique against another. We perform a comparative analysis of four recent encoding methods when used in shape retrieval. The analysis shows that Vector of Locally Aggregated Descriptors (VLAD) encoding is the best of the four tested, since it offers the best trade-off between precision and computational cost.
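For readers unfamiliar with VLAD, the encoding is simple to state: assign each local descriptor to its nearest codeword and accumulate the residuals per codeword. The sketch below is a generic textbook-style implementation (with the common power- and L2-normalisation variant), not code from the paper:

```python
import numpy as np

def vlad_encode(local_descs, codebook):
    """VLAD encoding: local_descs (N,D) local feature descriptors,
    codebook (K,D) cluster centres. Returns a (K*D,) global signature."""
    # Hard-assign each descriptor to its nearest codeword.
    dists = np.linalg.norm(local_descs[:, None] - codebook[None], axis=-1)
    assign = dists.argmin(axis=1)
    K, D = codebook.shape
    v = np.zeros((K, D))
    for k in range(K):
        members = local_descs[assign == k]
        if len(members):
            # Accumulate residuals to the codeword, not raw descriptors.
            v[k] = (members - codebook[k]).sum(axis=0)
    v = v.ravel()
    v = np.sign(v) * np.sqrt(np.abs(v))   # power normalisation
    n = np.linalg.norm(v)
    return v / n if n > 0 else v          # L2 normalisation
```

Unlike histogram encoding, which only counts assignments, the residual accumulation preserves first-order information about where descriptors fall relative to each codeword, which is one explanation for its favourable precision/cost trade-off.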
Computers & Graphics | 2016
Flora Ponjou Tasse; Jiri Kosinka; Neil A. Dodgson
Sparse features have been successfully used in shape retrieval, by encoding feature descriptors into global shape signatures. We investigate how sparse features based on saliency models affect retrieval and provide recommendations on good saliency models for shape retrieval. Our results show that randomly selecting points on the surface produces better retrieval performance than using any of the evaluated salient keypoint detectors, including ground truth. We discuss the reasons for and implications of this unexpected result.
Highlights: an evaluation of keypoint detectors on their shape retrieval performance, based on selected saliency models including ground truth; sparse random points outperform human-selected salient points for shape retrieval on a generic dataset of watertight meshes; restricting random points to non-salient regions causes a small decrease in retrieval performance.
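The random baseline that wins in this comparison is usually taken to mean area-uniform sampling of the surface, so that large triangles are not under-represented. A standard way to implement it (a generic sketch, not the paper's code) is to pick triangles with probability proportional to area and then draw uniform barycentric coordinates:

```python
import numpy as np

def sample_surface(vertices, faces, n, seed=0):
    """Uniform random sampling on a triangle mesh: vertices (V,3),
    faces (F,3) vertex indices. Returns n points on the surface."""
    rng = np.random.default_rng(seed)
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    # Triangle areas via the cross product; used as sampling weights.
    areas = 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1)
    tri = rng.choice(len(faces), size=n, p=areas / areas.sum())
    # Uniform barycentric coordinates (square-root trick avoids
    # clustering near one vertex).
    r1, r2 = rng.random(n), rng.random(n)
    u = 1.0 - np.sqrt(r1)
    v = np.sqrt(r1) * (1.0 - r2)
    w = np.sqrt(r1) * r2
    return u[:, None] * v0[tri] + v[:, None] * v1[tri] + w[:, None] * v2[tri]
```

Descriptors computed at points drawn this way form the "random" signature that the paper's evaluation compares against saliency-driven keypoints.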