
Publication


Featured research published by Marie-Paule Cani.


Sketch-Based Interfaces and Modeling | 2010

Sketch-based modeling of vascular systems: a first step towards interactive teaching of anatomy

Adeline Pihuit; Marie-Paule Cani; Olivier Palombi

We present a sketch-based modeling system, inspired by anatomical drawing, which constructs plausible 3D models of branching vessels from a single sketch. The input drawing typically includes non-flat silhouettes and occluded parts. We exploit the sketching conventions used in anatomical drawings to infer depth and curvature from contour and skeleton curves extracted from the sketch. We then model the set of branching vessels as a convolution surface generated by a graph of skeleton curves: while these curves are set to fit the sketch in the front plane, non-uniform B-spline interpolation is used to give them smoothly varying depth values that meet the set of constraints. The final model is displayed using an expressive rendering method that imitates the aspect of chalk drawing. We discuss the future use of this system as a step towards the interactive teaching of anatomy.
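
As a rough illustration of the convolution-surface construction mentioned in the abstract, the Python sketch below evaluates a scalar field by summing a kernel along skeleton segments and treats an iso-level of that field as the vessel surface. The kernel shape, radius, iso value and the toy skeleton are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def segment_field(p, a, b, radius=0.1, samples=16):
    """Approximate the convolution of a Cauchy-like kernel along segment [a, b]
    by point sampling (kernel and radius are illustrative choices)."""
    t = np.linspace(0.0, 1.0, samples)
    pts = a[None, :] + t[:, None] * (b - a)[None, :]   # sample points on the segment
    d2 = np.sum((pts - p[None, :]) ** 2, axis=1)       # squared distances to p
    kernel = 1.0 / (1.0 + d2 / radius ** 2) ** 2       # Cauchy-like falloff
    return kernel.mean() * np.linalg.norm(b - a)       # scale by segment length

def field(p, skeleton):
    """Sum the contributions of all skeleton segments; a branching graph is
    simply a list of segments sharing endpoints."""
    return sum(segment_field(p, a, b) for a, b in skeleton)

# Toy branching skeleton: a trunk splitting into two branches.
skeleton = [(np.array([0.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),
            (np.array([0.0, 1.0, 0.0]), np.array([0.5, 1.8, 0.2])),
            (np.array([0.0, 1.0, 0.0]), np.array([-0.5, 1.8, -0.2]))]

iso = 0.5  # the surface {p : field(p) = iso} would be extracted e.g. by marching cubes
print(field(np.array([0.0, 0.5, 0.05]), skeleton) > iso)  # is this point inside?
```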


ACM Transactions on Graphics | 2017

Authoring landscapes by combining ecosystem and terrain erosion simulation

Guillaume Cordonnier; Eric Galin; James E. Gain; Bedrich Benes; Eric Guérin; Adrien Peytavie; Marie-Paule Cani

We introduce a novel framework for interactive landscape authoring that supports bi-directional feedback between erosion and vegetation simulation. Vegetation and terrain erosion have strong mutual impact and their interplay influences the overall realism of virtual scenes. Despite their importance, these complex interactions have been neglected in computer graphics. Our framework overcomes this by simulating the effect of a variety of geomorphological agents and the mutual interaction between different material and vegetation layers, including rock, sand, humus, grass, shrubs, and trees. Users are able to exploit these interactions with an authoring interface that consistently shapes the terrain and populates it with details. Our method, validated through side-by-side comparison with real terrains, can be used not only to generate realistic static landscapes, but also to follow the temporal evolution of a landscape over a few centuries.
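
A minimal sketch of the layered-coupling idea follows, assuming a toy model in which vegetation damps erosion and soil depth feeds vegetation growth; the grids, rates and erosion proxy are invented for illustration and do not reproduce the paper's simulation.

```python
import numpy as np

# Toy coupled layers: bedrock elevation, loose soil thickness, vegetation density.
n = 64
rng = np.random.default_rng(0)
bedrock = rng.random((n, n)) * 10.0     # base elevation
soil = np.full((n, n), 0.5)             # humus/sand thickness
veg = np.zeros((n, n))                  # vegetation density in [0, 1]

def step(soil, veg, rain=0.1, dt=1.0):
    height = bedrock + soil
    gy, gx = np.gradient(height)
    slope = np.hypot(gx, gy)
    # erosion proxy: grows with slope, damped by vegetation cover
    erosion = np.minimum(rain * slope * (1.0 - 0.8 * veg) * dt, soil)
    soil = soil - erosion + erosion.mean()      # crude uniform re-deposition
    # vegetation grows where soil is deep and slopes are gentle, dies where eroded
    growth = 0.05 * np.clip(soil, 0.0, 1.0) * np.exp(-slope) * dt
    veg = np.clip(veg + growth - 0.1 * erosion, 0.0, 1.0)
    return soil, veg

for _ in range(100):
    soil, veg = step(soil, veg)
print(round(soil.mean(), 3), round(veg.mean(), 3))
```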


IEEE Transactions on Visualization and Computer Graphics | 2018

Sculpting Mountains: Interactive Terrain Modeling Based on Subsurface Geology

Guillaume Cordonnier; Marie-Paule Cani; Bedrich Benes; Jean Braun; Eric Galin

Most mountain ranges are formed by the compression and folding of colliding tectonic plates. Subduction of one plate causes large-scale asymmetry, while their layered composition (or stratigraphy) explains the multi-scale folded strata observed on real terrains. We introduce a novel interactive modeling technique to generate visually plausible, large-scale terrains that capture these phenomena. Our method draws on both geological knowledge for consistency and on sculpting systems for user interaction. The user is given hands-on control over the shape and motion of tectonic plates, represented using a new geologically inspired model of the Earth's crust. The model captures their volume-preserving and complex folding behaviors under collision, causing mountains to grow. It generates a volumetric uplift map representing the growth rate of subsurface layers. Erosion and uplift movement are jointly simulated to generate the terrain. The stratigraphy allows us to render folded strata on eroded cliffs. We validated the usability of our sculpting interface through a user study, and compared the visual consistency of the Earth-crust model with geological simulation results and real terrains.
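
The abstract's joint simulation of uplift and erosion can be pictured as a per-step loop that raises the terrain by an uplift map and lowers it by an erosion term. The sketch below uses a generic slope-based erosion law and a hand-made ridge-shaped uplift map as stand-ins for the paper's geologically derived quantities.

```python
import numpy as np

n = 128
height = np.zeros((n, n))

# Illustrative uplift map: a ridge growing along the centre line of the domain
# (in the paper this comes from the sculpted plate collision, not from a formula).
x = np.linspace(-1.0, 1.0, n)
uplift = 0.01 * np.exp(-x[None, :] ** 2 / 0.1) * np.ones((n, 1))

def erode(height, k=0.05):
    """Generic stand-in erosion law: removal rate grows with local slope."""
    gy, gx = np.gradient(height)
    return k * np.hypot(gx, gy)

for _ in range(500):
    height += uplift - erode(height)   # uplift and erosion applied jointly each step

print(round(height.max(), 2))
```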


Surgical and Radiologic Anatomy | 2011

3D Modeling of branching vessels from anatomical sketches: towards a new interactive teaching of anatomy: Interactive virtual blackboard.

Olivier Palombi; Adeline Pihuit; Marie-Paule Cani

Sketching is an intuitive way to explain spatial relationships between complex objects. The French community of anatomists is used to giving didactic lectures on a blackboard with colored chalk. The increasing complexity of the sketches affords students an opportunity to work out a mental representation of anatomical structures in 3D. To help students perform this laborious step, we present a new interactive blackboard which constructs plausible 3D models of branching vessels from a single sketch. We exploit the sketching conventions used in anatomical drawings to infer depth and curvature. We then model the set of branching vessels as a convolution surface generated by a graph of skeleton curves. Classic situations, focused on arteries, were analyzed to handle vessel curvatures, subdivisions and overlaps. Original sketches and 3D models are presented for each case. No specific training is required to use the interface. Anatomists have begun to embrace a new generation of 3D digital modeling applications as tools for anatomical teaching. We discuss the future use of this system as a step towards the interactive teaching of anatomy.
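
One piece of the pipeline that lends itself to a short example is the depth-inference step: skeleton curves fitted in the drawing plane receive depth values only at a few constrained points (for instance where the sketch shows one vessel passing behind another), and a smooth interpolation fills in the rest. The sketch below uses SciPy's cubic spline as a stand-in for the paper's non-uniform B-spline interpolation, with made-up constraint positions and values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Depth constraints along one skeleton curve, parameterized by t in [0, 1]:
# the curve dips behind another vessel near t = 0.4 and passes in front near t = 0.7.
t_constraints = np.array([0.0, 0.4, 0.7, 1.0])
z_constraints = np.array([0.0, -0.3, 0.2, 0.0])

depth = CubicSpline(t_constraints, z_constraints)  # smooth depth along the curve

t = np.linspace(0.0, 1.0, 50)        # samples along the skeleton curve
z = depth(t)                         # depth value assigned to each sample
print(round(z.min(), 2), round(z.max(), 2))
```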


International Conference on Computer Graphics and Interactive Techniques | 2017

Learning to group discrete graphical patterns

Zhaoliang Lun; Changqing Zou; Haibin Huang; Evangelos Kalogerakis; Ping Tan; Marie-Paule Cani; Hao Zhang

We introduce a deep learning approach for grouping discrete patterns common in graphical designs. Our approach is based on a convolutional neural network architecture that learns a grouping measure defined over a pair of pattern elements. Motivated by perceptual grouping principles, the key feature of our network is the encoding of element shape, context, symmetries, and structural arrangements. These element properties are all jointly considered and appropriately weighted in our grouping measure. To better align our measure with human perception of grouping, we train our network on a large, human-annotated dataset of pattern groupings consisting of patterns at varying granularity levels, with rich element relations and varieties, and tempered with noise and other data imperfections. Experimental results demonstrate that our deep-learned measure leads to robust grouping results.
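
The core component is a learned pairwise measure; the sketch below shows one plausible shape for such a model in PyTorch, scoring whether two rasterized elements belong together. The architecture, input encoding and sizes are assumptions for illustration and are not the network described in the paper.

```python
import torch
import torch.nn as nn

class PairGrouping(nn.Module):
    """Toy pairwise grouping measure: input is a 2-channel 64x64 raster
    (the element pair and its local context), output is a 'same group' logit."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PairGrouping()
pairs = torch.rand(8, 2, 64, 64)                  # a batch of 8 element pairs
labels = torch.randint(0, 2, (8,)).float()        # human-annotated groupings
loss = nn.BCEWithLogitsLoss()(model(pairs).squeeze(1), labels)
print(loss.item())
```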


Computers & Graphics | 2018

Exploratory design of mechanical devices with motion constraints

Robin Roussel; Marie-Paule Cani; Jean-Claude Léon; Niloy J. Mitra

Mechanical devices are ubiquitous in our daily lives, and the motion they are able to transmit is often a critical part of their function. While digital fabrication devices facilitate their realization, motion-driven mechanism design remains a challenging task. We take drawing machines as a case study in exploratory design. Devices such as the Spirograph can generate intricate patterns from an assembly of simple mechanical elements. Trying to control and customize these patterns, however, is particularly hard, especially when the number of parts increases. We propose a novel constrained exploration method that enables a user to easily explore feasible drawings by directly indicating pattern preferences at different levels of control. The user starts by selecting a target pattern with the help of construction lines and rough sketching, and then fine-tunes it by prescribing geometric features of interest directly on the drawing. The designed pattern can then be directly realized with an easy-to-fabricate drawing machine. The key technical challenge is to facilitate the exploration of the high dimensional configuration space of such fabricable machines. To this end, we propose a novel method that dynamically reparameterizes the local configuration space and allows the user to move continuously between pattern variations, while preserving user-specified feature constraints. We tested our framework on several examples, conducted a user study, and fabricated a sample of the designed examples.
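
To make the notion of the machines' configuration space concrete, the sketch below generates Spirograph-style hypotrochoid patterns from a few parameters and shows that a small parameter move yields a nearby pattern; the constrained reparameterization and feature constraints of the paper are not reproduced here.

```python
import numpy as np

def hypotrochoid(R, r, d, n=4000):
    """Pen trace for a circle of radius r rolling inside a circle of radius R,
    with the pen at offset d from the rolling circle's centre."""
    t = np.linspace(0.0, 2.0 * np.pi * r / np.gcd(int(R), int(r)), n)
    x = (R - r) * np.cos(t) + d * np.cos((R - r) / r * t)
    y = (R - r) * np.sin(t) - d * np.sin((R - r) / r * t)
    return np.stack([x, y], axis=1)

base = hypotrochoid(R=10, r=4, d=3.0)
variant = hypotrochoid(R=10, r=4, d=3.2)       # small move in configuration space
print(round(np.abs(variant - base).max(), 3))  # nearby parameters, nearby pattern
```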


Computer Graphics Forum | 2018

Interactive Generation of Time-evolving, Snow-Covered Landscapes with Avalanches

Guillaume Cordonnier; Pierre Ecormier; Eric Galin; James E. Gain; Bedrich Benes; Marie-Paule Cani

We introduce a novel method for interactive generation of visually consistent, snow-covered landscapes and provide control of their dynamic evolution over time. Our main contribution is the real-time phenomenological simulation of avalanches and other user-guided events, such as tracks left by Nordic skiing, which can be applied to interactively sculpt the landscape. The terrain is modeled as a height field with additional layers for stable, compacted, unstable, and powdery snow, which behave in combination as a semi-viscous fluid. We incorporate the impact of several phenomena, including sunlight, temperature, prevailing wind direction, and skiing activities. The snow evolution includes snow-melt and snow-drift, which affect the stability of the snow mass and the probability of avalanches. A user can shape landscapes and their evolution either with a variety of interactive brushes, or by prescribing events along a winter-season timeline. Our optimized GPU implementation allows interactive updates of snow type and depth across a large (10 × 10 km) terrain, including real-time avalanches, making this suitable for visual assets in computer games. We evaluate our method through perceptual comparison against existing methods and real snow-depth data.
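
A tiny sketch of the avalanche component follows: snow sits on a terrain height field, and wherever the snow surface exceeds a repose threshold relative to a neighbour, part of the excess slides downhill while total snow mass is conserved. The grid, thresholds and single-layer snow are simplifying assumptions; the paper additionally tracks snow type, melt and wind.

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
terrain = np.cumsum(rng.random((n, n)) - 0.45, axis=0)   # crude sloping terrain
snow = np.full((n, n), 1.0)                              # uniform initial snow depth

def avalanche_pass(terrain, snow, max_slope=0.6, rate=0.5):
    """One relaxation pass over the four axis neighbours (grid wraps around)."""
    surface = terrain + snow
    for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        neighbour = np.roll(surface, (dy, dx), axis=(0, 1))
        excess = surface - neighbour - max_slope         # height above the repose limit
        move = rate * np.clip(np.minimum(excess, snow), 0.0, None)
        snow = snow - move + np.roll(move, (-dy, -dx), axis=(0, 1))
        surface = terrain + snow
    return snow

total_before = snow.sum()
for _ in range(50):
    snow = avalanche_pass(terrain, snow)
print(round(snow.sum() - total_before, 6))   # mass is conserved by the sliding step
```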


Non-Photorealistic Animation and Rendering | 2018

Automatic generation of geological stories from a single sketch

Maxime Garcia; Marie-Paule Cani; Rémi Ronfard; Claude Gout; Christian Perrenoud

Describing the history of a terrain from a vertical geological cross-section is an important problem in geology, called geological restoration. Designing the sequential evolution of the geometry is usually done manually, involving much trial and error. In this work, we recast this problem as a storyboarding problem, where the different stages in the restoration are automatically generated as storyboard panels and displayed as geological stories. Our system allows geologists to interactively explore multiple scenarios by selecting plausible geological event sequences and backward-simulating them at interactive rates, causing the terrain layers to be progressively un-deposited, un-eroded, un-compacted, un-folded and un-faulted. Storyboard sketches are generated along the way. When a restoration is complete, the storyboard panels can be used to automatically generate a forward animation of the terrain history, enabling quick visualization and validation of hypotheses. As a proof of concept, we describe how our system was used by geologists to restore and animate cross-sections in real examples at various spatial and temporal scales and with different levels of complexity, including the Chartreuse region in the French Alps.
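
The restoration-as-storyboarding idea can be pictured as replaying a recorded event history backwards, producing one panel per intermediate state. The sketch below uses an invented, highly simplified cross-section data model and event vocabulary (only "deposit" and "fold"), so it illustrates the control flow rather than the paper's geological operators.

```python
from dataclasses import dataclass, field

@dataclass
class CrossSection:
    layers: list = field(default_factory=list)   # layer names, bottom to top
    folded: bool = False

def undo(event, section):
    """Invert one geological event (toy versions of 'un-deposit' and 'un-fold')."""
    kind, arg = event
    if kind == "deposit":
        assert section.layers and section.layers[-1] == arg
        section.layers.pop()                     # un-deposit the most recent layer
    elif kind == "fold":
        section.folded = False                   # un-fold the section
    return section

history = [("deposit", "sandstone"), ("deposit", "shale"),
           ("fold", None), ("deposit", "limestone")]
section = CrossSection(["sandstone", "shale", "limestone"], folded=True)

storyboard = [section]                           # one panel per restoration stage
for event in reversed(history):
    section = undo(event, CrossSection(list(section.layers), section.folded))
    storyboard.append(section)

print([panel.layers for panel in storyboard])
```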


Non-Photorealistic Animation and Rendering | 2018

Structuring and layering contour drawings of organic shapes

Even Entem; Amal Dev Parakkat; Marie-Paule Cani; Loïc Barthe

Complex vector drawings serve as convenient and expressive visual representations, but they remain difficult to edit or manipulate. For clean-line vector drawings of smooth organic shapes, we describe a method that automatically extracts a layered structure, organized as parts with relative depth orderings, for the drawn object from the current or nearby viewpoints. The layers correspond to salient regions of the drawing, which are often naturally associated with parts of the underlying shape. Our method handles drawings that contain complex internal contours with T-junctions indicative of occlusions, as well as internal curves that may either be expressive strokes or substructures. To extract the structure, we introduce a new part-aware metric for complex 2D drawings, the radial variation metric, which is used to identify salient sub-parts. These sub-parts are then considered in priority order, which enables us to identify and recursively process new shape parts while keeping track of their relative depth ordering. The output is represented as scalable vector graphics layers, enabling meaningful editing and manipulation. We evaluate the method on multiple input drawings and show that the structure we compute is convenient for subsequent posing and animation from nearby viewpoints.
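
The paper's radial variation metric is specific to its method; as a loose stand-in, the sketch below flags candidate part boundaries of a closed outline where a simple local radius (distance to the centroid) changes fastest, which picks out the transition into the waist of a two-lobed toy shape. The shape, radius definition and threshold are all illustrative assumptions.

```python
import numpy as np

def peanut_outline(n=400):
    """Toy closed outline with two lobes joined by a thin waist."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = 1.0 + 0.6 * np.cos(2.0 * theta)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

outline = peanut_outline()
centroid = outline.mean(axis=0)
radius = np.hypot(*(outline - centroid).T)   # crude per-point "radius"
variation = np.abs(np.gradient(radius))      # how quickly that radius changes

# Points with the sharpest radius change flank the waist between the two lobes.
boundary_candidates = outline[variation > np.percentile(variation, 95)]
print(len(boundary_candidates), boundary_candidates[:2].round(2))
```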


13th Pacific Graphics Short Papers | 2005

Shape Modeling by Sketching using Convolution Surfaces

Anca Alexe; Loïc Barthe; Marie-Paule Cani; Véronique Gaildrat

Collaboration


Dive into Marie-Paule Cani's collaborations.

Top Co-Authors

Olivier Palombi (Centre national de la recherche scientifique)
Niloy J. Mitra (University College London)
Robin Roussel (University College London)