
Publications


Featured research published by Mario Costa Sousa.


Computers & Graphics | 2009

Technical Section: Sketch-based modeling: A survey

Luke Olsen; Faramarz F. Samavati; Mario Costa Sousa; Joaquim A. Jorge

User interfaces in modeling have traditionally followed the WIMP (Window, Icon, Menu, Pointer) paradigm. Though functional and very powerful, they can also be cumbersome and daunting to a novice user, and creating a complex model requires considerable expertise and effort. A recent trend is toward more accessible and natural interfaces, which has led to sketch-based interfaces for modeling (SBIM). The goal is to allow sketches (hasty freehand drawings) to be used in the modeling process, from rough model creation through to fine detail construction. Mapping a 2D sketch to a 3D modeling operation is a difficult task, rife with ambiguity. To that end, we present a categorization based on how an SBIM application chooses to interpret a sketch, of which there are three primary methods: to create a 3D model, to add details to an existing model, or to deform and manipulate a model. Additionally, this paper presents a survey of sketch-based interfaces focused on 3D geometric modeling applications. The canonical and recent works are presented and classified, including techniques for sketch acquisition, filtering, and interpretation. The survey also provides an overview of some specific applications of SBIM and a discussion of important challenges and open problems for researchers to tackle in the coming years.


International Conference on Computer Graphics and Interactive Techniques | 2006

ShapeShop: sketch-based solid modeling with BlobTrees

Ryan Schmidt; Brian Wyvill; Mario Costa Sousa; Joaquim A. Jorge

Various systems have explored the idea of inferring 3D models from sketched 2D outlines. In all of these systems the underlying modeling methodology limits the complexity of models that can be created interactively. The ShapeShop sketch-based modeling system utilizes Hierarchical Implicit Volume Models (BlobTrees) as an underlying shape representation. The BlobTree framework supports interactive creation of complex, detailed solid models with arbitrary topology. A new technique is described for inflating 2D contours into rounded three-dimensional implicit volumes. Sketch-based modeling operations are defined that combine these basic shapes using standard blending and CSG operators. Since the underlying volume hierarchy is by definition a construction history, individual sketched components can be non-linearly edited and removed. For example, holes can be interactively dragged through a shape. ShapeShop also provides 2D drawing assistance using a new curve-sketching system based on variational contours. A wide range of models can be sketched with ShapeShop, from cartoon-like characters to detailed mechanical parts. Examples are shown which demonstrate significantly higher model complexity than existing systems.
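The blending and CSG operators the abstract mentions can be illustrated on scalar field functions, where a point is inside a solid when its field value exceeds an iso-value. The sketch below is our own minimal illustration of that idea, not ShapeShop's actual API; the names `sphere_field`, `blend`, `union`, and `difference`, and the Wyvill-style falloff, are assumptions for the example.

```python
def sphere_field(center, radius):
    """Field of a point primitive: 1 at the center, falling smoothly to 0
    at distance >= radius (a Wyvill-style polynomial falloff)."""
    def f(p):
        r2 = sum((a - b) ** 2 for a, b in zip(p, center)) / (radius ** 2)
        return 0.0 if r2 >= 1.0 else (1.0 - r2) ** 3
    return f

def blend(f, g):
    """Blobby blend: fields add, producing a smooth bulge where shapes meet."""
    return lambda p: f(p) + g(p)

def union(f, g):
    """CSG union: take the stronger of the two fields."""
    return lambda p: max(f(p), g(p))

def difference(f, g, iso=0.5):
    """CSG difference: invert g's field about the iso-value, then intersect.
    A point is inside the result iff it is inside f and outside g."""
    return lambda p: min(f(p), 2.0 * iso - g(p))

# Two overlapping spheres; the surface is the iso-contour field == 0.5.
a = sphere_field((0.0, 0.0, 0.0), 1.0)
b = sphere_field((0.5, 0.0, 0.0), 1.0)
solid = blend(a, b)
print(solid((0.25, 0.0, 0.0)) > 0.5)  # True: the midpoint lies inside the blend
```

Because each composite is just another field function, such trees can be nested arbitrarily, which mirrors how a construction history doubles as the shape representation.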


Spring Conference on Computer Graphics | 2005

Sketch-based modeling with few strokes

Joseph Jacob Cherlin; Faramarz F. Samavati; Mario Costa Sousa; Joaquim A. Jorge

We present a novel sketch-based system for the interactive modeling of a variety of free-form 3D objects using just a few strokes. Our technique is inspired by the traditional illustration strategy for depicting 3D forms where the basic geometric forms of the subjects are identified, sketched and progressively refined using few key strokes. We introduce two parametric surfaces, rotational and cross sectional blending, that are inspired by this illustration technique. We also describe orthogonal deformation and cross sectional oversketching as editing tools to complement our modeling techniques. Examples with models ranging from cartoon style to botanical illustration demonstrate the capabilities of our system.
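A rotational blending surface of the kind the abstract describes can be sketched as follows: for each parameter value along two profile strokes, sweep a circle through the corresponding pair of profile points around their midpoint. This is our own minimal reading of the construction, with hypothetical names (`rotational_blend`, `cl`, `cr`), not the paper's implementation.

```python
import math

def rotational_blend(cl, cr):
    """Rotational blending surface between two planar profile curves cl(u),
    cr(u) (each returning an (x, y) point): at each u, a circle passes
    through both profile points, centered at their midpoint."""
    def surface(u, v):
        lx, ly = cl(u)
        rx, ry = cr(u)
        mx, my = (lx + rx) / 2.0, (ly + ry) / 2.0   # circle center
        r = math.hypot(lx - rx, ly - ry) / 2.0      # circle radius
        dx, dy = lx - mx, ly - my                   # vector to the left profile
        c, s = math.cos(v), math.sin(v)
        # Rotate out of the sketch plane: the circle spans (cl - m) and z.
        return (mx + c * dx, my + c * dy, s * r)
    return surface

# Example: two straight strokes converging at u = 1 yield a cone-like surface.
left  = lambda u: (-1.0 + u, 0.0)
right = lambda u: ( 1.0 - u, 0.0)
surf = rotational_blend(left, right)
print(surf(0.0, 0.0))  # (-1.0, 0.0, 0.0): v = 0 lands on the left profile
```

A half turn (v = pi) lands on the right profile, so the two sketched strokes are interpolated exactly, which is what makes the surface controllable from just a few strokes.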


Computer Graphics Forum | 2003

A Few Good Lines: Suggestive Drawing of 3D Models

Mario Costa Sousa; Przemyslaw Prusinkiewicz

We present a method for rendering 3D models in the traditional line-drawing style used in artistic and scientific illustrations. The goal is to suggest the 3D shape of the objects using a small number of lines drawn with carefully chosen line qualities. The system combines several known techniques into a simple yet effective non-photorealistic line renderer. Feature edges related to the outline and interior of a given 3D mesh are extracted, segmented, and smoothed, yielding chains of lines with varying path, length, thickness, gaps, and enclosures. The paper includes sample renderings obtained for a variety of models.
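The outline-related feature edges mentioned above are typically found with a standard silhouette test: an edge belongs to the outline when one of its adjacent faces points toward the viewer and the other points away. The sketch below is a generic version of that test (our names and mesh layout, not the paper's pipeline), for an orthographic view direction.

```python
def face_normal(verts, face):
    """Unnormalized normal of a triangle face (CCW winding)."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (verts[i] for i in face)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def silhouette_edges(verts, faces, view_dir):
    """Edges shared by one front-facing and one back-facing triangle
    with respect to view_dir (orthographic view)."""
    facing = []
    for f in faces:
        n = face_normal(verts, f)
        facing.append(sum(a * b for a, b in zip(n, view_dir)) < 0.0)
    edge_faces = {}
    for fi, f in enumerate(faces):
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            edge_faces.setdefault(frozenset((a, b)), []).append(fi)
    return [tuple(sorted(e)) for e, fs in edge_faces.items()
            if len(fs) == 2 and facing[fs[0]] != facing[fs[1]]]

# Two triangles folded so one faces the viewer and one faces away:
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.5, -1.0, 0.0)]
faces = [(0, 1, 2), (0, 1, 3)]
print(silhouette_edges(verts, faces, (0.0, 0.0, -1.0)))  # [(0, 1)]: the fold edge
```

The chains of lines described in the abstract would then come from linking such edges into paths before segmenting and smoothing them.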


Non-Photorealistic Animation and Rendering | 2006

Non-photorealistic rendering in context: an observational study

Tobias Isenberg; Petra Neumann; M. Sheelagh T. Carpendale; Mario Costa Sousa; Joaquim A. Jorge

Pen-and-ink line drawing techniques are frequently used to depict form, tone, and texture in artistic, technical, and scientific illustration. In non-photorealistic rendering (NPR), considerable progress has been made towards reproducing traditional pen-and-ink techniques for rendering 3D objects. However, formal evaluation and validation of these NPR images remain an important open research problem. In this paper we present an observational study with three groups of users to examine their understanding and assessment of hand-drawn pen-and-ink illustrations of objects in comparison with NPR renditions of the same 3D objects. The results show that people perceive differences between those two types of illustration but that those that look computer-generated are still highly valued as scientific illustrations.


International Conference on Computer Graphics and Interactive Techniques | 2010

A work-efficient GPU algorithm for level set segmentation

Mike Roberts; Mario Costa Sousa; Joseph Ross Mitchell

We present a novel GPU level set segmentation algorithm that is both work-efficient and step-efficient. Our algorithm: (1) has linear work-complexity and logarithmic step-complexity, both of which depend only on the size of the active computational domain and do not depend on the size of the level set field; (2) limits the active computational domain to the minimal set of changing elements by examining both the temporal and spatial derivatives of the level set field; (3) tracks the active computational domain at the granularity of individual level set field elements instead of tiles without performance penalty; and (4) employs a novel parallel method for removing duplicate elements from unsorted data streams in a constant number of steps. We apply our algorithm to 3D medical images and we demonstrate that in typical clinical scenarios, our algorithm reduces the total number of processed level set field elements by 16× and is 14× faster than previous GPU algorithms with no reduction in segmentation accuracy.
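The core idea of point (2) above, restricting work to the elements that are actually changing, can be shown in a toy serial form: compute the update only where the temporal derivative is non-negligible, so the cost scales with the active set rather than the whole field. This NumPy sketch is our own illustration of that principle, not the paper's GPU algorithm (names like `levelset_step` are hypothetical).

```python
import numpy as np

def levelset_step(phi, speed, dt=0.5):
    """One explicit level-set update restricted to an active narrow band:
    only elements whose update is non-negligible are rewritten
    (toy serial version of the work-efficient active-domain idea)."""
    # Spatial gradient magnitude via forward differences (zero-padded border).
    gx = np.zeros_like(phi)
    gy = np.zeros_like(phi)
    gx[:, :-1] = phi[:, 1:] - phi[:, :-1]
    gy[:-1, :] = phi[1:, :] - phi[:-1, :]
    grad = np.sqrt(gx ** 2 + gy ** 2)
    dphi = -speed * grad * dt            # temporal change over one step
    active = np.abs(dphi) > 1e-6         # the minimal set of changing elements
    new_phi = phi.copy()
    new_phi[active] += dphi[active]      # work scales with |active|, not field size
    return new_phi, active

phi = np.zeros((4, 4))
phi[:, 2:] = 1.0                         # a vertical interface in the field
new_phi, active = levelset_step(phi, speed=1.0)
print(int(active.sum()))                 # 4: only cells next to the interface change
```

On a GPU, the active set would additionally be compacted into a dense stream each iteration, which is where the paper's constant-step duplicate-removal method comes in.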


Interactive Tabletops and Surfaces | 2012

Eliciting usable gestures for multi-display environments

Teddy Seyed; Chris Burns; Mario Costa Sousa; Frank Maurer; Anthony Tang

Multi-display environments (MDEs) have advanced rapidly in recent years, incorporating multi-touch tabletops, tablets, wall displays and even position tracking systems. Designers have proposed a variety of interesting gestures for use in an MDE, some of which involve a user moving their hands, arms, body or even a device itself. These gestures are often used as part of interactions to move data between the various components of an MDE, which is a longstanding research problem. But designers, not users, have created most of these gestures and concerns over implementation issues such as recognition may have influenced their design. We performed a user study to elicit these gestures directly from users, but found a low level of convergence among the gestures produced. This lack of agreement is important and we discuss its possible causes and the implication it has for designers. To assist designers, we present the most prevalent gestures and some of the underlying conceptual themes behind them. We also provide analysis of how certain factors such as distance and device type impact the choice of gestures and discuss how to apply them to real-world systems.


Non-Photorealistic Animation and Rendering | 2009

Stippling by example

Sung Ye Kim; Ross Maciejewski; Tobias Isenberg; William M. Andrews; Wei Chen; Mario Costa Sousa; David S. Ebert

In this work, we focus on stippling as an artistic style and discuss our technique for capturing and reproducing stipple features unique to an individual artist. We employ a texture synthesis algorithm based on the gray-level co-occurrence matrix (GLCM) of a texture field. This algorithm uses a texture similarity metric to generate stipple textures that are perceptually similar to input samples, allowing us to better capture and reproduce stipple distributions. First, we extract example stipple textures representing various tones in order to create an approximate tone map used by the artist. Second, we extract the stipple marks and distributions from the extracted example textures, generating both a lookup table of stipple marks and a texture representing the stipple distribution. Third, we use the distribution of stipples to synthesize similar distributions with slight variations using a numerical measure of the error between the synthesized texture and the example texture as the basis for replication. Finally, we apply the synthesized stipple distribution to a 2D grayscale image and place stipple marks onto the distribution, thereby creating a stippled image that is statistically similar to images created by the example artist.
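The gray-level co-occurrence matrix at the heart of the similarity metric counts how often pairs of gray levels occur at a fixed pixel offset. Below is a minimal sketch of a GLCM plus one derived statistic (contrast); the function names are ours, the paper's metric is richer, and production code would typically use a library routine such as scikit-image's `graycomatrix`.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix: probability that gray level j
    occurs at offset (dx, dy) from gray level i."""
    h, w = image.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(m):
    """One GLCM texture statistic usable in a similarity metric:
    expected squared gray-level difference between co-occurring pixels."""
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

img = np.array([[0, 1], [1, 0]])   # tiny checkerboard with 2 gray levels
print(contrast(glcm(img, levels=2)))  # 1.0: horizontal neighbors always differ by 1
```

Matching such statistics between a synthesized stipple texture and an artist's example texture is what drives the replication step described above.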


Virtual Reality Software and Technology | 2008

Napkin sketch: handheld mixed reality 3D sketching

Min Xin; Ehud Sharlin; Mario Costa Sousa

This paper describes Napkin Sketch, a 3D sketching interface that attempts to support sketch-based artistic expression in 3D, mimicking some of the qualities of conventional sketching media and tools both in terms of physical properties and interaction experience. A portable tablet PC is used as the sketching platform, and handheld mixed reality techniques are employed to allow 3D sketches to be created on top of a physical napkin. Intuitive manipulation and navigation within the 3D design space is achieved by visually tracking the tablet PC with a camera and mixed reality markers. For artistic expression using sketch input, we improve upon the projective 3D sketching approach with a one-stroke sketch plane definition technique. This, coupled with the hardware setup, produces a natural and fluid sketching experience.


Non-Photorealistic Animation and Rendering | 2007

Single camera flexible projection

John Brosz; Faramarz F. Samavati; M. Sheelagh T. Carpendale; Mario Costa Sousa

We introduce a flexible projection framework that is capable of modeling a wide variety of linear, nonlinear, and hand-tailored artistic projections with a single camera. This framework introduces a unified geometry for all of these types of projections using the concept of a flexible viewing volume. With a parametric representation of the viewing volume, we obtain the ability to create curvy volumes, curvy near and far clipping surfaces, and curvy projectors. Through a description of the framework's geometry, we illustrate its capabilities to recreate existing projections and reveal new projection variations. Further, we apply two techniques for rendering the framework's projections: ray casting and a limited GPU-based scanline algorithm that achieves real-time results.

Collaboration


Dive into Mario Costa Sousa's collaborations.

Top Co-Authors
Luiz Velho

Instituto Nacional de Matemática Pura e Aplicada
