Marcelo Cohen
Pontifícia Universidade Católica do Rio Grande do Sul
Publication
Featured research published by Marcelo Cohen.
Conference on Computability in Europe | 2009
Rossana Baptista Queiroz; Marcelo Cohen; Soraia Raupp Musse
In this article we describe our approach to generating convincing and empathetic facial animation. Our goal is to develop a robust facial animation platform that is usable and can be easily extended. We also want to facilitate the integration of research in the area and to directly incorporate the characters into interactive applications such as embodied conversational agents and games. We have developed a framework capable of easily animating MPEG-4 parameterized faces through high-level descriptions of facial actions and behaviors. The animations can be generated in real time for interactive applications. We present some case studies that integrate computer vision techniques in order to provide interaction between the user and a character that reacts with different facial actions according to events in the application.
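For readers unfamiliar with MPEG-4 facial animation, the sketch below (not the authors' code; the class name, FAP ids and timing are illustrative assumptions) shows how a high-level facial action could be expanded into per-frame Facial Animation Parameter (FAP) values by simple interpolation, which is the general kind of mapping such a framework automates.

    # Illustrative sketch only: expanding a high-level facial action into per-frame
    # MPEG-4 FAP values. Class name, FAP ids and timings are assumptions, not the
    # framework's actual API.
    from dataclasses import dataclass

    @dataclass
    class FacialAction:
        name: str
        fap_targets: dict          # FAP id -> target amplitude (in FAPU units)
        duration_s: float          # time to reach the target amplitudes

    def fap_frames(action, fps=30):
        """Yield one {fap_id: value} dict per frame, ramping linearly to the targets."""
        n_frames = max(1, int(action.duration_s * fps))
        for f in range(1, n_frames + 1):
            t = f / n_frames
            yield {fap_id: t * amp for fap_id, amp in action.fap_targets.items()}

    smile = FacialAction("smile", {6: 120.0, 7: 120.0}, duration_s=0.5)  # lip-corner FAPs
    for frame in fap_frames(smile):
        pass  # in a real player these values would deform the parameterized face mesh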
BMC Genomics | 2011
Elisangela Ml Cohen; Karina S. Machado; Marcelo Cohen; Osmar Norberto de Souza
Background: Protein/receptor explicit flexibility has recently become an important feature of molecular docking simulations. Taking this flexibility into account brings the docking simulation closer to the receptor's real behaviour in its natural environment. Several approaches have been developed to address this problem. Among them, modelling the full flexibility as an ensemble of snapshots derived from a molecular dynamics (MD) simulation of the receptor has proved very promising. Despite its potential, however, only a few studies have employed this method to probe its effect in molecular docking simulations. We hereby use ensembles of snapshots obtained from three different MD simulations of the InhA enzyme from M. tuberculosis (Mtb), namely the wild type (InhA_wt) and the InhA_I16T and InhA_I21V mutants, to model their explicit flexibility and to systematically explore its effect in docking simulations with three different InhA inhibitors: ethionamide (ETH), triclosan (TCL), and pentacyano(isoniazid)ferrate(II) (PIF).
Results: The use of fully-flexible receptor (FFR) models of InhA_wt and the InhA_I16T and InhA_I21V mutants in docking simulations with the inhibitors ETH, TCL, and PIF revealed significant differences in the way they interact, as compared to the rigid InhA crystal structure (PDB ID: 1ENY). In the latter, only up to five receptor residues interact with the three different ligands. Conversely, in the FFR models this number grows to as many as 80 different residues. The comparison between the rigid crystal structure and the FFR models showed that the inclusion of explicit flexibility, despite the limitations of the FFR models employed in this study, contributes substantially to the induced fit expected when a protein/receptor and a ligand approach each other to interact in the most favourable manner.
Conclusions: Protein/receptor explicit flexibility, or FFR models, represented as an ensemble of MD simulation snapshots, can lead to a more realistic representation of the induced-fit effect expected in the encounter and proper docking of receptors to ligands. The FFR models of InhA explicitly characterize the overall movements of the amino acid residues in helices, strands, loops, and turns, allowing the ligand to accommodate itself properly in the receptor's binding site. Utilization of the intrinsic flexibility of Mtb's InhA enzyme and its mutants in virtual screening via molecular docking simulation may provide a novel platform to guide the rational or dynamical-structure-based drug design of novel inhibitors for Mtb's InhA. We have produced a short video sequence of each ligand (ETH, TCL and PIF) docked to the FFR models of InhA_wt. These videos are available at http://www.inf.pucrs.br/~osmarns/LABIO/Videos_Cohen_et_al_19_07_2011.htm.
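As a rough illustration of how an ensemble of MD snapshots enters such a docking experiment (a hedged sketch, not the study's pipeline; dock is a placeholder for whatever docking engine is used), one can dock the ligand against every snapshot and accumulate the residues it contacts:

    # Hedged sketch of ensemble (FFR) docking: dock one ligand against every MD
    # snapshot and collect the union of contacting residues. `dock` is a placeholder
    # callable, e.g. a wrapper around an external docking engine.
    def ensemble_docking(snapshots, ligand, dock):
        """dock(snapshot, ligand) is assumed to return (binding_energy, residues)."""
        results, contacts = [], set()
        for snap in snapshots:
            energy, residues = dock(snap, ligand)
            results.append((snap, energy))
            contacts.update(residues)            # grows as flexibility exposes new residues
        best = min(results, key=lambda r: r[1])  # most favourable (lowest) energy
        return best, contacts

With a single rigid structure the contact set stays small (the paper reports at most five residues for 1ENY), whereas across an FFR ensemble it can cover many more residues (up to about 80 here).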
Intelligent Virtual Agents | 2010
Rossana Baptista Queiroz; Adriana Braun; Juliano Lucas Moreira; Marcelo Cohen; Soraia Raupp Musse; Marcelo Thielo; Ramin Samadani
This paper presents a model to generate personalized facial animations for avatars using Performance Driven Animation (PDA). This approach allows users to reflect their facial expressions in their avatars, taking as input a small set of feature points provided by Computer Vision (CV) tracking algorithms. The model is based on the MPEG-4 Facial Animation standard and uses a hierarchy of animation parameters to animate face regions for which CV data are lacking. To deform the face, we use two skin mesh deformation methods, which are computationally cheap and provide avatar animation in real time. We performed a study with subjects in order to qualitatively evaluate our method. Results show that the proposed model can generate coherent and visually satisfactory animations.
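The sketch below is a simplified, assumed illustration (the names, the FAPU scale and the hierarchy table are not taken from the paper) of the two ideas the model combines: normalizing tracked feature-point displacements into MPEG-4 FAP values, and letting a parameter hierarchy fill in regions where the tracker provides no data.

    # Assumed illustration: displacements -> FAPs, with a parent/child FAP hierarchy
    # used to animate regions that lack Computer Vision data.
    FAPU = 100.0  # facial animation parameter unit, normally derived from face measurements

    # child FAP -> parent FAP to inherit from when the child has no tracked point
    HIERARCHY = {"raise_l_outer_eyebrow": "raise_l_inner_eyebrow"}

    def displacements_to_faps(tracked):
        """tracked: {fap_name: pixel displacement} for the feature points that were found."""
        faps = {name: d / FAPU for name, d in tracked.items()}
        for child, parent in HIERARCHY.items():
            if child not in faps and parent in faps:
                faps[child] = faps[parent]     # propagate motion down the hierarchy
        return faps

    print(displacements_to_faps({"raise_l_inner_eyebrow": 12.0}))
    # -> both eyebrow FAPs receive 0.12 even though only the inner point was tracked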
ACM Symposium on Applied Computing | 2008
Marcelo Cohen; Ken Brodlie; Nick Phillips
In many volume visualization applications there is a region of specific interest where we wish to see fine detail, yet we do not want to lose an impression of the overall picture. In this research we apply the notion of focus and context to texture-based volume rendering. We have developed a framework that enables users to achieve fast volumetric distortion and other effects of practical use. The framework has been implemented through direct programming of the graphics processor and integrated into a volume rendering system. Our driving application is the effective visualization of aneurysms, an important issue in neurosurgery. We have developed and evaluated an easy-to-use system that allows a neurosurgical team to explore the nature of cerebral aneurysms, visualizing the aneurysm itself in fine detail while still retaining a view of the surrounding vasculature.
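As a rough idea of what the volumetric distortion does (a CPU sketch in Python; the paper implements this on the graphics processor, and the parameter names here are assumptions), texture coordinates inside a spherical focus region can be pulled toward its centre so that a smaller portion of the data fills the same screen area, i.e. the focus is magnified while the context is untouched:

    # CPU illustration of a focus-and-context distortion of 3D texture coordinates.
    import numpy as np

    def distort(coords, centre, radius, magnification):
        """coords: (N,3) texture coordinates in [0,1]^3; returns the warped coordinates."""
        offset = coords - centre
        dist = np.linalg.norm(offset, axis=1, keepdims=True)
        inside = dist < radius
        # blend from 1/magnification at the centre to 1 at the focus boundary
        scale = np.where(inside,
                         (1.0 / magnification) + (1 - 1.0 / magnification) * (dist / radius),
                         1.0)
        return centre + offset * scale

    tex = np.random.rand(1000, 3)
    warped = distort(tex, centre=np.array([0.5, 0.5, 0.5]), radius=0.2, magnification=2.0)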
2009 VIII Brazilian Symposium on Games and Digital Entertainment | 2009
Henry Braun; Rafael Hocevar; Rossana Baptista Queiroz; Marcelo Cohen; Juliano Lucas Moreira; Júlio C. S. Jacques Júnior; Adriana Braun; Soraia Raupp Musse; Ramin Samadani
In this paper we present a platform called VhCVE, which integrates relevant issues related to Collaborative Virtual Environment applications. The main goal is to provide a framework where participants can interact with each other by voice and chat. Manipulation tools, such as a mouse driven by Computer Vision, and physics simulation are also included, as well as rendering techniques (e.g. light sources, shadows and weather effects). In addition, avatar animation in terms of face and body motion is provided. Results indicate that our platform can be used as an interactive virtual world to help communication among people.
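Purely as an illustration of the kind of per-frame integration such a platform needs (none of these names come from VhCVE), each subsystem, whether chat, voice, the CV-based mouse, physics, rendering or avatar animation, can be modelled as a component updated once per frame:

    # Illustrative only: a generic per-frame update loop for the platform's subsystems.
    class Component:
        def update(self, dt):
            pass                       # chat, voice, CV mouse, physics, renderer, avatars...

    class Session:
        def __init__(self):
            self.components = []       # registered subsystems

        def register(self, component):
            self.components.append(component)

        def tick(self, dt):
            for c in self.components:  # advance every subsystem by one frame
                c.update(dt)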
Brazilian Symposium on Computer Graphics and Image Processing | 2010
Rossana Baptista Queiroz; Marcelo Cohen; Juliano Lucas Moreira; Adriana Braun; Júlio C. S. Jacques Júnior; Soraia Raupp Musse
This work describes a methodology for generating facial ground truth with synthetic faces. Our focus is to provide a way to generate accurate data for the evaluation of Computer Vision algorithms that detect faces and their components. We present a prototype in which we can generate facial animation videos using a database of 3D face models, controlling face actions, illumination conditions and camera position. The facial animation platform allows us to generate animations with speech, facial expressions and eye motion, in order to approach realistic human face behavior. In addition, our model provides the ground truth of a set of facial feature points at each frame. As a result, we are able to build a video database of synthetic human faces with ground truth, which can be used for the training/evaluation of several algorithms for tracking and/or detection. We also present experiments using our generated videos to evaluate face, eye and mouth detection algorithms, comparing their performance with real video sequences.
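At its core, per-frame 2D ground truth for a synthetic face amounts to projecting the known 3D feature points through the rendering camera. The sketch below (not the prototype's code; the camera intrinsics and the number of landmarks are assumptions) shows that projection with a pinhole model:

    # Assumed sketch: project known 3D facial feature points into the rendered image,
    # producing the per-frame 2D ground-truth annotation.
    import numpy as np

    def project(points_3d, K, R, t):
        """points_3d: (N,3) model-space feature points -> (N,2) pixel coordinates."""
        cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera coordinates
        uvw = K @ cam                              # camera -> image plane
        return (uvw[:2] / uvw[2]).T

    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # illustrative intrinsics
    R, t = np.eye(3), np.array([0.0, 0.0, 2.0])                    # camera 2 units away
    feature_points = np.random.rand(66, 3) * 0.2                   # e.g. 66 facial landmarks
    ground_truth_2d = project(feature_points, K, R, t)             # stored with the frame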
Journal of Communication and Information Systems | 2015
John Soldera; José Bins; Marcelo Cohen; Julio Cesar Silveira Jacques; Soraia Raupp Musse; Cláudio Rosito Jung
This paper describes a new approach for event detection in video sequences. A tracking algorithm for oblique camera setups is initially used to extract trajectories in a training period, and a map of spatial occupancy of the scene is built. In the test stage, Voronoi Diagrams are used to obtain some information regarding interpersonal relationships, such as distances from neighbors, and formation and classification of groups. A variety of complex events can be detected through a query formulated by the user, that may combine concurrent or sequential occurrences of simpler events based on either spatial occupancy or interpersonal relationships (e.g. group formation in a region with small spatial occupancy). These queries can be used to detect events on-the-fly as the video is processed, or applied to stored video databases.
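As a hedged sketch of the Voronoi-based ingredients (not the paper's implementation; the thresholds, the occupancy lookup and the cell mapping are assumptions), neighbour relations and distances can be read off a Voronoi diagram of the tracked positions, and a simple query can then combine them with the learned spatial occupancy:

    # Hedged sketch: Voronoi neighbours of tracked people and a simple combined query.
    import numpy as np
    from scipy.spatial import Voronoi

    def neighbour_distances(positions):
        """positions: (N,2) ground-plane coordinates -> list of (i, j, distance)."""
        vor = Voronoi(positions)
        return [(i, j, float(np.linalg.norm(positions[i] - positions[j])))
                for i, j in vor.ridge_points]    # pairs whose Voronoi regions touch

    def group_in_quiet_region(positions, occupancy, cell, group_dist=1.0):
        """Fire when >= 3 close neighbours stand where training-time occupancy was low.
        occupancy is a {cell_key: frequency} map; cell maps a position to its key."""
        close = [(i, j) for i, j, d in neighbour_distances(positions) if d < group_dist]
        members = {k for pair in close for k in pair}
        return len(members) >= 3 and all(occupancy[cell(positions[m])] < 0.1
                                         for m in members)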
Archive | 2005
Marcelo Cohen; Ken Brodlie; Nick Phillips
Archive | 2010
Rossana Baptista Queiroz; Marcelo Cohen; Juliano Lucas Moreira; Adriana Braun; Júlio C. S. Jacques Júnior; Soraia Raupp Musse