Isaac Rudomin
Barcelona Supercomputing Center
Publications
Featured research published by Isaac Rudomin.
The Visual Computer | 2014
Leonel Toledo; Oriam De Gyves; Isaac Rudomin
Crowd simulation is typically an expensive task. We introduce a level of detail system, useful for varied animated crowds, capable of handling several thousand different animated characters at interactive frame rates. This is accomplished using two complementary structures to reduce memory consumption and optimize the rendering stage. The first structure is a skeleton, with associated octrees per limb, that is used for computing the level of detail of geometry and animation. The second structure is a tiling of the scene that is used to select each character’s level of detail for geometry, animation and behavior. A quadtree is built on top of this tiling and used for further rendering optimization, allowing us to combine geometry from different characters in parts of the scene that are far away from the camera. The system outperforms similar methods in memory requirements and/or complexity and is capable of rendering crowds composed of a quarter million characters.
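As a rough illustration of the tiling idea (all names and distance thresholds below are invented for this sketch, not taken from the paper), an LOD index for each tile can be chosen from its distance to the camera, with the coarsest level corresponding to the far tiles whose geometry is merged across characters:

```python
import math

def tile_lod(tile_center, camera_pos, thresholds=(20.0, 60.0, 150.0)):
    """Return an LOD index for a tile: 0 = full detail, rising with
    camera distance; beyond the last threshold, characters in the tile
    are candidates for merged, combined geometry."""
    d = math.dist(tile_center, camera_pos)
    for lod, limit in enumerate(thresholds):
        if d < limit:
            return lod
    return len(thresholds)  # coarsest level
```

Every character in a tile then inherits that tile's LOD for geometry, animation and behavior, so the per-character decision reduces to a cheap per-tile lookup.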
motion in games | 2013
Sergio Ruiz; Benjamín Hernández; Adriana Alvarado; Isaac Rudomin
Large-scale crowd simulation and visualization is crucial for the next generation of interactive virtual environments. Current authoring techniques produce good results but are laborious, and demand valuable graphics memory and computational resources beyond the reach of consumer-level hardware. In this paper, we propose a technique for generating animatable characters for crowds that reduces memory requirements. The first step consists of reducing, segmenting and labeling a data set of virtual characters into simpler body parts; labeling information is then manually generated and used to correctly match different body parts in order to generate new characters. The second step comprises a method to embed the rig and skinning information into the texture space shared among the new characters. Additional methods using color, skin feature, pattern, fat, wrinkle and textile fold maps are used to add more variety. Animation sequences are stored in auxiliary textures. These can be transferred between different characters, as well as between versions of the same characters with different levels of detail; such animations can be modified, and otherwise reused, increasing variety while reducing memory requirements. We demonstrate that our technique has four advantages: first, memory requirements are reduced by 91% when compared to traditional libraries; second, it can generate previously nonexistent characters from the original data set; third, embedding the rig and skinning into texture space allows painless animation transfer between different characters, and fourth, between different levels of detail of the same characters.
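The skinning data embedded in texture space is ultimately consumed by standard linear blend skinning. A minimal sketch of that blend, with plain arrays standing in for the texture fetches of bone indices, weights and matrices (the array layout here is illustrative, not the paper's encoding):

```python
import numpy as np

def skin_vertex(v, bone_indices, bone_weights, bone_matrices):
    """Linear blend skinning: blend a rest-pose vertex by the weighted
    transforms of the bones that influence it."""
    v_h = np.append(v, 1.0)                 # homogeneous coordinate
    out = np.zeros(4)
    for i, w in zip(bone_indices, bone_weights):
        out += w * (bone_matrices[i] @ v_h) # weighted bone transform
    return out[:3]
```

Because the indices, weights and matrices are all simple lookups, storing them in shared textures lets one animation drive any character that uses the same texture-space layout, which is what makes the transfer between characters and LOD versions painless.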
international conference on computer graphics theory and applications | 2014
Jorge Ivan Rivalcoba Rivas; Oriam De Gyves; Isaac Rudomin; Nuria Pelechano
Our objective with this paper is to show how we can couple a group of real people and a simulated crowd of virtual humans. We attach group behaviors to the simulated humans to get a plausible reaction to real people. We use a two-stage system: in the first stage, a group of people is segmented from live video, then a human detector algorithm extracts the positions of the people in the video, which are finally used to feed the second stage, the simulation system. The positions obtained by this process allow the second module to render the real humans as avatars in the scene, while the behavior of additional virtual humans is determined by a simulation based on a social forces model. Developing the method required three specific contributions: a GPU implementation of the codebook algorithm that includes an auxiliary codebook to make background subtraction robust to illumination changes; the use of semantic local binary patterns as a human descriptor; and the parallelization of a social forces model, in which we solve a case of agents merging with each other. The experimental results show how a large virtual crowd reacts to over a dozen humans in a real environment.
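The core matching step of the codebook background model can be sketched as follows (a simplified grayscale CPU version; the paper's contribution is a GPU implementation with an auxiliary codebook for illumination changes, which this sketch omits):

```python
def is_background(pixel, codebook, eps=10):
    """Match a pixel intensity against this pixel's codewords.
    Each codeword is a mutable [low, high] intensity range; a hit
    widens the range, a miss means the pixel is foreground."""
    for cw in codebook:
        if cw[0] - eps <= pixel <= cw[1] + eps:
            cw[0] = min(cw[0], pixel)   # adapt the codeword bounds
            cw[1] = max(cw[1], pixel)
            return True
    return False
```

Per-pixel codebooks like this are embarrassingly parallel, which is why the method maps well onto a GPU: every pixel's classification is independent of its neighbors.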
international conference on computational science | 2018
Leonel Toledo; Ivan Rivalcoba; Isaac Rudomin
In this work we present a system able to simulate crowds in complex urban environments. The system is built in two stages: urban environment generation and pedestrian simulation. For the first stage, we integrate the WRLD3D plug-in with real data collected from GPS traces. For the second, we use a hybrid approach that incorporates steering pedestrian behaviors, with the goal of simulating the subtle variations present in real scenarios without needing large amounts of data for low-level behaviors such as pedestrian motion affected by other agents and nearby static obstacles. Since realistic human behavior cannot be modeled with purely deterministic approaches, our simulations are data-driven and, where needed, combine finite state machines (FSM) with fuzzy logic to handle the uncertainty of human motion.
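To make the FSM-plus-fuzzy-logic combination concrete, here is a toy sketch (state names, membership function and thresholds are all invented for illustration, not taken from the paper): a fuzzy "crowdedness" degree, rather than a hard neighbor count, drives the transition between walking and waiting states.

```python
def crowdedness(neighbors, low=2, high=8):
    """Fuzzy membership in [0, 1]: 0 at or below `low` nearby
    pedestrians, 1 at or above `high`, linear in between."""
    if neighbors <= low:
        return 0.0
    if neighbors >= high:
        return 1.0
    return (neighbors - low) / (high - low)

def next_state(state, neighbors):
    """One FSM step: WALK yields to WAIT only when crowdedness
    saturates, and WAIT resumes WALK once it drops well below that."""
    mu = crowdedness(neighbors)
    if state == "WALK" and mu > 0.9:
        return "WAIT"
    if state == "WAIT" and mu < 0.5:
        return "WALK"
    return state
```

The asymmetric thresholds (0.9 to stop, 0.5 to resume) give hysteresis, avoiding the flip-flopping that a single crisp threshold would produce as the neighbor count fluctuates.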
international conference on supercomputing | 2015
Hugo Pérez; Benjamín Hernández; Isaac Rudomin; Eduard Ayguadé
Programmers need to combine different programming models and fully optimize their codes to take advantage of the various levels of parallelism available in heterogeneous clusters. To reduce the complexity of this process, we propose a task-based approach for crowd simulation using OmpSs, CUDA and MPI, which allows taking full advantage of the computational resources available in heterogeneous clusters. We also present a performance analysis of the algorithm under different workloads executed on a GPU cluster.
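The task-based decomposition can be illustrated in miniature (OmpSs, CUDA and MPI are not reproduced here; Python's thread pool merely stands in for the task runtime): the crowd is split into independent blocks, and updating each block is one schedulable task.

```python
from concurrent.futures import ThreadPoolExecutor

def update_block(block, dt=0.1):
    """Advance every agent in one block; each agent is a
    (position, velocity) pair integrated by one Euler step."""
    return [(p + v * dt, v) for p, v in block]

def step_crowd(agents, block_size=1024):
    """Partition the crowd into blocks and update them as
    independent tasks, then reassemble the agent list."""
    blocks = [agents[i:i + block_size]
              for i in range(0, len(agents), block_size)]
    with ThreadPoolExecutor() as pool:
        updated = pool.map(update_block, blocks)
    return [a for block in updated for a in block]
```

In a real task-based runtime the blocks would additionally carry data-dependency annotations, so the scheduler could place tasks on CPU cores or GPUs and overlap communication with computation.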
international conference on computer vision | 2014
Ivan Rivalcoba; Oriam De Gyves; Isaac Rudomin; Nuria Pelechano
Computación y Sistemas | 2013
Isaac Rudomin; Benjamín Hernández; Oriam De Gyves; Leonel Toledo; Ivan Rivalcoba; Sergio Ruiz
Computación y Sistemas | 2017
Carlos Alberto Ochoa Zezzatti; Isaac Rudomin; Genoveva Vargas Solar; Javier A. Espinosa-Oviedo; Hugo Pérez; José-Luis Zechinelli Martini
Computación y Sistemas | 2015
Benjamín Hernández; Hugo Pérez; Isaac Rudomin; Sergio Ruiz; Oriam De Gyves; Leonel Toledo
Archive | 2016
Hugo Pérez; Benjamín Hernández; Isaac Rudomin; Eduard Ayguadé