Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Oscar Meruvia-Pastor is active.

Publication


Featured research published by Oscar Meruvia-Pastor.


Image and Vision Computing New Zealand | 2014

DeReEs: Real-Time Registration of RGBD Images Using Image-Based Feature Detection And Robust 3D Correspondence Estimation and Refinement

Sahand Seifi; Afsaneh Rafighi; Oscar Meruvia-Pastor

We present DeReEs, a real-time RGBD registration algorithm for the scenario where multiple RGBD images of the same scene are obtained from depth-sensing cameras placed at different viewpoints, with partial overlaps between their views. DeReEs (Detection, Rejection and Estimation) combines 2D image-based feature detection, RANSAC-based false-correspondence rejection, and rigid 3D transformation estimation. DeReEs performs global registration not only in real time, but also supports large transformation distances for both translations and rotations. DeReEs is designed as part of a virtual/augmented reality solution for a remote 3D collaboration system that does not require initial setup and allows users to freely move the cameras during use. We present comparisons of DeReEs with other common registration algorithms. Our results suggest that DeReEs provides better speed and accuracy, especially in scenes with partial overlap.
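The rejection-and-estimation stage described above can be sketched in a few lines. This is not the published DeReEs implementation, only a minimal NumPy illustration of the standard approach: the rigid transform is solved with the Kabsch/SVD method, and false 3D correspondences are rejected with a basic RANSAC loop (the iteration count and inlier threshold are illustrative choices, not values from the paper):

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate the rigid transform (R, t) mapping src onto dst
    via the Kabsch/SVD least-squares solution over 3D points."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def ransac_rigid(src, dst, iters=200, thresh=0.05, rng=None):
    """Reject false correspondences with RANSAC, then refine the
    transform on the final inlier set."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = rigid_transform(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = rigid_transform(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers
```

Given matched 3D points from the 2D feature-detection stage, `ransac_rigid` returns the transform that aligns one camera's point cloud with the other's, along with the surviving inlier mask.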


Journal of Graphics Tools | 2010

Adaptive Incremental Stippling using the Poisson-Disk Distribution

Ignacio Ascencio-Lopez; Oscar Meruvia-Pastor; Hugo Hidalgo-Silva

Recently, efficient algorithms have been published for generating large point sets with a Poisson-disk distribution. With their blue-noise spectral characteristics, Poisson-disk distributions are considered to produce visually pleasing patterns. Some applications, e.g., non-photorealistic rendering (NPR), require, in addition to efficiency, the production of aesthetically pleasing point sets adapted to an arbitrary image or function. We present a novel linear-order stippling method that generates a set of points with a Poisson-disk distribution adapted to arbitrary images, and compare this method with existing methods using two quantitative evaluation metrics, radial mean and anisotropy, to assess the technique.
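The core idea of adaptive Poisson-disk stippling can be illustrated with a naive dart-throwing sketch (the paper's linear-order method is more sophisticated; this is only a conceptual toy, and the radius range and attempt count are invented parameters): the disk radius shrinks where the input image is dark, so stipples concentrate in dark regions while still keeping a minimum separation.

```python
import numpy as np

def adaptive_poisson_disk(density, r_min=2.0, r_max=8.0, attempts=5000, rng=None):
    """Naive adaptive dart throwing: `density` is an array in [0, 1]
    (1 = darkest). A candidate point gets radius r_max where density
    is 0 and r_min where density is 1, and is accepted only if no
    previously accepted point lies within that radius."""
    rng = rng or np.random.default_rng(0)
    h, w = density.shape
    points = []
    for _ in range(attempts):
        x, y = rng.uniform(0, w), rng.uniform(0, h)
        d = density[min(int(y), h - 1), min(int(x), w - 1)]
        r = r_max - d * (r_max - r_min)
        if all((px - x) ** 2 + (py - y) ** 2 >= r * r for px, py in points):
            points.append((x, y))
    return np.array(points)
```

Because every accepted point keeps at least `r_min` from its neighbors, the result retains the even, clump-free look of a Poisson-disk set while the local point count follows the image.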


International Journal of Biomedical Imaging | 2011

Estimating cell count and distribution in labeled histological samples using incremental cell search

Oscar Meruvia-Pastor; Jung Soh; Eric J. Schmidt; Julia C. Boughner; Mei Xiao; Heather A. Jamniczky; Benedikt Hallgrímsson; Christoph W. Sensen

Cell proliferation is critical to the outgrowth of biological structures including the face and limbs. This cellular process has traditionally been studied via sequential histological sampling of these tissues. The length and tedium of traditional sampling is a major impediment to analyzing the large datasets required to accurately model cellular processes. Computerized cell localization and quantification is critical for high-throughput morphometric analysis of developing embryonic tissues. We have developed the Incremental Cell Search (ICS), a novel software tool that expedites the analysis of relationships between morphological outgrowth and cell proliferation in embryonic tissues. Based on an estimated average cell size and stain color, ICS rapidly indicates the approximate location and number of cells in histological images of labeled embryonic tissue and provides estimates of cell counts in regions with saturated fluorescence and blurred cell boundaries. This capacity opens the door to high-throughput 3D and 4D quantitative analyses of developmental patterns.
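The key idea, estimating counts from average cell size even where boundaries are blurred, can be sketched as follows. This is a hypothetical simplification, not the ICS algorithm: stained regions are labeled by flood fill, and each region contributes its area divided by the average cell area, so a saturated blob of merged cells still counts as several cells.

```python
import numpy as np
from collections import deque

def estimate_cell_count(stain_mask, avg_cell_area):
    """Label connected stained regions with a BFS flood fill, then
    estimate the number of cells in each region as area / avg_cell_area
    (at least one cell per region)."""
    h, w = stain_mask.shape
    seen = np.zeros_like(stain_mask, dtype=bool)
    total = 0
    for sy in range(h):
        for sx in range(w):
            if stain_mask[sy, sx] and not seen[sy, sx]:
                area, q = 0, deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    area += 1
                    for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and stain_mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                total += max(1, round(area / avg_cell_area))
    return total
```

In practice the binary `stain_mask` would come from thresholding the image by the estimated stain color; here it is taken as given.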


Sensors | 2017

Augmented reality as a telemedicine platform for remote procedural training

Shiyao Wang; Michael Parsons; Jordan Stone-McLean; Peter Rogers; Sarah E. Boyd; Kristopher Hoover; Oscar Meruvia-Pastor; Minglun Gong; Andrew Smith

Traditionally, rural areas in many countries are limited by a lack of access to health care due to the inherent challenges associated with recruitment and retention of healthcare professionals. Telemedicine, which uses communication technology to deliver medical services over distance, is an economical and potentially effective way to address this problem. In this research, we develop a new telepresence application using an Augmented Reality (AR) system. We explore the use of the Microsoft HoloLens to facilitate and enhance remote medical training. Intrinsic advantages of AR systems enable remote learners to perform complex medical procedures such as Point of Care Ultrasound (PoCUS) without visual interference. This research uses the HoloLens to capture the first-person view of a simulated rural emergency room (ER) through mixed reality capture (MRC) and serves as a novel telemedicine platform with remote pointing capabilities. The mentor's hand gestures are captured using a Leap Motion and virtually displayed in the AR space of the HoloLens. To explore the feasibility of the developed platform, twelve novice medical trainees were guided by a mentor through a simulated ultrasound exploration in a trauma scenario, as part of a pilot user study. The study explores the utility of the system from the trainees', mentor's, and objective observers' perspectives and compares the findings to those of a more traditional multi-camera telemedicine solution. The results obtained provide valuable insight and guidance for the development of an AR-supported telemedicine platform.


International Conference on Computer Graphics and Interactive Techniques | 2015

Automatic and adaptable registration of live RGBD video streams

Afsaneh Rafighi; Sahand Seifi; Oscar Meruvia-Pastor

We introduce DeReEs-4V, an algorithm that receives two separate RGBD video streams and automatically produces a unified scene through RGBD registration in a few seconds. The motivation behind the solution presented here is to allow game players to place the depth-sensing cameras at arbitrary locations to capture any scene where there is some partial overlap between the parts of the scene captured by the sensors. A typical way to combine partially overlapping views from multiple cameras is through visual calibration using external markers within the field of view of both cameras. Calibration can be time-consuming and may require fine-tuning, interrupting gameplay. If the cameras are even slightly moved or bumped, the calibration process typically needs to be repeated from scratch. In this article, we demonstrate how RGBD registration can be used to automatically find a 3D viewing transformation that matches the view of one camera with respect to the other, without calibration, while the system is running. To validate this approach, a comparison of our method against standard checkerboard-target calibration is provided, with a thorough examination of system performance under different scenarios. The system presented supports any application that might benefit from video capture with a wider operational field of view. Our results show that the system is robust to camera movements while simultaneously capturing and registering live point clouds from two depth-sensing cameras.


OCEANS Conference | 2014

Robot arm manipulation using depth-sensing cameras and inverse kinematics

Akhilesh Kumar Mishra; Oscar Meruvia-Pastor

In this work we propose a new technique to manipulate a robotic arm, using a depth camera to capture the user's input and inverse kinematics to define the motion of the robotic arm. The presented technique is inexpensive to implement and easier to learn than current methods. Along with easier manipulation of the robotic arm, the presented approach also adds simple speech and gesture commands to control the end-effector, which makes the interaction more intuitive.
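The inverse-kinematics step maps a target position (here, derived from the tracked hand) to joint angles. The paper does not specify the arm model, so the following is a generic, hypothetical illustration for a planar two-link arm, where the closed-form solution is standard:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics for a planar two-link arm:
    return joint angles (theta1, theta2) placing the end-effector
    at target (x, y). Elbow-down solution."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1, l2):
    """Forward kinematics, useful to verify an IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

Feeding the hand position reported by the depth camera into such a solver yields the joint angles that drive the arm toward the user's hand.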


PeerJ | 2017

GeNET: a web application to explore and share Gene Co-expression Network Analysis data

Amit P. Desai; Mehdi Razeghin; Oscar Meruvia-Pastor; Lourdes Peña-Castillo

Gene Co-expression Network Analysis (GCNA) is a popular approach to analyze a collection of gene expression profiles. GCNA yields an assignment of genes to gene co-expression modules, a list of gene sets statistically over-represented in these modules, and a gene-to-gene network. There are several computer programs for gene-to-gene network visualization, but these programs have limitations in terms of integrating all the data generated by a GCNA and making these data available online. To facilitate sharing and study of GCNA data, we developed GeNET. For researchers interested in sharing their GCNA data, GeNET provides a convenient interface to upload their data and automatically make it accessible to the public through an online server. For researchers interested in exploring GCNA data published by others, GeNET provides an intuitive online tool to interactively explore GCNA data by genes, gene sets or modules. In addition, GeNET allows users to download all or part of the published data for further computational analysis. To demonstrate the applicability of GeNET, we imported three published GCNA datasets, the largest of which consists of roughly 17,000 genes and 200 conditions. GeNET is available at bengi.cs.mun.ca/genet.
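The gene-to-gene network at the heart of a GCNA can be sketched in miniature. This is not GeNET's pipeline, just a common minimal construction (the 0.8 threshold is an arbitrary illustrative choice): correlate expression profiles and keep gene pairs whose absolute Pearson correlation clears a threshold.

```python
import numpy as np

def coexpression_network(expr, threshold=0.8):
    """Build a gene-to-gene network by thresholding absolute Pearson
    correlation. Rows of `expr` are genes, columns are conditions.
    Returns the boolean adjacency matrix and the edge list."""
    corr = np.corrcoef(expr)
    adj = np.abs(corr) >= threshold
    np.fill_diagonal(adj, False)           # no self-edges
    n = len(adj)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if adj[i, j]]
    return adj, edges
```

Modules would then be found by clustering this network; GeNET's role is to make the resulting modules, gene sets, and network browsable online.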


BMC Medical Imaging | 2010

Building generic anatomical models using virtual model cutting and iterative registration

Mei Xiao; Jung Soh; Oscar Meruvia-Pastor; Eric J. Schmidt; Benedikt Hallgrímsson; Christoph W. Sensen

Background: Using 3D generic models to statistically analyze trends in biological structure changes is an important tool in morphometrics research. Therefore, 3D generic models built for a range of populations are in high demand. However, due to the complexity of biological structures and the limited views of them that medical images can offer, it is still an exceptionally difficult task to quickly and accurately create 3D generic models (a model is a 3D graphical representation of a biological structure) based on medical image stacks (a stack is an ordered collection of 2D images). We show that the creation of a generic model that captures spatial information exploitable in statistical analyses is facilitated by coupling our generalized segmentation method to existing automatic image registration algorithms.
Methods: The method of creating generic 3D models consists of the following processing steps: (i) scanning subjects to obtain image stacks; (ii) creating individual 3D models from the stacks; (iii) interactively extracting sub-volumes by cutting each model to generate the sub-model of interest; (iv) creating image stacks that contain only the information pertaining to the sub-models; (v) iteratively registering the corresponding new 2D image stacks; (vi) averaging the newly created sub-models based on intensity to produce the generic model from all the individual sub-models.
Results: After several registration procedures are applied to the image stacks, we can create averaged image stacks with sharp boundaries. The averaged 3D model created from those image stacks is very close to the average representation of the population. The image registration time varies depending on the image size and the desired accuracy of the registration. Both volumetric data and a surface model for the generic 3D model are created at the final step.
Conclusions: Our method is flexible and easy to use: anyone can use image stacks to create models and retrieve sub-regions from them with ease. The Java-based implementation allows our method to be used on various visualization systems, including personal computers, workstations, computers equipped with stereo displays, and even virtual reality rooms such as the CAVE Automated Virtual Environment. The technique allows biologists to build generic 3D models of interest quickly and accurately.


International Conference on Augmented Reality, Virtual Reality and Computer Graphics | 2017

Operating Virtual Panels with Hand Gestures in Immersive VR Games

Yin Zhang; Oscar Meruvia-Pastor

Portable depth-sensing cameras allow users to control interfaces using hand gestures at short range from the camera. These technologies are being combined with virtual reality (VR) headsets to produce immersive VR experiences that respond more naturally to user actions. In this research, we explore gesture-based interaction in immersive VR games by using the Unity game engine, the Leap Motion sensor, a laptop, a smartphone, and the Freefly VR headset. By avoiding Android deployment, this novel setup allowed for fast prototyping and testing of different ideas for immersive VR interaction, at an affordable cost. We implemented a system that allows users to play a game in a virtual world and compared placements of the Leap Motion sensor on the desk and on the headset. In this experimental setup, users interacted with a numeric dial panel and then played a Tetris game inside the VR environment by pressing the buttons of a virtual panel. The results suggest that, although the tracking quality of the Leap Motion sensor was rather limited when used in the head-mounted setup for pointing and selection tasks, its performance was much better in the desk-mounted setup, providing a novel platform for research and rapid application development.


International Conference of Design, User Experience, and Usability | 2016

Evaluation of an Inverse-Kinematics Depth-Sensing Controller for Operation of a Simulated Robotic Arm

Akhilesh Kumar Mishra; Lourdes Peña-Castillo; Oscar Meruvia-Pastor

Interaction using depth-sensing cameras has many applications in computer vision and spatial manipulation tasks. We present a user study that compares a short-range depth-sensing camera-based controller with an inverse-kinematics keyboard controller and a forward-kinematics joystick controller for two placement tasks. The study investigated ease of use, user performance and user preferences. Task completion times were recorded and insights on the measured and perceived advantages and disadvantages of these three alternative controllers from the perspective of user efficiency and satisfaction were obtained. The results indicate that users performed equally well using the depth-sensing camera and the keyboard controllers. User performance was significantly better with these two approaches than with the joystick controller, the reference method used in comparable commercial simulators. Most participants found that the depth-sensing camera controller was easy to use and intuitive, but some expressed discomfort stemming from the pose required for interaction with the controller.

Collaboration


Dive into Oscar Meruvia-Pastor's collaborations.

Top Co-Authors

Jung Soh (University of Calgary)
Mei Xiao (University of Calgary)
Christoph W. Sensen (Graz University of Technology)
Lourdes Peña-Castillo (Memorial University of Newfoundland)
Afsaneh Rafighi (Memorial University of Newfoundland)
Sahand Seifi (Memorial University of Newfoundland)
Akhilesh Kumar Mishra (Memorial University of Newfoundland)
Amit P. Desai (Memorial University of Newfoundland)