Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Roberto Canas is active.

Publication


Featured research published by Roberto Canas.


Journal of Intelligent Manufacturing | 2008

Developing alternative design concepts in VR environments using volumetric self-organizing feature maps

Philip C. Igwe; George K. Knopf; Roberto Canas

The conceptual design process has not benefited from conventional computer-aided design (CAD) technology to the same degree as embodiment design because the creative activities associated with developing and communicating alternative solutions, with minimal detail, are far less formulaic in their implementation. Any CAD system that seeks to support and enhance conceptual design must, therefore, enable natural and haptic modes of human–computer interaction. A computational framework for economically representing deformable solid objects for conceptual design is described in this paper. The physics-based deformation model consists of a set of point masses, connected by a series of springs and dampers, which undergo movement through the influence of external and internal forces. The location of each mass point corresponds to a node on a 3D mesh defined by a volumetric self-organizing feature map (VSOFM). A reference mesh is first created by fitting the exterior nodes of the VSOFM to sampled data from the surface of a primitive shape, such as a cube, and then redistributing the interior nodes to reflect evenly spaced hexahedral elements. Material properties are introduced into the mesh by assigning a mass value to individual nodes and spring coefficients to the nodal connections. Several illustrations involving the redesign of an ergonomic writing pen are used to demonstrate how the proposed virtual reality-based modeling system will permit the industrial designer to interactively change the shape and function of a design concept.
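
For readers who want a concrete picture of the deformation model, the following is a minimal sketch of a mass-spring-damper update over mesh nodes. The function name, parameters, and integration scheme (semi-implicit Euler) are assumptions chosen for illustration; this shows the general technique, not the authors' VSOFM implementation.

```python
import numpy as np

def step(pos, vel, mass, springs, rest_len, k, c, f_ext, dt=1e-3):
    """Advance a mass-spring-damper mesh by one time step (semi-implicit Euler).

    pos, vel : (n, 3) node positions and velocities
    mass     : (n,)   nodal masses
    springs  : (m, 2) index pairs of connected nodes
    rest_len : (m,)   spring rest lengths
    k, c     : spring stiffness and damping coefficients
    f_ext    : (n, 3) external forces (e.g., from the designer's haptic tool)
    """
    force = f_ext.copy()
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]                                   # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    direction = d / np.maximum(length, 1e-12)
    # Hooke spring force plus damping along the spring axis
    rel_vel = np.sum((vel[j] - vel[i]) * direction, axis=1, keepdims=True)
    f = (k * (length - rest_len[:, None]) + c * rel_vel) * direction
    np.add.at(force, i,  f)                               # pulls node i toward j when stretched
    np.add.at(force, j, -f)                               # equal and opposite on node j
    vel = vel + dt * force / mass[:, None]
    pos = pos + dt * vel
    return pos, vel
```

Repeatedly calling this step while an external force models the sculpting tool would deform the mesh; stability and apparent material behavior depend on the chosen dt, k, and c.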


Computer-Aided Design | 2008

Efficient algorithm to detect collision between deformable B-spline surfaces for virtual sculpting

Harish Pungotra; George K. Knopf; Roberto Canas

A structured computational framework to efficiently detect collision between deformable freeform shapes in a VR environment is proposed in this paper. The deformable shape is represented as a B-spline surface and no assumption is made with regard to the degree of the surface, extent of deformation or virtual material properties. The proposed technique calculates and stores transformation matrices and their inverse during preprocessing, which are then used to discretize the B-spline surfaces. It exploits the fact that the transformation matrices for calculating discrete points on the B-spline are independent of the position of control points and therefore can be pre-calculated. The intensity of the points is dynamically increased at lower levels of detail as per accuracy requirements, and finally the regions that are likely to undergo collision are tessellated using these points. Spheres are used to determine lower levels of detail, which makes this algorithm highly suitable for multiple contact collision detection. The algorithm efficiently calculates tangents and surface normals at these points. The surface normals give inside/outside property to the triangulated region and tangents provide the necessary information to model tangential forces such as frictional forces. The proposed algorithm is especially suitable for sculpting during concept design and its validation before exchanging information with existing CAD software for detailed design. A comparison based on the worst-case scenario is presented to demonstrate the efficiency of the proposed algorithm.
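
The key computational point, that the B-spline basis (blending) matrices depend only on the knots and parameter samples and not on the control points, can be sketched as below. The knot vector, sampling density, and use of SciPy's BSpline.design_matrix are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import BSpline   # design_matrix requires SciPy >= 1.8

degree = 3
interior = np.linspace(0.0, 1.0, 5)[1:-1]                              # three interior knots (assumed)
knots = np.r_[[0.0] * (degree + 1), interior, [1.0] * (degree + 1)]    # clamped knot vector
u = np.linspace(0.0, 1.0, 40)                                          # parameter samples (assumed density)

# Pre-processing: the blending matrices are built once and reused every frame.
Bu = BSpline.design_matrix(u, knots, degree).toarray()                 # (40, n_ctrl)
Bv = Bu.copy()                                                         # same knots/samples in v

# Per frame: control points move as the surface deforms, but refreshing the
# discretized surface points is just two small matrix products.
n_ctrl = len(knots) - degree - 1
ctrl = np.random.rand(n_ctrl, n_ctrl, 3)                               # (nu, nv, xyz) control net
surface_pts = np.einsum('iu,uvk,jv->ijk', Bu, ctrl, Bv)                # sampled surface points
```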


Computer-Aided Design | 2010

Merging multiple B-spline surface patches in a virtual reality environment

Harish Pungotra; George K. Knopf; Roberto Canas

Although a number of different algorithms have been described in the literature for merging two or more B-spline/Bézier curves and stitching B-spline surfaces, these techniques are not suitable for virtual reality applications that require the user to effortlessly combine multiple dissimilar patches in real-time to create the final object shape. This paper presents a novel approach for merging arbitrary B-spline surfaces within a very low tolerance limit. The technique exploits blending matrices that are independent of the control point positions and, hence, can be pre-calculated prior to haptic interaction. Once determined, the pre-calculated blending matrices are used to generate discrete points on the B-spline surface. When two or more surfaces are merged, these discrete point matrices are combined to form a single matrix that represents the resultant shape. By using the inverse of the revised blending matrices and the combined discrete point matrix, a new set of control points can be directly computed. The merged surface can be made to have C^0, C^1, or higher continuity at the joining edge. A brief study comparing the proposed merging technique with a commercially available CAD system is presented and the results show improved computational efficiency, accuracy, and robustness.
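
A rough illustration of the control-point recovery step: once the combined discrete point matrix of the merged shape is assembled, new control points follow from a least-squares solve against the pre-computed blending matrix. A curve is used instead of a surface for brevity, and the names and synthetic target below are assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline   # design_matrix requires SciPy >= 1.8

degree, n_ctrl = 3, 8
interior = np.linspace(0.0, 1.0, n_ctrl - degree + 1)[1:-1]
knots = np.r_[[0.0] * (degree + 1), interior, [1.0] * (degree + 1)]
t = np.linspace(0.0, 1.0, 200)
B = BSpline.design_matrix(t, knots, degree).toarray()        # pre-computed blending matrix

# Stand-in for the combined discrete point matrix of the merged shape.
merged_pts = np.c_[t, np.sin(2 * np.pi * t)]

# New control points from a least-squares solve against the blending matrix.
ctrl, *_ = np.linalg.lstsq(B, merged_pts, rcond=None)
fit = BSpline(knots, ctrl, degree)(t)
print(np.abs(fit - merged_pts).max())                        # residual, i.e. the merge tolerance
```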


Journal of Infrastructure Systems | 2014

Effect of Past Delivery Practices on Current Conditions of Cast-Iron Water Pipes

Balvant Rajani; John Dickinson; Henry Xue; Paul Woodard; Roberto Canas

Cast-iron pipes installed between 1850 and the early 1960s in North America, the United Kingdom, and Europe were produced in foundries located near growing urban centers. Their considerable weight, size (especially larger-diameter pipes), and limited transportation facilities made their handling and delivery to the installation site difficult. Historical anecdotal evidence exists to suggest that some cast-iron pipes may have been damaged during delivery. This paper uses different mechanical models to examine what specific conditions may have led to pipe damage during delivery and installation. Analyses show that if pipes did incur damage, then cracks were likely to have occurred on the inside of the pipe bell or spigot ends. Furthermore, it appears that the spigot ends of smaller-diameter pipes had a higher risk of damage during delivery, whereas both bell and spigot ends faced increased risk of damage in larger-diameter pipes. Monte Carlo simulations were conducted to account for uncert...
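
As a generic illustration of the Monte Carlo uncertainty propagation mentioned above: sample the uncertain inputs, push each sample through a response model, and count how often a damage criterion is exceeded. The distributions and the stress expression below are invented placeholders, not the paper's mechanical models of bell and spigot damage.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
wall = rng.normal(12.0, 1.0, n)                 # wall thickness, mm (assumed distribution)
load = rng.lognormal(np.log(8.0), 0.3, n)       # handling/impact load, kN (assumed distribution)
strength = rng.normal(150.0, 15.0, n)           # tensile strength, MPa (assumed distribution)

stress = 1e3 * load / (wall * 6.0)              # placeholder stress estimate, MPa
p_damage = np.mean(stress > strength)           # fraction of sampled pipes exceeding strength
print(f"estimated damage probability: {p_damage:.3f}")
```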


SPIE Newsroom | 2011

Reconstructing complex scenes for virtual reality

George K. Knopf; Kuldeep K. Sareen; Roberto Canas

Digital reconstruction of existing objects and complex 3D environments is often necessary for developing realistic virtual reality scenes. Three-dimensional range scanners capture shape information about complex objects and real-world environments as immense data clouds comprising discrete 3D surface points and their corresponding RGB (red, green, blue) color values. To improve the usability of the raw point clouds, the data must be broken into meaningful clusters [1]. Once properly segmented, it is possible to recreate individual objects and successfully model the scanned space. Accurate segmentation of building interiors [2, 3] and architectural shapes [1, 4] poses unique challenges due to the presence of multiple objects, partially occluded geometry, and vast geometric diversity. Unfortunately, segmentation algorithms that rely on pure surface geometry [3], prior-shape knowledge [5, 6], and simplified shape approximations [5, 7] have difficulty handling such complex point clouds. As a result, they often generate only primitive shape information through piecewise approximation of planar surfaces. We propose a hierarchical clustering algorithm [8] that exploits color and geometry to improve clustering reliability. This shape-based hierarchy manages geometric diversity by extracting large, planar regions (e.g., walls, floor, and ceiling) and small, freeform regions (interior complex objects) in two successive stages. Each hierarchical stage uses its own geometric-complexity-driven algorithmic parameters to handle planar and complex regions alike. The segmentation accuracy is further improved by investigating the color (RGB) along with the geometry of spatial data points (XYZ). This additional similarity measure ensures coherent clustering even in geometrically uncertain areas. It also helps in identifying unique data clusters representing multiple objects with similar overlapping geometries. We demonstrated the approach on a colored point cloud acquired from an office room with multiple objects (table, chairs, monitor, printer, statue head, and so on) and colored sheets (Figure 1).

Figure 1. Colored point cloud generation using a stationary range scanner and its mapped colored digital pictures.
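
A toy sketch of the color-plus-geometry idea: appending a weighted hue channel to the spatial coordinates lets touching objects with similar shape but different color fall into separate clusters. DBSCAN and the weighting below are stand-ins chosen for illustration, not the hierarchical algorithm described in the article.

```python
import colorsys
import numpy as np
from sklearn.cluster import DBSCAN

def segment(xyz, rgb, hue_weight=0.5, eps=0.05, min_samples=20):
    """Cluster points on combined spatial + color features (all parameters assumed)."""
    hue = np.array([colorsys.rgb_to_hsv(*c)[0] for c in rgb / 255.0])
    features = np.c_[xyz, hue_weight * hue]            # XYZ plus a weighted hue channel
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)

# labels = segment(points_xyz, points_rgb)             # one cluster label per scanned point
```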


Optical Engineering | 2011

Hierarchical data clustering approach for segmenting colored three-dimensional point clouds of building interiors

Kuldeep K. Sareen; George K. Knopf; Roberto Canas

A range scan of a building's interior typically produces an immense cloud of colorized three-dimensional data that represents diverse surfaces ranging from simple planes to complex objects. To create a virtual reality model of the preexisting room, it is necessary to segment the data into meaningful clusters. Unfortunately, segmentation algorithms based solely on surface curvature have difficulty in handling such diverse interior geometries, occluded boundaries, and closely placed objects with similar curvature properties. The proposed two-stage hierarchical clustering algorithm overcomes many of these challenges by exploiting the registered color and spatial information simultaneously. Large planar regions are initially identified using constraints that combine color (hue) with a measure of local planarity called the planar alignment factor. This stage assigns 72 to 84% of the sampled points to clusters representing flat surfaces such as walls, ceilings, or floors. The significantly reduced set of remaining points is then clustered using local surface normal and hue deviation information. A local-density-driven investigation distance (fixed density distance) is used for normal computation and cluster expansion. The methodology is tested on colorized range data of a typical room interior. The combined approach enabled the successful segmentation of planar and complex geometries in both dense and sparse data regions.
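
The first-stage planarity test can be sketched with a PCA-based local flatness score combined with hue. The score and thresholds below are assumptions standing in for the paper's planar alignment factor and tuned parameters.

```python
import colorsys
import numpy as np
from scipy.spatial import cKDTree

def planarity_and_hue(xyz, rgb, k=30):
    """Per-point local flatness score in [0, 1] and hue in [0, 1)."""
    tree = cKDTree(xyz)
    _, idx = tree.query(xyz, k=k)                      # k nearest neighbours per point
    planarity = np.empty(len(xyz))
    for i, nbrs in enumerate(idx):
        cov = np.cov(xyz[nbrs].T)                      # 3x3 covariance of the neighbourhood
        evals = np.sort(np.linalg.eigvalsh(cov))       # l0 <= l1 <= l2
        planarity[i] = 1.0 - evals[0] / max(evals.sum(), 1e-12)   # ~1 for flat patches
    hue = np.array([colorsys.rgb_to_hsv(*c)[0] for c in rgb / 255.0])
    return planarity, hue

# Stage 1 would then grow planar clusters from points that are both locally flat
# and close in hue to a seed surface (thresholds are assumed):
# planar_mask = (planarity > 0.98) & (np.abs(hue - seed_hue) < 0.05)
```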


International Conference on E-Learning and Games | 2009

Construction Knowledge Transfer through Interactive Visualization

Paul Woodard; Shafee Ahamed; Roberto Canas; John Dickinson

Changing population demographics and infrastructure demands are having a significant impact on the average level of worker expertise in the North American construction sector. Experienced employees with specialized knowledge are leaving the workforce, and their replacements are required to install and maintain a broader variety of complex systems. Because of this, it is imperative that construction knowledge be quickly and effectively transferred to practitioners through educational processes. However, recent history has demonstrated that traditional techniques may not be effective at transferring sufficient knowledge to eliminate many common mistakes. It has been suggested that new forms of knowledge transfer may be more effective and result in fewer construction errors, especially those which result from installing components out of sequence. In this paper, the authors describe efforts to adapt a traditional paper-based best practice guide into an interactive 3-D tool that can be used on a variety of devices, from laptop computers to commercially available entertainment systems.


International Journal of Shape Modeling | 2009

Contour-based 3D point data simplification for freeform surface reconstruction

Kuldeep K. Sareen; George K. Knopf; Roberto Canas

Three-dimensional clouds of largely unorganized coordinate data are often used to reconstruct freeform surfaces and shapes for a variety of seemingly diverse reverse engineering applications involving computer-aided design, anatomical reconstruction, cartography, digital archaeology, and infrastructural renewal. The point cloud data acquired by non-contact digitizers is very dense and includes numerous scanning errors. As a consequence, the captured data must be filtered and simplified for accurate surface reconstruction. Many existing data simplification techniques are, however, complex and do not directly support the development of spline-based surface models. In this paper, a novel contour-based simplification algorithm is introduced for creating B-spline facial surface models directly from scanned data. The algorithm first extracts a series of equally-spaced sectioned contours from an unorganized 3D point cloud by mapping points onto a set of user-defined parallel planes. Each extracted contour is then regenerated as a cubic B-spline curve with a reduced number of control points using a user-defined reduction ratio. A freeform surface is finally created from these contiguous reconstructed contours by a lofting process. Deviation analysis that compares the final reconstructed surface to the original point cloud data is used to demonstrate the effectiveness of the proposed algorithm. The results show that the proposed algorithm generates a fairly accurate spline-based surface model from unstructured points using less than 20% of the actual scanned data. Surface accuracy is enhanced with an increased number of initial contours and a greater second-stage data reduction ratio.
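
The two main steps, slicing the cloud with parallel planes and refitting each contour as a cubic B-spline with fewer control points, might look like the sketch below. SciPy's smoothing parametric spline (splprep) stands in for the paper's reduction-ratio fit, and the ordering-by-angle step assumes roughly star-shaped contours.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def slice_contour(points, z0, tol=1.0):
    """Points within +/- tol of the plane z = z0, ordered by angle about the centroid."""
    band = points[np.abs(points[:, 2] - z0) < tol]
    c = band[:, :2].mean(axis=0)
    order = np.argsort(np.arctan2(band[:, 1] - c[1], band[:, 0] - c[0]))
    return band[order]

def refit_contour(contour, smoothing=5.0, n_samples=100):
    """Cubic B-spline refit of one contour; a larger `smoothing` value yields
    fewer knots/control points, i.e. stronger data reduction."""
    tck, _ = splprep([contour[:, 0], contour[:, 1]], s=smoothing, k=3)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    return np.c_[x, y]

# Lofting the stack of refitted contours (one per cutting plane) would then
# produce the final freeform surface.
```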


Journal of Information Technology in Construction | 2011

Game-based trench safety education: development and lessons learned

John Dickinson; Paul Woodard; Roberto Canas; Shafee Ahamed; Doug Lockston


Computer-Aided Civil and Infrastructure Engineering | 2012

Consistent Point Clouds of Narrow Spaces Using Multiscan Domain Mapping

Kuldeep K. Sareen; George K. Knopf; Roberto Canas

Collaboration


Dive into Roberto Canas's collaborations.

Top Co-Authors

George K. Knopf
University of Western Ontario

Harish Pungotra
University of Western Ontario

Kuldeep K. Sareen
University of Western Ontario

John Dickinson
National Research Council

Paul Woodard
National Research Council

Shafee Ahamed
National Research Council

Balvant Rajani
National Research Council

Henry Xue
National Research Council

Philip C. Igwe
University of Western Ontario