Mingshao Zhang
Stevens Institute of Technology
Publications
Featured research published by Mingshao Zhang.
ASME 2013 International Mechanical Engineering Congress and Exposition | 2013
Zhou Zhang; Mingshao Zhang; Yizhe Chang; El-Sayed Aziz; Sven K. Esche; Constantin Chassapis
Over the last few years, academic institutions have started to explore the potential of using computer game engines for developing virtual laboratory environments. Recent studies have shown that developing a realistic visualization of a physical laboratory space poses a number of challenges. A significant number of modifications are required for adding customized interactions that are not built into the game engine itself. For example, a major challenge in creating a realistic virtual environment using a computer game engine is the process of preparing and converting custom models for integration into the environment, which is too complicated to be performed by untrained users. This paper describes the usage of the Microsoft Kinect for rapidly creating a 3D model of an object for implementation in a virtual environment by retrieving the object’s depth and RGB information. A laboratory experiment was selected to demonstrate how real experimental components are reconstructed and embedded into a game-based virtual laboratory by using the Kinect. The users are then able to interact with the experimental components. This paper presents both the technical details of the implementation and some initial results of the system validation.
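The abstract above describes retrieving the object's depth and RGB information with the Kinect to build a 3D model. The core geometric step, back-projecting a depth image into a 3D point cloud with the pinhole camera model, can be sketched as follows (the intrinsic parameters here are illustrative placeholders, not values from the paper):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an Nx3 point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Tiny synthetic example: a 2x2 depth image with every pixel 1 m away.
depth = np.ones((2, 2))
pts = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
print(pts.shape)  # (4, 3)
```

In practice the RGB image would then be registered to these points to texture the reconstructed model before it is imported into the game engine.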
ASME 2012 International Mechanical Engineering Congress and Exposition | 2012
Serdar Tumkor; Mingshao Zhang; Zhou Zhang; Yizhe Chang; Sven K. Esche; Constantin Chassapis
While real-time remote experiments have been used in engineering and science education for over a decade, more recently virtual learning environments based on game systems have been explored for their potential usage in educational laboratories. However, combining the advantages of both these approaches and integrating them into an effective learning environment has not been reported yet. One of the challenges in creating such a combination is to overcome the barriers between real and virtual systems, i.e. to select compatible platforms, to achieve an efficient mapping between the real world and the virtual environment and to arrange for efficient communications between the different system components. This paper will present a pilot implementation of a multi-player game-based virtual laboratory environment that is linked to the remote experimental setup of an air flow rig. This system is designed for a junior-level mechanical engineering laboratory on fluid mechanics. In order to integrate this remote laboratory setup into the virtual laboratory environment, an existing remote laboratory architecture had to be redesigned. The integrated virtual laboratory platform consists of two main parts, namely an actual physical experimental device controlled by a remote controller and a virtual laboratory environment that was implemented using the ‘Source’ game engine, which forms the basis of the commercially available computer game ‘Half-Life 2’, in conjunction with ‘Garry’s Mod’ (GM). The system implemented involves a local device controller that exchanges data in the form of shared variables and Dynamic Link Library (DLL) files with the virtual laboratory environment, thus establishing the control of real physical experiments from inside the virtual laboratory environment. The application of a combination of C++ code, Lua scripts [1] and LabVIEW Virtual Instruments makes the platform very flexible and expandable.
This paper will present the architecture of this platform and discuss the general benefits of virtual environments that are linked with real physical devices.
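The abstract above does not detail the data exchange between the virtual environment and the remote device controller. A minimal loopback sketch of the idea, with an assumed JSON message format and illustrative names (`fan_speed`, `flow_rate`) rather than the paper's actual shared-variable protocol, might pass a setpoint to a controller stub and read back a sensor value:

```python
import json
import socket
import threading

def controller_stub(server_sock):
    """Stand-in for the remote device controller: accepts one connection,
    reads a JSON setpoint, and replies with a simulated flow reading."""
    conn, _ = server_sock.accept()
    with conn:
        request = json.loads(conn.recv(1024).decode())
        # Pretend the flow rate is proportional to the fan-speed setpoint.
        reply = {"flow_rate": 0.02 * request["fan_speed"]}
        conn.sendall(json.dumps(reply).encode())

# Loopback server on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
threading.Thread(target=controller_stub, args=(server,), daemon=True).start()

# "Virtual laboratory" side: send a setpoint, receive the reading.
client = socket.create_connection(server.getsockname())
client.sendall(json.dumps({"fan_speed": 1500}).encode())
reading = json.loads(client.recv(1024).decode())
client.close()
print(reading)  # {'flow_rate': 30.0}
```

The paper's actual implementation routes such data through LabVIEW shared variables and DLL calls rather than raw sockets; the sketch only illustrates the request/response pattern between the two sides.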
ASME 2015 International Mechanical Engineering Congress and Exposition | 2015
Zhou Zhang; Mingshao Zhang; Yizhe Chang; Sven K. Esche; Constantin Chassapis
Virtual laboratories are one popular form of implementation of virtual reality. They are now widely used at various levels of education. Game-based virtual laboratories created using game engines take advantage of the resources of these game engines. While providing convenience to developers of virtual laboratories, game engines also exhibit the following shortcomings: (1) they are not realistic enough, (2) they require long design and modification periods, (3) they lack customizability and flexibility, and (4) they are endowed with only limited artificial intelligence. These shortcomings render game-based virtual laboratories (and other virtual laboratories) inferior to traditional laboratories. This paper proposes a smart method for developing game-based virtual laboratories that overcomes these shortcomings. In this method, 3D reconstruction and pattern recognition techniques are adopted. 3D reconstruction techniques are used to create a virtual environment, which includes virtual models of real objects and a virtual space. These techniques can render this virtual environment fairly realistic, reduce the time and effort of creating it and increase the flexibility of its creation. Furthermore, pattern recognition techniques are used to endow game-based virtual laboratories with general artificial intelligence. The scanned objects can be recognized, and certain attributes of real objects can be added automatically to their virtual models. In addition, the emphasis of the experiments can be adjusted according to the users’ abilities in order to achieve better training results. As a prototype, an undergraduate student laboratory was developed and implemented. Finally, additional improvements in the approach for creating game-based virtual laboratories are discussed.
ASME 2014 International Mechanical Engineering Congress and Exposition | 2014
Yizhe Chang; El-Sayed Aziz; Zhou Zhang; Mingshao Zhang; Sven K. Esche; Constantin Chassapis
Mechanical assembly activities involve multiple factors including humans, mechanical parts, tools and assembly environments. In order to simulate assembly processes by computers for educational purposes, all these factors should be considered. Virtual reality (VR) technology, which aims to integrate natural human motion into real-world scenarios, provides an ideal simulation medium. Novel VR devices such as 3D glasses, motion-tracking gloves, haptic sensors, etc. are able to fulfill fundamental assembly simulation needs. However, most of these implementations focus on assembly simulations for computer-aided design, which are geared toward professionals rather than students, thus leading to complicated assembly procedures not suitable for students. Furthermore, the costs of these novel VR devices and specifically designed VR platforms represent an untenable financial burden for most educational institutions. In this paper, a virtual platform for mechanical assembly education based on the Microsoft Kinect sensor and Garry’s Mod (GMod) is presented. With the help of the Kinect’s body tracking function and voice recognition technology in conjunction with the graphics and physics simulation capabilities of GMod, a low-cost VR platform that enables educators to author their own assembly simulations was implemented. This platform utilizes the Kinect as the sole input device. Students can use voice commands to navigate their avatars inside of a GMod-powered virtual laboratory as well as use their body’s motions to integrate pre-defined mechanical parts into assemblies. Under this platform, assembly procedures involving the picking, placing and attaching of parts can be performed collaboratively by multiple users. In addition, the platform allows collaborative learning without the need for the learners to be co-located.
A pilot study for this platform showed that, with the instructor’s assistance, mechanical engineering undergraduate students are able to complete basic assembly operations.
ASME 2013 International Mechanical Engineering Congress and Exposition | 2013
Mingshao Zhang; Zhou Zhang; El-Sayed Aziz; Sven K. Esche; Constantin Chassapis
The Microsoft Kinect is part of a wave of new sensing technologies. Its RGB-D camera is capable of providing high quality synchronized video of both color and depth data. Compared to traditional 3-D tracking techniques that use two separate RGB cameras’ images to calculate depth data, the Kinect is able to produce more robust and reliable results in object recognition and motion tracking. Also, due to its low cost, the Kinect provides more opportunities for use in many areas compared to traditional, more expensive 3-D scanners. In order to use the Kinect as a range sensor, algorithms must be designed to first recognize objects of interest and then track their motions. Although a large number of algorithms for both 2-D and 3-D object detection have been published, reliable and efficient algorithms for 3-D object motion tracking are rare, especially when using the Kinect as a range sensor. In this paper, algorithms for object recognition and tracking that can make use of both RGB and depth data in different scenarios are introduced. Subsequently, efficient methods for scene segmentation including background and noise filtering are discussed. Taking advantage of those two kinds of methods, a prototype system that is capable of working efficiently and stably in various applications related to educational laboratories is presented.
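The abstract above mentions scene segmentation with background filtering but does not give the algorithm. One common depth-based approach, shown here as an assumed illustration rather than the authors' actual method, removes the background by thresholding each pixel against a pre-captured empty-scene depth image:

```python
import numpy as np

def segment_foreground(depth, background, tolerance=0.05):
    """Mark pixels as foreground where the depth differs from a
    pre-captured empty-scene background by more than `tolerance` meters.
    Zero-depth pixels (Kinect dropouts) are treated as invalid noise."""
    valid = (depth > 0) & (background > 0)
    return valid & (np.abs(depth - background) > tolerance)

background = np.full((4, 4), 2.0)   # empty scene: a wall at 2 m
depth = background.copy()
depth[1:3, 1:3] = 1.2               # an object placed at 1.2 m
depth[0, 0] = 0.0                   # a sensor dropout pixel
mask = segment_foreground(depth, background)
print(int(mask.sum()))  # 4 foreground pixels
```

The resulting boolean mask can then be used to restrict object recognition and motion tracking to the foreground points only.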
Archive | 2018
Zhou Zhang; Mingshao Zhang; Yizhe Chang; El-Sayed Aziz; Sven K. Esche; Constantin Chassapis
Over the last decade, the research community has expended substantial effort on designing, agreeing on, and rolling out technical standards and powerful universal development tools that allow the rapid and cost-effective integration of specific experimental devices into standardized remote laboratory platforms. In this chapter, a virtual laboratory system with experimental hardware in the loop is described.
ASME 2016 International Mechanical Engineering Congress and Exposition, IMECE 2016 | 2016
Zhou Zhang; Mingshao Zhang; Yizhe Chang; Sven K. Esche; Constantin Chassapis
Virtual laboratories are used in professional skill development, online education, and corporate training. There are several aspects that determine the effectiveness and popularity of virtual laboratories: (i) the benefits brought to the users compared with those provided by traditional physical hands-on laboratories, (ii) the cost of adopting a virtual laboratory, which includes the costs of creating the virtual environment and developing virtual experiments, and (iii) the operation, which includes the communication between trainers and trainees, the authentication and remote proctoring of the trainees, etc. At present, the procedures of building and operating a virtual laboratory are still tedious, time-consuming and resource-intensive, thus considerably limiting the potential applications and popularization of virtual laboratories. In this paper, a virtual laboratory built and operated with 3D reconstruction and biometric authentication is introduced, and an evaluation of the feasibility of the proposed approaches is presented. 3D reconstruction techniques are used to create the virtual environment of this virtual laboratory. The traditional tools used to survey the real world are replaced by a hand-held camera. Then, all of the information acquired by this hand-held camera is processed. Finally, the virtual environment of the virtual laboratory is generated automatically in real time. Biometric authentication techniques (here, facial recognition techniques) are used to create a remote proctor. The general logic and basic algorithms used to enable biometric authentication and remote proctoring are described. When using this virtual laboratory, the students log in by capturing their face with a camera. While performing a laboratory exercise, they sit in front of the camera and the virtual laboratory system monitors their facial expressions and the motion of their head in order to identify suspicious behaviors.
Upon detection of such suspicious behaviors, the system records a video for further analysis by the laboratory administrator.
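The abstract above does not reproduce the monitoring algorithms. As a purely illustrative toy, not the paper's facial-recognition pipeline, a frame-differencing heuristic can flag frames where the camera image changes abruptly, e.g. when the student suddenly leaves the field of view:

```python
import numpy as np

def motion_score(prev_frame, frame):
    """Mean absolute per-pixel change between consecutive grayscale frames."""
    return float(np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))))

def flag_suspicious(frames, threshold=20.0):
    """Return indices of frames whose motion score exceeds the threshold;
    a real proctor would instead track the face and head pose over time."""
    return [i for i in range(1, len(frames))
            if motion_score(frames[i - 1], frames[i]) > threshold]

still = np.full((8, 8), 100, dtype=np.uint8)
moved = np.full((8, 8), 180, dtype=np.uint8)   # large sudden change
frames = [still, still, moved, moved]
print(flag_suspicious(frames))  # [2]
```

In the system described above, such flagged moments would trigger video recording for later review by the laboratory administrator.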
ASME 2014 International Mechanical Engineering Congress and Exposition | 2014
Mingshao Zhang; Zhou Zhang; Sven K. Esche; Constantin Chassapis
Since its introduction in 2010, Microsoft’s Kinect input device for game consoles and computers has shown its great potential in a large number of applications, including game development, research and education. Many of these implementations are still in the prototype stages and exhibit a somewhat limited performance. These limitations are mainly caused by the quality of the point clouds generated by the Kinect, which include limited range, high dependency on surface properties, shadowing, low depth accuracy, etc. One of the Kinect’s most significant limitations is the low accuracy and high error associated with its point cloud. The severity of these defects varies with the points’ locations in the Kinect’s camera coordinate system. The available traditional algorithms for processing point clouds are based on the assumption that input point clouds are perfect and have the same characteristics throughout the entire point cloud. In the first part of this paper, the Kinect’s point cloud properties (including resolution, depth accuracy, noise level and error) and their dependency on the point pixel location will be systematically studied. Second, the Kinect’s calibration, both by hardware and software approaches, will be explored and methods for improving the quality of its output point clouds will be identified. Then, modified algorithms adapted to the Kinect’s unique properties will be introduced. This approach allows the output point cloud properties to be judged in a quantifiable manner and traditional computer vision algorithms to be modified by adjusting their assumptions regarding the input cloud properties to the actual parameters of the Kinect. Finally, the modified algorithms will be tested in a prototype application, which shows that the Kinect does have the potential for successful usage in educational applications if the corresponding algorithms are designed properly.
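The abstract above notes that the Kinect's depth error varies with a point's location. A common simplification in the calibration literature, used here as an assumption rather than the paper's own model, is that random depth error grows quadratically with distance, so distant points can be down-weighted accordingly:

```python
import numpy as np

def depth_noise_sigma(z, a=2.85e-5):
    """Approximate the Kinect's random depth error (meters) as
    sigma = a * z^2, growing quadratically with distance z.
    The coefficient `a` is illustrative, not a calibrated value."""
    return a * np.square(z)

def point_weights(points):
    """Inverse-variance weights so distant, noisier points count less."""
    sigma = depth_noise_sigma(points[:, 2])
    w = 1.0 / np.square(sigma)
    return w / w.sum()  # normalize the weights to sum to 1

points = np.array([[0.0, 0.0, 1.0],
                   [0.0, 0.0, 2.0],
                   [0.0, 0.0, 4.0]])
w = point_weights(points)
print(int(w.argmax()))  # 0: the nearest point gets the largest weight
```

Feeding such weights into a registration or fitting step is one way to adjust a traditional algorithm's uniform-noise assumption to the Kinect's actual behavior, in the spirit of the modifications the paper describes.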
International Journal of Online Engineering (ijoe) | 2013
Zhou Zhang; Mingshao Zhang; Serdar Tumkor; Yizhe Chang; Sven K. Esche; Constantin Chassapis
2013 ASEE Annual Conference & Exposition | 2013
Mingshao Zhang; Zhou Zhang; Sven K. Esche