
Publication


Featured research published by Long Qian.


Computer Assisted Radiology and Surgery | 2017

Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display

Long Qian; Alexander Barthel; Alex Johnson; Greg Osgood; Peter Kazanzides; Nassir Navab; Bernhard Fuerst

Purpose: Optical see-through head-mounted displays (OST-HMD) feature an unhindered and instantaneous view of the surgery site and can enable a mixed reality experience for surgeons during procedures. In this paper, we present a systematic approach to identify the criteria for evaluation of OST-HMD technologies for specific clinical scenarios, which benefit from using an object-anchored 2D-display visualizing medical information.

Methods: Criteria for evaluating the performance of OST-HMDs for visualization of medical information and its usage are identified and proposed. These include text readability, contrast perception, task load, frame rate, and system lag. We choose to compare three commercially available OST-HMDs, which are representatives of currently available head-mounted display technologies. A multi-user study and an offline experiment are conducted to evaluate their performance.

Results: Statistical analysis demonstrates that Microsoft HoloLens performs best among the three tested OST-HMDs in terms of contrast perception, task load, and frame rate, while the ODG R-7 offers similar text readability. The integration of indoor localization and fiducial tracking on the HoloLens provides significantly less system lag in a relatively motionless scenario.

Conclusions: With ever more OST-HMDs appearing on the market, the proposed criteria could be used in the evaluation of their suitability for mixed reality surgical intervention. Currently, Microsoft HoloLens may be more suitable than the ODG R-7 and Epson Moverio BT-200 for clinical use in terms of the evaluated criteria. To the best of our knowledge, this is the first paper that presents a methodology and conducts experiments to evaluate and compare OST-HMDs for their use as an object-anchored 2D-display during interventions.
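Two of the proposed criteria, frame rate and system lag, can be roughly estimated from host-side timestamps around the render loop. A generic sketch only: the paper's offline experiment used its own measurement setup, and host-side timing like this cannot capture true end-to-end display lag on an OST-HMD.

```python
import time

def frame_timing(render_frame, n=120):
    """Estimate average frame rate and frame time by timestamping a
    render callback over n frames. A host-side approximation only:
    true system lag needs external (e.g. camera-based) measurement."""
    stamps = []
    for _ in range(n + 1):
        render_frame()
        stamps.append(time.perf_counter())
    deltas = [b - a for a, b in zip(stamps, stamps[1:])]
    mean_dt = sum(deltas) / len(deltas)
    return 1.0 / mean_dt, mean_dt
```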


IEEE Virtual Reality Conference | 2017

Robust optical see-through head-mounted display calibration: Taking anisotropic nature of user interaction errors into account

Ehsan Azimi; Long Qian; Peter Kazanzides; Nassir Navab

Uncertainty in the measurement of point correspondences negatively affects the accuracy and precision of head-mounted display (HMD) calibration. In general, the distribution of alignment errors for optical see-through calibration is not isotropic, and one can estimate its distribution based on the interaction requirements of a given calibration process and the user's measurable head motion and hand-eye coordination characteristics. Current calibration methods, however, mostly utilize the Direct Linear Transformation (DLT) method, which minimizes Euclidean distances for HMD projection matrix estimation, disregarding the anisotropy in the alignment errors. We utilize the error covariance in order to take the anisotropic nature of the error distribution into account. The main hypothesis of this study is that using the Mahalanobis distance within the nonlinear optimization can improve the accuracy of the HMD calibration. The simulation results indicate that our new method outperforms the standard DLT method in both accuracy and precision, and is more robust against user alignment errors. To the best of our knowledge, this is the first time that anisotropic noise has been accommodated in optical see-through HMD calibration.
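One way to realize this idea in code is to whiten each 2D alignment residual by its error covariance, so that a standard nonlinear least-squares solver minimizes Mahalanobis rather than Euclidean distance. A minimal sketch; the covariance handling and solver setup here are assumptions, not the authors' implementation:

```python
import numpy as np

def whitened_residuals(p_flat, pts3d, pts2d, cov):
    """Reprojection residuals of a 3x4 projection matrix, whitened by
    each correspondence's 2x2 error covariance: minimizing the squared
    norm of these residuals minimizes the Mahalanobis distance
    e^T C^{-1} e instead of the Euclidean ||e||^2 used by plain DLT."""
    P = p_flat.reshape(3, 4)
    h = np.c_[pts3d, np.ones(len(pts3d))] @ P.T
    err = h[:, :2] / h[:, 2:3] - pts2d          # raw 2D alignment errors
    out = []
    for e, C in zip(err, cov):
        # C^{-1/2} via eigendecomposition (C symmetric positive definite)
        w, V = np.linalg.eigh(C)
        out.append(V @ np.diag(w ** -0.5) @ V.T @ e)
    return np.concatenate(out)
```

Feeding these residuals to a solver such as `scipy.optimize.least_squares`, initialized from the standard DLT estimate, is one plausible way to set up the nonlinear optimization the abstract describes.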


International Symposium on Mixed and Augmented Reality | 2016

Reduction of Interaction Space in Single Point Active Alignment Method for Optical See-Through Head-Mounted Display Calibration

Long Qian; Alexander Winkler; Bernhard Fuerst; Peter Kazanzides; Nassir Navab

With users always involved in the calibration of optical see-through head-mounted displays, the accuracy of calibration is subject to human-related errors, for example, postural sway, an unstable input medium, and fatigue. In this paper we propose a new calibration approach: fixed-head 2 degree-of-freedom (DOF) interaction for the Single Point Active Alignment Method (SPAAM) reduces the interaction space from a typical 6 DOF head motion to a 2 DOF cursor position on the semi-transparent screen. It uses a mouse as the input medium, which is more intuitive and stable, and reduces user fatigue by simplifying and speeding up the calibration procedure. A multi-user study confirmed the significant reduction of human-related error by comparing our novel fixed-head 2 DOF interaction to the traditional interaction methods for SPAAM.
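Whether the alignments come from 6 DOF head motion or the proposed 2 DOF cursor interaction, SPAAM ultimately estimates a 3x4 projection matrix from the collected 2D-3D correspondence pairs. A minimal sketch of that final DLT step, in its standard textbook formulation (not the authors' code):

```python
import numpy as np

def spaam_dlt(pts3d, pts2d):
    """Estimate the 3x4 eye-display projection matrix from 2D-3D
    alignment pairs via the Direct Linear Transformation.
    Needs at least 6 correspondences in general position."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # solution (up to scale) = right singular vector of smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)
```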


International Symposium on Mixed and Augmented Reality | 2016

Modeling Physical Structure as Additional Constraints for Stereoscopic Optical See-Through Head-Mounted Display Calibration

Long Qian; Alexander Winkler; Bernhard Fuerst; Peter Kazanzides; Nassir Navab

For stereoscopic optical see-through head-mounted display calibration, existing methods that calibrate both eyes at the same time depend heavily on the HMD user's unreliable depth perception. On the other hand, treating both eyes separately requires the user to perform twice the number of alignment tasks, and does not satisfy the physical structure of the system. This paper introduces a novel method that models the physical structure as additional constraints and explicitly solves for the intrinsic and extrinsic parameters of the stereoscopic system by optimizing a unified cost function. The calibration does not involve the user's unreliable depth alignment, and lessens the burden of user interaction.
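A unified cost function of this kind can be sketched as a joint objective: per-eye reprojection error plus a penalty tying the recovered eye centers to the known inter-display baseline. The parameterization below, two raw 3x4 matrices and a single baseline penalty, is a simplification for illustration, not the paper's actual constraint model:

```python
import numpy as np

def camera_center(P):
    """Optical center of a 3x4 projection matrix P = [M | p4]: C = -M^{-1} p4."""
    return -np.linalg.inv(P[:, :3]) @ P[:, 3]

def stereo_cost(params, pts3d, uv_left, uv_right, baseline, weight=1.0):
    """Joint calibration objective for both eyes of a stereo OST-HMD:
    reprojection error of each eye plus a structural penalty keeping
    the distance between the recovered eye centers at the known baseline."""
    P_l = params[:12].reshape(3, 4)
    P_r = params[12:].reshape(3, 4)
    X = np.c_[pts3d, np.ones(len(pts3d))]

    def reproj(P, uv):
        h = X @ P.T
        return np.sum((h[:, :2] / h[:, 2:3] - uv) ** 2)

    sep = np.linalg.norm(camera_center(P_l) - camera_center(P_r))
    return reproj(P_l, uv_left) + reproj(P_r, uv_right) + weight * (sep - baseline) ** 2
```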


IEEE Virtual Reality Conference | 2017

Prioritization and static error compensation for multi-camera collaborative tracking in augmented reality

Jianren Wang; Long Qian; Ehsan Azimi; Peter Kazanzides

An effective and simple method is proposed for multi-camera collaborative tracking, based on the prioritization of all tracking units, and then modeling the discrepancy between different tracking units as a locally static transformation error. Static error compensation is applied to the lower-priority tracking systems when high-priority trackers are not available. The method does not require high-end or carefully calibrated tracking units, and is able to effectively provide a comfortable augmented reality experience for users. A pilot study demonstrates the validity of the proposed method.
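The scheme can be sketched as: while the high-priority tracker is visible, cache the transform between it and each lower-priority tracker; when it drops out, apply the cached, locally static correction to the best remaining tracker. A minimal sketch with homogeneous 4x4 poses; the interfaces are illustrative, not the paper's implementation:

```python
import numpy as np

class PrioritizedTrackers:
    """Trackers ordered highest priority first; each exposes read(),
    returning a 4x4 pose or None when it has lost tracking."""

    def __init__(self, trackers):
        self.trackers = trackers
        self.corrections = {}   # tracker name -> cached 4x4 correction

    def pose(self):
        readings = [(t.name, t.read()) for t in self.trackers]
        top_name, top_pose = readings[0]
        if top_pose is not None:
            # high-priority tracker visible: refresh the locally static
            # correction for every lower-priority tracker also reporting
            for name, T in readings[1:]:
                if T is not None:
                    self.corrections[name] = top_pose @ np.linalg.inv(T)
            return top_pose
        # fall back to the best available tracker, compensated by the
        # last cached static transformation error
        for name, T in readings[1:]:
            if T is not None:
                return self.corrections.get(name, np.eye(4)) @ T
        return None
```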


Emerging Technologies and Factory Automation | 2015

An Ethernet to FireWire bridge for real-time control of the da Vinci Research Kit (dVRK)

Long Qian; Zihan Chen; Peter Kazanzides

In this paper, a real-time control network based on Ethernet and FireWire is presented, where Ethernet provides a convenient, cross-platform interface between a central control PC and a FireWire subnetwork that contains multiple distributed nodes (I/O boards). Real-time performance is achieved because this architecture limits the number of Ethernet transactions on the host PC, benefits from the availability of real-time Ethernet drivers, and uses the broadcast and peer-to-peer capabilities of FireWire to efficiently transfer data among the distributed nodes. This approach and resulting benefits are comparable to EtherCAT, but preserves existing investments in FireWire-based controllers and relies only on conventional, vendor-neutral network hardware and protocols. The system performance is demonstrated on the da Vinci Research Kit (dVRK), which consists of 8 FireWire nodes that control 2 Master Tool Manipulators (MTMs) and 2 Patient Side Manipulators (PSMs), for a total of 28 axes. This approach is generally applicable to interface existing FireWire-based systems to new control PCs via Ethernet or to serve as an open-source alternative to EtherCAT for new designs.
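The transaction-count saving comes from aggregating all axes' data into single frames, so the host issues one Ethernet write per control cycle instead of one transaction per node. A toy illustration of that packing idea; the field layout below (a sequence number plus one 16-bit DAC value per axis) is hypothetical and is not the dVRK wire format:

```python
import struct

NUM_AXES = 28  # 2 MTMs + 2 PSMs on the dVRK

def pack_command_frame(seq, dac_values):
    """Pack one cycle's commands for all axes into a single big-endian
    buffer: a uint32 sequence number followed by one int16 per axis."""
    assert len(dac_values) == NUM_AXES
    return struct.pack(f'>I{NUM_AXES}h', seq, *dac_values)

def unpack_command_frame(frame):
    """Inverse of pack_command_frame: (sequence number, axis values)."""
    fields = struct.unpack(f'>I{NUM_AXES}h', frame)
    return fields[0], list(fields[1:])
```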


Journal of NeuroInterventional Surgery | 2018

Image guided percutaneous spine procedures using an optical see-through head mounted display: proof of concept and rationale

Gerard Deib; Alex Johnson; Mathias Unberath; Kevin Yu; Sebastian Andress; Long Qian; Gregory Osgood; Nassir Navab; Ferdinand Hui; Philippe Gailloud

Background and purpose: Optical see-through head mounted displays (OST-HMDs) offer a mixed reality (MixR) experience with unhindered procedural site visualization during procedures using high resolution radiographic imaging. This technical note describes our preliminary experience with percutaneous spine procedures utilizing an OST-HMD as an alternative to traditional angiography suite monitors.

Methods: MixR visualization was achieved using the Microsoft HoloLens system. Various spine procedures (vertebroplasty, kyphoplasty, and percutaneous discectomy) were performed on a lumbar spine phantom with commercially available devices. The HMD created a real time MixR environment by superimposing virtual posteroanterior and lateral views onto the interventionalist's field of view. The procedures were filmed from the operator's perspective. Videos were reviewed to assess whether key anatomic landmarks and materials were reliably visualized. Dosimetry and procedural times were recorded. The operator completed a questionnaire following each procedure, detailing benefits, limitations, and visualization mode preferences.

Results: Percutaneous vertebroplasty, kyphoplasty, and discectomy procedures were successfully performed using OST-HMD image guidance on a lumbar spine phantom. Dosimetry and procedural times compared favorably with typical values. Conventional and MixR visualization modes were equally effective in providing image guidance, with key anatomic landmarks and materials reliably visualized.

Conclusion: This preliminary study demonstrates the feasibility of utilizing OST-HMDs for image guidance in interventional spine procedures. This novel visualization approach may serve as a valuable adjunct tool during minimally invasive percutaneous spine treatment.


Healthcare Technology Letters | 2018

ARssist: augmented reality on a head-mounted display for the first assistant in robotic surgery

Long Qian; Anton Deguet; Peter Kazanzides

In robot-assisted laparoscopic surgery, the first assistant (FA) is responsible for tasks such as robot docking, passing necessary materials, manipulating hand-held instruments, and helping with trocar planning and placement. The performance of the FA is critical for the outcome of the surgery. The authors introduce ARssist, an augmented reality application based on an optical see-through head-mounted display, to help the FA perform these tasks. ARssist offers (i) real-time three-dimensional rendering of the robotic instruments, hand-held instruments, and endoscope based on a hybrid tracking scheme and (ii) real-time stereo endoscopy that is configurable to suit the FA's hand–eye coordination when operating based on endoscopy feedback. ARssist has the potential to help the FA perform their tasks more efficiently, and hence improve the outcome of robot-assisted laparoscopic surgeries.


arXiv: Other Computer Science | 2017

Technical Note: Towards Virtual Monitors for Image Guided Interventions - Real-time Streaming to Optical See-Through Head-Mounted Displays.

Long Qian; Mathias Unberath; Kevin Yu; Bernhard Fuerst; Alex Johnson; Nassir Navab; Greg Osgood


arXiv: Human-Computer Interaction | 2017

Comprehensive Tracker Based Display Calibration for Holographic Optical See-Through Head-Mounted Display

Long Qian; Ehsan Azimi; Peter Kazanzides; Nassir Navab

Collaboration


Dive into Long Qian's collaboration.

Top Co-Authors

Ehsan Azimi (Johns Hopkins University)
Alex Johnson (Johns Hopkins University)
Greg Osgood (Johns Hopkins University)
Kevin Yu (Johns Hopkins University)
Anton Deguet (Johns Hopkins University)
Emerson Tucker (Johns Hopkins University)
Ferdinand Hui (Johns Hopkins University)