Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Kevin Cleary is active.

Publication


Featured research published by Kevin Cleary.


IEEE Transactions on Medical Imaging | 2014

Electromagnetic Tracking in Medicine—A Review of Technology, Validation, and Applications

Alfred M. Franz; Tamás Haidegger; Wolfgang Birkfellner; Kevin Cleary; Terry M. Peters; Lena Maier-Hein

Object tracking is a key enabling technology in the context of computer-assisted medical interventions. Allowing the continuous localization of medical instruments and patient anatomy, it is a prerequisite for providing instrument guidance to subsurface anatomical structures. The only widely used technique that enables real-time tracking of small objects without line-of-sight restrictions is electromagnetic (EM) tracking. While EM tracking has been the subject of many research efforts, clinical applications have been slow to emerge. The aim of this review paper is therefore to provide insight into the future potential and limitations of EM tracking for medical use. We describe the basic working principles of EM tracking systems, list the main sources of error, and summarize the published studies on tracking accuracy, precision and robustness along with the corresponding validation protocols proposed. State-of-the-art approaches to error compensation are also reviewed in depth. Finally, an overview of the clinical applications addressed with EM tracking is given. Throughout the paper, we report not only on scientific progress, but also provide a review on commercial systems. Given the continuous debate on the applicability of EM tracking in medicine, this paper provides a timely overview of the state-of-the-art in the field.
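
To make the error-compensation idea above concrete, here is a minimal sketch in Python (with an invented calibration grid and a generic DistortionMap class, not any vendor's tracker API) of how a precomputed field-distortion map could be applied to raw EM sensor positions.

```python
import numpy as np

# A minimal sketch of EM-tracking error compensation: raw sensor positions are
# corrected with an offset interpolated from a precomputed distortion map.
# The grid, offsets, and measurement values here are illustrative only.

class DistortionMap:
    """Stores correction offsets measured on a calibration grid."""
    def __init__(self, grid_points, offsets):
        self.grid_points = np.asarray(grid_points, dtype=float)  # (N, 3) positions
        self.offsets = np.asarray(offsets, dtype=float)          # (N, 3) corrections

    def correct(self, position):
        """Inverse-distance weighted interpolation of the correction offset."""
        position = np.asarray(position, dtype=float)
        d = np.linalg.norm(self.grid_points - position, axis=1)
        if np.any(d < 1e-9):                      # exactly on a calibration point
            return position + self.offsets[np.argmin(d)]
        w = 1.0 / d**2
        offset = (w[:, None] * self.offsets).sum(axis=0) / w.sum()
        return position + offset

# Hypothetical calibration data: offsets measured at four grid points (mm).
grid = [[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100]]
offsets = [[0.2, -0.1, 0.0], [0.5, 0.0, -0.3], [0.1, 0.4, 0.0], [0.0, 0.1, 0.6]]
dist_map = DistortionMap(grid, offsets)

raw_reading = [40.0, 25.0, 10.0]          # raw EM sensor position (mm)
print(dist_map.correct(raw_reading))      # distortion-compensated position
```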


International Journal of Medical Robotics and Computer Assisted Surgery | 2011

Does Needle Rotation Improve Lesion Targeting?

Shadi Badaan; Doru Petrisor; Chunwoo Kim; Pierre Mozer; Dumitru Mazilu; Lucian Gruionu; Alex Patriciu; Kevin Cleary; Dan Stoianovici

Image-guided robots are manipulators that operate based on medical images. Perhaps the most common class of image-guided robots comprises robots for needle interventions. Typically, these robots actively position and/or orient a needle guide, but needle insertion is still done by the physician. While this arrangement may have safety advantages and keep the physician in control of needle insertion, actuated needle drivers can incorporate other useful features.


Surgical Endoscopy and Other Interventional Techniques | 2014

Stereoscopic augmented reality for laparoscopic surgery

Xin Kang; Mahdi Azizian; Emmanuel Wilson; Kyle Wu; Aaron D. Martin; Timothy D. Kane; Craig A. Peters; Kevin Cleary; Raj Shekhar

Background: Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field.

Methods: The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers those in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine.

Results: The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited unobservable latency with acceptable image-to-video registration accuracy.

Conclusions: We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries.
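
As a rough illustration of the real-time registration step described above, the sketch below (Python/NumPy, with placeholder calibration transforms and camera intrinsics rather than the system's actual software) maps a point from ultrasound-image coordinates through the tracker into one eye's video frame.

```python
import numpy as np

# Minimal sketch of the AR transform chain: a point defined in the ultrasound
# image is mapped through tracked coordinate frames into camera pixels.
# All transforms and intrinsics below are placeholder values for illustration.

def hom(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Calibration / tracking transforms (illustrative identity rotations):
T_probe_from_image = hom(np.eye(3), [0.0, 10.0, 0.0])       # US image -> probe sensor
T_tracker_from_probe = hom(np.eye(3), [50.0, 0.0, 200.0])   # probe sensor -> tracker
T_tracker_from_cam = hom(np.eye(3), [0.0, 0.0, -300.0])     # camera sensor -> tracker
T_cam_from_tracker = np.linalg.inv(T_tracker_from_cam)

# Pinhole intrinsics for one eye of the stereo camera (fx, fy, cx, cy in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def us_point_to_pixel(p_image_mm):
    """Map a 3D point in ultrasound-image coordinates to a pixel in the video frame."""
    p = np.append(np.asarray(p_image_mm, dtype=float), 1.0)
    p_cam = T_cam_from_tracker @ T_tracker_from_probe @ T_probe_from_image @ p
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]                     # perspective division -> (u, v)

print(us_point_to_pixel([5.0, 20.0, 0.0]))
```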


Frontiers in Aging Neuroscience | 2014

Quantitative Ultrasound: Measurement Considerations for the Assessment of Muscular Dystrophy and Sarcopenia

Michael O. Harris-Love; Reza Monfaredi; Catheeja Ismail; Marc R. Blackman; Kevin Cleary

Diagnostic musculoskeletal ultrasound is a non-invasive, low-cost imaging modality that may be used to characterize normal and pathological muscle tissue. Sonography has long been proposed as a method of assessing muscle damage due to neuromuscular diseases such as muscular dystrophy (Reimers et al., 1996) and, more recently, changes in body and tissue composition associated with muscle wasting disorders such as sarcopenia (Pillen and van Alfen, 2011). The use of quantitative ultrasound as an adjunct diagnostic procedure presents different technical challenges than the traditional use of ultrasound in clinical medicine. Examiner-dependent technique and variation are critical considerations when assessing the presence of muscle atrophy via tissue dimension estimates using muscle thickness measures, or when quantifying pathological changes in muscle quality via estimates of echointensity using grayscale analysis. Understanding both the promise of quantitative ultrasound as an assessment tool for muscle disorders and the known threats to measurement validity may foster greater adoption of this imaging modality in the management of muscular dystrophy and sarcopenia.
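
For readers unfamiliar with the measures mentioned above, here is a deliberately simplified sketch (Python, with a synthetic image, region of interest, and pixel spacing) of grayscale echointensity analysis and a muscle thickness estimate; it is not the protocol used in the cited studies.

```python
import numpy as np

# Simplified quantitative-ultrasound measures: mean echo intensity of an ROI
# (grayscale analysis) and muscle thickness from a pixel distance.
# The image and calibration below are synthetic placeholders.

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(480, 640)).astype(np.uint8)  # fake B-mode frame

def mean_echo_intensity(img, row_slice, col_slice):
    """Mean grayscale value (0-255) inside a rectangular region of interest."""
    roi = img[row_slice, col_slice].astype(float)
    return roi.mean()

def thickness_mm(pixel_distance, mm_per_pixel):
    """Convert a measured pixel distance (e.g., fascia to fascia) to millimetres."""
    return pixel_distance * mm_per_pixel

roi_intensity = mean_echo_intensity(image, slice(100, 200), slice(250, 400))
muscle_thickness = thickness_mm(pixel_distance=185, mm_per_pixel=0.12)

print(f"Mean echointensity: {roi_intensity:.1f}")
print(f"Estimated muscle thickness: {muscle_thickness:.1f} mm")
```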


Software - Practice and Experience | 2011

Agile methods for open source safety-critical software

Kevin Gary; Andinet Enquobahrie; Luis Ibanez; Patrick Cheng; Ziv Yaniv; Kevin Cleary; Shylaja Kokoori; Benjamin Muffih; John Heidenreich

The introduction of software technology in a life-dependent environment requires the development team to execute a process that ensures a high level of software reliability and correctness. Despite their popularity, agile methods are generally assumed to be inappropriate as a process family in these environments due to their lack of emphasis on documentation, traceability, and other formal techniques. Agile methods, notably Scrum, favor empirical process control, or small constant adjustments in a tight feedback loop. This paper challenges the assumption that agile methods are inappropriate for safety-critical software development. Agile methods are flexible enough to encourage the right amount of ceremony; therefore, if safety-critical systems require greater emphasis on activities such as formal specification and requirements management, then an agile process will include these as necessary activities. Furthermore, agile methods focus more on continuous process management and code-level quality than classic software engineering process models. We present our experiences on the image-guided surgical toolkit (IGSTK) project as a backdrop. IGSTK is an open source software project that has employed agile practices since 2004. We started with the assumption that a lighter process is better, focused on evolving code, and only added process elements as the need arose. IGSTK has been adopted by teaching hospitals and research labs, and used for clinical trials. Agile methods have matured since the academic community suggested almost a decade ago that they were not suitable for safety-critical systems; we present our experiences as a case study for renewing the discussion.


Computerized Medical Imaging and Graphics | 2013

On mixed reality environments for minimally invasive therapy guidance: systems architecture, successes and challenges in their implementation from laboratory to clinic.

Cristian A. Linte; Katherine P. Davenport; Kevin Cleary; Craig A. Peters; Kirby G. Vosburgh; Nassir Navab; Philip “Eddie” Edwards; Pierre Jannin; Terry M. Peters; David R. Holmes; Richard A. Robb

Mixed reality environments for medical applications have been explored and developed over the past three decades in an effort to enhance the clinician's view of anatomy and facilitate the performance of minimally invasive procedures. These environments must faithfully represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical instrument tracking, and display technology into a common framework centered around and registered to the patient. However, in spite of their reported benefits, few mixed reality environments have been successfully translated into clinical use. Several challenges that contribute to the difficulty of integrating such environments into clinical practice are presented here and discussed in terms of both technical and clinical limitations. This article should raise awareness among both developers and end-users toward facilitating a greater application of such environments in the surgical practice of the future.
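
Registration to the patient is central to the framework described above; the sketch below shows one standard building block, a least-squares rigid registration of corresponding fiducial points via SVD (Kabsch/Umeyama style). The fiducial coordinates are fabricated and the code is not taken from any of the systems discussed.

```python
import numpy as np

# Least-squares rigid registration between corresponding fiducial points, a
# common building block when registering image space to patient space.
# Fiducial coordinates below are made up for illustration.

def rigid_register(src, dst):
    """Return R (3x3) and t (3,) minimizing sum ||R @ src_i + t - dst_i||^2."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Fiducials in image coordinates and the same fiducials localized on the patient.
image_pts = [[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]]
patient_pts = [[10, 5, 0], [10, 55, 0], [-40, 5, 0], [10, 5, 50]]

R, t = rigid_register(image_pts, patient_pts)
residuals = np.linalg.norm((np.asarray(image_pts) @ R.T + t) - patient_pts, axis=1)
print("Fiducial registration error (mm):", residuals.round(3))
```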


Minimally Invasive Therapy & Allied Technologies | 2015

Robot-assisted ultrasound imaging: Overview and development of a parallel telerobotic system

Reza Monfaredi; Emmanuel Wilson; Bamshad Azizi Koutenaei; Brendan Labrecque; Kristen Leroy; James Goldie; Eric Louis; Daniel Swerdlow; Kevin Cleary

Ultrasound imaging is frequently used in medicine. The quality of ultrasound images is often dependent on the skill of the sonographer. Several researchers have proposed robotic systems to aid in ultrasound image acquisition. In this paper we first provide a short overview of robot-assisted ultrasound (US) imaging. We categorize robot-assisted US imaging systems into three approaches: autonomous US imaging, teleoperated US imaging, and human-robot cooperation. For each approach several systems are introduced and briefly discussed. We then describe a compact six degree of freedom parallel mechanism telerobotic system for ultrasound imaging developed by our research team. The long-term goal of this work is to enable remote ultrasound scanning through teleoperation. This parallel mechanism allows for both translation and rotation of an ultrasound probe mounted on the top plate along with force control. Our experimental results confirmed good mechanical system performance with a positioning error of < 1 mm. Phantom experiments by a radiologist showed promising results with good image quality.
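
As a toy illustration of the force-control idea mentioned above (not the authors' actual controller), the sketch below adjusts the commanded probe depth proportionally to the error between a simulated contact force and a target force, with a per-cycle motion limit.

```python
# A toy control-loop sketch of force control in robot-assisted ultrasound:
# the commanded probe depth is adjusted proportionally to the error between
# measured contact force and a target force. Gains, limits, and the simulated
# "tissue" model are invented for illustration.

TARGET_FORCE_N = 5.0      # desired probe-tissue contact force
GAIN_MM_PER_N = 0.2       # proportional gain: depth correction per newton of error
MAX_STEP_MM = 0.5         # safety limit on per-cycle probe motion

def simulated_contact_force(depth_mm):
    """Fake tissue model: force grows linearly once the probe indents the skin."""
    return max(0.0, 1.2 * depth_mm)

def force_control_step(depth_mm):
    """One servo cycle: measure force, apply a clamped depth correction."""
    force = simulated_contact_force(depth_mm)
    correction = GAIN_MM_PER_N * (TARGET_FORCE_N - force)
    correction = max(-MAX_STEP_MM, min(MAX_STEP_MM, correction))
    return depth_mm + correction

depth = 0.0
for _ in range(30):
    depth = force_control_step(depth)
print(f"Settled at depth {depth:.2f} mm, "
      f"contact force {simulated_contact_force(depth):.2f} N")
```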


IEEE International Conference on Biomedical Robotics and Biomechatronics | 2014

A prototype body-mounted MRI-compatible robot for needle guidance in shoulder arthrography

Reza Monfaredi; Reza Seifabadi; Iulian Iordachita; Raymond W. Sze; Nabile M. Safdar; Karun Sharma; Stanley T. Fricke; Axel Krieger; Kevin Cleary

A novel compact and lightweight patient-mounted MRI-compatible robot has been designed for MRI image-guided interventions. This robot is intended to enable MRI-guided needle placement as done in shoulder arthrography. The robot could make needle placement more accurate and simplify the current workflow by converting the traditional two-stage arthrography procedure (fluoroscopy-guided needle insertion followed by a diagnostic MRI scan) to a one-stage procedure (streamlined workflow all in MRI suite). The robot has 4 degrees of freedom (DOF), two for orientation of the needle and two for needle positioning. The mechanical design was based on several criteria including rigidity, MRI compatibility, compact design, sterilizability, and adjustability. The proposed workflow is discussed and initial MRI compatibility experiments are presented. The results show that artifacts in the region of interest are minimal and that MRI images of the shoulder were not adversely affected by placing the robot on a human volunteer.
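
To illustrate how two positioning joints and two orientation joints can define a needle axis, here is a generic parameterization in Python; the joint layout, axes, and values are assumptions for illustration and do not reproduce the robot's actual kinematics.

```python
import numpy as np

# Illustrative parameterization of a 4-DOF needle guide: two joints translate
# the guide in a plane over the patient, two joints set the needle's
# orientation (tilt about x and y). This is a generic sketch, not the
# kinematics of the robot described in the paper.

def needle_axis(x_mm, y_mm, tilt_x_deg, tilt_y_deg):
    """Return the guide position and the unit needle direction for given joints."""
    ax, ay = np.radians(tilt_x_deg), np.radians(tilt_y_deg)
    # Start from a needle pointing straight down (-z), then tilt about x and y.
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    direction = Ry @ Rx @ np.array([0.0, 0.0, -1.0])
    position = np.array([x_mm, y_mm, 0.0])      # guide location in the mount plane
    return position, direction

pos, d = needle_axis(12.0, -8.0, tilt_x_deg=10.0, tilt_y_deg=-5.0)
print("Guide position (mm):", pos, " needle direction:", d.round(3))
```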


Medical Image Computing and Computer Assisted Intervention | 2014

Improved Screw Placement for Slipped Capital Femoral Epiphysis (SCFE) Using Robotically-Assisted Drill Guidance

Bamshad Azizi Koutenaei; Ozgur Guler; Emmanuel Wilson; Ramesh U. Thoranaghatte; Matthew E. Oetgen; Nassir Navab; Kevin Cleary

Slipped Capital Femoral Epiphysis (SCFE) is a common hip displacement condition in adolescents. In the standard treatment, the surgeon uses intra-operative fluoroscopic imaging to plan the screw placement and the drill trajectory. The accuracy, duration, and efficacy of this procedure are highly dependent on surgeon skill. Longer procedure times result in higher radiation dose, to both patient and surgeon. A robotic system to guide the drill trajectory might help to reduce screw placement errors and procedure time by reducing the number of passes and confirmatory fluoroscopic images needed to verify accurate positioning of the drill guide along a planned trajectory. Therefore, with the long-term goals of improving screw placement accuracy, reducing procedure time and intra-operative radiation dose, our group is developing an image-guided robotic surgical system to assist a surgeon with pre-operative path planning and intra-operative drill guide placement.
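
The core arithmetic of such path planning can be sketched simply: given a planned entry point and a target point, compute the unit drill direction and insertion depth. The coordinates below are invented and this is not the group's planning software.

```python
import numpy as np

# Minimal path-planning arithmetic for a drill guide: given a planned entry
# point and a target point (e.g., chosen on intra-operative images), compute
# the unit drill direction and the insertion depth. Coordinates are invented.

def drill_trajectory(entry_mm, target_mm):
    """Return (unit direction, depth in mm) of the planned drill path."""
    entry = np.asarray(entry_mm, dtype=float)
    target = np.asarray(target_mm, dtype=float)
    vector = target - entry
    depth = np.linalg.norm(vector)
    return vector / depth, depth

entry = [120.0, 85.0, 40.0]      # skin entry point in image coordinates (mm)
target = [150.0, 110.0, 95.0]    # screw tip target in the femoral epiphysis (mm)
direction, depth = drill_trajectory(entry, target)
print("Drill direction:", direction.round(3), " insertion depth (mm):", round(depth, 1))
```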


Medical Imaging 2008: PACS and Imaging Informatics | 2008

Integration of a real-time video grabber component with the open source image-guided surgery toolkit IGSTK

Ole Vegard Solberg; Geir-Arne Tangen; Frank Lindseth; Torleif Sandnes; Andinet Enquobahrie; Luis Ibanez; Patrick Cheng; David G. Gobbi; Kevin Cleary

The image-guided surgery toolkit (IGSTK) is an open source C++ library that provides the basic components required for developing image-guided surgery applications. While the initial version of the toolkit has been released, some additional functionalities are required for certain applications. With increasing demand for real-time intraoperative image data in image-guided surgery systems, we are adding a video grabber component to IGSTK to access intraoperative imaging data such as video streams. Intraoperative data could be acquired from real-time imaging modalities such as ultrasound or endoscopic cameras. The acquired image could be displayed as a single slice in a 2D window or integrated in a 3D scene. For accurate display of the intraoperative image relative to the patient's preoperative image, proper interaction and synchronization with IGSTK's tracker and other components is necessary. Several issues must be considered during the design phase: 1) functions of the video grabber component; 2) interaction of the video grabber component with existing and future IGSTK components; and 3) layout of the state machine in the video grabber component. This paper describes the video grabber component design and presents example applications using the video grabber component.
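
Since IGSTK components are organized around explicit state machines, a generic sketch of a state-machine-driven video grabber may help make the design discussion concrete; the states, events, and class below are hypothetical and written in Python for brevity, not IGSTK's actual C++ API.

```python
# A generic state-machine sketch for a video grabber component, in the spirit
# of IGSTK's state-machine-driven components. States, events, and method names
# here are hypothetical and do not reproduce IGSTK's actual C++ API.

class VideoGrabberStateMachine:
    # Allowed transitions: (current state, event) -> next state.
    TRANSITIONS = {
        ("Idle", "Initialize"): "Initialized",
        ("Initialized", "StartGrabbing"): "Grabbing",
        ("Grabbing", "StopGrabbing"): "Initialized",
        ("Initialized", "Release"): "Idle",
    }

    def __init__(self):
        self.state = "Idle"

    def handle(self, event):
        """Apply an event; invalid requests are rejected instead of crashing."""
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is None:
            print(f"Ignored event '{event}' in state '{self.state}'")
            return False
        print(f"{self.state} --{event}--> {nxt}")
        self.state = nxt
        return True

grabber = VideoGrabberStateMachine()
grabber.handle("StartGrabbing")   # rejected: the grabber must be initialized first
grabber.handle("Initialize")
grabber.handle("StartGrabbing")   # frames would now be captured and passed to the display
grabber.handle("StopGrabbing")
```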

Collaboration


Dive into Kevin Cleary's collaborations.

Top Co-Authors

Craig A. Peters
University of Texas Southwestern Medical Center

Bamshad Azizi Koutenaei
Children's National Medical Center

Ziv Yaniv
Georgetown University

Amy White
Georgetown University