Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Philip Nicolai is active.

Publication


Featured research published by Philip Nicolai.


international conference on advanced robotics | 2013

Multi kinect people detection for intuitive and safe human robot cooperation in the operating room

Tim Beyl; Philip Nicolai; Jörg Raczkowsky; Heinz Wörn; Mirko Daniele Comparetti; Elena De Momi

Microsoft Kinect cameras are widely used in robotics. The cameras can be mounted either on the robot itself (in the case of mobile robotics) or placed where they have a good view of the robots and/or humans. The use of cameras in the surgical operating room adds complexity to camera placement and requires coping with a highly uncontrolled environment with occlusions and unknown objects. In this paper we present an approach that accurately detects humans using multiple Kinect cameras. Experiments were performed to show that our approach is robust to interference, noise and occlusions. It provides a good detection and identification rate for the user, which is crucial for safe human-robot cooperation.
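The paper does not spell out the fusion step, but the idea of combining detections from several calibrated Kinects can be illustrated with a minimal sketch. The extrinsic transforms, the greedy merging scheme and the merge radius below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: fuse per-camera person detections into world-frame estimates.
# Extrinsics (4x4 camera-to-world poses) and merge_radius are assumptions.
import numpy as np

def to_world(points_cam, T_world_cam):
    """Transform Nx3 camera-frame points into the world frame."""
    pts_h = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_world_cam @ pts_h.T).T[:, :3]

def fuse_detections(per_camera_detections, extrinsics, merge_radius=0.5):
    """Greedily merge per-camera person centroids lying within merge_radius;
    cameras with an occluded view simply contribute nothing."""
    world_pts = []
    for cam_id, centroids in per_camera_detections.items():
        if len(centroids):
            world_pts.append(to_world(np.asarray(centroids), extrinsics[cam_id]))
    if not world_pts:
        return []
    world_pts = np.vstack(world_pts)
    persons, used = [], np.zeros(len(world_pts), dtype=bool)
    for i, p in enumerate(world_pts):
        if used[i]:
            continue
        close = np.linalg.norm(world_pts - p, axis=1) < merge_radius
        used |= close
        persons.append(world_pts[close].mean(axis=0))
    return persons
```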


robotics and biomimetics | 2011

A supervision system for the intuitive usage of a telemanipulated surgical robotic setup

Holger Mönnich; Philip Nicolai; Tim Beyl; Jörg Raczkowsky; Heinz Wörn

This paper introduces the OP:Sense system, which is able to track objects and humans. To reach this goal, a complete surgical robotic system was built up that can be used for telemanipulation as well as for autonomous tasks, e.g. cutting or needle insertion. Two KUKA lightweight robots, which feature seven DOF and allow variable stiffness and damping due to an integrated impedance controller, are used as actuators. The system includes two haptic input devices that provide haptic feedback in telemanipulation mode, as well as virtual fixtures that guide the surgeon during telemanipulation. The supervision system consists of a marker-based optical tracking system, Photonic Mixer Device (PMD) cameras and RGB-D cameras (Microsoft Kinect). A simulation environment is constantly updated with the model of the environment, the model of the robots and tracked objects, the occupied space, as well as tracked models of humans.
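One core task of such a supervision layer is relating the tracked human to the space occupied by the robots. The following is a minimal sketch of that idea under simplifying assumptions (both human and robot are represented as point sets, and the safety margins are illustrative); it is not the OP:Sense implementation.

```python
# Minimal sketch: clearance check between a tracked human point cloud and
# sampled robot link positions, mapped to a coarse safety state.
import numpy as np

def minimum_clearance(human_points, robot_points):
    """Smallest distance between any human point and any robot point [m]."""
    diffs = human_points[:, None, :] - robot_points[None, :, :]
    return np.linalg.norm(diffs, axis=2).min()

def supervise(human_points, robot_points, slow_margin=0.8, stop_margin=0.3):
    """Illustrative mapping of clearance to a safety state for the robot."""
    d = minimum_clearance(np.asarray(human_points), np.asarray(robot_points))
    if d < stop_margin:
        return "STOP"
    if d < slow_margin:
        return "REDUCE_SPEED"
    return "NORMAL"
```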


Robot Operating System (ROS) - The Complete Reference (Volume 1). Ed.: A. Koubaa | 2016

ROS-Based Cognitive Surgical Robotics

Andreas Bihlmaier; Tim Beyl; Philip Nicolai; Mirko Kunze; Julien Mintenbeck; Luzie Schreiter; Thorsten Brennecke; Jessica Hutzl; Jörg Raczkowsky; Heinz Wörn

The case study at hand describes our ROS-based setup for robot-assisted (minimally-invasive) surgery. The system includes different perception components (Kinects, Time-of-Flight Cameras, Endoscopic Cameras, Marker-based Trackers, Ultrasound), input devices (Force Dimension Haptic Input Devices), robots (KUKA LWRs, Universal Robots UR5, ViKY Endoscope Holder), surgical instruments and augmented reality displays. Apart from bringing together the individual components in a modular and flexible setup, many subsystems have been developed based on combinations of the single components. These subsystems include a bimanual telemanipulator, multiple Kinect people tracking, knowledge-based endoscope guidance and ultrasound tomography. The platform is not a research project in itself, but a basic infrastructure used for various research projects. We want to show how to build a large robotics platform, in fact a complete lab setup, based on ROS. It is flexible and modular enough to do research on different robotics related questions concurrently. The whole setup is running on ROS Indigo and Ubuntu Trusty (14.04). A repository of already open sourced components is available at https://github.com/KITmedical.
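To give a feel for how a single component plugs into such a ROS-based setup, here is a minimal rospy node sketch. The topic names ("/kinect1/depth/points", "/people_count") and the node itself are illustrative assumptions, not interfaces from the KITmedical repositories.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS node that consumes a Kinect point cloud topic and
# publishes a people count; the actual segmentation is left as a placeholder.
import rospy
from sensor_msgs.msg import PointCloud2
from std_msgs.msg import Int32

class PeopleCounter(object):
    def __init__(self):
        self.pub = rospy.Publisher("/people_count", Int32, queue_size=1)
        rospy.Subscriber("/kinect1/depth/points", PointCloud2, self.on_cloud)

    def on_cloud(self, cloud_msg):
        # Placeholder: a real node would segment the cloud and count persons.
        count = 0
        self.pub.publish(Int32(data=count))

if __name__ == "__main__":
    rospy.init_node("people_counter")
    PeopleCounter()
    rospy.spin()
```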


robotics and biomimetics | 2011

OP:Sense — An integrated rapid development environment in the context of robot assisted surgery and operation room sensing

Philip Nicolai; Tim Beyl; Holger Mönnich; Jörg Raczkowsky; Heinz Wörn

In this video we show the capabilities of the OP:Sense system. OP:Sense is an integrated rapid application development environment for robot-assisted surgery. It mainly aims at MIRS and at open-head neurosurgery, as OP:Sense is developed for the EU projects FP7 SAFROS and FP7 ACTIVE, which target these use cases. Besides the framework, OP:Sense also integrates applications. Thus it is not only the framework itself but also a system that demonstrates how robots can be used for surgical interventions. The core of the system is the ACE TAO framework [1] [2], which implements real-time CORBA for communication between distributed systems. We built CORBA-based interfaces for use in Matlab and Simulink. There are also modules for 3D Slicer and applications for the control of devices such as robots or surgical tools. As Matlab is a powerful tool for rapid application development, it can be used to develop applications faster than with C++ or similar programming languages. We use Matlab for setting up our environment and for tasks and computations that do not need to run in real time. For real-time tasks such as telemanipulation we use Simulink models.


international conference on informatics in control automation and robotics | 2015

Continuous Pre-Calculation of Human Tracking with Time-delayed Ground-truth

Philip Nicolai; Jörg Raczkowsky; Heinz Wörn

We present an approach to track a point cloud with a 3D camera system with low latency and/or high frame rate, based on ground truth provided by a second 3D camera system with higher latency and/or lower frame rate. In particular, we employ human tracking based on Kinect cameras and combine it with the higher frame rate and lower latency of Time-of-Flight (ToF) cameras. We present the system setup, the methods used and evaluation results showing very high accuracy in combination with a latency reduction by a factor of up to 30.
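The general pattern of correcting a fast, low-latency estimate with a delayed ground-truth measurement can be sketched as follows. This is a simplified illustration under assumed interfaces (centroid-level estimates, a fixed history window), not the algorithm from the paper.

```python
# Minimal sketch: keep a short history of fast estimates; when a delayed
# ground-truth measurement arrives, compare it to the historical estimate at
# the same timestamp and carry the correction forward to the current estimate.
from collections import deque
import numpy as np

class DelayedGroundTruthTracker(object):
    def __init__(self, history_seconds=2.0):
        self.history = deque()          # (timestamp, fast_estimate)
        self.correction = np.zeros(3)   # running offset fast -> ground truth
        self.history_seconds = history_seconds

    def update_fast(self, timestamp, fast_estimate):
        """Called at the high frame rate of the low-latency (ToF) camera."""
        self.history.append((timestamp, np.asarray(fast_estimate)))
        while self.history and timestamp - self.history[0][0] > self.history_seconds:
            self.history.popleft()
        return np.asarray(fast_estimate) + self.correction

    def update_ground_truth(self, timestamp, gt_estimate):
        """Called whenever a delayed (Kinect-based) measurement arrives."""
        past = min(self.history, key=lambda te: abs(te[0] - timestamp), default=None)
        if past is not None:
            self.correction = np.asarray(gt_estimate) - past[1]
```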


Studies in health technology and informatics | 2012

Haptic feedback in OP:Sense - augmented reality in telemanipulated robotic surgery.

Tim Beyl; Philip Nicolai; Holger Mönnich; Jörg Raczkowsky; Heinz Wörn

In current research, haptic feedback in robot-assisted interventions plays an important role. However, most approaches to haptic feedback only regard the mapping of the current forces at the surgical instrument to the haptic input devices, whereas surgeons demand a combination of medical imaging and telemanipulated robotic setups. In this paper we describe how this feature is integrated into our robotic research platform OP:Sense. The proposed method allows the automatic transfer of segmented imaging data to the haptic renderer and therefore allows enriching the haptic feedback with virtual fixtures based on imaging data. Anatomical structures are extracted from pre-operatively generated medical images, or virtual walls are defined by the surgeon inside the imaging data. Combining real forces with virtual fixtures can guide the surgeon to the regions of interest and helps to prevent damage to critical structures inside the patient. We believe that the combination of medical imaging and telemanipulation is a crucial step for the next generation of MIRS systems.
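A common way to realize such a "forbidden region" virtual fixture is to add a repulsive force when the instrument tip approaches a segmented structure. The sketch below illustrates that idea under simplifying assumptions (the structure is a point set, gains and radii are illustrative); it is not the OP:Sense haptic renderer.

```python
# Minimal sketch: repulsive virtual-fixture force added to the measured
# instrument force before rendering it on the haptic input device.
import numpy as np

def virtual_fixture_force(tip_position, structure_points,
                          influence_radius=0.02, stiffness=200.0):
    """Spring-like force [N] pushing the tip away from the closest structure point."""
    diffs = np.asarray(structure_points) - np.asarray(tip_position)
    dists = np.linalg.norm(diffs, axis=1)
    i = dists.argmin()
    if dists[i] >= influence_radius:
        return np.zeros(3)
    direction = -diffs[i] / (dists[i] + 1e-9)   # away from the closest point
    return stiffness * (influence_radius - dists[i]) * direction

def rendered_force(measured_force, tip_position, structure_points):
    """Combine the real instrument force with the virtual-fixture contribution."""
    return np.asarray(measured_force) + virtual_fixture_force(
        tip_position, structure_points)
```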


Studies in health technology and informatics | 2016

3D Perception Technologies for Surgical Operating Theatres.

Tim Beyl; Luzie Schreiter; Philip Nicolai; Jörg Raczkowsky; Heinz Wörn

3D perception technologies have been explored in various fields. This paper explores the application of such technologies to surgical operating theatres. Clinical applications can be found in workflow detection, tracking and analysis, collision avoidance with medical robots, perception of interaction between participants of the operation, training of the operating room crew, patient calibration and many more. In this paper a complete perception solution for the operating room is shown. The system is based on the ToF technology integrated into the Microsoft Kinect One and implements a multi-camera approach. Special emphasis is put on the tracking of the personnel and on the evaluation of the system's performance and accuracy.
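A basic building block of such a multi-camera approach is merging the individual clouds into one room-frame cloud. The sketch below shows this under assumed inputs (known 4x4 extrinsics per camera, a simple voxel-grid thinning step); it is a generic illustration, not the system described in the paper.

```python
# Minimal sketch: transform per-camera point clouds into a common room frame
# and thin the merged cloud with a voxel grid. Extrinsics and voxel size are
# assumptions.
import numpy as np

def merge_clouds(clouds, extrinsics, voxel_size=0.05):
    """clouds: dict cam_id -> Nx3 array; extrinsics: dict cam_id -> 4x4 pose."""
    world = []
    for cam_id, pts in clouds.items():
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])
        world.append((extrinsics[cam_id] @ pts_h.T).T[:, :3])
    world = np.vstack(world)
    # Keep one representative point per occupied voxel.
    keys = np.floor(world / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return world[idx]
```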


Informatics in Control, Automation and Robotics: 12th International Conference, ICINCO 2015, Colmar, France, July 21-23, 2015, Revised Selected Papers. Ed.: J. Filipe | 2016

Model-Free (Human) Tracking Based on Ground Truth with Time Delay: A 3D Camera Based Approach for Minimizing Tracking Latency and Increasing Tracking Quality

Philip Nicolai; Jörg Raczkowsky; Heinz Wörn

Model-free tracking allows tracking of objects without prior knowledge of their characteristics. However, many algorithms require a manual initialization to select the target object(s) and perform only a coarse tracking. This article presents a new hybrid approach that combines a new fast, model-free tracking algorithm using 3D cameras with an arbitrary separate, slower tracking method that provides a time-delayed ground truth. In particular, we focus on human tracking and employ Time-of-Flight cameras for the model-free tracking, based on ground truth provided by (multiple) Kinect cameras. The article describes the setup of the system and the model-free tracking algorithm, and presents evaluation results for two different scenarios. Results show a high precision and recall, even with large time delays of the ground truth of up to 10 s.
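One simple way to propagate a delayed, ground-truth segmentation through newer frames without an object model is nearest-neighbor label propagation. The sketch below illustrates that generic idea only; the distance threshold, the frame-by-frame loop and the use of SciPy's KD-tree are assumptions, not the algorithm from the article.

```python
# Minimal sketch: points of a new frame that lie close to the previously
# labelled points inherit the "tracked object" label.
import numpy as np
from scipy.spatial import cKDTree

def propagate_label(labelled_points, new_frame_points, max_dist=0.05):
    """Return the subset of the new frame that continues the tracked object."""
    tree = cKDTree(labelled_points)
    dists, _ = tree.query(new_frame_points)
    return new_frame_points[dists < max_dist]

def track_through_frames(ground_truth_points, frames, max_dist=0.05):
    """Carry a delayed ground-truth segmentation forward frame by frame."""
    labelled = np.asarray(ground_truth_points)
    for frame in frames:
        labelled = propagate_label(labelled, np.asarray(frame), max_dist)
    return labelled
```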


International Journal of Computer Assisted Radiology and Surgery | 2013

The OP:Sense Surgical Robotics Platform: First Feasibility Studies and Current Research

Philip Nicolai; Thorsten Brennecke; Mirko Kunze; Luzie Schreiter; Tim Beyl; Yaokun Zhang; Julien Mintenbeck; Jörg Raczkowsky; Heinz Wörn


computer assisted radiology and surgery | 2016

Time-of-flight-assisted Kinect camera-based people detection for intuitive human robot cooperation in the surgical operating room

Tim Beyl; Philip Nicolai; Mirko Daniele Comparetti; Jörg Raczkowsky; Elena De Momi; Heinz Wörn

Collaboration


Dive into Philip Nicolai's collaborations.

Top Co-Authors

Heinz Wörn (Karlsruhe Institute of Technology)
Jörg Raczkowsky (Karlsruhe Institute of Technology)
Tim Beyl (Karlsruhe Institute of Technology)
Holger Mönnich (Karlsruhe Institute of Technology)
Luzie Schreiter (Karlsruhe Institute of Technology)
Björn Hein (Karlsruhe Institute of Technology)
Julien Mintenbeck (Karlsruhe Institute of Technology)
Mirko Kunze (Karlsruhe Institute of Technology)
Thorsten Brennecke (Karlsruhe Institute of Technology)