Publication


Featured research published by Bradley S. Duerstock.


Journal of Microscopy | 2003

A comparative study of the quantitative accuracy of three-dimensional reconstructions of spinal cord from serial histological sections

Bradley S. Duerstock; Chandrajit L. Bajaj; Richard B. Borgens

We evaluated the accuracy of estimating the volume of biological soft tissues from their three-dimensional (3D) computer wireframe models, reconstructed from histological data sets obtained from guinea-pig spinal cords. We compared quantification from two methods of 3D surface reconstruction to standard quantitative techniques: the Cavalieri method, employing planimetry and point counting, and Geometric Best-Fitting. This involved measuring a group of spinal cord segments and test objects to evaluate the accuracy of our novel quantification approaches. Once a quantitative methodology was standardized, there was no statistical difference in volume measurement of spinal segments between quantification methods. We found that the ability of our 3D surface reconstructions to precisely model actual soft tissues provided volume quantification of complex anatomical structures as accurate as the standard approaches of Cavalieri estimation and Geometric Best-Fitting. Additionally, 3D reconstruction quantitatively interrogates and three-dimensionally images spinal cord segments and obscured internal pathological features with approximately the same effort required for standard quantification alone.
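
The Cavalieri estimator mentioned above has a simple closed form: total volume is approximated by the sum of the cross-sectional areas of sampled sections multiplied by the spacing between them, with each area estimated by planimetry or point counting. A minimal sketch in Python (the point counts, grid constant, and section spacing below are illustrative values, not data from the study):

```python
def cavalieri_volume(point_counts, area_per_point, section_spacing):
    """Cavalieri volume estimate from point counting.

    point_counts    -- grid points falling on the structure in each sampled section
    area_per_point  -- area represented by one grid point (e.g. mm^2)
    section_spacing -- distance between sampled sections (e.g. mm)
    """
    section_areas = [n * area_per_point for n in point_counts]
    return section_spacing * sum(section_areas)


# Illustrative numbers only:
counts = [12, 18, 22, 21, 15, 9]   # grid points hitting the cord profile, per section
print(cavalieri_volume(counts, area_per_point=0.04, section_spacing=0.5), "mm^3")
```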


IEEE International Conference on Rehabilitation Robotics | 2013

Integrated vision-based robotic arm interface for operators with upper limb mobility impairments

Hairong Jiang; Juan P. Wachs; Bradley S. Duerstock

An integrated, computer vision-based system was developed to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In this paper, a gesture recognition interface developed specifically for individuals with upper-level spinal cord injuries (SCIs) was combined with object tracking and face recognition systems to form an efficient, hands-free WMRM controller. In this test system, two Kinect cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret hand gestures as commands to control the WMRM and to locate the operator's face for object positioning. The other sensor was used to automatically recognize different daily living objects for test subjects to select. The gesture recognition interface incorporated hand detection, tracking, and recognition algorithms to obtain a high recognition accuracy of 97.5% for an eight-gesture lexicon. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was used, and recognition results were sent as commands for "coarse positioning" of the robotic arm near the selected daily living object. Automatic face detection was also provided as a shortcut for subjects to position objects near the face using the WMRM. Completion-time tests were conducted to compare manual (gestures only) and semi-manual (gestures, automatic face detection, and object recognition) WMRM control modes. The use of automatic face and object detection significantly increased the completion times for retrieving a variety of daily living objects.
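
The "coarse positioning" step above depends on matching SURF descriptors between a stored view of each daily living object and the live camera image. A minimal sketch of that matching step using OpenCV is shown below; SURF lives in the contrib xfeatures2d module and must be built with non-free algorithms enabled, and the file names and acceptance threshold here are placeholders rather than details from the paper.

```python
import cv2

# SURF is patented and ships only in opencv-contrib builds with
# OPENCV_ENABLE_NONFREE turned on.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)        # placeholder file

kp_t, des_t = surf.detectAndCompute(template, None)
kp_f, des_f = surf.detectAndCompute(frame, None)

# Lowe's ratio test keeps only distinctive correspondences.
good = [m for m, n in matcher.knnMatch(des_t, des_f, k=2)
        if m.distance < 0.7 * n.distance]

if len(good) > 10:  # illustrative acceptance threshold
    print(f"Object recognized with {len(good)} good matches")
```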


Computerized Medical Imaging and Graphics | 2000

Advances in three-dimensional reconstruction of the experimental spinal cord injury

Bradley S. Duerstock; Chandrajit L. Bajaj; Valerio Pascucci; Daniel R. Schikore; Kwun-Nan Lin; Richard B. Borgens

Three-dimensional (3D) computer reconstruction is an ideal tool for evaluating the centralized pathology of mammalian spinal cord injury (SCI), where multiple anatomical features are embedded within each other. Here, we evaluate three different reconstruction algorithms for three-dimensionally visualizing SCIs. We also show, for the first time, that the volume and surface area of pathological features can be determined from the reconstructed 3D images themselves. We compare these measurements to those calculated by older morphometric approaches. Finally, we demonstrate dynamic navigation into a 3D spinal cord reconstruction.
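
Once a pathological feature has been reconstructed as a closed, consistently oriented triangle mesh, its surface area and enclosed volume can be read directly off the vertices and faces, which is the kind of measurement described above. A minimal sketch, assuming a mesh given as an N x 3 vertex array and an M x 3 array of vertex indices (the paper's specific algorithms and units are not reproduced here):

```python
import numpy as np

def mesh_surface_area(vertices, faces):
    """Sum of triangle areas over the mesh."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()

def mesh_volume(vertices, faces):
    """Enclosed volume of a closed, consistently oriented mesh,
    computed from signed tetrahedron volumes (divergence theorem)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0
```

Both functions apply to any surface produced by a reconstruction or isocontouring step, such as the one sketched under the double-labeling paper below.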


Journal of the Neurological Sciences | 2015

Increase in oxidative stress biomarkers in dogs with ascending–descending myelomalacia following spinal cord injury

Andrew Marquis; Rebecca A. Packer; Richard B. Borgens; Bradley S. Duerstock

Multiple biochemical and immunohistochemical tests were performed to elucidate the role of oxidative stress during ascending-descending (A-D) myelomalacia by comparing dogs with this progressive terminal condition to dogs with chronic, focal spinal cord injuries (SCIs) and controls without SCI. Dogs with A-D myelomalacia exhibited increased biochemical markers for oxidative stress, including 8-isoprostane F2α and acrolein, as well as decreased endogenous glutathione, with the greatest changes occurring at the lesion center. Inflammation, as evidenced by the concentration of CD18+ phagocytes and hemorrhagic necrosis, was also exacerbated in the lesion of the A-D myelomalacic spinal cord compared to focal SCI. The greatest differences in oxidative stress occurred at the lesion center and diminished distally in spinal cords with both A-D myelomalacia and focal SCIs. The spatial progression and time course of A-D myelomalacia are consistent with the development of secondary injury post-SCI. Ascending-descending myelomalacia is proposed as a clinical model that may further the understanding of the role of oxidative stress during secondary injury. Our results indicate that the pathology of A-D myelomalacia is also similar to subacute progressive ascending myelopathy in humans, which is characterized by recurrent neurodegeneration of the spinal cord post-injury.


Iberoamerican Congress on Pattern Recognition | 2012

Facilitated Gesture Recognition Based Interfaces for People with Upper Extremity Physical Impairments

Hairong Jiang; Juan P. Wachs; Bradley S. Duerstock

A gesture-recognition-based interface was developed to give people with upper extremity physical impairments an alternative way to perform laboratory experiments that require 'physical' manipulation of components. A particle filter framework based on color, depth, and spatial information was constructed with unique descriptive features for representing the face and hands. The same feature encoding policy was subsequently used to detect, track, and recognize users' hands. Motion models were created employing the dynamic time warping (DTW) method for better observation encoding. Finally, the hand trajectories were classified into different classes (commands) by applying the CONDENSATION method and, in turn, an interface was designed for robot control, with a recognition accuracy of 97.5%. To assess the gesture recognition and control policies, a validation experiment consisting of controlling a mobile service robot and a robotic arm in a laboratory environment was conducted.
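
Dynamic time warping, used above to build the motion models, aligns two trajectories that trace the same shape at different speeds and returns a distance that is small for matching gestures. A minimal sketch of the distance computation on toy 2-D hand trajectories (not the paper's data or its exact formulation):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between trajectories a and b,
    each an array of shape (num_samples, num_dims)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]


# Toy trajectories: the same arc sampled at different rates.
t = np.linspace(0.0, np.pi, 30)
gesture = np.column_stack([t, np.sin(t)])
template = np.column_stack([t[::2], np.sin(t[::2])])
print(dtw_distance(gesture, template))
```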


IEEE International Conference on Rehabilitation Robotics | 2013

3D joystick for robotic arm control by individuals with high level spinal cord injuries

Hairong Jiang; Juan P. Wachs; Martin Pendergast; Bradley S. Duerstock

An innovative 3D joystick was developed to enable individuals with quadriplegia due to spinal cord injuries (SCIs) to more independently and efficiently operate a robotic arm as an assistive device. The 3D joystick was compared to two different manual input modalities, a keyboard control and a traditional joystick, in performing experimental robotic arm tasks by both subjects without disabilities and those with upper extremity mobility impairments. Fitts's law targeting and practical pouring tests were conducted to compare the performance and accuracy of the proposed 3D joystick. The Fitts's law measurements showed that the 3D joystick had the best index of performance (IP), though it required a number of operations and errors equivalent to the standard robotic arm joystick. The pouring task demonstrated that the 3D joystick took significantly less task completion time and was more accurate than keyboard control. The 3D joystick also showed a decreased learning curve compared to the other modalities.
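
The index of performance reported above is the Fitts's law throughput: the index of difficulty of a targeting movement divided by the time taken to complete it. A minimal sketch using the Shannon formulation of the index of difficulty (the study may use a different formulation; the distance, target width, and movement time below are made-up example values):

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def index_of_performance(distance, width, movement_time):
    """Throughput (bits per second) for a single targeting trial."""
    return index_of_difficulty(distance, width) / movement_time

# Example trial: a 200 mm reach to a 20 mm wide target completed in 1.5 s.
print(f"IP = {index_of_performance(200, 20, 1.5):.2f} bits/s")
```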


Assistive Technology | 2006

Accessible Microscopy Workstation for Students and Scientists with Mobility Impairments

Bradley S. Duerstock

An integrated accessible microscopy workstation was designed and developed to allow persons with mobility impairments to control all aspects of light microscopy with minimal human assistance. This system, named AccessScope, is capable of performing brightfield and fluorescence microscopy, image analysis, and tissue morphometry requisite for undergraduate science courses through graduate-level research. An accessible microscope is necessary for students and scientists with mobility impairments to be able to use a microscope independently and thereby better understand microscopical imaging concepts and cell biology. This knowledge is not always apparent from simply viewing a catalog of histological images. The ability to operate a microscope independently eliminates the need to hire an assistant or rely on a classmate and permits one to take practical laboratory examinations by oneself. Independent microscope handling is also crucial for graduate students and scientists with disabilities to perform scientific research. By using a personal computer as the user interface for controlling AccessScope functions, different upper limb mobility impairments could be accommodated through various computer input devices and assistive technology software. Participants with a range of upper limb mobility impairments evaluated the prototype microscopy workstation. They were able to control all microscopy functions, including loading different slides, without assistance.


Journal of Neuroscience Methods | 2004

Double labeling serial sections to enhance three-dimensional imaging of injured spinal cord

Bradley S. Duerstock

A method of double labeling a set of serial histological sections was used to produce multiple three-dimensional (3D) reconstructions from the same segment of injured spinal cord. Alternate groups of consecutive histological sections were stained with Luxol fast blue with cresyl violet and with Mallory's trichrome in order to reconstruct two different 3D images that reveal different pathological features of the same 1-month-old compression spinal cord injury. Three-dimensional visualization of the two reconstructions was accomplished using an isocontouring algorithm that automatically extracts surfaces of features of interest based on pixel intensity. The two 3D reconstructions demonstrated the sparing of myelinated nerve fibers and the composition of neuroglia through the chronic lesion of an adult guinea pig. The 3D images provided a comprehensive and explicit view of a chronically injured spinal cord that is not possible through inspection of two-dimensional (2D) histological sections or from magnetic resonance imaging. Using every histological section, we believe this double labeling 3D reconstruction technique provides a more enhanced and accurate visualization of the entire spinal cord lesion than has been possible before. Furthermore, we contend that this double labeling technique can further elucidate the histopathological events of secondary injury at different time points post-injury by using different combinations of complementary histological markers.
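
The isocontouring step described above extracts a surface at a chosen pixel-intensity level from the stack of registered, digitized sections. The original work used its own isocontouring implementation; the sketch below substitutes scikit-image's marching cubes as a readily available stand-in, with a toy volume and made-up spacing values rather than the paper's data.

```python
import numpy as np
from skimage import measure

# Toy stand-in for a stack of registered sections: a bright sphere in a dark volume.
z, y, x = np.mgrid[-20:20, -20:20, -20:20]
volume = (np.sqrt(x**2 + y**2 + z**2) < 12).astype(float)

# Extract the surface at a chosen intensity level. The level and the voxel
# spacing (section thickness vs. in-plane pixel size) are illustrative only.
verts, faces, normals, values = measure.marching_cubes(
    volume, level=0.5, spacing=(25.0, 1.0, 1.0)
)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```

The resulting vertex and face arrays can then be fed to the surface-area and volume functions sketched earlier.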


Computer Vision and Image Understanding | 2016

Enhanced control of a wheelchair-mounted robotic manipulator using 3-D vision and multimodal interaction

Hairong Jiang; Ting Zhang; Juan P. Wachs; Bradley S. Duerstock

Highlights: a wheelchair-mounted robotic arm integrates 3D computer vision with multimodal input; object recognition was improved by combining RGB information and 3D point clouds; hybrid input (using both gestures and speech) outperformed using a single modality; input performance was validated for daily living tasks (feeding and dressing).

This paper presents a multiple-sensor, 3D vision-based, autonomous wheelchair-mounted robotic manipulator (WMRM). Two 3D sensors were employed: one for object recognition, and the other for recognizing body parts (face and hands). The goal is to recognize everyday items and automatically interact with them in an assistive fashion. For example, when a cereal box is recognized, it is grasped, poured into a bowl, and brought to the user. Daily objects (e.g., a bowl and a hat) were automatically detected and classified using a three-step procedure: (1) remove the background based on 3D information and find the point cloud of each object; (2) extract feature vectors for each segmented object from its 3D point cloud and its color image; and (3) classify the feature vectors as objects using a nonlinear support vector machine (SVM). To retrieve specific objects, three user interface methods were adopted: voice-based, gesture-based, and hybrid commands. The presented system was tested using two common activities of daily living: feeding and dressing. The results revealed that an accuracy of 98.96% was achieved on a dataset with twelve daily objects. The experimental results indicated that hybrid (gesture and speech) interaction outperforms any single modal interaction.
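
The third step of the pipeline above, classifying each segmented object's combined point-cloud and color features with a nonlinear SVM, can be sketched with scikit-learn as below. The feature dimensionality, kernel parameters, and synthetic data are placeholders, not the paper's actual descriptors or settings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: one row per segmented object, columns standing in
# for descriptors computed from its 3D point cloud and its color image.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 64))
y = rng.integers(0, 12, size=120)  # twelve daily-object classes, as in the paper

# Nonlinear (RBF-kernel) SVM with feature standardization.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X[:100], y[:100])
print("Held-out accuracy:", clf.score(X[100:], y[100:]))
```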


Systems, Man and Cybernetics | 2014

An analytic approach to decipher usable gestures for quadriplegic users

Hairong Jiang; Bradley S. Duerstock; Juan P. Wachs

With the advent of new gaming technologies, hand gestures are gaining popularity as an effective communication channel for human-computer interaction (HCI). This is particularly relevant for patients recovering from mobility-related injuries or debilitating conditions who use gesture-based gaming for rehabilitation therapy. Unfortunately, most gesture-based gaming systems are designed for able-bodied users and are difficult and costly to adapt for people with upper extremity mobility impairments. While interface customization is an active area of work in assistive technologies (AT), there is no existing formal, analytically grounded methodology for adapting gesture-based control systems for quadriplegics. The goal of this work is to overcome this hurdle by developing a mathematical framework to project the patterns of gestural behavior designed for existing gesture systems onto those exhibited by quadriplegic subjects with spinal cord injury (SCI). A key component of our framework relied on Laban movement analysis (LMA) theory; the framework consisted of four steps: acquiring and preprocessing gesture trajectories, extracting feature vectors, training transform functions, and generating constrained gestures. The feasibility of this framework was validated through user-based experimental paradigms and subject validation. It was found that 100% of the gestures selected by subjects with high-level SCIs came from the constrained gesture set. Even for the low-level quadriplegic subject, the alternative gestures were preferred.
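
The abstract does not spell out the form of the transform functions, so the sketch below uses ridge regression purely as a plausible stand-in: a mapping learned from paired feature vectors of the same gesture performed by able-bodied subjects and by subjects with high-level SCI, then applied to project a new standard gesture into the constrained feature space. All data shapes and values are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical paired feature vectors (e.g. LMA-style effort/shape descriptors)
# for the same gestures performed by able-bodied subjects (X) and by subjects
# with high-level SCI (Y). Shapes and values are invented.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 16))                     # 40 gesture samples, 16 features
Y = 0.6 * X + rng.normal(scale=0.1, size=X.shape)

transform = Ridge(alpha=1.0).fit(X, Y)            # one possible "transform function"

# Project the features of a new standard gesture into the constrained space.
new_gesture = rng.normal(size=(1, 16))
constrained_features = transform.predict(new_gesture)
print(constrained_features.shape)
```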

Collaboration


Dive into Bradley S. Duerstock's collaborations.

Top Co-Authors

Chandrajit L. Bajaj

University of Texas at Austin
