Network


Latest external collaborations at the country level. Click a dot for details.

Hotspot


Dive into the research topics where David P. Azari is active.

Publication


Featured research published by David P. Azari.


Applied Ergonomics | 2017

Visualizing stressful aspects of repetitive motion tasks and opportunities for ergonomic improvements using computer vision

Runyu L. Greene; David P. Azari; Yu Hen Hu; Robert G. Radwin

Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) utilizing conventional 2D videos. The approach was made practical by relaxing the need for high precision, and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors, using this computer vision approach, is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors as well as suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify those work elements in the task that contribute most to increased injury risk. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.
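The heat-map idea described in this abstract, accumulating per-frame tracked hand kinematics into a spatial grid over the video frame, can be sketched as below. This is an illustrative reconstruction, not the authors' code; the function name, grid resolution, and input format are assumptions.

```python
import numpy as np

def speed_heatmap(xy, fps, frame_shape, bins=(24, 32)):
    """Accumulate per-frame hand speed into a spatial grid.

    xy: (N, 2) array of tracked hand positions in pixels, one row per frame.
    Returns a 2D grid where each cell holds the mean hand speed observed
    while the hand was in that region of the frame.
    """
    xy = np.asarray(xy, dtype=float)
    # Per-frame speed from finite differences of position.
    v = np.diff(xy, axis=0) * fps                 # pixels/s, shape (N-1, 2)
    speed = np.hypot(v[:, 0], v[:, 1])
    # Bin each speed sample by the position where it was observed.
    h, w = frame_shape
    rows = np.clip((xy[:-1, 1] / h * bins[0]).astype(int), 0, bins[0] - 1)
    cols = np.clip((xy[:-1, 0] / w * bins[1]).astype(int), 0, bins[1] - 1)
    total = np.zeros(bins)
    count = np.zeros(bins)
    np.add.at(total, (rows, cols), speed)
    np.add.at(count, (rows, cols), 1)
    # Mean speed per cell; cells the hand never visited stay zero.
    return np.divide(total, count, out=np.zeros(bins), where=count > 0)
```

The resulting grid could then be rendered as a semi-transparent overlay on the video, which is how the paper describes displaying localized variables in the context of the task.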


Ergonomics | 2015

A hand speed-duty cycle equation for estimating the ACGIH hand activity level rating.

Oguz Akkas; David P. Azari; Chia Hsiung Eric Chen; Yu Hen Hu; Sheryl S. Ulin; Thomas J. Armstrong; David Rempel; Robert G. Radwin

An equation was developed for estimating hand activity level (HAL) directly from tracked root mean square (RMS) hand speed (S) and duty cycle (D). Table lookup, equation or marker-less video tracking can estimate HAL from motion/exertion frequency (F) and D. Since automatically estimating F is sometimes complex, HAL may be more readily assessed using S. Hands from 33 videos originally used for the HAL rating were tracked to estimate S, scaled relative to hand breadth (HB), and single-frame analysis was used to measure D. Since HBs were unknown, a Monte Carlo method was employed for iteratively estimating the regression coefficients from US Army anthropometry survey data. The fitted equation achieved R² = 0.97, with a residual range of ±0.5 HAL. The S equation fits the Latko et al. (1997) data better and predicted independently observed HAL values (Harris 2011) more accurately (MSE = 0.16) than the F equation (MSE = 1.28). Practitioner Summary: An equation was developed for estimating the HAL rating for the American Conference of Governmental Industrial Hygienists threshold limit value® based on hand RMS speed and duty cycle. Speed is more readily evaluated from videos using semi-automatic marker-less tracking than frequency. The speed equation predicted observed HAL values much better than the F equation.
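The Monte Carlo step, propagating uncertainty in unknown physical hand breadth through the hand-breadth-normalized speed, can be illustrated with a toy sketch. The distribution parameters below are placeholders, not the actual US Army survey statistics used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tracked hand speed is measured in pixels/s and normalized by hand
# breadth measured in pixels, giving speed in hand-breadths per second.
speed_hb = 3.0  # hand-breadths/s (illustrative value)

# Physical hand breadth is unknown, so sample it from an assumed normal
# distribution (the mean/sd here are placeholders, not the survey values).
hb_mm = rng.normal(loc=90.0, scale=5.0, size=10_000)

# Each sampled hand breadth yields a candidate physical speed; the spread
# shows how hand-breadth variation propagates into the fitted coefficients.
speed_mm = speed_hb * hb_mm  # mm/s per sample
print(speed_mm.mean(), speed_mm.std())
```

In the paper this idea is applied to the regression fit itself, re-estimating coefficients across sampled hand breadths rather than to a single speed value as here.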


Ergonomics | 2015

A frequency–duty cycle equation for the ACGIH hand activity level

Robert G. Radwin; David P. Azari; Mary J. Lindstrom; Sheryl S. Ulin; Thomas J. Armstrong; David Rempel

A new equation for predicting the hand activity level (HAL) used in the American Conference of Governmental Industrial Hygienists threshold limit value® (TLV®) was based on exertion frequency (F) and percentage duty cycle (D). The TLV® includes a table for estimating HAL from F and D originating from data in Latko et al. (Latko WA, Armstrong TJ, Foulke JA, Herrin GD, Rabourn RA, Ulin SS, Development and evaluation of an observational method for assessing repetition in hand tasks. American Industrial Hygiene Association Journal, 58(4):278–285, 1997) and post hoc adjustments that include extrapolations outside of the data range. Multimedia video task analysis determined D for two additional jobs from Latko's study not in the original data set, and a new nonlinear regression equation was developed to better fit the data and create a more accurate table. The new equation generally matches the TLV® HAL lookup table and is a substantial improvement over the linear model, particularly for jobs with F > 1.25 Hz and D > 60%. The equation more closely fits the data and applies the TLV® using a continuous function. Practitioner Summary: The original HAL lookup table is limited in resolution, omits values and extrapolates values outside of the range of data. A new equation and table were developed to address these issues.


Ergonomics | 2015

The Accuracy of Conventional 2D Video for Quantifying Upper Limb Kinematics in Repetitive Motion Occupational Tasks

Chia-Hsiung Chen; David P. Azari; Yu Hen Hu; Mary J. Lindstrom; Darryl G. Thelen; Thomas Y. Yen; Robert G. Radwin

Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground truth measurements using 3D infrared motion capture. The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s² for acceleration, and less than 93 mm/s for speed and 656 mm/s² for acceleration when camera pan and tilt were within ±30 degrees. Single-camera 2D video had sufficient accuracy (<100 mm/s) for evaluating HAL. Practitioner Summary: This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees off the plane of motion, when compared against 3D motion capture for a simulated repetitive motion task.
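The cross-correlation template matching named in this abstract can be sketched as a brute-force normalized cross-correlation search. This is illustrative only; the study's actual tracker, template size, and search strategy are not published here.

```python
import numpy as np

def match_template(image, template):
    """Locate `template` in `image` by normalized cross-correlation.

    Returns the (row, col) of the best-matching top-left corner.
    Brute-force for clarity; production trackers use optimized routines.
    """
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

Tracking a region of interest frame to frame then amounts to re-running this search (typically within a small window around the previous match) and differentiating the resulting positions to get speed and acceleration.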


Human Factors | 2016

Evaluation of Simulated Clinical Breast Exam Motion Patterns Using Marker-Less Video Tracking.

David P. Azari; Carla M. Pugh; Shlomi Laufer; Calvin Kwan; Chia-Hsiung Chen; Thomas Y. Yen; Yu Hen Hu; Robert G. Radwin

Objective: This study investigates using marker-less video tracking to evaluate hands-on clinical skills during simulated clinical breast examinations (CBEs). Background: There are currently no standardized and widely accepted CBE screening techniques. Methods: Experienced physicians attending a national conference conducted simulated CBEs presenting different pathologies with distinct tumorous lesions. Single-hand exam motion was recorded and analyzed using marker-less video tracking. Four kinematic measures were developed to describe temporal (time pressing and time searching) and spatial (area covered and distance explored) patterns. Results: Mean differences between time pressing, area covered, and distance explored varied across the simulated lesions. Exams were objectively categorized as sporadic, localized, thorough, or efficient in both temporal and spatial categories based on spatiotemporal characteristics. The majority of trials were temporally or spatially thorough (78% and 91%), exhibiting proportionally greater time pressing and time searching (temporally thorough) and greater area probed with greater distance explored (spatially thorough). More efficient exams exhibited proportionally more time pressing with less time searching (temporally efficient) and greater area probed with less distance explored (spatially efficient). Just two (5.9%) of the trials exhibited both high temporal and spatial efficiency. Conclusions: Marker-less video tracking was used to discriminate different examination techniques and measure when an exam changes from general searching to specific probing. The majority of participants exhibited more thorough than efficient patterns. Application: Marker-less video kinematic tracking may be useful for quantifying clinical skills for training and assessment.


58th International Annual Meeting of the Human Factors and Ergonomics Society, HFES 2014 | 2014

An Equation for Estimating Hand Activity Level Based on Measured Hand Speed and Duty Cycle

Oguz Akkas; David P. Azari; Chia-Hsiung Chen; Yu Hen Hu; Thomas J. Armstrong; Sheryl S. Ulin; Robert G. Radwin

We are developing video processing algorithms for automatically measuring the ACGIH TLV® hand activity level (HAL) using marker-less tracking of hand movements. An equation for computing HAL ratings directly from tracked kinematics, rather than using a frequency-duty cycle (DC) look-up table, more readily lends itself to automated processing. Videos from the 33 Latko et al. (1997) jobs were digitized and analyzed using marker-less tracking, and hand root mean square (RMS) speed (S) was measured. A linear regression model was developed for predicting the average observer-rated HAL based on the measured hand RMS speed and DC. Since the videos did not contain distance calibration, speed was quantified in pixels/s and normalized by the distance of each worker's hand breadth, measured in pixels. A Monte Carlo simulation was performed using the US Army (1991) hand anthropometry data to determine how variation is introduced in the equation as hand breadth varies. The resulting equation was HAL = −1.06 + 0.0047 S + 0.053 DC, and it predicted HAL ratings within ±1. The development of an accurate equation for estimating HAL ratings should enable use of automated and objective measurement in practice. While expert observer HAL ratings offer speed and efficiency, use of objective measurements based on worker hand kinematics should provide greater reliability, as well as offering specific engineering aspects of the job that may be addressed for reducing exposures and the risk of musculoskeletal disorders. Furthermore, automated video analysis may help improve the speed and efficiency of making objective measurements in practice.
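The reported linear fit can be turned into a small helper. The clipping to the 0-10 HAL rating scale below is this sketch's assumption, not something stated in the abstract.

```python
def hal_from_speed_duty(speed, duty_cycle):
    """Estimate HAL from hand RMS speed and duty cycle using the linear
    equation reported in the abstract: HAL = -1.06 + 0.0047*S + 0.053*DC.

    speed: scaled hand RMS speed S (the paper normalizes pixel speed by
    hand breadth); duty_cycle: percent exertion time (0-100).
    Clipping to the 0-10 HAL scale is an assumption of this sketch.
    """
    hal = -1.06 + 0.0047 * speed + 0.053 * duty_cycle
    return min(10.0, max(0.0, hal))
```

For example, a scaled speed of 200 with a 50% duty cycle yields a HAL of about 2.5.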


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2018

Can Surgical Performance for Varying Experience be Measured from Hand Motions?

David P. Azari; Brady L. Miller; Brian Le; Jacob A. Greenberg; Caprice C. Greenberg; Carla M. Pugh; Yu Hen Hu; Robert G. Radwin

This study evaluates whether hand movements, tracked using digital video, can quantify in-context surgical performance. Participants of varied experience completed simple interrupted suturing and running subcuticular suturing tasks. Marker-less motion tracking software traced the two-dimensional position of a region of the hand for every video frame. Four expert observers rated 219 short video clips of participants performing the task from 0 to 10 along the following visual analog scales: fluidity of motion, motion economy, tissue handling, and coordination. Expert ratings of attending surgeon hand motions (mean = 7.5, sd = 1.3) were significantly greater (p < 0.05) than those of medical students (mean = 5.0, sd = 1.9) and junior residents (mean = 6.4, sd = 1.5) for all rating scales. Significant differences (p < 0.02) in mean path length per cycle were also observed both between medical students (803 mm, sd = 374) and senior residents (491 mm, sd = 216), and between attending surgeons (424 mm, sd = 250) and junior residents (609 mm, sd = 187). These results suggest that substantial gains in performance are attained after the second year of residency and that hand kinematics can predict differences in expert ratings for simulated suturing tasks commensurate with experience – a necessary step to develop valid and automatic on-demand feedback tools.
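Path length per cycle, the kinematic measure compared across training levels above, is simply the cumulative Euclidean distance traveled by the tracked point over a task cycle. A minimal sketch, with the function name and input format assumed:

```python
import numpy as np

def path_length(xy):
    """Total distance traveled by the tracked hand region: the sum of
    Euclidean distances between consecutive frame positions.

    xy: sequence of (x, y) positions, one per video frame.
    """
    steps = np.diff(np.asarray(xy, dtype=float), axis=0)
    return float(np.hypot(steps[:, 0], steps[:, 1]).sum())
```

Shorter path lengths per suturing cycle, as the results above show for attending surgeons, indicate more economical hand motion.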


Proceedings of the Human Factors and Ergonomics Society Annual Meeting | 2014

Evaluation of hands-on clinical exam performance using marker-less video tracking

David P. Azari; Carla M. Pugh; Shlomi Laufer; Elaine R. Cohen; Calvin Kwan; Chia-Hsiung Chen; Thomas Y. Yen; Yu Hen Hu; Robert G. Radwin

This study investigates the potential of using marker-less video tracking for evaluating hands-on clinical skills. Experienced family practitioners attending a national conference were recruited and asked to conduct a breast examination on a simulator that presents different clinical pathologies. Videos were taken of the clinician's hands during the exam. Video processing software for tracking and quantifying hand motion kinematics was used. Videos were divided into two segments: a general search segment and a mass exploration segment. The general exploration segments exhibited motion patterns that included 72% faster movement and 73% higher acceleration across clinical pathologies. The most complex pathology exhibited 14% greater displacement for pressing/rubbing than for general exploration. Marker-less video kinematic tracking shows promise in discriminating between different examination procedures, clinicians, and pathologies.


Surgery | 2016

A marker-less technique for measuring kinematics in the operating room

Lane L. Frasier; David P. Azari; Yue Ma; Sudha R. Pavuluri Quamme; Robert G. Radwin; Carla M. Pugh; Thomas Y. Yen; Chia-Hsiung Chen; Caprice C. Greenberg


The Journal of Urology | 2018

MP01-07: Use of Computer Vision Motion Analysis to Aid in Surgical Skill Assessment of Suturing Tasks

Brady L. Miller; David P. Azari; Robert G. Radwin; Brian Le

Collaboration


Dive into David P. Azari's collaborations.

Top Co-Authors

Robert G. Radwin, University of Wisconsin-Madison

Yu Hen Hu, University of Wisconsin-Madison

Carla M. Pugh, University of Wisconsin-Madison

Chia-Hsiung Chen, University of Wisconsin-Madison

Thomas Y. Yen, University of Wisconsin-Madison

Brady L. Miller, University of Wisconsin-Madison

Brian Le, University of Wisconsin-Madison