
Publication


Featured research published by Yubing Tong.


Medical Image Analysis | 2014

Body-Wide Hierarchical Fuzzy Modeling, Recognition, and Delineation of Anatomy in Medical Images

Jayaram K. Udupa; Dewey Odhner; Liming Zhao; Yubing Tong; Monica M. S. Matsumoto; Krzysztof Ciesielski; Alexandre X. Falcão; Pavithra Vaideeswaran; Victoria Ciesielski; Babak Saboury; Syedmehrdad Mohammadianrasanani; Sanghun Sin; Raanan Arens; Drew A. Torigian

To make Quantitative Radiology (QR) a reality in radiological practice, computerized body-wide Automatic Anatomy Recognition (AAR) becomes essential. With the goal of building a general AAR system that is not tied to any specific organ system, body region, or image modality, this paper presents an AAR methodology for localizing and delineating all major organs in different body regions based on fuzzy modeling ideas and a tight integration of fuzzy models with an Iterative Relative Fuzzy Connectedness (IRFC) delineation algorithm. The methodology consists of five main steps: (a) gathering image data for both building models and testing the AAR algorithms from patient image sets existing in our health system; (b) formulating precise definitions of each body region and organ and delineating them following these definitions; (c) building hierarchical fuzzy anatomy models of organs for each body region; (d) recognizing and locating organs in given images by employing the hierarchical models; and (e) delineating the organs following the hierarchy. In Step (c), we explicitly encode object size and positional relationships into the hierarchy and subsequently exploit this information in object recognition in Step (d) and delineation in Step (e). Modality-independent and dependent aspects are carefully separated in model encoding. At the model building stage, a learning process is carried out for rehearsing an optimal threshold-based object recognition method. The recognition process in Step (d) starts from large, well-defined objects and proceeds down the hierarchy in a global to local manner. A fuzzy model-based version of the IRFC algorithm is created by naturally integrating the fuzzy model constraints into the delineation algorithm. The AAR system is tested on three body regions - thorax (on CT), abdomen (on CT and MRI), and neck (on MRI and CT) - involving a total of over 35 organs and 130 data sets (the total used for model building and testing). The training and testing data sets are of equal size in all cases except for the neck. Overall the AAR method achieves a mean accuracy of about 2 voxels in localizing non-sparse blob-like objects and most sparse tubular objects. The delineation accuracy in terms of mean false positive and negative volume fractions is 2% and 8%, respectively, for non-sparse objects, and 5% and 15%, respectively, for sparse objects. The two object groups achieve mean boundary distance relative to ground truth of 0.9 and 1.5 voxels, respectively. Some sparse objects - the venous system (in the thorax on CT), inferior vena cava (in the abdomen on CT), and mandible and nasopharynx (in the neck on MRI, but not on CT) - pose challenges at all levels, leading to poor recognition and/or delineation results. The AAR method fares quite favorably when compared with methods from the recent literature for the liver, kidneys, and spleen on CT images. We conclude that separation of modality-independent from dependent aspects, organization of objects in a hierarchy, explicit encoding of object relationship information into the hierarchy, optimal threshold-based recognition learning, and fuzzy model-based IRFC are effective concepts that allowed us to demonstrate the feasibility of a general AAR system that works in different body regions on a variety of organs and on different modalities.
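At the core of the delineation step is fuzzy connectedness, in which a path's strength is its weakest affinity link and each voxel receives the strength of its best path to a seed. Below is a minimal single-seed 2D sketch of that primitive; the paper's IRFC additionally iterates over competing seed sets per object. The Gaussian affinity, sigma value, and 4-neighborhood are illustrative assumptions, not the paper's settings.

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seed, sigma=10.0):
    """Connectivity map: for each pixel, the strength of the best path
    to `seed`, where a path's strength is its weakest affinity link."""
    h, w = image.shape
    conn = np.zeros((h, w))
    conn[seed] = 1.0
    heap = [(-1.0, seed)]  # max-heap via negated strengths
    while heap:
        neg_k, (r, c) = heapq.heappop(heap)
        k = -neg_k
        if k < conn[r, c]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                # Affinity: high when neighboring intensities are similar.
                diff = float(image[r, c]) - float(image[rr, cc])
                mu = np.exp(-diff ** 2 / (2 * sigma ** 2))
                strength = min(k, mu)  # weakest link along the path
                if strength > conn[rr, cc]:
                    conn[rr, cc] = strength
                    heapq.heappush(heap, (-strength, (rr, cc)))
    return conn

img = np.pad(np.full((8, 8), 100.0), 4)   # bright square on dark background
conn = fuzzy_connectedness(img, seed=(8, 8))
print(int((conn > 0.5).sum()))            # pixels strongly connected to seed
```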


Medical Physics | 2014

Optimization of abdominal fat quantification on CT imaging through use of standardized anatomic space: a novel approach.

Yubing Tong; Jayaram K. Udupa; Drew A. Torigian

PURPOSE: The quantification of body fat plays an important role in the study of numerous diseases. It is common current practice to use the fat area at a single abdominal computed tomography (CT) slice as a marker of the body fat content in studying various disease processes. This paper sets out to answer three questions related to this issue which have not been addressed in the literature. At what single anatomic slice location do the areas of subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) estimated from the slice correlate maximally with the corresponding fat volume measures? How does one ensure that the slices used for correlation calculation from different subjects are at the same anatomic location? Are there combinations of multiple slices (not necessarily contiguous) whose area sum correlates better with volume than does single slice area with volume?

METHODS: The authors propose a novel strategy for mapping slice locations to a standardized anatomic space so that same anatomic slice locations are identified in different subjects. The authors then study the volume-to-area correlations and determine where they become maximal. To address the third issue, the authors carry out similar correlation studies by utilizing two and three slices for calculating area sum.

RESULTS: Based on 50 abdominal CT data sets, the proposed mapping achieves significantly improved consistency of anatomic localization compared to current practice. Maximum correlations are achieved at different anatomic locations for SAT and VAT which are both different from the L4-L5 junction commonly utilized currently for single slice area estimation as a marker.

CONCLUSIONS: The maximum area-to-volume correlation achieved is quite high, suggesting that it may be reasonable to estimate body fat by measuring the area of fat from a single anatomic slice at the site of maximum correlation and use this as a marker. The site of maximum correlation is not at L4-L5 as commonly assumed, but is more superiorly located at T12-L1 for SAT and at L3-L4 for VAT. Furthermore, the optimal anatomic locations for SAT and VAT estimation are not the same, contrary to common assumption. The proposed standardized space mapping achieves high consistency of anatomic localization by accurately managing nonlinearities in the relationships among landmarks. Multiple slices achieve greater improvement in correlation for VAT than for SAT. The optimal locations in the case of multiple slices are not contiguous.
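The mapping idea lends itself to a compact sketch: slice positions are piecewise-linearly rescaled between matched anatomic landmarks onto a common axis, so that the "same" slice can be sampled in every subject, and the slice-area-to-volume Pearson correlation is then computed per standardized location. Everything below (the two-landmark choice, the synthetic area profiles, the [0, 100] axis) is an illustrative assumption, not the authors' data or method details.

```python
import numpy as np

def to_standard_space(z, z_landmarks, std_landmarks):
    """Piecewise-linearly map physical slice positions z (mm) onto the
    standardized axis, given matched landmark positions."""
    return np.interp(z, z_landmarks, std_landmarks)

rng = np.random.default_rng(0)
n_subjects, n_std = 50, 101
areas = np.empty((n_subjects, n_std))
volumes = np.empty(n_subjects)
for i in range(n_subjects):
    # Synthetic per-subject fat-area profile over a subject-specific extent.
    z = np.linspace(0.0, 250.0 + 30.0 * rng.random(), 60)
    profile = 100 + 50 * np.sin(z / 80) + 5 * rng.standard_normal(z.size)
    std = to_standard_space(z, [z[0], z[-1]], [0.0, 100.0])  # two landmarks
    areas[i] = np.interp(np.linspace(0, 100, n_std), std, profile)
    volumes[i] = profile.sum() * (z[1] - z[0])  # area integrated over slices

# Volume-to-area correlation at each standardized slice location; the
# argmax is the candidate "optimal" single-slice measurement site.
corr = [np.corrcoef(areas[:, j], volumes)[0, 1] for j in range(n_std)]
print("best standardized slice:", int(np.argmax(corr)))
```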


Tsinghua Science & Technology | 2008

Image and Video Quality Assessment Using Neural Network and SVM

Wenrui Ding; Yubing Tong; Qi-Shan Zhang; Dongkai Yang

An image and video quality assessment method was developed using neural networks and support vector machines (SVM), with the peak signal-to-noise ratio (PSNR) and structural similarity indexes used to describe image quality. The neural network was used to obtain the mapping functions between the objective quality assessment indexes and subjective quality assessment. The SVM was used to classify the images into different types, which were assessed using different mapping functions. Video quality was assessed based on the quality of each frame in the video sequence, with various weights to describe motion and scene changes in the video. This method reduced the number of outliers in the correlation between the subjective and objective image and video quality assessments. Simulation results show that the method accurately assesses image quality. The monotonicity of the method for images is 6.94% higher than with the PSNR method, and the root mean square error is at least 35.90% lower than with the PSNR.
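The overall shape of such a pipeline can be sketched as follows: a classifier routes each image to a type, and a per-type learned regressor maps objective indexes to subjective scores. This is a sketch under stated assumptions, not the paper's implementation: only PSNR is used as the objective index, the training data are synthetic, and the scikit-learn models stand in for the paper's networks.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVC

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(1)
# Synthetic training data: one objective index (PSNR) per image, with two
# image "types" that follow different index-to-score mappings.
X = rng.uniform(20, 45, size=(200, 1))
types = (X[:, 0] > 32).astype(int)
y = np.where(types == 0, 0.15 * X[:, 0], 0.10 * X[:, 0] + 1.0)

clf = SVC().fit(X, types)                     # route an image to its type
mappings = [MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                         max_iter=2000, random_state=0)
            .fit(X[types == t], y[types == t]) for t in (0, 1)]

x_new = np.array([[30.0]])                    # PSNR of a new image
t = int(clf.predict(x_new)[0])
print("predicted subjective score:", mappings[t].predict(x_new)[0])
```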


International Symposium on Distributed Computing | 2010

Predictive Saliency Maps for Surveillance Videos

Fahad Fazal Elahi Guraya; Faouzi Alaya Cheikh; Alain Trémeau; Yubing Tong; Hubert Konik

When viewing video sequences, the human visual system (HVS) tends to focus on the active objects. These are perceived as the most salient regions in the scene. Additionally, human observers tend to predict the future positions of moving objects in a dynamic scene and to direct their gaze to these positions. In this paper we propose a saliency detection model that accounts for the motion in the sequence and predicts the positions of the salient objects in future frames. This is a novel technique for attention models that we call Predictive Saliency Map (PSM). PSM improves the consistency of the estimated saliency maps for video sequences. PSM uses both static information provided by static saliency maps (SSM) and motion vectors to predict future salient regions in the next frame. In this paper we focus only on surveillance videos; therefore, in addition to low-level features such as intensity, color, and orientation, we consider high-level features such as faces, which naturally attract viewers' attention, as salient regions. Saliency maps computed based on these static features are combined with motion saliency maps to account for saliency created by the activity in the scene. The predicted saliency map is computed using previous saliency maps and motion information. The PSMs are compared with the experimentally obtained gaze maps and saliency maps obtained using approaches from the literature. The experimental results show that our enhanced model yields higher ability to predict eye fixations in surveillance videos.
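The prediction step can be sketched by warping the current saliency map along block motion vectors to anticipate the next frame's salient regions, then blending for temporal consistency. The block size, blending weight alpha, and toy inputs below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def predict_saliency(saliency, motion, block=8, alpha=0.7):
    """Shift each block of `saliency` by its motion vector (dy, dx), then
    blend with the unshifted map for temporal consistency."""
    h, w = saliency.shape
    predicted = np.zeros_like(saliency)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = motion[by // block, bx // block]
            y0 = int(np.clip(by + dy, 0, h - block))
            x0 = int(np.clip(bx + dx, 0, w - block))
            predicted[y0:y0 + block, x0:x0 + block] = \
                saliency[by:by + block, bx:bx + block]
    return alpha * predicted + (1 - alpha) * saliency

sal = np.zeros((64, 64)); sal[24:32, 24:32] = 1.0       # one salient block
mv = np.zeros((8, 8, 2), dtype=int); mv[3, 3] = (8, 0)  # it moves downward
print(predict_saliency(sal, mv)[32:40, 24:32].mean())   # energy shifted down
```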


Proceedings of SPIE | 2013

Fuzzy model-based body-wide anatomy recognition in medical images

Jayaram K. Udupa; Dewey Odhner; Yubing Tong; Monica M. S. Matsumoto; Krzysztof Ciesielski; Pavithra Vaideeswaran; Victoria Ciesielski; Babak Saboury; Liming Zhao; Syedmehrdad Mohammadianrasanani; Drew A. Torigian

To make Quantitative Radiology a reality in routine radiological practice, computerized automatic anatomy recognition (AAR) becomes essential. Previously, we presented a fuzzy object modeling strategy for AAR. This paper presents several advances in this project including streamlined definition of open-ended anatomic objects, extension to multiple imaging modalities, and demonstration of the same AAR approach on multiple body regions. The AAR approach consists of the following steps: (a) Collecting image data for each population group G and body region B. (b) Delineating in these images the objects in B to be modeled. (c) Building Fuzzy Object Models (FOMs) for B. (d) Recognizing individual objects in a given image of B by using the models. (e) Delineating the recognized objects. (f) Implementing the computationally intensive steps in a graphics processing unit (GPU). Image data are collected for B and G from our existing patient image database. Fuzzy models for the individual objects are built and assembled into a model of B as per a chosen hierarchy of the objects in B. A global recognition strategy is used to determine the pose of the objects within a given image I following the hierarchy. The recognized pose is utilized to delineate the objects, also hierarchically. Based on three body regions tested utilizing both computed tomography (CT) and magnetic resonance (MR) imagery, recognition accuracy for non-sparse objects has been found to be generally sufficient (3 to 11 mm, or 2-3 voxels) to yield delineation false positive (FP) and true positive (TP) values of < 5% and ≥ 90%, respectively. The sparse objects require further work to improve their recognition accuracy.
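The global-to-local recognition pass is easy to schematize: each object's pose search is initialized from its parent's recognized pose plus a learned relative offset, then refined locally, with children visited after parents. The hierarchy, offsets, and scoring function below are invented placeholders; the real system evaluates fuzzy-model fit over a much richer pose space.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    name: str
    mean_offset: tuple                 # expected center offset from parent
    children: list = field(default_factory=list)

def recognize(node, parent_center, score_pose):
    # Initialize from the learned parent-relative offset, then refine with
    # a small local search around the initial pose (+/- 1 voxel here).
    cx, cy, cz = (p + o for p, o in zip(parent_center, node.mean_offset))
    candidates = [(cx + dx, cy + dy, cz + dz)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
    best = max(candidates, key=lambda pose: score_pose(node.name, pose))
    print(f"{node.name}: recognized center {best}")
    for child in node.children:        # children anchored to recognized parent
        recognize(child, best, score_pose)

root = ObjectNode("skin", (0, 0, 0), [ObjectNode("liver", (-40, 20, 10)),
                                      ObjectNode("spleen", (45, 25, 5))])
# Placeholder scorer; the real system evaluates fuzzy-model fit at the pose.
recognize(root, (128, 128, 40), lambda name, pose: -sum(p % 7 for p in pose))
```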


Information Sciences, Signal Processing and their Applications | 2010

A non-reference perceptual quality metric based on visual attention model for videos

Fahad Fazal Elahi Guraya; Ali Shariq Imran; Yubing Tong; Faouzi Alaya Cheikh

The Human Visual System (HVS) tends to focus on specific regions of viewed images or video frames; this is done effortlessly, instantly, and unconsciously. These regions are called salient regions and form a saliency map, which can be used to improve a number of image and video processing techniques. In this paper, we propose a novel non-reference objective video quality metric based on the saliency map to improve the estimation of the perceived video quality. This metric estimates the degree of blur and blockiness in each video frame from the impaired video only, and uses it with the saliency map to derive a weighting function. The latter is used to modulate the contribution of the pixel differences to the final quality score. The salient regions of the videos are automatically computed using our video saliency model. A psychophysical experiment is conducted to estimate the perceived quality of the impaired videos. The results of this subjective test are compared to the scores obtained with the proposed objective metric. The objective and subjective scores are found to be highly correlated, which shows that our metric correctly estimates the perceived quality of a video.
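The weighting idea reduces to modulating a per-pixel distortion map by the normalized saliency map before pooling, so that artifacts in salient regions cost more. The sketch below uses a crude gradient-based stand-in for the paper's blur and blockiness estimators; under this stand-in, lower scores mean better quality.

```python
import numpy as np

def saliency_weighted_score(frame, saliency, eps=1e-6):
    # Stand-in distortion map: weak local gradients as a crude blur cue.
    gy, gx = np.gradient(frame.astype(float))
    distortion = 1.0 / (np.hypot(gx, gy) + 1.0)     # flat regions score high
    w = saliency / (saliency.sum() + eps)           # normalized weights
    return float((w * distortion).sum())            # lower means better

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, (64, 64)).astype(float)
blurred = (frame + np.roll(frame, 1, 0) + np.roll(frame, 1, 1)) / 3
sal = np.zeros((64, 64)); sal[16:48, 16:48] = 1.0   # central salient region
print(saliency_weighted_score(frame, sal),
      saliency_weighted_score(blurred, sal))        # blur raises the score
```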


Proceedings of SPIE | 2013

Abdominal adiposity quantification at MRI via fuzzy model-based anatomy recognition

Yubing Tong; Jayaram K. Udupa; Dewey Odhner; Sanghun Sin; Raanan Arens

In studying Obstructive Sleep Apnea Syndrome (OSAS) in obese children, the quantification of obesity through MRI has been shown to be useful. For large-scale studies, interactive or manual segmentation strategies become inadequate. Our goal is to automate this process to facilitate high throughput, precision, and accuracy and to eliminate subjectivity in quantification. In this paper, we demonstrate the adaptation, to this application, of a general body-wide Automatic Anatomy Recognition (AAR) system that is being developed separately. The AAR system has been developed based on existing clinical CT image data of 50-60 year-old male subjects and using fuzzy models of a large number of objects in each body region. The individual objects and their models are arranged in a hierarchy that is specific to each body region. In the application under consideration in this paper, we are primarily interested in only the skin boundary, and subcutaneous and visceral adipose region. Further, the image modality is MRI, and the study subjects are 8-17 year-old females. We demonstrate in this paper that, once such a full AAR system is built, it can be easily adapted to a new application by specifying the objects of interest, their hierarchy, and a few other application-specific parameters. Our tests based on MRI of 14 obese subjects indicate a recognition accuracy of about 2 voxels or better for both types of adipose regions. This seems quite adequate in terms of the initialization of model-based graph-cut (GC) and iterative relative fuzzy connectedness (IRFC) algorithms implemented in our AAR system for subsequent delineation of the objects. Both algorithms achieved low false positive volume fraction (FPVF) and high true positive volume fraction (TPVF), with IRFC performing better than GC.
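The adaptation described above amounts to keeping the AAR machinery fixed and supplying a small application spec: which objects, in what hierarchy, under which modality and delineation settings. A schematic of such a spec, with all names and values as illustrative assumptions rather than the system's actual configuration, might look like this:

```python
# All object names, hierarchy, and settings here are illustrative.
ADIPOSITY_SPEC = {
    "modality": "MRI",
    "population": "female, 8-17 years",
    "hierarchy": {"skin": ["subcutaneous_adipose", "visceral_adipose"]},
    "delineation": {"primary": "IRFC", "alternative": "graph_cut"},
}

def recognition_order(spec):
    """Breadth-first traversal of the hierarchy: parents before children."""
    order, queue = [], list(spec["hierarchy"].keys())
    while queue:
        obj = queue.pop(0)
        order.append(obj)
        queue.extend(spec["hierarchy"].get(obj, []))
    return order

print(recognition_order(ADIPOSITY_SPEC))
# ['skin', 'subcutaneous_adipose', 'visceral_adipose']
```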


Visual Communications and Image Processing | 2010

Multi-Feature Based Visual Saliency Detection in Surveillance Video

Yubing Tong; Hubert Konik; Faouzi Alaya Cheikh; Fahad Fazal Elahi Guraya; Alain Trémeau

The perception of video differs from that of still images because of the motion information in video: moving objects create differences between neighboring frames, and these differences tend to draw the viewer's attention. To date, most papers have addressed image saliency and seldom video saliency. Based on scene understanding, a new video saliency detection model with multiple features is proposed in this paper. First, the background is extracted based on binary tree searching; then the main features of the foreground are analyzed using a multi-scale perception model. The perception model integrates faces as a high-level feature, as a supplement to low-level features such as color, intensity, and orientation. A motion saliency map is calculated from statistics of the motion vector field. Finally, the multi-feature conspicuities are merged with different weights. Compared with the gaze map from subjective experiments, the output of the multi-feature based video saliency detection model is close to the gaze map.
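The final fusion step can be sketched as a weighted merge of normalized per-feature conspicuity maps into a single saliency map. The weights and toy maps below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def fuse_conspicuities(maps, weights):
    """Normalize each feature's conspicuity map to [0, 1], then merge
    with per-feature weights into a single saliency map."""
    fused = np.zeros_like(next(iter(maps.values())), dtype=float)
    for name, m in maps.items():
        span = m.max() - m.min()
        norm = (m - m.min()) / span if span > 0 else np.zeros_like(m)
        fused += weights[name] * norm
    return fused / sum(weights.values())

features = ("color", "intensity", "orientation", "face", "motion")
maps = {k: np.random.default_rng(i).random((32, 32))
        for i, k in enumerate(features)}
# Heavier weights for the high-level and motion cues (assumed values).
weights = {"color": 1, "intensity": 1, "orientation": 1, "face": 2, "motion": 2}
print(fuse_conspicuities(maps, weights).shape)
```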


PLOS ONE | 2016

MR Image Analytics to Characterize the Upper Airway Structure in Obese Children with Obstructive Sleep Apnea Syndrome.

Yubing Tong; Jayaram K. Udupa; Sanghun Sin; Zhengbing Liu; E. Paul Wileyto; Drew A. Torigian; Raanan Arens

Purpose: Quantitative image analysis in previous research in obstructive sleep apnea syndrome (OSAS) has focused on the upper airway or several objects in its immediate vicinity and measures of object size. In this paper, we take a more general approach of considering all major objects in the upper airway region and measures pertaining to their individual morphological properties, their tissue characteristics revealed by image intensities, and the 3D architecture of the object assembly. We propose a novel methodology to select a small set of salient features from this large collection of measures and demonstrate the ability of these features to discriminate with very high prediction accuracy between obese OSAS and obese non-OSAS groups.

Materials and Methods: Thirty children were involved in this study, with 15 in the obese OSAS group (apnea-hypopnea index (AHI) = 14.4 ± 10.7) and 15 in the obese non-OSAS group (AHI = 1.0 ± 1.0; p < 0.001). Subjects were between 8 and 17 years of age and underwent T1- and T2-weighted magnetic resonance imaging (MRI) of the upper airway during wakefulness. Fourteen objects in the vicinity of the upper airways were segmented in these images, and a total of 159 measurements were derived from each subject image, including object size, surface area, volume, sphericity, standardized T2-weighted image intensity value, and inter-object distances. A small set of discriminating features was identified from this set in several steps. First, a subset of measures with a low level of correlation among them was determined. A heat map visualization technique that allows grouping of parameters based on correlations among them was used for this purpose. Then, through t-tests, another subset of measures capable of separating the two groups was identified. The intersection of these subsets yielded the final feature set. The accuracy of these features in classifying unseen images into the two patient groups was tested using logistic regression and multi-fold cross validation.

Results: A set of 16 features with low inter-feature correlation (< 0.36) yielded a high classification accuracy of 96%, with sensitivity and specificity of 97.8% and 94.4%, respectively. In addition to the previously observed increase in linear size, surface area, and volume of the adenoid, tonsils, and fat pad in OSAS, the following new markers were found. Standardized T2-weighted image intensities differed between the two groups for the entire neck body region, pharynx, and nasopharynx, possibly indicating changes in object tissue characteristics. The fat pad and oropharynx become less round or more complex in shape in OSAS. The fat pad and tongue move closer together in OSAS, as do the oropharynx and tonsils, and the fat pad and tonsils. In contrast, the fat pad and oropharynx move farther apart from the skin object.

Conclusions: The study has found several new anatomic biomarkers of OSAS. Changes in standardized T2-weighted image intensities in objects may imply that intrinsic tissue composition undergoes changes in OSAS. The results on inter-object distances imply that treatment methods should respect the relationships that exist among objects and not just their size. The proposed method of analysis may lead to an improved understanding of the mechanisms underlying OSAS.
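The feature-selection pipeline maps naturally onto a few lines of array code: retain features with low mutual correlation, retain features that separate the groups by t-test, intersect the two sets, and cross-validate a logistic-regression classifier on the survivors. The sketch below uses synthetic data; the 0.36 correlation cutoff is taken from the abstract, while the 0.05 p-value cutoff and the greedy selection order are assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((30, 159))     # 30 subjects, 159 measures
y = np.repeat([0, 1], 15)              # obese non-OSAS vs. obese OSAS
X[y == 1, :5] += 1.2                   # plant a few informative features

# Step 1: greedily keep features whose pairwise |r| stays below 0.36.
corr = np.abs(np.corrcoef(X, rowvar=False))
low_corr = []
for j in range(X.shape[1]):
    if all(corr[j, k] < 0.36 for k in low_corr):
        low_corr.append(j)

# Step 2: features that separate the two groups (two-sample t-test).
pvals = ttest_ind(X[y == 0], X[y == 1], axis=0).pvalue
separating = set(np.where(pvals < 0.05)[0])

# Step 3: intersect, then cross-validate a logistic-regression classifier.
selected = [j for j in low_corr if j in separating]
acc = cross_val_score(LogisticRegression(max_iter=1000), X[:, selected], y, cv=5)
print(len(selected), "features; CV accuracy:", acc.mean())
```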


Proceedings of SPIE | 2015

Interactive Non-Uniformity Correction And Intensity Standardization Of MR Images

Yubing Tong; Jayaram K. Udupa; Dewey Odhner; Shobhit Sharma; Drew A. Torigian

Image non-uniformity and intensity non-standardness are two major hurdles encountered in human and computer interpretation and analysis of magnetic resonance (MR) images. Automated methods for image non-uniformity correction (NC) and intensity standardization (IS) may fail because solutions for them require identifying regions representing the same tissue type for several different tissues, and the automatic strategies, irrespective of the approach, may fail in this task. This paper presents interactive strategies to overcome this problem: interactive NC and interactive IS. The methods require sample tissue regions to be specified for several different types of tissues. Interactive NC estimates the degree of non-uniformity at each voxel in a given image, builds a global function for non-uniformity correction, and then corrects the image to improve quality. Interactive IS includes two steps: a calibration step and a transformation step. In the first step, tissue intensity signatures of each tissue from a few subjects are utilized to set up key landmarks in a standardized intensity space. In the second step, a piecewise linear intensity mapping function is built between the same tissue signatures derived from the given image and those in the standardized intensity space to transform the intensity of the given image into standardized intensity. Preliminary results on abdominal T1-weighted and T2-weighted MR images of 20 subjects show that interactive NC and IS are feasible and can significantly improve image quality over automatic methods. Interactive IS for MR images combined with interactive NC can substantially improve numeric characterization of tissues.
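The transformation step is essentially a piecewise linear map through matched tissue landmarks, which can be sketched with a single interpolation call. The landmark values, intensity ranges, and tissue pairings below are invented for illustration.

```python
import numpy as np

def standardize(image, tissue_medians, standard_medians,
                lo=0.0, hi=4095.0, std_lo=0.0, std_hi=4095.0):
    """Piecewise linear intensity mapping through matched tissue landmarks."""
    src = np.concatenate(([lo], np.sort(tissue_medians), [hi]))
    dst = np.concatenate(([std_lo], np.sort(standard_medians), [std_hi]))
    return np.interp(image, src, dst)

rng = np.random.default_rng(4)
img = rng.uniform(0, 4095, (64, 64))
# Median intensities of user-specified tissue samples in this image, and
# the standardized-scale landmarks learned for the same tissues during
# calibration (all values invented).
observed = np.array([600.0, 1400.0, 2600.0])
standard = np.array([800.0, 1500.0, 2500.0])
print(standardize(img, observed, standard).mean())
```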

Collaboration


Dive into Yubing Tong's collaboration.

Top Co-Authors

Jayaram K. Udupa (University of Pennsylvania)
Drew A. Torigian (University of Pennsylvania)
Dewey Odhner (University of Pennsylvania)
Caiyun Wu (University of Pennsylvania)
Gargi Pednekar (University of Pennsylvania)
Joseph M. McDonough (Children's Hospital of Philadelphia)
Raanan Arens (Albert Einstein College of Medicine)
Robert M. Campbell (Children's Hospital of Philadelphia)
Sanghun Sin (Albert Einstein College of Medicine)
Xingyu Wu (University of Pennsylvania)