
Publication


Featured research published by Jingyu Cui.


acm multimedia | 2008

Real time google and live image search re-ranking

Jingyu Cui; Fang Wen; Xiaoou Tang

Nowadays, web-scale image search engines (e.g. Google, Live Image Search) rely almost purely on surrounding text features. This leads to ambiguous and noisy results. We propose to use adaptive visual similarity to re-rank the text-based search results. A query image is first categorized into one of several predefined intention categories, and a specific similarity measure is used inside each category to combine image features for re-ranking based on the query image. Extensive experiments demonstrate that using this algorithm to filter output of Google and Live Image Search is a practical and effective way to dramatically improve the user experience. A real-time image search engine is developed for on-line image search with re-ranking: http://mmlab.ie.cuhk.edu.hk/intentsearch
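The two-stage pipeline the abstract describes (categorize the query into an intention category, then re-rank with a category-specific similarity) can be sketched in a few lines. The nearest-prototype categorizer and diagonally weighted distances below are illustrative assumptions, not the paper's actual models:

```python
import numpy as np

def categorize(query_feat, category_prototypes):
    """Assign the query image to the nearest predefined intention category
    (stand-in for the paper's query categorizer)."""
    dists = [np.linalg.norm(query_feat - p) for p in category_prototypes]
    return int(np.argmin(dists))

def rerank(query_feat, result_feats, category_weights, category):
    """Re-rank text-based results by a category-specific weighted similarity:
    each intention category combines image features with its own weights."""
    w = category_weights[category]
    scores = [-np.sum(w * (query_feat - f) ** 2) for f in result_feats]
    return [int(i) for i in np.argsort(scores)[::-1]]  # most similar first
```

The key idea is that the similarity measure itself is selected per category, so e.g. a "portrait" query and a "scenery" query weight color and texture features differently.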


international conference on multimedia and expo | 2010

User intention modeling for interactive image retrieval

Jingyu Cui; Fang Wen; Xiaoou Tang

We propose three interactive methods that let the computer better understand user intention in content-based image retrieval: 1. a smart intention list elicits the user's intention, improving search results through an intention-specific search schema; 2. reference-stroke interaction lets the user specify the intention in detail by marking regions of interest; 3. natural user feedback collects relevance-feedback data with minimal effort to boost system performance. A systematic user study shows that the proposed interactive mechanism improves search efficiency, reduces user workload, and enhances the user experience.


Medical Physics | 2011

Fully 3D list-mode time-of-flight PET image reconstruction on GPUs using CUDA.

Jingyu Cui; Guillem Pratx; Sven Prevrhal; Craig S. Levin

PURPOSE: List-mode processing is an efficient way of dealing with the sparse nature of positron emission tomography (PET) data sets and is the processing method of choice for time-of-flight (ToF) PET image reconstruction. However, the massive amount of computation involved in forward projection and backprojection limits the application of list-mode reconstruction in practice, and makes it challenging to incorporate accurate system modeling.

METHODS: The authors present a novel formulation for computing line projection operations on graphics processing units (GPUs) using the compute unified device architecture (CUDA) framework, and apply the formulation to list-mode ordered-subsets expectation maximization (OSEM) image reconstruction. Our method overcomes well-known GPU challenges such as divergence of compute threads, limited bandwidth of global memory, and limited size of shared memory, while exploiting GPU capabilities such as fast access to shared memory and efficient linear interpolation of texture memory. Execution time comparison and image quality analysis of the GPU-CUDA method and the central processing unit (CPU) method are performed on several data sets acquired on a preclinical scanner and a clinical ToF scanner.

RESULTS: When applied to line projection operations for non-ToF list-mode PET, this new GPU-CUDA method is >200 times faster than a single-threaded reference CPU implementation. For ToF reconstruction, we exploit a ToF-specific optimization to improve the efficiency of our parallel processing method, resulting in GPU reconstruction >300 times faster than the CPU counterpart. For a typical whole-body scan with a 75 × 75 × 26 image matrix, 40.7 million LORs, 33 subsets, and 3 iterations, the overall processing time is 7.7 s for the GPU and 42 min for a single-threaded CPU. Image quality and accuracy are preserved for multiple imaging configurations and reconstruction parameters, with normalized root mean squared (RMS) deviation less than 1% between CPU- and GPU-generated images for all cases.

CONCLUSIONS: A list-mode ToF OSEM library was developed on the GPU-CUDA platform. Our studies show that the GPU reformulation is considerably faster than a single-threaded reference CPU method, especially for ToF processing, while producing virtually identical images. This new method can be easily adapted to enable more advanced algorithms for high resolution PET reconstruction based on additional information such as depth of interaction (DoI), photon energy, and point spread functions (PSFs).
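The list-mode OSEM update being parallelized can be sketched compactly. The explicit per-event weight rows below stand in for the on-the-fly GPU line-projection kernels; the subset partitioning and sensitivity handling are simplified assumptions:

```python
import numpy as np

def list_mode_osem(event_rows, x0, sens, n_subsets=2, n_iters=3, eps=1e-12):
    """List-mode OSEM sketch. event_rows[i] holds the projection weights of
    event i along its LOR (computed on the fly by the GPU kernels in the
    paper); sens is the sensitivity image. Each subset update forward-projects
    its events, backprojects the reciprocals, and rescales the image."""
    x = x0.astype(float).copy()
    for _ in range(n_iters):
        for s in range(n_subsets):
            back = np.zeros_like(x)
            for a in event_rows[s::n_subsets]:
                back += a / max(a @ x, eps)  # backproject 1 / forward projection
            x = x * back / np.maximum(sens / n_subsets, eps)
    return x
```

On the GPU, the inner loop over events is what gets parallelized: forward projection is one kernel over events, backprojection another, with the thread-divergence and memory-bandwidth issues the abstract mentions arising from events tracing lines of very different lengths and orientations.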


computer vision and pattern recognition | 2008

Transductive object cutout

Jingyu Cui; Qiong Yang; Fang Wen; Qiying Wu; Changshui Zhang; L. Van Gool; Xiaoou Tang

In this paper, we address the issue of transducing the object cutout model from an example image to novel image instances. We observe that although the object and background are very likely to contain similar colors in natural images, it is much less probable that they share similar color configurations. Motivated by this observation, we propose a local color pattern model to characterize the color configuration in a robust way. Additionally, we propose an edge profile model to modulate the contrast of the image, which enhances edges along object boundaries and attenuates edges inside the object or background. The local color pattern model and edge model are integrated in a graph-cut framework. Higher accuracy and improved robustness of the proposed method are demonstrated through experimental comparison with state-of-the-art algorithms.
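The graph-cut framework minimizes an energy combining unary terms (here, the local color pattern likelihoods) and pairwise terms (the edge-modulated contrast). On a 1-D pixel chain that same energy form can be minimized exactly by dynamic programming, which makes a compact runnable illustration; the paper's actual models and 2-D min-cut solver are not reproduced here:

```python
import numpy as np

def segment_chain(unary, w):
    """Exact minimizer of E(L) = sum_i unary[i][L_i] + sum_i w[i]*[L_i != L_{i+1}]
    for binary labels on a 1-D chain. Graph cut solves the same energy class
    on 2-D pixel grids."""
    n = len(unary)
    cost = np.array(unary[0], dtype=float)   # best cost ending in each label
    back = np.zeros((n, 2), dtype=int)       # backpointers for recovery
    for i in range(1, n):
        new = np.empty(2)
        for lab in (0, 1):
            trans = cost + w[i - 1] * (np.array([0, 1]) != lab)
            back[i, lab] = int(np.argmin(trans))
            new[lab] = trans[back[i, lab]] + unary[i][lab]
        cost = new
    labels = [int(np.argmin(cost))]
    for i in range(n - 1, 0, -1):            # trace back the optimal labeling
        labels.append(int(back[i, labels[-1]]))
    return labels[::-1]
```

The pairwise weight `w[i]` plays the role of the edge profile model: small across true object boundaries (cheap to switch labels), large inside homogeneous regions.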


acm multimedia | 2008

IntentSearch: interactive on-line image search re-ranking

Jingyu Cui; Fang Wen; Xiaoou Tang

In this demo, we present IntentSearch, an interactive system for real-time web-based image retrieval. IntentSearch works directly on top of Microsoft Live Image Search and re-ranks its results according to user-specified query image(s) and the automatically inferred user intention. Besides searching within the interface of Microsoft Live Image Search, we also design a more flexible interface that lets users browse and play with all the images in the current search session, which makes web image search more efficient and interesting. Please visit http://mmlab.ie.cuhk.edu.hk/intentsearch to try it.


IEEE Transactions on Medical Imaging | 2013

Distributed MLEM: An Iterative Tomographic Image Reconstruction Algorithm for Distributed Memory Architectures

Jingyu Cui; Guillem Pratx; Bowen Meng; Craig S. Levin

The processing speed for positron emission tomography (PET) image reconstruction has been greatly improved in recent years by simply dividing the workload to multiple processors of a graphics processing unit (GPU). However, if this strategy is generalized to a multi-GPU cluster, the processing speed does not improve linearly with the number of GPUs. This is because large data transfer is required between the GPUs after each iteration, effectively reducing the parallelism. This paper proposes a novel approach to reformulate the maximum likelihood expectation maximization (MLEM) algorithm so that it can scale up to many GPU nodes with less frequent inter-node communication. While being mathematically different, the new algorithm maximizes the same convex likelihood function as MLEM, thus converges to the same solution. Experiments on a multi-GPU cluster demonstrate the effectiveness of the proposed approach.
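The communication bottleneck the abstract describes is easy to see in the standard MLEM update, sketched below with measurement rows sharded across "nodes". Each node backprojects only its own events, but the partial backprojections must be summed before the multiplicative update, an all-reduce every iteration. This is the baseline whose cost motivates the paper's reformulation (which is not reproduced here):

```python
import numpy as np

def mlem_sharded(A, y, x0, n_nodes=2, n_iters=20, eps=1e-12):
    """Data-parallel MLEM sketch: x <- x / sens * A^T (y / (A x)), with the
    rows of A (events) sharded across nodes. The summed partial backprojection
    is the inter-node communication that limits multi-GPU scaling."""
    sens = A.sum(axis=0)
    x = x0.astype(float).copy()
    shards = [(A[k::n_nodes], y[k::n_nodes]) for k in range(n_nodes)]
    for _ in range(n_iters):
        back = np.zeros_like(x)
        for Ak, yk in shards:                       # in parallel on each node
            back += Ak.T @ (yk / np.maximum(Ak @ x, eps))
        x = x * back / np.maximum(sens, eps)        # needs the all-reduced sum
    return x
```

Because the likelihood being maximized is convex (in the MLEM sense), any reformulation that preserves its maximizer, as the paper's does, can trade update structure for communication frequency without changing the solution.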


Entertainment Computing | 2009

Exercising at home: Real-time interaction and experience sharing using avatars

Jingyu Cui; Yasmin Aghajan; Joyca Lacroix; Aart van Halteren; Hamid K. Aghajan

This paper reports on the design of a vision-based exercise monitoring system. The system aims to promote well-being by making exercise sessions enjoyable experiences, either through real-time interaction and instructions proposed to the user, or via experience sharing or group gaming with peers in a virtual community. The use of avatars is explored as a means of representing the user's exercise movements or appearance, and the system employs user-centric approaches in visual processing, behavior modeling via history data accumulation, and user feedback to learn the user's appreciation. A preliminary user survey study has been conducted to explore avatar appreciation across different types of social contexts.


acm multimedia | 2007

Combining stroke-based and selection-based relevance feedback for content-based image retrieval

Jingyu Cui; Changshui Zhang

We propose a flexible interaction mechanism for CBIR that enables relevance feedback inside images through drawing strokes. User interest is obtained from an easy-to-use interface and fused seamlessly with traditional feedback information in a semi-supervised learning framework. Retrieval performance is boosted due to a more precise description of the query concept. Region segmentation is also improved based on the collected strokes, which further enhances retrieval precision. We implement our system, the Flexible Image Search Tool (FIST), based on these ideas. Experiments on two real-world data sets demonstrate the effectiveness of our approach.


ieee nuclear science symposium | 2011

Measurement-based spatially-varying point spread function for list-mode PET reconstruction on GPU

Jingyu Cui; Guillem Pratx; Sven Prevrhal; Bin Zhang; Lingxiong Shao; Craig S. Levin

We present a novel method to accurately model the spatially-varying point spread function (PSF) of a PET system, reformulated for list-mode reconstruction on the graphics processing unit (GPU). The spatially-varying PSF for each LOR is modeled as an asymmetric Gaussian function whose variance changes asymmetrically according to the orientation of the line of response (LOR) and the voxel geometry. To fit the PSF parameters, a point source is imaged at twelve locations in a Philips Gemini TF PET system. To avoid tedious mechanical calibrations, the accurate point source location is estimated directly from the list-mode data. We introduce a canonical sinogram that enables reading the sampled PSF directly from a stack of sinograms by exploiting the rotational symmetry of the system matrix. The critical parameters for the PSF model are obtained by solving a convex optimization problem based on the measured point source data. The spatially-varying PSF is efficiently incorporated into the image reconstruction process on the GPU using CUDA texture memory. The reconstruction algorithm incorporating the measurement-based shift-varying PSF takes 103 milliseconds per iteration to process a million LORs in a 75×75×26 image on a GeForce GTX 480 GPU, which is 190 times faster than a non-PSF implementation on a state-of-the-art central processing unit (CPU), and only 6.8% slower than a spatially-invariant fixed-width Gaussian kernel on the same GPU. Compared with no PSF modeling, this shift-varying PSF shows average improvements in spatial resolution and contrast-to-noise ratio for point sources at the periphery of 2.95 ± 0.44% and 159.62 ± 31.54%, respectively. Improvements of the same parameters compared to the spatially-invariant PSF are 1.00 ± 0.26% and 41.11 ± 9.45%, respectively. These results indicate that fast and accurate spatially-varying PSF reconstruction promises better resolution and contrast recovery at very small additional computational cost.
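The asymmetric Gaussian profile at the heart of the PSF model is simple to sketch: a Gaussian whose width differs on the two sides of the peak. How the two widths depend on LOR orientation and voxel geometry is fit from the point-source measurements in the paper and is not reproduced here:

```python
import numpy as np

def asymmetric_gaussian(t, mu, sigma_left, sigma_right):
    """Asymmetric Gaussian PSF profile along a LOR: width sigma_left applies
    to the side t < mu, sigma_right to the other side. In the paper both
    widths vary with position; here they are fixed inputs."""
    sigma = np.where(t < mu, sigma_left, sigma_right)
    return np.exp(-0.5 * ((t - mu) / sigma) ** 2)
```

In reconstruction, such a profile is evaluated per LOR to spread each event's contribution over neighboring voxels, which is why storing it in texture memory (hardware-interpolated reads) keeps the overhead small.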


ieee nuclear science symposium | 2011

Point spread function for PET detectors based on the probability density function of the line segment

Eric Gonzalez; Jingyu Cui; Guillem Pratx; Matthew F Bieniosek; Peter D. Olcott; Craig S. Levin

We propose a new approach to calculate the Point Spread Function (PSF) for PET detectors based on the probability density function (PDF) of the line segment connecting two detector elements. Positron Emission Tomography (PET) events comprise the detection and positioning of pairs of oppositely directed 511 keV photons. The most significant blurring effect in PET is the considerable size of the detector elements, which causes uncertainty in the detected positions of the photons. Typically this physical blurring is modeled in the forward direction, following photons from the source to the detectors. This work presents an analytical framework for calculating the physical blurring from the inverse direction, that is, from the detector to the source. The kernel is derived from the parameterization of the line segment whose endpoints are random variables described by the intrinsic detector response function distribution. This kernel is calculated in a first-order approximation and, when compared against a measured PSF profile, yields less than 8% root mean square (RMS) difference. Also, from this kernel a PSF FWHM as a function of the distance to the center of the scanner is derived. The ratio between the PSF FWHM and the intrinsic detector resolution (FWHM0) agrees with Monte Carlo simulations. For detectors whose intrinsic response functions are described by Gaussian profiles, we calculated ratios of 1/√2 and √(5/8) at the center (R = 0) and halfway from the center (R = system radius / 2), respectively, in agreement with published values of 1/√2 and 0.85; similarly, for uniform (rectangular) profiles we get 1/2 and 3/4, which equal the published values.
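The Gaussian-profile ratios quoted above can be checked with a quick Monte Carlo. A point at fraction α along the segment inherits positioning error (1−α)·e₁ + α·e₂ from the two detector endpoints, so its standard deviation is √((1−α)² + α²) times the intrinsic σ: 1/√2 at the center (α = 1/2) and √(5/8) halfway out (α = 3/4). Mapping the radius to α this way is my reading of the geometry, not a formula quoted from the paper:

```python
import numpy as np

def psf_width_ratio(alpha, sigma=1.0, n=200_000, seed=0):
    """Monte Carlo ratio of the PSF width at fraction alpha along the line
    segment to the intrinsic detector std. Both endpoints carry independent
    Gaussian positioning errors; analytically the ratio is
    sqrt((1 - alpha)**2 + alpha**2)."""
    rng = np.random.default_rng(seed)
    e1 = rng.normal(0.0, sigma, n)
    e2 = rng.normal(0.0, sigma, n)
    return np.std((1 - alpha) * e1 + alpha * e2) / sigma
```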

Collaboration


Jingyu Cui's most frequent collaborators.

Top Co-Authors

Avatar
Top Co-Authors

Avatar
Top Co-Authors

Avatar

Xiaoou Tang

The Chinese University of Hong Kong
