
Publication


Featured research published by Chung-Te Li.


IEEE Transactions on Consumer Electronics | 2010

A novel 2D-to-3D conversion system using edge information

Chao-Chung Cheng; Chung-Te Li; Liang-Gee Chen

Although three-dimensional (3D) displays enhance visual quality more than two-dimensional (2D) displays do, the depth information required for 3D displays is unavailable in conventional 2D content. Therefore, converting 2D videos into 3D ones has become an important issue in emerging 3D applications. This work presents a novel algorithm that automatically converts 2D videos into 3D ones. The proposed algorithm utilizes edge information to segment the image into object groups. A depth map is then assigned based on a hypothesized depth gradient model. Next, the depth map is assigned block by block and refined with a cross bilateral filter to generate visually comfortable depth maps efficiently while diminishing block artifacts. A multiview video can then be readily generated using a depth image-based rendering method.
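
The two key steps described here, a hypothesized depth gradient and a cross (joint) bilateral refinement guided by the image, can be illustrated with a minimal Python/NumPy sketch. This is not the paper's implementation: the near-at-bottom gradient prior, the filter radius, and the sigma values below are assumptions chosen for illustration.

```python
import numpy as np

def gradient_depth_hypothesis(h, w):
    """Hypothesized depth gradient: near at the bottom, far at the top
    (a common prior for upright scenes; an assumption, not the paper's exact model)."""
    return np.tile(np.linspace(1.0, 0.0, h)[:, None], (1, w))

def cross_bilateral_filter(depth, guide, radius=4, sigma_s=3.0, sigma_r=0.1):
    """Refine a coarse depth map with a cross (joint) bilateral filter:
    spatial weights come from pixel distance, range weights from the *guide*
    image, so depth discontinuities snap to image edges."""
    h, w = depth.shape
    out = np.zeros_like(depth)
    pad_d = np.pad(depth, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    for y in range(h):
        for x in range(w):
            patch_d = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            patch_g = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-((patch_g - guide[y, x]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * range_w
            out[y, x] = (weights * patch_d).sum() / weights.sum()
    return out

# usage: gray is a luminance image scaled to [0, 1]
# depth0 = gradient_depth_hypothesis(*gray.shape)
# depth = cross_bilateral_filter(depth0, gray)
```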


International Conference on Consumer Electronics | 2009

A block-based 2D-to-3D conversion system with bilateral filter

Chao-Chung Cheng; Chung-Te Li; Po-Sen Huang; Tsung-Kai Lin; Yi-Min Tsai; Liang-Gee Chen

Three-dimensional (3D) displays provide a dramatic improvement in visual quality over 2D displays. The conversion of existing 2D videos to 3D videos is necessary for multimedia applications. This paper presents an automatic and robust system to convert 2D videos to 3D videos. The proposed 2D-to-3D conversion combines two major depth generation modules: depth from motion and depth from geometrical perspective. A block-based algorithm, combined with a bilateral filter, diminishes block artifacts and generates a comfortable depth map. After the depth map is generated, the multi-view video is rendered to the 3D display.
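
A rough sketch of the two-cue, block-based idea is given below, assuming OpenCV is available. Dense Farneback optical flow stands in for the paper's motion estimation, the geometrical-perspective cue is reduced to a simple bottom-near gradient, and a plain bilateral filter (rather than an image-guided one) smooths the upsampled block depth; the block size and blend weight are assumed values.

```python
import cv2
import numpy as np

BLOCK = 16  # block size in pixels; an assumed value, not from the paper

def block_depth_from_motion(prev_gray, cur_gray):
    """Coarse depth cue from motion: larger apparent motion -> closer (an assumption).
    Inputs are 8-bit grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, cur_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    hb, wb = h // BLOCK, w // BLOCK
    blocks = mag[:hb * BLOCK, :wb * BLOCK].reshape(hb, BLOCK, wb, BLOCK).mean(axis=(1, 3))
    return cv2.normalize(blocks, None, 0.0, 1.0, cv2.NORM_MINMAX)

def block_depth_from_perspective(h_blocks, w_blocks):
    """Geometric-perspective cue: image bottom assumed near, top assumed far."""
    return np.tile(np.linspace(1.0, 0.0, h_blocks)[:, None], (1, w_blocks))

def fused_block_depth(prev_gray, cur_gray, alpha=0.5):
    """Blend the two block-level cues, upsample, and smooth with a bilateral
    filter to reduce block artifacts while keeping depth discontinuities."""
    d_motion = block_depth_from_motion(prev_gray, cur_gray)
    d_persp = block_depth_from_perspective(*d_motion.shape)
    fused = (alpha * d_motion + (1 - alpha) * d_persp).astype(np.float32)
    up = cv2.resize(fused, (cur_gray.shape[1], cur_gray.shape[0]),
                    interpolation=cv2.INTER_NEAREST)
    return cv2.bilateralFilter(up, d=9, sigmaColor=0.1, sigmaSpace=7)
```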


Symposium on VLSI Circuits | 2008

An H.264/AVC scalable extension and high profile HDTV 1080p encoder chip

Yi-Hau Chen; Tzu-Der Chuang; Yu-Jen Chen; Chung-Te Li; Chia-Jung Hsu; Shao-Yi Chien; Liang-Gee Chen

The first single-chip H.264/AVC HDTV 1080p encoder for the scalable extension (SVC) with high profile is implemented on a 16.76 mm² die in a 90 nm process. It dissipates 349/439 mW at 120/166 MHz for high profile and SVC encoding. The proposed frame-parallel architecture halves external memory bandwidth and operating frequency. Moreover, the prediction architecture with inter-layer prediction tools is applied to further save 70% of external memory bandwidth and 50% of internal memory access.


International Solid-State Circuits Conference | 2008

iVisual: An Intelligent Visual Sensor SoC with 2790fps CMOS Image Sensor and 205GOPS/W Vision Processor

Chih-Chi Cheng; Chia-Hua Lin; Chung-Te Li; Samuel C. Chang; Chia-Jung Hsu; Liang-Gee Chen

Visual sensors combined with video analysis algorithms can enhance applications in surveillance, healthcare, intelligent vehicle control, human-machine interfaces, and more. Hardware solutions exist for video analysis. Analog on-sensor processing solutions feature image sensor integration; however, the precision loss of analog signal processing prevents those solutions from realizing complex algorithms, and they lack flexibility. Vision processors reach high GOPS numbers by combining a processor array for parallel operations with a decision processor for the remaining operations. Converting parallel data in the processor array to scalar data in the decision processor creates a throughput bottleneck, and parallel memory accesses lead to high power consumption. Privacy is a critical issue in deploying visual sensors because of the danger of revealing video data from image sensors or processors. These issues exist in the above solutions because inputting or outputting video data is inevitable.


International Conference on Consumer Electronics | 2011

A real-time 1080p 2D-to-3D video conversion system

Sung-Fang Tsai; Chao-Chung Cheng; Chung-Te Li; Liang-Gee Chen

In this paper, we demonstrate a 2D-to-3D video conversion system capable of real-time 1920×1080p conversion. The proposed system generates 3D depth information by fusing cues from an edge-feature-based global scene depth gradient and texture-based local depth refinement. By combining the global depth gradient and the local depth refinement, the generated 3D images have comfortable and vivid quality, and the algorithm has very low computational complexity. The software runs on a system with a multi-core CPU and a GPU. To optimize performance, we use several techniques, including a unified streaming dataflow, multi-thread schedule synchronization, and GPU acceleration for depth image-based rendering (DIBR). With the proposed method, real-time 1920×1080p 2D-to-3D video conversion running at 30 fps is achieved.
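
The DIBR stage mentioned above can be sketched as a horizontal pixel shift driven by depth, with a z-buffer for occlusions and naive hole filling. This is only an illustrative CPU sketch; the paper's unified streaming dataflow, multi-thread scheduling, and GPU acceleration are not modeled, and max_disp is an assumed parameter.

```python
import numpy as np

def dibr_shift_view(image, depth, max_disp=24, sign=+1):
    """Minimal depth-image-based rendering (DIBR) sketch: shift each pixel
    horizontally by a disparity proportional to its depth (depth in [0, 1],
    larger = closer), resolve occlusions with a z-buffer, and fill remaining
    holes by copying the nearest rendered pixel to the left."""
    h, w = depth.shape
    disp = np.round(depth * max_disp).astype(np.int32)
    out = np.zeros_like(image)
    zbuf = np.full((h, w), -1, dtype=np.int32)
    for y in range(h):
        for x in range(w):
            nx = x + sign * disp[y, x]
            if 0 <= nx < w and disp[y, x] > zbuf[y, nx]:
                out[y, nx] = image[y, x]        # closer pixels win
                zbuf[y, nx] = disp[y, x]
        last = image[y, 0]                      # naive hole filling
        for x in range(w):
            if zbuf[y, x] >= 0:
                last = out[y, x]
            else:
                out[y, x] = last
    return out

# usage: left  = dibr_shift_view(frame, depth, sign=+1)
#        right = dibr_shift_view(frame, depth, sign=-1)
```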


International Conference on Consumer Electronics | 2010

A 2D-to-3D conversion system using edge information

Chao-Chung Cheng; Chung-Te Li; Liang-Gee Chen

Three-dimensional (3D) displays provide a dramatic improvement in visual quality over 2D displays. However, existing 2D content does not record depth information, so 2D-to-3D conversion is necessary. This paper presents an automatic system that converts 2D videos to 3D videos. The proposed system groups blocks into regions using edge information. A prior hypothesis of the depth gradient is used to assign depth to the regions. A bilateral filter is then applied to diminish block artifacts and to generate a comfortable depth map for 3D visualization.
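
A minimal sketch of grouping by edge information, assuming OpenCV: regions are taken as connected components of the non-edge pixels of a Canny edge map, and each region receives one depth value from a bottom-near gradient prior. The paper works on blocks and refines the result with a bilateral filter; these simplifications, and the Canny thresholds, are assumptions for illustration.

```python
import cv2
import numpy as np

def edge_based_region_depth(gray):
    """Split an 8-bit grayscale frame into edge-bounded regions and assign each
    region a single depth from a top-far / bottom-near gradient hypothesis."""
    edges = cv2.Canny(gray, 50, 150)                 # edge map
    non_edge = (edges == 0).astype(np.uint8)         # regions are edge-free areas
    n_labels, labels = cv2.connectedComponents(non_edge)
    h, w = gray.shape
    row_prior = np.linspace(1.0, 0.0, h)[:, None].repeat(w, axis=1)
    depth = np.zeros((h, w), dtype=np.float32)
    for lbl in range(1, n_labels):                   # label 0 marks the edge pixels
        mask = labels == lbl
        depth[mask] = row_prior[mask].mean()         # one depth value per region
    depth[labels == 0] = row_prior[labels == 0]      # edge pixels fall back to the prior
    return depth
```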


Design Automation Conference | 2008

iVisual: an intelligent visual sensor SoC with 2790fps CMOS image sensor and 205GOPS/W vision processor

Chih-Chi Cheng; Chia-Hua Lin; Chung-Te Li; Samuel C. Chang; Liang-Gee Chen

iVisual, an intelligent visual sensor SoC integrating a 2790fps CMOS image sensor and a 76.8 GOPS, 374 mW vision processor, is implemented on a 7.5 mm × 9.4 mm die in a 0.18 µm CIS process. The light-in, answer-out SoC architecture avoids the privacy problems of intelligent visual sensors. The feature processor and the inter-processor synchronization scheme together increase average throughput by 51%. A power efficiency of 205 GOPS/W and an area efficiency of 1.16 GOPS/mm² are achieved.


International Symposium on Circuits and Systems | 2010

Architecture design of stereo matching using belief propagation

Chao-Chung Cheng; Chung-Te Li; Chia-Kai Liang; Yen-Chieh Lai; Liang-Gee Chen

We propose a new architecture for stereo matching using belief propagation. The architecture combines our fast, fully parallel processing element (PE) and a memory-efficient tile-based BP (TBP) algorithm. At the architectural level, we develop several novel techniques, including a three-stage pipeline, a message forwarding scheme, and a boundary message reuse scheme, which greatly reduce the required bandwidth and power consumption without sacrificing performance. Simulation shows that the architecture can generate HDTV 720p results at 30 fps when operating at 227 MHz. The high-quality depth maps enable real-time depth image-based rendering and many other important applications in the 3D TV industry.
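
For reference, a minimal min-sum loopy belief propagation sketch for stereo is shown below in Python/NumPy. It implements only the basic 4-connected message passing with a truncated-linear smoothness term; the tile-based scheduling, message forwarding, and boundary message reuse that define the paper's architecture are not modeled, and the iteration count and smoothness parameters are assumed values.

```python
import numpy as np

def bp_stereo(cost, n_iter=10, lam=1.0, trunc=2.0):
    """Min-sum loopy BP on a 4-connected grid. `cost` is an (H, W, D) matching-cost
    volume over D disparity hypotheses; returns a per-pixel disparity map."""
    H, W, D = cost.shape
    # m[k][y, x] = message received at node (y, x) that traveled in direction k,
    # e.g. m['up'] was sent by the node below, m['left'] by the node to the right.
    m = {k: np.zeros((H, W, D), np.float32) for k in ('up', 'down', 'left', 'right')}

    def pass_message(belief):
        """Min-sum update with truncated-linear pairwise cost (distance transform)."""
        out = belief.copy()
        for d in range(1, D):
            out[..., d] = np.minimum(out[..., d], out[..., d - 1] + lam)
        for d in range(D - 2, -1, -1):
            out[..., d] = np.minimum(out[..., d], out[..., d + 1] + lam)
        cap = belief.min(axis=-1, keepdims=True) + trunc * lam
        out = np.minimum(out, cap)
        return out - out.mean(axis=-1, keepdims=True)   # normalize for stability

    for _ in range(n_iter):
        total = cost + m['up'] + m['down'] + m['left'] + m['right']
        # exclude the message coming from the neighbor we are about to send to
        new_up    = pass_message(total - m['down'])
        new_down  = pass_message(total - m['up'])
        new_left  = pass_message(total - m['right'])
        new_right = pass_message(total - m['left'])
        # shift so each node receives the messages sent by its neighbors
        m['up'][:-1]      = new_up[1:]
        m['down'][1:]     = new_down[:-1]
        m['left'][:, :-1] = new_left[:, 1:]
        m['right'][:, 1:] = new_right[:, :-1]

    belief = cost + m['up'] + m['down'] + m['left'] + m['right']
    return belief.argmin(axis=-1)   # disparity with the lowest energy per pixel
```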


Proceedings of SPIE | 2009

Hybrid depth cueing for 2D-to-3D conversion system

Chao-Chung Cheng; Chung-Te Li; Yi-Min Tsai; Liang-Gee Chen

Three-dimensional (3D) displays provide a dramatic improvement in visual quality over 2D displays. The conversion of existing 2D videos to 3D videos is necessary for multimedia applications. This paper presents a robust system to convert 2D videos to 3D videos. The main concepts are to extract depth information from the motion parallax of moving pictures and to extract depth information from geometrical perspective in non-moving scenes. In the first part, depth-induced motion information is reconstructed by motion-vector-to-disparity mapping. By warping consecutive video frames to a view angle parallel with the current frame, the frame with a suitable baseline is selected to generate depth from motion parallax information. However, video does not carry depth-induced motion information in every case. For scenes without motion parallax, depth from geometrical perspective is applied to generate a scene depth map, which is assigned according to the scene mode and the analyzed line structure in the video. Combining these two depth cues enhances the stereo effect and provides a compelling depth map. The depth map is then used to render the multi-view video for 3D display.
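
The motion-vector-to-disparity mapping can be sketched very simply: treat motion magnitude as a parallax proxy and rescale it into a disparity range. This omits the paper's frame warping and baseline selection; the linear rescaling and max_disp are assumptions for illustration.

```python
import numpy as np

def motion_to_disparity(mv_x, mv_y, max_disp=24):
    """Map per-pixel (or per-block) motion vector components to a disparity map:
    larger motion magnitude is assumed to mean a closer object, rescaled linearly
    into [0, max_disp] pixels of disparity."""
    mag = np.sqrt(mv_x ** 2 + mv_y ** 2)
    lo, hi = mag.min(), mag.max()
    if hi - lo < 1e-6:                      # static scene: no motion-parallax cue
        return np.zeros_like(mag)
    return (mag - lo) / (hi - lo) * max_disp
```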


IEEE Transactions on Circuits and Systems for Video Technology | 2013

Brain-Inspired Framework for Fusion of Multiple Depth Cues

Chung-Te Li; Yen-Chieh Lai; Chien Wu; Sung-Fang Tsai; Tung-Chien Chen; Shao-Yi Chien; Liang-Gee Chen

2-D-to-3-D conversion is an important step for obtaining 3-D videos, as a variety of monocular depth cues have been explored to generate 3-D videos from 2-D videos. As in a human brain, a fusion of these monocular depth cues can regenerate 3-D data from 2-D data. By mimicking how our brains generate depth perception, we propose a reliability-based fusion of multiple depth cues for automatic 2-D-to-3-D video conversion. A series of comparisons between the proposed framework and previous methods is also presented, showing that significant improvement is achieved in both subjective and objective experimental results. From the subjective viewpoint, the brain-inspired framework outperforms earlier conversion methods by preserving more reliable depth cues. Moreover, in objective terms, an improvement of 0.70-3.14 dB in the modified peak signal-to-noise ratio and of 0.0059-0.1517 in the disparity distortion model is realized in the perceptual quality of the videos.
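
A minimal sketch of reliability-weighted fusion is shown below: several monocular depth-cue maps are blended with normalized reliability weights. How the paper actually derives the reliabilities, and which specific cues it fuses, is not reproduced here; the cue names and weights in the usage comment are hypothetical.

```python
import numpy as np

def fuse_depth_cues(cue_maps, reliabilities):
    """Blend N monocular depth-cue maps of shape (H, W), weighting each map by a
    scalar reliability score; weights are normalized to sum to one."""
    cues = np.stack(cue_maps, axis=0).astype(np.float32)   # (N, H, W)
    w = np.asarray(reliabilities, dtype=np.float32)        # (N,)
    w = w / w.sum()                                        # normalize weights
    return np.tensordot(w, cues, axes=1)                   # (H, W) fused depth

# usage (hypothetical cue maps in [0, 1]):
# fused = fuse_depth_cues([depth_from_motion, depth_from_edges, depth_from_perspective],
#                         reliabilities=[0.5, 0.3, 0.2])
```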

Collaboration


Dive into Chung-Te Li's collaboration.

Top Co-Authors

Liang-Gee Chen (National Taiwan University)
Chao-Chung Cheng (National Taiwan University)
Ling-Hsiu Huang (National Taiwan University)
Yen-Chieh Lai (National Taiwan University)
Chien Wu (National Taiwan University)
Sung-Fang Tsai (National Taiwan University)
Yi-Min Tsai (National Taiwan University)
Chen-Han Chung (National Taiwan University)
Cheng-Yuan Ko (National Taiwan University)
Pei-Kuei Tsung (National Taiwan University)