Jinwook Go
Yonsei University
Publications
Featured research published by Jinwook Go.
IEEE Transactions on Consumer Electronics | 2000
Jinwook Go; Kwanghoon Sohn; Chulhee Lee
In this paper, we present a color interpolation technique based on artificial neural networks for a single-chip CCD (charge-coupled device) camera with a Bayer color filter array (CFA). Single-chip digital cameras use a color filter array and an interpolation method in order to produce high-quality color images from sparsely sampled images. We apply three-layer feedforward neural networks to interpolate a missing pixel from its surrounding pixels and compare the proposed method with conventional interpolation methods such as bilinear interpolation and cubic spline interpolation. Experiments show that the proposed interpolation algorithm based on neural networks provides better performance than the conventional interpolation algorithms.
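The interpolation idea lends itself to a compact illustration. The sketch below trains a small one-hidden-layer network to predict a missing green sample from its four green neighbours on synthetic data; the window size, hidden width, training schedule, and the synthetic patches are assumptions made for illustration, not details from the paper.

```python
import numpy as np

# Minimal sketch: a three-layer (one hidden layer) feedforward network that
# predicts a missing green sample at a red/blue Bayer location from its four
# green neighbours.  Window size, hidden width, and training details are
# illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)

def make_samples(n):
    """Synthetic smooth patches: 4 neighbours plus the missing centre value."""
    base = rng.random((n, 1))
    neigh = base + 0.05 * rng.standard_normal((n, 4))    # 4 green neighbours
    center = base[:, 0] + 0.05 * rng.standard_normal(n)  # missing green value
    return neigh, center

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# network: 4 inputs -> 8 hidden (sigmoid) -> 1 linear output
W1 = rng.standard_normal((4, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)

X, y = make_samples(2000)
lr = 0.05
for epoch in range(200):
    h = sigmoid(X @ W1 + b1)           # hidden activations
    pred = (h @ W2 + b2).ravel()       # interpolated green value
    err = pred - y                     # squared-error gradient at the output
    gW2 = h.T @ err[:, None] / len(y)  # backpropagate
    gb2 = err.mean(keepdims=True)
    dh = err[:, None] @ W2.T * h * (1 - h)
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# report alongside plain bilinear interpolation (mean of the 4 neighbours)
Xt, yt = make_samples(500)
nn_pred = (sigmoid(Xt @ W1 + b1) @ W2 + b2).ravel()
bilinear = Xt.mean(axis=1)
print("NN MSE      :", np.mean((nn_pred - yt) ** 2))
print("bilinear MSE:", np.mean((bilinear - yt) ** 2))
```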
IEEE Transactions on Speech and Audio Processing | 2003
Chulhee Lee; Donghoon Hyun; Euisun Choi; Jinwook Go; Chungyong Lee
We propose a method to minimize the loss of information during the feature extraction stage in speech recognition by optimizing the parameters of the mel-cepstrum transformation, a transform widely used in speech recognition. Typically, the mel-cepstrum is obtained using critical-band filters whose characteristics play an important role in converting a speech signal into a sequence of vectors. First, we analyze the performance of the mel-cepstrum while changing the parameters of the filters, such as shape, center frequency, and bandwidth. Then we propose an algorithm to optimize the filter parameters using the simplex method. Experiments with Korean digit words show that the recognition rate improves by about 4-7%.
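The optimization loop described above can be sketched with an off-the-shelf simplex (Nelder-Mead) routine. The single triangular-filter parameterization and the surrogate least-squares objective below are assumptions made for illustration; the paper tunes the full filterbank against the actual recognition rate.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of simplex-based tuning of one triangular critical-band filter.
# The (centre frequency, bandwidth) parameterisation and the toy objective
# are illustrative assumptions; the paper optimises against recognition rate.
freqs = np.linspace(0, 4000, 512)                  # analysis frequencies (Hz)

def tri_filter(center, bandwidth):
    """Triangular filter response evaluated over `freqs`."""
    resp = 1.0 - np.abs(freqs - center) / (bandwidth / 2.0)
    return np.clip(resp, 0.0, None)

# toy target: suppose the informative band is around 1 kHz
target = tri_filter(1000.0, 300.0)

def objective(params):
    center, bandwidth = params
    if bandwidth <= 10.0:                          # keep the simplex in a sane region
        return 1e6
    return np.sum((tri_filter(center, bandwidth) - target) ** 2)

# Nelder-Mead is the derivative-free simplex method the abstract refers to
res = minimize(objective, x0=[800.0, 500.0], method="Nelder-Mead")
print("optimised centre / bandwidth:", res.x)
```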
IEEE Transactions on Geoscience and Remote Sensing | 2001
Jinwook Go; Gunhee Han; Hagbae Kim; Chulhee Lee
The authors propose a new learning algorithm for multilayer feedforward neural networks that converges faster and achieves better classification accuracy than the conventional backpropagation learning algorithm for pattern classification. In conventional backpropagation, weights are adjusted to reduce an error or cost function that reflects the differences between the computed and the desired outputs. In the proposed learning algorithm, the authors view each term of the output layer as a function of the weights and adjust the weights directly so that the output neurons produce the desired outputs. Experiments with remotely sensed data show that the proposed algorithm consistently performs better than conventional backpropagation in terms of classification accuracy and convergence speed.
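One possible reading of "adjust the weights directly so that the output neurons produce the desired outputs" is sketched below: for each sample, the output-layer weights receive the minimum-norm correction that makes every output neuron hit its target exactly. This is an illustrative interpretation, not the paper's exact update rule; the layer sizes and the sigmoid/logit pairing are assumptions.

```python
import numpy as np

# Illustrative interpretation of a direct output-layer adjustment: compute
# the pre-activation each output neuron would need to reach its target, then
# apply the minimum-norm weight correction that achieves it.  Not the exact
# update rule of the paper.
rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
logit = lambda p: np.log(p / (1.0 - p))

n_in, n_hid, n_out = 4, 6, 3
W1 = rng.standard_normal((n_in, n_hid)) * 0.5    # hidden layer (fixed here)
W2 = rng.standard_normal((n_hid, n_out)) * 0.5   # output layer, adjusted directly

def direct_output_update(W1, W2, x, target, eps=0.01):
    """Move each output neuron's weights so its output matches the target."""
    h = sigmoid(x @ W1)                            # hidden activations
    z_desired = logit(np.clip(target, eps, 1 - eps))
    z_current = h @ W2
    # minimum-norm correction: delta_w_i = h * (z*_i - z_i) / ||h||^2
    W2 += np.outer(h, z_desired - z_current) / (h @ h)

# toy usage: one-hot target for a single pattern
x = rng.standard_normal(n_in)
t = np.array([1.0, 0.0, 0.0])
direct_output_update(W1, W2, x, t)
print("outputs after one direct update:", sigmoid(sigmoid(x @ W1) @ W2))
```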
International Geoscience and Remote Sensing Symposium | 2000
Jinwook Go; Chulhee Lee
Recently, a feature extraction method based on the decision boundary has been proposed for neural networks. The method relies on the fact that the vector normal to the decision boundary contains information useful for discriminating between classes. However, the normal vector was previously estimated numerically, resulting in inaccurate estimates and long computation times. The authors propose a new method to calculate the normal vector analytically and derive all the necessary equations for three-layer feedforward neural networks with a sigmoid activation function. Experiments show that the proposed method provides noticeably improved performance.
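The analytic normal vector follows directly from the chain rule. The sketch below computes the gradient, with respect to the input, of the difference between two output activations of a one-hidden-layer sigmoid network, and checks it against the numerical estimate it replaces; the layer sizes and weights are arbitrary assumptions.

```python
import numpy as np

# Sketch: analytic normal to the decision boundary of a three-layer sigmoid
# network.  The normal at x is the input-gradient of the difference between
# the two competing output activations, obtained in closed form by the chain
# rule rather than by numerical differencing.
rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

D, H, K = 5, 8, 3                      # input dim, hidden units, classes
W1, b1 = rng.standard_normal((H, D)), rng.standard_normal(H)
W2, b2 = rng.standard_normal((K, H)), rng.standard_normal(K)

def boundary_normal(x, i, j):
    """Gradient of o_i(x) - o_j(x) with respect to x (analytic)."""
    a = sigmoid(W1 @ x + b1)                       # hidden activations
    o = sigmoid(W2 @ a + b2)                       # output activations
    def grad_out(k):                               # d o_k / d x via chain rule
        return o[k] * (1 - o[k]) * ((W2[k] * a * (1 - a)) @ W1)
    return grad_out(i) - grad_out(j)

# check against a central-difference estimate (the slow numerical approach)
x = rng.standard_normal(D)
analytic = boundary_normal(x, 0, 1)
eps = 1e-6
def diff_out(v):
    o = sigmoid(W2 @ sigmoid(W1 @ v + b1) + b2)
    return o[0] - o[1]
numeric = np.array([(diff_out(x + eps * e) - diff_out(x - eps * e)) / (2 * eps)
                    for e in np.eye(D)])
print("max |analytic - numeric| =", np.abs(analytic - numeric).max())
```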
Systems, Man and Cybernetics | 2000
Jinwook Go; Chulhee Lee
Although neural networks have been successfully applied to the recognition of unconstrained handwritten characters, there have been few efficient feature extraction algorithms, resulting in inefficient neural networks. We apply a decision boundary feature extraction algorithm to neural networks for the recognition of handwritten digits and thereby reduce the computational cost and complexity of the networks. Experiments show that the proposed feature extraction algorithm can reduce the number of features significantly without sacrificing performance.
International Symposium on Neural Networks | 1999
Jinwook Go; Chulhee Lee
We investigate the weight distribution of neural networks in order to understand and improve their training process. Training neural networks generally takes a long time, and whenever a new problem is presented, the networks must be trained again from scratch, with no benefit from previous training. The training process can be viewed as finding a solution point in the weight space. In this paper, we investigate the distribution of such solution points in the weight space in order to understand and speed up training.
International Conference on Consumer Electronics | 2000
Jinwook Go; Chulhee Lee
We propose a color interpolation method for single-chip digital cameras using artificial neural networks. Single-chip digital cameras use a color filter array and an interpolation method in order to produce high-quality color images from sparsely sampled images. Experiments show that the proposed interpolation algorithm based on neural networks provides better performance than conventional interpolation algorithms.
International Conference on Intelligent Computing | 2006
Chulhee Lee; Jinwook Go; Byungjoon Baek; Hyunsoo Choi
In this paper, we view equalization as a multi-class classification problem and use neural networks to detect binary signals in the presence of noise and interference. In particular, we compare the performance of a recently published training algorithm, the multi-gradient algorithm, with that of conventional backpropagation. Then, we apply a feature extraction method to obtain more efficient neural networks. Experiments show that neural network equalizers that view equalization as a multi-class problem provide significantly improved performance compared to the conventional LMS algorithm, while the decision boundary feature extraction method significantly reduces the complexity of the network.
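The classification view of equalization is easy to sketch: a window of received samples is the input pattern and the transmitted symbol is the class label. In the sketch below a single-layer logistic classifier stands in for the multilayer network, and the channel taps, noise level, and window length are assumptions; the paper trains a multilayer network with the multi-gradient algorithm.

```python
import numpy as np

# Sketch: equalization as classification.  A window of received samples is
# the input pattern; the transmitted binary symbol is the class label.  The
# channel, noise level, window length and the logistic classifier are
# illustrative assumptions, not the setup used in the paper.
rng = np.random.default_rng(3)
N, taps, window = 5000, np.array([0.3, 0.9, 0.3]), 5

s = rng.choice([-1.0, 1.0], size=N)                       # transmitted symbols
r = np.convolve(s, taps, mode="same") + 0.2 * rng.standard_normal(N)

half = window // 2
X = np.array([r[n - half: n + half + 1] for n in range(half, N - half)])
y = s[half: N - half]

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w, b = np.zeros(window), 0.0
for _ in range(50):                                       # gradient steps
    p = sigmoid(X @ w + b)                                # P(symbol = +1)
    g = p - (y + 1) / 2                                   # cross-entropy gradient
    w -= 0.1 * X.T @ g / len(y)
    b -= 0.1 * g.mean()
classifier_ber = np.mean(np.sign(X @ w + b) != y)

# baseline: LMS-trained linear equalizer with a hard decision
w_lms = np.zeros(window)
for xi, yi in zip(X, y):
    w_lms += 0.01 * (yi - xi @ w_lms) * xi
lms_ber = np.mean(np.sign(X @ w_lms) != y)

print(f"classifier BER: {classifier_ber:.4f}   LMS BER: {lms_ber:.4f}")
```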
International Symposium on Neural Networks | 2004
Chulhee Lee; Jinwook Go; Byungjoon Baek
In this paper, we view equalization as a multi-class classification problem and use neural networks for classification. In particular, we use a recently published training algorithm, the multi-gradient algorithm, to train the neural networks. Then, we apply a feature extraction method to obtain more efficient neural networks. Experiments show that neural network equalizers that view equalization as a multi-class problem provide significantly improved performance compared to neural network equalizers trained by the conventional LMS algorithm, while the feature extraction method significantly reduces the complexity of the neural network equalizers.
Lecture Notes in Computer Science | 2004
Jinwook Go; Byungjoon Baek; Chulhee Lee
In this paper, we investigate and analyze the weight distribution of two-layer feedforward neural networks in order to understand and improve their time-consuming training process. Training generally takes a long time, and when a new problem is presented, the networks must be trained again with no benefit from previous training. To address this problem, we view the training process as finding a solution point in a weight space and analyze the distribution of such solution points. We then propose a weight initialization method that uses information about the distribution of the solution points. Experimental results show that the proposed initialization converges faster than the conventional method, which initializes weights with a random generator.
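The initialization idea can be sketched as follows: record the solution weight points found on earlier runs, fit a simple Gaussian to them, and start a new run from a draw of that distribution instead of from a generic random initializer. A single logistic unit is used below for brevity; the task, sizes, and the Gaussian model are assumptions made for illustration.

```python
import numpy as np

# Sketch: initialise from the empirical distribution of previously found
# solution weight points rather than from a generic random generator.
# The single logistic unit, task, and Gaussian fit are illustrative only.
rng = np.random.default_rng(4)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
true_w = np.array([2.0, -3.0, 1.0])                      # underlying rule

def make_task(n=500):
    X = rng.standard_normal((n, 3))
    y = (X @ true_w + 0.3 * rng.standard_normal(n) > 0).astype(float)
    return X, y

def train(w0, X, y, lr=0.5, tol=1e-3, max_epochs=2000):
    """Gradient descent until the gradient norm falls below tol."""
    w = w0.copy()
    for epoch in range(max_epochs):
        g = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * g
        if np.linalg.norm(g) < tol:
            break
    return w, epoch

# phase 1: solve several related problems from random starts, keep the solutions
solutions = [train(rng.standard_normal(3), *make_task())[0] for _ in range(10)]
mu, sd = np.mean(solutions, axis=0), np.std(solutions, axis=0)

# phase 2: initialise a new run from the solution distribution vs. at random
X, y = make_task()
_, ep_random = train(rng.standard_normal(3), X, y)
_, ep_informed = train(rng.normal(mu, sd + 1e-6), X, y)
print(f"epochs (random init): {ep_random}   epochs (informed init): {ep_informed}")
```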