Che Jung Chang
National Cheng Kung University
Publications
Featured research published by Che Jung Chang.
Computers & Industrial Engineering | 2009
Der Chiang Li; Chun-Wu Yeh; Che Jung Chang
Global competition has shortened product life cycles and made industrial demand trends difficult to forecast. The ability to adapt to this dynamic environment is therefore one of the key factors that enables enterprises to survive and succeed. However, the available data, such as demand and sales figures, are often limited in the early periods of a product life cycle, making traditional forecasting techniques unreliable for decision making. Although various forecasting methods exist, their utility is often limited by insufficient data and indefinite data distributions. The grey prediction model is a potential approach for small-sample forecasting, although it is often hard to adapt to sample characteristics in practice, owing to its fixed modeling procedure. This research uses the trend and potency tracking method (TPTM) to analyze sample behavior, extract the concealed information from the data, and utilize the trend and potency value to construct an adaptive grey prediction model, AGM(1,1), based on grey theory. The experimental results show that the proposed model can improve prediction accuracy for small samples.
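The AGM(1,1) itself is defined by the TPTM-derived adjustments described in the paper and is not reproduced here; for orientation, the following is a minimal sketch of the conventional GM(1,1) grey model it builds on, written in Python. The function name gm11_forecast, the equal-weight background value, and the example data are assumptions for illustration.

```python
import numpy as np

def gm11_forecast(x0, horizon=1):
    """Minimal conventional GM(1,1) grey model.

    x0      : 1-D array of positive observations (small sample, n >= 4)
    horizon : number of steps to forecast beyond the sample
    Returns fitted values for the sample plus the requested forecasts.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                      # accumulated generating operation (AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])           # conventional equal-weight background values
    # Least-squares estimate of the development coefficient a and grey input b
    B = np.column_stack((-z1, np.ones(n - 1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time response function of the whitened equation, then inverse AGO
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.empty(n + horizon)
    x0_hat[0] = x0[0]
    x0_hat[1:] = np.diff(x1_hat)
    return x0_hat

# Example: fit on a short demand-like sequence and forecast two steps ahead
print(gm11_forecast([2.87, 3.28, 3.34, 3.62, 3.87], horizon=2))
```

Per the abstract, the adaptive model departs from this fixed modeling procedure by using the trend and potency value; those details are in the paper.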
Applied Mathematics and Computation | 2015
Che Jung Chang; Der Chiang Li; Yi Hsiang Huang; Chien Chih Chen
Efficiently controlling the early stages of a manufacturing system is an important issue for enterprises. However, the number of samples collected at this point is usually limited due to time and cost issues, making it difficult to understand the real situation in the production process. One way to solve this problem is to use a small-data-set forecasting tool, such as the various gray approaches. The gray model is a popular forecasting technique for use with small data sets, and while it has been successfully adopted in various fields, it can still be further improved. This paper thus uses a box plot to analyze data features and proposes a new formula for the background values in the gray model to improve forecasting accuracy. The new forecasting model is called BGM(1,1). In the experimental study, one public dataset and one real case are used to confirm the effectiveness of the proposed model, and the experimental results show that it is an appropriate tool for small-data-set forecasting.
Highlights: The small-data-set forecasting problem is difficult in most manufacturing environments. A forecasting tool that uses limited data is more effective and efficient for engineers and managers. The proposed method, based on the box plot, can analyze data features to improve forecasting accuracy with small data sets. The proposed method is considered an appropriate procedure in general for forecasting manufacturing outputs based on small samples.
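The paper's box-plot-derived background-value formula is not given in the abstract; purely as an illustration of using a box plot to analyze data features, the sketch below computes quartiles, IQR, and Tukey whisker fences for a small sample with NumPy. The function name, the fence rule, and the data are assumptions.

```python
import numpy as np

def boxplot_features(x):
    """Summarize a small sample with the usual box-plot statistics."""
    x = np.asarray(x, dtype=float)
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    lower_fence = q1 - 1.5 * iqr            # conventional Tukey fences
    upper_fence = q3 + 1.5 * iqr
    return {"q1": q1, "median": q2, "q3": q3, "iqr": iqr,
            "lower_fence": lower_fence, "upper_fence": upper_fence}

print(boxplot_features([21.3, 22.1, 22.8, 23.5, 25.0, 26.2]))
```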
International Journal of Production Research | 2012
Der Chiang Li; Chien Chih Chen; Che Jung Chang; Wen Chih Chen
Product life cycles are becoming shorter, especially in the optoelectronics industry. Shortening production cycle times using knowledge obtained in pilot runs, where sample sizes are usually very small, is thus becoming a core competitive ability for firms. Machine learning algorithms are widely applied to this task, but the number of training samples is always a key factor in determining their knowledge acquisition capability. Therefore, this study, based on box-and-whisker plots, systematically generates more training samples to help gain more knowledge in the early stages of manufacturing systems. A case study of a TFT-LCD manufacturer, in which a new product was phased in during 2008, is taken as an example. The experimental results show that it is possible to rapidly develop a production model that provides more information and precise predictions with the limited data acquired from pilot runs.
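The abstract does not spell out the sample-generation rule; the sketch below is a generic illustration (not the paper's procedure) that draws extra training samples uniformly within each attribute's whisker range. All names and data are hypothetical, and sampling attributes independently ignores their correlations.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_virtual_samples(X, n_virtual):
    """Draw extra samples attribute-by-attribute within the box-plot whisker range.

    X         : (n_samples, n_features) array of small training data
    n_virtual : number of artificial samples to generate
    Note: sampling each attribute independently is only meant to illustrate
    range-based sample expansion, not the paper's method.
    """
    X = np.asarray(X, dtype=float)
    lows, highs = [], []
    for col in X.T:
        q1, q3 = np.percentile(col, [25, 75])
        iqr = q3 - q1
        lows.append(max(col.min(), q1 - 1.5 * iqr))
        highs.append(min(col.max(), q3 + 1.5 * iqr))
    return rng.uniform(lows, highs, size=(n_virtual, X.shape[1]))

X_pilot = np.array([[1.2, 30.1], [1.4, 29.5], [1.3, 31.0], [1.5, 30.4]])
print(generate_virtual_samples(X_pilot, n_virtual=5))
```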
Neurocomputing | 2014
Che Jung Chang; Der Chiang Li; Wen Li Dai; Chien Chih Chen
In the current highly competitive manufacturing environment, it is important to have effective and efficient control of manufacturing systems to obtain and maintain competitive advantages. However, developing appropriate forecasting models for such systems can be challenging in their early stages, as the sample sizes are usually very small, and thus there is limited data available for analysis. The technique of virtual sample generation is one way to address this issue, but this method is usually not directly applied to time series data. This research thus develops a Latent Information function to analyze data features and extract hidden information, in order to learn from small data sets while taking timing factors into account. The experimental results obtained using the Synthetic Control Chart Time Series and aluminum price datasets show that the proposed method can significantly improve forecasting accuracy, and thus is considered an appropriate procedure to forecast manufacturing outputs based on small samples.
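The Latent Information function is not specified in the abstract; purely as a stand-in illustration of generating virtual samples that respect time order, the sketch below inserts a jittered midpoint between consecutive observations of a short series. This is not the authors' LI function, and all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def expand_short_series(y, jitter=0.25):
    """Insert one virtual point between each pair of consecutive observations.

    Each virtual point is the local midpoint plus uniform noise bounded by a
    fraction of the local gap, so the time ordering of the series is preserved.
    Illustrative expansion scheme only, not the paper's LI function.
    """
    y = np.asarray(y, dtype=float)
    expanded = [y[0]]
    for prev, curr in zip(y[:-1], y[1:]):
        gap = curr - prev
        virtual = prev + 0.5 * gap + rng.uniform(-jitter, jitter) * abs(gap)
        expanded.extend([virtual, curr])
    return np.array(expanded)

print(expand_short_series([10.2, 11.0, 10.6, 11.8, 12.1]))
```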
Mathematical Problems in Engineering | 2013
Che Jung Chang; Der Chiang Li; Wen Li Dai; Chien Chih Chen
The wafer-level packaging process is an important technology used in semiconductor manufacturing, and how to effectively control this manufacturing system is thus an important issue for packaging firms. One way to aid in this process is to use a forecasting tool. However, the number of observations collected in the early stages of this process is usually too few to use with traditional forecasting techniques, and thus inaccurate results are obtained. One potential solution to this problem is the use of grey system theory, with its feature of small dataset modeling. This study thus uses the AGM(1,1) grey model to solve the problem of forecasting in the pilot run stage of the packaging process. The experimental results show that the grey approach is an appropriate and effective forecasting tool for use with small datasets and that it can be applied to improve the wafer-level packaging process.
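As a small aside on how accuracy is typically reported for such pilot-run forecasts, the snippet below computes the mean absolute percentage error (MAPE) between held-out observations and model forecasts; the values are made up for illustration.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Hypothetical pilot-run yields versus model forecasts for the held-out periods
actual_yield = [95.4, 96.0, 96.3]
forecast_yield = [95.1, 95.8, 96.5]
print(f"MAPE = {mape(actual_yield, forecast_yield):.2f}%")
```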
Journal of the Operational Research Society | 2015
Che Jung Chang; Wen-Li Dai; Chien Chih Chen
Small-data-set forecasting problems are a critical issue in various fields, with the early stage of a manufacturing system being a good example. Manufacturers require sufficient knowledge to minimize overall production costs, but this is difficult to achieve due to the limited number of samples available at such times. This research was thus conducted to develop a modelling procedure to assist managers or decision makers in acquiring stable prediction results from small data sets. The proposed method is a two-stage procedure. First, we assessed several single models, using grey incidence analysis to determine whether they reflect the tendency of the real sequence, and then evaluated their forecasting stability based on the relative ratio of error range. Second, a grey silhouette coefficient was developed to create an applicable hybrid forecasting model for small samples. Two real cases were analysed to confirm the effectiveness and practical value of the proposed method. The empirical results showed that the multi-model procedure can minimize forecasting errors and improve forecasting results with limited data. Consequently, the proposed procedure is considered a feasible tool for small-data-set forecasting problems.
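The grey silhouette coefficient and the relative ratio of error range are specific to this paper, but grey incidence (grey relational) analysis is a standard tool. The sketch below computes Deng's grey relational grade between an actual sequence and two candidate model outputs; the mean normalization, the per-pair handling of the min/max terms, and the distinguishing coefficient zeta = 0.5 are common conventions assumed here, and the data are invented.

```python
import numpy as np

def grey_relational_grade(reference, candidate, zeta=0.5):
    """Deng's grey relational grade between two sequences of equal length.

    Both sequences are mean-normalized first; zeta is the distinguishing
    coefficient, conventionally 0.5. Min/max terms are taken per pair here,
    a simplification of the multi-candidate formulation.
    """
    x0 = np.asarray(reference, dtype=float); x0 = x0 / x0.mean()
    xi = np.asarray(candidate, dtype=float); xi = xi / xi.mean()
    delta = np.abs(x0 - xi)
    d_min, d_max = delta.min(), delta.max()
    gamma = (d_min + zeta * d_max) / (delta + zeta * d_max)
    return gamma.mean()

actual = [120, 125, 131, 138, 142]
model_a = [118, 126, 130, 140, 141]
model_b = [110, 135, 125, 150, 130]
print(grey_relational_grade(actual, model_a), grey_relational_grade(actual, model_b))
```

A higher grade indicates that the candidate model tracks the tendency of the real sequence more closely.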
Computers & Industrial Engineering | 2014
Che Jung Chang; Der Chiang Li; Chien Chih Chen; Chia Sheng Chen
In the early stages of manufacturing systems, it is often difficult to obtain sufficient data to make accurate forecasts. Grey system theory is one approach to deal with this issue, as it uses fairly small data sets to construct forecasting models. Among published grey models, the current non-equigap grey models can deal with data having unequal gaps and have been applied in various fields. However, these models usually use fixed modeling procedures that do not consider differences in data growth trends. This paper utilizes the trend and potency tracking method to determine the parameter α of the background value and build an adaptive non-equigap grey model to improve forecasting performance. The experimental results indicate that the proposed method, by considering data occurrence properties, can obtain better forecasting results.
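For reference, in the equigap GM(1,1) the background value is typically an α-weighted mean of consecutive accumulated values, z(k) = α·x¹(k) + (1 − α)·x¹(k − 1), with α = 0.5 as the conventional default. The sketch below simply exposes α as a parameter; the paper's non-equigap construction and its TPTM-based choice of α are not reproduced, and the function name and data are assumptions.

```python
import numpy as np

def gm11_with_alpha(x0, alpha=0.5, horizon=1):
    """GM(1,1) whose background-value weight alpha is adjustable.

    z(k) = alpha * x1(k) + (1 - alpha) * x1(k-1); alpha = 0.5 recovers the
    conventional model. Equigap form only, for illustration.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)
    z1 = alpha * x1[1:] + (1.0 - alpha) * x1[:-1]
    B = np.column_stack((-z1, np.ones(n - 1)))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(n + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate(([x0[0]], np.diff(x1_hat)))

series = [4.1, 4.4, 4.9, 5.3, 5.9]
print(gm11_with_alpha(series, alpha=0.4, horizon=2))
```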
Journal of the Operational Research Society | 2016
Che Jung Chang; Liping Yu; Peng Jin
Accurate short-term demand forecasting is critical for developing effective production plans; however, a short forecasting period means that product demand is unstable, making it difficult to track product development trends. Forecasting models generated from historical observations struggle to capture the actual developing data patterns, and their forecasting performance is unfavourable, whereas using only the latest limited data for forecasting can improve management efficiency and maintain an enterprise's competitive advantages. To solve forecasting problems related to small data sets, this study applied an adaptive grey model to forecasting short-term manufacturing demand. Experiments involving monthly demand data for thin film transistor liquid crystal display panels and wafer-level chip-scale packaging process data showed that the proposed grey model produced favourable forecasting results, indicating its appropriateness as a short-term forecasting tool for small data sets.
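The abstract's emphasis is on forecasting from only the latest limited observations. The sketch below shows a rolling-window driver in which each forecast uses just the most recent window of data; the window size, the log-linear stand-in forecaster (used here in place of the paper's adaptive grey model), and the demand values are assumptions.

```python
import numpy as np

def loglinear_forecast(window):
    """Stand-in small-sample forecaster: fit log(y) = a*t + b and extrapolate one step.

    A grey model (e.g., the paper's adaptive grey model) would replace this.
    """
    y = np.asarray(window, dtype=float)
    t = np.arange(len(y))
    a, b = np.polyfit(t, np.log(y), 1)
    return float(np.exp(a * len(y) + b))

def rolling_forecasts(demand, window_size=5):
    """One-step-ahead forecasts, each based only on the latest window of observations."""
    forecasts = []
    for end in range(window_size, len(demand)):
        window = demand[end - window_size:end]
        forecasts.append(loglinear_forecast(window))
    return forecasts

monthly_demand = [310, 325, 340, 332, 355, 368, 372, 390]   # made-up values
print(rolling_forecasts(monthly_demand, window_size=5))
```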
Advanced Engineering Informatics | 2016
Che Jung Chang; Jan-Yan Lin; Meng-Jen Chang
Highlights: Forecasting electricity consumption plays a vital role for policy makers. Short-term predictions using new, limited data are important for managers. The proposed modeling procedure can extract hidden information for knowledge learning. The proposed method is an appropriate tool for forecasting short-term consumption.
Effectively forecasting overall electricity consumption is vital for policy makers in rapidly developing countries, as it provides guidelines for planning electricity systems. However, common forecasting techniques based on large historical data sets are not applicable to these countries because their economic growth is high and unsteady; an accurate forecasting technique using limited samples is therefore crucial. To solve this problem, this study proposes a novel modeling procedure. First, the latent information function is adopted to analyze data features and acquire hidden information from the collected observations. Next, projected sample generation is developed to extend the original data set and improve the forecasting performance of back-propagation neural networks. The effectiveness of the proposed approach is estimated using three cases. The experimental results show that the proposed modeling procedure can provide valuable information for constructing a robust model, which yields precise predictions with limited time series data. The proposed modeling procedure is thus useful for small time series forecasting.
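The projected sample generation step is not detailed in the abstract; as a stand-in only, the sketch below densifies a short consumption series by linear interpolation before fitting a small back-propagation network (scikit-learn's MLPRegressor) on lagged values. The interpolation scheme, network size, and data are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical short annual electricity-consumption series (arbitrary units)
consumption = np.array([145.0, 158.0, 176.0, 199.0, 221.0, 250.0, 284.0])

# Stand-in for projected sample generation: densify the series by interpolation
t = np.arange(len(consumption))
t_dense = np.linspace(0, len(consumption) - 1, 4 * len(consumption))
dense = np.interp(t_dense, t, consumption)

# Build lagged (x_t -> x_{t+1}) training pairs from the densified series
X_train = dense[:-1].reshape(-1, 1)
y_train = dense[1:]

# Small back-propagation network trained on the extended data
bpn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
bpn.fit(X_train, y_train)

# One-step-ahead forecast from the last observed value
print(bpn.predict([[consumption[-1]]]))
```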
International Journal of Production Research | 2013
Der Chiang Li; Wen Ting Huang; Chien Chih Chen; Che Jung Chang
Machine learning algorithms are widely applied to extract useful information, but the sample size is often an important factor in determining their reliability. The key issue that makes small-dataset learning tasks difficult is that the information such datasets contain cannot fully represent the characteristics of the entire population. The principal approach of this study to overcome this problem is to systematically add artificial samples to fill the data gaps; this research employs the mega-trend-diffusion technique to generate virtual samples and extend the data size. In this paper, a real small-dataset learning task from the array process of a thin-film transistor liquid-crystal display (TFT-LCD) panel manufacturer is presented, in which only 20 samples are used to learn the relationship between 15 input and 36 output attributes. The experimental results show that the approach is effective in building robust back-propagation neural network (BPN) and support vector regression (SVR) models. In addition, a sensitivity analysis is implemented with the 20 samples by using SVR to extract the relationship between the 15 factors and the 36 outputs, helping engineers infer process knowledge.
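The mega-trend-diffusion technique widens the observed domain of a small sample before virtual samples are drawn from it. The sketch below transcribes a commonly cited form of the MTD lower and upper bounds; it is an approximation for illustration (consult the original mega-trend-diffusion papers for the exact formulation), and the sample values are invented.

```python
import numpy as np

def mtd_bounds(x):
    """Approximate mega-trend-diffusion domain extension for a small sample.

    The sample domain [min, max] is widened asymmetrically according to the
    skewness of the data around the domain midpoint. Illustrative transcription,
    not the authors' exact code.
    """
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    u_set = (x_min + x_max) / 2.0                 # midpoint of the observed domain
    n_l = np.sum(x < u_set)                       # points below the midpoint
    n_u = np.sum(x >= u_set)                      # points at or above the midpoint
    skew_l = n_l / (n_l + n_u)
    skew_u = n_u / (n_l + n_u)
    var = x.var(ddof=1)
    spread = -2.0 * np.log(1e-20)                 # diffusion magnitude term
    lower = u_set - skew_l * np.sqrt(spread * var / n_l) if n_l else x_min
    upper = u_set + skew_u * np.sqrt(spread * var / n_u) if n_u else x_max
    return min(lower, x_min), max(upper, x_max)

rng = np.random.default_rng(2)
sample = [4.8, 5.1, 5.3, 5.9, 6.4]
lo, hi = mtd_bounds(sample)
virtual = rng.uniform(lo, hi, size=10)            # virtual samples within the widened domain
print(lo, hi, virtual)
```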