Forecasting Stock Time-Series using Data Approximation and Pattern Sequence Similarity
R. H. Vishwanath, S. Leena, K. C. Srikantaiah, K. Shreekrishna Kumar, P. Deepa Shenoy, K. R. Venugopal, S. S. Iyengar, L. M. Patnaik
Vishwanath R H (a), Leena S (a), Srikantaiah K C (a), K Shreekrishna Kumar (b), P Deepa Shenoy (a), Venugopal K R (a), S S Iyengar (c) and L M Patnaik (d)
(a) Department of Computer Science and Engineering, University Visvesvaraya College of Engineering, Bangalore University, Bangalore. Contact: [email protected]
(b) Director, All India Council for Technical Education, SWRO, Bangalore, India
(c) Director and Ryder Professor, Florida International University, USA
(d) Honorary Professor, Indian Institute of Science, Bangalore 560 001, India
Time series analysis is the process of building a model using statistical techniques to represent the characteristics of time series data. Processing and forecasting huge volumes of time series data is a challenging task. This paper presents Approximation and Prediction of Stock Time-series data (APST), a two-step approach to predict the direction of change of stock price indices. First, data approximation is performed using a technique called Multilevel Segment Mean (MSM). In the second phase, prediction is performed on the approximated data using the Euclidean distance and the Nearest-Neighbour technique. The computational cost of the data approximation is O(n * n_i) and that of the prediction task is O(m * |NN|). Thus, in terms of both accuracy and the time required for prediction, the proposed method is more efficient than the existing Label Based Forecasting (LBF) method [1].
Keywords: Data Approximation, Nearest Neighbour, Pattern Sequence, Stock Time-Series.
1. INTRODUCTION
Data mining is the process of extracting knowledge from huge databases. A sequence database consists of sequences of ordered events, with or without a notion of time. Time series data is a sequence database consisting of sequences of values or events obtained over repeated measurements of time, which can be used to predict future events for user applications. Forecasting is the prediction of forthcoming events based on historical events. The recurring interval for forecasting depends on the duration observed: long term prediction requires many years of data, medium term prediction a year or more, and short term prediction weeks or days.
The main motivation behind this work is that it is crucial for stock market investors to estimate the behaviour or trend of stock market prices as precisely as possible in order to reach the best trading decisions for their investments. On the other hand, the complexity of many financial markets stems from the nonlinearity and nonparametric nature of the variables influencing the index movement directions, including human psychology and political events. The unpredictable, volatile market index makes it a highly challenging task to accurately forecast its path of movement. In this context, it is necessary to build an efficient forecasting model, so that the investor can utilize the most accurate time series forecasting model to maximize profit or to minimize risk.
In this paper, we use the sliding window model to analyze stock time-series data. The basic idea is that rather than running computations on the entire data, we can make decisions based only on the recent data held in a window. At each time instant t, a new data element arrives; this element expires at time (t + w), where w is the window size or length. The sliding window model is useful for moving object search, stock analysis or sensor network analysis, where only recent events may be important, and it reduces memory requirements because only a small window of data is used. In this paper, a new method called
APST has been proposed, which generates predicted values for the original stock time series data. Here, we first preprocess the historical stock time series data to generate a sequence of approximated values using the Multilevel Segment Mean approach [2]. Then, we use this approximated sequence of values for the prediction process. To forecast, we use the Euclidean distance to find the nearest neighbour objects and identify the similar set of objects, as in [3]. The accuracy of APST is estimated by computing the percentage of error based on the difference between the predicted value and the actual known value for each test sample.
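The sliding window behaviour described above can be sketched in a few lines. This is an illustrative fragment rather than part of the APST algorithm itself; the function name and the toy price stream are ours:

```python
from collections import deque

def sliding_windows(stream, w):
    """Yield the most recent w elements each time a new element arrives.

    An element arriving at time t effectively expires once the window
    has advanced past t + w, so memory stays bounded by w.
    """
    window = deque(maxlen=w)  # oldest elements are evicted automatically
    for x in stream:
        window.append(x)
        if len(window) == w:
            yield list(window)

# Usage: windows over a toy price stream with w = 3
prices = [10.0, 10.5, 10.2, 10.8, 11.0]
print(list(sliding_windows(prices, 3)))
# -> [[10.0, 10.5, 10.2], [10.5, 10.2, 10.8], [10.2, 10.8, 11.0]]
```

A `deque` with `maxlen=w` is a natural fit here: appends are O(1) and eviction of the expired element happens implicitly.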
The rest of the paper is organized as follows: Section 2 briefly discusses the literature on stock price time series forecasting. Section 3 presents the background work, Section 4 contains the problem definition, Section 5 describes the system architecture, Section 6 presents the mathematical model and algorithm, and Section 7 addresses the experimental results for the proposed method and the existing LBF technique. Concluding remarks are summarized in Section 8.
2. LITERATURE SURVEY
Popular algorithms like the Support Vector Machine (SVM) and Reinforcement Learning are effective in tracing the stock market and help in maximizing the profit of stock option purchases while keeping the risk low [4]-[5]. Nayak et al. [6] tested the predictive power of the clustering technique on Australian stock market data using a brute force method. This is based on the idea that a cluster formed around an event could be used as a good predictor for the future event. Conejo et al. [7] proposed a technique to forecast day-ahead electricity prices based on the wavelet transform and ARIMA models. The series of prices is decomposed using the wavelet transform into a set of constitutive series. Then, ARIMA models are used to forecast the future values of these constitutive series. In turn, through the inverse wavelet transform, the ARIMA model reconstructs the future behaviour of the price series and therefore forecasts prices. Akinwale et al. [8] used a neural network (NN) approach to predict the untranslated and translated Nigeria Stock Market Price (NSMP). They used a 5-j-1 network topology to adopt the five input variables, where j, the number of hidden neurons, is determined during network selection. Both the untranslated and translated statements were analyzed and compared. The performance on the translated NSMP, using regression analysis or error propagation, was superior to that on the untranslated NSMP: the error on the untranslated NSMP was around 11.3%, compared to 2.7% for the translated NSMP.
Kuang et al. [9] used MARX (Moving Average AutoRegressive eXogenous prediction model) fused with RS (Rough Set theory) and GS (Grey System theory) to create an automatic stock market forecasting and portfolio selection mechanism. Financial data were collected automatically every quarter and input to an MARX prediction model for forecasting the future trends. The results are clustered using a K-means clustering algorithm and then supplied to an RS classification module, which selects appropriate investment stocks using decision-making rules. The advantage lies in combining different forecasting techniques to improve the efficiency and accuracy of automatic prediction. The efficacy of the combined model is evaluated by comparing the forecasting accuracy of the MARX model with the GM(1,1) model; the hybrid model provides high accuracy.
Suresh et al. [10] used data mining techniques to uncover hidden patterns and predict future trends and behaviours in the financial market. Martinez et al. [11] proposed the nearest neighbour technique called Pattern Sequence-based Forecasting (PSF). This method uses a clustering technique to generate labels and makes predictions based only on these labels. However, it is quite difficult to determine the suitable number of clusters in the clustering step, and in some anomalous cases samples are not in the training set; the method cannot predict future events even when the length of a label pattern is one. The proposed work is an extension of our own work discussed in [12].
3. BACKGROUND
The Label Based Forecasting (LBF) [1] algorithm consists of two phases. In the first phase, a clustering technique is used to generate labels, and in the second phase, forecasting is performed using the information provided by the clustering. In the LBF method, it is quite difficult to determine the suitable number of clusters in the clustering step; moreover, the actual values of the data set are not used for the prediction, but rather the set of labels created by the clustering approach, which may lead to errors in prediction.
In the proposed method, we use the real values of the input time series instead of labels for the prediction process. The APST algorithm first performs the data approximation using the technique called Multilevel Segment Mean (MSM), and in the second phase, prediction is performed on the approximated data.
4. PROBLEM DEFINITION
4.1. Problem Statement
Let P(i) ∈ V^d be a vector composed of the daily closing stock prices of a particular company over d days, given by

P(i) = [p_1, p_2, ..., p_d]   (1)

Then, approximate the vector content V^d to obtain the approximated stream of data

Ap = [Ap_1, Ap_2, ..., Ap_n]   (2)

The objective is to predict the (d + 1)th day's stock price by searching for the k nearest neighbours in Ap.
i) We divide the N days of stock prices into equal-sized groups of K consecutive elements from the input data stream D; in our example K = 27.
ii) We further divide each group into subsegments of t consecutive elements each; in our example t = 3.
The objective is to forecast the stock time series data by finding similar patterns over a stream of stock time series data, thereby reducing the processing cost and the dimensionality of the time series W_i and pattern p_j.
5. SYSTEM ARCHITECTURE
The system architecture consists of the following components: (i) Data Source, (ii) Data Approximation Process, (iii) Prediction Process and (iv) Predicted Data Set. The complete architecture is shown in Figure 1.
Data Source:
It is the collection of historical stock price time series data: the closing stock price values of many companies over several years are collected and stored in a historical database.
Data Approximation Process:
This is a preprocessing step for the prediction task. The original input stock time series data is approximated using the MSM technique, which is discussed in detail below; an example is shown in Figure 2, and the data approximation steps are detailed in Phase-1 of the APST algorithm, as shown in Table 1. The main objective of this step is to condense the data set.
Prediction Process:
The main goal of this paper is to forecast the stock time series data. The prediction involves three steps: i) finding the k nearest neighbours, ii) selecting the elements following each nearest neighbour, and lastly iii) finding the mean of these elements. Figure 3 shows an example of the prediction process, and the detailed steps are given in Phase-2 of the APST algorithm, as shown in Table 1.
Predicted Data Set:
This is the output obtained from the prediction process: the collection of predicted values, which are later compared with the original stock time series values to evaluate the prediction accuracy of the proposed model.
6. MATHEMATICAL MODEL AND ALGORITHM
6.1. Data Approximation Process
The given stock time series data D of size N is divided into equal partitions P_1, P_2, ..., P_n, where the total number of partitions is given by

n = N/K   (3)

and K is the size of each partition. Each partition P_i, 1 ≤ i ≤ n, is segmented into n_i segments S_1, ..., S_{n_i}, where the total number of segments is given by

n_i = |P_i|/t   (4)

and t is the size of each segment. The set of segments for each partition P_i is given by

S_{P_i} = [S_1, S_2, ..., S_{n_i}]   (5)

The data approximation for the input stock time series data D is obtained by computing segment means from level l down to level 0, where

l = log_t K   (6)

and at each level j we can form t^j disjoint segments (3^j in our example). For each segment S_j ∈ S_{P_i} at the top level, the mean of all the elements in S_j is computed and stored in AP_ij[l]. Adjacent groups of t segments at level l are then grouped as one segment at level l - 1; again, for each segment at level l - 1, the mean of its elements is computed and stored in AP_ij[l - 1]. The process continues through AP_ij[l - 2], AP_ij[l - 3], and so on down to AP_ij[0]. AP_ij[0] gives the approximated value of the partition P_i. This process is continued for all the partitions; finally, the approximated values are Ap = [Ap_1, Ap_2, ..., Ap_n]. Figure 2 shows the segment mean representation of the stock time series data D of length N.

Example:
This example shows the computation of the approximation values for the given input stock time series data. In Figure 2, at levels 2, 1 and 0 we construct a total of 9, 3 and 1 segments respectively. The first segment AP_1 at level 2 is the mean of the 1st, 2nd and 3rd values in the input stock time series data D. Similarly, the second segment AP_2 at level 2 is the mean of the 4th, 5th and 6th values in D. In this way we construct the segments up to AP_9 at level 2, i.e., AP_1, AP_2, AP_3, ..., AP_9, as shown in Figure 2.
At level 1, we construct 3 segments AP_10, AP_11 and AP_12, each computed as the mean of 3 adjacent segments on level 2, i.e., AP_10 = [AP_1 + AP_2 + AP_3]/3, AP_11 = [AP_4 + AP_5 + AP_6]/3 and AP_12 = [AP_7 + AP_8 + AP_9]/3.
Lastly, we compute the single segment AP_13 at level 0: AP_13 = [AP_10 + AP_11 + AP_12]/3. Hence AP_13 is the approximated value of the first group K_1 in the input stock time series data D, i.e., Ap_1 = AP_13. In the same manner, we can compute the next approximated value Ap_2 from group K_2, then Ap_3 from group K_3, and so on. The set of approximated values at level 0 is then Ap = [Ap_1, Ap_2, ..., Ap_n].

The stock price time series values are predicted using the data approximation values obtained in the previous section. Given the set of approximated values Ap = [Ap_1, Ap_2, ..., Ap_n], we need to compute a set of predicted values

P' = [v_1, v_2, ..., v_m]   (7)
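The multilevel segment-mean computation illustrated in the example above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: it assumes, as in the paper, that the partition size K is a power of the segment size t (the usage below takes K = 9 = 3^2 for brevity, whereas the paper's example uses K = 27), and the function names are ours:

```python
def msm_approximate(partition, t):
    """Collapse one partition to a single value by repeated segment means.

    Assumes len(partition) is a power of t, as in the paper (K = 27 = 3^3),
    so every group at every level has exactly t elements.
    """
    level = list(partition)
    while len(level) > 1:
        # group t adjacent values and replace each group by its mean
        level = [sum(level[i:i + t]) / t for i in range(0, len(level), t)]
    return level[0]

def approximate_series(series, k, t):
    """Split the series into partitions of size k and approximate each one."""
    usable = len(series) - len(series) % k          # drop a trailing partial group
    return [msm_approximate(series[i:i + k], t) for i in range(0, usable, k)]

# Usage: one partition of K = 9 values with t = 3
part = [1, 2, 3, 4, 5, 6, 7, 8, 9]
print(msm_approximate(part, t=3))   # level 1: [2.0, 5.0, 8.0] -> level 0: 5.0
```

Because every segment has the same size, the nested means at the lower levels coincide with the overall partition mean; the multilevel form matters when intermediate levels are kept for multi-resolution matching, as in [2].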
Figure 1. System Architecture

Figure 2. Data Approximation using MSM
Let W be the window of size w, and let m be the size of the predicted sequence; in our case m = 1. Now consider the last w elements in the input sequence Ap, i.e., the pattern set PS given by

PS = <Ap_{n-w}, Ap_{n-w+1}, ..., Ap_{n-1}, Ap_n>   (8)

Next, the nearest neighbours of PS are found in Ap. Let k be the number of nearest neighbours; the set of nearest neighbours is given by

NN = <nn_1, nn_2, ..., nn_k>   (9)

For each nearest neighbour nn_i ∈ NN, retrieve the sequence of m elements next to nn_i, i.e.,

EL_i = <e_{i1}, e_{i2}, ..., e_{im}>   (10)

The set of the sequences of m elements next to all the nearest neighbours in NN is given by the set NS:

NS = <EL_1, EL_2, ..., EL_k>   (11)

The predicted sequence consists of the averages of the corresponding elements in the set NS:

P' = <v_1, v_2, ..., v_m> = < (1/k) Σ_{i=1}^{k} e_{i1}, (1/k) Σ_{i=1}^{k} e_{i2}, ..., (1/k) Σ_{i=1}^{k} e_{im} >   (12)

Example:
The prediction process is shown in Figure 3. Table 1 shows the complete algorithm for data approximation and prediction of stock time series data.
The approximated values are extracted from the stock time series data D. Considering patterns of length w in the approximated values, we have to predict a stock sequence of the next time step. In the prediction process, we search for the k nearest neighbour stock values within the threshold ψ of that pattern, and then the stock sequences next to the found neighbours are extracted. The predicted stock sequence is then estimated by taking the mean of the sequences found in the previous step.
The computational cost of our proposed APST method for data approximation is O(n * n_i), where n is the number of partitions and n_i is the number of segments. The computational cost of the prediction task is O(m * |NN|), where m is the size of the sequence to be predicted and |NN| is the total number of nearest neighbours. Thus, in terms of both accuracy and the time required for prediction, the proposed method compares favourably with the existing Label Based Forecasting (LBF) method.
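Under the same notation, the Phase-2 prediction can be sketched as below. This is an illustrative sketch: the helper names are ours, and the m successors of the k nearest windows are averaged exactly as in eq. (12), but for simplicity the neighbour search ranks all historical windows of length w by plain Euclidean distance instead of applying the threshold ψ:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_next(ap, w, k, m=1):
    """Predict the next m values of the approximated series ap.

    The last w values form the query pattern PS (eq. 8); the k historical
    windows closest to PS are the neighbours NN (eq. 9), and the m values
    following each neighbour (eq. 10) are averaged per position (eq. 12).
    """
    ps = ap[-w:]                                         # pattern set PS
    # candidate windows must leave room for m successor elements,
    # which also excludes the query window itself at index len(ap) - w
    candidates = sorted(
        (euclidean(ap[i:i + w], ps), i)
        for i in range(len(ap) - w - m + 1)
    )
    neighbours = candidates[:k]                          # NN, eq. (9)
    seqs = [ap[i + w:i + w + m] for _, i in neighbours]  # EL_i, eq. (10)
    # average the corresponding elements across the k sequences, eq. (12)
    return [sum(s[j] for s in seqs) / len(seqs) for j in range(m)]

# Usage on a toy approximated series: the pattern [1.0, 2.0] has twice
# been followed by 3.0 in the history, so that is the prediction.
ap = [1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.0, 2.0]
print(predict_next(ap, w=2, k=2, m=1))   # -> [3.0]
```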
7. EXPERIMENTAL RESULTS
Experiments are conducted on two real datasets, the Taiwan Stock Exchange index dataset (TAIEX) and the Bombay Stock Exchange index dataset (BSEX), for different companies. The performance of our prediction approach is compared with that of the LBF method. We use the Mean Error Relative (MER) and the Mean Absolute Error (MAE) for evaluation, which are defined as follows.
Table 1
Algorithm APST: Approximation and Prediction of Stock Time-Series Data

Algorithm: APST(D, K, t, w, m)
Input:  D : stock time series data set of size N
        K : size of each partition
        t : size of each segment
        w : window size
        m : size of the sequence to be predicted
Output: P' : predicted stock values
Phase-1: Data Approximation
begin
   n = N/K, l = log_t K
   P = partition D into p_1, p_2, ..., p_n of size K
   for each partition p_i ∈ P do
      segment p_i into n_i segments s_1, s_2, ..., s_{n_i}
      S_{P_i} = <s_1, s_2, ..., s_{n_i}>
      for each segment S_j ∈ S_{P_i} do
         AP_ij[l] = mean of S_j
      end for
   end for
   for level = l down to 1 do
      group the elements of AP_ij[level] into segments of t elements
      for each segment, find the mean and store it in AP_ij[level - 1]
   end for
end
// The final set of approximation values is AP_ij[0] = Ap = [Ap_1, Ap_2, ..., Ap_n].
// These values are used in Phase-2 for the prediction.
Phase-2: Data Prediction
begin
   Ap = [Ap_1, Ap_2, ..., Ap_n]
   PS = <Ap_{n-w}, ..., Ap_{n-1}, Ap_n>
   NN = the k nearest neighbours of PS in Ap = <nn_1, nn_2, ..., nn_k>
   for each nn_i ∈ NN do
      E_i = extract the sequence <e_{i1}, e_{i2}, ..., e_{im}> of m elements next to nn_i
   end for
   for j = 1 to m do
      for each E_i ∈ E do
         P'[j] = P'[j] + e_{ij}
      end for
      P'[j] = P'[j]/k
   end for
end

MeanErrorRelative (MER) = (100/N) * Σ_{d=1}^{N} |P'_d - P_d| / P̄   (13)

where P'_d is the predicted stock price on a particular day d, P_d is the actual stock price on day d, P̄ is the mean stock price over the period of interest (day/week), and N is the number of predicted days.

MeanAbsoluteError (MAE) = (1/N) * Σ_{d=1}^{N} |P'_d - P_d|   (14)

Using the above two equations, we compute the MER and MAE for both the existing LBF method and the proposed APST method on the TAIEX dataset, as shown in Table 2 and Table 3. Tables 2 and 3 show that the average MER is 6.89% and the average MAE is 0.47% for the existing LBF method, whereas for the proposed APST method the average MER is 5.90% and the average MAE is 0.37%. The proposed APST method is thus ≈1% more efficient with respect to MER and 0.1% more efficient with respect to MAE compared to the existing LBF method.

Table 3
Performance of LBF and APST with Respect to MER and MAE

        AVG. MER   AVG. MAE
LBF     6.89%      0.47%
APST    5.90%      0.37%
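The two error measures of equations (13) and (14) can be computed directly; the toy predicted and actual prices below are ours:

```python
def mer(predicted, actual):
    """Mean Error Relative (eq. 13): mean absolute error expressed as a
    percentage of the mean actual price over the period of interest."""
    n = len(actual)
    p_bar = sum(actual) / n                       # mean price over the period
    return 100.0 / n * sum(abs(p - a) for p, a in zip(predicted, actual)) / p_bar

def mae(predicted, actual):
    """Mean Absolute Error (eq. 14)."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Usage on toy values; since the mean actual price happens to be 100,
# MER (a percentage) and MAE coincide numerically here.
pred = [101.0, 99.0, 103.0]
act  = [100.0, 100.0, 100.0]
print(round(mer(pred, act), 2))   # -> 1.67 (percent)
print(round(mae(pred, act), 2))   # -> 1.67
```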
The graphs in Figures 4 and 5, plotted by taking the actual stock price values against the predicted stock price values for both methods, indicate that the prediction of the APST method is better than that of the existing LBF. The graph shown in Figure 6 indicates the average CPU time required to forecast different stock time series data. We observed that the average CPU time required for the existing LBF method is 0.61 milliseconds, whereas for our proposed APST method the average CPU time required is 0.5 milliseconds. Our technique APST is 0.11% more efficient than the existing LBF method because LBF considers the entire data set, N = |D|, whereas APST considers the approximated data set, n = |Ap|, i.e., the data size in APST is reduced by the factor N/n. The time complexity of LBF is O(No. of days * |ES_d|), whereas the time complexity of APST is O(m * |NN|), and the number of subsequences in APST is less than ES_d in LBF.
Figure 4. Comparison between Actual value and Forecasted value in the LBF method
8. CONCLUSIONS
The proposed mechanism APST works as a two-phase process. In the first phase, we perform data approximation using the Multilevel Segment Mean (MSM) approach to get the approximated values of the given stock time series data. In the second phase, the prediction of the stock time series is carried out using the Euclidean distance and the Nearest Neighbour approach. The computational costs of the proposed method with respect to the data approximation and the prediction task are O(n * n_i) and O(m * |NN|) respectively.
Table 2
The Prediction Errors by LBF and APST methods on the TAIEX dataset for the financial year 2010

Month   MER(LBF)  MER(APST)  MAE(LBF)  MAE(APST)
April   5.02      4.22       0.53      0.43
May     8.30      7.22       0.56      0.46
June    6.89      5.59       0.51      0.41
July    7.41      6.21       0.47      0.37
Aug     8.37      7.57       0.47      0.37
Sep     7.30      6.40       0.45      0.35
Oct     4.62      3.68       0.47      0.37
Nov     7.26      6.28       0.44      0.34
Dec     6.88      5.88       0.43      0.35
Jan     7.20      8.26       0.44      0.36
Feb     6.26      4.26       0.44      0.35
Mar     7.26      5.26       0.44      0.38
Figure 5. Comparison between Actual value and Forecasted value in the APST method

Further, our experimental results show that the average MER is 6.89% and the average MAE is 0.47% for the existing LBF method, whereas for the proposed method the average MER is 5.90% and the average MAE is 0.37%. Thus, the proposed method is ≈1% more efficient with respect to MER and 0.1% more efficient with respect to MAE compared to the existing LBF method.

Figure 6. Comparison of Forecasting Time between LBF and APST methods

Also, the average CPU time required for the existing LBF method is 0.61 milliseconds, whereas for the proposed method it is 0.5 milliseconds. Thus, the proposed method is 0.11% more efficient than the existing method. Future enhancements can focus on selecting the window size dynamically and on fine-tuning the matching sequence.
REFERENCES
1. F Martínez-Álvarez, A Troncoso, J C Riquelme and J S Aguilar-Ruiz. LBF: A Labeled-Based Forecasting Algorithm and Its Application to Electricity Price Time Series, in Proceedings of the Eighth IEEE Intl. Conf. on Data Mining, pages 453-461, 2008.
2. Xiang Lian, Lei Chen, Jeffrey Xu Yu, Jinsong Han and Jian Ma. Multiscale Representations for Fast Pattern Matching in Stream Time Series, in IEEE Transactions on Knowledge and Data Engineering, 21(4):568-581, 2009.
3. A Troncoso, J C Riquelme, J M Riquelme, J L Martínez and A Gómez. Electricity Market Price Forecasting Based on Weighted Nearest Neighbours Techniques, in IEEE Transactions on Power Systems, 22(3):1294-1301, 2007.
4. K J Kim. Financial Time Series Forecasting using Support Vector Machines, in Neuro-Computing, 55:307-319, 2003.
5. Y Radhika and M Shashi. Atmospheric Temperature Prediction using Support Vector Machines, in International Journal of Computer Theory and Engineering, 1:55-58, 2009.
6. R Nayak and te Braak. Temporal Pattern Matching for the Prediction of Stock Prices, in Proceedings of the 2nd International Workshop on Integrating Artificial Intelligence and Data Mining (AIDM 2007), pages 99-107, 2007.
7. A J Conejo, M A Plazas, R Espínola and B Molina. Day-Ahead Electricity Price Forecasting Using the Wavelet Transform and ARIMA Models, in IEEE Transactions on Power Systems, 20:1035-1042, 2005.
8. Akinwale Adio T, Arogundade O T and Adekoya Adebayo F. Translated Nigeria Stock Market Price using Artificial Neural Network for Effective Prediction, in Journal of Theoretical and Applied Information Technology, 2009.
9. Kuang Yu Huang and Chuen-Jiuan Jane. A Hybrid Model for Stock Market Forecasting and Portfolio Selection Based on ARX, Grey System and RS Theories, in Expert Systems with Applications, pages 5387-5392, 2009.
10. M Suresh Babu, N Geethanjali and B Sathyanarayana. Forecasting of Indian Stock Market Index Using Data Mining and Artificial Neural Network, in International Journal of Advance Engineering and Application, 2011.
11. F Martínez-Álvarez, A Troncoso, J C Riquelme and J S Aguilar-Ruiz. Energy Time Series Forecasting Based on Pattern Sequence Similarity, in IEEE Transactions on Knowledge and Data Engineering, 23(8):1230-1243, 2011.
12. Vishwanath R H, Leena S V, Srikantaiah K C, K Shreekrishna Kumar, P Deepa Shenoy, Venugopal K R, S S Iyengar and L M Patnaik. APST: Approximation and Prediction of Stock Time-Series Data using Pattern Sequence, in Proceedings of the 8th International Multi Conference on Information Processing (ICIP 2013), 2013.
Vishwanath R Hulipalled is an Assistant Professor in the Department of Computer Science and Engineering at Sambhram Institute of Technology, Bangalore, India. He received his Bachelor's degree in Computer Science and Engineering from Karnataka University and his Master of Engineering from UVCE, Bangalore University, Bangalore. He is presently pursuing his Ph.D in the area of Data Mining at JNTU Hyderabad. His research interests include Time Series Mining and Data Analysis.

Leena is pursuing her B.E in the Department of Computer Science and Engineering, University Visvesvaraya College of Engineering, Bangalore. Her research interest is in the area of Data Mining and Time Series Mining.

Srikantaiah K C is an Associate Professor in the Department of Computer Science and Engineering at S J B Institute of Technology, Bangalore, India. He obtained his B.E and M.E degrees in Computer Science and Engineering from Bangalore University, Bangalore. He is presently pursuing his Ph.D programme in the area of Web Mining at Bangalore University. His research interest is in the area of Data Mining, Web Mining and the Semantic Web.
K Shreekrishna Kumar is currently the Director of the All India Council for Technical Education, SWRO, Bangalore. He obtained his Master of Science from Bhopal University. He received his Master's degree in Information Technology from Punjab University. He was awarded a Ph.D in Physics (Glass Technology) from Mahatma Gandhi University. He was a member of the Jury Panel, Indian Journal of Pure and Applied Physics, CSIR (New Delhi), and a collaborative researcher at the Nuclear Science Centre, New Delhi.

P Deepa Shenoy was born in India on May 9, 1961. She graduated from UVCE, completed her M.E. from UVCE, her MS (Systems and Information) from BITS, Pilani, and obtained her Ph.D in CSE from Bangalore University. She is presently employed as a Professor in the Department of CSE at UVCE. Her research interests include Computer Networks, Wireless Sensor Networks, Parallel and Distributed Systems, Digital Signal Processing and Data Mining.

Venugopal K R is currently the Principal, University Visvesvaraya College of Engineering, Bangalore University, Bangalore. He obtained his Bachelor of Engineering from University Visvesvaraya College of Engineering. He received his Master's degree in Computer Science and Automation from the Indian Institute of Science, Bangalore. He was awarded a Ph.D in Economics from Bangalore University and a Ph.D in Computer Science from the Indian Institute of Technology, Madras. He has a distinguished academic career and has degrees in Electronics, Economics, Law, Business Finance, Public Relations, Communications, Industrial Relations, Computer Science and Journalism. He has authored and edited 39 books on Computer Science and Economics, which include Petrodollar and the World Economy, C Aptitude, Mastering C, Microprocessor Programming, Mastering C++ and Digital Circuits and Systems. During his three decades of service at UVCE he has over 350 research papers to his credit. His research interests include Computer Networks, Wireless Sensor Networks, Parallel and Distributed Systems, Digital Signal Processing and Data Mining.

S S Iyengar is currently the Roy Paul Daniels Professor and Chairman of the Computer Science Department at Louisiana State University. He heads the Wireless Sensor Networks Laboratory and the Robotics Research Laboratory at LSU. He has been involved with research in High Performance Algorithms, Data Structures, Sensor Fusion and Intelligent Systems since receiving his Ph.D degree in 1974 from MSU, USA. He is a Fellow of the IEEE and the ACM. He has directed over 40 Ph.D students and 100 postgraduate students, many of whom are faculty at major universities worldwide or scientists or engineers at national labs and industries around the world. He has published more than 500 research papers and has authored or co-authored 6 books and edited 7 books. His books are published by John Wiley & Sons, CRC Press, Prentice Hall, Springer Verlag, IEEE Computer Society Press, etc. One of his books, titled Introduction to Parallel Algorithms, has been translated into Chinese.

L M Patnaik is currently Honorary Professor, Indian Institute of Science, Bangalore, India. He was Vice Chancellor, Defence Institute of Advanced Technology, Pune, India, and has been a Professor since 1986 with the Department of Computer Science and Automation, Indian Institute of Science, Bangalore. During the past 35 years of his service at the Institute he has over 700 research publications in refereed international journals and refereed international conference proceedings. He is a Fellow of all four leading Science and Engineering Academies in India, and a Fellow of the IEEE and of the Academy of Science for the Developing World. He has received twenty national and international awards; notable among them is the IEEE Technical Achievement Award for his significant contributions to High Performance Computing and Soft Computing. His areas of research interest have been Parallel and Distributed Computing and Mobile Computing.