Libraries & Information Technology eJournal | 2021

Review of Knowledge Management in Optical Networks, Lambda Architecture using Database Technologies in Cloud Settings

 

Abstract


In this paper, we present the concepts, theories, and an overview of knowledge management (KM) in autonomous optical networks and of the Lambda Architecture in cloud-related environments. The study presents illustrative cases that demonstrate the potential application of a KM architecture and evaluate various policies for the knowledge sharing and integration algorithm. Knowledge is used at the optical transponder system level, while sharing and integration are implemented at the node level and at the supervising and data analytics (SDA) controller level. The KM process was evaluated on a metro-network scenario in terms of model error convergence time and the amount of data shared among agents. The propagation and reinforcement policies achieved convergence times similar to those of data-based policies at various phases of the network learning process, without compromising the convergence accuracy of the model prediction.

The Lambda Architecture is a model for Big Data and database research that supports data processing with a balance of throughput, latency, and fault tolerance. No single tool provides a complete solution with high accuracy, low latency, and high throughput; this motivates combining a set of tools and methods into a comprehensive Big Data approach. Although this paper does not provide a developed, working tool, it outlines the methods used by researchers to overcome some of the shortcomings of the Lambda Architecture. The Lambda Architecture defines a set of layers into which tools and methods can be fitted to construct a comprehensive Big Data scheme: the Speed Layer, the Serving Layer, and the Batch Layer. Each layer provides a set of features and builds upon the functionality delivered by the layers beneath it. The Batch Layer stores the master dataset, an immutable, append-only set of raw data, and precomputes results using a distributed processing system such as Hadoop or Apache Spark that can manage large amounts of data. The Speed Layer captures new data arriving in real time and processes it. The Serving Layer comprises a parallel query-processing engine that takes results from both the Batch and Speed Layers and answers queries and requests in real time with low latency.

Stack Overflow is a question-and-answer forum with an enormous user community and millions of posts, growing rapidly over the years. This paper demonstrates the Lambda Architecture by constructing a data pipeline that adds a new "Recommended Questions" section to the Stack Overflow user profile and updates the suggested questions in real time. Additionally, indicators such as trending tags and user performance metrics are shown in the user dashboard by querying the batch processing layer. Finally, the paper surveys the methods and techniques used to help solve complex database problems on the Stack Overflow platform infrastructure.
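To make the layer split concrete, the following is a minimal PySpark sketch of a Lambda-style pipeline over Stack Overflow-like post data: the batch layer recomputes tag counts from the master dataset, the speed layer counts tags arriving on a stream, and the serving layer merges both views at query time. The storage paths, Kafka topic, and post schema are illustrative assumptions, not the pipeline described in the reviewed paper.

```python
# Minimal Lambda Architecture sketch (batch, speed, and serving layers).
# NOTE: paths, the Kafka topic, and the post schema are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.appName("lambda-sketch").getOrCreate()

post_schema = StructType([
    StructField("user_id", StringType()),
    StructField("tag", StringType()),
])

# --- Batch Layer: recompute a batch view from the immutable master dataset.
master = spark.read.schema(post_schema).json("s3://datalake/master/posts/")
batch_view = master.groupBy("tag").count().withColumnRenamed("count", "batch_count")
batch_view.write.mode("overwrite").parquet("s3://datalake/serving/tag_counts/")

# --- Speed Layer: process only the data that arrived since the last batch run.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "new-posts")
          .load()
          .select(F.from_json(F.col("value").cast("string"), post_schema).alias("p"))
          .select("p.*"))
speed_query = (stream.groupBy("tag").count()
               .withColumnRenamed("count", "speed_count")
               .writeStream.outputMode("complete")
               .format("memory").queryName("speed_tag_counts")
               .start())

# --- Serving Layer: merge the precomputed batch view with real-time increments.
def tag_counts():
    batch = spark.read.parquet("s3://datalake/serving/tag_counts/")
    speed = spark.sql("SELECT * FROM speed_tag_counts")
    return (batch.join(speed, "tag", "full_outer")
            .fillna(0, ["batch_count", "speed_count"])
            .withColumn("total", F.col("batch_count") + F.col("speed_count")))
```

A real deployment would replace the in-memory speed view with a low-latency store (e.g., a key-value database) and expose `tag_counts()` behind the serving layer's query API, but the merge-at-query-time structure is the essence of the architecture described above.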

DOI 10.29322/ijsrp.11.08.2021.p11663
Language English
Journal Libraries & Information Technology eJournal
