Ruma Dutta
West Bengal University of Technology
Publications
Featured research published by Ruma Dutta.
Parallel Computing Technologies | 2007
Anirban Kundu; Ruma Dutta; Debajyoti Mukhopadhyay
A Web search engine uses forward indexing and inverted indexing as part of its functional design. This indexing mechanism helps in retrieving data from the database based on a user query. In this paper, an efficient solution to the indexing problem is proposed with the introduction of Non-linear Single Cycle Multiple Attractor Cellular Automata (SMACA). The work also shows the generation of SMACA using a specific rule sequence. The searching mechanism runs with linear time complexity.
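The basic operation underlying such a scheme is one evolution step of a one-dimensional cellular automaton driven by a rule vector, i.e. one Wolfram rule per cell. The sketch below illustrates that step only; the rule numbers and the query pattern are illustrative assumptions, not the specific SMACA rule sequence constructed in the paper.

```python
# One evolution step of a 1-D cellular automaton under a per-cell rule vector.
# Rules and pattern are placeholders, not the paper's actual SMACA construction.

def rule_lookup(rule: int, left: int, cell: int, right: int) -> int:
    """Apply a Wolfram rule number to a 3-cell neighbourhood."""
    index = (left << 2) | (cell << 1) | right   # neighbourhood as a 3-bit index
    return (rule >> index) & 1

def ca_step(state: list, rule_vector: list) -> list:
    """Evolve the whole pattern by one time step (null boundary conditions)."""
    n = len(state)
    nxt = []
    for i in range(n):
        left = state[i - 1] if i > 0 else 0
        right = state[i + 1] if i < n - 1 else 0
        nxt.append(rule_lookup(rule_vector[i], left, state[i], right))
    return nxt

# Example: evolve a 6-bit query pattern under a hypothetical rule vector.
pattern = [1, 0, 1, 1, 0, 1]
rule_vector = [90, 150, 90, 150, 90, 150]
print(ca_step(pattern, rule_vector))
```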
International Journal of Intelligent Information and Database Systems | 2008
Anirban Kundu; Ruma Dutta; Debajyoti Mukhopadhyay
A Web search engine uses indexing to manage web pages in an organized way. Web pages are distributed across the server's database, and both forward and inverted indexing are employed to handle them as part of the engine's functional design. This indexing mechanism helps in retrieving data from the database based on a user query. In this paper, an efficient solution to the indexing problem is proposed with the introduction of non-linear Single Cycle Multiple Attractor Cellular Automata (SMACA). The paper also reports an analysis of SMACA using a Rule Vector Graph (RVG) and shows the generation of SMACA using a specific rule sequence. The searching mechanism runs with O(n) complexity. SMACA provides an implicit memory to store the patterns: the search operation that identifies the class of a pattern out of several classes boils down to running a cellular automaton (CA) for one time step, which demands storage only of the CA rule vector (RV) and the seed values. SMACA is based on the sound theoretical foundation of CA technology.
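The one-time-step classification can be pictured as follows: the only stored state is the rule vector and the seed (attractor) patterns, and a query is classified by evolving it once and matching the result against the seeds. The rule vector, seeds, and bucket names below are hypothetical placeholders used purely for illustration.

```python
# O(n) pattern classification: one CA step, then a lookup against stored seeds.
# Rule vector, seeds, and bucket labels are assumptions, not from the paper.

def apply_rule(rule: int, left: int, cell: int, right: int) -> int:
    return (rule >> ((left << 2) | (cell << 1) | right)) & 1

def evolve_once(pattern, rule_vector):
    n = len(pattern)
    return tuple(
        apply_rule(rule_vector[i],
                   pattern[i - 1] if i > 0 else 0,
                   pattern[i],
                   pattern[i + 1] if i < n - 1 else 0)
        for i in range(n)
    )

rule_vector = [102, 60, 90, 150]        # assumed rule sequence
attractor_class = {                     # stored seed values -> index bucket
    (0, 0, 0, 0): "bucket-A",
    (1, 0, 1, 0): "bucket-B",
}

query = [1, 1, 0, 1]
state = evolve_once(query, rule_vector)  # single CA step: O(n)
print(attractor_class.get(state, "unknown class"))
```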
International Journal of Knowledge and Web Intelligence | 2011
Ruma Dutta; Anirban Kundu; Debajyoti Mukhopadhyay
Web page prediction plays an important role by predicting and fetching the probable web page of the next request in advance, thereby reducing user latency. Users surf the internet either by entering a URL, by searching for a topic, or by following links on the same topic. Clustering plays an important role both in searching and in link prediction, and the user's navigational behaviour cannot be ignored either. This paper proposes a web page prediction model that gives significant importance to the user's interest through a clustering technique and captures the navigational behaviour of the user through a Markov model. The clustering technique groups similar web pages: pages of the same type reside in the same cluster, with similarity measured with respect to the topic of the session. The clustering algorithms considered are K-means and K-medoids, where K is determined by the HITS algorithm. Finally, the predicted web pages are stored in the form of cellular automata to make the system more memory efficient.
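A minimal sketch of the Markov-model half of such a scheme is shown below: first-order transition counts are built from session logs and the most probable next page is predicted. The session data are invented, and the K-means/K-medoids clustering and HITS-based choice of K described in the paper are not reproduced here.

```python
# First-order Markov prediction of the next requested page from session logs.
# Sessions are hypothetical and stand in for pages of one topic cluster.
from collections import defaultdict

def build_transition_counts(sessions):
    """Count page-to-page transitions across all user sessions."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for current_page, next_page in zip(session, session[1:]):
            counts[current_page][next_page] += 1
    return counts

def predict_next(counts, current_page):
    """Return the most frequently observed successor of the current page."""
    successors = counts.get(current_page)
    if not successors:
        return None
    return max(successors, key=successors.get)

sessions = [
    ["home", "search", "results", "page_a"],
    ["home", "search", "results", "page_b"],
    ["search", "results", "page_a"],
]
counts = build_transition_counts(sessions)
print(predict_next(counts, "results"))   # -> "page_a"
```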
International Journal of Intelligent Information and Database Systems | 2009
Anirban Kundu; Ruma Dutta; Rana Dattagupta; Debajyoti Mukhopadhyay
An important component of any web search engine is its crawler, also known as a robot or spider. An efficient set of crawlers makes a search engine more powerful, apart from its other measures of performance such as its ranking algorithm, storage mechanism, and indexing techniques. In this paper, we propose an extended technique for crawling the World Wide Web (WWW) on behalf of a search engine. The approach uses multiple crawlers working in parallel, combined with the mechanism of focused crawling (Chakrabarti et al., 1999a, 2002; Mukhopadhyay et al., 2006). The total structure of a website is divided into several levels based on its hyperlink structure for downloading web pages from that site (Chakrabarti et al., 1999b; Mukhopadhyay and Singh, 2004). The number of crawlers at each level is not fixed but dynamic: it is determined at execution time, on an on-demand basis, by a threaded program based on the number of hyperlinks on a specific web page. The paper also proposes a focused hierarchical crawling technique in which crawlers are created dynamically at runtime for different domains to crawl web pages with the essence of resource sharing.
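The sketch below illustrates the level-based, demand-driven threading idea only: each page fetched at one level spawns as many worker threads as it has outgoing links, up to a depth limit. The fetch_links() helper and the site graph are hypothetical stand-ins for real page download and link extraction; this is not the paper's crawler implementation.

```python
# Level-based crawling with a dynamic number of worker threads per page.
# SITE_GRAPH and fetch_links() are illustrative placeholders.
import threading

MAX_LEVEL = 2
SITE_GRAPH = {
    "/": ["/a", "/b"],
    "/a": ["/a1", "/a2"],
    "/b": ["/b1"],
}

visited = set()
visited_lock = threading.Lock()

def fetch_links(url):
    """Stand-in for downloading a page and extracting its hyperlinks."""
    return SITE_GRAPH.get(url, [])

def crawl(url, level):
    with visited_lock:
        if url in visited or level > MAX_LEVEL:
            return
        visited.add(url)
    links = fetch_links(url)
    # The number of crawler threads is decided at run time by the link count.
    workers = [threading.Thread(target=crawl, args=(link, level + 1)) for link in links]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

crawl("/", level=0)
print(sorted(visited))
```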
International Symposium on Information Technology Convergence | 2007
Anirban Kundu; Ruma Dutta; Debajyoti Mukhopadhyay
Searching for a topic or a word with a Web search engine depends on its ranking mechanism. Generally, a search engine sorts through millions of web pages and then presents the significant pages that match the user's search topic. These matches are further ranked so that the most relevant ones come first. This paper proposes an alternative way to rank hyperlinked web pages through analysis of the link structure, which is stored efficiently to minimize storage space using the Galois extension field GF(2^p).
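As a point of reference for this kind of hyperlink-structure scoring, the sketch below runs a plain power iteration (PageRank-style) over an adjacency list. It is a generic link-analysis ranking, not the paper's method, and the GF(2^p)-based storage of the link structure is not reproduced.

```python
# Generic power-iteration link-analysis ranking over a hypothetical link graph.
# Shown only as a reference point; not the paper's GF(2^p)-based scheme.

def rank(links, iterations=20, damping=0.85):
    """Iteratively score pages from their in-links (PageRank-style sketch)."""
    pages = set(links) | {t for targets in links.values() for t in targets}
    score = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * score[page] / len(targets)
            for t in targets:
                new[t] += share
        score = new
    return score

links = {"p1": ["p2", "p3"], "p2": ["p3"], "p3": ["p1"]}   # hypothetical links
for page, s in sorted(rank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(s, 3))
```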
Journal of Convergence Information Technology | 2009
Ruma Dutta; Indranil Ghosh; Anirban Kundu; Debajyoti Mukhopadhyay
Journal of Convergence Information Technology | 2009
Ruma Dutta; Anirban Kundu; Rana Dattagupta; Debajyoti Mukhopadhyay
Journal of Computing and Information Technology | 2006
Anirban Kundu; Ruma Dutta; Debajyoti Mukhopadhyay
Journal of Computing and Information Technology | 2007
Ruma Dutta; Anirban Kundu; Debajyoti Mukhopadhyay