Bhashyam Ramesh
Teradata
Publication
Featured research published by Bhashyam Ramesh.
International Conference on Management of Data | 2009
Ahmad Ghazal; Dawit Seid; Bhashyam Ramesh; Alain Crolotte; Manjula Koppuravuri; Vinod G
Query processing in a DBMS typically involves two distinct phases: compilation, which generates the best plan and its corresponding execution steps, and execution, which evaluates these steps against database objects. For some queries, considerable resource savings can be achieved by skipping the compilation phase when the same query was previously submitted and its plan was already cached. In a number of important applications the same query, called a Parameterized Query (PQ), is repeatedly submitted in the same basic form but with different parameter values. PQs are used extensively in both data-update workloads (e.g. batch update programs) and data-access queries. There are tradeoffs associated with caching and reusing query plans, such as space utilization and maintenance cost. Moreover, pre-compiled plans may be suboptimal for a particular execution for various reasons, including data skew and the inability to exploit value-based query transformations such as materialized-view rewrite and unsatisfiable-predicate elimination. We address these tradeoffs by distinguishing two types of plans for PQs: generic and specific. Generic plans are pre-compiled plans that are independent of the actual parameter values; prior to execution, parameter values are plugged into a generic plan. For specific plans, parameter values are plugged in prior to the compilation phase. This paper provides a practical framework for dynamically deciding between specific and generic plans for PQs based on a mix of rule-based and cost-based heuristics, implemented in the Teradata 12.0 DBMS.
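The generic-versus-specific decision described in the abstract can be sketched as a small plan cache. The skew threshold, plan representation, and decision rule below are illustrative assumptions, not the paper's actual heuristics:

```python
from dataclasses import dataclass, field

@dataclass
class PlanCache:
    """Toy cache that chooses between a generic and a specific plan
    for a parameterized query (PQ). Threshold is a placeholder."""
    skew_threshold: float = 0.3          # assumed data-skew cutoff
    plans: dict = field(default_factory=dict)

    def choose(self, query_template: str, params: tuple, skew: float) -> str:
        # Rule heuristic: under heavy skew a pre-compiled generic plan
        # may be suboptimal, so compile a specific plan with the
        # parameter values plugged in before compilation.
        if skew > self.skew_threshold:
            return f"specific plan for {query_template} with {params}"
        # Otherwise build once and reuse a parameter-independent
        # generic plan; values are plugged in at execution time.
        if query_template not in self.plans:
            self.plans[query_template] = f"generic plan for {query_template}"
        return self.plans[query_template]

cache = PlanCache()
p1 = cache.choose("SELECT * FROM t WHERE id = ?", (1,), skew=0.1)
p2 = cache.choose("SELECT * FROM t WHERE id = ?", (2,), skew=0.1)
p3 = cache.choose("SELECT * FROM t WHERE id = ?", (99,), skew=0.9)
# p1 and p2 reuse the same cached generic plan; p3 is specific.
```

The point of the sketch is only the shape of the tradeoff: a generic plan amortizes compilation across submissions, while a specific plan pays compilation cost to exploit the actual parameter values.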
Archive | 2015
Bhashyam Ramesh
Big data is a broad descriptive term for non-transactional data that are user generated and machine generated. Data generation evolved from transactional data to interaction data and then to sensor data. Web logs were the first step in this evolution: machine-generated logs of internet activity caused the first growth of data. Social media pushed data production higher with human interactions, and automated observations and wearable technologies make up the next phase of big data. Data volumes have been the primary focus of most big data discussions; architectures for big data often focus on storing large volumes of data, and dollars per TB (terabyte) becomes the metric for architecture discussions. We argue this is not the right focus. Big data is about deriving value, so analytics should be the goal behind investments in storing large volumes of data, and the metric should be dollars per analytic performed. There are three functional aspects to big data: data capture, data […]. Data complexity, not volume, is the primary concern of big data analytics, and the measure of goodness of a big data analytic architecture is dollars per analytic, not dollars per TB.
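The dollars-per-TB versus dollars-per-analytic argument can be made concrete with toy arithmetic; all figures below are hypothetical, chosen only to show how the two metrics can rank the same systems differently:

```python
def dollars_per_tb(cost: float, tb: float) -> float:
    return cost / tb

def dollars_per_analytic(cost: float, analytics: int) -> float:
    return cost / analytics

# Hypothetical systems, both costing $1M: System A stores 1000 TB
# but runs 100 analytics; System B stores 500 TB but runs 10,000.
a_tb, a_an = dollars_per_tb(1e6, 1000), dollars_per_analytic(1e6, 100)
b_tb, b_an = dollars_per_tb(1e6, 500), dollars_per_analytic(1e6, 10_000)
# By $/TB, A looks cheaper ($1000/TB vs $2000/TB), but by $/analytic
# B wins by a wide margin ($100 vs $10,000 per analytic).
```

Under the storage metric the cheaper-per-TB system appears better even though it delivers far less analytic value per dollar, which is the abstract's point.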
Statistical and Scientific Database Management | 2018
Bhashyam Ramesh; C Jaiprakash; Naveen Sankaran; Jitendra Yasaswi
Predicting the amount of time a SQL query takes to execute can help in prioritizing, optimizing, and scheduling query execution; it also helps in optimal utilization of hardware resources. The total execution time of a query can be split into the time taken for parsing/optimizing the query and the time taken for the actual execution. In this work, we focus on the first part of the problem: predicting the optimization time of a query. Predicting optimization time can hint the optimizer not to spend too much time optimizing a query in cases where parse time is much higher than execution time; such query execution plans can be cached to speed up future executions. If optimization time is much lower than execution time, we can choose not to cache the plan and make better utilization of the execution-plan cache. If optimization time is relatively low compared to execution time, the optimizer can be hinted to spend more time parsing to produce a better-optimized plan, which can reduce execution time. One method for predicting parse time is to use heuristic information from the query text. In this work, we take advantage of machine learning techniques by designing a set of features from the SQL query text and using a neural network to predict the parse time. We have tried both regression-based and classification-based approaches, and we report high accuracy in predicting the elapsed parse time of SQL queries.
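As a rough illustration of feature-based parse-time prediction, the sketch below derives simple counts from a query's text and feeds them to a toy linear stand-in for the paper's neural network. The feature set, weights, and caching threshold are all assumptions made for illustration; the paper's actual features and model are not reproduced here:

```python
import re

def sql_features(query: str) -> dict:
    """Illustrative features one might derive from SQL query text."""
    q = query.upper()
    return {
        "n_joins": q.count(" JOIN "),
        "n_predicates": q.count(" AND ") + q.count(" OR ") + q.count("WHERE"),
        "n_tables": len(re.findall(r"\bFROM\b|\bJOIN\b", q)),
        "n_tokens": len(q.split()),
    }

def predict_parse_time_ms(query: str, weights=None) -> float:
    """Toy linear predictor: weighted sum of text features.
    Weights are made-up placeholders, not learned values."""
    w = weights or {"n_joins": 5.0, "n_predicates": 1.5,
                    "n_tables": 2.0, "n_tokens": 0.1}
    f = sql_features(query)
    return sum(w[k] * f[k] for k in w)

q = "SELECT a.x FROM a JOIN b ON a.id = b.id WHERE a.x > 10 AND b.y < 5"
t = predict_parse_time_ms(q)
# Classification-style use: cache the plan only when parsing is
# predicted to dominate (the 10 ms threshold is again a placeholder).
should_cache = t > 10.0
```

In a real system the weights would be learned from observed parse times, and the same prediction could drive either the caching decision or the regression target described in the abstract.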
Archive | 2004
Douglas P. Brown; Bhashyam Ramesh; Anita Richards
Archive | 2003
Douglas P. Brown; Anita Richards; Bhashyam Ramesh; Caroline M. Ballinger; Richard Glick
Archive | 2005
Douglas P. Brown; Bhashyam Ramesh; Anita Richards
Archive | 2004
Douglas P. Brown; Anita Richards; Bhashyam Ramesh
Archive | 2005
Douglas P. Brown; Anita Richards; Bhashyam Ramesh
Archive | 2006
John Mark Morris; Bhashyam Ramesh
Archive | 2003
Bhashyam Ramesh; Michael W. Watzke