K. Thammi Reddy
Gandhi Institute of Technology and Management
Publications
Featured research published by K. Thammi Reddy.
Journal of Biomolecular Structure & Dynamics | 2015
S.V.G. Reddy; K. Thammi Reddy; V. Valli Kumari; Syed Hussain Basha
Indoleamine 2,3-dioxygenase (IDO) is emerging as an important new therapeutic drug target for the treatment of cancer characterized by pathological immune suppression. IDO catalyzes the rate-limiting step of tryptophan degradation along the kynurenine pathway. Reduction in local tryptophan concentration and the production of immunomodulatory tryptophan metabolites contribute to the immunosuppressive effects of IDO. The presence of IDO on dendritic cells in tumor-draining lymph nodes, driving T cells toward an immunosuppressive microenvironment that favors the survival of tumor cells, has confirmed the importance of IDO as a promising novel anticancer immunotherapy drug target. On the other hand, Withaferin A (WA), an active constituent of the ayurvedic herb Withania somnifera, has been shown to have a wide range of targeted anticancer properties. The present study is an attempt to explore the potential of WA in attenuating IDO for immunotherapeutic tumor-arresting activity and to elucidate the underlying mode of action using a computational approach. Our docking and molecular dynamics simulation results predict high binding affinity of the ligand for the receptor, with a binding energy of up to −11.51 kcal/mol and a predicted IC50 of 3.63 nM. Further, de novo molecular dynamics simulations predicted stable ligand interactions with the critically important residues SER167, ARG231, and LYS377, and with the heme moiety involved in IDO's activity. Conclusively, our results strongly suggest WA as a valuable small ligand molecule with strong binding affinity toward IDO.
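As a point of reference, the reported IC50 is consistent with converting the docking energy to an inhibition constant through ΔG = RT ln Ki. A minimal sketch in Python, assuming the AutoDock-style convention of T = 298.15 K; the conversion illustrates the relationship between the two reported numbers and is not the authors' stated procedure:

    import math

    R = 1.98719e-3  # gas constant, kcal/(mol*K)
    T = 298.15      # temperature assumed by AutoDock-style scoring, K

    def predicted_ki(delta_g_kcal_mol):
        """Predicted inhibition constant (molar) from dG = RT * ln(Ki)."""
        return math.exp(delta_g_kcal_mol / (R * T))

    print(predicted_ki(-11.51))  # ~3.7e-9 M, i.e. ~3.7 nM, close to the reported 3.63 nM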
workshop on information security applications | 2015
Venkateswara Rao Pallipamu; K. Thammi Reddy; P. Suresh Varma
A cryptographic hash function plays a pivotal role in many cryptographic algorithms and protocols, especially in authentication, non-repudiation, and data integrity services. A cryptographic hash function takes a message of arbitrary size as input and produces a small, fixed-size hash code as output. In the proposed hash algorithm ASH-160 (Algorithm for Secure Hashing-160), each 512-bit block of a message is first reduced to a 480-bit block and then divided into ten equal blocks, each of which is further divided into three sub-blocks of 16 bits each. These three sub-blocks act as the three points of a triangle, which are used in an area calculation. The calculated area values are in turn processed to generate the message digest. ASH-160 is simple to construct, easy to implement, and exhibits a strong avalanche effect when compared to SHA-1 and RIPEMD-160.
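To make the block layout concrete, here is a minimal sketch of the partitioning step in Python. The 512-to-480-bit reduction and the area computation are not specified in the abstract, so only the splitting into ten 48-bit blocks of three 16-bit sub-blocks is shown; the function name is hypothetical:

    def split_block(block480):
        # block480: a 480-bit (60-byte) block, assumed already reduced
        # from the original 512-bit message block (reduction not shown).
        assert len(block480) == 60
        triples = []
        for i in range(10):                      # ten 48-bit blocks
            chunk = block480[i * 6:(i + 1) * 6]  # 6 bytes = 48 bits
            # three 16-bit sub-blocks, read as big-endian integers; these
            # act as the three triangle points feeding the area calculation
            a = int.from_bytes(chunk[0:2], "big")
            b = int.from_bytes(chunk[2:4], "big")
            c = int.from_bytes(chunk[4:6], "big")
            triples.append((a, b, c))
        return triples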
Journal of Advances in Information Technology | 2017
P. Srinivasa Rao; M. H. M. Krishna Prasad; K. Thammi Reddy
With the arrival of the data deluge, traditional centralized tools for extracting knowledge from data have become obsolete due to their limited ability to handle massive data. To cope with the need for scalable solutions, a new framework has emerged: Hadoop, an open-source ecosystem designed for storage and large-scale processing on clusters of commodity hardware. To overcome the limitations of keyword-based information retrieval systems, an efficient methodology has been designed. The proposed system mirrors the real world, where every task relies on some form of indexing, which is the basic idea behind knowledge processing. Hadoop and R, open-source frameworks for storing and processing large datasets, are used to preprocess the text documents. First, a set of text documents is considered. Preprocessing is performed on a large domain of data using R; this includes removing stop words, stemming, and excluding low-frequency words. Despite this preprocessing, the colossal number of index terms still remaining in the considered domain data leads to the problem of high dimensionality. The dimensionality of this set of terms is therefore reduced by incorporating a keyword-based methodology into the Hadoop MapReduce framework. The developed model processes queries so as to return relevant information from the data pool with low response time.
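A minimal single-machine sketch of the preprocessing stage described above, written in Python rather than R and without the Hadoop distribution layer; the stop-word list, the crude suffix-stripping stemmer, and the frequency threshold are illustrative stand-ins for whatever the authors actually used:

    import re
    from collections import Counter

    STOP_WORDS = {"the", "a", "an", "is", "of", "and", "to", "in"}  # small illustrative subset
    MIN_FREQ = 2  # assumed cutoff for "low-frequency" terms

    def stem(word):
        # crude suffix stripping as a stand-in for a real stemmer
        for suffix in ("ing", "ed", "es", "s"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[:-len(suffix)]
        return word

    def preprocess(documents):
        counts = Counter()
        tokenized = []
        for doc in documents:
            tokens = [stem(t) for t in re.findall(r"[a-z]+", doc.lower())
                      if t not in STOP_WORDS]
            tokenized.append(tokens)
            counts.update(tokens)
        # drop low-frequency terms to reduce index dimensionality
        return [[t for t in ts if counts[t] >= MIN_FREQ] for ts in tokenized]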
Archive | 2015
S.V.G. Reddy; K. Thammi Reddy; V. Valli Kumari
Binding energy is the significant factor that elucidates the efficiency of docking, both protein-ligand and protein-protein. Docking is very often performed with a genetic algorithm at its standard parameters, which furnishes the docking conformations, binding energies, interactions, and so on. In the proposed work, the parameters of the genetic algorithm are systematically varied for the docking process, and we observe enhancement of the binding energy, the number of interactions, and other quantities that play a substantial role in drug design.
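Assuming the genetic algorithm in question is the Lamarckian GA of AutoDock 4 (a common choice for such docking studies), a parameter sweep can be scripted by rewriting the GA keywords in the docking parameter file (DPF). The keyword names below are AutoDock 4's; the candidate values and the helper itself are illustrative:

    import itertools

    # Candidate GA settings to vary (AutoDock 4 DPF keywords).
    GRID = {
        "ga_pop_size":       [150, 300],
        "ga_num_evals":      [2500000, 10000000],
        "ga_mutation_rate":  [0.02, 0.05],
        "ga_crossover_rate": [0.8, 0.9],
    }

    def dpf_variants(template_lines):
        """Yield DPF contents with every combination of GA parameters substituted."""
        keys = list(GRID)
        for values in itertools.product(*(GRID[k] for k in keys)):
            settings = dict(zip(keys, values))
            out = []
            for line in template_lines:
                first = line.split()[0] if line.split() else ""
                out.append(f"{first} {settings[first]}" if first in settings else line)
            yield settings, "\n".join(out)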
Archive | 2015
D. S. R. Naveenkumar; M. Kranthi Kiran; K. Thammi Reddy; V. Sreenivas Raju
Information retrieval in a semantic desktop environment is an important aspect that deserves emphasis. Many past works have proposed improved information retrieval techniques for this environment, but most of them lack one thing: the involvement of the user in the retrieval process. In other words, retrieval should be interactive. This paper proposes an interaction-based information retrieval system that solicits hints and suggestions from the user in order to arrive at the results that best satisfy him or her.
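The abstract does not specify the feedback mechanism, so purely as one plausible illustration of interactive retrieval, here is a sketch of Rocchio-style relevance feedback, in which the query vector is nudged toward documents the user marks as relevant; the weights and names are hypothetical:

    import numpy as np

    ALPHA, BETA, GAMMA = 1.0, 0.75, 0.15  # classic Rocchio weights (illustrative)

    def rocchio(query_vec, relevant_docs, nonrelevant_docs):
        """Refine a TF-IDF query vector from user relevance judgments."""
        q = ALPHA * query_vec
        if len(relevant_docs):
            q = q + BETA * np.mean(relevant_docs, axis=0)
        if len(nonrelevant_docs):
            q = q - GAMMA * np.mean(nonrelevant_docs, axis=0)
        return np.maximum(q, 0.0)  # keep term weights non-negative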
Archive | 2015
P. Srinivasa Rao; M. H. M. Krishna Prasad; K. Thammi Reddy
In Big Data applications, providing security for massive data is an important challenge, because working with such data requires large-scale resources that must be provided by a cloud service provider. This paper demonstrates a cloud implementation of Big Data technologies and discusses how to protect such data using hashing and how users can be authenticated. In particular, Big Data technologies such as Apache's Hadoop project, which provides parallelized and distributed analysis and processing of petabytes of data, are discussed, along with a summarized view of the monitoring and usage of a Hadoop cluster. An algorithm based on FNV hashing is introduced to provide integrity for the data that the user has outsourced to the cloud; the data within the Hadoop cluster can be accessed and verified using hashing. This approach brings out many new security challenges in a cloud environment built on the Hadoop Distributed File System. The performance of the cluster can be monitored using the Ganglia monitoring tool. The paper also designs an evaluation cloud model that provides quantitative results for regularly checking accuracy and cost. The experimental results show that this model is more accurate, cheaper, and able to respond in real time.
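FNV is a well-known non-cryptographic hash family; a minimal FNV-1a (64-bit variant) implementation is sketched below for reference. The abstract does not state which FNV variant or digest width the authors used, and FNV by itself offers no cryptographic collision resistance, so this is purely illustrative:

    def fnv1a_64(data):
        """FNV-1a, 64-bit: XOR each byte in, then multiply by the FNV prime."""
        FNV_OFFSET_BASIS = 0xcbf29ce484222325  # 14695981039346656037
        FNV_PRIME = 0x100000001b3              # 1099511628211
        h = FNV_OFFSET_BASIS
        for byte in data:
            h ^= byte
            h = (h * FNV_PRIME) & 0xFFFFFFFFFFFFFFFF  # keep to 64 bits
        return h

    # e.g. compare fnv1a_64(downloaded_block) against the digest recorded
    # before upload to detect modification of data outsourced to the cluster.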
International journal of database theory and application | 2015
P. Srinivasa Rao; M. H. M. Krishna Prasad; K. Thammi Reddy
Machine Graphics & Vision International Journal | 2011
R. Todmal Satish; K. Thammi Reddy
International Journal of Information Technology and Computer Science | 2014
P. Srinivasa Rao; K. Thammi Reddy; M. H. M. Krishna Prasad
International Journal of Information Technology and Computer Science | 2013
P. Srinivasa Rao; K. Thammi Reddy; M. H. M. Krishna Prasad