
Publication


Featured research published by Victor S. Lobanov.


Journal of Computational Chemistry | 2001

Multidimensional scaling and visualization of large molecular similarity tables

Dimitris K. Agrafiotis; Dmitrii N. Rassokhin; Victor S. Lobanov

Multidimensional scaling (MDS) is a collection of statistical techniques that attempt to embed a set of patterns described by means of a dissimilarity matrix into a low‐dimensional display plane in a way that preserves their original pairwise interrelationships as closely as possible. Unfortunately, current MDS algorithms are notoriously slow, and their use is limited to small data sets. In this article, we present a family of algorithms that combine nonlinear mapping techniques with neural networks, and make possible the scaling of very large data sets that are intractable with conventional methodologies. The method employs a nonlinear mapping algorithm to project a small random sample, and then “learns” the underlying transform using one or more multilayer perceptrons. The distinct advantage of this approach is that it captures the nonlinear mapping relationship in an explicit function, and allows the scaling of additional patterns as they become available, without the need to reconstruct the entire map. A novel encoding scheme is described, allowing this methodology to be used with a wide variety of input data representations and similarity functions. The potential of the algorithm is illustrated in the analysis of two combinatorial libraries and an ensemble of molecular conformations. The method is particularly useful for extracting low‐dimensional Cartesian coordinate vectors from large binary spaces, such as those encountered in the analysis of large chemical data sets.
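
The sample-then-learn scheme described in the abstract can be outlined in a few lines. The sketch below is a minimal illustration, not the authors' implementation: scikit-learn's MDS and MLPRegressor stand in for the paper's nonlinear mapping algorithm and multilayer perceptrons, and the data and hyperparameters are placeholders.

```python
# Minimal sketch of the sample-then-learn idea: map a small random sample
# with a conventional (quadratic) method, then learn the transform with a
# neural network so additional patterns can be projected directly.
import numpy as np
from sklearn.manifold import MDS
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((100_000, 64))  # high-dimensional patterns (placeholder data)

# 1. Project a small random sample with conventional MDS.
idx = rng.choice(len(X), size=1_000, replace=False)
sample_2d = MDS(n_components=2, random_state=0).fit_transform(X[idx])

# 2. "Learn" the underlying transform with a multilayer perceptron.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X[idx], sample_2d)

# 3. Scale the remaining patterns with the explicit learned function,
#    without reconstructing the entire map.
all_2d = net.predict(X)
```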


Journal of Chemical Information and Computer Sciences | 1999

An efficient implementation of distance-based diversity measures based on k-d trees

Dimitris K. Agrafiotis; Victor S. Lobanov

The problem of quantifying molecular diversity continues to attract significant interest among computational chemists. Most algorithms reported to date are distance-based and scale to the square of the number of compounds being compared, which makes them impractical for very large collections.
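
A minimal sketch of the k-d tree idea follows, with SciPy's cKDTree standing in for the paper's own implementation and a simple average nearest-neighbor distance as the diversity measure; the descriptors are placeholders.

```python
# Distance-based diversity without the O(n^2) pairwise loop: a k-d tree
# answers nearest-neighbor queries in roughly O(log n) per compound.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
descriptors = rng.random((50_000, 8))  # low-dimensional molecular descriptors

tree = cKDTree(descriptors)
# k=2 because each compound's nearest hit is itself (distance 0).
dist, _ = tree.query(descriptors, k=2)
diversity = dist[:, 1].mean()          # average nearest-neighbor distance
print(f"mean nearest-neighbor distance: {diversity:.4f}")
```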


Journal of Computational Chemistry | 2001

Nonlinear Mapping of Massive Data Sets by Fuzzy Clustering and Neural Networks

Dmitrii N. Rassokhin; Victor S. Lobanov; Dimitris K. Agrafiotis

Producing good low‐dimensional representations of high‐dimensional data is a common and important task in many data mining applications. Two methods that have been particularly useful in this regard are multidimensional scaling and nonlinear mapping. These methods attempt to visualize a set of objects described by means of a dissimilarity or distance matrix on a low‐dimensional display plane in a way that preserves the proximities of the objects to whatever extent is possible. Unfortunately, most known algorithms are of quadratic order, and their use has been limited to relatively small data sets. We recently demonstrated that nonlinear maps derived from a small random sample of a large data set exhibit the same structure and characteristics as those of the entire collection, and that this structure can be easily extracted by a neural network, making possible the scaling of data sets orders of magnitude larger than those accessible with conventional methodologies. Here, we present a variant of this algorithm based on local learning. The method employs a fuzzy clustering methodology to partition the data space into a set of Voronoi polyhedra, and uses a separate neural network to perform the nonlinear mapping within each cell. We find that this local approach offers a number of advantages, and produces maps that are virtually indistinguishable from those derived with conventional algorithms. These advantages are discussed using examples from the fields of combinatorial chemistry and optical character recognition.
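
The local-learning variant can be sketched as follows. Hard k-means stands in here for the paper's fuzzy clustering, and scikit-learn models replace its custom nonlinear mapper and networks, so this is an outline of the idea rather than the published algorithm.

```python
# Local learning: partition the sampled space into cells, train one
# network per cell, and route each point to its cell's network.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((50_000, 32))

# Nonlinearly map a small random sample to get reference coordinates.
idx = rng.choice(len(X), size=1_000, replace=False)
Y = MDS(n_components=2, random_state=0).fit_transform(X[idx])

# Partition the space into Voronoi-like cells (k-means as a stand-in
# for the paper's fuzzy clustering).
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X[idx])

# Train a separate network to perform the mapping within each cell.
nets = {}
for c in range(km.n_clusters):
    mask = km.labels_ == c
    nets[c] = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500,
                           random_state=0).fit(X[idx][mask], Y[mask])

# Map the full data set cell by cell.
cells = km.predict(X)
Y_all = np.empty((len(X), 2))
for c in range(km.n_clusters):
    m = cells == c
    if m.any():
        Y_all[m] = nets[c].predict(X[m])
```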


Journal of Alzheimer's Disease | 2011

Quantifying the Pathophysiological Timeline of Alzheimer's Disease

Eric Yang; Michael Farnum; Victor S. Lobanov; Tim Schultz; Nandini Raghavan; Mahesh N. Samtani; Gerald Novak; Vaibhav A. Narayan; Allitia DiBernardo

Hypothetical models of AD progression typically relate clinical stages of AD to sequential changes in CSF biomarkers, imaging, and cognition. However, quantifying the continuous trajectories proposed by these models is difficult, because the dynamics of different biomarkers must be related within a clinical trial that is significantly shorter than the duration of the disease. We seek to show that through proper synchronization, it is possible to de-convolve these trends and quantify the periods of time associated with the different pathophysiological changes of Alzheimer's disease (AD). We developed a model that replicated the observed progression of ADAS-Cog 13 scores and used this as a more precise estimate of disease duration, and thus pathologic stage. We then synchronized cerebrospinal fluid (CSF) and imaging biomarkers according to our new disease timeline. By de-convolving disease progression via ADAS-Cog 13, we were able to confirm the predictions of previous hypothetical models of disease progression as well as establish concrete timelines for different pathobiological events. Specifically, our work supports a sequential pattern of biomarker changes in AD in which reduction in CSF Aβ(42) and brain atrophy precede the increases in CSF tau and phospho-tau.
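
The synchronization step lends itself to a small worked example. The sketch below assumes a hypothetical logistic ADAS-Cog 13 trajectory; the functional form and every parameter are illustrative, not the paper's fitted model.

```python
# Model ADAS-Cog 13 as a monotone function of disease duration, then
# invert it to place each subject on a common disease timeline.
import numpy as np

def adas_cog13(t, floor=5.0, ceiling=85.0, t_mid=8.0, rate=0.4):
    """Hypothetical ADAS-Cog 13 score vs. disease duration t (years)."""
    return floor + (ceiling - floor) / (1.0 + np.exp(-rate * (t - t_mid)))

def disease_duration(score, floor=5.0, ceiling=85.0, t_mid=8.0, rate=0.4):
    """Invert the trajectory: estimate disease duration from a score."""
    return t_mid - np.log((ceiling - floor) / (score - floor) - 1.0) / rate

# Two subjects with different scores land on one synchronized timeline;
# biomarkers can then be plotted against estimated duration, not visit date.
for score in (20.0, 45.0):
    print(f"ADAS-Cog 13 = {score:4.1f} -> ~{disease_duration(score):.1f} years")
```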


Combinatorial Chemistry & High Throughput Screening | 2002

Scalable Methods for the Construction and Analysis of Virtual Combinatorial Libraries

Victor S. Lobanov; Dimitris K. Agrafiotis

One can distinguish between two kinds of virtual combinatorial libraries: viable and accessible. Viable libraries are relatively small in size, are assembled from readily available reagents that have been filtered by the medicinal chemist, and often have a physical counterpart. Conversely, accessible libraries can encompass millions or billions of structures, typically include all possible reagents that are in principle compatible with a particular reaction scheme, and can never be physically synthesized in their entirety. Although the analysis of viable virtual libraries is relatively straightforward, the handling of large accessible libraries requires methods that scale well with respect to library size. In this work, we present novel, efficient and scalable techniques for the construction, analysis, and in silico screening of massive virtual combinatorial libraries.
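
The scale argument is easy to make concrete: an accessible library is better treated as an indexable product space than as an enumerated list. The sketch below is illustrative only; the reagent lists and the "combine" step are placeholders, not the paper's construction method.

```python
# An "accessible" library handled without enumeration: products are index
# tuples over reagent lists, materialized on demand.
import random

amines = [f"amine_{i}" for i in range(10_000)]
aldehydes = [f"aldehyde_{j}" for j in range(10_000)]
library_size = len(amines) * len(aldehydes)  # 10^8 virtual products

def product_at(k):
    """Materialize the k-th virtual product on demand."""
    i, j = divmod(k, len(aldehydes))
    return (amines[i], aldehydes[j])         # placeholder "combine" step

# Analyze a random sample instead of the full library.
random.seed(0)
sample = [product_at(random.randrange(library_size)) for _ in range(5)]
print(sample)
```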


Journal of Computational Chemistry | 2001

Multidimensional scaling of combinatorial libraries without explicit enumeration

Dimitris K. Agrafiotis; Victor S. Lobanov

A novel approach for the multidimensional scaling of large combinatorial libraries is presented. The method employs a multilayer perceptron, which is trained to predict the coordinates of the products on the nonlinear map from pertinent features of their respective building blocks. This method limits the expensive enumeration and descriptor generation to only a small fraction of products and, in addition, relieves the enormous computational effort required for the low‐dimensional embedding by conventional iterative multidimensional scaling algorithms. In effect, the method provides an explicit mapping function from reagents to products, and allows the vast majority of compounds to be projected without constructing their connection tables. The advantages of this approach are demonstrated using two combinatorial libraries based on the reductive amination and Ugi reactions, and three descriptor sets that are commonly used in similarity searching, diversity profiling and structure–activity correlation.
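
A minimal sketch of the reagents-to-products mapping: enumerate and map only a small fraction of products, then train a network to predict map coordinates from the concatenated building-block features. Feature generation, sizes, and the use of scikit-learn models are all assumptions for illustration.

```python
# Learn an explicit function from building-block features to nonlinear-map
# coordinates, so most products never need enumeration.
import numpy as np
from sklearn.manifold import MDS
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
r1 = rng.random((200, 16))   # descriptors for reagent set 1 (placeholders)
r2 = rng.random((200, 16))   # descriptors for reagent set 2 (placeholders)

# "Enumerate" a small random fraction of the 200 x 200 product space.
pairs = rng.integers(0, 200, size=(1_000, 2))
feats = np.hstack([r1[pairs[:, 0]], r2[pairs[:, 1]]])
coords = MDS(n_components=2, random_state=0).fit_transform(feats)

# Train the multilayer perceptron on the sampled fraction.
net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(feats, coords)

# Project any other product directly from its building blocks,
# without constructing its connection table.
i, j = 17, 42
xy = net.predict(np.hstack([r1[i], r2[j]])[None, :])
```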


Journal of Alzheimer's Disease | 2012

A Novel Subject Synchronization Clinical Trial Design for Alzheimer's Disease

Tim Schultz; Eric Yang; Michael Farnum; Victor S. Lobanov; Rudi Verbeeck; Nandini Raghavan; Mahesh N. Samtani; Gerald Novak; Yingqi Shi; Vaibhav A. Narayan; Allitia DiBernardo

One of the challenges in developing a viable therapy for Alzheimer's disease has been demonstrating efficacy within a clinical trial. Using this as motivation, we sought to re-examine conventional clinical trial practices in order to determine whether efficacy can be better shown through alternative trial designs and novel analysis methods. In this work, we hypothesize that the confounding factors that hamper the ability to discern a treatment signal are the variability in observations as well as the insidious nature of the disease. We demonstrate that a two-phase trial design in which drug dosing is administered after a certain level of disease severity has been reached, coupled with a method to account more accurately for the progression of the disease, may allow us to compensate for these factors and thus make treatment effects more apparent. Utilizing data from two previously failed trials that evaluated galantamine for the indication of mild cognitive impairment, we were able to demonstrate that a clear treatment effect can be realized through both visual and statistical means, and propose that future trials may be more likely to show success if similar methods are utilized.
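
A toy simulation makes the two-phase idea concrete: dose only after subjects cross a severity threshold, and compare arms after synchronizing on the crossing time rather than on enrollment. All trajectory and effect parameters below are invented for illustration, not taken from the trials.

```python
# Two-phase design in miniature: heterogeneous disease stages at entry,
# dosing begins at a severity threshold, arms compared post-threshold.
import numpy as np

rng = np.random.default_rng(0)
n, months = 200, 36
t = np.arange(months)
sev0 = rng.uniform(0.0, 18.0, n)           # severity at enrollment (varies)

sev = sev0[:, None] + 1.0 * t[None, :]     # untreated: 1 point/month (toy)
threshold, effect = 12.0, 0.4              # dosing trigger; fractional slowing

cross = np.argmax(sev >= threshold, axis=1)  # first month at/over threshold

# Treated arm: progression slows only after the crossing.
treated = sev.copy()
for i in range(n):
    post = t > cross[i]
    treated[i, post] = sev[i, cross[i]] + (1 - effect) * (sev[i, post] - sev[i, cross[i]])

# Synchronize on the crossing: compare arms 12 months post-threshold.
at = cross + 12
ok = at < months
print("untreated mean:", sev[ok, at[ok]].mean())
print("treated mean:  ", treated[ok, at[ok]].mean())
```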


Bioinformatics and Biomedicine | 2011

Uniting Data for Memory: Building Informatics against Alzheimer's

Michael Farnum; Victor S. Lobanov; Eric Yang; Tim Schultz; Rudi Verbeeck

In the absence of revolutionary therapeutics, the number of people afflicted with Alzheimer's disease is expected to grow dramatically from an estimated 26 million today to more than 100 million worldwide in the next 40 years. This huge unmet need has spurred a large investment from the medical community, including both naturalistic, public studies and clinical trials within industry. Since it is widely accepted that the underlying disease pathology can start more than 10 years before a patient's clinical determination of Alzheimer's, these trials can include a large variety of biomarkers, including genetic, proteomic, and imaging outputs, in addition to a spectrum of clinical and functional scales. Leveraging the potential of all these studies requires a significant informatics effort to ingest, curate, store, and manipulate the data, such that they can be used effectively for data mining. One aspect of our own efforts in this area has centered on loading to a common database schema, from which tool sets and analysis routines can be standardized. This has allowed us to greatly accelerate hypothesis generation, making the selection of potential covariates a data-driven, rather than theoretical, exercise. A second part of the effort has focused on the mapping of terms found in individual studies to public ontologies, which provides a framework for translating results across trials.
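
The two informatics components named above, a common schema and ontology-based term mapping, can be sketched with an in-memory database. The schema, study names, terms, and concept identifiers below are hypothetical, not the authors' design.

```python
# Common schema + term mapping: studies load into one observation table,
# and a mapping table ties each study's local terms to a public ontology.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE observation (
    study      TEXT,      -- source study or trial
    subject_id TEXT,
    visit_day  INTEGER,
    term       TEXT,      -- study-local variable name
    value      REAL
);
CREATE TABLE term_mapping (
    study    TEXT,
    term     TEXT,
    concept  TEXT         -- public-ontology concept id (hypothetical here)
);
""")
db.execute("INSERT INTO observation VALUES ('STUDY_A', 'S001', 0, 'ADAS13', 18.0)")
db.execute("INSERT INTO term_mapping VALUES ('STUDY_A', 'ADAS13', 'EX:0001')")

# Translating results across trials becomes a join through the mapping.
row = db.execute("""
    SELECT o.subject_id, m.concept, o.value
    FROM observation o
    JOIN term_mapping m ON o.study = m.study AND o.term = m.term
""").fetchone()
print(row)   # ('S001', 'EX:0001', 18.0)
```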


Bioinformatics and Biomedicine | 2011

Opening the Door to Electronic Medical Records: Using Informatics to Overcome Terabytes

Michael Farnum; Victor S. Lobanov; Frank Defalco; Soledad Cepeda

Databases of medical records contain a wealth of information critical to many areas of research, including drug safety, health outcomes, clinical epidemiology, and translational medicine. Through commercially available databases, researchers can gain a better understanding of the impact of exposure to drugs and medical devices, identify populations at risk for adverse effects, estimate the prevalence and natural history of medical conditions, and assess drug utilization across different demographic groups. However, the daunting size and complexity of these databases, as well as the lack of convenient tools to mine them, have made this information largely inaccessible to all but a few experts with advanced data management and statistical programming skills. Using a combination of a relational data management strategy and a graphical front-end, we have developed an approach that allows any medical researcher to perform a number of common searches and analyses in a consistent, intuitive, and interactive manner, without the assistance of an expert programmer. Moreover, the optimization work done on the database and application sides has dramatically reduced the time needed to analyze the data and, thus, increased the number of studies that can be performed. A crucial part of any such study is the selection of code lists for diseases, procedures, medications, etc., and we have supported this effort by allowing definitions to be queried using common ontologies and shared conveniently across the organization.


Journal of Chemical Information and Computer Sciences | 2002

On the use of neural network ensembles in QSAR and QSPR

Dimitris K. Agrafiotis; Walter Cedeño; Victor S. Lobanov
