Publication


Featured research published by Jivko Sinapov.


BMC Bioinformatics | 2007

Glycosylation site prediction using ensembles of Support Vector Machine classifiers

Cornelia Caragea; Jivko Sinapov; Adrian Silvescu; Drena Dobbs; Vasant G. Honavar

Background: Glycosylation is one of the most complex post-translational modifications (PTMs) of proteins in eukaryotic cells. Glycosylation plays an important role in biological processes ranging from protein folding and subcellular localization, to ligand recognition and cell-cell interactions. Experimental identification of glycosylation sites is expensive and laborious. Hence, there is significant interest in the development of computational methods for reliable prediction of glycosylation sites from amino acid sequences.

Results: We explore machine learning methods for training classifiers to predict the amino acid residues that are likely to be glycosylated using information derived from the target amino acid residue and its sequence neighbors. We compare the performance of Support Vector Machine classifiers and ensembles of Support Vector Machine classifiers trained on a dataset of experimentally determined N-linked, O-linked, and C-linked glycosylation sites extracted from O-GlycBase version 6.00, a database of 242 proteins from several different species. The results of our experiments show that the ensembles of Support Vector Machine classifiers outperform single Support Vector Machine classifiers on the problem of predicting glycosylation sites in terms of a range of standard measures for comparing the performance of classifiers. The resulting methods have been implemented in EnsembleGly, a web server for glycosylation site prediction.

Conclusion: Ensembles of Support Vector Machine classifiers offer an accurate and reliable approach to automated identification of putative glycosylation sites in glycoprotein sequences.
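The ensemble idea in this abstract can be sketched in a few lines. This is not the EnsembleGly implementation: the SVM members are replaced by a toy residue-frequency classifier, and the names (`FreqClassifier`, `train_ensemble`) are illustrative. Only the overall structure reflects the abstract: extract a window around each candidate residue, train each ensemble member on a bootstrap resample, and combine members by majority vote.

```python
import random

def window(seq, i, w=2):
    """Residue window of radius w centered at position i, padded with '-'."""
    pad = "-" * w
    s = pad + seq + pad
    return s[i : i + 2 * w + 1]

class FreqClassifier:
    """Toy per-position residue-frequency classifier (stand-in for an SVM)."""
    def fit(self, windows, labels):
        self.pos, self.neg = {}, {}
        for win, y in zip(windows, labels):
            table = self.pos if y == 1 else self.neg
            for j, aa in enumerate(win):
                table[(j, aa)] = table.get((j, aa), 0) + 1
        return self

    def predict(self, win):
        p = sum(self.pos.get((j, aa), 0) for j, aa in enumerate(win))
        n = sum(self.neg.get((j, aa), 0) for j, aa in enumerate(win))
        return 1 if p > n else 0

def train_ensemble(windows, labels, n_members=5, seed=0):
    """Train each ensemble member on a bootstrap resample of the windows."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        idx = [rng.randrange(len(windows)) for _ in windows]
        members.append(FreqClassifier().fit([windows[i] for i in idx],
                                            [labels[i] for i in idx]))
    return members

def predict_ensemble(members, win):
    """Majority vote across ensemble members."""
    votes = sum(m.predict(win) for m in members)
    return 1 if votes * 2 > len(members) else 0
```

In practice the per-window features would feed real SVMs; the bagging-plus-voting skeleton is what the abstract's comparison between single classifiers and ensembles rests on.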


IEEE Transactions on Robotics | 2011

Vibrotactile Recognition and Categorization of Surfaces by a Humanoid Robot

Jivko Sinapov; Vladimir Sukhoy; Ritika Sahai; Alexander Stoytchev

This paper proposes a method for interactive surface recognition and surface categorization by a humanoid robot using a vibrotactile sensory modality. The robot was equipped with an artificial fingernail that had a built-in three-axis accelerometer. The robot interacted with 20 different surfaces by performing five different exploratory scratching behaviors on them. Surface-recognition models were learned by coupling frequency-domain analysis of the vibrations detected by the accelerometer with machine learning algorithms, such as support vector machine (SVM) and k-nearest neighbors (k-NN). The results show that by applying several different scratching behaviors on a test surface, the robot can recognize surfaces better than with any single behavior alone. The robot was also able to estimate a measure of similarity between any two surfaces, which was used to construct a grounded hierarchical surface categorization.
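The pipeline described above — frequency-domain features from an acceleration trace, then a distance-based classifier — can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the DFT is computed naively, only k-NN is shown, and the two synthetic "surfaces" are just sinusoids of different frequency.

```python
import cmath
import math

def spectral_features(signal, n_bins=8):
    """Magnitudes of the first n_bins DFT coefficients of a vibration trace
    (a toy stand-in for the paper's frequency-domain analysis)."""
    N = len(signal)
    feats = []
    for k in range(n_bins):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * t / N)
                    for t, x in enumerate(signal))
        feats.append(abs(coeff) / N)
    return feats

def knn_predict(train, query, k=3):
    """Label a query feature vector by majority vote among its k nearest
    training vectors (Euclidean distance)."""
    dists = sorted((math.dist(f, query), label) for f, label in train)
    top = [label for _, label in dists[:k]]
    return max(set(top), key=top.count)

# Synthetic demo: two "surfaces" vibrate at different frequencies.
N = 32
smooth = [[math.sin(2 * math.pi * 2 * t / N + ph) for t in range(N)]
          for ph in (0.0, 0.5, 1.0)]
rough = [[math.sin(2 * math.pi * 5 * t / N + ph) for t in range(N)]
         for ph in (0.0, 0.5, 1.0)]
train = ([(spectral_features(s), "smooth") for s in smooth] +
         [(spectral_features(s), "rough") for s in rough])
query = spectral_features(
    [math.sin(2 * math.pi * 2 * t / N + 0.25) for t in range(N)])
```

Because DFT magnitudes are phase-invariant, the three phase-shifted traces of each surface map to nearly identical feature vectors, which is what makes the nearest-neighbor lookup work.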


international conference on development and learning | 2008

Detecting the functional similarities between tools using a hierarchical representation of outcomes

Jivko Sinapov; Alexander Stoytchev

The ability to reason about multiple tools and their functional similarities is a prerequisite for intelligent tool use. This paper presents a model which allows a robot to detect the similarity between tools based on the environmental outcomes observed with each tool. To do this, the robot incrementally learns an adaptive hierarchical representation (i.e., a taxonomy) for the types of environmental changes that it can induce and detect with each tool. Using the learned taxonomies, the robot can infer the similarity between different tools based on the types of outcomes they produce. The results show that the robot is able to learn accurate outcome models for six different tools. In addition, the robot was able to detect the similarity between tools using the learned outcome models.


IEEE Transactions on Autonomous Mental Development | 2012

A Behavior-Grounded Approach to Forming Object Categories: Separating Containers From Noncontainers

Shane Griffith; Jivko Sinapov; Vladimir Sukhoy; Alexander Stoytchev

This paper introduces a framework that allows a robot to form a single behavior-grounded object categorization after it uses multiple exploratory behaviors to interact with objects and multiple sensory modalities to detect the outcomes that each behavior produces. Our robot observed acoustic and visual outcomes from six different exploratory behaviors performed on 20 objects (containers and noncontainers). Its task was to learn 12 different object categorizations (one for each behavior-modality combination), and then to unify these categorizations into a single one. In the end, the object categorization acquired by the robot matched closely the object labels provided by a human. In addition, the robot acquired a visual model of containers and noncontainers based on its unified categorization, which it used to label correctly 29 out of 30 novel objects.


international conference on robotics and automation | 2009

Interactive learning of the acoustic properties of household objects

Jivko Sinapov; Mark Wiemer; Alexander Stoytchev

Human beings can perceive object properties such as size, weight, and material type based solely on the sounds that the objects make when an action is performed on them. In order to be successful, the household robots of the near future must also be capable of learning and reasoning about the acoustic properties of everyday objects. Such an ability would allow a robot to detect and classify various interactions with objects that occur outside of the robot's field of view. This paper presents a framework that allows a robot to infer the object and the type of behavioral interaction performed with it from the sounds generated by the object during the interaction. The framework is evaluated on a 7-d.o.f. Barrett WAM robot which performs grasping, shaking, dropping, pushing and tapping behaviors on 36 different household objects. The results show that the robot can learn models that can be used to recognize objects (and behaviors performed on objects) from the sounds generated during the interaction. In addition, the robot can use the learned models to estimate the similarity between two objects in terms of their acoustic properties.


The International Journal of Robotics Research | 2011

Interactive object recognition using proprioceptive and auditory feedback

Jivko Sinapov; Taylor Bergquist; Connor Schenck; Ugonna Ohiri; Shane Griffith; Alexander Stoytchev

In this paper we propose a method for interactive recognition of household objects using proprioceptive and auditory feedback. In our experiments, the robot observed the changes in its proprioceptive and auditory sensory streams while performing five exploratory behaviors (lift, shake, drop, crush, and push) on 50 common household objects (e.g., bottles, cups, balls, and toys). The robot was tasked with recognizing the objects it was manipulating by feeling them and listening to the sounds that they make, without using any visual information. The results show that both proprioception and audio, coupled with exploratory behaviors, can be used successfully for object recognition. Furthermore, the robot was able to integrate feedback from the two modalities to achieve even better recognition accuracy. Finally, the results show that the robot can boost its recognition rate even further by applying multiple different exploratory behaviors on the object.
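The modality-integration step described above can be illustrated with a simple fusion rule. This is an assumption-laden sketch, not the paper's actual estimator: each modality (or behavior-modality combination) is assumed to output a probability distribution over object identities, and the distributions are combined by a weighted sum and renormalized.

```python
def fuse(distributions, weights=None):
    """Weighted-sum fusion of several class-probability distributions,
    e.g., one per behavior-modality combination; renormalized to sum to 1."""
    if weights is None:
        weights = [1.0] * len(distributions)
    objects = set().union(*distributions)
    combined = {o: sum(w * d.get(o, 0.0) for w, d in zip(weights, distributions))
                for o in objects}
    total = sum(combined.values())
    return {o: p / total for o, p in combined.items()}

def recognize(distributions, weights=None):
    """Pick the object with the highest fused probability."""
    fused = fuse(distributions, weights)
    return max(fused, key=fused.get)
```

The same rule extends to the multi-behavior case in the abstract: each additional exploratory behavior contributes one more distribution to the list, so evidence accumulates across both modalities and behaviors.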


Robotics and Autonomous Systems | 2014

Grounding semantic categories in behavioral interactions: Experiments with 100 objects

Jivko Sinapov; Connor Schenck; Kerrick Staley; Vladimir Sukhoy; Alexander Stoytchev

From an early stage in their development, human infants show a profound drive to explore the objects around them. Research in psychology has shown that this exploration is fundamental for learning the names of objects and object categories. To address this problem in robotics, this paper presents a behavior-grounded approach that enables a robot to recognize the semantic labels of objects using its own behavioral interaction with them. To test this method, our robot interacted with 100 different objects grouped according to 20 different object categories. The robot performed 10 different behaviors on them, while using three sensory modalities (vision, proprioception and audio) to detect any perceptual changes. The results show that the robot was able to use multiple sensorimotor contexts in order to recognize a large number of object categories. Furthermore, the category recognition model presented in this paper was able to identify sensorimotor contexts that can be used to detect specific categories. Most importantly, the robot’s model was able to reduce exploration time by half by dynamically selecting which exploratory behavior should be applied next when classifying a novel object.


international conference on robotics and automation | 2011

Object category recognition by a humanoid robot using behavior-grounded relational learning

Jivko Sinapov; Alexander Stoytchev

The ability to form and recognize object categories is fundamental to human intelligence. This paper proposes a behavior-grounded relational classification model that allows a robot to recognize the categories of household objects. In the proposed approach, the robot initially explores the objects by applying five exploratory behaviors (lift, shake, drop, crush and push) on them while recording the proprioceptive and auditory sensory feedback produced by each interaction. The sensorimotor data is used to estimate multiple measures of similarity between the objects, each corresponding to a specific coupling between an exploratory behavior and a sensory modality. A graph-based recognition model is trained by extracting features from the estimated similarity relations, allowing the robot to recognize the category memberships of a novel object based on the object's similarity to the set of familiar objects. The framework was evaluated on an upper-torso humanoid robot with two large sets of household objects. The results show that the robot's model is able to recognize complex object categories (e.g., metal objects, empty bottles, etc.) significantly better than chance.
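The core move in this abstract — combine per-(behavior, modality) similarity relations, then classify a novel object by its similarity to familiar ones — can be sketched minimally. This is not the paper's graph-based relational model; it substitutes a simple average over similarity relations and a nearest-exemplar majority vote, and all object names are made up.

```python
def combine_similarities(per_context_sims):
    """Average per-(behavior, modality) similarity scores for each familiar
    object into a single combined similarity relation."""
    objects = set().union(*per_context_sims)
    return {o: sum(s.get(o, 0.0) for s in per_context_sims) / len(per_context_sims)
            for o in objects}

def predict_category(novel_sims, familiar_labels, k=3):
    """Label a novel object by majority category among the k familiar
    objects it is most similar to."""
    ranked = sorted(novel_sims, key=novel_sims.get, reverse=True)[:k]
    cats = [familiar_labels[o] for o in ranked]
    return max(set(cats), key=cats.count)
```

Each exploratory behavior paired with each sensory modality contributes one similarity dictionary; the combination step is what lets evidence from different sensorimotor contexts reinforce each other.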


international conference on development and learning | 2009

Toward interactive learning of object categories by a robot: A case study with container and non-container objects

Shane Griffith; Jivko Sinapov; Matthew Miller; Alexander Stoytchev

This paper proposes an interactive approach to object categorization that is consistent with the principle that a robot's object representations should be grounded in its sensorimotor experience. The proposed approach allows a robot to: 1) form object categories based on the movement patterns observed during its interaction with objects, and 2) learn a perceptual model to generalize object category knowledge to novel objects. The framework was tested on a container/non-container categorization task. The robot successfully separated the two object classes after performing a sequence of interactive trials. The robot used the separation to learn a perceptual model of containers, which, in turn, was used to categorize novel objects as containers or non-containers.


BMC Bioinformatics | 2009

Mixture of experts models to exploit global sequence similarity on biomolecular sequence labeling

Cornelia Caragea; Jivko Sinapov; Drena Dobbs; Vasant G. Honavar

Background: Identification of functionally important sites in biomolecular sequences has broad applications ranging from rational drug design to the analysis of metabolic and signal transduction networks. Experimental determination of such sites lags far behind the number of known biomolecular sequences. Hence, there is a need to develop reliable computational methods for identifying functionally important sites from biomolecular sequences.

Results: We present a mixture of experts approach to biomolecular sequence labeling that takes into account the global similarity between biomolecular sequences. Our approach combines unsupervised and supervised learning techniques. Given a set of sequences and a similarity measure defined on pairs of sequences, we learn a mixture of experts model by using spectral clustering to learn the hierarchical structure of the model and by using Bayesian techniques to combine the predictions of the experts. We evaluate our approach on two biomolecular sequence labeling problems: RNA-protein and DNA-protein interface prediction. The results of our experiments show that global sequence similarity can be exploited to improve the performance of classifiers trained to label biomolecular sequence data.

Conclusion: The mixture of experts model helps improve the performance of machine learning methods for identifying functionally important sites in biomolecular sequences.
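A mixture of experts driven by global sequence similarity can be illustrated with a deliberately simplified sketch. This is not the paper's method: spectral clustering is replaced by fixed cluster exemplars, the Bayesian combination by a similarity-weighted soft gate, and the similarity measure by naive position matching. Only the gating-then-combining structure mirrors the abstract.

```python
def similarity(a, b):
    """Toy global similarity: fraction of matching positions (a stand-in
    for alignment-based similarity between sequences)."""
    n = min(len(a), len(b))
    same = sum(a[i] == b[i] for i in range(n))
    return same / max(len(a), len(b))

def gate(seq, cluster_exemplars):
    """Soft gating: weight each expert by the query's mean similarity to
    that expert's cluster exemplars, normalized to sum to 1."""
    weights = [sum(similarity(seq, e) for e in ex) / len(ex)
               for ex in cluster_exemplars]
    total = sum(weights) or 1.0
    return [w / total for w in weights]

def mixture_predict(seq, cluster_exemplars, experts):
    """Gate-weighted combination of expert scores (each expert returns the
    probability that the queried site is functional)."""
    weights = gate(seq, cluster_exemplars)
    return sum(w * expert(seq) for w, expert in zip(weights, experts))
```

The point of the gating step is that a query sequence is labeled mostly by the expert trained on globally similar sequences, which is how global similarity improves an otherwise local site-labeling classifier.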

Collaboration


Dive into Jivko Sinapov's collaborations.

Top Co-Authors

Peter Stone

University of Texas at Austin

Jesse Thomason

University of Texas at Austin

Raymond J. Mooney

University of Texas at Austin

Matteo Leonetti

University of Texas at Austin

Maxwell Svetlik

University of Texas at Austin