Jan Sedmidubský
Masaryk University
Publications
Featured research published by Jan Sedmidubský.
Multimedia Tools and Applications | 2018
Jan Sedmidubský; Petr Elias; Pavel Zezula
Motion capture data describe human movements in the form of spatio-temporal trajectories of skeleton joints. Intelligent management of such complex data is a challenging task that requires an effective concept of motion similarity. However, evaluating pairwise similarity is a difficult problem, as a single action can be performed by various actors in different ways, at different speeds, or from different starting positions. Recent methods usually model motion similarity by comparing customized features using distance-based functions or specialized machine-learning classifiers. By combining both approaches, we transform the problem of comparing motions of variable sizes into the problem of comparing fixed-size vectors. Specifically, each relatively short motion is encoded into a compact visual representation from which a highly descriptive 4,096-dimensional feature vector is extracted using a fine-tuned deep convolutional neural network. The advantage is that the fixed-size features are compared by the Euclidean distance, which enables efficient motion indexing by any metric-based index structure. Another advantage of the proposed approach is its tolerance towards imprecise action segmentation, variance in movement speed, and lower data quality. All these properties together bring new possibilities for effective and efficient large-scale retrieval.
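A minimal sketch of the fixed-size comparison idea, assuming motions are given as (frames × joints × 3) NumPy arrays; the paper's fine-tuned CNN descriptor is replaced by a crude placeholder (motion_to_feature is hypothetical), so only the key point is shown: variable-length motions become fixed-size vectors comparable by the Euclidean distance.

```python
# Sketch only: the 4,096-dimensional deep convolutional features are stubbed
# by a naive fixed-size encoding; the comparison itself is a plain Euclidean
# distance between fixed-size vectors, as in any metric-based index.
import numpy as np

def motion_to_feature(motion, dim=4096):
    """Map a (frames x joints x 3) motion of any length to a fixed-size vector
    (placeholder for the fine-tuned CNN feature extractor)."""
    flat = np.asarray(motion, dtype=float).ravel()
    return np.resize(flat, dim)  # crude stand-in: repeat/truncate to 'dim' values

def euclidean(a, b):
    return float(np.linalg.norm(a - b))

# Two motions of different lengths become directly comparable.
m1 = np.random.rand(120, 31, 3)  # ~4 s at 30 fps, 31 joints (illustrative sizes)
m2 = np.random.rand(200, 31, 3)
print(euclidean(motion_to_feature(m1), motion_to_feature(m2)))
```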
Advances in Databases and Information Systems | 2017
Jan Sedmidubský; Pavel Zezula; Jan Švec
Motion capture data digitally represent human movements by sequences of body configurations in time. Subsequence matching in such spatio-temporal data is difficult, as query-relevant motions can vary in length and occur arbitrarily within a very long data motion. To deal with these problems, we propose a new subsequence matching approach which (1) partitions both the short query and the long data motion into fixed-size segments that overlap only partly, (2) uses an effective similarity measure to efficiently retrieve the data segments most similar to the query segments, and (3) localizes the most query-relevant subsequences within extended and merged retrieved segments in a four-step postprocessing phase. The whole retrieval process is effective and fast in comparison with related work. A real-life 68-minute data motion can be searched in about 1 s with an average precision of 87.98% for 5-NN queries.
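A rough sketch of steps (1) and (2) under simplifying assumptions: the segment length, the shift, and the featurize function are illustrative placeholders rather than the paper's actual parameters, and the four-step postprocessing phase is omitted.

```python
import numpy as np

def partition(sequence, seg_len=150, shift=75):
    """Step (1): split a long data motion into fixed-size, partly overlapping
    segments; returns (start_frame, segment) pairs (seg_len/shift are illustrative)."""
    last = max(len(sequence) - seg_len, 0)
    return [(s, sequence[s:s + seg_len]) for s in range(0, last + 1, shift)]

def nearest_segments(query, data_segments, featurize, k=5):
    """Step (2): rank data segments by the distance of their features to the
    query's feature; 'featurize' is an assumed fixed-size feature extractor."""
    qf = featurize(query)
    scored = sorted((float(np.linalg.norm(qf - featurize(seg))), start)
                    for start, seg in data_segments)
    return scored[:k]  # (distance, start_frame) of the k best candidates
```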
Similarity Search and Applications | 2016
Jan Sedmidubský; Petr Elias; Pavel Zezula
Motion capture data digitally represent human movements by sequences of body configurations in time. Searching in such spatio-temporal data is difficult, as query-relevant motions can vary in length and occur arbitrarily within a very long data sequence. There is also a strong requirement on effective similarity comparison, as a specific motion can be performed by various actors in different ways, at different speeds, or from different starting positions. To deal with these problems, we propose a new subsequence matching algorithm which combines an elastic similarity measure with multi-level segmentation. The idea is to generate a minimum number of overlapping data segments so that there is at least one segment matching an arbitrary subsequence. A non-partitioned query is then efficiently evaluated by searching for the most similar segments in a single level only, while guaranteeing a precise answer with respect to the similarity measure. The retrieval process is efficient and scalable, which is confirmed by experiments executed on a real-life dataset.
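The following toy illustration shows one possible reading of the multi-level segmentation idea: each level uses longer, partly overlapping segments, and a query is evaluated against the single level whose segment length best matches it. The base length, growth factor, and overlap are made-up parameters, not values from the paper.

```python
def build_levels(n_frames, base_len=64, growth=2, overlap=0.5, max_levels=6):
    """Generate overlapping (start, end) segments at several levels so that any
    query length is covered by exactly one level (illustrative parameters)."""
    levels, seg_len = [], base_len
    for _ in range(max_levels):
        if seg_len > n_frames:
            break
        shift = max(1, int(seg_len * (1 - overlap)))
        levels.append((seg_len, [(s, s + seg_len)
                                 for s in range(0, n_frames - seg_len + 1, shift)]))
        seg_len = int(seg_len * growth)
    return levels

def level_for_query(levels, query_len):
    """Pick the single level whose segment length best matches the query length."""
    return min(levels, key=lambda lvl: abs(lvl[0] - query_len))

# Example: a 68-minute recording at 30 fps has roughly 122,400 frames.
levels = build_levels(122_400)
print(level_for_query(levels, query_len=200)[0])  # segment length of the chosen level
```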
Database and Expert Systems Applications | 2018
Jan Sedmidubský; Pavel Zezula
Automatic classification of 3D skeleton sequences of human motions has applications in many domains, ranging from entertainment to medicine. The classification is a difficult problem, as motions belonging to the same class need not be well segmented and can be performed by subjects of various body sizes, in different styles, and at different speeds. State-of-the-art recognition approaches commonly solve this problem by training recurrent neural networks to learn the contextual dependency in both spatial and temporal domains. In this paper, we employ a distance-based similarity measure, based on deep convolutional features, to search for the k nearest motions with respect to the query motion being classified. The retrieved neighbors are analyzed and re-ranked by additional measures that are automatically chosen for individual queries. The combination of deep features, dynamic selection of similarity measures, and a new kNN classifier achieves the highest classification accuracy on a challenging dataset with 130 classes. Moreover, the proposed approach can promptly react to changing training data without any need for retraining.
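A compact sketch of this retrieval-based classification, assuming a database of (feature, label) pairs with deep features already extracted; the query-specific choice of additional measures is abstracted into an optional secondary callable, and the paper's kNN classifier is simplified to plain majority voting.

```python
from collections import Counter
import numpy as np

def knn_classify(query_feature, database, k=8, secondary=None):
    """Retrieve the k nearest labelled motions by Euclidean distance on deep
    features, optionally re-rank them with a secondary measure chosen for this
    query, and return the majority label (simplified voting scheme)."""
    neighbours = sorted(
        database,
        key=lambda item: float(np.linalg.norm(query_feature - item[0])))[:k]
    if secondary is not None:  # query-specific re-ranking step
        neighbours = sorted(neighbours,
                            key=lambda item: secondary(query_feature, item[0]))
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Usage with toy data: 4096-d features paired with class labels.
db = [(np.random.rand(4096), label) for label in ("walk", "run", "jump") for _ in range(10)]
print(knn_classify(np.random.rand(4096), db))
```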
SEBD | 2008
Michal Batko; Fabrizio Falchi; Claudio Lucchese; David Novak; Raffaele Perego; Fausto Rabitti; Jan Sedmidubský; Pavel Zezula
Archive | 2008
Jan Sedmidubský; Vlastislav Dohnal; Pavel Zezula
Archive | 2015
Jan Sedmidubský; Jakub Valcik; Pavel Zezula
Archive | 2014
Jan Sedmidubský; Vladimir Mic; Pavel Zezula
Archive | 2014
Jan Sedmidubský; Jakub Valcik; Pavel Zezula
Archive | 2013
Jan Sedmidubský; Michal Batko; Pavel Zezula