Dmitry N. Budnikov
Intel
Publication
Featured research published by Dmitry N. Budnikov.
International Conference on Multimedia and Expo | 2007
Trista Pei-chun Chen; Dmitry N. Budnikov; Christopher J. Hughes; Yen-Kuang Chen
The recent emergence of multi-core processors enables a new trend in the usage of computers. Computer vision applications, which require heavy computation and high bandwidth, usually cannot run in real time. Recent multi-core processors can potentially serve the needs of such workloads, and more advanced algorithms can be developed to exploit the new computation paradigm. In this paper, we study the performance of an articulated body tracker on multi-core processors. The articulated body tracking workload encapsulates most of the important aspects of a computer vision workload: it takes multiple camera inputs of a scene with a single human subject, extracts useful features, and performs statistical inference to find the body pose. We show the importance of properly parallelizing the workload in order to achieve good performance, obtaining a speedup of 26 on 32 cores. We conclude that: (1) data-domain parallelization is better than function-domain parallelization for computer vision applications; (2) data-domain parallelism by image regions and particles is very effective; (3) reducing serial code in edge detection brings significant performance improvements; (4) domain knowledge about the low/mid/high levels of vision computation is helpful in parallelizing the workload.
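To illustrate the idea of data-domain parallelism by particles, the sketch below (assumptions only, not the paper's code; evaluate_likelihood and the toy pose vectors are hypothetical placeholders) scores particle-filter pose hypotheses independently across worker processes, which is the kind of partitioning the abstract contrasts with function-domain parallelization.

# Illustrative sketch: data-domain parallelism for a particle-based tracker.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def evaluate_likelihood(particle, features):
    # Placeholder scoring: compare a pose hypothesis against extracted features.
    return float(np.exp(-np.sum((particle - features) ** 2)))

def reweight_particles(particles, features, workers=32):
    # Each particle is scored independently, so the work partitions cleanly
    # across cores (data-domain parallelism by particles).
    with ProcessPoolExecutor(max_workers=workers) as pool:
        weights = list(pool.map(evaluate_likelihood, particles,
                                [features] * len(particles)))
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = [rng.normal(size=8) for _ in range(256)]  # toy pose vectors
    features = np.zeros(8)                                # toy observation
    print(reweight_particles(particles, features)[:4])

The same pattern applies to data-domain parallelism by image regions: the frame is tiled and each tile's feature extraction is dispatched to a separate worker.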
International Conference on Acoustics, Speech, and Signal Processing | 2004
Dmitry N. Budnikov; Igor Chikalov; Sergey A. Egorychev; Igor Kozintsev; Rainer Lienhart
We propose a novel synchronization scheme for distributed audio-video input and output on heterogeneous general-purpose computers (GPCs) such as laptops, tablets, PDAs, smartphones, audio recorders, and camcorders. These devices typically possess sensors such as microphones and possibly cameras, and actuators such as loudspeakers and displays. In order to combine them wirelessly into a distributed array signal processing system, it is necessary to provide relative time synchronization of the sensors and actuators. In this work we propose a setup and an algorithm to synchronize input and output for a network of distributed multichannel audio sensors and actuators connected to GPCs. An IEEE 802.11 wireless network is used to deliver a global clock to the distributed GPCs, while the interrupt mechanism is employed to distribute the clock between I/O devices. Experimental results demonstrate A/D-D/A synchronization precision better than 50 μs (a couple of samples at 48 kHz).
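A minimal sketch of the underlying idea, assuming a simple linear clock model (this is an illustration, not the paper's implementation): each device's local sample clock is mapped to the global clock by fitting an offset and skew from (local, global) timestamp pairs collected at I/O interrupts.

# Sketch: fit a linear model global ≈ skew * local + offset.
import numpy as np

def fit_clock_model(local_ts, global_ts):
    A = np.vstack([local_ts, np.ones_like(local_ts)]).T
    skew, offset = np.linalg.lstsq(A, global_ts, rcond=None)[0]
    return skew, offset

def to_global(local_t, skew, offset):
    return skew * local_t + offset

# Toy usage: a device clock running 30 ppm fast with a 2.5 ms offset,
# observed through timestamps with 20 μs jitter.
local = np.linspace(0.0, 10.0, 50)
true_global = local * (1 + 30e-6) + 0.0025
noisy = true_global + np.random.default_rng(1).normal(0, 20e-6, local.shape)
skew, offset = fit_clock_model(local, noisy)
print(f"estimated skew={skew:.8f}, offset={offset*1e6:.1f} us")

Once skew and offset are known, any locally timestamped audio sample can be placed on the shared global timeline, which is what makes distributed array processing feasible.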
Archive | 2005
Rainer Lienhart; Igor Kozintsev; Dmitry N. Budnikov; Igor Chikalov; Vikas C. Raykar
Array audio-visual signal processing algorithms require time-synchronized capture of AV data on distributed platforms. In addition, the geometry of the array of cameras, microphones, speakers, and displays is often required. In this chapter we present a novel setup involving a network of wireless computing platforms with onboard sensors and actuators, together with algorithms that provide both synchronized I/O and self-localization of the I/O devices in 3D space. The proposed algorithms synchronize input and output for a network of distributed multi-channel audio sensors and actuators connected to general-purpose computing platforms (GPCs) such as laptops, PDAs, and tablets. An IEEE 802.11 wireless network is used to deliver a global clock to the distributed GPCs, while the interrupt timestamping mechanism is employed to distribute the clock between I/O devices. Experimental results demonstrate A/D-D/A synchronization precision better than 50 μs (a couple of samples at 48 kHz). We also present a novel algorithm to automatically determine the relative 3D positions of the sensors and actuators connected to the GPCs. A closed-form approximate solution is derived using the technique of metric multidimensional scaling, which is further refined by minimizing a non-linear error function. Our formulation and solution account for localization errors due to the lack of temporal synchronization among different platforms. The performance limit for the sensor positions is analyzed with respect to the number of sensors and actuators as well as their geometry. Simulation results are reported together with a discussion of the practical issues in a real-time system.
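The closed-form step mentioned above can be illustrated with classical metric multidimensional scaling. The sketch below is a generic MDS reconstruction from pairwise distances, not the chapter's exact algorithm, and it omits the non-linear refinement and the handling of synchronization errors described in the abstract.

# Sketch: classical MDS recovering device positions (up to a rigid transform)
# from a matrix of pairwise range estimates.
import numpy as np

def classical_mds(D, dim=3):
    # D: (n, n) matrix of pairwise distances between devices.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]           # keep the largest eigenvalues
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Toy usage: four devices at known positions, noise-free distances.
X = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(classical_mds(D))   # recovers the geometry up to rotation/translation

In practice the recovered coordinates would serve as the initial guess for the non-linear least-squares refinement the chapter describes.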
Archive | 2003
Dmitry N. Budnikov; Igor Chikalov; Sergey A. Egorychev
Archive | 2003
Rainer Lienhart; Igor Kozintsev; Dmitry N. Budnikov; Igor Chikalov; Sergey A. Egorychev
Archive | 2012
Dmitry N. Budnikov; Igor Chikalov; Sergey N. Zheltov
Archive | 2003
Rainer Lienhart; Igor Kozintsev; Dmitry N. Budnikov; Igor Chikalov; Sergey A. Egorychev
Archive | 2003
Igor Chikalov; Sergei Zheltov; Dmitry N. Budnikov
European Signal Processing Conference | 2004
Dmitry N. Budnikov; Igor Chikalov; Igor Kozintsev; Rainer Lienhart
Archive | 2003
Rainer Lienhart; Igor Kozintsev; Dmitry N. Budnikov; Igor Chikalov; Sergey A. Egorychev