Dan Lelescu
NTT DoCoMo
Publications
Featured research published by Dan Lelescu.
IEEE Transactions on Multimedia | 2003
Dan Lelescu; Dan Schonfeld
The increased availability and usage of multimedia information have created a critical need for efficient multimedia processing algorithms. These algorithms must offer capabilities related to browsing, indexing, and retrieval of relevant data. A crucial step in multimedia processing is that of reliable video segmentation into visually coherent video shots through scene change detection. Video segmentation enables subsequent processing operations on video shots, such as video indexing, semantic representation, or tracking of selected video information. Since video sequences generally contain both abrupt and gradual scene changes, video segmentation algorithms must be able to detect a large variety of changes. While existing algorithms perform relatively well for detecting abrupt transitions (video cuts), reliable detection of gradual changes is much more difficult. A novel one-pass, real-time approach to video scene change detection based on statistical sequential analysis and operating on a compressed multimedia bitstream is proposed. Our approach models video sequences as stochastic processes, with scene changes being reflected by changes in the characteristics (parameters) of the process. Statistical sequential analysis is used to provide a unified framework for the detection of both abrupt and gradual scene changes.
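As a rough illustration of the sequential-analysis idea described above (not the authors' actual detector), the sketch below runs a one-pass CUSUM-style test on a per-frame feature; how that feature is extracted from the compressed bitstream is left as a placeholder.

```python
def detect_scene_changes(features, drift=0.5, threshold=8.0):
    """One-pass CUSUM-style change detector on a per-frame feature
    (e.g., a statistic parsed from the compressed bitstream). Abrupt cuts
    show up as single large deviations; gradual transitions accumulate
    many small ones, so both eventually cross the same threshold."""
    changes = []
    mean, cusum, n = float(features[0]), 0.0, 1
    for t, x in enumerate(features[1:], start=1):
        cusum = max(0.0, cusum + abs(x - mean) - drift)
        if cusum > threshold:
            changes.append(t)                  # declare a change point
            mean, cusum, n = float(x), 0.0, 1  # restart the test after the change
        else:
            n += 1
            mean += (x - mean) / n             # update the running parameter estimate
    return changes
```

The drift and threshold values here are arbitrary; in practice they would be tuned to the statistics of the chosen feature.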
ACM/IEEE International Conference on Mobile Computing and Networking | 2005
Ravi Jain; Dan Lelescu; Mahadevan Balakrishnan
We derive an empirical model for spatial registration patterns of mobile users as they move within a campus wireless local area network (WLAN) environment and register at different access points. Such a model can be very useful in a variety of simulation studies of the performance of mobile wireless systems, such as that of resource management and mobility management protocols. We base the model on extensive experimental data from a campus WiFi LAN installation, representing traces from about 6000 users over a period of about 2 years. We divide the empirical data available to us into training and test data sets, develop the model based on the training set, and evaluate it against the test set. The model shows that user registration patterns exhibit a distinct hierarchy, and that WLAN access points (APs) can be clustered based on registration patterns. Cluster size distributions are highly skewed, as are intra-cluster transition probabilities and trace lengths, which can all be modeled well by the heavy-tailed Weibull distribution. The fraction of popular APs in a cluster, as a function of cluster size, can be modeled by exponential distributions. There is general similarity across hierarchies, in that inter-cluster registration patterns tend to have the same characteristics and distributions as intra-cluster patterns. We generate synthetic traces for intra-cluster transitions, inter-cluster transitions, and complete traces, and compare them against the corresponding traces from the test set. We define a set of metrics that evaluate how well the model captures the empirical features it is trying to represent. We find that the synthetic traces agree very well with the test set in terms of the metrics. We also compare the model to a simple modified random waypoint model as a baseline, and show the latter is not at all representative of the real data. The user of the model has the opportunity to use it as is, or can modify model parameters, such as the degree of randomness in registration patterns. We close with a brief discussion of further work to refine and extend the model.
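A minimal sketch of how a model of this kind can drive synthetic trace generation is shown below; the Weibull shape/scale values and the popularity weighting are illustrative placeholders, not the parameters fitted in the paper.

```python
import random

def weibull(shape, scale):
    # random.weibullvariate takes (scale, shape); heavy tails for shape < 1.
    return random.weibullvariate(scale, shape)

def synthetic_intra_cluster_trace(n_clusters=20, shape=0.7, size_scale=5.0, len_scale=30.0):
    """Generate one synthetic AP-registration trace in the spirit of the
    model: skewed (Weibull) cluster sizes and trace lengths, with
    intra-cluster transitions biased toward a few popular APs."""
    # Build clusters with Weibull-distributed sizes.
    clusters, ap_id = [], 0
    for _ in range(n_clusters):
        size = max(1, int(weibull(shape, size_scale)))
        clusters.append(list(range(ap_id, ap_id + size)))
        ap_id += size
    # Pick a cluster and emit a Weibull-length run of registrations,
    # treating earlier APs in the cluster as more popular.
    cluster = random.choice(clusters)
    length = max(1, int(weibull(shape, len_scale)))
    weights = [1.0 / (rank + 1) for rank in range(len(cluster))]
    return [random.choices(cluster, weights=weights, k=1)[0] for _ in range(length)]
```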
Mobile Ad Hoc Networking and Computing | 2006
Dan Lelescu; Ulas C. Kozat; Ravi Jain; Mahadevan Balakrishnan
We present an empirical registration model derived from the WLAN registration patterns of mobile users. There exist models that accurately describe the spatial and temporal aspects of user registration individually, and that demonstrate the importance of this modeling. The main distinction of the new model from previous empirical models is that we are able to formulate the inter-dependence of space and time explicitly through a small set of equations. Our extensive studies of the WLAN traces indicate that a simple but proper notion of popularity gradient suffices to capture the correlation across space and time. Indeed, when locations (i.e., AP coverage areas) are differentiated with respect to the number of visits they receive (i.e., AP popularity), the time spent at a location i before a user moves from i to k turns out to be closely related to the difference in popularity between locations i and k. This observation led to the design of a joint time-space registration model (referred to as Model T++) that builds upon Model T, which models only the spatial aspect of registration but is derived from the same campus WiFi network. As part of the process of generating a joint space-time model, we further extend the spatial aspects of Model T. We evaluate our model using various metrics against a random walk model as well as Model T by superimposing location-independent time series on these space-only registration models. Our results suggest that, with a slight increase in model complexity, our joint time-space registration model captures real network registration better than the models with independent time. Model T++ can be easily integrated into both WLAN and multi-hop wireless mesh network simulations that require realistic registration models.
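The space-time coupling described above can be sketched as follows; the functional form, constants, and the `popularity` mapping are placeholders for illustration, not the actual Model T++ equations.

```python
import random

def dwell_time(pop_current, pop_next, base_minutes=30.0, noise_shape=0.8):
    """Assign a dwell time at the current AP before a transition to the
    next AP, growing with the popularity difference pop_current - pop_next
    (the 'popularity gradient'). Placeholder relation only."""
    gradient = max(0.0, pop_current - pop_next)
    scale = base_minutes * (1.0 + gradient)
    return random.weibullvariate(scale, noise_shape)   # skewed dwell times

def add_times(spatial_trace, popularity):
    """Turn a space-only registration trace (list of AP ids) into a joint
    space-time trace of (ap, dwell_time) pairs; `popularity` maps an AP id
    to its relative popularity."""
    timed = []
    for ap, nxt in zip(spatial_trace, spatial_trace[1:]):
        timed.append((ap, dwell_time(popularity[ap], popularity[nxt])))
    return timed
```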
Mobile Computing and Communications Review | 2004
Ravi Jain; Anupama Shivaprasad; Dan Lelescu; Xiaoning He
The evaluation of a great deal of research on ad hoc networks, as well as cellular networks, depends on models of user mobility. Many models have been developed and utilized, such as the random walk and random waypoint models. These are simple to implement and analyze but unlikely to be realistic. We develop a model based on extensive experimental data from a campus Wi-Fi LAN installation, representing traces from about 6000 users over a period of about 2 years. This data does not enable us to develop a user mobility model directly. However, as a first step, we develop a model of the time and sequence of locations at which user devices register. Note that this can be very useful, for instance to evaluate protocols that attempt to manage routing or resource allocations at different nodes. This paper reports work in progress on developing a user registration model. It shows the key time domain as well as space domain features we have extracted from the data. In particular, we show that the time features indicate heavy-tailed, although not power-law, distributions. The spatial features strongly indicate registration localization and hierarchy. The model itself can be represented as a set of probability distributions for various parameters. The modeler, for example a protocol designer, can then generate traces that conform to these distributions while varying the scale of the model in terms of the number of users. We close with a brief discussion of further work to refine and extend the model.
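The observation that the time features are "heavy-tailed, although not power-law" corresponds to the kind of model comparison sketched below; the data here is synthetic and the distributions are only illustrative stand-ins for the fitted ones.

```python
import numpy as np
from scipy import stats

# Fit a Weibull and a Pareto (power-law) model to inter-registration times
# and compare log-likelihoods; a heavy-tailed but non-power-law data set
# favors the Weibull fit. Synthetic data is used for illustration.
times = stats.weibull_min.rvs(0.6, scale=120.0, size=5000, random_state=0)

# Weibull MLE via scipy (location fixed at 0).
c, loc, scale = stats.weibull_min.fit(times, floc=0)
ll_weibull = stats.weibull_min.logpdf(times, c, loc, scale).sum()

# Pareto MLE in closed form: x_m = min(x), alpha = n / sum(log(x / x_m)).
x_m = times.min()
alpha = len(times) / np.log(times / x_m).sum()
ll_pareto = (len(times) * np.log(alpha) + len(times) * alpha * np.log(x_m)
             - (alpha + 1) * np.log(times).sum())

print(f"Weibull log-likelihood: {ll_weibull:.1f}  Pareto: {ll_pareto:.1f}")
```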
Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 2004
Dan Lelescu; Frank Jan Bossen
Light Fields and Lumigraphs represent 4D parameterizations of the plenoptic function. Given the large amount of data and the nature of such representations, there are two main requirements for the effective processing of light fields. The light field data must be compressed efficiently for storage or communication purposes. Also, the coded light field representation should provide random access to the data for rendering purposes. Various techniques have been proposed to enable a more efficient representation and coding of the data, such as using vector quantization and Lempel-Ziv entropy coding of data, JPEG coding, or extensions of predictive coding schemes. Predictive coding provides very good compression efficiency but also introduces referencing-related dependencies in the coded data that hinder the data access required for view synthesis. In this paper, we present a new approach for representing and coding Light Field data by using a statistical representation based on Principal Components Analysis. The proposed approach offers an efficient representation and coding in the rate-distortion sense, enables random access to pixels in the images containing information required for virtual view synthesis, and provides straightforward scalability.
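A toy version of the PCA-based representation is sketched below (not the paper's actual coder): each view becomes a small coefficient vector over a set of eigenimages, and any single view can be rebuilt independently of the others, which is what gives random access.

```python
import numpy as np

def pca_lightfield_encode(views, n_components=4):
    """Flatten each light field view into a row, obtain eigenimages via
    SVD (PCA), and store every view as a short coefficient vector."""
    X = np.stack([v.ravel().astype(np.float64) for v in views])  # views x pixels
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]                  # eigenimages (principal components)
    coeffs = (X - mean) @ basis.T              # per-view coefficient vectors
    return mean, basis, coeffs

def reconstruct_view(mean, basis, coeffs, index, shape):
    # Random access: rebuild one view directly from its own coefficients.
    return (mean + coeffs[index] @ basis).reshape(shape)

# Example with a tiny synthetic light field of 8 views of 16x16 pixels.
views = [np.random.rand(16, 16) for _ in range(8)]
mean, basis, coeffs = pca_lightfield_encode(views, n_components=4)
approx = reconstruct_view(mean, basis, coeffs, index=3, shape=(16, 16))
```

Scalability falls out of the same structure: truncating the coefficient vectors to fewer components gives a coarser but still decodable representation.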
Wireless Networks | 2007
Ravi Jain; Dan Lelescu; Mahadevan Balakrishnan
We discuss the derivation of an empirical model for spatial registration patterns of mobile users in a campus wireless local area network (WLAN). Such a model can be very useful in a variety of simulation studies of the performance of mobile wireless systems, such as that of resource management and mobility management protocols. We base the model on extensive experimental data from a campus WiFi LAN installation. We divide the empirical data available to us into training and test data sets, develop the model based on the training set, and evaluate it against the test set. The model shows that user registration patterns exhibit a distinct hierarchy, and that WLAN access points (APs) can be clustered based on registration patterns. Cluster size distributions are highly skewed, as are intra-cluster transition probabilities and trace lengths, which can all be modeled well by the heavy-tailed Weibull distribution. The fraction of popular APs in a cluster, as a function of cluster size, can be modeled by exponential distributions. There is general similarity across hierarchies, in that inter-cluster registration patterns tend to have the same characteristics and distributions as intra-cluster patterns. In this context, we also introduce and discuss the modeling of the disconnected state as an integral part of real traffic characteristics. We generate synthetic traffic traces based on the model we derive. We then compare these traces against the real traces from the test set using a set of metrics we define. We find that the synthetic traces agree very well with the test set in terms of the metrics. We compare the derived model to a simple modified random waypoint model, and show that the latter is not at all representative of the real data. We also show how the model parameters can be varied to allow designers to consider ‘what-if’ scenarios easily. Finally, we develop an extended version of Model T that uses an alternative modeling of the relative popularity of APs and clusters, with certain generalization advantages, and evaluate its fidelity to the real data as well, with positive results.
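One element added here is treating the disconnected state as part of the traffic model. A generic on/off sketch of that idea follows; the exponential durations and parameters are placeholders, not the distributions fitted in the paper.

```python
import random

def on_off_trace(n_sessions=5, mean_on=45.0, mean_off=120.0):
    """Alternate connected sessions (during which registrations occur)
    with explicit disconnected periods, so synthetic traces include the
    disconnected state instead of assuming users are always online."""
    trace = []
    for _ in range(n_sessions):
        trace.append(("connected", random.expovariate(1.0 / mean_on)))
        trace.append(("disconnected", random.expovariate(1.0 / mean_off)))
    return trace
```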
Asilomar Conference on Signals, Systems and Computers | 2005
Dan Lelescu; Frank Jan Bossen
In this paper we present a class of bounded-uncertainty estimators as the solution of an estimation problem involving unknown statistics. The estimators are derived under the assumption of correlated signal and noise. The bounded-uncertainty framework gives an additional degree of freedom for estimator design that can benefit its performance. It also provides an indirect way of verifying hypotheses regarding unknown statistics for an application domain by examining the behavior of the estimator as a function of the bound(s) placed on the unknown statistics. If the unknown statistics lie within a tighter bound than the worst-case limit assumed by a minimax estimator, the quality of the estimation increases. The derived estimators are applied to the filtering of quantization noise in coded and reconstructed video frames. Exploiting such bounds, rather than assuming the worst case, can improve the performance of estimators for the application of interest. Therefore, we propose an approach for designing a more general class of estimators that achieve a better balance between performance and robustness. The estimation problem is formulated and solved under the assumption of correlation between signal and noise (quantization noise in our application). Rather than follow the minimax formulation in this context, we describe a more general approach to the determination of an optimal linear estimator, based on placing a bound within the acceptable interval of the unknown statistic (e.g., signal-noise correlation) and deriving the estimator as a function of that bound and of estimates of the signal and noise power. Various formulations can be used for deriving these types of estimators. For example, we also consider an estimator formulation in which the estimation criterion expresses a degree of confidence in a particular assumption about the unknowns, counterbalanced by a term that penalizes violations of that assumption.
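As a generic illustration of a bounded-uncertainty linear estimator (not the estimator derived in the paper), consider a zero-mean scalar model y = x + n in which the signal-noise correlation is unknown but bounded; substituting the bound into the linear MMSE gain gives a family of estimators parameterized by that bound.

```python
import math

def bounded_uncertainty_gain(var_x, var_n, rho_bound):
    """LMMSE gain for estimating a zero-mean signal x from y = x + n when
    the correlation rho between x and n is unknown but assumed to lie at
    (or within) rho_bound. Setting rho_bound = 0 recovers the usual
    uncorrelated-noise Wiener gain."""
    cov_xn = rho_bound * math.sqrt(var_x * var_n)
    return (var_x + cov_xn) / (var_x + var_n + 2.0 * cov_xn)

# Example: filter a reconstructed (quantization-noisy) pixel deviation.
gain = bounded_uncertainty_gain(var_x=100.0, var_n=25.0, rho_bound=-0.2)
estimate = gain * 12.0   # 12.0 = observed deviation of the pixel from its local mean
```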
International Conference on Multimedia and Expo | 2006
Dan Lelescu
The use of block transforms for coding intra-frames in video coding may preclude higher coding performance due to residual correlation across block boundaries and insufficient energy compaction, which translates into unrealized rate-distortion gains. Subjectively, the occurrence of blocking artifacts is common. Post-filters and lapped transforms offer good solutions to these problems. Lapped transforms offer a more general framework which can incorporate coordinated pre- and post-filtering operations. Most common are fixed lapped transforms (such as lapped orthogonal transforms) and transforms with adaptive basis-function length. In contrast, in this paper we determine a lapped transform that non-linearly adapts its basis functions to local image statistics and the quantization regime. This transform was incorporated into the H.264/AVC codec and its performance evaluated. As a result, significant rate-distortion gains of up to 0.45 dB (average 0.35 dB) PSNR were obtained compared to the H.264/AVC codec alone.
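A lapped transform can be factored into a block transform plus coordinated pre/post filtering across block boundaries; the toy 1D pair below illustrates only that structure. The adaptive transform in the paper selects its basis functions from local statistics and the quantization regime, which the single `strength` knob here merely stands in for.

```python
import numpy as np

def boundary_prefilter(signal, block=4, strength=0.5):
    """Toy 1D lapped pre-filter: mix the two samples straddling each block
    boundary (lifting steps, determinant 1) before the block transform and
    quantization. Expects a 1D numpy array; returns the filtered signal
    and the inverse boundary operator for the decoder."""
    P = np.array([[1.0, strength], [0.0, 1.0]]) @ np.array([[1.0, 0.0], [strength, 1.0]])
    out = signal.astype(np.float64).copy()
    for b in range(block, len(out) - 1, block):
        out[b - 1:b + 1] = P @ out[b - 1:b + 1]
    return out, np.linalg.inv(P)

def boundary_postfilter(signal, P_inv, block=4):
    """Matching post-filter: undo the boundary mixing after the inverse
    block transform, which exactly inverts the pre-filter when no
    quantization is applied."""
    out = signal.astype(np.float64).copy()
    for b in range(block, len(out) - 1, block):
        out[b - 1:b + 1] = P_inv @ out[b - 1:b + 1]
    return out
```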
Visual Communications and Image Processing | 2004
Dan Lelescu; Frank Jan Bossen
Transform coding plays a central role in image and video coding technologies. Various transforms, whether fixed or adaptive, have been utilized for video compression. However, these transforms are inherently sub-optimal in a coding-efficiency sense, given that their design does not explicitly take into account the entire transform coding model. Thus, commonly used transforms such as the DCT have desirable properties, but were designed without full consideration of the other transform coding elements, including quantization and entropy coding. Although these transforms have good performance in a rate-distortion sense, as demonstrated by today's image and video coding standards, we show that superior transforms can be designed based on the complete transform coding model. We present a new class of transforms, called coding-adaptive transforms, that are derived under an optimality criterion which incorporates the elements of transform coding and is formulated for coding efficiency. Additionally, characteristics of the new transforms such as adaptivity, generalization, and robustness are discussed.
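The "complete transform coding model" criterion can be pictured as scoring a candidate transform through the whole chain, as in the rough sketch below; the zeroth-order entropy estimate and the Lagrange multiplier are simplistic placeholders rather than the paper's formulation.

```python
import numpy as np

def rd_cost_of_transform(blocks, T, qstep=8.0, lam=0.1):
    """Evaluate an invertible square transform T on training blocks under
    the full chain: transform, scalar quantization, a crude entropy-based
    rate estimate, and reconstruction distortion."""
    X = np.stack([b.ravel().astype(np.float64) for b in blocks])
    C = X @ T.T                                  # transform coefficients
    Q = np.round(C / qstep)                      # scalar quantization
    rec = (Q * qstep) @ np.linalg.inv(T).T       # dequantize and inverse-transform
    distortion = np.mean((X - rec) ** 2)
    # Zeroth-order entropy of the quantized symbols as a rough rate proxy.
    _, counts = np.unique(Q, return_counts=True)
    p = counts / counts.sum()
    rate = -np.sum(p * np.log2(p))
    return distortion + lam * rate

# Comparing, e.g., a KLT learned from the blocks against a fixed DCT under
# this cost is one way to see when a data-adaptive transform wins.
```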
Archive | 2013
Florian Ciurea; Kartik Venkataraman; Gabriel Molina; Dan Lelescu