
Publication


Featured research published by Barlas Oguz.


IEEE Automatic Speech Recognition and Understanding Workshop | 2013

Accelerating recurrent neural network training via two stage classes and parallelization

Zhiheng Huang; Geoffrey Zweig; Michael Levit; Benoit Dumoulin; Barlas Oguz; Shawn Chang

Recurrent neural network (RNN) language models have proven successful in lowering perplexity and word error rate (WER) in automatic speech recognition (ASR). However, one obstacle to adopting RNN language models is their heavy computational cost in training. In this paper, we propose two techniques to accelerate RNN training: 1) two-stage class RNNs and 2) parallel RNN training. In experiments on a Microsoft internal short message dictation (SMD) data set, two-stage class RNNs and parallel RNNs not only achieve equal or lower WERs than the original RNNs but also accelerate training by 2 and 10 times, respectively. It is worth noting that the two-stage class RNN speedup also applies at test time, which is essential for reducing latency in real-time ASR applications.
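The class-based factorization behind a two-stage speedup can be sketched as follows. This is a minimal illustration of a class-factored softmax, not the paper's implementation; the vocabulary size, class count, and weight matrices are made-up toy values:

```python
import numpy as np

# Class-factored softmax: p(w | h) = p(class(w) | h) * p(w | class(w), h).
# With |V| words split into C classes of ~|V|/C words each, scoring one word
# costs O(C + |V|/C) instead of O(|V|); C ~ sqrt(|V|) minimizes the sum.
rng = np.random.default_rng(0)
V, C, H = 10000, 100, 128                 # vocab, classes, hidden size (toy)
word2class = np.repeat(np.arange(C), V // C)

W_class = rng.standard_normal((C, H)) * 0.01   # class-layer weights
W_word = rng.standard_normal((V, H)) * 0.01    # word-layer weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def log_prob(word, hidden):
    """Log p(word | hidden) via the two-factor decomposition."""
    c = word2class[word]
    p_class = softmax(W_class @ hidden)[c]
    members = np.flatnonzero(word2class == c)  # words sharing this class
    p_word = softmax(W_word[members] @ hidden)[np.where(members == word)[0][0]]
    return np.log(p_class) + np.log(p_word)

hidden = rng.standard_normal(H)
lp = log_prob(42, hidden)
print(lp)
```

Only one class row-block of `W_word` is touched per word, which is where the training and test-time savings come from.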


IEEE/ACM Transactions on Networking | 2015

Stable distributed P2P protocols based on random peer sampling

Barlas Oguz; Venkat Anantharam; Ilkka Norros

Peer-to-peer protocols that rely on fully random peer and chunk selection have recently been shown to suffer from instability. The culprit is the so-called missing piece syndrome, whereby a single chunk is driven to near extinction, leading to an accumulation of peers that have almost complete files but are waiting for the missing chunk. We investigate three distributed random peer sampling protocols that tackle this issue and present proofs of their stability using Lyapunov function techniques. The first two protocols are based on the sampling of multiple peers and a rare chunk selection rule. The last protocol incorporates an incentive mechanism to prevent free riding; we show that this mechanism interacts well with the rare chunk selection protocol and that stability is maintained. Besides being stable for all arrival rates of peers, all three protocols are scalable in that the mean upload rate of each peer is uniformly bounded, independent of the arrival rate.


International Conference on Acoustics, Speech, and Signal Processing | 2009

Structured least squares with bounded data uncertainties

Mert Pilanci; Orhan Arikan; Barlas Oguz; Mustafa Ç. Pınar

In many signal processing applications, the core problem reduces to a linear system of equations. Uncertainties in the coefficient matrix create a significant challenge in obtaining reliable solutions. In this paper, we present a novel formulation for solving a system of noise-contaminated linear equations while preserving the structure of the coefficient matrix. The proposed method has advantages over known Structured Total Least Squares (STLS) techniques in utilizing additional information about the uncertainties and in robustness for ill-posed problems. Numerical comparisons illustrate these advantages in two applications: a signal restoration problem with an uncertain model, and frequency estimation of multiple sinusoids embedded in white noise.


International Symposium on Information Theory | 2010

Compressing a long range dependent renewal process

Barlas Oguz; Venkat Anantharam

Analysis of variable bit-rate video data has shown that long range dependence persists across a wide variety of codecs. While codecs are generally lossy, one may conjecture, as a partial explanation for this fact, that there exist information sources for which any lossless code results in a bit-rate process that eventually dominates a long range dependent random process. We prove this to be true for discrete time long range dependent renewal processes under a mild technical assumption.


Allerton Conference on Communication, Control, and Computing | 2012

Stable, distributed P2P protocols based on random peer sampling

Barlas Oguz; Venkat Anantharam; Ilkka Norros

In a peer-to-peer file sharing system based on random contacts where the upload capacity of the seed is small, a single chunk of the file may become rare, causing an accumulation of peers who lack the rare chunk. To prevent this from happening, we propose a protocol where each peer samples a small population of peers and makes an intelligent decision to pick which chunk to download based on this sample. We prove that the resulting system is stable under any arrival rate of peers even if the seed has small, bounded upload capacity.
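The sample-and-pick-rarest rule can be sketched roughly as follows. This is a toy simulation step with made-up peer populations and a made-up sample size; the actual protocol and its stability proof are in the paper:

```python
import random

def pick_chunk(my_chunks, peers, sample_size=4):
    """Sample a few peers and pick the chunk that is rarest in the
    sample among the chunks still missing locally."""
    sample = random.sample(peers, min(sample_size, len(peers)))
    counts = {}
    for peer_chunks in sample:           # each peer is a set of held chunks
        for c in peer_chunks:
            if c not in my_chunks:
                counts[c] = counts.get(c, 0) + 1
    if not counts:
        return None                      # nothing useful in this sample
    # rarest-first: minimize how many sampled peers hold the chunk
    return min(counts, key=counts.get)

peers = [{0, 1}, {0, 1}, {0, 1, 2}, {0, 2}]
random.seed(1)
choice = pick_chunk({0}, peers)
print(choice)  # chunk 2 is held by fewer sampled peers than chunk 1
```

Downloading the locally rarest chunk in each sampled neighborhood is what keeps any single chunk from drifting toward extinction.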


Signal Processing and Communications Applications Conference | 2009

A novel technique for a linear system of equations applied to channel equalization

Mert Pilanci; Orhan Arikan; Barlas Oguz; Mustafa Ç. Pınar

In many inverse problems in signal processing, the problem reduces to a linear system of equations. Accurately and robustly estimating the solution when there are errors in both the measurement vector and the coefficient matrix is a challenging task. In this paper, a novel formulation is proposed that takes into account the structure (e.g., Toeplitz, Hankel) and uncertainties of the system. A numerical algorithm is provided to obtain the solution. The proposed technique is compared with other methods in a channel equalization example, a fundamental task in communications.
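For context, the baseline setup can be sketched as follows: channel equalization posed as a structured (Toeplitz) linear system, solved here with plain Tikhonov regularization. This is a generic robust baseline with toy numbers, not the paper's structured formulation:

```python
import numpy as np

# Equalization as y = A x + noise, where A is the Toeplitz convolution
# matrix of the channel impulse response h. An unregularized least-squares
# solve is fragile under matrix uncertainty; ridge regularization is the
# standard robust baseline.
rng = np.random.default_rng(0)
h = np.array([1.0, 0.5, 0.2])          # toy channel impulse response
x = rng.standard_normal(20)            # transmitted symbols
n = len(x) + len(h) - 1

A = np.zeros((n, len(x)))              # build the Toeplitz convolution matrix
for j in range(len(x)):
    A[j:j + len(h), j] = h

y = A @ x + 0.01 * rng.standard_normal(n)   # noisy received signal

lam = 1e-2                             # regularization weight (toy value)
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(len(x)), A.T @ y)
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(err)
```

Exploiting the Toeplitz structure explicitly, as the paper's formulation does, goes beyond this generic ridge solve.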


Information Theory and Applications | 2012

Long range dependent Markov chains with applications

Barlas Oguz; Venkat Anantharam

We discuss functions of long range dependent Markov chains. We state sufficient conditions under which an instantaneous function of a long range dependent Markov chain has the same Hurst index as the underlying chain. We discuss several applications of the theorem in the fields of information theory, queuing networks, and finance.


International Symposium on Information Theory | 2012

Pointwise lossy source coding theorem for sources with memory

Barlas Oguz; Venkat Anantharam

We investigate the minimum pointwise redundancy of variable-length lossy source codes operating at fixed distortion for sources with memory. The redundancy is defined by l_n(X_1^n) − nR(D), where l_n(X_1^n) is the code length at block size n and R(D) is the rate-distortion function. We restrict ourselves to the case where R(D) can be calculated, namely the cases where the Shannon lower bound to R(D) holds with equality. In this case, for balanced distortion measures, we provide a pointwise lower bound to the code length sequence in terms of the entropy density process. We show that the minimum coding variance with distortion is lower bounded by the minimum lossless coding variance, and is non-zero unless the entropy density is deterministic. We also examine lossy coding in the presence of long range dependence, showing the existence of information sources for which long range dependence persists under any codec operating at the Shannon lower bound with fixed distortion.


Conference of the International Speech Communication Association | 2016

Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features

Barlas Oguz; Issac Alphonso; Shuangyu Chang

Non-negative matrix based language models have been recently introduced [1] as a computationally efficient alternative to other feature-based models such as maximum-entropy models. We present a new entropy based pruning algorithm for this class of language models, which is fast and scalable. We present perplexity and word error rate results and compare these against regular n-gram pruning. We also train models with location and personalization features and report results at various pruning thresholds. We demonstrate that contextual features are helpful over the vanilla model even after pruning to a similar size.
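The relative-entropy pruning idea that this algorithm parallels can be sketched in the classical n-gram setting. The probabilities and threshold below are made-up toy values, and the criterion is the generic one, not the paper's exact algorithm:

```python
import math

# Relative-entropy pruning: drop an explicit estimate p(w | h) when backing
# off to a shorter-context estimate changes the model little, i.e. when the
# weighted log-ratio p(h, w) * log(p(w | h) / p_backoff(w | h)) is small.
def prune(ngrams, backoff, threshold):
    """ngrams: {(h, w): (joint_prob, cond_prob)};
    backoff: {(h, w): backed-off cond_prob}. Returns the kept n-grams."""
    kept = {}
    for (h, w), (p_joint, p_cond) in ngrams.items():
        delta = p_joint * math.log(p_cond / backoff[(h, w)])
        if delta >= threshold:           # removal would cost too much entropy
            kept[(h, w)] = (p_joint, p_cond)
    return kept

ngrams = {
    ("the", "cat"): (0.010, 0.20),   # far from its backoff estimate: keep
    ("the", "dog"): (0.001, 0.05),   # close to its backoff estimate: prune
}
backoff = {("the", "cat"): 0.02, ("the", "dog"): 0.04}
kept = prune(ngrams, backoff, threshold=1e-3)
print(sorted(kept))
```

Sweeping the threshold trades model size against perplexity, which is the comparison the paper reports for its matrix-based models.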


IEEE Automatic Speech Recognition and Understanding Workshop | 2015

Discriminative training of context-dependent language model scaling factors and interpolation weights

Shuangyu Chang; Abhik Lahiri; Issac Alphonso; Barlas Oguz; Michael Levit; Benoit Dumoulin

We demonstrate how context-dependent language model scaling factors and interpolation weights can be unified in a single formulation whose free parameters are discriminatively trained using linear and non-linear optimization. The objective functions of the optimization are defined on pairs of superior and inferior recognition hypotheses and correlate well with recognition error metrics. Experiments on a large, real-world application demonstrate the effectiveness of the solution in significantly reducing recognition errors, by leveraging the benefits of both context-dependent weighting and discriminative training.
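The pairwise objective can be sketched as follows for a single global LM scaling factor. The hypothesis scores and the grid search are made-up toy values; the paper trains many context-dependent parameters with proper optimizers:

```python
import math

# Each training pair holds (acoustic score, LM score) for a superior and an
# inferior recognition hypothesis; we tune the LM scale so the superior
# hypothesis outscores the inferior one under a logistic loss on the margin.
pairs = [
    ((-10.0, -2.0), (-9.5, -4.0)),   # superior hypothesis has better LM score
    ((-8.0, -3.0), (-7.8, -1.5)),    # here the LM score favors the inferior one
]

def loss(scale):
    total = 0.0
    for (a_sup, l_sup), (a_inf, l_inf) in pairs:
        margin = (a_sup + scale * l_sup) - (a_inf + scale * l_inf)
        total += math.log1p(math.exp(-margin))   # logistic pairwise loss
    return total

# crude grid search over the scaling factor in place of a real optimizer
best_loss, best_scale = min((loss(s), s) for s in [x * 0.1 for x in range(1, 30)])
print(best_scale)
```

The conflicting pairs force a trade-off, so the optimum sits at an intermediate scale rather than at either end of the grid.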

Collaboration


Dive into Barlas Oguz's collaborations.

Top Co-Authors

Ilkka Norros

VTT Technical Research Centre of Finland


Mert Pilanci

University of California
