Ashkan Zeinalzadeh
University of Notre Dame
Publication
Featured research published by Ashkan Zeinalzadeh.
advances in computing and communications | 2016
Ashkan Zeinalzadeh; Reza Ghorbani; James Yee
In a grid with high penetration of solar photovoltaic (PV) systems, voltage rises can occur in the distribution system. With the rising demand for solar energy installation, there is a pressing need for utilities to regulate the voltages at low-voltage distribution grids. We present a linear model for voltage rise versus PV output power. The voltage rise versus injected power is modeled as a linear combination of gamma random variables. This is achieved by finding sparse bases, clustering the data into subsets based on their correlation with those bases, and then fitting a gamma distribution within each subset. We are interested in modeling the voltage rise without ignoring the sparse events in the voltage. We use sparse SVD with an ℓ1 penalty to model sparse voltage rises.
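For illustration only, the sketch below shows one way a rank-1 sparse SVD with an ℓ1 (soft-thresholding) penalty and per-subset gamma fits can be set up in Python; the toy data, penalty value, and the shortcut of treating the rows selected by the sparse vector as the subsets are assumptions, not the exact procedure used in the paper.

```python
# Minimal sketch (not the authors' exact procedure): rank-1 sparse SVD via
# alternating power iterations with soft-thresholding on the left singular
# vector, plus a gamma fit within each selected subset of rows.
import numpy as np
from scipy.stats import gamma

def soft_threshold(x, lam):
    """Elementwise soft-thresholding operator implementing the l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_rank1(X, lam=0.1, n_iter=100):
    """Rank-1 approximation X ~ d * u v^T with a sparse (l1-penalized) u."""
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    v = vt[0]
    for _ in range(n_iter):
        u = soft_threshold(X @ v, lam)
        if np.linalg.norm(u) == 0:
            break
        u /= np.linalg.norm(u)
        v = X.T @ u
        v /= np.linalg.norm(v)
    d = u @ X @ v
    return u, v, d

# Toy data: rows = feeder locations, columns = voltage-rise samples vs. injected PV power.
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=0.05, size=(20, 200))
u, v, d = sparse_rank1(X, lam=0.5)

# Fit a gamma distribution within each subset (here: rows selected by the sparse u).
for i in np.flatnonzero(u):
    a, loc, scale = gamma.fit(X[i], floc=0.0)
    print(f"row {i}: shape={a:.2f}, scale={scale:.3f}")
```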
conference on decision and control | 2015
Ashkan Zeinalzadeh; Tom Wenska; Gordon Okimoto
In this work, we develop an algorithm for the integrated analysis of multiple high-dimensional data matrices based on sparse rank-one matrix approximations. The algorithm approximates multiple data matrices with rank-one outer products composed of sparse left singular vectors that are unique to each matrix and a right singular vector that is shared by all of the data matrices. The right singular vector represents a signal we wish to detect in the row space of each matrix. The non-zero components of the resulting left singular vectors identify rows of each matrix that, in aggregate, provide a sparse linear representation of the shared right singular vector. This sparse representation facilitates downstream interpretation and validation of the resulting model based on the rows selected from each matrix. The false discovery rate is used to select an appropriate ℓ1 penalty parameter that imposes sparsity on the left singular vectors but not on the common right singular vector of the joint approximation. Since a given multi-modal data set (MMDS) may contain multiple signals of interest, the algorithm is iteratively applied to residualized versions of the original data to sequentially capture and model each distinct signal in terms of rows from the different matrices. We show that the algorithm outperforms the standard singular value decomposition over a wide range of simulation scenarios in terms of detection accuracy. Analysis of real data for ovarian and liver cancer resulted in compact gene expression signatures that were predictive of clinical outcomes and highly enriched for cancer-related biology.
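A compact sketch of the joint sparse rank-1 idea described above: each matrix keeps its own ℓ1-penalized (soft-thresholded) left vector while a single right vector is shared across matrices and updated without a penalty. The helper names, penalty value, and residualization step are illustrative assumptions, not the authors' code.

```python
# Illustrative joint sparse rank-1 approximation over several matrices that
# share their columns (samples); not the published implementation.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def joint_sparse_rank1(mats, lam=0.1, n_iter=100):
    """mats: list of arrays with the same number of columns (shared samples)."""
    n_cols = mats[0].shape[1]
    v = np.ones(n_cols) / np.sqrt(n_cols)          # shared right singular vector
    us = [np.zeros(X.shape[0]) for X in mats]
    for _ in range(n_iter):
        # Update each sparse left vector given the shared v.
        for k, X in enumerate(mats):
            u = soft_threshold(X @ v, lam)
            n = np.linalg.norm(u)
            us[k] = u / n if n > 0 else u
        # Update the shared v by aggregating across all matrices (dense, no penalty).
        v = sum(X.T @ u for X, u in zip(mats, us))
        v /= np.linalg.norm(v)
    return us, v

def residualize(mats, us, v):
    """Remove the fitted rank-1 signal so the next iteration can find a new one."""
    return [X - np.outer(u, (u @ X @ v) * v) for X, u in zip(mats, us)]

rng = np.random.default_rng(0)
mats = [rng.normal(size=(50, 30)), rng.normal(size=(80, 30))]  # two data types, 30 shared samples
us, v = joint_sparse_rank1(mats, lam=1.0)
mats_next = residualize(mats, us, v)                           # search for the next signal
```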
advances in computing and communications | 2017
Ashkan Zeinalzadeh; Vijay Gupta
We consider a microgrid with random load realizations, stochastic renewable energy production, and an energy storage unit. The grid controller provides the total net load trajectory that the microgrid should present to the main grid, and the microgrid must impose load shedding and renewable energy curtailment if necessary to meet that net load trajectory. The microgrid controller seeks to operate the local energy storage unit so as to minimize the risk of load shedding and renewable energy curtailment over a finite time horizon. We formulate the problem of optimizing the operation of the storage unit as a finite-stage dynamic programming problem. We prove that the multi-stage objective function of the energy storage is strictly convex in the state of charge of the battery at each stage. The uniqueness of the optimal decision is proven under some additional assumptions, and the optimal strategy is then obtained. The effectiveness of the energy storage in decreasing load shedding and renewable energy (RE) curtailment is illustrated in simulations.
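The following is a minimal backward dynamic-programming sketch over a discretized state of charge, in the spirit of the finite-stage formulation above; the cost terms, action grid, and scenario model are placeholder assumptions rather than the paper's exact stochastic model.

```python
# Backward DP over a discretized battery state of charge (illustrative only).
import numpy as np

T = 24                                       # stages (hours)
soc_grid = np.linspace(0.0, 1.0, 51)         # normalized state of charge
discharge_grid = np.linspace(-0.2, 0.2, 21)  # a > 0 discharges, a < 0 charges
net_load = np.random.default_rng(1).normal(0.0, 0.1, size=(T, 50))  # toy net-load scenarios

def stage_cost(net, a):
    """Penalize load shedding (unserved load) and renewable curtailment."""
    mismatch = net - a
    return np.maximum(mismatch, 0.0) + 0.5 * np.maximum(-mismatch, 0.0)

V = np.zeros((T + 1, soc_grid.size))         # terminal value is zero
policy = np.zeros((T, soc_grid.size))
for t in range(T - 1, -1, -1):
    for i, soc in enumerate(soc_grid):
        best = np.inf
        for a in discharge_grid:
            soc_next = np.clip(soc - a, 0.0, 1.0)
            j = int(np.argmin(np.abs(soc_grid - soc_next)))
            cost = np.mean(stage_cost(net_load[t], a)) + V[t + 1, j]
            if cost < best:
                best, policy[t, i] = cost, a
        V[t, i] = best

print("first-stage action at half charge:", policy[0, 25])
```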
advances in computing and communications | 2017
Ashkan Zeinalzadeh; Tom Wenska; Gordon Okimoto
We develop a neural network model to classify liver cancer patients into high-risk and low-risk groups using genomic data. Our approach provides a novel technique for classifying big data sets using neural network models. We preprocess the data before training the neural network model: we first expand the data using wavelet analysis and then compress the wavelet coefficients by mapping them onto a new scaled orthonormal coordinate system. The data are then used to train a neural network model that classifies cancer patients into two classes, high-risk and low-risk. We use a leave-one-out approach to build the neural network model, which enables us to classify a patient from genomic data without any information about the patient's survival time. The results from the genomic data analysis are compared with a survival time analysis. It is shown that the expansion and compression of the data using wavelet analysis and singular value decomposition (SVD) are essential to train the neural network model.
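A hedged end-to-end sketch of the described pipeline (wavelet expansion, SVD compression, then a leave-one-out neural network) using common Python libraries; the toy data, wavelet choice, number of retained coordinates, and network size are assumptions, not the settings used in the paper.

```python
# Illustrative pipeline: wavelet expansion -> SVD compression -> LOO neural network.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 256))            # 60 patients x 256 genomic features (toy data)
y = rng.integers(0, 2, size=60)           # 1 = high-risk, 0 = low-risk (toy labels)

# 1) Expand each sample with a wavelet decomposition.
expanded = np.array([np.concatenate(pywt.wavedec(row, "db2", level=3)) for row in X])

# 2) Compress by projecting onto the leading right-singular vectors (SVD).
_, _, vt = np.linalg.svd(expanded - expanded.mean(axis=0), full_matrices=False)
Z = expanded @ vt[:10].T                  # keep 10 scaled orthonormal coordinates

# 3) Leave-one-out training and evaluation of the classifier.
correct = 0
for train_idx, test_idx in LeaveOneOut().split(Z):
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    clf.fit(Z[train_idx], y[train_idx])
    correct += int(clf.predict(Z[test_idx])[0] == y[test_idx][0])
print("LOO accuracy:", correct / len(y))
```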
BioData Mining | 2016
Gordon Okimoto; Ashkan Zeinalzadeh; Tom Wenska; Michael Loomis; J. B. Nation; Tiphaine Fabre; Maarit Tiirikainen; Brenda Y. Hernandez; Owen Chan; Linda Wong; Sandi Kwee
Background: Technological advances enable the cost-effective acquisition of Multi-Modal Data Sets (MMDS) composed of measurements for multiple, high-dimensional data types obtained from a common set of bio-samples. The joint analysis of the data matrices associated with the different data types of a MMDS should provide a more focused view of the biology underlying complex diseases such as cancer that would not be apparent from the analysis of a single data type alone. As multi-modal data rapidly accumulate in research laboratories and public databases such as The Cancer Genome Atlas (TCGA), the translation of such data into clinically actionable knowledge has been slowed by the lack of computational tools capable of analyzing MMDSs. Here, we describe the Joint Analysis of Many Matrices by ITeration (JAMMIT) algorithm that jointly analyzes the data matrices of a MMDS using sparse matrix approximations of rank-1.
Methods: The JAMMIT algorithm jointly approximates an arbitrary number of data matrices by rank-1 outer-products composed of "sparse" left-singular vectors (eigen-arrays) that are unique to each matrix and a right-singular vector (eigen-signal) that is common to all the matrices. The non-zero coefficients of the eigen-arrays identify small subsets of variables for each data type (i.e., signatures) that in aggregate, or individually, best explain a dominant eigen-signal defined on the columns of the data matrices. The approximation is specified by a single "sparsity" parameter that is selected based on the false discovery rate estimated by permutation testing. Multiple signals of interest in a given MMDS are sequentially detected and modeled by iterating JAMMIT on "residual" data matrices that result from a given sparse approximation.
Results: We show that JAMMIT outperforms other joint analysis algorithms in the detection of multiple signatures embedded in simulated MMDSs. On real multimodal data for ovarian and liver cancer we show that JAMMIT identified multi-modal signatures that were clinically informative and enriched for cancer-related biology.
Conclusions: Sparse matrix approximations of rank-1 provide a simple yet effective means of jointly reducing multiple, big data types to a small subset of variables that characterize important clinical and/or biological attributes of the bio-samples from which the data were acquired.
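As a rough illustration of the permutation-based selection of the sparsity parameter mentioned in the Methods, the sketch below estimates a false discovery rate for a grid of ℓ1 penalties on a single toy matrix; the helper functions, toy data, and FDR estimate are illustrative assumptions, not the published JAMMIT code.

```python
# Simplified permutation-based FDR scan over the l1 penalty (illustrative only).
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def n_selected(X, v, lam):
    """Number of rows with a non-zero loading in the sparse left vector."""
    return np.count_nonzero(soft_threshold(X @ v, lam))

def estimate_fdr(X, v, lam, n_perm=50, seed=0):
    """FDR ~ mean selections on permuted data / selections on the real data."""
    rng = np.random.default_rng(seed)
    observed = max(n_selected(X, v, lam), 1)
    null_counts = []
    for _ in range(n_perm):
        Xp = np.array([rng.permutation(row) for row in X])   # break row/column structure
        vp = np.linalg.svd(Xp, full_matrices=False)[2][0]
        null_counts.append(n_selected(Xp, vp, lam))
    return np.mean(null_counts) / observed

# Scan a grid of penalties; keep the smallest penalty meeting an FDR target.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 40))
v = np.linalg.svd(X, full_matrices=False)[2][0]
for lam in np.linspace(0.5, 5.0, 10):
    print(f"lambda={lam:.2f}  est. FDR={estimate_fdr(X, v, lam):.3f}")
```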
advances in computing and communications | 2012
Ashkan Zeinalzadeh; Aydin Alptekinoglu; Gurdal Arslan
We consider single-period and infinite-horizon inventory competition between two firms that replenish their inventories as in the well-known newsvendor model. Normally, customers have a preference for shopping at one firm or the other; a fixed percentage of those who encounter a stockout at the firm of their first choice, however, visit the other firm. This substitution behavior makes the firms' replenishment decisions strategically related. Our main contribution is to introduce a simple learning algorithm to inventory competition. The learning algorithm requires each firm (a) to know its own critical fractile, which the firm can calculate from its own per-unit revenue, order cost, and holding cost; and (b) to observe its own total demand realizations. The firms do not necessarily know their true demand distributions. They need not even have any information about each other, beyond the implicit information encoded in their own total demand realizations, which are affected by their competitors' inventory decisions. In fact, the firms need not even be aware that they are engaged in inventory competition. We prove that the inventory decisions generated by the learning algorithm converge, with probability one, to threshold values that constitute an equilibrium in pure Markov strategies for an infinite-horizon discounted-reward inventory competition game.
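One simple way such a learning rule can look is sketched below: each firm tracks the critical-fractile quantile of its own observed total demand with a stochastic-approximation step, while stockout-driven spillover couples the two firms. The demand model, spillover fraction, and step size are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of fractile-tracking base-stock learning for two competing
# newsvendors with demand spillover. Each firm only uses its own critical
# fractile and its own realized total demand.
import numpy as np

rng = np.random.default_rng(0)
fractiles = [0.7, 0.6]          # critical fractiles from per-unit revenue/order/holding costs
levels = [5.0, 5.0]             # current base-stock (order-up-to) levels
spill = 0.5                     # fraction of unmet customers who visit the other firm

for t in range(1, 5001):
    primary = rng.poisson(10.0, size=2).astype(float)         # first-choice demand
    overflow = [spill * max(primary[1] - levels[1], 0.0),      # spillover into firm 0
                spill * max(primary[0] - levels[0], 0.0)]      # spillover into firm 1
    total = [primary[i] + overflow[i] for i in range(2)]       # each firm's observed total demand
    step = 1.0 / t                                             # diminishing step size
    for i in range(2):
        # Stochastic-approximation step toward the critical-fractile quantile
        # of the firm's own total demand distribution.
        levels[i] += step * (fractiles[i] - (total[i] <= levels[i]))
        levels[i] = max(levels[i], 0.0)

print("learned base-stock levels:", [round(x, 2) for x in levels])
```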
Proceedings of the ISCIE International Symposium on Stochastic Systems Theory and its Applications | 2014
Ashkan Zeinalzadeh; Reza Ghorbani; Ehsan Reihani
Archive | 2010
Ashkan Zeinalzadeh; Aydın Alptekinoğlu; Gurdal Arslan
advances in computing and communications | 2018
Ashkan Zeinalzadeh; Donya Ghavidel; Vijay Gupta
conference on decision and control | 2017
Ashkan Zeinalzadeh; Nayara Aguiar; Stefanos Baros; Anuradha M. Annaswamy; Indraneel Chakraborty; Vijay Gupta