Philipp Baumann
University of Bern
Publications
Featured research published by Philipp Baumann.
International Journal of Production Research | 2013
Philipp Baumann; Norbert Trautmann
In process industries, make-and-pack production is used to produce food and beverages, chemicals, and metal products, among others. This type of production process allows the fabrication of a wide range of products in relatively small amounts using the same equipment. In this article, we consider a real-world production process (cf. Honkomp et al. 2000. The curse of reality – why process scheduling optimization problems are difficult in practice. Computers & Chemical Engineering, 24, 323–328.) comprising sequence-dependent changeover times, multipurpose storage units with limited capacities, quarantine times, batch splitting, partial equipment connectivity, and transfer times. The planning problem consists of computing a production schedule such that a given demand for packed products is fulfilled, all technological constraints are satisfied, and the production makespan is minimised. None of the models in the literature covers all of the technological constraints that occur in such make-and-pack production processes. To close this gap, we develop an efficient mixed-integer linear programming model that is based on a continuous time domain and general-precedence variables. We propose novel types of symmetry-breaking constraints and a preprocessing procedure to improve the model performance. In an experimental analysis, we show that small- and moderate-sized instances can be solved to optimality within short CPU times.
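The core of such a formulation is the general-precedence idea: one binary variable per pair of operations on a unit decides their order, and big-M disjunctions link that decision to continuous start times. The following is a minimal sketch of that mechanism for a single unit, not the paper's actual model (which also covers storage, transfers, and symmetry breaking); the batches, processing times, and changeover times are invented, and PuLP with the bundled CBC solver is assumed.

```python
# Minimal general-precedence sketch: sequence three batches on one unit with
# sequence-dependent changeovers, minimising the makespan. Illustrative only.
import pulp

proc = {"A": 4.0, "B": 3.0, "C": 5.0}                      # processing times (h)
chg = {("A", "B"): 1.0, ("B", "A"): 2.0, ("A", "C"): 0.5,  # changeover times (h)
       ("C", "A"): 1.5, ("B", "C"): 1.0, ("C", "B"): 0.5}

m = pulp.LpProblem("general_precedence_sketch", pulp.LpMinimize)
s = {i: pulp.LpVariable(f"s_{i}", lowBound=0) for i in proc}   # start times
y = {(i, j): pulp.LpVariable(f"y_{i}_{j}", cat="Binary")       # 1 if i before j
     for i in proc for j in proc if i < j}
cmax = pulp.LpVariable("makespan", lowBound=0)
M = sum(proc.values()) + sum(chg.values())                     # safe big-M bound

for i in proc:
    m += s[i] + proc[i] <= cmax
for (i, j), before in y.items():
    # disjunction: either i precedes j (before = 1) or j precedes i (before = 0)
    m += s[j] >= s[i] + proc[i] + chg[i, j] - M * (1 - before)
    m += s[i] >= s[j] + proc[j] + chg[j, i] - M * before
m += cmax                                                      # objective
m.solve(pulp.PULP_CBC_CMD(msg=False))
print({i: s[i].value() for i in proc}, "makespan:", cmax.value())
```

Symmetry-breaking constraints of the kind the paper proposes would be added on top of such a model to prune orderings that are equivalent up to renaming, reducing the search effort of the solver.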
Mathematical Methods of Operations Research | 2013
Philipp Baumann; Norbert Trautmann
Since 2010, the client base of online-trading service providers has grown significantly. Such companies enable small investors to access the stock market at advantageous rates. Because small investors buy and sell stocks in moderate amounts, they should consider fixed transaction costs, integral transaction units, and dividends when selecting their portfolio. In this paper, we consider the small investor’s problem of investing capital in stocks in a way that maximizes the expected portfolio return and guarantees that the portfolio risk does not exceed a prescribed risk level. Portfolio-optimization models known from the literature are in general designed for institutional investors and do not consider the specific constraints of small investors. We therefore extend four well-known portfolio-optimization models to make them applicable for small investors. We consider one nonlinear model that uses variance as a risk measure and three linear models that use the mean absolute deviation from the portfolio return, the maximum loss, and the conditional value-at-risk as risk measures. We extend all models to consider piecewise-constant transaction costs, integral transaction units, and dividends. In an out-of-sample experiment based on Swiss stock-market data and the cost structure of the online-trading service provider Swissquote, we apply both the basic models and the extended models; the former represent the perspective of an institutional investor, and the latter the perspective of a small investor. The basic models compute portfolios that yield on average a slightly higher return than the portfolios computed with the extended models. However, all generated portfolios yield on average a higher return than the Swiss performance index. There are considerable differences between the four risk measures with respect to the mean realized portfolio return and the standard deviation of the realized portfolio return.
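Two of the small-investor extensions, integral transaction units and fixed per-trade costs, are easy to illustrate in isolation. The sketch below is a hedged stand-in, not one of the paper's four models: it maximises expected return under a budget with integer share counts and a flat trade-activation fee, and it omits the risk constraint (variance, mean absolute deviation, maximum loss, or CVaR) entirely. Prices, expected returns, the fee, and the budget are invented; PuLP/CBC is assumed.

```python
# Sketch: integer transaction units plus a flat per-trade fee (a simple
# stand-in for piecewise-constant transaction costs). Risk constraint omitted.
import pulp

price = {"X": 52.0, "Y": 18.5, "Z": 103.0}   # CHF per share (invented)
mu = {"X": 0.06, "Y": 0.04, "Z": 0.08}       # expected returns (invented)
fee = 9.0                                     # flat fee per executed trade
budget = 10_000.0

m = pulp.LpProblem("small_investor_sketch", pulp.LpMaximize)
n = {a: pulp.LpVariable(f"n_{a}", lowBound=0, cat="Integer") for a in price}
buy = {a: pulp.LpVariable(f"buy_{a}", cat="Binary") for a in price}

for a in price:
    # shares of a can be held only if the trade (and its fee) is activated
    m += price[a] * n[a] <= budget * buy[a]
m += pulp.lpSum(price[a] * n[a] + fee * buy[a] for a in price) <= budget
m += pulp.lpSum(mu[a] * price[a] * n[a] for a in price)   # expected return
m.solve(pulp.PULP_CBC_CMD(msg=False))
print({a: int(n[a].value()) for a in price})
```

In the extended models described above, a risk constraint additionally caps portfolio risk at a prescribed level, and the fee is piecewise constant in the trade volume rather than flat.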
Annual Conference on Computers | 2009
Norbert Trautmann; Philipp Baumann
When project managers determine schedules for resource-constrained projects, they commonly use commercial project management software packages. Which resource-allocation methods are implemented in these packages is proprietary information. The resource-allocation problem is in general computationally difficult to solve to optimality. Hence, the question arises if and how various project management software packages differ in quality with respect to their resource-allocation capabilities. None of the few existing papers on this subject uses a sizeable data set and recent versions of common software packages. We experimentally analyze the resource-allocation capabilities of Acos Plus.1, AdeptTracker Professional, CS Project Professional, Microsoft Office Project 2007, Primavera P6, Sciforma PS8, and Turbo Project Professional. Our analysis is based on 1560 instances of the precedence- and resource-constrained project scheduling problem RCPSP. The experiment shows that using the resource-allocation feature of these packages may increase the project duration by almost 115% relative to the best known feasible schedule. The increase grows with resource scarcity and with the number of activities. We investigate the impact of different complexity scenarios and priority rules on the project duration obtained by the software packages. We provide a decision table to support managers in selecting a software package and a priority rule.
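Packages of this kind are generally believed to rely on priority-rule heuristics, i.e., a schedule-generation scheme that processes activities in priority order. The sketch below shows a serial schedule-generation scheme with the shortest-processing-time rule on a toy RCPSP instance; it is a textbook illustration, not a reconstruction of any package's proprietary method, and the instance data is invented.

```python
# Serial schedule-generation scheme for a toy RCPSP instance with one
# renewable resource, using the shortest-processing-time priority rule.
dur = {1: 3, 2: 2, 3: 4, 4: 2}                 # activity durations
demand = {1: 2, 2: 1, 3: 2, 4: 1}              # resource units required
pred = {1: [], 2: [1], 3: [1], 4: [2, 3]}      # precedence relations
CAP = 3                                         # resource capacity

usage, start = {}, {}

def fits(t, j):
    """Check capacity in every period activity j would occupy from t."""
    return all(usage.get(t + k, 0) + demand[j] <= CAP for k in range(dur[j]))

unscheduled = set(dur)
while unscheduled:
    eligible = [j for j in unscheduled if all(p in start for p in pred[j])]
    j = min(eligible, key=lambda a: dur[a])     # SPT priority rule
    t = max((start[p] + dur[p] for p in pred[j]), default=0)
    while not fits(t, j):                       # earliest resource-feasible start
        t += 1
    start[j] = t
    for k in range(dur[j]):
        usage[t + k] = usage.get(t + k, 0) + demand[j]
    unscheduled.remove(j)

print(start, "makespan:", max(start[j] + dur[j] for j in start))
```

Swapping the key function changes the priority rule, which is exactly the dimension along which the experiment described above compares the packages.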
European Journal of Operational Research | 2014
Philipp Baumann; Norbert Trautmann
Due to the ongoing trend towards increased product variety, fast-moving consumer goods such as food and beverages, pharmaceuticals, and chemicals are typically manufactured through so-called make-and-pack processes. These processes consist of a make stage, a pack stage, and intermediate storage facilities that decouple these two stages. In operations scheduling, complex technological constraints must be considered, e.g., non-identical parallel processing units, sequence-dependent changeovers, batch splitting, no-wait restrictions, material transfer times, minimum storage times, and finite storage capacity. The short-term scheduling problem is to compute a production schedule such that a given demand for products is fulfilled, all technological constraints are met, and the production makespan is minimised. A production schedule typically comprises 500–1500 operations. Due to the problem size and complexity of the technological constraints, the performance of known mixed-integer linear programming (MILP) formulations and heuristic approaches is often insufficient.
IEEE Transactions on Big Data | 2016
Dorit S. Hochbaum; Philipp Baumann
Leading machine learning techniques rely on inputs in the form of pairwise similarities between objects in the data set. The number of pairwise similarities grows quadratically in the size of the data set, which poses a challenge in terms of scalability. One way to achieve practical efficiency for similarity-based techniques is to sparsify the similarity matrix. However, existing sparsification approaches consider the complete similarity matrix and remove some of the non-zero entries. This requires quadratic time and storage and is thus intractable for large-scale data sets. We introduce here a method called sparse computation that generates a sparse similarity matrix which contains only relevant similarities without first computing all pairwise similarities. The relevant similarities are identified by projecting the data onto a low-dimensional space in which groups of objects that share the same grid neighborhood are deemed of potentially high similarity, whereas pairs of objects that do not share a neighborhood are considered to be dissimilar and thus their similarities are not computed. The projection is performed efficiently even for massively large data sets. We apply sparse computation for the K-nearest neighbors algorithm (KNN), for the graph-based machine learning techniques of supervised normalized cut and K-supervised normalized cut (SNC and KSNC), and for support vector machines with radial basis function kernels (SVM) on real-world classification problems. Our empirical results show that the approach achieves a significant reduction in the density of the similarity matrix, resulting in a substantial reduction in tuning and testing times, while having a minimal effect (and often none) on accuracy. The low-dimensional projection is of further use in massively large data sets, where the grid structure makes it easy to identify groups of “almost identical” objects. Such groups of objects are then replaced by representatives, thus reducing the size of the matrix. This approach is effective, as illustrated here for data sets comprising up to 8.5 million objects.
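The projection-and-grid step can be made concrete with a short sketch. The code below is a hedged reconstruction of the idea rather than the authors' implementation: it assumes PCA (via scikit-learn) as the low-dimensional projection, a uniform grid over the projected space, and synthetic data, and it retains a pair only if the two objects fall in the same or an adjacent grid cell.

```python
# Sparse-computation sketch: project, bucket into grid cells, and keep only
# pairs that share a grid neighborhood. Data and parameters are synthetic.
from collections import defaultdict
from itertools import product

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 50))                 # synthetic data set

P = PCA(n_components=3).fit_transform(X)         # low-dimensional projection
res = 10                                         # grid resolution per dimension
lo, hi = P.min(axis=0), P.max(axis=0)
cells = np.minimum(((P - lo) / (hi - lo + 1e-12) * res).astype(int), res - 1)

buckets = defaultdict(list)
for idx, cell in enumerate(map(tuple, cells)):
    buckets[cell].append(idx)

pairs = set()
for cell, members in buckets.items():
    # pair objects only with objects in the same or an adjacent cell
    for offset in product((-1, 0, 1), repeat=3):
        nb = tuple(c + o for c, o in zip(cell, offset))
        for i in members:
            for j in buckets.get(nb, ()):
                if i < j:
                    pairs.add((i, j))

print(f"retained {len(pairs)} of {2_000 * 1_999 // 2} pairwise similarities")
```

Only the retained pairs would then be passed to a similarity-based learner such as KNN, SNC, or an RBF-kernel SVM.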
Industrial Engineering and Engineering Management | 2013
Philipp Baumann; Norbert Trautmann
Industrial Engineering and Engineering Management | 2010
Philipp Baumann; Norbert Trautmann
International Conference on Pattern Recognition Applications and Methods | 2016
Philipp Baumann; Dorit S. Hochbaum; Quico Spaen
Machine learning techniques that rely on pairwise similarities have proven to be leading algorithms for classification. Despite their good and robust performance, similarity-based techniques are rarely chosen for large-scale data mining because the time required to compute all pairwise similarities grows quadratically with the size of the data set. To address this issue of scalability, we introduced a method called sparse computation, which efficiently generates a sparse similarity matrix that contains only significant similarities. Sparse computation achieves significant reductions in running time with minimal and often no loss in accuracy. However, for massively large data sets even such a sparse similarity matrix may lead to considerable running times. In this paper, we propose an extension of sparse computation called sparse-reduced computation that not only avoids computing very low similarities but also avoids computing similarities between highly similar or identical objects by compressing them to a single object. Our computational results show that sparse-reduced computation allows highly accurate classification of data sets with millions of objects in seconds.
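The compression step can be sketched compactly. The snippet below is an assumption-laden illustration, not the authors' algorithm: it groups projected points by a fine grid cell and replaces each group with a weighted centroid; the grid resolution, the use of centroids, and the synthetic data are all choices made here for illustration.

```python
# Sketch of compressing near-identical objects to weighted representatives,
# as in sparse-reduced computation. Grid-based grouping is an assumption.
from collections import defaultdict

import numpy as np

def compress(P, res=30):
    """Group projected points by grid cell; return weighted centroids."""
    lo, hi = P.min(axis=0), P.max(axis=0)
    keys = np.minimum(((P - lo) / (hi - lo + 1e-12) * res).astype(int), res - 1)
    groups = defaultdict(list)
    for idx, key in enumerate(map(tuple, keys)):
        groups[key].append(idx)
    reps = np.array([P[g].mean(axis=0) for g in groups.values()])
    weights = np.array([len(g) for g in groups.values()])
    return reps, weights

P = np.random.default_rng(1).normal(size=(1_000_000, 3)).astype(np.float32)
reps, weights = compress(P)
print(f"{len(P)} objects -> {len(reps)} representatives")
```

Downstream techniques then operate on the representatives, with the weights recording how many original objects each representative stands for.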
Industrial Engineering and Engineering Management | 2010
Norbert Trautmann; Philipp Baumann
We develop an MILP formulation for the short-term scheduling of an industrial make-and-pack plant producing consumer goods. We address a case study which includes sequence-dependent changeover times, heterogeneous storage units with finite capacities, quality release times, batch splitting, partial equipment connectivity, and transfer times. The planning problem consists in minimizing the production makespan while meeting given end-product demands. Our computational results show that for the first time, small instances of the case study can be solved to optimality within short CPU times.
Archive | 2015
Philipp Baumann; Cord-Ulrich Fündeling; Norbert Trautmann
The execution of a project requires resources that are generally scarce. Classical approaches to resource allocation assume that the usage of these resources by an individual project activity is constant during the execution of that activity; in practice, however, the project manager may vary resource usage over time within prescribed bounds. This variation gives rise to the project scheduling problem which consists in allocating the scarce resources to the project activities over time such that the project duration is minimized, the total number of resource units allocated equals the prescribed work content of each activity, and various work-content-related constraints are met. We formulate this problem for the first time as a mixed-integer linear program. Our computational results for a standard test set from the literature indicate that this model outperforms the state-of-the-art solution methods for this problem.
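A drastically simplified time-indexed sketch of the flexible-usage idea follows. It is not the paper's formulation: contiguity of execution and the other work-content-related constraints are omitted, the instance is invented, and PuLP/CBC is assumed. The sketch only shows how per-period usage can vary between bounds while the total must equal the work content.

```python
# Time-indexed sketch: per-period resource usage varies between bounds while
# each activity's total usage equals its work content. Heavily simplified.
import pulp

T = range(12)                        # planning periods
W = {"A": 10, "B": 6}                # work contents (resource-periods)
lo, up = 1, 3                        # usage bounds in periods where active
CAP = 4                              # resource capacity per period

m = pulp.LpProblem("work_content_sketch", pulp.LpMinimize)
u = {(j, t): pulp.LpVariable(f"u_{j}_{t}", lowBound=0) for j in W for t in T}
z = {(j, t): pulp.LpVariable(f"z_{j}_{t}", cat="Binary") for j in W for t in T}
cmax = pulp.LpVariable("cmax", lowBound=0)

for j in W:
    m += pulp.lpSum(u[j, t] for t in T) == W[j]      # work content met exactly
    for t in T:
        m += u[j, t] >= lo * z[j, t]                 # usage bounds when active
        m += u[j, t] <= up * z[j, t]
        m += cmax >= (t + 1) * z[j, t]               # makespan covers activity
for t in T:
    m += pulp.lpSum(u[j, t] for j in W) <= CAP       # per-period capacity
m += cmax                                            # objective: minimise makespan
m.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", cmax.value())
```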