Martin Oberhofer
IBM
Publications
Featured research published by Martin Oberhofer.
International Journal of Information Quality | 2014
Philip Woodall; Martin Oberhofer; Alexander Borek
Data quality (DQ) assessment and improvement in larger information systems would often not be feasible without suitable ‘DQ methods’: algorithms that can be executed automatically by computer systems to detect and/or correct problems in datasets. These methods are already essential, and they will become even more important as the quantity of data in organisational systems grows. This paper reviews existing methods for both DQ assessment and improvement and classifies them according to the DQ problem and the problem context. Six gaps have been identified in the classification where no current DQ methods exist; these show where new methods are required and serve as a guide for future research and DQ tool development.
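As an illustrative sketch only (not taken from the paper), a ‘DQ method’ in the sense used here is a small, automatically executable routine that both assesses and improves a dataset; the "phone" field and the normalisation rule below are hypothetical examples:

import re

# Hedged sketch of a single DQ method: detect invalid phone numbers in a
# record set and auto-correct the trivially fixable ones. The field name
# and cleansing rule are illustrative assumptions, not from the paper.
PHONE_RE = re.compile(r"^\+?\d{7,15}$")

def assess_and_improve(records):
    """Return (indices flagged as defective, indices auto-corrected)."""
    flagged, corrected = [], []
    for i, rec in enumerate(records):
        raw = rec.get("phone", "")
        cleaned = re.sub(r"[\s\-()/.]", "", raw)  # strip common separators
        if PHONE_RE.match(cleaned):
            if cleaned != raw:
                rec["phone"] = cleaned   # improvement: correct in place
                corrected.append(i)
        else:
            flagged.append(i)            # assessment: flag for a data steward
    return flagged, corrected

records = [{"phone": "+49 (711) 123-4567"}, {"phone": "n/a"}]
print(assess_and_improve(records))  # -> ([1], [0])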
Information Technology | 2012
Albert Maier; Martin Oberhofer; Thomas J. E. Schwarz
Data integration is essential for the success of many enterprise business initiatives, but it is also a significant contributor to the costs and risks of the IT projects supporting those initiatives. Highly skilled consultants and data stewards re-design the usage of data in business processes, define the target landscape and its data models, and map the current information landscape into the target landscape. Still, the largest part of a typical data integration effort is dedicated to implementing transformation, cleansing, and data validation logic in robust, high-performance commercial systems. This work is conceptually simple and demands no skills beyond commercial product knowledge, but it is very labour-intensive and error-prone. In this paper we describe a new commercial approach to data integration that helps to “industrialize” data integration projects and significantly reduces the amount of simple but labour-intensive work. The key idea is that the target landscape of a data integration project has pre-defined data models and associated metadata, which can be leveraged to build and automate the data integration process. This approach has been implemented in the context of SAP consolidation projects and is used in some of the largest data integration projects worldwide.
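As a hedged illustration only (the approach itself is a commercial offering whose internals are not described here), the sketch below suggests how pre-defined target-model metadata could drive generated validation checks instead of hand-coded ones; the schema, field names, and rules are hypothetical:

# Hedged sketch: validation logic derived from target-model metadata rather
# than written by hand. The table, fields, and constraints are assumptions.
TARGET_MODEL = {
    "CUSTOMER": {
        "ID":   {"type": int, "required": True},
        "NAME": {"type": str, "required": True, "max_len": 35},
        "LAND": {"type": str, "required": False, "max_len": 3},
    }
}

def validate(table, row):
    """Check one row against the metadata of the pre-defined target model."""
    errors = []
    for field, meta in TARGET_MODEL[table].items():
        value = row.get(field)
        if value is None:
            if meta["required"]:
                errors.append(f"{field}: missing required value")
            continue
        if not isinstance(value, meta["type"]):
            errors.append(f"{field}: expected {meta['type'].__name__}")
        elif meta.get("max_len") and len(str(value)) > meta["max_len"]:
            errors.append(f"{field}: exceeds {meta['max_len']} characters")
    return errors

print(validate("CUSTOMER", {"ID": 1, "NAME": "ACME", "LAND": "DEUX"}))
# -> ['LAND: exceeds 3 characters']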
International Journal of Organizational and Collective Intelligence | 2017
Sushain Pandit; Ivan Matthew Milman; Martin Oberhofer; Yinle Zhou
Most large enterprises running operational business processes utilize several thousand instances of legacy, upgraded, cloud-based, and/or acquired information management applications. With the advent of Big Data, business intelligence (BI) systems receive unconsolidated data from a wide range of data sources with no overarching governance procedures to ensure quality and consistency. Although different applications deal with their own flavor of data, reference data is found in all of them. Given the critical role that BI plays in ensuring business success, the fact that BI relies heavily on data quality to ensure that the intelligence it provides is trustworthy, and the prevalence of reference data across the information integration landscape, a principled approach to the management, stewardship, and governance of reference data becomes necessary to ensure quality and operational excellence across BI systems. The authors discuss this approach in the context of typical reference data management concepts and features, leading to a comprehensive solution architecture for BI integration.
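As a minimal sketch of the reference data problem described above (the code sets, mapping tables, and function are hypothetical, not the authors' solution architecture), application-local reference codes can be translated to one governed canonical code set before BI consolidation:

# Hedged sketch: each source application carries its own "flavor" of the
# same reference data; a governed mapping reconciles them for BI use.
CANONICAL_COUNTRY = {"DE": "Germany", "US": "United States"}

SOURCE_MAPPINGS = {
    "crm": {"GER": "DE", "USA": "US"},   # hypothetical CRM code set
    "erp": {"276": "DE", "840": "US"},   # hypothetical numeric code set
}

def to_canonical(source, local_code):
    """Translate an application-local code to the governed canonical code."""
    canonical = SOURCE_MAPPINGS[source].get(local_code)
    if canonical is None or canonical not in CANONICAL_COUNTRY:
        raise ValueError(f"unmapped reference value: {source}/{local_code}")
    return canonical

# BI integration: rows from both systems land on one consistent code set.
print(to_canonical("crm", "GER"))  # -> 'DE'
print(to_canonical("erp", "840"))  # -> 'US'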
Archive | 2013
Dan J. Mandelstein; Ivan Matthew Milman; Martin Oberhofer; Sushain Pandit; Daniel C. Wolfson
Archive | 2008
Martin Oberhofer; Albert Maier; Thomas Schwarz; Sebastian Krebs; Dirk Nowak
Archive | 2012
Bhavani K. Eshwar; Martin Oberhofer; Sushain Pandit
Archive | 2011
Anja Gruenheid; Albert Maier; Martin Oberhofer; Thomas Schwarz; Manfred Vodegel
Archive | 2010
Mario Godinez; Eberhard Hechler; Klaus Koenig; Steve Lockwood; Martin Oberhofer; Michael Schroeck
Archive | 2009
Geetika T. Lakshmanan; Martin Oberhofer
Archive | 2011
Martin Oberhofer; Yannick Saillet; Jens Seifert