Wolfgang Hommel
Bundeswehr University Munich
Publications
Featured research published by Wolfgang Hommel.
International Conference on Communications | 2005
Wolfgang Hommel
With Federated Identity Management (FIM) protocols, service providers can request user attributes, such as the billing address, from the user's identity provider. Access to this information is managed using so-called Attribute Release Policies (ARPs). In this paper, we first analyze various shortcomings of existing ARP implementations; we then demonstrate that the eXtensible Access Control Markup Language (XACML) is well suited to the task. We present an architecture for integrating XACML ARPs into SAML-based identity providers and specify the policy evaluation workflows. We also introduce our implementation and its integration into the Shibboleth architecture.
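The paper's implementation uses XACML policies inside a Shibboleth-based identity provider; as a rough, hypothetical illustration of what an attribute release decision boils down to, the following Python sketch evaluates invented deny-by-default release rules before answering a service provider's attribute request (the rule structure, names, and data are made up, not taken from the paper).

```python
# Hypothetical sketch of attribute release policy (ARP) evaluation.
# Real deployments use XACML policies and a SAML/Shibboleth IdP;
# the rule structure below is invented for illustration only.

from dataclasses import dataclass

@dataclass
class ReleaseRule:
    service_provider: str   # SP entity ID the rule applies to ("*" = any)
    attribute: str          # user attribute, e.g. "billingAddress"
    permit: bool            # release the attribute or not

def evaluate_arp(rules, sp, requested_attributes, user_attributes):
    """Return only those attributes the ARP permits for this SP.

    Deny-by-default: an attribute is released only if at least one
    rule matches and no matching rule denies it.
    """
    released = {}
    for attr in requested_attributes:
        matching = [r for r in rules
                    if r.attribute == attr
                    and r.service_provider in (sp, "*")]
        if matching and all(r.permit for r in matching) and attr in user_attributes:
            released[attr] = user_attributes[attr]
    return released

rules = [
    ReleaseRule("https://shop.example.org", "billingAddress", True),
    ReleaseRule("*", "email", True),
    ReleaseRule("*", "dateOfBirth", False),
]
user = {"billingAddress": "Munich", "email": "a@b.de", "dateOfBirth": "1980-01-01"}
print(evaluate_arp(rules, "https://shop.example.org",
                   ["billingAddress", "email", "dateOfBirth"], user))
# -> {'billingAddress': 'Munich', 'email': 'a@b.de'}
```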
Sixth International Conference on IT Security Incident Management and IT Forensics | 2011
Stefan Metzger; Wolfgang Hommel; Helmut Reiser
We present a holistic, process-oriented approach to ISO/IEC 27001-compliant security incident management that integrates multiple state-of-the-art security tools and has been applied very successfully to a real-world scenario for one year so far. It enables the computer security incident response team (CSIRT) to correlate IT-security-related events across multiple communication channels and thus to classify incidents consistently. Depending on an incident's classification, manual intervention or even fully automated reaction steps can be triggered; these start with simple email notifications to system and network administrators and scale up to automatically quarantining compromised systems and subnetworks. A formally specified security incident response (SIR) process serves as the basis; it clearly defines responsibilities, workflows, and interfaces, and has been designed to enable quick reactions to IT security events in a very resource-conserving manner.
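As a toy illustration of classification-driven reactions (not the actual CSIRT tooling described in the paper), the following Python sketch maps an invented incident classification to escalating reaction steps, from email notification up to quarantining a host; all names and the correlation rule are assumptions.

```python
# Illustrative sketch (not the paper's implementation) of how an
# incident's classification could drive escalating reactions, from
# notifying administrators up to quarantining a compromised host.

def notify_admins(incident):
    print(f"[mail] notifying admins about {incident['id']}")

def quarantine(incident):
    print(f"[net] quarantining host {incident['host']}")

# Reaction steps per severity class; higher classes include
# everything the lower ones trigger.
REACTIONS = {
    "low":      [notify_admins],
    "critical": [notify_admins, quarantine],
}

def classify(events):
    """Toy correlation rule: events from several channels hitting the
    same host are treated as one critical incident."""
    hosts = [e["host"] for e in events]
    return "critical" if len(set(hosts)) < len(hosts) else "low"

def handle(events):
    incident = {"id": "INC-1", "host": events[0]["host"],
                "class": classify(events)}
    for step in REACTIONS[incident["class"]]:
        step(incident)

handle([{"host": "10.0.0.5", "source": "ids"},
        {"host": "10.0.0.5", "source": "netflow"}])
```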
Computational Science and Engineering | 2009
Latifa Boursas; Wolfgang Hommel
Large-scale distributed systems that are provided in an inter-organizational manner require scalable, dynamic access control paradigms for dealing with known as well as new users. This paper presents an innovative, trust-management-based approach for such scenarios. It aggregates and evaluates trust levels from several trust dimensions and has been implemented as a trust-based access control (TBAC) framework using several standard components.
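A minimal sketch of the TBAC idea, under the assumption that trust values from several dimensions are combined by a weighted average and compared against a resource-specific threshold (the dimensions, weights, and threshold below are invented; the paper's actual aggregation may differ):

```python
# Minimal TBAC sketch: trust values from several dimensions are
# aggregated into one level and compared against a threshold.
# Dimensions and weights are invented for illustration.

WEIGHTS = {"direct_experience": 0.5, "reputation": 0.3, "recommendation": 0.2}

def aggregate_trust(scores):
    """Weighted average over the trust dimensions present in `scores`,
    so unknown users can still be rated on the dimensions available."""
    total = sum(WEIGHTS[d] * v for d, v in scores.items())
    norm = sum(WEIGHTS[d] for d in scores)
    return total / norm if norm else 0.0

def access_granted(scores, threshold):
    return aggregate_trust(scores) >= threshold

new_user = {"reputation": 0.7, "recommendation": 0.9}   # no direct history yet
print(aggregate_trust(new_user))        # -> 0.78
print(access_granted(new_user, 0.6))    # -> True
```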
IEEE International Conference on Services Computing | 2010
Mark Yampolskiy; Wolfgang Hommel; Patricia Marcu; Matthias K. Hamm
The growing significance of international collaborations in research, education, and business has raised the demand for assuring the quality of the network connections on which projects and applications are realized. A large spectrum of examples with diverse requirements is found in areas such as grid and cloud computing, eLearning, and video conferencing. These diverse project and application requirements culminate in the urgent necessity to provide an End-to-End (E2E) guarantee for any customer-specific or user-tailored combination of service-specific Quality of Service (QoS) parameters. The quality of the overall network connection provided to users obviously depends directly on the quality of the involved connection parts, so the quality of the available connection parts has to be considered already during the setup negotiation process. Especially for international connections it is common that multiple independent service providers (SPs) realize different connection segments. This means, in turn, that during the exchange of information about available connection parts not only the technical challenges have to be solved, but also the preferences and restrictions of the involved provider domains must be considered. In this paper we present a novel information model for the description of such connections. In the proposed model, a multi-domain view is derived from the single-domain perspectives of each participating SP. This model serves as a sound basis for an end-to-end routing algorithm that considers multiple user-specific QoS parameters in parallel. The proposed model also accounts for the typically very restrictive SP information policies.
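To make the modeling idea concrete, here is a small, hypothetical Python sketch: each provider exposes only an abstracted view of its segment, and a multi-domain E2E view is stitched together by chaining segments at their border points and combining QoS values (bandwidth as the path minimum, delay as the sum). The data layout and values are invented for illustration, not taken from the paper's model.

```python
# Hypothetical sketch of deriving a multi-domain view from
# single-domain segment descriptions. Each SP publishes only what
# its information policy allows; names and values are invented.

from dataclasses import dataclass

@dataclass
class SegmentView:
    provider: str
    ingress: str   # border point where the segment starts
    egress: str    # border point where the segment ends
    qos: dict      # only parameters the SP's policy allows to expose

def chain(segments, start):
    """Order segments so each egress matches the next ingress."""
    by_ingress = {s.ingress: s for s in segments}
    path, point = [], start
    while point in by_ingress:
        seg = by_ingress[point]
        path.append(seg)
        point = seg.egress
    return path

def e2e_qos(path):
    """Combine per-segment QoS: bandwidth is the path minimum,
    delay is additive along the path."""
    return {
        "bandwidth_mbps": min(s.qos["bandwidth_mbps"] for s in path),
        "delay_ms": sum(s.qos["delay_ms"] for s in path),
    }

segs = [
    SegmentView("GEANT", "MUC", "AMS", {"bandwidth_mbps": 10000, "delay_ms": 8}),
    SegmentView("DFN",   "TUM", "MUC", {"bandwidth_mbps": 1000,  "delay_ms": 2}),
]
path = chain(segs, start="TUM")
print([s.provider for s in path])  # -> ['DFN', 'GEANT']
print(e2e_qos(path))               # -> {'bandwidth_mbps': 1000, 'delay_ms': 10}
```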
IEEE Internet Computing | 2010
Mark Yampolskiy; Wolfgang Hommel; Bernhard Lichtinger; Wolfgang Fritz; Matthias K. Hamm
The growing number of international collaborations in research, education, and business has once again raised the demand for quality assurance of the network connections on which projects and applications are realized. A large spectrum of examples with diverse requirements is found in areas such as grid and cloud computing, eLearning, video on demand, and video conferencing. These diverse project and application requirements culminate in the urgent necessity to provide an End-to-End (E2E) guarantee for any user-tailored combination of service-specific Quality of Service (QoS) parameters. The quality of the overall network connection provided to users depends directly on the quality of the involved connection parts, so the quality of available connection parts has to be considered already during the routing process. Especially for international connections it is common that multiple service providers (SPs) realize different connection segments. At the same time, inter-domain routing is driven mostly by a combination of business interests and restrictive information policies. This means that during routing not only the optimality of the available connection parts has to be considered, but also the preferences and restrictions of the involved provider domains. In this paper, we present an inter-domain routing algorithm for determining the E2E route for dedicated point-to-point connections. The proposed algorithm considers both the E2E user requirements for service quality and the service provider constraints. It is not restricted to a single quality parameter and can therefore be used to establish connections with a user-tailored combination of connection properties, covering service quality as well as connection management functionality.
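The following Python sketch is a generic multi-constraint route search, not the paper's algorithm: it explores a small invented domain graph breadth-first and prunes any partial route that already violates one of the user's QoS requirements, which shows how several parameters can be honored in parallel rather than optimizing a single metric.

```python
# Hedged sketch of multi-constraint route search (not the published
# algorithm): breadth-first exploration of a domain graph, pruning
# partial routes that violate a user QoS requirement or a provider
# restriction. Graph, metrics, and constraints are invented examples.

from collections import deque

EDGES = {
    "A": [("B", {"delay_ms": 5, "bandwidth_mbps": 1000}),
          ("C", {"delay_ms": 2, "bandwidth_mbps": 100})],
    "B": [("D", {"delay_ms": 4, "bandwidth_mbps": 1000})],
    "C": [("D", {"delay_ms": 3, "bandwidth_mbps": 100})],
}

def feasible(acc, req):
    """Check every constraint in parallel, not just one metric."""
    return (acc["delay_ms"] <= req["max_delay_ms"]
            and acc["bandwidth_mbps"] >= req["min_bandwidth_mbps"])

def find_routes(src, dst, req, banned=frozenset()):
    """All loop-free routes satisfying all constraints simultaneously."""
    routes = []
    queue = deque([([src], {"delay_ms": 0, "bandwidth_mbps": float("inf")})])
    while queue:
        path, acc = queue.popleft()
        if path[-1] == dst:
            routes.append((path, acc))
            continue
        for nxt, qos in EDGES.get(path[-1], []):
            if nxt in path or nxt in banned:   # no loops, respect SP policies
                continue
            nacc = {"delay_ms": acc["delay_ms"] + qos["delay_ms"],
                    "bandwidth_mbps": min(acc["bandwidth_mbps"],
                                          qos["bandwidth_mbps"])}
            if feasible(nacc, req):            # prune infeasible routes early
                queue.append((path + [nxt], nacc))
    return routes

print(find_routes("A", "D", {"max_delay_ms": 10, "min_bandwidth_mbps": 500}))
# -> [(['A', 'B', 'D'], {'delay_ms': 9, 'bandwidth_mbps': 1000})]
```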
Archive | 2010
Latifa Boursas; Ralf Ebner; Wolfgang Hommel; Silvia Knittl; Daniel Pluta
Conceived as the central technical subproject, the Directory Service subproject (TP Verzeichnisdienst) designs, implements, and operates a university-wide Identity & Access Management (I&AM) system that supplies a multitude of connected systems and IT services with current, authoritative data about all TUM users relevant to them. In the process, architectures and tools transferable to other universities were created, along with an instance tailored very precisely to the processes and requirements of TUM, which has been continuously improved and further developed based on the experience gained during very successful production operation. This article presents the subproject's objectives, the technical architecture of the I&AM system, selected aspects of the integration with university processes, implementation, migration, and operational aspects, as well as the comprehensive knowledge-transfer activities of TP Verzeichnisdienst.
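As a hypothetical sketch of the central provisioning idea (the interfaces and names below are invented, not TUM's actual components), connected systems could register the attributes they need and receive authoritative updates pushed by the central directory:

```python
# Invented sketch of central I&AM provisioning: connected systems
# subscribe to the attributes they need, and the central directory
# pushes authoritative updates to each of them.

class ConnectedSystem:
    def __init__(self, name, needed_attributes):
        self.name = name
        self.needed = needed_attributes

    def receive(self, user_id, attributes):
        # Each system only sees the attribute subset it registered for.
        view = {k: v for k, v in attributes.items() if k in self.needed}
        print(f"{self.name}: update {user_id} -> {view}")

class IdentityDirectory:
    def __init__(self):
        self.subscribers = []

    def connect(self, system):
        self.subscribers.append(system)

    def update_user(self, user_id, attributes):
        """Propagate an authoritative change to every connected system."""
        for system in self.subscribers:
            system.receive(user_id, attributes)

directory = IdentityDirectory()
directory.connect(ConnectedSystem("e-learning", {"name", "email"}))
directory.connect(ConnectedSystem("library", {"name", "affiliation"}))
directory.update_user("u123", {"name": "A. Muster", "email": "a@tum.de",
                               "affiliation": "student"})
```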
International Journal of Network Management | 2012
Mark Yampolskiy; Wolfgang Hommel; Feng Liu; Ralf König; Martin G. Metzker; Michael Schiffers
The Internet is a platform providing connection channels for various services. Whereas for services like email the best-effort nature of the Internet can be considered sufficient, other services strongly depend on service-specific connection quality parameters. This quality dependence has led to dedicated content distribution networks as a workaround for services like YouTube, but such workarounds are applicable to only a small number of services. Given the global reach of the Internet, the impact of poor quality of service ranges from annoyance due to jitter in VoIP communication to endangering human lives in telemedicine applications. Thus, network connections with end-to-end quality guarantees are indispensable for various existing and evolving services. In this paper we consider point-to-point multi-domain network connections for which the end-to-end quality has to be assured. Our contribution includes a general classification of fault cases and of countermeasures against end-to-end performance degradation. By correlating events with reasonable countermeasures, this work provides the foundation for quality assurance during the operation phase of end-to-end connections. We put our contribution in the context of a vision of global-goal-aware self-adaptation in computer networks and outline further research areas that require a classification similar to the one provided here.
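As a toy illustration of correlating fault classes with countermeasures (the classes, thresholds, and actions below are invented, not the article's actual classification):

```python
# Invented sketch of mapping monitoring events to fault classes and
# fault classes to countermeasures, in the spirit of correlating
# events with reasonable reactions during connection operation.

FAULT_CLASSES = {
    "link_down":       ["reroute over backup segment", "notify segment owner"],
    "qos_degradation": ["increase priority queue share", "renegotiate with SP"],
    "device_failure":  ["fail over to redundant device"],
}

def classify_event(event):
    """Toy classifier mapping raw monitoring events to fault classes."""
    if event.get("loss_pct", 0) == 100:
        return "link_down"
    if event.get("jitter_ms", 0) > 20:
        return "qos_degradation"
    return "device_failure"

def countermeasures(event):
    return FAULT_CLASSES[classify_event(event)]

print(countermeasures({"loss_pct": 100}))   # total loss on a segment
print(countermeasures({"jitter_ms": 35}))   # degraded E2E quality
```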
Archive | 2010
Stephan Graf; Wolfgang Hommel
In service-oriented architectures, conventional web-service-based point-to-point communication is increasingly being replaced by an Enterprise Service Bus (ESB), which realizes secure and reliable message transport. However, the scope of an ESB ends at the boundaries of the institution deploying it. In this article we analyze current challenges in the inter-organizational management of security metadata, which notably includes server certificates and privacy policies. Federated identity management within the German higher-education federation DFN-AAI serves as the concrete scenario. As a standards-based, uniform solution that integrates proprietary as well as metadata-type-specific approaches and reduces the associated administration effort, we propose an inter-organizational ESB, which we call the Federation Service Bus (FedSB). We discuss its technical properties, the underlying communication model, and the organizational steps for its introduction.
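A minimal publish/subscribe sketch of the FedSB idea, assuming an invented API and invented message types: a member organization publishes a security metadata update once, and every subscribed member receives it.

```python
# Invented publish/subscribe sketch of a federation-wide bus for
# security metadata (e.g. a renewed server certificate). The message
# format and API are assumptions, not the article's specification.

from collections import defaultdict

class FederationServiceBus:
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, metadata_type, handler):
        self.handlers[metadata_type].append(handler)

    def publish(self, metadata_type, payload):
        """Deliver one metadata update to every subscribed member,
        replacing pairwise point-to-point distribution."""
        for handler in self.handlers[metadata_type]:
            handler(payload)

bus = FederationServiceBus()
bus.subscribe("server-certificate",
              lambda m: print(f"IdP updates trust store: {m['subject']}"))
bus.subscribe("privacy-policy",
              lambda m: print(f"SP stores new policy from {m['org']}"))

bus.publish("server-certificate",
            {"subject": "CN=sp.example.org", "pem": "...", "org": "Uni X"})
```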
International Journal of Computer Networks & Communications | 2011
Patricia Marcu; Wolfgang Hommel
Outsourcing – successful, and sometimes painful – has become one of the hottest topics in IT service management discussions over the past decade. IT services are outsourced to external service providers in order to reduce the effort and overhead of delivering these services within one's own organization. More recently, IT service providers themselves have started either to outsource service parts or to deliver those services in a non-hierarchical cooperation with other providers. Splitting a service into several parts is a non-trivial task, as the parts have to be implemented, operated, and maintained by different providers. One key aspect of such inter-organizational cooperation is fault management, because it is crucial to locate and solve problems that reduce the quality of service quickly and reliably. In this article we present the results of a thorough, use-case-based requirements analysis for an architecture for inter-organizational fault management (ioFMA). Furthermore, a concept for the organizational and functional model of ioFMA is given.
International Journal of Computer Networks & Communications | 2011
Patricia Marcu; David Schmitz; Wolfgang Fritz; Mark Yampolskiy; Wolfgang Hommel
Novel large-scale research projects often require cooperation between various project partners spread around the world. They not only need huge computing resources, but also a reliable network to operate on. The Large Hadron Collider (LHC) at CERN is a representative example of such a project: its experiments produce a vast amount of data that is of interest to researchers around the world. For transporting the data from CERN to 11 data processing and storage sites, an optical private network (OPN) has been constructed. As the experiment data is highly valuable, the LHC imposes very high requirements on the underlying network infrastructure. In order to fulfil those requirements, the connections have to be managed and monitored permanently. In this paper, we present the integrated monitoring solution developed for the LHCOPN. We first outline the requirements and show how they are met on the individual network layers. After that, we describe how those individual measurements can be combined into an integrated view. We cover design concepts as well as tool implementation highlights.
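As a toy sketch of combining per-layer measurements into one integrated status (layers, metrics, and thresholds are invented for illustration), a bottom-up check can report the lowest failing layer, which is where diagnosis would naturally start:

```python
# Invented sketch of an integrated multi-layer link status: a link is
# "up" only if every layer passes its check; otherwise the lowest
# failing layer is reported as the starting point for diagnosis.

LAYER_CHECKS = {
    "optical": lambda m: m["light_level_dbm"] > -25,
    "ip":      lambda m: m["packet_loss_pct"] < 0.1,
    "e2e":     lambda m: m["throughput_gbps"] >= 8,
}

def integrated_status(measurements):
    for layer in ("optical", "ip", "e2e"):   # check bottom-up
        if not LAYER_CHECKS[layer](measurements[layer]):
            return f"degraded (first failure at {layer} layer)"
    return "up"

print(integrated_status({
    "optical": {"light_level_dbm": -20},
    "ip":      {"packet_loss_pct": 0.02},
    "e2e":     {"throughput_gbps": 9.3},
}))  # -> up
```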