
Publication


Featured research published by Manuel Oriol.


Automated Software Engineering | 2007

Efficient unit test case minimization

Andreas Leitner; Manuel Oriol; Andreas Zeller; Ilinca Ciupa; Bertrand Meyer

Randomized unit test cases can be very effective in detecting defects. In practice, however, failing test cases often comprise long sequences of method calls that are tiresome to reproduce and debug. We present a combination of static slicing and delta debugging that automatically minimizes the sequence of failure-inducing method calls. In a case study on the EiffelBase library, the strategy minimizes failing unit test cases on average by 96%. This approach improves on the state of the art by being far more efficient: in contrast to the approach of Lei and Andrews, who use delta debugging alone, our case study found slicing to be 50 times faster, while providing comparable results. The combination of slicing and delta debugging gives the best results and is 11 times faster.
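The minimization strategy can be illustrated with a small sketch of delta debugging over a call sequence (the static-slicing half of the paper's approach is not shown). The `failing` oracle and the method names below are hypothetical stand-ins for a real failing unit test:

```python
def failing(calls):
    # Hypothetical oracle: the failure needs both "push" and "pop_empty".
    return "push" in calls and "pop_empty" in calls

def ddmin(calls, test):
    """Minimize a failing call sequence: repeatedly drop chunks of calls
    while the shortened sequence still triggers the failure."""
    n = 2  # number of chunks to split the sequence into
    while len(calls) >= 2:
        chunk = len(calls) // n
        reduced = False
        for i in range(n):
            # Try the sequence without the i-th chunk.
            candidate = calls[:i * chunk] + calls[(i + 1) * chunk:]
            if test(candidate):
                calls, n = candidate, max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if n >= len(calls):
                break  # already at single-call granularity
            n = min(n * 2, len(calls))
    return calls

seq = ["new", "push", "size", "clear", "pop_empty", "size"]
print(ddmin(seq, failing))  # → ['push', 'pop_empty']
```

Each probe reruns the test, which is why a slicing pre-pass that shortens the sequence before delta debugging starts pays off on long sequences.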


International Symposium on Software Testing and Analysis | 2007

Experimental assessment of random testing for object-oriented software

Ilinca Ciupa; Andreas Leitner; Manuel Oriol; Bertrand Meyer

Progress in testing requires that we evaluate the effectiveness of testing strategies on the basis of hard experimental evidence, not just intuition or a priori arguments. Random testing, the use of randomly generated test data, is an example of a strategy that the literature often deprecates because of such preconceptions. This view is worth revisiting since random testing otherwise offers several attractive properties: simplicity of implementation, speed of execution, absence of human bias. We performed an intensive experimental analysis of the efficiency of random testing on an existing industrial-grade code base. The use of a large-scale cluster of computers, for a total of 1500 hours of CPU time, allowed a fine-grain analysis of the individual effect of the various parameters involved in the random testing strategy, such as the choice of seed for a random number generator. The results provide insights into the effectiveness of random testing and a number of lessons for testing researchers and practitioners.
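The role of the seed as an experimental parameter can be sketched as follows; the numeric oracle standing in for contract violations is invented for illustration, not taken from the study:

```python
import random

def random_test(seed, calls=1000):
    """One random-testing run of fixed length; only the seed varies.
    Returns the set of (hypothetical) fault-exposing inputs found."""
    rng = random.Random(seed)
    found = set()
    for _ in range(calls):
        x = rng.randrange(10_000)  # stand-in for a random method call
        if x % 271 == 0:           # stand-in for a failure oracle
            found.add(x)
    return found

# Two runs of equal length that differ only in the seed:
a, b = random_test(1), random_test(2)
print(len(a), len(b))  # counts tend to be similar across seeds...
print(a == b)          # ...but the faults found need not be the same
```

Fixing the seed makes each run reproducible, which is what allows the per-parameter analysis the abstract describes.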


Science of Computer Programming | 2003

Coordinating processes with secure spaces

Jan Vitek; Ciarán Bryce; Manuel Oriol

The Linda shared space model and its derivatives provide great flexibility for building parallel and distributed applications composed of independent processes. However, the shared space model does not provide protection against untrustworthy processes. Linda processes communicate by reading and writing messages in a globally visible data space, so a malicious process can launch any number of security attacks. This paper presents the design of a new coordination model which extends Linda with fine-grained access control. The semantics of the model is presented in the context of a process calculus. A prototype of our model, called SECOS, has been implemented in Java.
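The lock-and-key idea behind the model can be sketched with a toy space; the class name, the Linda-style `out`/`inp` operations, and the example entries are assumptions for illustration, not the SECOS API:

```python
class SecureSpace:
    """Toy Linda-like space in which an entry can be locked with a key;
    a process only matches a locked entry by presenting the same key."""

    def __init__(self):
        self._entries = []  # (lock, value) pairs; lock None means public

    def out(self, value, key=None):
        """Write a value into the space, optionally locked with a key."""
        self._entries.append((key, value))

    def inp(self, pattern, key=None):
        """Withdraw the first entry that matches `pattern` and that the
        presented key unlocks; return None when nothing matches."""
        for i, (lock, value) in enumerate(self._entries):
            if lock in (None, key) and pattern(value):
                del self._entries[i]
                return value
        return None

space = SecureSpace()
space.out(("balance", 100), key="secret")
# A process without the key cannot even see the locked entry:
print(space.inp(lambda v: v[0] == "balance"))            # None
print(space.inp(lambda v: v[0] == "balance", "secret"))  # ('balance', 100)
```

The point of the design is that access control lives in the matching rule itself, so no global monitor of the space is needed.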


International Conference on Software Testing, Verification, and Validation | 2008

On the Predictability of Random Tests for Object-Oriented Software

Ilinca Ciupa; Alexander Pretschner; Andreas Leitner; Manuel Oriol; Bertrand Meyer

Intuition suggests that random testing of object-oriented programs should exhibit a significant difference in the number of faults detected by two different runs of equal duration. As a consequence, random testing would be rather unpredictable. We evaluate the variance of the number of faults detected by random testing over time. We present the results of an empirical study that is based on 1215 hours of randomly testing 27 Eiffel classes, each with 30 seeds of the random number generator. Analyzing over 6 million failures triggered during the experiments, the study provides evidence that the relative number of faults detected by random testing over time is predictable but that different runs of the random test case generator detect different faults. The study also shows that random testing quickly finds faults: the first failure is likely to be triggered within 30 seconds.


International Conference on Coordination Models and Languages | 1999

A Coordination Model for Agents Based on Secure Spaces

Ciarán Bryce; Manuel Oriol; Jan Vitek

Shared space coordination models such as Linda are ill-suited for structuring applications composed of erroneous or insecure components. This paper presents the Secure Object Space model. In this model, a data element can be locked with a key and is only visible to a process that presents a matching key to unlock the element. We give a precise semantics for Secure Object Space operations and discuss an implementation in Java for a mobile agent system. An implementation of the semantics that employs encryption is also outlined for use in untrusted environments.


International Conference on Software Testing, Verification and Validation Workshops | 2010

YETI on the Cloud

Manuel Oriol; Faheem Ullah

The York Extensible Testing Infrastructure (YETI) is an automated random testing tool that can test programs written in various programming languages. While YETI is one of the fastest random testing tools, with over a million method calls per minute on fast code, testing large programs or slow code -- such as libraries that use memory intensively -- might benefit from parallel execution of testing sessions. This paper presents the cloud-enabled version of YETI. It relies on the Hadoop package and its map/reduce implementation to distribute tasks over potentially many computers, making it possible to distribute the cloud version of YETI over Amazon's Elastic Compute Cloud (EC2).


Technical Symposium on Computer Science Education | 2007

Open source projects in programming courses

Michela Pedroni; Till G. Bay; Manuel Oriol; Andreas Pedroni

One of the main shortcomings of programming courses is the lack of practice with real-world systems. As a result, students feel unprepared for industry jobs. In parallel, open source software accepts contributions even from inexperienced programmers and achieves software that competes in both quality and functionality with industrial systems. This article describes: first, a setting in which students were required to contribute to existing open source software; second, the evaluation of this experience using a motivation measuring technique; and third, an analysis of the efficiency and commitment of students over time. The study shows that students are at first afraid of failing the assignment, but end up with the impression of a greater achievement. It also shows that students are inclined to keep working on the project to which they contributed after the end of the course.


IEEE International Conference on Cloud Computing Technology and Science | 2012

Security risks and their management in cloud computing

Afnan Ullah Khan; Manuel Oriol; Mariam Kiran; Ming Jiang; Karim Djemame

Cloud computing provides outsourcing of resources, bringing economic benefits. Outsourcing, however, does not transfer the responsibility for confidentiality, integrity and access control: these remain with the data owner. As cloud computing is transparent to both programmers and users, it induces challenges that were not present in previous forms of distributed computing. Furthermore, cloud computing enables its users to abstract away from low-level configuration such as configuring IP addresses and routers, creating the illusion that this entire configuration is automated. The illusion extends to security services, for instance automating security policies and access control in the cloud, so that individuals or end-users only perform very high-level (business-oriented) configuration. This paper investigates the security challenges posed by the transparency of distribution, abstraction of configuration and automation of services by performing a detailed threat analysis of cloud computing across its different deployment scenarios (private, bursting, federation or multi-clouds). It also presents a risk inventory that documents the identified security threats in terms of availability, integrity and confidentiality for cloud infrastructures. Finally, we propose a methodology for performing security risk assessment of cloud computing architectures and present some initial results.


International Workshop on Hot Topics in Software Upgrades | 2009

Dynamic software updates for real-time systems

Michael Wahler; Stefan Richter; Manuel Oriol

Seamlessly updating software in running systems has recently gained momentum. Dynamically updating the software of real-time embedded systems, however, still poses numerous challenges: such systems must meet hard deadlines, cope with limited resources, and adhere to high safety standards. This paper presents a solution for updating component-based cyclic embedded systems without violating real-time constraints. In particular, it investigates how to identify points in time at which updates can be performed and how to transfer the state of a component to a new version of the same component. We also present experimental results to validate the proposed solution.
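The state-transfer step can be sketched under simplifying assumptions: two versions of a cyclic component, an update applied only between cycles (a safe point), and shared fields copied by name. The classes and fields are hypothetical, not the paper's component model:

```python
class ControllerV1:
    """Old version of a cyclic component."""
    def __init__(self):
        self.count = 0
    def cycle(self, x):
        self.count += 1
        return x * 2

class ControllerV2:
    """New version: same behavior plus an extra field."""
    def __init__(self):
        self.count = 0
        self.last = None  # field introduced by the update
    def cycle(self, x):
        self.count += 1
        self.last = x
        return x * 2

def transfer_state(old, new):
    """Copy every field both versions share; new fields keep defaults."""
    for name, value in vars(old).items():
        if name in vars(new):
            setattr(new, name, value)
    return new

# Run some cycles, then swap versions between two cycles (a safe point):
comp = ControllerV1()
for x in (1, 2, 3):
    comp.cycle(x)
comp = transfer_state(comp, ControllerV2())
print(comp.count)     # 3: state carried over the update
print(comp.cycle(4))  # 8: the new version continues the control loop
```

In a real-time setting the copy itself must also fit into the slack of one cycle, which is why the paper cares about when the update runs, not only how the state is mapped.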


Software Testing, Verification & Reliability | 2011

On the number and nature of faults found by random testing

Ilinca Ciupa; Alexander Pretschner; Manuel Oriol; Andreas Leitner; Bertrand Meyer

Intuition suggests that random testing should exhibit a considerable difference in the number of faults detected by two different runs of equal duration. As a consequence, random testing would be rather unpredictable. This article first evaluates the variance over time of the number of faults detected by randomly testing object-oriented software that is equipped with contracts. It presents the results of an empirical study based on 1215 h of randomly testing 27 Eiffel classes, each with 30 seeds of the random number generator. The analysis of over 6 million failures triggered during the experiments shows that the relative number of faults detected by random testing over time is predictable, but that different runs of the random test case generator detect different faults. The experiment also suggests that random testing quickly finds faults: the first failure is likely to be triggered within 30 s. The second part of this article evaluates the nature of the faults found by random testing. To this end, it first explains a fault classification scheme, which is also used to compare the faults found through random testing with those found through manual testing and with those found in field use of the software and recorded in user incident reports. The results of the comparisons show that each technique is good at uncovering different kinds of faults. None of the techniques subsumes any of the others; each brings distinct contributions. This supports a more general conclusion on comparisons between testing strategies: the number of detected faults is too coarse a criterion for such comparisons; the nature of faults must also be considered.

Collaboration


Dive into Manuel Oriol's collaborations.

Top Co-Authors

Gunter Saake

Otto-von-Guericke University Magdeburg
