I.S.W.B. Prasetya
Utrecht University
Publications
Featured research published by I.S.W.B. Prasetya.
Proceedings of the Eighth International Workshop on Search-Based Software Testing | 2015
Urko Rueda; Tanja E. J. Vos; I.S.W.B. Prasetya
This paper describes the third round of the Java Unit Testing Tool Competition. This edition of the contest evaluates no fewer than seven automated testing tools. As in the second round, test suites written by human testers are also used for comparison. This paper contains the full results of the evaluation.
Tools and Algorithms for the Construction and Analysis of Systems | 1997
I.S.W.B. Prasetya
This paper investigates self-stabilization on hierarchically divided networks. An underlying theory of self-stabilizing systems is briefly presented, along with a generic example. Both the theory and the example have been mechanically verified using the general-purpose theorem prover HOL. Three issues inherent to the problem, namely self-stabilization, concurrency, and hierarchy, can be factored out and treated separately, which has considerably simplified our mechanical proof. Proof economy is an important issue in mechanical verification, even more than in the pencil-and-paper realm, as what misleadingly appears as a few lines there may easily become a few hundred in the mechanical world.
International Conference on Software Testing, Verification and Validation Workshops | 2013
I.S.W.B. Prasetya
T2 is a lightweight, on-the-fly, random-based automated testing tool for Java classes. This paper presents its recent benchmarking results against the SBST 2013 Contest's test suite.
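The core idea of on-the-fly random-based testing can be sketched as follows. This is a minimal illustration in Python rather than Java, with a hypothetical `Counter` class and invariant standing in for a real class under test; it is not T2's actual algorithm.

```python
import random

class Counter:
    """A tiny class under test (hypothetical example)."""
    def __init__(self):
        self.value = 0
    def inc(self):
        self.value += 1
    def reset(self):
        self.value = 0

def random_test(cls, steps=100, seed=42):
    """On-the-fly random testing: execute a random sequence of method
    calls on a fresh instance, checking a class invariant after each call."""
    rng = random.Random(seed)
    obj = cls()
    for _ in range(steps):
        op = rng.choice([obj.inc, obj.reset])
        op()
        # Oracle: the counter's invariant must hold after every call.
        assert obj.value >= 0, f"invariant violated after {op.__name__}"
    return True
```

Because the call sequence is generated and executed in one pass, a failing sequence is discovered as soon as the invariant breaks, without first materializing a test suite.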
2016 IEEE/ACM 9th International Workshop on Search-Based Software Testing (SBST) | 2016
I.S.W.B. Prasetya
Random testing has the advantage that it is usually fast. An interesting use case is bulk smoke testing, e.g. smoke testing a whole project. However, on a large project, even random testing may take hours to complete. To optimise this, we have adapted the automated random testing tool T3 so that it becomes aware of the time budget set for a given target class. Test suites are now generated incrementally, and their refinements are adaptively scheduled towards maximising coverage given the remaining time. This paper presents an evaluation of the performance of this adaptation, using the benchmark provided by the SBST 2016 Java Unit Testing Tool Contest.
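The budget-aware generation loop described above can be sketched like this. This is a simplified Python illustration of the idea (generate incrementally, keep only tests that add coverage, stop at the deadline), not T3's actual scheduling algorithm; `make_test` and `coverage_of` are hypothetical callbacks.

```python
import random
import time

def generate_with_budget(budget_s, make_test, coverage_of, rng=None):
    """Grow a test suite incrementally until the per-class time budget
    is spent, keeping only tests that add new coverage.
    (A sketch of the budget-aware idea, not T3's actual algorithm.)"""
    rng = rng or random.Random(0)
    suite, covered = [], set()
    deadline = time.monotonic() + budget_s
    while time.monotonic() < deadline:
        test = make_test(rng)             # generate one candidate test
        gain = coverage_of(test) - covered
        if gain:                          # keep it only if coverage improves
            suite.append(test)
            covered |= gain
    return suite, covered
```

The deadline check at the top of the loop is what makes the generator budget-aware: a small budget yields a small but coverage-dense suite, while a larger budget lets refinement continue.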
Proceedings of the Eighth International Workshop on Search-Based Software Testing | 2015
I.S.W.B. Prasetya
T3 is a lightweight automated unit testing tool for Java. This paper presents the results of benchmarking T3 at the 3rd Java Unit Testing Tool Contest, organized at the 8th International Workshop on Search-Based Software Testing (SBST) in 2015.
International Conference on Testing Software and Systems | 2013
Tanja E. J. Vos; Paolo Tonella; I.S.W.B. Prasetya; Peter M. Kruse; Onn Shehory; Alessandra Bagnato; Mark Harman
Future Internet applications are expected to be much more complex and powerful, exploiting various dynamic capabilities. For testing, this is very challenging: the range of possible behaviors to test is much larger, and it may change frequently and significantly at run time with respect to the behavior tested prior to the release of such an application. The traditional way of testing will not be able to keep up with such dynamics. The Future Internet Testing (FITTEST) project (http://crest.cs.ucl.ac.uk/fittest/), a research project funded by the European Commission (grant agreement no. 257574) from 2010 until 2013, was set up to explore new testing techniques that improve our capacity to deal with the challenges of testing Future Internet applications. Such techniques should not be seen as a replacement for traditional testing, but rather as a way to complement it. This paper gives an overview of the set of tools produced by the FITTEST project, implementing those techniques.
Theoretical Computer Science | 2003
I.S.W.B. Prasetya; S.D. Swierstra
This paper presents a theory of component-based development for exception handling in fault-tolerant systems. The theory is based on a general theory of composition, which enables us to factorize the temporal specification of a system into the specifications of its components. This is a new development, because past efforts to set up such a theory have always been hindered by the problem of composing progress properties.
Concurrency and Computation: Practice and Experience | 2018
M. H. Jiang; Otto W. Visser; I.S.W.B. Prasetya; Alexandru Iosup
Mobile gaming is already a popular and lucrative market. However, the low performance and reduced power capacity of mobile devices severely limit the complexity of mobile games and the duration of their game sessions. To mitigate these issues, in this article we explore computation offloading, that is, allowing the compute-intensive parts of mobile games to execute on remote infrastructure. Computation offloading raises the combined challenge of addressing the trade-offs between performance and power consumption while also keeping the game playable. We propose Mirror, a system for computation offloading that supports the demanding performance requirements of sophisticated mobile games. Mirror makes several conceptual contributions: support for fine-grained partitioning, both offline (set by developers) and dynamic (policy-based), and real-time asynchronous offloading and user-input synchronization protocols that enable Mirror-based systems to bound the delays introduced by offloading and thus to achieve adequate performance. Mirror is compatible with all games that are tick-based and user-input deterministic. We implement a real-world prototype of Mirror and apply it to the real-world, complex, popular game OpenTTD. The experimental results show that, in comparison with the non-offloaded OpenTTD, Mirror-ed OpenTTD can significantly improve performance and power consumption while also delivering smooth gameplay. As a trade-off, Mirror introduces an acceptable delay on user inputs.
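The tick-based, user-input-deterministic requirement above is what lets an offloaded copy of the simulation stay consistent with the client. A minimal Python sketch of that property (a toy `step` function standing in for a real game's simulation; this is not Mirror's protocol):

```python
def step(state, inputs):
    """A deterministic, tick-based game step (toy example): the state is
    a single score that each user input increments."""
    return state + sum(inputs)

def run_offloaded(ticks_inputs):
    """Sketch of tick-based offloading: the client and the (simulated)
    remote both apply the same per-tick inputs to the same deterministic
    step, so the remote's authoritative state always matches the client's
    local prediction."""
    local = remote = 0
    for inputs in ticks_inputs:
        local = step(local, inputs)    # client-side prediction
        remote = step(remote, inputs)  # offloaded authoritative compute
        assert local == remote         # states never diverge
    return remote
```

Because the step is a pure function of (state, inputs), synchronizing only the per-tick user inputs is enough to keep the two copies identical; no state transfer is needed.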
International Conference on Simulation and Modeling Methodologies, Technologies and Applications | 2016
Nikolaos Bezirgiannis; I.S.W.B. Prasetya; Ilias Sakellariou
Agent-based Modeling (ABM) has become quite popular in the simulation community for its usability and wide area of applicability. However, ABM tools are not usually known for their speed. This paper presents HLogo, a parallel variant of the NetLogo ABM framework, which seeks to increase the performance of ABM by utilizing Software Transactional Memory and multi-core CPUs, all while maintaining the user-friendliness of NetLogo. HLogo is implemented as a Domain Specific Language embedded in the functional language Haskell, which means that it also inherits Haskell's features, such as its static typing.
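The concurrency problem HLogo solves with Software Transactional Memory can be illustrated in Python using per-patch locks in place of STM (Python's standard library has no STM); the agents, patches, and transfer amounts below are a made-up example, not HLogo's API.

```python
import threading

def parallel_move(agents, patches, locks):
    """Agents transfer resources between patches in parallel. Locks are
    acquired in a fixed order so concurrent updates neither deadlock nor
    lose writes, mimicking what STM transactions give HLogo for free.
    Assumes each agent's source and destination patches differ."""
    def move(src, dst, amount):
        first, second = sorted((src, dst))   # fixed lock order avoids deadlock
        with locks[first], locks[second]:
            patches[src] -= amount
            patches[dst] += amount
    threads = [threading.Thread(target=move, args=a) for a in agents]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return patches
```

With STM, the hand-written lock ordering disappears: each `move` would simply be an atomic transaction, retried on conflict, which is what keeps a NetLogo-style programming model friendly while still running on multiple cores.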
Asia Information Retrieval Symposium | 2015
Diyah Puspitaningrum; Fauzi; Boko Susilo; Jeri Apriansyah Pagua; Aan Erlansari; Desi Andreswari; Rusdi Efendi; I.S.W.B. Prasetya
In this research we propose a technique of frequent itemset hierarchical clustering (FIHC) using an MDL-based algorithm, viz. KRIMP. Unlike the original FIHC technique, the proposed method defines clustering as a rank-sequence problem over the top-3 ranked list of each itemsets-of-keywords cluster in the web documents returned for a given query to a search engine. The key idea of an MDL compression-based approach is the code table: only frequent and representative keywords, such as those in a KRIMP code table, are used as candidates, instead of all important keywords from a keyword extractor such as RAKE. To simulate real-world information needs, the web documents originate from the search results of a multi-domain query. Starting in a meta-search-engine environment to gather many relevant documents, we set k = {50, 100, 200} for the k-toplist of retrieved documents of each search engine to build a dataset for automatic relevance judgement. We then apply the MDL-based FIHC clustering to the best individual search engine, with k = {50, 100, 200} for the k-toplist of retrieved documents, minimum support = 5 for itemset KRIMP compression, and minimum cluster support = 0.1 for FIHC clustering. Our results show that MDL-based FIHC clustering can significantly improve the relevance scores of web search results on an individual search engine (by up to 39.2% at precision P@10, with k-toplist = 50).
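The code-table idea above can be sketched with a toy frequent-itemset miner: keep only keyword itemsets frequent enough to act as compression candidates, preferring longer and more frequent sets. This is a simplified Python illustration of the intuition, not KRIMP's actual MDL search.

```python
from collections import Counter
from itertools import combinations

def build_code_table(docs, min_support=2, max_len=2):
    """Toy sketch of the code-table idea behind KRIMP: mine frequent
    keyword itemsets from documents and keep them as cluster candidates,
    instead of using every extracted keyword."""
    counts = Counter()
    for doc in docs:
        kws = sorted(set(doc))
        for n in range(1, max_len + 1):
            for itemset in combinations(kws, n):
                counts[itemset] += 1
    # Keep frequent itemsets; longer/more frequent sets compress more,
    # so they come first in the table.
    table = [s for s, c in counts.items() if c >= min_support]
    table.sort(key=lambda s: (-len(s), -counts[s]))
    return table
```

In KRIMP proper, candidates are accepted into the code table only if they reduce the total encoded size of the data (the MDL criterion); the frequency threshold here is a crude stand-in for that test.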