Jan Bækgaard Pedersen
University of Nevada, Las Vegas
Publications
Featured research published by Jan Bækgaard Pedersen.
International Journal of Systems Assurance Engineering and Management | 2011
Renée C. Bryce; Sreedevi Sampath; Jan Bækgaard Pedersen; Schuyler Manchester
Test suite prioritization techniques modify the order in which the tests within a test suite run. The goal is to order tests such that they detect faults as early as possible in the test execution cycle. Prioritization by combinatorial interaction coverage is a recent criterion that has been useful for prioritizing test suites for GUI and web applications. While studies show that this prioritization criterion can be valuable, previous studies compute interaction coverage without considering the cost of individual tests. This paper proposes a new cost-based combinatorial interaction coverage metric, an algorithm to compute the new metric, and an empirical study with three subject web applications. Two of our studies show that prioritization by the new metric improves the rate at which faults are detected in relation to cost. A third study reveals that the success of the cost-based metric is influenced by the distribution of t-tuples in the selected test cases.
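As a rough illustration of the idea only (not the paper's metric definition or algorithm), the sketch below greedily orders a suite by how many previously unseen pairwise (t = 2) interactions each test adds per unit of cost; the TestCase type, its fields, and the greedy rule are hypothetical.

```cpp
#include <cstddef>
#include <iostream>
#include <set>
#include <tuple>
#include <vector>

// Hypothetical test case: one value per parameter, plus an execution cost.
struct TestCase {
    std::vector<int> values;
    double cost;  // assumed positive, e.g. expected runtime in seconds
};

// A pairwise (t = 2) interaction: (parameter i, its value, parameter j, its value).
using Pair = std::tuple<int, int, int, int>;

static std::set<Pair> pairsOf(const TestCase& t) {
    std::set<Pair> pairs;
    for (std::size_t i = 0; i < t.values.size(); ++i)
        for (std::size_t j = i + 1; j < t.values.size(); ++j)
            pairs.insert(Pair(static_cast<int>(i), t.values[i],
                              static_cast<int>(j), t.values[j]));
    return pairs;
}

// Greedy prioritization: repeatedly pick the unused test that covers the
// most previously unseen pairs per unit of cost.
std::vector<std::size_t> prioritize(const std::vector<TestCase>& suite) {
    std::set<Pair> covered;
    std::vector<bool> used(suite.size(), false);
    std::vector<std::size_t> order;
    for (std::size_t round = 0; round < suite.size(); ++round) {
        std::size_t best = 0;
        double bestRatio = -1.0;
        for (std::size_t k = 0; k < suite.size(); ++k) {
            if (used[k]) continue;
            std::size_t fresh = 0;
            for (const Pair& p : pairsOf(suite[k]))
                if (covered.count(p) == 0) ++fresh;
            double ratio = static_cast<double>(fresh) / suite[k].cost;
            if (ratio > bestRatio) { bestRatio = ratio; best = k; }
        }
        used[best] = true;
        order.push_back(best);
        const std::set<Pair> ps = pairsOf(suite[best]);
        covered.insert(ps.begin(), ps.end());
    }
    return order;
}

int main() {
    std::vector<TestCase> suite = {
        {{0, 1, 0}, 5.0}, {{1, 1, 1}, 1.0}, {{0, 0, 1}, 2.0}};
    for (std::size_t idx : prioritize(suite)) std::cout << idx << ' ';
    std::cout << '\n';  // cheap, high-coverage tests come first
}
```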
International Conference on Software Engineering | 2016
Phillip Merlin Uesbeck; Andreas Stefik; Stefan Hanenberg; Jan Bækgaard Pedersen; Patrick M. Daleiden
Lambdas have seen increasing use in mainstream programming languages, notably in Java 8 and C++11. While the technical aspects of lambdas are well known, we conducted the first randomized controlled trial on the human-factors impact of C++11 lambdas compared to iterators. Because there has been recent debate on using students versus professionals in experiments, we recruited undergraduates from across the academic pipeline as well as professional programmers to evaluate these findings in a broader context. The results cast some doubt on whether lambdas benefit developers and show evidence that students are negatively impacted both in how quickly they can write correct programs to a test specification and in whether they can complete a task at all. Analysis of log data shows that participants spent more time with compiler errors, and made more errors, when using lambdas than when using iterators, suggesting difficulty with the syntax chosen for C++. Finally, experienced users were more likely to complete tasks, with or without lambdas, and did so more quickly; experience alone explained 45.7% of the variance in completion time in our sample.
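For readers unfamiliar with the two experimental conditions, here is a minimal hypothetical C++11 example of the kinds of constructs being compared: an explicit iterator loop versus the same traversal written with a lambda. These are illustrative only, not the study's actual tasks.

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> prices = {3, 7, 2, 9};

    // Iterator condition: an explicit iterator loop.
    int total = 0;
    for (std::vector<int>::iterator it = prices.begin(); it != prices.end(); ++it)
        total += *it;

    // Lambda condition: the same traversal with a C++11 lambda and capture list.
    int total2 = 0;
    std::for_each(prices.begin(), prices.end(),
                  [&total2](int p) { total2 += p; });

    std::cout << total << ' ' << total2 << '\n';  // prints: 21 21
}
```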
Journal of Parallel and Distributed Computing | 2005
Alex Brodsky; Jan Bækgaard Pedersen; Alan Wagner
Message passing programs commonly use message buffers to avoid unnecessary synchronizations and to improve performance by overlapping communication with computation. Unfortunately, using buffers can introduce portability problems and can lead to deadlock on systems without a sufficient number of message buffers. We explore a variety of problems related to buffer allocation for the safe and efficient execution of message passing programs. We show that determining the minimum number of message buffers, and verifying that each process has a sufficient number of message buffers, are both intractable problems. However, we give a polynomial-time algorithm to determine the minimum number of message buffers needed to ensure that no send operation is unnecessarily delayed by a lack of message buffers. We extend these results to several different buffering schemes, which in some cases make the problems tractable.
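A textbook MPI sketch of the hazard the paper analyzes (assuming MPI; this is not the paper's formal model): when two ranks exchange messages and both issue a blocking send first, the exchange completes only if the implementation buffers a message internally, so it deadlocks on systems without enough message buffers. A combined send/receive avoids the dependence on buffering.

```cpp
#include <mpi.h>
#include <cstdio>

// Run with exactly two ranks, e.g.: mpirun -np 2 ./exchange
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = 1 - rank;  // the other rank
    int out = rank, in = -1;

    // Unsafe pattern: if both ranks block in MPI_Send, the exchange only
    // completes when the implementation buffers a message internally, so
    // the program deadlocks on systems without enough message buffers.
    // MPI_Send(&out, 1, MPI_INT, peer, 0, MPI_COMM_WORLD);
    // MPI_Recv(&in, 1, MPI_INT, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // Safe pattern: a combined send/receive requires no message buffers.
    MPI_Sendrecv(&out, 1, MPI_INT, peer, 0,
                 &in,  1, MPI_INT, peer, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    std::printf("rank %d received %d\n", rank, in);
    MPI_Finalize();
    return 0;
}
```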
ACM Transactions on Programming Languages and Systems | 2010
Peter H. Welch; Jan Bækgaard Pedersen
With the commercial development of multicore processors, the challenges of writing multithreaded programs that take advantage of these new hardware architectures are becoming more and more pertinent. Concurrent programming is necessary to achieve the performance that the hardware offers. Traditional approaches present concurrency as an advanced topic: they have proven difficult to use, to reason about with confidence, and to scale up to high levels of concurrency. This article reviews process-oriented design, based on Hoare's algebra of Communicating Sequential Processes (CSP), and proposes that this approach to concurrency leads to solutions that are manageable by novice programmers; that is, they are easy to design and maintain, scalable in complexity, obviously correct, and relatively easy to verify using formal reasoning and/or model checkers. These solutions can be developed in conventional programming languages (through CSP libraries) or specialized ones (such as occam-π) in a manner that directly reflects their formal expression. Systems can be developed without specialist knowledge of the CSP formalism, since the supporting mathematics is burnt into the tools and languages. We illustrate these concepts with the Santa Claus problem, which has been used as a challenge for concurrency mechanisms since 1994. We consider this problem as an example control system, producing external signals that report changes of internal state (modelling the external world). We claim our occam-π solution is correct-by-design, but follow this up with formal verification (using the FDR model checker for CSP) that the system is free from deadlock and livelock, that the produced control signals obey crucial ordering constraints, and that the system has key liveness properties.
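To give a flavour of the process-oriented style in a mainstream language (a loose analogue only; this is not the paper's occam-π solution or the Santa Claus program), the sketch below builds a CSP-like rendezvous channel from a mutex and condition variable and connects two threads through it. The Channel class is hypothetical.

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <thread>
#include <utility>

// A CSP-like rendezvous channel (single writer, single reader): the writer
// blocks until the reader has taken the value, so every communication is
// also a synchronization point.
template <typename T>
class Channel {
    std::mutex m;
    std::condition_variable cv;
    std::optional<T> slot;
public:
    void write(T v) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !slot.has_value(); });  // wait for an empty slot
        slot = std::move(v);
        cv.notify_all();
        cv.wait(lk, [this] { return !slot.has_value(); });  // wait for the reader
    }
    T read() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return slot.has_value(); });   // wait for a value
        T v = std::move(*slot);
        slot.reset();
        cv.notify_all();  // release the blocked writer
        return v;
    }
};

int main() {
    Channel<int> ch;
    // Two "processes" that interact only through the channel.
    std::thread producer([&ch] { for (int i = 0; i < 3; ++i) ch.write(i); });
    std::thread consumer([&ch] { for (int i = 0; i < 3; ++i)
                                     std::cout << ch.read() << '\n'; });
    producer.join();
    consumer.join();
}
```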
IEEE Symposium on Parallel and Large-Data Visualization and Graphics (PVG) | 2003
Dmitry Brodsky; Jan Bækgaard Pedersen
As polygonal models rapidly grow to sizes orders of magnitude larger than the memory of commodity workstations, a viable approach to simplifying such models is parallel mesh simplification algorithms. A naive approach that divides the model into a number of equally sized chunks and distributes them to a number of potentially heterogeneous workstations is bound to fail; in severe cases the computation becomes virtually impossible due to significant slowdowns caused by memory thrashing. We present a general parallel framework for the simplification of very large meshes. This framework ensures near-optimal utilization of the computational resources in a cluster of workstations by providing an intelligent partitioning of the model. This partitioning ensures high-quality output, low runtime due to intelligent load balancing, and high parallel efficiency by fully utilizing the memory of each machine, thus guaranteeing not to thrash the virtual memory system. To test the usability of our framework, we have implemented a parallel version of R-Simp [Brodsky and Watson 2000].
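As a hypothetical sketch of memory-aware partitioning (not the paper's actual partitioner), the function below gives each workstation a share of the mesh proportional to its RAM, capped at what actually fits in memory, so no node is pushed into thrashing. The Machine type, bytesPerFace, and the proportional rule are illustrative assumptions.

```cpp
#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical description of one worker in the cluster.
struct Machine {
    std::size_t memBytes;  // RAM this machine can devote to the mesh
};

// Give each machine a share of the faces proportional to its memory,
// capped at what actually fits in RAM, so no node starts thrashing.
std::vector<std::size_t> partitionFaces(std::size_t totalFaces,
                                        std::size_t bytesPerFace,
                                        const std::vector<Machine>& cluster) {
    std::size_t totalMem = 0;
    for (const Machine& m : cluster) totalMem += m.memBytes;

    std::vector<std::size_t> faces(cluster.size());
    std::size_t assigned = 0;
    for (std::size_t i = 0; i < cluster.size(); ++i) {
        std::size_t share = totalFaces * cluster[i].memBytes / totalMem;
        std::size_t cap = cluster[i].memBytes / bytesPerFace;  // what fits in RAM
        faces[i] = std::min(share, cap);
        assigned += faces[i];
    }
    // Faces left over from rounding or caps would need a second pass
    // or out-of-core handling; omitted in this sketch.
    if (assigned < totalFaces)
        std::cerr << (totalFaces - assigned) << " faces unassigned\n";
    return faces;
}

int main() {
    std::vector<Machine> cluster = {{2u << 30}, {1u << 30}};  // 2 GB and 1 GB
    for (std::size_t f : partitionFaces(10000000, 200, cluster))
        std::cout << f << '\n';
}
```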
Formal Aspects of Computing | 2018
Jan Bækgaard Pedersen; Peter H. Welch
Concurrency is beginning to be accepted as a core knowledge area in the undergraduate CS curriculum: no longer isolated, for example, as a support mechanism in a module on operating systems, or reserved as an advanced discipline for later study. Formal verification of system properties is often considered a difficult subject area, requiring significant mathematical knowledge and generally restricted to smaller systems employing sequential logic only. This paper presents materials, methods and experiences of teaching concurrency and verification as a unified subject, as early as possible in the curriculum, so that they become fundamental elements of our software engineering tool kit, to be used together every day as a matter of course. Concurrency and verification should live in symbiosis. Verification is essential for concurrent systems, as testing becomes especially inadequate in the face of complex non-deterministic (and, therefore, hard to repeat) behaviours. Concurrency should simplify the expression of most scales and forms of computer system by reflecting the concurrency of the worlds in which they operate (and, therefore, have to model); simplified expression leads to simplified reasoning and, hence, verification. Our approach lets these skills be developed without requiring students to be trained in the underlying formal mathematics. Instead, we build on the work of those who have engineered that necessary mathematics into the concurrency models we use (CSP, π-calculus), the model checker (FDR) that lets us explore and verify those systems, and the programming languages/libraries (occam-π).
Journal of Parallel and Distributed Computing | 2008
Jan Bækgaard Pedersen; Alex Brodsky; Jeffrey Sampson
Parallel and Distributed Processing Techniques and Applications | 2002
Dmitry Brodsky; Jan Bækgaard Pedersen
Lecture Notes in Computer Science | 2001
Jan Bækgaard Pedersen; Alan Wagner
Communicating Process Architectures | 2009
Jan Bækgaard Pedersen; Brian Kauke