Jervis Pinto
Oregon State University
Publication
Featured research published by Jervis Pinto.
AI Magazine | 2011
David J. Stracuzzi; Alan Fern; Kamal Ali; Robin Hess; Jervis Pinto; Nan Li; Tolga Könik; Daniel G. Shapiro
Automatic transfer of learned knowledge from one task or domain to another offers great potential to simplify and expedite the construction and deployment of intelligent systems. In practice, however, there are many barriers to achieving this goal. In this article, we present a prototype system for the real-world context of transferring knowledge of American football from video observation to control in a game simulator. We trace an example play from the raw video through execution and adaptation in the simulator, highlighting the system's component algorithms along with issues of complexity, generality, and scale. We then conclude with a discussion of the implications of this work for other applications, along with several possible improvements.
NASA Formal Methods Symposium | 2015
Alex Groce; Jervis Pinto
The difficulty of writing test harnesses is a major obstacle to the adoption of automated testing and model checking. Languages designed for harness definition are usually tied to a particular tool and unfamiliar to programmers; moreover, such languages can limit expressiveness. Writing a harness directly in the language of the software under test (SUT) makes it hard to change testing algorithms, offers no support for common testing idioms, and tends to produce repetitive, hard-to-read code. This makes harness definition a natural fit for an unusual kind of domain-specific language (DSL). This paper defines a template scripting testing language, TSTL, and shows how it can be used to produce succinct, readable definitions of state spaces. The concepts underlying TSTL are demonstrated in Python but are not tied to it.
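As a rough illustration of the kind of state-space definition TSTL targets, the sketch below describes tests over a hypothetical Python AVL-tree module named avl. The pool and property keywords and the value-range notation follow the style of the published examples, but the exact syntax accepted by any given TSTL release may differ.

```
@import avl

pool: <int> 4
pool: <avl> 3

property: <avl>.check_balanced()

<int> := <[1..20]>
<avl> := avl.AVLTree()
<avl>.insert(<int>)
<avl>.delete(<int>)
<avl>.find(<int>)
```

Each action line is ordinary Python over pooled values; the harness only declares which actions and properties make up the state space, leaving the choice of test-generation algorithm to external tools.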
International Journal on Software Tools for Technology Transfer | 2018
Josie Holmes; Alex Groce; Jervis Pinto; Pranjal Mittal; Pooria Azimi; Kevin Kellar; James O’Brien
A test harness, in automated test generation, defines the set of valid tests for a system, as well as their correctness properties. The difficulty of writing test harnesses is a major obstacle to the adoption of automated test generation and model checking. Languages for writing test harnesses are usually tied to a particular tool, unfamiliar to programmers, and often limited in expressiveness. Writing test harnesses directly in the language of the software under test (SUT) is a tedious, repetitive, and error-prone task that offers little or no support for test case manipulation and debugging and produces hard-to-read, hard-to-maintain code. Using existing harness languages or writing directly in the language of the SUT also tends to limit users to one algorithm for test generation, with little ability to explore alternative methods. In this paper, we present TSTL, the Template Scripting Testing Language, a domain-specific language (DSL) for writing test harnesses. TSTL compiles harness definitions into an interface for testing, making generic test generation and manipulation tools possible for all SUTs. TSTL includes tools for generating, manipulating, and analyzing test cases, including simple model checkers. This paper motivates TSTL via a large-scale testing effort, directed by an end user, to find faults in the most widely used geographic information systems tool. This paper emphasizes a new approach to automated testing in which, rather than developing a monolithic tool to extend, the aim is to convert a test harness into a language extension. This approach makes testing not a separate activity performed with an external tool, but something as natural to users of the language of the system under test as the use of domain-specific libraries such as ArcPy, NumPy, or QIIME in their own domains. TSTL is a language and tool infrastructure, but it is also a way to bring testing activities under the control of an existing programming language in a simple, natural way.
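To make "compiles harness definitions into an interface for testing" concrete, here is a minimal random-tester sketch in Python written against a hypothetical compiled interface module named sut. The method names (restart, randomEnabled, safely, check) are assumptions chosen for illustration, not necessarily the tool's documented API.

```python
import random
import sut  # module assumed to be produced by compiling a TSTL harness

# A minimal random tester over the (assumed) compiled testing interface.
t = sut.sut()
rng = random.Random(42)

for test in range(100):              # 100 independent random tests
    t.restart()                      # reset the SUT to its initial state
    for step in range(50):           # up to 50 actions per test
        action = t.randomEnabled(rng)    # pick an enabled action at random
        if not t.safely(action):         # run it, trapping SUT exceptions
            print("Action raised an unexpected exception")
            break
        if not t.check():                # evaluate declared properties
            print("Property violation detected")
            break
```

Because the interface hides the details of the SUT, the same loop works unchanged for any harness compiled by TSTL, which is what makes generic tools such as random testers and simple model checkers possible.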
International Conference on Machine Learning and Applications | 2010
Jervis Pinto; Alan Fern; Tim Bauer; Martin Erwig
We study how to effectively integrate reinforcement learning (RL) and programming languages via adaptation-based programming, where programs can include non-deterministic structures that can be automatically optimized via RL. Prior work has optimized adaptive programs by defining an induced sequential decision process to which standard RL is applied. Here we show that the success of this approach is highly sensitive to the specific program structure, where even seemingly minor program transformations can lead to failure. This sensitivity makes it extremely difficult for a non-RL-expert to write effective adaptive programs. In this paper, we study a more robust learning approach, where the key idea is to leverage information about program structure in order to define a more informative decision process and to improve the SARSA(λ) RL algorithm. Our empirical results show significant benefits for this approach.
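For readers unfamiliar with the learning algorithm named above, the following is a minimal tabular SARSA(λ) sketch with eligibility traces, assuming a hypothetical environment object with reset(), step(), and a finite actions list. The paper's actual contribution, deriving a more informative decision process from program structure, is not reproduced here.

```python
import random
from collections import defaultdict

def sarsa_lambda(env, episodes=500, alpha=0.1, gamma=0.99, lam=0.9, epsilon=0.1):
    """Tabular SARSA(lambda) over a hypothetical env with reset()/step()/actions."""
    Q = defaultdict(float)                     # Q[(state, action)] estimates
    actions = env.actions                      # assumed: finite list of actions

    def policy(s):
        # epsilon-greedy action selection
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        E = defaultdict(float)                 # eligibility traces
        s = env.reset()
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)          # assumed: (next_state, reward, done)
            a2 = policy(s2)
            delta = r + gamma * Q[(s2, a2)] * (not done) - Q[(s, a)]
            E[(s, a)] += 1.0                   # accumulating trace
            for key in list(E):                # propagate credit backwards
                Q[key] += alpha * delta * E[key]
                E[key] *= gamma * lam
            s, a = s2, a2
    return Q
```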
International Symposium on Software Testing and Analysis | 2015
Alex Groce; Jervis Pinto; Pooria Azimi; Pranjal Mittal
Writing a test harness is a difficult and repetitive programming task, and the lack of tool support for customized automated testing is an obstacle to the adoption of more sophisticated testing in industry. This paper presents TSTL, the Template Scripting Testing Language, which allows users to specify the general form of valid tests for a system in a simple but expressive language, and tools to support testing based on a TSTL definition. TSTL is a minimalist template-based domain-specific language, using the source language of the Software Under Test (SUT) to support most operations, but adding declarative idioms for testing. TSTL compiles to a common testing interface that hides the details of the SUT and provides support for logging, code coverage, delta debugging, and other core testing functionality, making it easy to write universal testing tools such as random testers or model checkers that apply to all TSTL-defined harnesses. TSTL is currently available for Python, but easily adapted to other languages as well.
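The delta debugging mentioned above refers to shrinking a failing test to a smaller one that still fails. The sketch below is a generic ddmin-style reducer over a list of test actions, included only to illustrate the idea; it is not TSTL's own implementation, and the still_fails predicate is a hypothetical hook for replaying a candidate test against the SUT.

```python
def ddmin(actions, still_fails):
    """Shrink a failing action sequence while still_fails(seq) stays True."""
    n = 2                                        # number of chunks to try removing
    while len(actions) >= 2:
        chunk = max(1, len(actions) // n)
        reduced = False
        for i in range(0, len(actions), chunk):
            candidate = actions[:i] + actions[i + chunk:]   # drop one chunk
            if candidate and still_fails(candidate):
                actions, n, reduced = candidate, max(n - 1, 2), True
                break
        if not reduced:
            if n >= len(actions):                # already at finest granularity
                break
            n = min(len(actions), n * 2)         # refine and retry
    return actions
```

A caller would invoke it as minimal = ddmin(failing_actions, replay_and_check), where replay_and_check re-runs a candidate sequence and reports whether the failure persists.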
Electronic Proceedings in Theoretical Computer Science | 2011
Tim Bauer; Martin Erwig; Alan Fern; Jervis Pinto
We present an embedded DSL to support adaptation-based programming (ABP) in Haskell. ABP is an abstract model for defining adaptive values, called adaptives, which adapt in response to some associated feedback. We show how our design choices in Haskell motivate higher-level combinators and constructs and help us derive more complicated compositional adaptives. We also show that an important specialization of ABP is its support for reinforcement learning constructs, which optimize adaptive values based on a programmer-specified objective function. This permits ABP users to easily define adaptive values that express uncertainty anywhere in their programs. Over repeated executions, these adaptive values adjust to more efficient ones and enable the user's programs to self-optimize. The design of our DSL depends significantly on the use of type classes. Along with presenting our DSL, we illustrate how the use of type classes can support the gradual evolution of DSLs.
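The paper's DSL is embedded in Haskell and built on type classes; the Python sketch below only conveys the underlying notion of an adaptive value that suggests a choice and adjusts to feedback, here with a simple epsilon-greedy, average-reward rule. The class and method names are illustrative, not the paper's API.

```python
import random

class Adaptive:
    """Illustrative adaptive value: suggests choices and learns from feedback."""

    def __init__(self, choices, epsilon=0.1):
        self.choices = list(choices)
        self.epsilon = epsilon
        self.value = {c: 0.0 for c in self.choices}   # running reward estimate
        self.count = {c: 0 for c in self.choices}
        self.last = None

    def suggest(self):
        # mostly exploit the best choice seen so far, occasionally explore
        if random.random() < self.epsilon:
            self.last = random.choice(self.choices)
        else:
            self.last = max(self.choices, key=lambda c: self.value[c])
        return self.last

    def feedback(self, reward):
        # adapt the estimate for the most recently suggested choice
        c = self.last
        self.count[c] += 1
        self.value[c] += (reward - self.value[c]) / self.count[c]

# Usage sketch: an adaptive that learns which buffer size scores best
# against a stand-in objective function.
buffer_size = Adaptive([16, 32, 64, 128])
for _ in range(1000):
    size = buffer_size.suggest()
    buffer_size.feedback(-abs(size - 64) / 64.0)
```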
International Performance Computing and Communications Conference | 2011
Pingan Zhu; Jervis Pinto; Alan Fern; Thinh P. Nguyen
The design of network protocols is a complicated and tedious endeavor. For instance, designing a MAC-layer protocol for the 802.11 standard typically involves a number of high-level decisions (e.g., conditions for backoff steps) followed by individual tuning of numeric parameters (e.g., backoff factors) for a variety of network conditions. A different way to view this design process is that of a designer being forced to fully specify a solution to a complex problem. At the other extreme of the programming spectrum lie Reinforcement Learning techniques, which require only a minimal problem specification from the programmer.
Global Communications Conference | 2012
Pingan Zhu; Jervis Pinto; Thinh P. Nguyen; Alan Fern
Designing network protocols that work well under a variety of network conditions typically involves a large amount of manual tuning and guesswork, particularly when choosing dynamic update strategies for numeric parameters. The situation is made more complex by adding Quality of Service (QoS) requirements to a network protocol. A fundamentally different approach to designing protocols is via Reinforcement Learning (RL) algorithms, which allow protocols to be automatically optimized through network simulation. However, getting RL to work well in practice requires considerable expertise and carries a significant implementation overhead. To help overcome this challenge, recent work has developed the programming paradigm of Adaptation-Based Programming (ABP), which allows programmers who are not RL experts to write self-optimizing “adaptive programs”. In this work, we study the potential of applying ABP to the problem of designing network protocols via simulation. We demonstrate the flexibility of our design method via a number of case studies, each of which investigates the performance of an adaptive program written for the backoff mechanism of the MAC layer in the 802.11 standard. Our results show that the learned protocols typically outperform 802.11 on a number of evaluation metrics and under a range of network conditions.
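As a self-contained illustration of the idea, the toy Python loop below makes the growth factor of an exponential-backoff scheme an adaptive choice rewarded by a throughput proxy. The collision model and reward signal are invented for this sketch; the paper's adaptive programs run against 802.11 network simulations, not this toy.

```python
import random

MULTIPLIERS = [1.5, 2.0, 3.0, 4.0]          # candidate backoff growth factors
value = {m: 0.0 for m in MULTIPLIERS}        # running reward estimate per choice
count = {m: 0 for m in MULTIPLIERS}

def choose_multiplier(epsilon=0.1):
    # epsilon-greedy adaptive choice of the backoff multiplier
    if random.random() < epsilon:
        return random.choice(MULTIPLIERS)
    return max(MULTIPLIERS, key=lambda m: value[m])

def feedback(m, reward):
    count[m] += 1
    value[m] += (reward - value[m]) / count[m]

def run_episode(collision_rate=0.3, slots=1000):
    cw, sent, m = 16, 0, choose_multiplier()
    backoff = random.randint(0, cw - 1)
    for _ in range(slots):
        if backoff > 0:
            backoff -= 1
            continue
        if random.random() < collision_rate:    # toy collision model
            cw = min(int(cw * m), 1024)          # adaptive backoff growth
        else:
            sent += 1
            cw = 16                              # successful send resets window
        backoff = random.randint(0, cw - 1)
    feedback(m, sent / slots)                    # throughput proxy as reward

for _ in range(500):
    run_episode()
```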
Generative Programming and Component Engineering | 2012
Tim Bauer; Martin Erwig; Alan Fern; Jervis Pinto
In the adaptation-based programming (ABP) paradigm, programs may contain variable parts (function calls, parameter values, etc.) that can take a number of different values. Programs also contain reward statements with which a programmer can provide feedback about how well a program is performing with respect to achieving its goals (for example, achieving a high score on some scale). By repeatedly running the program, a machine-learning component will, guided by the rewards, gradually adjust the automatic choices made in the variable program parts so that they converge toward an optimal strategy. ABP is a method for semi-automatic program generation in which the choices and rewards offered by programmers allow standard machine-learning techniques to explore a design space defined by the programmer and find an optimal instance of a program template. ABP effectively provides a DSL that allows non-machine-learning experts to exploit machine learning to generate self-optimizing programs. Unfortunately, in many cases the placement and structuring of choices and rewards can have a detrimental effect on how well an optimal solution to a program-generation problem can be found. To address this problem, we have developed a dataflow analysis that computes the influence tracks of choices and rewards. This information can be exploited by an augmented machine-learning technique to ignore misleading rewards and, more generally, to attribute rewards more accurately to the choices that have actually influenced them. Moreover, this technique allows us to detect errors in the adaptive program that might arise from program maintenance. Our evaluation shows that the dataflow analysis can lead to improvements in performance.
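The credit-assignment issue that the dataflow analysis targets can be seen in a toy adaptive program like the Python sketch below, where the reward depends on only one of two choices. The choose and reward functions are hypothetical stand-ins for ABP constructs, and the influence set is written by hand here, whereas the paper computes it automatically from dataflow.

```python
import random

def choose(name, options):
    # stand-in for an ABP choice point; a real implementation would learn
    return random.choice(options)

rewards = []

def reward(value, influences):
    # stand-in for an ABP reward statement, annotated with the choices that
    # actually influenced it (normally derived by the dataflow analysis)
    rewards.append((value, influences))

def step():
    route = choose("route", ["short", "long"])   # influences the reward
    color = choose("color", ["red", "blue"])     # cosmetic, no influence
    latency = 1.0 if route == "short" else 3.0
    reward(-latency, influences={"route"})       # "color" should get no credit
    return color

for _ in range(5):
    step()
print(rewards)
```

Naively crediting each reward to every preceding choice would mislead the learner about the cosmetic choice; tracking influence lets the learner ignore such misleading attributions.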
International Symposium on Software Reliability Engineering | 2012
Alex Groce; Alan Fern; Jervis Pinto; Tim Bauer; Amin Alipour; Martin Erwig; Camden Lopez