Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Jingyi Wang is active.

Publication


Featured research published by Jingyi Wang.


Fundamental Approaches to Software Engineering | 2017

Should We Learn Probabilistic Models for Model Checking? A New Approach and An Empirical Study

Jingyi Wang; Jun Sun; Qixia Yuan; Jun Pang

Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is often done manually, which is widely considered a hindrance to adopting model-based system analysis and development techniques. To overcome this problem, researchers have proposed to automatically learn models from sample system executions and have shown that the learned models can sometimes be useful. There are, however, many questions left to be answered. For instance, how much should we generalize from the observed samples, and how fast does learning converge? Would the analysis result based on the learned model be more accurate than the estimate we could have obtained by sampling many system executions within the same amount of time? In this work, we investigate existing algorithms for learning probabilistic models for model checking, propose an evolution-based approach for better controlling the degree of generalization, and conduct an empirical study to answer these questions. One of our findings is that the effectiveness of learning may sometimes be limited.
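
As an editorial illustration of the kind of learning studied here (a minimal sketch, not the authors' algorithm; the trace format and the smoothing knob are assumptions), transition probabilities of a discrete-time Markov chain can be estimated from sampled executions by frequency counting, with a smoothing constant playing the role of the "how much do we generalize" knob the abstract asks about:

    from collections import defaultdict

    def learn_dtmc(traces, smoothing=1.0):
        """Estimate a discrete-time Markov chain from sampled executions.

        traces    : list of state sequences, e.g. [["init", "busy", "done"], ...]
        smoothing : Laplace-style constant; larger values generalize more
                    aggressively beyond the observed transitions.
        """
        counts = defaultdict(lambda: defaultdict(float))
        states = set()
        for trace in traces:
            states.update(trace)
            for src, dst in zip(trace, trace[1:]):
                counts[src][dst] += 1.0

        # Normalize the (smoothed) counts into transition probabilities.
        dtmc = {}
        for src in states:
            total = sum(counts[src].values()) + smoothing * len(states)
            dtmc[src] = {dst: (counts[src][dst] + smoothing) / total
                         for dst in states}
        return dtmc

    # Example: estimates sharpen as more traces are sampled.
    traces = [["init", "busy", "done"], ["init", "busy", "busy", "done"]]
    model = learn_dtmc(traces, smoothing=0.5)
    print(model["busy"]["done"])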


Formal Methods | 2016

Towards Concolic Testing for Hybrid Systems

Pingfan Kong; Yi Li; Xiaohong Chen; Jun Sun; Meng Sun; Jingyi Wang

Hybrid systems exhibit both continuous and discrete behavior, and analyzing them is known to be hard. Inspired by the idea of concolic testing (of programs), we investigate whether we can combine random sampling and symbolic execution in order to verify hybrid systems effectively. We identify a sufficient condition under which such a combination is more effective than random sampling alone. Furthermore, we analyze different strategies for combining random sampling and symbolic execution and propose an algorithm that allows us to switch between them dynamically so as to reduce the overall cost. Our method has been implemented as a web-based checker named HyChecker, which has been evaluated on benchmark hybrid systems and a water treatment system to test its effectiveness.
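
A rough sketch of the dynamic-switching idea described above (the two analysis primitives and the cost estimators are placeholders, not HyChecker's interface):

    def verify(sample_randomly, execute_symbolically, budget,
               est_sampling_cost, est_symbolic_cost):
        """Alternate between random sampling and symbolic execution,
        greedily picking whichever currently looks cheaper.

        sample_randomly / execute_symbolically : callables returning a
            counterexample or None; placeholders for the two analyses.
        est_sampling_cost / est_symbolic_cost  : callables returning the
            estimated cost of the next step of each technique.
        """
        spent = 0.0
        while spent < budget:
            sampling_cost = est_sampling_cost()
            symbolic_cost = est_symbolic_cost()
            if sampling_cost <= symbolic_cost:
                spent += sampling_cost
                result = sample_randomly()
            else:
                spent += symbolic_cost
                result = execute_symbolically()
            if result is not None:
                return result   # counterexample (unsafe behavior) found
        return None             # no violation found within the budget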


International Journal on Software Tools for Technology Transfer | 2018

Learning probabilistic models for model checking: an evolutionary approach and an empirical study

Jingyi Wang; Jun Sun; Qixia Yuan; Jun Pang

Many automated system analysis techniques (e.g., model checking, model-based testing) rely on first obtaining a model of the system under analysis. System modeling is often done manually, which is widely considered a hindrance to adopting model-based system analysis and development techniques. To overcome this problem, researchers have proposed to automatically “learn” models from sample system executions and have shown that the learned models can sometimes be useful. There are, however, many questions left to be answered. For instance, how much should we generalize from the observed samples, and how fast does learning converge? Would the analysis result based on the learned model be more accurate than the estimate we could have obtained by sampling many system executions within the same amount of time? Moreover, how well does learning scale to real-world applications, and if the answer is negative, what are the potential methods for improving its efficiency? In this work, we first investigate existing algorithms for learning probabilistic models for model checking and propose an evolution-based approach for better controlling the degree of generalization. We then present existing approaches for learning abstract models, which improve the scalability of learning. Lastly, we conduct an empirical study to answer the above questions. Our findings include that the effectiveness of learning may sometimes be limited, and that it is worth investigating how abstraction should be done properly in order to learn useful abstract models.
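
One way learning can be made to scale, in the spirit of the abstraction discussed above, is to map concrete states to abstract ones before counting transitions. A minimal sketch with purely illustrative predicates (not the paper's abstraction); the abstract traces could then be fed to a learner such as the learn_dtmc sketch earlier on this page:

    def abstract_traces(traces, predicates):
        """Map each concrete state to a tuple of predicate truth values,
        producing much smaller abstract traces for probabilistic learning.

        predicates : list of boolean functions over concrete states.
        """
        return [[tuple(p(state) for p in predicates) for state in trace]
                for trace in traces]

    # Illustrative predicates over numeric sensor readings.
    predicates = [lambda s: s["level"] > 0.8, lambda s: s["pump"] == "on"]
    concrete = [[{"level": 0.5, "pump": "off"}, {"level": 0.9, "pump": "on"}]]
    print(abstract_traces(concrete, predicates))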


International Conference on Formal Engineering Methods | 2017

Improving Probability Estimation Through Active Probabilistic Model Learning

Jingyi Wang; Xiaohong Chen; Jun Sun; Shengchao Qin

It is often necessary to estimate the probability of certain events occurring in a system. For instance, knowing the probability of events triggering a shutdown sequence allows us to estimate the availability of the system. One approach is to run the system multiple times and then construct a probabilistic model from which to estimate the probability. When the probability of the event to be estimated is low, many system runs are needed to obtain an accurate estimate. For complex cyber-physical systems, each system run is costly and time-consuming, so it is important to reduce the number of system runs while still providing an accurate estimate. In this work, we assume that the user can actively tune the initial configuration of the system before each run, and we answer the following research question: how should the user set the initial configuration so that a better estimate can be learned with fewer system runs? The proposed approach has been implemented and evaluated on a set of benchmark models, randomly generated models, and a real-world water treatment system.
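
A toy illustration of actively picking the next initial configuration (the variance-based uncertainty measure is a generic choice, not the paper's criterion; all names are hypothetical):

    def pick_next_configuration(candidates, runs_per_config):
        """Choose the initial configuration whose event probability is
        currently the most uncertain, so the next run is most informative.

        runs_per_config : dict mapping configuration -> (successes, trials).
        """
        def uncertainty(config):
            successes, trials = runs_per_config.get(config, (0, 0))
            if trials == 0:
                return float("inf")          # never tried: most informative
            p = successes / trials
            return p * (1 - p) / trials      # variance of the estimate

        return max(candidates, key=uncertainty)

    # Example: config "b" has the higher-variance estimate, so it is chosen.
    history = {"a": (1, 100), "b": (5, 10)}
    print(pick_next_configuration(["a", "b"], history))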


International Conference on Software Engineering | 2018

Towards optimal concolic testing

Xinyu Wang; Jun Sun; Zhenbang Chen; Peixin Zhang; Jingyi Wang; Yun Lin

Concolic testing integrates concrete execution (e.g., random testing) and symbolic execution for test case generation. It has been shown to be sometimes more cost-effective than random testing or symbolic execution alone. A concolic testing strategy is a function that decides when to apply random testing or symbolic execution and, in the latter case, which program path to execute symbolically. Many heuristics-based strategies have been proposed, but what the optimal concolic testing strategy is remains an open problem. In this work, we make two contributions towards solving this problem. First, we show that the optimal strategy can be defined based on the probability of program paths and the cost of constraint solving; the problem of identifying the optimal strategy is then reduced to a model checking problem on Markov decision processes with costs. Second, in view of the complexity of identifying the optimal strategy, we design a greedy algorithm for approximating it. We conduct two sets of experiments, one based on randomly generated models and the other on a set of C programs. The results show that existing heuristics have much room for improvement and that our greedy algorithm often outperforms them.
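
To make the greedy idea concrete (a paraphrase of the probability-per-cost intuition, not the authors' exact algorithm; the inputs are assumed estimates):

    def greedy_choice(paths, random_cost, random_coverage_prob):
        """Greedily pick the next concolic-testing action.

        paths : list of (path_id, reach_probability, solving_cost) for the
                program paths symbolic execution could target.
        random_cost / random_coverage_prob : estimated cost of one random
                test and its probability of hitting an uncovered path.

        Returns ("random", None) or ("symbolic", path_id), whichever gives
        the best probability-per-cost ratio.
        """
        best = ("random", None, random_coverage_prob / random_cost)
        for path_id, prob, cost in paths:
            ratio = prob / cost
            if ratio > best[2]:
                best = ("symbolic", path_id, ratio)
        return best[0], best[1]

    # Example: the cheap-to-solve, likely-feasible path beats random testing.
    print(greedy_choice([("p1", 0.4, 2.0), ("p2", 0.05, 10.0)],
                        random_cost=1.0, random_coverage_prob=0.1))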


Formal Methods | 2018

Towards 'Verifying' a Water Treatment System

Jingyi Wang; Jun Sun; Yifan Jia; Shengchao Qin; Zhiwu Xu

Modeling and verifying real-world cyber-physical systems is challenging, especially for complex systems where manual modeling is infeasible. In this work, we report our experience of combining model learning and abstraction refinement to analyze a challenging system: a real-world Secure Water Treatment system (SWaT). Given a set of safety requirements, the objective is to determine whether the system is safe with a high probability (so that a system shutdown is rarely triggered due to a safety violation). As the system is too complicated to be modeled manually, we apply recent automatic model learning techniques to construct a set of Markov chains through abstraction and refinement, based on two long system execution logs (one for training and the other for testing). For each probabilistic safety property, we either report that it does not hold with a certain level of probabilistic confidence, or report that it holds by providing evidence in the form of an abstract Markov chain. The Markov chains can subsequently be implemented as runtime monitors in SWaT.
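
The runtime-monitor use mentioned at the end might look roughly like this, assuming the learned abstract Markov chain is available as a nested dictionary (the alarm threshold is illustrative, and this is a sketch rather than the paper's monitor):

    class ChainMonitor:
        """Flag executions that drift away from a learned abstract Markov chain.

        chain     : dict src -> {dst: probability}, e.g. the output of a
                    learner such as the learn_dtmc sketch above.
        threshold : transitions below this learned probability raise an alarm.
        """
        def __init__(self, chain, threshold=1e-3):
            self.chain = chain
            self.threshold = threshold
            self.current = None

        def observe(self, abstract_state):
            alarm = False
            if self.current is not None:
                prob = self.chain.get(self.current, {}).get(abstract_state, 0.0)
                alarm = prob < self.threshold   # rarely-seen transition
            self.current = abstract_state
            return alarm

    # Example: a transition the learned chain has never seen raises an alarm.
    monitor = ChainMonitor({"normal": {"normal": 0.99, "alarm": 0.01}})
    print(monitor.observe("normal"), monitor.observe("shutdown"))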


International Conference on Formal Engineering Methods | 2016

Service Adaptation with Probabilistic Partial Models

Manman Chen; Tian Huat Tan; Jun Sun; Jingyi Wang; Yang Liu; Jing Sun; Jin Song Dong

Web service composition makes use of existing Web services to build complex business processes. Non-functional requirements are crucial for Web service composition: to satisfy them when composing a Web service, one needs to rely on the estimated quality of the component services. However, such estimates are seldom accurate, especially in a dynamic environment. Hence, we propose a framework, ADFlow, that monitors the workflow of a Web service composition and adapts it when necessary to maximize its ability to satisfy the non-functional requirements automatically. To reduce the monitoring overhead, ADFlow relies on asynchronous monitoring. ADFlow has been implemented, and its evaluation has shown the effectiveness and efficiency of our approach: given a composite service, ADFlow achieves a 25%–32% average improvement in the conformance of non-functional requirements while incurring only 1%–3% overhead with respect to execution time.
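
A hypothetical sketch of asynchronous monitoring in this spirit (all four callables are placeholders, not ADFlow's API): the workflow steps run on the main path while a background thread checks conformance and triggers adaptation, so monitoring does not block execution.

    import queue
    import threading

    def run_with_async_monitoring(execute_step, steps, check_conformance, adapt):
        """Execute a composite-service workflow while a background thread
        checks non-functional conformance and triggers adaptation."""
        observations = queue.Queue()

        def monitor():
            history = []
            while True:
                qos = observations.get()
                if qos is None:            # workflow finished
                    return
                history.append(qos)
                if not check_conformance(history):
                    adapt()                # asynchronous: steps are not blocked

        watcher = threading.Thread(target=monitor, daemon=True)
        watcher.start()
        for step in steps:
            observations.put(execute_step(step))   # main path never waits
        observations.put(None)
        watcher.join()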


arXiv: Software Engineering | 2016

Verifying Complex Systems Probabilistically through Learning, Abstraction and Refinement.

Jingyi Wang; Jun Sun; Shengchao Qin


Dependable Systems and Networks | 2018

Importance Sampling of Interval Markov Chains

Cyrille Jegourel; Jingyi Wang; Jun Sun


arXiv: Learning | 2018

Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing.

Jingyi Wang; Jun Sun; Peixin Zhang; Xinyu Wang

Collaboration


Dive into Jingyi Wang's collaborations.

Top Co-Authors

Jun Pang

University of Luxembourg

Qixia Yuan

University of Luxembourg
