Rich Model Toolkit COST Action Meeting, 16-17 June 2013
The workshop will be held in the Corinthia Palace Hotel, De Paule Avenue, San Anton. Participants will get special room rates - more information can be found in the Local Arrangements section.
The workshop is being coordinated by the University of Malta. The Malta Council for Science and Technology (MCST) is the managing authority for COST in Malta.
The planned structure of the workshop will be filled in with more detail closer to the date.
Sunday, 16 June 2013
10.00-10.30  Verifying Web Applications: From Business Level Specifications to Automated Model-Based Testing (Christian Colombo)
10.30-11.00  Simulation Relations for Rich Acceptance Conditions (Julian Samborski-Forlese)
11.30-12.00  Coinductive big-step semantics for concurrency (Tarmo Uustalu)
12.00-12.30  To Encode or to Propagate? The Best Choice for Each Constraint in SAT (Enric Rodriguez-Carbonell)
14.00-14.30  Model-Checking for Efficient Malware Detection (Tayssir Touili)
14.30-15.00  Decidable Classes of Mean-Payoff Games with Imperfect Information (Guillermo Perez)
15.30-16.00  Reasoning About Contracts (Gordon Pace)
16.00-16.30  Fictional Separation Logic (Jonas Jensen)
Monday, 17 June 2013
The meeting is organized by Gordon Pace and César Sánchez.
This workshop serves as the meeting of the Rich Model Toolkit COST Action IC0901. Information about the Action: http://richmodels.epfl.ch/
The meeting continues a series of meetings held in Brussels, Belgrade, Edinburgh, Lugano, Saarbruecken, Dagstuhl, Turin, Tallinn, Trento, Manchester, Haifa and Rome in the period 2009-2013.
The workshop will be held at the Corinthia Palace Hotel (5*), De Paule Avenue, San Anton, with which special accommodation rates have been arranged for participants. The hotel is situated midway between Malta’s capital Valletta and the so-called ‘silent’ city of Mdina, across the road from the 17th-century San Anton Presidential Palace. The Corinthia Palace Hotel & Spa was originally built in the 19th century as a luxury private villa; its grand design and immaculately landscaped gardens have inspired guests from around the world.
A deluxe garden-view room on a bed & breakfast basis costs 90 euros per night (for sole use). The second-person supplement is 20 euros per night.
Participants are requested to email Lorraine Dunnett (firstname.lastname@example.org) and Alex Cardona (email@example.com) for bookings. You are encouraged to book by 1st May, until which date the hotel will hold a block of rooms; after this date bookings will be on a first-come, first-served basis.
A workshop dinner will be held at Palazzo de Piro on Sunday 16th June, at 36 euros per person (including transport). Palazzo de Piro is a 17th-century palazzo nestled in the bastion walls of the medieval city of Mdina, Malta’s Silent City. The existing structure was originally three separate houses, the oldest parts of which date back to the second half of the 16th century. The emblem on the main entrance hall indicates that they were built by Malta’s most famous architect, Girolamo Cassar.
Local Organisation Matters
If you have any questions regarding logistics, you can email Ms Lucienne Bugeja directly (firstname.lastname@example.org).
We are pleased to confirm keynotes from:
Scope of the Event
Researchers have developed a number of useful tools for automated analysis of particular classes of models of computer systems:
- software vendors are using static analyses supported by automated theorem provers and constraint solvers to prevent software crashes;
- hardware manufacturers are using SAT solvers, model checkers, and theorem provers to identify and correct errors that could have enormous financial consequences;
- description logic reasoners analyze relationships between tens of thousands of terms in medical ontologies and verify their consistency;
- aircraft manufacturers and space agencies are using analysis tools based on abstract interpretation to eliminate errors in aircraft control software.
Despite these successes, today’s automated analysis methods are not widespread in engineering practice. Among the factors contributing to this state of affairs are the limitations of the tools themselves: insufficient automation, specialized input formats, and no support for high-level synthesis. Another factor is the lack of quality standards that would ease tool interoperability and give formally certified computer systems a competitive advantage over systems without formal assurance guarantees.
Example topics of interest
- Standardization of expressive languages. Formats to represent systems, formulas, proofs, counterexamples. Translation between specification languages. Benchmarks and competitions for automated reasoning, verification, analysis, synthesis.
- Decision procedures: Decision procedures for new classes of constraints. SAT and SMT implementation and certification. Encoding synthesis and analysis problems into SMT. Description logics and scalable reasoning about knowledge bases.
- Transition system analysis: Abstraction-based approaches and refinement for verification of infinite-state systems. Constraint-based program analysis. Data-flow analysis for complex domains. Extracting transition systems from programming languages and bytecodes.
- High-level synthesis: New algorithms for synthesis from high-level specifications. Extending decision procedures to perform synthesis tasks. Connections between invariant generation and code synthesis.
Verifying Web Applications: From Business Level Specifications to Automated Model-Based Testing (Christian Colombo)
While the verification of web-based applications is crucial due to their often security-critical nature, it is considerably challenging due to their complex and dynamic nature. The current state of the art involves automated test execution, where a test platform automatically interacts with a web application and asserts the expected behaviour. Still, test cases are typically generated manually, particularly when it comes to directing the test to exercise particular execution paths. In this presentation, we propose a novel combination of the QuickCheck test-case generation tool with web-based applications: QuickCheck exploits an executable automata-based specification to drive and assert the application in an automated yet non-random fashion. However, writing the specification is non-trivial and requires manual intervention. To this end, we propose a means of generating QuickCheck specifications from Cucumber specifications – a commonly used high-level specification language for communicating with business stakeholders.
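As an illustration of the automata-driven testing style described above, here is a minimal Python sketch (all names, states and outputs are hypothetical stand-ins; the actual approach uses QuickCheck, not Python): a specification automaton chooses actions, drives the system under test, and asserts the expected responses.

```python
import random

# Specification automaton: state -> {action: (next_state, expected_output)}.
SPEC = {
    "logged_out": {"login": ("logged_in", "welcome")},
    "logged_in": {"view": ("logged_in", "page"),
                  "logout": ("logged_out", "bye")},
}

class FakeWebApp:
    """Hypothetical stand-in for the web application under test."""
    def __init__(self):
        self.user = None
    def step(self, action):
        if action == "login":
            self.user = "alice"; return "welcome"
        if action == "view":
            return "page" if self.user else "denied"
        if action == "logout":
            self.user = None; return "bye"

def run_test(seed, steps=20):
    """Random walk over the spec automaton, asserting each response."""
    rng = random.Random(seed)
    app, state = FakeWebApp(), "logged_out"
    for _ in range(steps):
        action = rng.choice(list(SPEC[state]))   # spec-guided, not blind random
        next_state, expected = SPEC[state][action]
        assert app.step(action) == expected, (state, action)
        state = next_state
    return True
```

Because the walk only picks actions the specification allows, every generated test exercises a valid execution path while still covering the state space randomly.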
Simulation Relations for Rich Acceptance Conditions (Julian Samborski-Forlese)
Antichain algorithms make it possible to obtain faster emptiness and model-checking algorithms for alternating automata by postponing the state explosion inherent to the translation into equivalent non-deterministic automata. Antichains require computable simulation relations between sets of states in the alternating automaton, which depend on the acceptance condition. So far, only Büchi conditions have been studied, which precludes the application of antichains to rich temporal logics like RLTL and PSL, for which more sophisticated acceptance conditions are used. In this talk we present simulation relations for some Rabin and Streett automata that enable the use of antichains for these richer temporal logics.
Coinductive big-step semantics for concurrency (Tarmo Uustalu)
Going against the popular statements that big-step semantics cannot handle nontermination (they would necessarily “lose” all nonterminating runs) and that concurrency is inherently small-step, I present a big-step semantics for shared-variable concurrency that accounts correctly for both terminating and nonterminating runs. This semantics is based on coinductive and mixed inductive-coinductive definitions of resumptions (computation trees), evaluation and termination-sensitive weak bisimilarity. Big-step reasoning offers a higher degree of compositionality, as exemplified, e.g., by rely-guarantee style program logics, which are in fact big-step and are more directly matched with big-step rather than small-step semantics. I develop the metatheory of the semantics in a constructive logic.
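The resumption idea can be rendered concretely: a thread yields control at every interleaving point, and a scheduler drives the resulting computation trees. The Python sketch below (an illustrative stand-in; the talk’s development is coinductive, in a constructive logic) shows how different interleavings of two non-atomic increments on a shared store give different results.

```python
def incr(store, key, times):
    # One thread: 'times' non-atomic increments, yielding control
    # (a resumption step) after each atomic read and each atomic write.
    for _ in range(times):
        v = store[key]       # atomic read
        yield
        store[key] = v + 1   # atomic write
        yield

def run(threads, schedule):
    """Drive the resumptions according to an explicit interleaving."""
    for i in schedule:
        try:
            next(threads[i])
        except StopIteration:
            pass  # that thread has terminated

seq = {"x": 0}
run([incr(seq, "x", 1), incr(seq, "x", 1)], [0, 0, 1, 1])
# sequential schedule: seq["x"] == 2

mix = {"x": 0}
run([incr(mix, "x", 1), incr(mix, "x", 1)], [0, 1, 0, 1])
# interleaved schedule loses an update: mix["x"] == 1
```

A nonterminating thread is simply a generator that never exhausts; the scheduler can keep resuming it forever, which is exactly the behaviour a termination-sensitive semantics must not "lose".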
To Encode or to Propagate? The Best Choice for Each Constraint in SAT (Enric Rodriguez-Carbonell)
Sophisticated compact SAT encodings exist for many types of constraints. Alternatively, for instances with many (or large) constraints, the SAT solver can also be extended with built-in propagators (the SAT Modulo Theories approach, SMT). For example, given a cardinality constraint x_1 + … + x_n ≤ k, as soon as k variables become true, such a propagator can set the remaining variables to false, generating a so-called explanation clause of the form x_1 /\ … /\ x_k → ~x_i. But certain “bottleneck” constraints end up generating an exponential number of explanations, equivalent to a naive SAT encoding, which is much worse than using a compact encoding with auxiliary variables from the beginning. Therefore, Abío and Stuckey proposed starting off with a full SMT approach and partially encoding, on the fly, only certain “active” parts of constraints. Here we build upon their work.
Equipping our solvers with some additional bookkeeping to monitor constraint activity has allowed us to shed light on the effectiveness of SMT: many constraints generate very few, or few different, explanations.
We also give strong experimental evidence showing that it is typically unnecessary to consider partial encodings: it is competitive to encode the few really active constraints entirely. This makes the approach amenable to any kind of constraint, not just the ones for which partial encodings are known.
This is joint work with Ignasi Abío, Robert Nieuwenhuis, Albert Oliveras and Peter J. Stuckey.
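The propagation-with-explanations mechanism described above can be sketched as follows (a minimal illustration with assumed names, not the authors' implementation): once k variables of a cardinality constraint are true, every remaining variable is forced to false, each with an explanation clause built from the k true variables.

```python
class CardinalityPropagator:
    """Propagator for a cardinality constraint x_1 + ... + x_n <= k."""
    def __init__(self, variables, k):
        self.variables = list(variables)  # propositional variable names
        self.k = k
        self.true_vars = []               # variables assigned true so far

    def on_assign_true(self, v):
        """Record v := true; return forced literals with explanations."""
        self.true_vars.append(v)
        if len(self.true_vars) < self.k:
            return []                     # nothing forced yet
        # k variables are already true: all unassigned ones must be false.
        reason = self.true_vars[: self.k]  # encodes x_1 /\ ... /\ x_k -> ~x_i
        return [(("not", v2), reason)
                for v2 in self.variables if v2 not in self.true_vars]

prop = CardinalityPropagator(["x1", "x2", "x3", "x4"], k=2)
prop.on_assign_true("x1")            # no propagation yet
forced = prop.on_assign_true("x2")   # x3 and x4 forced false, reason [x1, x2]
```

Each distinct set of k true variables yields a distinct explanation, which is precisely why a "bottleneck" constraint can generate exponentially many of them.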
Model-Checking for Efficient Malware Detection (Tayssir Touili)
The amount of malware is growing extraordinarily fast; it is therefore important to have efficient malware detectors. Malware writers try to obfuscate their code by different techniques. Many of these well-known obfuscation techniques rely on operations on the stack, such as inserting dead code by adding useless push and pop instructions, or hiding calls to the operating system. Thus, it is important for malware detectors to be able to deal with the program’s stack. In this work we propose a new model-checking approach for malware detection that takes the behavior of the stack into account. Our approach consists in: (1) modeling the program using a Pushdown System (PDS); (2) introducing new logics, called SCTPL and SLTPL, to represent the malicious behavior, where SCTPL (resp. SLTPL) can be seen as an extension of the branching-time temporal logic CTL (resp. the linear-time temporal logic LTL) with variables, quantifiers, and predicates over the stack; (3) reducing the malware detection problem to the model-checking problem of PDSs against SCTPL/SLTPL formulas. We show how our new logics can be used to precisely express malicious behaviors that could not be specified by existing specification formalisms. We then consider the model-checking problem of PDSs against SCTPL/SLTPL specifications and provide efficient algorithms to solve it. We implemented our techniques in a tool and applied it to detect several viruses. Our results are encouraging: in particular, our tool was able to detect more than 800 viruses, several of which could not be detected by well-known anti-viruses such as Avira, Avast, Norton, Kaspersky and McAfee.
Decidable Classes of Mean-Payoff Games with Imperfect Information (Guillermo Perez)
Energy games and mean-payoff games have been studied as two-player games with quantitative objectives. In the perfect-information setting, deciding the winner of such games is known to be in both NP and coNP, yet only pseudo-polynomial-time algorithms have been found for the original, unmodified objectives. The problem of determining whether an energy game with imperfect information has a winning strategy is known to be undecidable when the initial credit is not fixed, and the same holds for mean-payoff games with imperfect information. We propose a new class of mean-payoff games with imperfect information for which this problem is decidable, and indeed EXPTIME-complete. We also consider the complexity of deciding whether a given weighted game graph with imperfect information belongs to this class. To better understand the source of undecidability for these objectives under imperfect information, we also consider related classes of weighted game graphs with imperfect information, together with the corresponding class-membership and winning-strategy-existence problems.
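For reference, the mean-payoff value of a run that eventually settles into a loop is just the average edge weight along that loop; the finite prefix does not affect the limit average. A tiny illustration of this standard definition (not code from the talk):

```python
def mean_payoff(prefix_weights, cycle_weights):
    """Mean-payoff value of the lasso run prefix . cycle^omega.

    The finite prefix is irrelevant to the limit average, so only the
    cycle's average weight matters."""
    return sum(cycle_weights) / len(cycle_weights)

# A run with prefix weight 5 that loops on weights [2, -1, 2]:
value = mean_payoff([5], [2, -1, 2])  # average of the cycle: 1.0
```

One player tries to maximise this limit average and the other to minimise it; imperfect information means a player must choose the same action in observationally equivalent states.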
Contract Analysis (Gordon J. Pace)
The analysis of contracts regulating interactive systems poses various challenges and is closely linked to the notions of concurrency and synchrony. We present a number of recent results which take an automaton-based approach to modelling such contracts. We show how obligations, permissions and prohibitions can be formalised, and how the norms on one of the parties sometimes impose constraints on the other. Such a formalisation allows for a clean notion of contract strictness and a derived notion of contract conflict that is enriched with issues arising from party interdependence.
Joint work with Fernando Schapachnik and Gerardo Schneider.
Fictional Separation Logic (Jonas Jensen)
The talk will start with an overview of techniques for abstract interpretation-based program analysis and verification of constraint logic programs (CLP), with emphasis on the top-down fixpoint algorithms used in the Ciao system (note that bottom-up techniques will be covered in John Gallagher's talk). We will demonstrate the Ciao system analysis and verification-based program development environment, built around these techniques and capable of verifying shapes, modes, determinacy, non-failure, numerical properties, etc., as well as resources such as bandwidth, time, memory, or, more recently, energy. We will then address how, by using CLP as an intermediate representation, the approach (and the Ciao system itself) also supports the verification of similar properties of programs in other programming paradigms, including Java source or bytecode, and assembly code produced from C sources. Finally, we present how the same fixpoint techniques are applied in the execution of Tabled LP programs (such as in the XSB system) and Tabled CLP programs (such as in the TCLP package of the Ciao system), which can be an interesting component for program analyzers for other programming paradigms.
Solving Existentially Quantified Horn Clauses (Tewodros Beyene)
Temporal verification of universal (i.e., valid for all computation paths) properties of various kinds of programs, e.g., procedural, multi-threaded, or functional, can be reduced to finding solutions for equations in the form of universally quantified Horn clauses extended with well-foundedness conditions. Dealing with existential properties (e.g., whether there exists a particular computation path), however, requires solving forall-exists quantified Horn clauses, where the conclusion part of some clauses contains existentially quantified variables. For example, a deductive approach to CTL verification reduces to solving such clauses.
The talk presents a method for solving forall-exists quantified Horn clauses extended with well-foundedness conditions. Our method is based on a counterexample-guided abstraction refinement scheme to discover witnesses for existentially quantified variables. We also present an application of our solving method to automation of CTL verification of software, as well as its experimental evaluation.
Scaling dynamic logic for intermediate states (Boriss Selajev)
Standard sequential program logics are insufficient for reasoning about nonterminating program runs — they essentially overlook them. But of course we want to reason also about nonterminating behaviors, especially in case of interactive programs. To remedy this deficiency, we are interested in identifying a useful extension of dynamic logic with modalities for predicating not only final, but also intermediate states of program runs. In this talk, I discuss some possible designs for such a program logic on the level of syntax, semantics and prospective axiomatizations. (Joint work with Keiko Nakata and Tarmo Uustalu.)
Fully Automated Shape Analysis Based on Forest Automata (Ondrej Lengal)
Forest automata (FA) have recently been proposed as a tool for shape analysis of complex heap structures. FA encode sets of tree decompositions of heap graphs in the form of tuples of tree automata. To allow complex heap graphs to be represented, the FA framework lets one provide user-defined FA (called boxes) that encode repetitive graph patterns of shape graphs, to be used as alphabet symbols of other, higher-level FA. In this work, we propose a novel technique for automatically learning the FA to be used as boxes, which avoids the need to provide them manually. Further, we propose a significant improvement of the automata abstraction used in the analysis. The result is an efficient, fully automated analysis that can handle data structures as complex as skip lists, with performance comparable to state-of-the-art fully automated tools based on separation logic, which, however, specialise in dealing with linked lists only.
Invariant Generation for Parametrized Systems using Self-Reflection (Alejandro Sanchez)
We examine the problem of inferring invariants for parametrized systems. Parametrized systems are concurrent systems consisting of an a priori unbounded number of process instances running the same program. Such systems are commonly encountered in many situations including device drivers, distributed systems, and robotic swarms. In this talk, I will describe a technique that enables leveraging off-the-shelf invariant generators designed for sequential programs to infer invariants of parametrized systems. The central challenge in invariant inference for parametrized systems is that naïvely exploding the transition system with all interleavings is not just impractical but impossible. In our approach, the key enabler is the notion of a reflective abstraction that we prove has an important correspondence with inductive invariants. This correspondence naturally gives rise to an iterative invariant generation procedure that alternates between computing candidate invariants and creating reflective abstractions.
This is joint work with Sriram Sankaranarayanan, César Sánchez and Bor-Yuh Evan Chang.
Verification by abstraction and specialisation of constraint logic programs (John Gallagher)
This talk addresses the question of how static analysis and verification techniques for constraint logic programs (CLP) can be applied to other formalisms. First, the systematic translation of a variety of languages and systems to constraint logic programs is reviewed, using the established method of partial evaluation of semantics, proof rules and assertions represented as CLP meta-programs. Second, the approximation of CLP programs using abstract interpretation is described, with implementation techniques using external engines to implement abstract domains. Both numeric and symbolic abstract domains (tree automata) are considered. The talk focusses on verification based on approximation of program models (bottom-up semantics) rather than the alternative approach based on computation-tree operational semantics. The role of so-called “magic set” transformations and related techniques originating in Datalog is presented, enabling query-directed analysis in the same bottom-up framework. Analysis for proof of termination is also briefly reviewed. Third, program transformations that preserve CLP models are presented; this leads to verification approaches in which a CLP program is successively specialised with respect to a query until the property of interest is made explicit. Finally, we look at abstract model checking based on CLP. By applying standard abstract interpretation techniques, it is shown how arbitrary CTL properties can be approximated without resorting to “dual” or “modal” transition systems.
Object-Oriented Programs as Parameterised Systems (Antti Siirtola)
Object-oriented programs possess many parameters like the number of concurrent threads, the number of replicated objects and the number of nested function calls. Consequently, object-oriented programs are naturally modelled as multi-parameterised systems. In this talk, I present one way to model Java-like programs as parameterised systems in a CSP-style formalism. I also show how such parameterised models can be automatically model checked by utilising existing tools.
Underapproximation of Procedure Summaries for Integer Programs (Filip Konecny)
We show how to underapproximate the procedure summaries of recursive programs over the integers using off-the-shelf analyzers for non-recursive programs. The novelty of our approach is that the non-recursive program we compute may capture unboundedly many behaviors of the original recursive program for which stack usage cannot be bounded. Moreover, we identify a class of recursive programs on which our method terminates and returns the precise summary relations without underapproximation. Doing so, we generalize a similar result for non-recursive programs to the recursive case. Finally, we present experimental results of an implementation of our method applied on a number of examples.
Auditing User-provided Axioms in Software Verification Conditions (Paul Jackson)
Sometimes software verification conditions (VCs) include user-provided axioms to capture aspects of program specifications or to give hints to help automatic provers. There is always the danger of mistakes in these axioms; in the worst case, these mistakes introduce inconsistencies and VCs become erroneously provable.
I will discuss the use of an SMT solver to investigate the quality of user-provided axioms employed in the formal verification of SPARK-Ada programs.
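To illustrate the failure mode being audited, here is a toy propositional version (not SPARK or the SMT tooling used in the talk): if the axiom set is inconsistent, every VC follows from it vacuously, so a simple satisfiability check over the axioms alone exposes the problem.

```python
from itertools import product

def consistent(axioms, var_names):
    """Brute-force check: do the axioms have at least one model?

    axioms: predicates over an assignment dict mapping names to booleans.
    An inconsistent axiom set makes any VC 'provable', which is exactly
    what an audit must detect."""
    return any(
        all(ax(dict(zip(var_names, vals))) for ax in axioms)
        for vals in product([False, True], repeat=len(var_names))
    )

# A consistent pair of axioms: (p or q) and (not p) -- satisfied by q.
ok = [lambda a: a["p"] or a["q"], lambda a: not a["p"]]
# Adding (not q) makes the set unsatisfiable, i.e. inconsistent.
bad = ok + [lambda a: not a["q"]]
```

A real auditor hands the axioms to an SMT solver instead of enumerating assignments, but the question posed is the same: is the conjunction satisfiable?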
TBD (David Monniaux)