Rich Model Toolkit COST Action Meeting in Haifa, IL

This meeting is co-located with the Haifa Verification Conference (http://www.research.ibm.com/haifa/conferences/hvc2012/) and will take place in Haifa, Israel, on Sunday, November 4th.

For more information, please contact Eran Yahav and Armin Biere.

On Monday, November 5th, Intel's Annual Symposium on VLSI CAD and Validation takes place at the Technion.

On November 6-8, HVC takes place at the IBM Haifa Research Lab.

The COST meeting will take place at the IBM Haifa Research Lab.

Hotels

Social Events (HVC)

Students' night out!
Time: Nov. 5, 19:00
Place: “The Camel” on the Haifa beach

Join us for a students' evening out the night before the conference at the historic Camel Restaurant and Cafe, one of Haifa's most unique eateries, located on Dado Beach at the southern tip of Haifa, on one of the city's most picturesque beaches. We'll enjoy an evening of beer and refreshments. See http://www.research.ibm.com/haifa/conferences/hvc2012/social.shtml

Suggested Dinner Locations

To register, please send an email with your name and affiliation to noafr@cs.technion.ac.il

Please also register for HVC. If you are only attending the COST meeting, register for the tutorials day only. We need the HVC registration to arrange parking, lunches, and Wi-Fi connections.

Program (Sunday, November 4th)

Time  Title  Speaker
10:00-10:45 Quantitative Abstraction Refinement Pavol Cerny (invited talk)
10:45-11:30 PINCETTE project: validation of changes and upgrades in large software systems Hana Chockler (invited talk)
11:30-11:45 short break
11:45-12:10 Finite-State and Pushdown Games with Multi-dimensional Mean-Payoff Objectives Yaron Velner
12:10-12:35 Privacy for Linked Data Silvia Ghilezan
12:35-14:00 Lunch Break
14:00-14:25 Interpolant Strength in Model Checking Simone Rollini
14:25-14:50 Assisted Verification of Invariance for Parametrized Systems Alejandro Sánchez
14:50-15:15 Interpolation for resolution and superposition Maria Paola Bonacina
15:15-15:30 short break
15:30-15:55 Reductions for Synthesis Procedures Philippe Suter
15:55-16:20 Logico-Numerical Max-Strategy Iteration Pavle Subotic
16:20-16:45 Information flow analysis and temporal logics Markus Rabe
16:45-17:00 short break
17:00-18:00 Management Committee Meeting

Abstracts

Quantitative Abstraction Refinement Pavol Cerny

We propose a general framework for abstraction with respect to quantitative properties, such as worst-case execution time, or power consumption. Our framework also provides a systematic way for counter-example guided abstraction refinement for quantitative properties. The salient aspect of the framework is that it allows anytime verification, that is, verification algorithms that can be stopped at any time (for example, due to exhaustion of memory), and report approximations that improve monotonically when the algorithms are given more time.

We instantiate the framework with a number of quantitative abstractions and refinement schemes, which differ in terms of how much quantitative information they keep from the original system. We introduce both state-based and trace-based quantitative abstractions, and we describe conditions that define classes of quantitative properties for which the abstractions provide over-approximations. We give algorithms for evaluating the quantitative properties on the abstract systems. We present algorithms for counter-example based refinements for quantitative properties for both state-based and segment-based abstractions. We perform a case study on worst-case execution time of executables to evaluate the anytime verification aspect and the quantitative abstractions we proposed.
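
In symbols, the two guarantees described above can be read as follows (notation ours, offered as a reading aid rather than the paper's formal definitions). For a quantitative property $\mathit{val}$ (e.g., worst-case execution time) and a sound abstraction $\alpha$,

    \mathit{val}(\alpha(S)) \;\ge\; \mathit{val}(S),

i.e., the abstract value over-approximates the concrete one; and a refinement sequence $\alpha_1, \alpha_2, \ldots$ yields

    \mathit{val}(\alpha_1(S)) \;\ge\; \mathit{val}(\alpha_2(S)) \;\ge\; \cdots \;\ge\; \mathit{val}(S),

so stopping the algorithm at any point still returns a valid bound, which is what makes the verification "anytime".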

This is joint work with Thomas Henzinger and Arjun Radhakrishna.


PINCETTE project: validation of changes and upgrades in large software systems - unique challenges and suggested solutions Hana Chockler

Changes are made frequently during the lifetime of most systems and can introduce software errors that were not present in the old version, or expose errors that were present before but were never exercised. In addition, upgrades are rolled out gradually, so the old and new versions have to co-exist in the same system. In very large systems, for which a complete re-verification and re-validation after each change is not feasible, we need to be able to verify only the change, and we need to do it efficiently and reliably. The PINCETTE project, funded by the FP7 program of the European Union, proposes to address these challenges with a combination of static and dynamic analysis. In this talk, I will outline several ideas from static analysis, from dynamic analysis, and from their combination that were proposed and implemented in PINCETTE in order to achieve the goal of efficient and reliable verification and validation of evolving systems.


Finite-State and Pushdown Games with Multi-dimensional Mean-Payoff Objectives Yaron Velner

Two-player games on graphs are central to many problems in formal verification and program analysis, such as synthesis and verification of open systems. In this talk, we consider both finite-state game graphs and recursive game graphs (or pushdown game graphs), which can model the control flow of sequential programs with recursion. The objectives we consider are multi-dimensional mean-payoff objectives, where the goal of player 1 is to ensure that the mean-payoff is at least zero in all dimensions. In pushdown games two types of strategies are relevant: (1) global strategies, which depend on the entire global history; and (2) modular strategies, which have only local memory and thus do not depend on the context of invocation, but only on the history of the current invocation of the module.

We present solutions to several fundamental algorithmic questions. Our main contributions are as follows:

(1) We show that finite-state multi-dimensional mean-payoff games can be solved in polynomial time if the number of dimensions and the maximal absolute value of the weights (or rewards) are fixed; if the number of dimensions is arbitrary, the problem is already known to be coNP-complete.

(2) We show that pushdown graphs (or one-player pushdown games) with multi-dimensional mean-payoff objectives can be solved in polynomial time.

(3) For pushdown games under global strategies, both the single- and multi-dimensional mean-payoff objective problems are known to be undecidable, and we show that under modular strategies the multi-dimensional problem is also undecidable (whereas under modular strategies the single-dimensional problem is known to be NP-complete). We show that if the number of modules, the number of exits, and the maximal absolute value of the weights are fixed, then pushdown games under modular strategies with single-dimensional mean-payoff objectives can be solved in polynomial time, and if either the number of exits or the number of modules is unbounded, then the problem is NP-hard.

(4) Finally, we show that a fixed-parameter tractable algorithm for finite-state multi-dimensional mean-payoff games, or for pushdown games under modular strategies with single-dimensional mean-payoff objectives, would imply a solution to the long-standing open problem of the fixed-parameter tractability of parity games.

(Joint work with Krishnendu Chatterjee)
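
For reference, a standard way to state the multi-dimensional mean-payoff objective used above (one common formulation; variants using limsup or non-zero thresholds exist, and the talk's exact definition may differ). Let $w : E \to \mathbb{Z}^k$ assign a $k$-dimensional weight vector to each edge. For an infinite play $\pi = e_0 e_1 e_2 \ldots$, define componentwise

    \mathrm{MP}(\pi) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} w(e_i).

Player 1 wins iff $\mathrm{MP}(\pi) \ge \vec{0}$, i.e., the mean-payoff is at least zero in every dimension.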


Privacy for Linked Data Silvia Ghilezan

The Web of Linked Data aims to introduce a common format and common principles for publishing data on the Web, so that the data can be easily interconnected and exploited. The current recommendation is to use URIs as names and RDF directed-graph data, which can be consumed using a query language such as SPARQL. The lack of privacy mechanisms often discourages people from publishing data in the Web of Linked Data. We propose a formal model which ensures privacy protection of a single RDF triple. In our calculus, processes are those of Horne and Sassone (2011), associated with parallel compositions of RDF triples. These triples describe the web profile of the user who aims to perform an action. An administrator of a data graph in the observed system defines a privacy protection policy for each triple of the data in RDF format. A process running on behalf of a user is allowed to access a triple if the privacy protection policy, applied to the user's profile, gives a positive answer.

(Joint work with Svetlana Jaksic and Jovanka Pantovic)
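
A minimal Python sketch of the kind of access check described above. All names here (Triple, Policy, is_allowed) are illustrative placeholders, not part of the authors' calculus, which is a process calculus rather than executable code:

    from typing import Callable, Dict, Tuple

    # An RDF triple: (subject, predicate, object), each identified by a URI.
    Triple = Tuple[str, str, str]
    # A user's web profile is itself a set of RDF triples describing the user.
    Profile = frozenset
    # The administrator attaches a policy to each triple: a predicate over profiles.
    Policy = Dict[Triple, Callable[[Profile], bool]]

    def is_allowed(policy: Policy, triple: Triple, profile: Profile) -> bool:
        """A process acting on behalf of `profile` may access `triple` iff the
        triple's privacy policy answers positively on that profile."""
        guard = policy.get(triple, lambda _p: False)  # unknown triples: deny
        return guard(profile)

    # Example: the wiki triple may be read only by users whose profile
    # states membership in the project.
    wiki = ("ex:project", "ex:hasWiki", "ex:internalWiki")
    policy: Policy = {
        wiki: lambda p: any(pred == "ex:memberOf" and obj == "ex:project"
                            for (_subj, pred, obj) in p)
    }
    alice = frozenset({("ex:alice", "ex:memberOf", "ex:project")})
    assert is_allowed(policy, wiki, alice)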


Interpolant Strength in Model Checking Simone Rollini

Craig interpolation is a well-known method of abstraction successfully used in both hardware and software model checking. The logical strength of interpolants can affect the quality of the approximations and, consequently, the performance of the model checkers. Recently, it was observed that, for the same resolution proof, a complete lattice of interpolants ordered by strength can be derived.
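
For reference, the standard notion behind this (a textbook definition, stated here for convenience; notation ours). Given formulas $A$ and $B$ with $A \wedge B$ unsatisfiable, a Craig interpolant, in the form commonly used in model checking, is a formula $I$ such that

    A \models I, \qquad I \wedge B \ \text{is unsatisfiable}, \qquad \mathrm{vars}(I) \subseteq \mathrm{vars}(A) \cap \mathrm{vars}(B).

An interpolant $I_1$ is stronger than $I_2$ if $I_1 \models I_2$; this is the ordering under which the interpolants derived from a fixed resolution proof form a complete lattice.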

Most state-of-the-art model checking techniques based on interpolation subject the interpolants to constraints that ensure efficient verification, as, for example, in transition relation approximation for bounded model checking, counterexample-guided abstraction refinement, and function summarization for software update checking. However, in general, these verification-specific constraints are not satisfied by all possible interpolants.

We analyze the restrictions within the lattice of interpolants under which the required constraints are satisfied. This enables investigation of the effect of the strength of interpolants on the particular techniques, while preserving their soundness.

(Joint work with Ondrej Sery and Natasha Sharygina)


Assisted Verification of Invariance for Parametrized Systems Alejandro Sánchez

We study the verification of safety properties of symmetric parametrized systems. These systems consist of an arbitrary and unbounded number of threads running the same program. In this scenario, following standard deductive techniques leads to the need to verify an unbounded collection of verification conditions, which grows with the number of threads involved in the system. To handle this problem, we present a technique based on parametrized invariance rules that reduces the verification of finite- and infinite-state parametrized programs to the analysis of only a finite number of verification conditions. In our approach, all necessary VCs are automatically generated from the program and its specification, independently of the number of threads involved in the system.

In this talk, we present the latest advances in our research as well as some running examples, including a ticket-based mutual exclusion algorithm and a concurrent singly-linked list implementation. Both examples are verified using Leap, a tool under development at the IMDEA Software Institute that implements the parametrized invariance rules. Leap automatically generates all necessary VCs from the program and the property specifications. It also implements decision procedures to verify the VCs automatically and has been extended to interactively assist the user in the verification process. Currently, the decision procedures support programs that manipulate numbers, sets, list layouts, and their combinations, and are built on top of state-of-the-art SMT solvers.
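
As a concrete picture of the kind of parametrized system targeted here, below is a minimal Python sketch of a ticket-based mutual exclusion protocol, run by an arbitrary number of identical threads. This is only an illustration of the verification target, not Leap's input language; the safety property (at most one thread in the critical section, for any number of threads) is the sort of parametrized invariant the rules are designed to discharge with finitely many VCs.

    import itertools
    import threading

    class TicketLock:
        """Ticket-based mutual exclusion: each thread draws a ticket and
        enters the critical section when its number is being served."""

        def __init__(self):
            self._tickets = itertools.count()   # ticket dispenser
            self._serving = 0                   # ticket currently served
            self._draw = threading.Lock()       # makes drawing a ticket atomic

        def acquire(self):
            with self._draw:
                my_ticket = next(self._tickets)
            while self._serving != my_ticket:   # spin until it is our turn
                pass

        def release(self):
            self._serving += 1                  # admit the next ticket holder

    # Any number of threads may run the same protocol; the property
    # "no two threads hold the lock at once" is independent of how
    # many threads there are.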

(joint work with César Sánchez)


Interpolation for resolution and superposition Maria Paola Bonacina

Given two mutually inconsistent formulae, a (reverse) interpolant is a formula implied by one, inconsistent with the other, and containing only symbols they share. Interpolants are used in verification to compute over-approximations of images, refine abstractions, or generate invariants. We consider interpolation for refutations generated by inference systems for first-order logic with equality based on resolution and superposition. These inference systems are at the heart of state-of-the-art theorem provers, may yield decision procedures, and can be integrated into SMT solvers. We present a two-stage approach that computes provisional interpolants, which may contain non-shared symbols, and applies lifting to replace them with quantified variables. The resulting interpolation system is complete, meaning that it extracts an interpolant from any refutation.

(Joint work with Moa Johansson)


Reductions for Synthesis Procedures Philippe Suter

A synthesis procedure acts as a compiler for declarative specifications. It accepts a formula describing a relation between inputs and outputs, and generates a function implementing this relation. This paper presents synthesis procedures for data structures. Our procedures are reductions that lift a synthesis procedure for the elements into synthesis procedures for containers storing these elements. We introduce a framework to describe synthesis procedures as systematic applications of inference rules. We show that, by interpreting both synthesis problems and programs as relations, we can derive and modularly prove transformation rules that are widely applicable, thus simplifying both the presentation and the correctness argument.
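
To make the "compiler for declarative specifications" view concrete, here is one standard way to state what a synthesis procedure guarantees (a common formulation from the synthesis literature, not necessarily the exact one used in the talk). Given a specification $\varphi(\bar a, \bar x)$ relating inputs $\bar a$ to outputs $\bar x$, a synthesis procedure produces a precondition $\mathit{pre}(\bar a)$ and a program $f$ such that

    \mathit{pre}(\bar a) \;\leftrightarrow\; \exists \bar x.\, \varphi(\bar a, \bar x)
    \qquad\text{and}\qquad
    \mathit{pre}(\bar a) \;\rightarrow\; \varphi(\bar a, f(\bar a)).

The reductions presented here lift such a procedure from element specifications to specifications over containers of those elements.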


Logico-Numerical Max-Strategy Iteration Pavle Subotic

Strategy iteration methods are used for solving fixed point equations. It has been shown that they improve precision in static analysis based on abstract interpretation and template polyhedra. However, they are limited to numerical programs. In this talk a method for applying max-strategy iteration to logico-numerical programs is presented. This method computes the least fixed point w.r.t. the abstract domain; in particular, it does not resort to widening.
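
A rough outline of the general max-strategy iteration scheme this builds on (as described in the strategy-iteration literature; details, in particular how each strategy is evaluated, vary between instantiations). The abstract semantics is characterized as the least solution of a system of equations

    x_i = \max(e_{i,1}, \ldots, e_{i,k_i}), \qquad i = 1, \ldots, n,

over monotone right-hand sides. A max-strategy selects one operand $e_{i,j}$ for each maximum; the resulting max-free system is easier to solve (for template domains, typically via linear programming). Starting from a trivial strategy, the current strategy is improved wherever its solution does not yet satisfy the full equations; the solutions increase monotonically until the least fixed point of the original system is reached, so no widening is needed.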

(Joint work with Peter Schrammel, INRIA Grenoble)


Information flow analysis and temporal logics Markus Rabe

Most analysis methods for information flow properties, such as noninterference, do not consider temporal restrictions. In practice, however, such properties rarely come in a purely static form; they involve constraints such as when and under which conditions a variable has to be kept secret. I will present recent results on how to integrate information flow properties into temporal logics by introducing a new modal operator. Finally, I will sketch upcoming work on integrating information flow properties into game logics such as ATL*.
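
For reference, a standard textbook formulation of noninterference for deterministic systems, which the temporal extensions mentioned above generalize (notation ours; the talk's exact definitions may differ). Partition inputs and outputs into high (secret) and low (public) parts. A system $S$ satisfies noninterference if any two input sequences that agree on their low parts produce the same low-observable outputs:

    i_1 =_L i_2 \;\Rightarrow\; \mathit{obs}_L(S, i_1) = \mathit{obs}_L(S, i_2).

Temporal restrictions of the kind discussed in the talk refine when and under which conditions this equality must hold.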