Research Themes at LVL

Human-Computer Interaction Verification

In critical applications, such as autopilots in civil aviation, confusion when operating the system can lead to dangerous situations. Human-Computer Interaction analysis raises issues of system observability, abstraction and simulation, all of which are well studied in the formal methods field. We are working on using formal techniques to generate interfaces for critical systems that satisfy given controllability properties.

Functional Requirements Coverage

The classical notion of structural coverage measures to what extent a test suite exercises all items (statements, functions, variables, ...) of a program. If requirements are formally specified, for example using temporal logic, a notion of requirements coverage can be similarly defined, characterizing a test suite that covers a given requirement. With researchers at NASA Ames and Middlesex University, we have formalized this notion and are studying its application to the generation of test suites for autonomous space-bound applications.
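As a toy illustration of the distinction (not the formalization developed with NASA Ames and Middlesex University), the sketch below checks a temporal requirement "globally, a request is eventually granted" over finite test traces, and counts a trace as *covering* the requirement only when it satisfies it non-vacuously, i.e. when the antecedent actually fires:

```python
# Toy requirements-coverage sketch over finite traces.
# States are dicts of Boolean observations; the requirement is
# G(req -> F grant), read over the finite trace.

def holds(trace):
    """True if every 'req' is followed (at or after it) by a 'grant'."""
    for i, state in enumerate(trace):
        if state.get("req") and not any(s.get("grant") for s in trace[i:]):
            return False
    return True

def covers(trace):
    """Non-vacuous satisfaction: the requirement holds AND the
    antecedent 'req' occurs at least once in the trace."""
    return holds(trace) and any(s.get("req") for s in trace)

suite = [
    [{"req": False}, {"req": False}],                # satisfies vacuously
    [{"req": True}, {"req": False, "grant": True}],  # actually covers
]
print([covers(t) for t in suite])  # [False, True]
```

Both traces satisfy the requirement, but only the second exercises it; a coverage-directed test generator would aim to produce traces of the second kind for every requirement.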

Symbolic Model Checking of Asynchronous Systems

Model checking is an efficient technique for verifying properties of reactive systems. BDD-based symbolic model checking is a well-known approach to the state space explosion problem in model checking, though BDDs can still suffer from space blow-up. More recently, bounded model checking using SAT-based procedures has emerged as a very successful alternative to BDDs. However, these approaches give poor results when applied to models with a lot of asynchronism. On the other hand, partial-order reduction (POR) methods reduce the number of states explored by model checking by avoiding the exploration of equivalent interleavings of concurrent events. We are developing, implementing and experimenting with algorithms that combine partial-order reduction and symbolic model checking, to allow more efficient verification of models featuring asynchronous processes.
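The symbolic idea can be sketched in a few lines, with Python sets standing in for BDDs (an assumption made purely for readability; real tools manipulate decision diagrams or SAT formulas). Two asynchronous processes each increment a local counter, and reachability is computed by applying the transition relation to whole state *sets* until a fixpoint, rather than enumerating interleavings state by state:

```python
# Minimal symbolic reachability sketch. Sets of states stand in
# for BDDs; states are pairs (x, y) of two process-local counters.

def post(states):
    """Image of a state set under the interleaved transitions."""
    nxt = set()
    for (x, y) in states:
        if x < 2:
            nxt.add((x + 1, y))  # process 1 takes a step
        if y < 2:
            nxt.add((x, y + 1))  # process 2 takes a step
    return nxt

reached = {(0, 0)}
frontier = {(0, 0)}
while frontier:                       # fixpoint iteration
    frontier = post(frontier) - reached
    reached |= frontier

print(len(reached))  # 9: every (x, y) with x, y in {0, 1, 2}
```

All 9 interleaved states are reached here; since the two increments are independent, a partial-order reduction would instead commit to one ordering of the commuting steps and visit far fewer intermediate states, which is precisely the saving the combined algorithms aim to obtain symbolically.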

Counterexamples in Symbolic Model Checking

An advantage of model checking is its ability to report a counter-example, i.e. an execution sequence violating the checked property. Most model checkers produce counter-examples, but these are generally limited to linear traces. We work on defining and combining solutions to improve the diagnostic information returned by symbolic model checkers. The main directions studied are trace annotation, the generation of branching counter-examples, the minimization of counter-examples, and the identification of critical diagnostic information.
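The linear counter-examples produced by most tools can be illustrated with a small sketch (an explicit-state breadth-first search with parent links is assumed here for simplicity; symbolic checkers instead reconstruct the trace from the frontiers of the fixpoint computation):

```python
# Sketch of linear counter-example extraction: find a shortest
# execution from the initial state to a state violating the property.
from collections import deque

def counterexample(init, successors, is_bad):
    """Return a shortest run from init to a bad state, or None."""
    parent = {init: None}
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if is_bad(s):
            trace = []
            while s is not None:     # walk parent links back to init
                trace.append(s)
                s = parent[s]
            return trace[::-1]
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

# Toy model: a counter that is supposed to stay below 3.
print(counterexample(0, lambda s: [s + 1], lambda s: s >= 3))
# [0, 1, 2, 3]
```

Such a trace shows *one* violating run; the branching counter-examples, minimization, and annotation mentioned above aim to enrich exactly this kind of output.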