I was idly thinking about verification for mutation-based linked structures. A nice thing about bidirectional/doubly-linked structures is that you have an inverse function to work with as opposed to merely an injectivity and surjectivity property, so you don't need a Cantor-Schröder-Bernstein-like argument to pass between the two. More prosaically this also lets you catch functional correctness/memory corruption bugs by checking local invariants like next[prev[x]] = x and prev[next[x]] = x.
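For concreteness, here is a minimal sketch in Python of that local check, with `next`/`prev` represented as plain dicts (the names and representation are my own illustration, not anything from the thread):

```python
def check_links(next, prev, nodes):
    """Check the local inverse invariants for a doubly-linked structure.

    `next` and `prev` map each node to its successor/predecessor.
    A corrupted pointer shows up as a failure of one of these purely
    local checks; no global traversal is needed.
    """
    for x in nodes:
        if next[prev[x]] != x:
            return False  # prev is not inverted by next at x
        if prev[next[x]] != x:
            return False  # next is not inverted by prev at x
    return True

# A 3-node circular doubly-linked list: 0 <-> 1 <-> 2 <-> 0.
nxt = {0: 1, 1: 2, 2: 0}
prv = {0: 2, 1: 0, 2: 1}
assert check_links(nxt, prv, [0, 1, 2])

# Corrupt one pointer: the local check catches it immediately.
prv[1] = 2
assert not check_links(nxt, prv, [0, 1, 2])
```

Note that each check only touches a node and its immediate neighbors, which is what makes this usable as a cheap runtime assertion.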
=> More informations about this toot | More toots from pervognsen@mastodon.social
I think this would mostly help in formal verification if you're relying on model checking of data-type invariants or something like that. If you're doing deductive verification with a proof assistant I don't think it would really help (or hurt) the proof if your data structure carries an inverse witness as opposed to an injectivity/surjectivity property.
=> More informations about this toot | More toots from pervognsen@mastodon.social
@pervognsen reminds me of how in TLA+, invariants are much faster to model-check than temporal properties. Would the difference between "not exists two list elements with the same next" and "next[prev[x]] = x" be of similar magnitude?
=> More informations about this toot | More toots from wolf480pl@mstdn.io
@pervognsen Can you do something interesting for verifying reachability invariants? E.g. "Either a node is in the freelist or it's reachable from one of the following roots by following next pointers."
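One way to read that invariant operationally is as a runtime check rather than a verified property. Here is a sketch in Python (freelist/roots representation and all names are my own illustration): every node must be covered by either the freelist chain or by forward reachability from the roots.

```python
def reachable_from(roots, next):
    """All nodes reachable from `roots` by following next pointers."""
    seen, stack = set(), [r for r in roots if r is not None]
    while stack:
        x = stack.pop()
        if x in seen:
            continue
        seen.add(x)
        if next[x] is not None:
            stack.append(next[x])
    return seen

def check_ownership(nodes, freelist_head, roots, next):
    """Every node is either on the freelist or reachable from some root."""
    owned = reachable_from([freelist_head], next) | reachable_from(roots, next)
    return all(x in owned for x in nodes)

# Live list 0 -> 1 rooted at 0; freelist 2 -> 3 -> 4.
nxt = {0: 1, 1: None, 2: 3, 3: 4, 4: None}
assert check_ownership([0, 1, 2, 3, 4], 2, [0], nxt)

# Leak node 4 by unlinking it from the freelist: the invariant fails.
nxt[3] = None
assert not check_ownership([0, 1, 2, 3, 4], 2, [0], nxt)
```

The verification question in the thread is exactly about expressing the transitive closure implicit in `reachable_from` inside a decidable logic, which the replies below take up.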
=> More informations about this toot | More toots from tmyklebu@mastodon.gamedev.place
@tmyklebu @pervognsen As you probably know, transitive closure is inexpressible in first-order logic (a fun exercise with the compactness theorem), and adding it to FOL quickly leads to undecidability. Finding decidable fragments of reachability was once a hot research area, but seems to have died off. Here's one of the last big publications from that line of work: https://csaws.cs.technion.ac.il/~shachari/dl/popl2014.pdf
=> More informations about this toot | More toots from zwarich@hachyderm.io
@tmyklebu @pervognsen The other more logically satisfactory option is to use monadic second-order logic over tree structures, e.g. https://www.brics.dk/mona/papers/graph-types/article.pdf. Unfortunately, all of the decision procedures for MSOL are automata-based and haven't benefitted from the improvements in SAT solving.
=> More informations about this toot | More toots from zwarich@hachyderm.io
@tmyklebu @pervognsen Another amusing strategy is to deliberately accept incompleteness and restrict the class of proofs you consider to those that basically just correspond to successive unfolding of definitions:
https://madhu.cs.illinois.edu/pldi13.pdf
I'm curious to know the simplest program/property for which this strategy fails.
=> More informations about this toot | More toots from zwarich@hachyderm.io
@tmyklebu @pervognsen My slow acceptance of the limitations of this whole line of research is a large part of what made me accept that if program verification ever sees widespread usage, proof assistants will play a role.
=> More informations about this toot | More toots from zwarich@hachyderm.io
@zwarich @tmyklebu Have you seen the kind of mixed style you can use in Dafny? https://dafny.org/teaching-material/Lectures/2-3-Logic-ProvingByExplaining.html and https://dafny.org/teaching-material/Lectures/2-4-Logic-ProvingByConvincing.html
=> More informations about this toot | More toots from pervognsen@mastodon.social
@pervognsen @tmyklebu No, I actually haven't. It seems like a nicer, cleaned up version of VeriFast: https://lirias.kuleuven.be/retrieve/121283/
Also, it seems like VeriFast has been resurrected with Rust support?
https://github.com/verifast/verifast
=> More informations about this toot | More toots from zwarich@hachyderm.io
@zwarich @tmyklebu Yeah, the main dev seems to have completed the AWS Rust std verification challenge for LinkedList with it: https://github.com/verifast/verifast/blob/master/tests/rust/safe_abstraction/linked_list.rs. I gotta say the proof code is a little daunting at a glance. :)
=> More informations about this toot | More toots from pervognsen@mastodon.social
@pervognsen @tmyklebu Wow, that's pretty rough going. As for the Dafny approach, I'm curious whether you could find the right combination of syntactic sugar and automation to avoid requiring a separate tactic language for practical software verification.
=> More informations about this toot | More toots from zwarich@hachyderm.io
@zwarich @tmyklebu Yeah, I've been playing with Dafny the last few days and the way you implicitly guide the ATP along feels pretty different: a bit more like a paper proof, where it's your job to mention the relevant properties and lemmas but you don't have to be super specific about how to apply them. The IDE experience with VS Code is very clean; I recommend checking it out, and maybe picking up the book Program Proofs by the main developer if you want something to work through.
=> More informations about this toot | More toots from pervognsen@mastodon.social
@zwarich @tmyklebu The main downside compared to something like Coq or Lean is that failures are a lot less informative, unless you've already subdivided your proof obligation into small enough pieces that the mere failure to prove a piece gives a strong indication of what went wrong/what is missing. Dunno how much of that could be improved given the relatively black-box nature of ATPs.
=> More informations about this toot | More toots from pervognsen@mastodon.social
@zwarich @tmyklebu So the "debugging" loop for me has been starting with an initial, usually coarse proof obligation that fails and subdividing it until it's clear what part it couldn't figure out on its own. Often the final, working proof obligation can be pretty coarse, so the subdivision process is often only for debugging and you can unwind it once you know what's going on.
=> More informations about this toot | More toots from pervognsen@mastodon.social
@pervognsen @zwarich @tmyklebu > black box nature of ATPs.
Why3 has a Coq backend, so you can try to handle the proof by hand :)
=> More informations about this toot | More toots from pkhuong@discuss.systems
@pervognsen @tmyklebu This Dafny Workshop talk on industrial coding style is pretty telling:
https://www.youtube.com/watch?v=IicF7UYZ4UY
It seems that automatic management of the context quickly gets out of hand in a large project, and you need to shift to a style of using opaque functions, API lemmas, etc. They even abandon the use of postconditions!
=> More informations about this toot | More toots from zwarich@hachyderm.io
@pervognsen @tmyklebu A more animated talk on the same work:
https://www.youtube.com/watch?v=SiqtQ36vNFc
In a way, this particular portion of the project (mechanized metatheory of what is essentially a PL) seems like the worst-case scenario for something like Dafny. But it must have been sufficiently frustrating overall, since they rewrote the entire thing in Lean.
=> More informations about this toot | More toots from zwarich@hachyderm.io
@zwarich Thanks, that's an interesting talk. The heavy use of opaque API lemmas is also something that feels like it should be done more with Coq/Lean-style ITPs. Leaning too heavily on definitional equality as a shortcut always felt wrong to me. One thing I appreciated about the simple Natural Number Game for Lean is that all the function definitions were sealed. People coming to ITPs from functional programming seem maybe too eager to rely on definitional equality.
=> More informations about this toot | More toots from pervognsen@mastodon.social
@zwarich Just wrote this example, you can see how relatively low the mark-up burden is for something classical like this.
method Squares(a: array<int>)
  modifies a
  ensures forall i :: 0 <= i < a.Length ==> a[i] == i * i
{
  var jj, djj := 0, 1;
  for j := 0 to a.Length
    invariant forall i :: 0 <= i < j ==> a[i] == i * i
    invariant jj == j * j && djj == 2 * j + 1
  {
    a[j], jj, djj := jj, jj + djj, djj + 2;
  }
}
=> More informations about this toot | More toots from pervognsen@mastodon.social
@zwarich @tmyklebu This reminds me that reachability of states is something you can express in CTL but not in LTL, e.g. the classic resetability property for sequential circuits where you want every state reachable from an initial state to be able to reach a designated reset state.
=> More informations about this toot | More toots from pervognsen@mastodon.social
@zwarich @tmyklebu Although on second thought the situation is not the same for several reasons, e.g. in LTL you can't even express that it should be able to reach the reset state in one step, so LTL's shortfall isn't really about the transitive closure: LTL fundamentally doesn't have a way to express the existential path quantifier from CTL.
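For a concrete rendering of the resetability property (AG EF reset in CTL notation): on an explicit-state transition graph, EF is just a backward reachability fixpoint. A small sketch in Python, with the state machine and all names made up for illustration:

```python
def ef(targets, succs):
    """States satisfying EF targets: least fixpoint of backward reachability."""
    sat = set(targets)
    changed = True
    while changed:
        changed = False
        for s, ts in succs.items():
            if s not in sat and any(t in sat for t in ts):
                sat.add(s)
                changed = True
    return sat

def resettable(init, reset, succs):
    """AG EF reset, restricted to the states reachable from init."""
    can_reset = ef({reset}, succs)
    reach, stack = set(), [init]
    while stack:
        s = stack.pop()
        if s not in reach:
            reach.add(s)
            stack.extend(succs[s])
    return reach <= can_reset

# 0 -> 1 -> 2 -> 0 is a cycle through the reset state 0;
# state 3 is a trap, but unreachable, so resetability holds.
succs = {0: [1], 1: [2], 2: [0], 3: [3]}
assert resettable(0, 0, succs)

# Make the trap reachable: resetability fails.
succs[2] = [0, 3]
assert not resettable(0, 0, succs)
```

The universal quantifier over reachable states (AG) and the existential path quantifier (EF) are both explicit here, which is exactly the structure LTL can't express.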
=> More informations about this toot | More toots from pervognsen@mastodon.social
@zwarich @tmyklebu Trying to refresh my memory: the Wikipedia page does mention that if you add propositional quantification to CTL (in addition to the path quantification) you get a monadic second-order logic, which I guess intersects with what you were mentioning: https://en.wikipedia.org/wiki/Computation_tree_logic#Relations_with_other_logics
=> More informations about this toot | More toots from pervognsen@mastodon.social