Using Verus from unverified client code #105
Replies: 7 comments 12 replies
-
One more quick note: In our goals document we say that we "don't support unsafe code," but I feel this undersells Verus a little. It's more accurate to say Verus doesn't require you to write the … The Rust "idiom" is that developers who use unsafe code should encapsulate the uses of …
-
FWIW, historically, Dafny imposed many of the conditions in your Proposal list on …
-
This came up when @gz and @matthias-brun were integrating a verified code fragment (a page table implementation) into a larger unverified code base. The outer code base was calling the verified function with arguments that violated its precondition, causing what should have been an impossible stack overflow. @gz suggests that it may be nice to mark some functions as "entry points" to the verified code base, which would emit runtime assertions to check their preconditions. We probably cannot directly generate runtime checks for preconditions in general (they may contain quantifiers, and they often contain spec-only abstract data structures and functions).
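To make the idea concrete, here is a minimal hand-written sketch of such an "entry point", under the assumption that the precondition happens to be simple enough to re-check at runtime (the function names are hypothetical, not from this thread):

```rust
use vstd::prelude::*;

verus! {

// Verified function with a nontrivial precondition: the caller must
// guarantee the divisor is nonzero (hypothetical example).
fn div_fast(a: u64, b: u64) -> (r: u64)
    requires
        b != 0,
    ensures
        r == a / b,
{
    a / b
}

// A hand-written "entry point" for unverified callers: this particular
// precondition can be re-checked at runtime before calling into the
// verified function. Preconditions containing quantifiers or spec-only
// data could not be checked this way.
pub fn div_fast_checked(a: u64, b: u64) -> Option<u64> {
    if b != 0 {
        Some(div_fast(a, b))
    } else {
        None
    }
}

} // verus!
```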
-
Looks like Dafny is also looking at automating wrappers for extern functions: …
-
We are considering marking all functions with a precondition … This is achievable with multiple versions of the …
-
I'd like to be more precise about which functions are actually unsafe to call under which conditions. Consider the following scenario. Suppose someone has implemented a binary search function in safe Rust. Then they decide to verify the function by adding a …

At the very least, I'd like projects to be able to opt in to using unsafe-but-verified code. If a project doesn't use any unsafe library code, I think it should be able to export a safe API, even if that API has requires clauses. But more generally, it would be good to have a story for when it's possible to wrap unsafe operations in a safe API, which is common in Rust. For example, we could have an unsafe …
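As a generic illustration of that common Rust pattern (plain Rust, not Verus, and not the elided example from the comment): an `unsafe` operation is wrapped behind a safe API by establishing its safety condition locally.

```rust
// Safe wrapper around an unsafe slice access: the bounds check inside the
// function discharges the safety condition, so callers need no `unsafe`.
fn first_element(v: &[u64]) -> Option<u64> {
    if v.is_empty() {
        None
    } else {
        // SAFETY: `v` is nonempty, so index 0 is in bounds.
        Some(unsafe { *v.get_unchecked(0) })
    }
}
```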
We could say that since … The difficulty is that functions may have additional …

Here's a proposal. Suppose we have an uninterpreted global boolean flag …

I'd expect that most code would ignore …
-
It's been a while since I've written down my mental model for safe/unsafe, which I've refined a bit over the past year, so I think it's time for an update.

In Rust terms, an API is considered 'safe' if for any client code that calls into the API while satisfying rustc's type system and not using any additional 'unsafe' code, there won't be any UB. The key question is, how can we determine if an API to a Verus-verified codebase is safe? To first approximation, the answer is: we can say that some API is safe if all of its preconditions are trivially 'true', and its implementation is verified with Verus. One way to think about this is to imagine running Verus on the client code. Since the API only has trivially true preconditions, …

### Handling invariants

Now, one problem is that lots of types need internal invariants ('well-formedness predicates') which ordinarily would show up in the preconditions. I believe this will be resolved by implementing something similar to the type invariants discussed here: #962. This allows the verified code to rely on type invariants without having to add explicit preconditions. Even when we consider the possibility of temporarily-broken type invariants, the unverified client code will not be able to break them. To be explicit, a function with this signature can be marked 'safe':

…
(This happens to be the default signature.) This is not the weakest possible signature (that would be …).

### Higher-order executable functions

Higher-order executable functions are a bit of a snag in this picture. Right now, the Verus model is to explicitly model the preconditions and postconditions as part of the FnOnce trait. Therefore, calling a function object would be 'unsafe' under this perspective, since the precondition might be nontrivial. More explicitly, Verus treats the `FnOnce` trait as if it were declared like this:

```rust
pub trait FnOnce<Args: Tuple> {
    type Output;

    extern "rust-call" fn call_once(self, args: Args) -> (output: Self::Output)
        requires call_requires(self, args),
        opens_invariants any,
        ensures ...
}
```

Of course, in Rust, calling a function is safe. Functions marked 'unsafe fn' do not even implement the FnOnce trait. So the way we treat functions currently is at a mismatch with the idea that a safe function should be one with no preconditions.

Now, one option to resolve this would be to walk this back and remove the notion of … Another option would be to have a check that an API doesn't expose any function types with nontrivial preconditions. (It's actually pretty hard to expose a closure type. You either need to use 'dyn' or an existential type. Regardless, it can be done.) This is a bit of a wart, but I don't really see a reason it would be unsound. It would make things tricky if we ever support dyn.

### Conclusion

Now that we have a decently concrete plan for type invariants, I think we can basically nail down what it means for a Verus-verified API to be safe. An API is safe if: …
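As a rough, hypothetical illustration of the "all preconditions are trivially true" criterion (not code from this thread): a verified function like the one below could be exposed to unverified Rust callers without being marked `unsafe`, because no caller can violate a precondition of `true`.

```rust
use vstd::prelude::*;

verus! {

// All of this function's obligations sit in its `ensures` clause; its
// precondition is trivially `true`, so unverified callers cannot misuse it.
pub fn saturating_add(a: u64, b: u64) -> (r: u64)
    ensures
        a + b <= u64::MAX ==> r == a + b,
        a + b > u64::MAX ==> r == u64::MAX,
{
    if b <= u64::MAX - a {
        a + b
    } else {
        u64::MAX
    }
}

} // verus!
```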
-
Typically, we haven't thought much about the question of calling into verified code from trusted client code. The answer is usually just "well, the client has to obey the specifications, or there aren't any guarantees." In the Rust ecosystem, we can tell a slightly cleaner story.
Consider a function like:
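A hypothetical sketch of such a function (the signature, the `external_body` attribute, and the unchecked access are my assumptions for illustration, not the exact code from this post) is an unchecked vector lookup whose memory safety rests entirely on its `requires` clause:

```rust
use vstd::prelude::*;

verus! {

// Hypothetical sketch: the body is trusted (external_body) and performs an
// unchecked access, so memory safety depends on the `requires` clause.
#[verifier::external_body]
fn vec_index_lookup_fast(v: &Vec<u64>, i: usize) -> (r: u64)
    requires
        i < v.len(),
    ensures
        r == v[i as int],
{
    // SAFETY: the Verus precondition guarantees `i < v.len()`; a caller that
    // violates it (e.g., unverified Rust code) triggers undefined behavior.
    unsafe { *v.get_unchecked(i) }
}

} // verus!
```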
Naturally, such a function is easy to use in Verus, which will check that the precondition holds at each call-site. However, this code is not safe to call from vanilla Rust code. Not only would it be logically nonsense to call `vec_index_lookup_fast` with an out-of-bounds index, it would be memory unsafe and undefined behavior. In normal Rust, this function would have to be marked `unsafe`, although here, that really means something more like "safety is dependent on the context of the caller code."

Of course, that concept of context-dependent safety is exactly what the `requires` clause captures. In some sense, Rust's `unsafe` (the keyword on a `fn` declaration) is the same thing as having a precondition, except that the precondition is written in English. Conversely, Rust requires that any function without the `unsafe` keyword be memory safe if called from any context.
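For contrast, here is what that same idea looks like in plain Rust, where the precondition can only live in an English "Safety" comment rather than a machine-checked `requires` clause (a hypothetical sketch, not code from the post):

```rust
/// Plain-Rust counterpart of a `requires` clause: the precondition exists,
/// but only as an English "Safety" comment.
///
/// # Safety
/// `p` must be non-null, aligned, and point to a valid, initialized `u64`.
unsafe fn read_u64(p: *const u64) -> u64 {
    // SAFETY: upheld by the caller per the contract above.
    unsafe { *p }
}
```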
### Proposal

When Verus exports code to be callable by vanilla Rust code, it should be marked `unsafe` unless it meets all the following criteria:

- … `#[proof]` arguments …

Likewise, an exported trait will need to be marked `unsafe` unless it meets some carefully chosen criteria.

For this to really work, we do need to work out how to handle encapsulated invariants, i.e., we need to be able to have a type `T` with some private, internal invariants, which clients of `T` don't have to know about as long as they don't break modularity. (Without that feature, the exported functions would have a bunch of boilerplate requires/ensures like `foo.well_formed()`.) Also, such a feature would generally reduce boilerplate, although it's probably something for a separate discussion.
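To make the boilerplate concern concrete, here is a hypothetical sketch (the type and predicate names are invented for illustration) of what an exported function looks like when the well-formedness predicate cannot be encapsulated:

```rust
use vstd::prelude::*;

verus! {

// Hypothetical type whose internal invariant relates its two fields.
pub struct Counter {
    count: u64,
    limit: u64,
}

impl Counter {
    // The well-formedness predicate that clients would rather not know about.
    pub closed spec fn well_formed(self) -> bool {
        self.count <= self.limit
    }

    // Without encapsulated invariants, every exported function has to restate
    // the predicate in its requires and ensures clauses.
    pub fn increment(&mut self)
        requires
            old(self).well_formed(),
        ensures
            self.well_formed(),
    {
        if self.count < self.limit {
            self.count = self.count + 1;
        }
    }
}

} // verus!
```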