
Implementation of an algorithm for checking subject reduction #200

Open — wants to merge 28 commits into master

Conversation

wujuihsuan2016

The following code defines the relation used in our algorithm for checking subject reduction.
Further modifications may follow, since some details of the algorithm still need to be clarified.

@wujuihsuan2016 wujuihsuan2016 requested a review from fblanqui March 25, 2019 16:56

module Eset = Set.Make(
struct
type t = (term * term)
Member

Parentheses are useless here.

@fblanqui (Member)

Hi Jui-Hsuan. Thanks for your first PR!

Note that this PR should modify the file sr.ml.

@fblanqui (Member)

The set of equations is already computed by the type inference algorithm. You don't have to compute it again. The main problem is to turn this set of equations into a confluent and terminating rewrite system using completion. To this end, you should export this set of equations to some external tool, call it, and get back the convergent rewrite system, if any. Then, you can check that the RHS has the type of the LHS modulo these rewrite rules, using the current type-checking function.

module Eset = Set.Make(
struct
type t = (term * term)
let compare = Pervasives.compare
Contributor

Note that Pervasives.compare will not work to compare terms, since their representation contains closures (in Bindlib Binders).

@wujuihsuan2016 (Author)

> The set of equations is already computed by the type inference algorithm. You don't have to compute it again. The main problem is to turn this set of equations into a confluent and terminating rewrite system using completion. To this end, you should export this set of equations to some external tool, call it, and get back the convergent rewrite system, if any. Then, you can check that the RHS has the type of the LHS modulo these rewrite rules, using the current type-checking function.

I took a look at the MKBtt tool (http://cl-informatik.uibk.ac.at/software/mkbtt/index.php) today. According to the documentation, the yices 1.11 headers and libraries are required to compile the source code, but I haven't found them online yet. I also tried the binary executable that is available online. It works well for generating complete systems. However, I always get the message "the process terminated with exit code 127" when using the -cert option to obtain certifiable proofs, and I am still trying to figure out where it comes from. Besides, it seems that confluence.ml already contains a function that prints in the TRS format, and it can be adapted for our purpose (adding equations).

@rlepigre (Contributor)

> The main problem is to turn this set of equations into a confluent and terminating rewrite system using completion. To this end, you should export this set of equations to some external tool, call it, and get back the convergent rewrite system, if any.

Wouldn't it be easier to implement completion directly? That is not very hard, and it would be a lot faster than having to call an external tool and converting the data both ways.

@wujuihsuan2016 (Author)

> Wouldn't it be easier to implement completion directly? That is not very hard, and it would be a lot faster than having to call an external tool and converting the data both ways.

In fact, the traditional Knuth-Bendix completion algorithm requires a reduction ordering on terms, which is in general supplied by the user; this is not very convenient in practice. The MKBtt tool uses termination tools to deal with this issue. There are more details here (http://cl-informatik.uibk.ac.at/software/mkbtt/papers/SWKM08.pdf). I am not sure this feature can be easily implemented.

@wujuihsuan2016 wujuihsuan2016 changed the title Define the relation used in our algorithm for checking subject reduction Implementation of an algorithm for checking subject reduction Mar 26, 2019
@fblanqui (Member)

fblanqui commented Mar 26, 2019

Rodolphe, completion is very hard in general (harder than termination alone!). And, as we can already export rules to the TRS (for confluence) and XTC (for termination) formats, it should not be too difficult to use these exports to call external completion tools too. But you are right: as we only have closed equations, we could also implement an ad hoc optimized completion algorithm using any total reduction ordering on terms (e.g. an LPO over an arbitrary total ordering on symbols). This is fine if, in the end, we get no critical pairs with user-defined rewrite rules. Otherwise, we may have to rely on a general completion tool, and more theoretical work would be useful in that case. So it may be interesting to have both.
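For reference, an LPO over a total symbol precedence is short to implement. The following is only a sketch over a toy first-order term type (not Lambdapi's actual term representation), with made-up names:

```ocaml
(* Toy first-order term type; Lambdapi's actual [term] type differs. *)
type fo_term = Sym of string * fo_term list

(* [lpo prec s t] returns [true] iff [s >_lpo t], where [prec f g] holds
   iff symbol [f] is greater than symbol [g] in the (total) precedence. *)
let rec lpo (prec : string -> string -> bool) s t =
  match s, t with
  | Sym (f, ss), Sym (g, ts) ->
    (* (1) Some argument of [s] is equal to or greater than [t]. *)
    List.exists (fun si -> si = t || lpo prec si t) ss
    (* (2) [f > g] in the precedence and [s] dominates every argument of [t]. *)
    || (prec f g && List.for_all (fun ti -> lpo prec s ti) ts)
    (* (3) [f = g]: compare the argument lists lexicographically. *)
    || (f = g
        && List.for_all (fun ti -> lpo prec s ti) ts
        && lex_gt prec ss ts)

(* [lex_gt prec ss ts] is the lexicographic extension of [lpo]. *)
and lex_gt prec ss ts =
  match ss, ts with
  | si :: ss', ti :: ts' ->
    if si = ti then lex_gt prec ss' ts' else lpo prec si ti
  | _, _ -> false
```

Since the equations are closed, any total precedence on symbols makes this a total reduction ordering on the closed terms involved.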

@fblanqui (Member)

By the way, I see that the exports to TRS and XTC are not documented. @GuillaumeGen: could you please open a PR adding, at the beginning of termination.ml, some details about the corresponding export functions? In particular, it should explain what is done with abstractions and products, since TRS is an untyped first-order setting. As for XTC, it should explain how dependent types are handled. Is it the current simply-typed version of XTC, or its extension with dependent types? @wujuihsuan2016: you can probably do the same thing for confluence.ml by reverse-engineering the code.

@fblanqui (Member)

@wujuihsuan2016 : there exist various completion tools. It would be good to try several.

@wujuihsuan2016 (Author)

In termination.ml, a qualified identifier dir1. ... .dirn.id is translated to c_dir1_..._dirn_id. It is impossible to retrieve a symbol from its translation (c_a_b_id is the translation of both a.b.id and a_b.id, for example). Thus, it might be useful to keep a mapping translation(sym) -> sym.

@fblanqui (Member)

Is there any restriction on identifiers in the XTC format? If not — as I suspect, since XTC is an XML format — then there is no reason for changing names, and this should be fixed.

It is perhaps different for the TRS format. Question: what are the restrictions on identifiers in the TRS format? In particular, is "." forbidden? If not, then there is no reason for changing names either, and this should be fixed too.

@fblanqui (Member)

In the TRS format (see http://project-coco.uibk.ac.at/problems/trs.php), we have:

id is any nonempty sequence of characters not containing:
- whitespace
- the characters ( ) " , |
- the sequences -> == COMMENT VAR RULES

So, dots can be used in names.
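To illustrate, here is a minimal problem in the TRS format using dotted identifiers (the symbol names are made up for this example): VAR declares the variables and RULES lists the rewrite rules.

```
(VAR x y)
(RULES
  nat.add(nat.zero, y) -> y
  nat.add(nat.succ(x), y) -> nat.succ(nat.add(x, y))
)
```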

In the XTC format (see http://cl2-informatik.uibk.ac.at/mercurial.cgi/TPDB/file/tip/xml/xtc.xsd), there are no restrictions on names.

@rlepigre (Contributor)

For the record, I implemented the translation to the TRS format based on what was done in Dedukti, hence using the same translation conventions. It would indeed be great if we could preserve more identifiers. To rule out any problem with variables named COMMENT, VAR or RULES, I don't see any other solution than to prefix every identifier coming from Lambdapi with some sequence of characters (e.g., c_). That also prevents name clashes with the locally defined symbols (see the prelude).

For the XTC format, one has to be careful with special characters (as in any XML format), but they can be easily escaped.

In both cases, somewhat arbitrary sequences of characters can arise in Lambdapi through the {|...|} syntax. Thus some care should be taken, and a first approximation could be to fail when the translation reaches such an identifier.

For exchanging arbitrary identifiers, one solution is to use some bijective encoding. As an example, sequences of characters like {|...|} could be base-64-encoded, and then prefixed with some reserved prefix. And decoding would then be the obvious thing. In any case, it may be a good idea to define encoding / decoding functions that would always be used by the translation.
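As a sketch of such a bijective encoding — using hexadecimal rather than base 64 to stay dependency-free; the e_ prefix and the function names are arbitrary choices for this example, not an agreed API:

```ocaml
(* Sketch of an injective encoding for escaped identifiers.  Only the idea
   of "reserved prefix + bijective encoding of the raw bytes" matters. *)
let encode_ident (id : string) : string =
  let buf = Buffer.create (2 + 2 * String.length id) in
  Buffer.add_string buf "e_";
  String.iter
    (fun c -> Buffer.add_string buf (Printf.sprintf "%02x" (Char.code c)))
    id;
  Buffer.contents buf

(* [decode_ident s] inverts [encode_ident], returning [None] on strings
   that are not in its image. *)
let decode_ident (s : string) : string option =
  let n = String.length s in
  if n < 2 || String.sub s 0 2 <> "e_" || (n - 2) mod 2 <> 0 then None
  else
    try
      let buf = Buffer.create ((n - 2) / 2) in
      let i = ref 2 in
      while !i < n do
        Buffer.add_char buf (Char.chr (int_of_string ("0x" ^ String.sub s !i 2)));
        i := !i + 2
      done;
      Some (Buffer.contents buf)
    with Failure _ -> None
```

A base-64 variant would be structurally identical, just denser.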

@@ -0,0 +1,147 @@
(** Completion procedure *)
Member

Specify that this is a completion procedure for closed equations.

@fblanqui (Member)

General remark: for computing critical pairs, do not use the unification algorithm implemented in unif.ml, but define your own syntactic first-order unification algorithm.
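Such a syntactic first-order unification algorithm is short. Here is a self-contained sketch over a toy term type — in Lambdapi, Var would be a metavariable of arity 0 and the term type would be the real one:

```ocaml
(* Toy first-order terms: variables are integers. *)
type fot =
  | Var of int
  | App of string * fot list

exception Unsolvable

(* [walk subst t] dereferences the head of [t] through the substitution
   [subst] (an association list from variables to terms). *)
let rec walk subst t =
  match t with
  | Var x ->
    (match List.assoc_opt x subst with
     | Some u -> walk subst u
     | None -> t)
  | App _ -> t

(* [occurs subst x t] checks whether [Var x] occurs in [t] modulo [subst]. *)
let rec occurs subst x t =
  match walk subst t with
  | Var y -> x = y
  | App (_, ts) -> List.exists (occurs subst x) ts

(* [unify subst eqs] extends [subst] into a solution of the equations
   [eqs], or raises [Unsolvable]. *)
let rec unify subst eqs =
  match eqs with
  | [] -> subst
  | (t1, t2) :: eqs -> (
      match walk subst t1, walk subst t2 with
      | Var x, Var y when x = y -> unify subst eqs
      | Var x, t | t, Var x ->
        if occurs subst x t then raise Unsolvable;
        unify ((x, t) :: subst) eqs
      | App (f, ts1), App (g, ts2) ->
        if f <> g || List.length ts1 <> List.length ts2 then raise Unsolvable;
        unify subst (List.combine ts1 ts2 @ eqs))
```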

open Extra

(** total strict order *)
type 'a order = 'a -> 'a -> bool
@fblanqui (Member), Apr 10, 2019

Please, use:

type 'a cmp = 'a -> 'a -> int
(* [cmp x y] returns
   - [0] when x = y
   - [r < 0] when x < y
   - [r > 0] when x > y *)

Then, there is no need to introduce gt, ge, etc.


(** [ord_lex ord] calculates the lexicographic order corresponding to the
alphabetical order [ord] *)
let ord_lex : 'a order -> ('a list) order = fun ord l1 l2 ->
Member

parentheses are not needed

Member

Moving from ord to cmp, we need:

let lex : 'a cmp -> 'a list cmp
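Moving to cmp, a sketch of what lex could look like (one plausible implementation, not the final API):

```ocaml
(* [cmp] is the type of total comparison functions, as in [Stdlib.compare]. *)
type 'a cmp = 'a -> 'a -> int

(* [lex cmp] lifts a comparison on elements to the lexicographic
   comparison on lists (a strict prefix is smaller). *)
let rec lex : 'a cmp -> 'a list cmp = fun cmp l1 l2 ->
  match l1, l2 with
  | [], [] -> 0
  | [], _ :: _ -> -1
  | _ :: _, [] -> 1
  | x1 :: l1, x2 :: l2 ->
    let c = cmp x1 x2 in
    if c <> 0 then c else lex cmp l1 l2
```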

Member

Idem for lpo.

(** [ord_lex ord] calculates the lexicographic order corresponding to the
alphabetical order [ord] *)
let ord_lex : 'a order -> ('a list) order = fun ord l1 l2 ->
let rec elim_comm l1 l2 = match (l1, l2) with
Member

Please, follow the style and indentation of master files.

Member

What is the meaning of "elim_comm"?


type rule_ = sym * pp_hint * rule

(** [cp builtins r1 r2] calculates the critical pairs of the rules r1 and r2
Member

calculates -> computes

let tk = _Meta mk (Array.map _Vari vs) in
Bindlib.unbox (build_prod k tk)

type rule_ = sym * pp_hint * rule
Member

  1. rule_ is not nice
  2. pp_hint doesn't seem useful. Please remove it.

Then, you can do:

type rule = sym * Term.rule

let arity1 = Bindlib.mbinder_arity r1.rhs in
let arity2 = Bindlib.mbinder_arity r2.rhs in
let cps = ref [] in
let rec to_m : int -> (meta option) array -> term -> tbox = fun k metas t ->
Member

If this function is taken from somewhere else, it would be better to not duplicate the code, but factorize it instead.

(** [build_meta_type k] builds the type “∀(x₁:A₁) (x₂:A₂) ⋯ (xk:Ak), B” where
    “x₁” through “xk” are fresh variables, “Ai = Mi[x₁,⋯,x(i-1)]”, “Mi” is a
    new metavar of arity “i-1” and type “∀(x₁:A₁) ⋯ (x(i-1):A(i-1)), TYPE”. *)
let build_meta_type : int -> term = fun k ->
Member

If this function is taken from somewhere else, it would be better to not duplicate the code, but factorize it instead.


(** [to_rule (lhs, rhs)] translates the pair [(lhs, rhs)] of closed terms into
a rule. *)
let to_rule : term * term -> sym * rule = fun (lhs, rhs) ->
Member

rename into make_rule?

a rule. *)
let to_rule : term * term -> sym * rule = fun (lhs, rhs) ->
let (s, args) = get_args lhs in
let s = get_symb s in
Member

remove this let

is added into [deps] iff there is an equation (l, r) in [eqs] such that
(l = s and t is a proper subterm of r) or (r = s and t is a proper subterm
of l). In this case, we also add s and t into [symbs]. *)
let find_deps : (term * term) list -> string list * (string * string) list
Member

This seems to be related to the computation of dependency pairs. Check with @GuillaumeGen whether you could factorize/share some code.

Member

Complexity of the algorithm?

(fun (t1, t2) -> find_dep_aux (t1, t2); find_dep_aux (t2, t1)) eqs;
(!symbs, !deps)

module StrMap = Map.Make(struct
Member

This is already defined in basics.

let compare = compare
end)

exception Not_DAG
Member

Comment missing!


(** [topo_sort symbs edges] computes a topological sort on the directed graph
[(symbs, edges)]. *)
let topo_sort : string list -> (string * string) list -> int StrMap.t
Member

This should be defined in Extra.

Member

More explanations required. What is the resulting map?

Member

You should add more comments inside the code for explaining what you are doing.

Member

Complexity of the algorithm?

Member

You should say that this function may raise some exception. When? Why?
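To make the requested documentation concrete, here is a self-contained sketch of what topo_sort could look like (naive depth-first version, roughly quadratic), with the result map and the exception documented. It redefines StrMap and Not_DAG locally so the snippet stands alone; it is an illustration, not the PR's actual code:

```ocaml
module StrMap = Map.Make (String)

(* [Not_DAG] is raised when the input graph contains a cycle. *)
exception Not_DAG

(* [topo_sort symbs edges] returns a map from each symbol to its position
   in a topological order of the graph [(symbs, edges)], where an edge
   [(s, t)] means that [s] must come before [t].
   @raise Not_DAG if the graph has a cycle. *)
let topo_sort (symbs : string list) (edges : (string * string) list) :
    int StrMap.t =
  let succs s =
    List.filter_map (fun (x, y) -> if x = s then Some y else None) edges
  in
  (* [visit path acc s] prepends [s] to [acc] once all its descendants are
     in [acc]; [path] is the current DFS path, used for cycle detection. *)
  let rec visit path acc s =
    if List.mem s path then raise Not_DAG
    else if List.mem s acc then acc
    else s :: List.fold_left (visit (s :: path)) acc (succs s)
  in
  let order = List.fold_left (visit []) [] symbs in
  (* [order] lists every symbol before its successors, so number from 0. *)
  fst (List.fold_left (fun (map, i) s -> (StrMap.add s i map, i + 1))
         (StrMap.empty, 0) order)
```

A linear-time version would replace the list membership tests by a marking table, but the shape of the documentation is the point here.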

| true, false -> 1
| false, true -> -1
| false, false -> Pervasives.compare s1.sym_name s2.sym_name
with Not_DAG ->
Member

This exception is raised by topo_sort only, right? If so, you should instead write:

let ord = try topo_sort ... with Not_DAG -> Pervasives.compare in fun s1 s2 -> ....

| _, Abst _ -> assert false
| Appl (t1, u1), Appl (t2, u2) -> unif ((t1, t2) :: (u1, u2) :: eqs)
| Meta (m1, ts1), Meta (_, ts2) ->
assert (ts1 = [||] && ts2 = [||]);
Member

replace by | Meta(m1,[||]), Meta(_,[||]) ->

Member

Why do you have this special case? Why can't this be done by the following case?

| (t2, Meta (m1, ts1)) ->
if occurs m1 t2 then raise Unsolvable
else
assert (ts1 = [||]);
Member

Should be in the pattern.

assert (ts1 = [||]);
let vs = Bindlib.new_mvar mkfree [||] in
let bt2 = Bindlib.bind_mvar vs (lift t2) in
set_meta m1 (Bindlib.unbox bt2);
Member

Can't this be simplified/optimized @rlepigre ?

Member

@rlepigre: In addition, as references are timed, and all these metas and meta substitutions are only used for checking subject reduction, shouldn't we throw all these references away after subject reduction, using save/restore, to limit memory usage? Idem for the TRef instantiations used in eq when calling the rewrite tactic.

Contributor

We could maybe avoid a lifting/unboxing here. Is the term t2 guaranteed to be closed here?

For the second point, I don't think this would make much difference. The memory is collected for every undo that is not reachable from a reachable saved time, so I would not worry too much about that.


(** [unif_aux t1 t2 eqs] attempts to solve the first-order unification
problem [(t1 = t2) :: eqs]. *)
let rec unif_aux : term -> term -> unif_constrs -> unit = fun t1 t2 eqs ->
Member

Should be in basics.

src/handle.ml Outdated
in
let rs = List.map handle_rule rs in
(* Adding the rules all at once. *)
let add_rule (s,h,r) =
out 3 "(rule) %a\n" Print.pp_rule (s,h,r.elt);
Sign.add_rule ss.signature s r.elt
in
List.iter add_rule rs; (ss, None)
List.iter add_rule rs;
List.iter (Sr.check_rule ss.builtins) rs; (ss, None)
Member

SR must be checked BEFORE adding the rules. So these modifications should be reverted.

@wujuihsuan2016 (Author), Jun 5, 2019

When we check SR for the rule fl -> r, we should consider all the rules defining f. This is why SR should be checked in this way.

Member

I don't understand. Checking SR for a rule or set of rules should not depend on the rules themselves.

src/handle.ml Outdated
@@ -208,6 +209,7 @@ let handle_cmd : sig_state -> p_command -> sig_state * proof_data option =
wrn cmd.pos "Proof aborted."; ss
| P_proof_admit ->
(* If the proof is finished, display a warning. *)

Member

remove this useless modification to minimize the diff

src/sr.ml Outdated
open Print
open Extra
open Terms
Member

Minimize the diff

src/sr.ml Outdated
Meta(m,[||])
*)
let sr_not_preserved_msg (s, h, r) =
fatal r.pos "Rule [%a] does not preserve typing." pp_rule (s, h, r.elt)
Member

does not -> may not

(** [check_rule symbs builtins r] checks whether rule [r] is well-typed. The
program fails gracefully in case of error. *)
let check_rule :
sym StrMap.t -> sym * pp_hint * rule Pos.loc -> unit
Member

Is the pp_hint argument useful?

src/unif.ml (outdated, resolved)
@fblanqui (Member)

fblanqui commented Jun 5, 2019

make sanity_check fails. Please fix it.

@fblanqui (Member)

fblanqui commented Apr 5, 2020

This PR is probably too old now. It should be completely rewritten after #328.
