Fix latex for github rendering
Shon Feder committed Jul 19, 2023
1 parent 1903023 commit 01b0c2e
Showing 2 changed files with 21 additions and 21 deletions.
doc/adr005-type-system.md (2 changes: 1 addition & 1 deletion)
@@ -123,7 +123,7 @@ substitution to the context and generate constraints for the result expression.
Γ ⊢ val n = e1 { e2 }: (t2, s(c1 ∧ c2))
```

- Whenever we fetch a type scheme from the constant, any quantified variables are
+ Whenever we fetch a type scheme from the context, any quantified variables are
replaced with fresh type variables. The class state is used to track free type
variables in the context, and operator signatures always assume quantification
of all names that are not free in the context.
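
For example (a hypothetical identity operator), each use of a polymorphic operator gets its own fresh instantiation of the quantified variable, so the two applications below constrain independent type variables:

```
def id(x) = x          // inferred scheme: ∀t . (t) => t

val a = id(1)          // instantiated here at (int) => int, with a fresh variable
val b = id("hello")    // instantiated here at (str) => str, with another fresh variable
```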
doc/rfc001-sum-types.md (40 changes: 20 additions & 20 deletions)
@@ -168,8 +168,8 @@ operators](#drop-the-exotic-operators). See the discussion in

$$
\frac
- { \Gamma \vdash e \colon (t, c) \quad \Gamma \vdash `l` \colon str \quad fresh(s) }
- { \Gamma \vdash \ `l` \cdot e \ \colon (s, c \land s \sim \{ \ l \colon t | tail\_s \ \}) }
+ { \Gamma \vdash e \colon (t, c) \quad \Gamma \vdash \`l\` \colon str \quad fresh(s) }
+ { \Gamma \vdash \ \`l\` \cdot e \ \colon (s, c \land s \sim \\{ \ l \colon t | tail\_s \ \\}) }
$$
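
For a concrete instance (the label `foo` and the literal `1` are chosen only for illustration), injecting `1` under `foo` yields a fresh $s$ constrained to an open row containing $foo \colon int$:

$$
\frac
{ \Gamma \vdash 1 \colon (int, c) \quad \Gamma \vdash \`foo\` \colon str \quad fresh(s) }
{ \Gamma \vdash \ \`foo\` \cdot 1 \ \colon (s, c \land s \sim \\{ \ foo \colon int | tail\_s \ \\}) }
$$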

#### Elimination
@@ -271,7 +271,7 @@ with each $t_k$-typed field indexed by label $i_k$ for $1 \le k \le n$ and the
free row variable $v$, then

$$
- \{i_1 : t_1 \ , \ldots \ , i_n : i_n | v\}
+ \\{i_1 : t_1 \ , \ldots \ , i_n : i_n | v\\}
$$

is an open record conjoining the fields, and
@@ -340,7 +340,7 @@ given as

$$
\frac
- { \Gamma \vdash e \colon \{ i_1 \hookrightarrow \tau_1, \ \ldots, \ i_n \hookrightarrow \tau_n \} \quad (1 \le k \le n)}
+ { \Gamma \vdash e \colon \\{ i_1 \hookrightarrow \tau_1, \ \ldots, \ i_n \hookrightarrow \tau_n \\} \quad (1 \le k \le n)}
{ \Gamma \vdash e.i_k \colon \tau_k }
$$
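
Instantiated for a hypothetical two-field record (with labels $a$ and $b$ and $k = 1$), the rule reads:

$$
\frac
{ \Gamma \vdash e \colon \\{ a \hookrightarrow int, \ b \hookrightarrow str \\} \quad (k = 1) }
{ \Gamma \vdash e.a \colon int }
$$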

@@ -359,8 +359,8 @@ In our current system, typechecking the projection of a value out of a record

$$
\frac
- { \Gamma \vdash e \colon (r, c) \quad \Gamma \vdash `l` \colon str \quad fresh(t) }
- { \Gamma \vdash \ field(e, `l`) \ \colon (t, c \land r \sim \{ \ l \colon t | tail\_t \ \}) }
+ { \Gamma \vdash e \colon (r, c) \quad \Gamma \vdash \`l\` \colon str \quad fresh(t) }
+ { \Gamma \vdash \ field(e, \`l\`) \ \colon (t, c \land r \sim \\{ \ l \colon t | tail\_t \ \\}) }
$$

where
@@ -369,13 +369,13 @@ where
$\Gamma \vdash e : (t, c)$ means that, in the typing context $\Gamma$,
expression $e$ can be derived with type $t$ under constraints $c$,
- $fresh(t)$ is a side condition requiring the type variable $t$ to be fresh in $\Gamma$,
- - $`l`$ is a string literal with the internal representation $l$,
+ - $\`l\`$ is a string literal with the internal representation $l$,
- $c$ are the constraints derived for the type $r$,
- $tail\_t$ is a free row-variable constructed by prefixing the fresh variable $t$
with "tail",
- - $\{ \ l \colon t | tail\_t \ \}$ is the open row-based record type with field,
+ - $\\{ \ l \colon t | tail\_t \ \\}$ is the open row-based record type with field,
  $l$ assigned type $t$ and the row tail left as a free row-variable,
- - and $r \sim \{ \ l \colon t | tail\_t \ \}$ is a unification constraint.
+ - and $r \sim \\{ \ l \colon t | tail\_t \ \\}$ is a unification constraint.
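
For example (the record type here is hypothetical), if $e$ is later found to have type $\\{ \ name \colon str, \ age \colon int \ \\}$, solving the unification constraint determines both the projected type and the row tail:

$$
\\{ \ name \colon str, \ age \colon int \ \\} \sim \\{ \ name \colon t | tail\_t \ \\}
\quad \Rightarrow \quad
t = str \ , \quad tail\_t = \\{ \ age \colon int \ \\}
$$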

Comparing the textbook rule with the rule in our system helps make the
particular qualities and idiosyncrasies of our system very clear.
@@ -384,7 +384,7 @@ The most critical difference with respect to the complexity of the typing rules derives
from the fact that our system subordinates construction and elimination of
records to language-level operator application rather than implementing them
via special constructs that work with product indexes (labels) directly. This
- is what necessitates the consideration of the string literal $`l`$ in our
+ is what necessitates the consideration of the string literal $\`l\`$ in our
premise. In our rule for type checking record projections we "lift" quint
expressions (string literals for records and ints for products) into product
indexes.
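
Concretely, this lifting means the label travels through type checking as an ordinary quint string literal. A rough sketch with hypothetical names (the internal form is abbreviated):

```
// surface syntax              internal application seen by the type checker
r.name                         field(r, "name")
{ name: "alice", age: 3 }      Rec("name", "alice", "age", 3)
```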
@@ -438,8 +438,8 @@ projection out from a product:

$$
\frac
- { \Gamma \vdash e \colon (t, c) \quad \Gamma \vdash `l` \colon str \quad fresh(s) }
- { \Gamma \vdash \ inj(`l`, e) \ \colon (s, c \land s \sim \{ \ l \colon t | tail\_s \ \}) }
+ { \Gamma \vdash e \colon (t, c) \quad \Gamma \vdash \`l\` \colon str \quad fresh(s) }
+ { \Gamma \vdash \ inj(\`l\`, e) \ \colon (s, c \land s \sim \\{ \ l \colon t | tail\_s \ \\}) }
$$
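
As a concrete instance (the variant label `Some` is hypothetical here), injecting the literal `1` under `Some` produces a fresh $s$ constrained to an open row carrying $Some \colon int$:

$$
\frac
{ \Gamma \vdash 1 \colon (int, c) \quad \Gamma \vdash \`Some\` \colon str \quad fresh(s) }
{ \Gamma \vdash \ inj(\`Some\`, 1) \ \colon (s, c \land s \sim \\{ \ Some \colon int | tail\_s \ \\}) }
$$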

I here retain our current practice of using quint string literals along with an
@@ -472,22 +472,22 @@ A textbook introduction rule for finite products is given as
$$
\frac
{ \Gamma \vdash e_1 \colon \tau_1 \quad \ldots \quad \Gamma \vdash e_n \colon \tau_n }
- { \Gamma \vdash \{ i_1 \hookrightarrow e_1, \ldots, i_n \hookrightarrow e_n \} \colon \{ i_1 \hookrightarrow \tau_1, \ldots, i_n \hookrightarrow \tau_n \} }
+ { \Gamma \vdash \\{ i_1 \hookrightarrow e_1, \ldots, i_n \hookrightarrow e_n \\} \colon \\{ i_1 \hookrightarrow \tau_1, \ldots, i_n \hookrightarrow \tau_n \\} }
$$

This tells us that for any expressions $e_1 : \tau_1, \ldots, e_n : \tau_n$
derivable from our context we can form a product that indexes those
$n$ expressions by the distinct labels $i_1, \ldots, i_n$, and packages all the data
- together in an expression of type $\{ i_1 \hookrightarrow \tau_1, \ldots,
- i_n \hookrightarrow \tau_n \}$. If we're given all the things of the needed
+ together in an expression of type $\\{ i_1 \hookrightarrow \tau_1, \ldots,
+ i_n \hookrightarrow \tau_n \\}$. If we're given all the things of the needed
types, we can conjoin them all together into one compound package.
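
As a small hypothetical instance with $n = 2$, packaging an $int$ and a $bool$ under the labels $a$ and $b$:

$$
\frac
{ \Gamma \vdash 3 \colon int \quad \Gamma \vdash true \colon bool }
{ \Gamma \vdash \\{ a \hookrightarrow 3, \ b \hookrightarrow true \\} \colon \\{ a \hookrightarrow int, \ b \hookrightarrow bool \\} }
$$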

The following rule describes our current implementation:

$$
\frac
{ \Gamma \vdash (`i_1`, e_1 \colon (t_1, c_1)) \quad \ldots \quad \Gamma \vdash (`i_n`, e_n \colon (t_n, c_n)) \quad fresh(s) }
- { \Gamma \vdash Rec(`i_1`, e_1, \ldots, `i_n`, e_n) \ \colon \ (s, c_1 \land \ldots \land c_n \land s \sim \{ i_1 \colon t_1, \ldots, i_n \colon t_n \} }
+ { \Gamma \vdash Rec(`i_1`, e_1, \ldots, `i_n`, e_n) \ \colon \ (s, c_1 \land \ldots \land c_n \land s \sim \\{ i_1 \colon t_1, \ldots, i_n \colon t_n \\} }
$$
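
Instantiated for the same hypothetical two-field record, built by `Rec` from alternating string literals and values:

$$
\frac
{ \Gamma \vdash (\`a\`, 3 \colon (int, c_1)) \quad \Gamma \vdash (\`b\`, true \colon (bool, c_2)) \quad fresh(s) }
{ \Gamma \vdash Rec(\`a\`, 3, \`b\`, true) \ \colon \ (s, c_1 \land c_2 \land s \sim \\{ a \colon int, \ b \colon bool \\}) }
$$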

The requirement that our labels show up in the premise as quint strings paired
@@ -525,7 +525,7 @@ i_n \hookrightarrow e_n \rangle
\quad
\Gamma, x_n \vdash e_n \colon \tau
}
- { \Gamma \vdash \ case \ e \ \{ i_1 \cdot x_1 \hookrightarrow e_1 | \ldots | i_n \cdot x_n \hookrightarrow e_n \}}
+ { \Gamma \vdash \ case \ e \ \\{ i_1 \cdot x_1 \hookrightarrow e_1 | \ldots | i_n \cdot x_n \hookrightarrow e_n \\}}
$$


@@ -670,7 +670,7 @@ NOTE: Loss of consistency in declaration vs. construction/elimination.
As is, I think these are not worth the complexity and overhead.

Compare our rule with the projection operation from "Extensible
- Records with Scoped Labels", which does not receive the label `l` as a string,
+ Records with Scoped Labels", which does not receive the label \`l\` as a string,
instead treating it as a special piece of syntax:

```
@@ -698,15 +698,15 @@ a rule like the following for record projection:
$$
\frac
{ \Gamma \vdash e \colon (r, c) \quad fresh(t) }
- { \Gamma \vdash \ e.l \ \colon (t, c \land r \sim \{ \ l \colon t | tail\_t \ \}) }
+ { \Gamma \vdash \ e.l \ \colon (t, c \land r \sim \\{ \ l \colon t | tail\_t \ \\}) }
$$

This would allow removing the checks for string literals, instead leaving that
to the outer, syntactic level of our static analysis. A similar
simplification would follow for record construction: the rule for `Rec` would
not need to validate that it had received an even number of arguments of
alternating string literals and values, since this would be statically
- guaranteed by the parsing rules for the $\{ l_1 : v_1, \ldots, l_n : v_n \}$
+ guaranteed by the parsing rules for the $\\{ l_1 : v_1, \ldots, l_n : v_n \\}$
syntax. This would be a case of opting for the ["Parse, don't validate"][parse]
strategy.
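
For instance (module and value names are hypothetical), under that approach the grammar alone would guarantee well-formed record construction and projection, with no string-literal checks left to the type checker:

```
module example {
  // The record literal and the projection below are guaranteed well formed by
  // the parser, so the typing rules never need to validate string-literal labels.
  val user = { name: "alice", age: 3 }
  val userName = user.name
}
```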

