diff --git a/.local/syncdesign.sh b/.local/syncdesign.sh
new file mode 100755
index 0000000..a21862e
--- /dev/null
+++ b/.local/syncdesign.sh
@@ -0,0 +1,3 @@
+#!/usr/bin/env bash
+
+rsync -rh --progress /home/jeleniel/obsidian/Master/ba-Projects/EpilogLite/sql_syntax/* "$(dirname "$0")/../design/sql_syntax/"
diff --git a/LICENSE.md b/LICENSE.md
index 4dfedd3..a12bb05 100644
--- a/LICENSE.md
+++ b/LICENSE.md
@@ -2,76 +2,31 @@
Version 3, 29 June 2007
-Copyright (C) 2007 Free Software Foundation, Inc.
-
+Copyright (C) 2007 Free Software Foundation, Inc.
-Everyone is permitted to copy and distribute verbatim copies of this
-license document, but changing it is not allowed.
+Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
## Preamble
-The GNU General Public License is a free, copyleft license for
-software and other kinds of works.
-
-The licenses for most software and other practical works are designed
-to take away your freedom to share and change the works. By contrast,
-the GNU General Public License is intended to guarantee your freedom
-to share and change all versions of a program--to make sure it remains
-free software for all its users. We, the Free Software Foundation, use
-the GNU General Public License for most of our software; it applies
-also to any other work released this way by its authors. You can apply
-it to your programs, too.
-
-When we speak of free software, we are referring to freedom, not
-price. Our General Public Licenses are designed to make sure that you
-have the freedom to distribute copies of free software (and charge for
-them if you wish), that you receive source code or can get it if you
-want it, that you can change the software or use pieces of it in new
-free programs, and that you know you can do these things.
-
-To protect your rights, we need to prevent others from denying you
-these rights or asking you to surrender the rights. Therefore, you
-have certain responsibilities if you distribute copies of the
-software, or if you modify it: responsibilities to respect the freedom
-of others.
-
-For example, if you distribute copies of such a program, whether
-gratis or for a fee, you must pass on to the recipients the same
-freedoms that you received. You must make sure that they, too, receive
-or can get the source code. And you must show them these terms so they
-know their rights.
-
-Developers that use the GNU GPL protect your rights with two steps:
-(1) assert copyright on the software, and (2) offer you this License
-giving you legal permission to copy, distribute and/or modify it.
-
-For the developers' and authors' protection, the GPL clearly explains
-that there is no warranty for this free software. For both users' and
-authors' sake, the GPL requires that modified versions be marked as
-changed, so that their problems will not be attributed erroneously to
-authors of previous versions.
-
-Some devices are designed to deny users access to install or run
-modified versions of the software inside them, although the
-manufacturer can do so. This is fundamentally incompatible with the
-aim of protecting users' freedom to change the software. The
-systematic pattern of such abuse occurs in the area of products for
-individuals to use, which is precisely where it is most unacceptable.
-Therefore, we have designed this version of the GPL to prohibit the
-practice for those products. If such problems arise substantially in
-other domains, we stand ready to extend this provision to those
-domains in future versions of the GPL, as needed to protect the
-freedom of users.
-
-Finally, every program is threatened constantly by software patents.
-States should not allow patents to restrict development and use of
-software on general-purpose computers, but in those that do, we wish
-to avoid the special danger that patents applied to a free program
-could make it effectively proprietary. To prevent this, the GPL
-assures that patents cannot be used to render the program non-free.
-
-The precise terms and conditions for copying, distribution and
-modification follow.
+The GNU General Public License is a free, copyleft license for software and other kinds of works.
+
+The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
+
+When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
+
+To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
+
+For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
+
+Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
+
+For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
+
+Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
+
+Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
+
+The precise terms and conditions for copying, distribution and modification follow.
## TERMS AND CONDITIONS
@@ -79,556 +34,185 @@ modification follow.
"This License" refers to version 3 of the GNU General Public License.
-"Copyright" also means copyright-like laws that apply to other kinds
-of works, such as semiconductor masks.
-
-"The Program" refers to any copyrightable work licensed under this
-License. Each licensee is addressed as "you". "Licensees" and
-"recipients" may be individuals or organizations.
-
-To "modify" a work means to copy from or adapt all or part of the work
-in a fashion requiring copyright permission, other than the making of
-an exact copy. The resulting work is called a "modified version" of
-the earlier work or a work "based on" the earlier work.
-
-A "covered work" means either the unmodified Program or a work based
-on the Program.
-
-To "propagate" a work means to do anything with it that, without
-permission, would make you directly or secondarily liable for
-infringement under applicable copyright law, except executing it on a
-computer or modifying a private copy. Propagation includes copying,
-distribution (with or without modification), making available to the
-public, and in some countries other activities as well.
-
-To "convey" a work means any kind of propagation that enables other
-parties to make or receive copies. Mere interaction with a user
-through a computer network, with no transfer of a copy, is not
-conveying.
-
-An interactive user interface displays "Appropriate Legal Notices" to
-the extent that it includes a convenient and prominently visible
-feature that (1) displays an appropriate copyright notice, and (2)
-tells the user that there is no warranty for the work (except to the
-extent that warranties are provided), that licensees may convey the
-work under this License, and how to view a copy of this License. If
-the interface presents a list of user commands or options, such as a
-menu, a prominent item in the list meets this criterion.
+"Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
+
+"The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations.
+
+To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work.
+
+A "covered work" means either the unmodified Program or a work based on the Program.
+
+To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
+
+To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
+
+An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
### 1. Source Code.
-The "source code" for a work means the preferred form of the work for
-making modifications to it. "Object code" means any non-source form of
-a work.
-
-A "Standard Interface" means an interface that either is an official
-standard defined by a recognized standards body, or, in the case of
-interfaces specified for a particular programming language, one that
-is widely used among developers working in that language.
-
-The "System Libraries" of an executable work include anything, other
-than the work as a whole, that (a) is included in the normal form of
-packaging a Major Component, but which is not part of that Major
-Component, and (b) serves only to enable use of the work with that
-Major Component, or to implement a Standard Interface for which an
-implementation is available to the public in source code form. A
-"Major Component", in this context, means a major essential component
-(kernel, window system, and so on) of the specific operating system
-(if any) on which the executable work runs, or a compiler used to
-produce the work, or an object code interpreter used to run it.
-
-The "Corresponding Source" for a work in object code form means all
-the source code needed to generate, install, and (for an executable
-work) run the object code and to modify the work, including scripts to
-control those activities. However, it does not include the work's
-System Libraries, or general-purpose tools or generally available free
-programs which are used unmodified in performing those activities but
-which are not part of the work. For example, Corresponding Source
-includes interface definition files associated with source files for
-the work, and the source code for shared libraries and dynamically
-linked subprograms that the work is specifically designed to require,
-such as by intimate data communication or control flow between those
-subprograms and other parts of the work.
-
-The Corresponding Source need not include anything that users can
-regenerate automatically from other parts of the Corresponding Source.
-
-The Corresponding Source for a work in source code form is that same
-work.
+The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work.
+
+A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
+
+The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
+
+The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
+
+The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
+
+The Corresponding Source for a work in source code form is that same work.
### 2. Basic Permissions.
-All rights granted under this License are granted for the term of
-copyright on the Program, and are irrevocable provided the stated
-conditions are met. This License explicitly affirms your unlimited
-permission to run the unmodified Program. The output from running a
-covered work is covered by this License only if the output, given its
-content, constitutes a covered work. This License acknowledges your
-rights of fair use or other equivalent, as provided by copyright law.
-
-You may make, run and propagate covered works that you do not convey,
-without conditions so long as your license otherwise remains in force.
-You may convey covered works to others for the sole purpose of having
-them make modifications exclusively for you, or provide you with
-facilities for running those works, provided that you comply with the
-terms of this License in conveying all material for which you do not
-control copyright. Those thus making or running the covered works for
-you must do so exclusively on your behalf, under your direction and
-control, on terms that prohibit them from making any copies of your
-copyrighted material outside their relationship with you.
-
-Conveying under any other circumstances is permitted solely under the
-conditions stated below. Sublicensing is not allowed; section 10 makes
-it unnecessary.
+All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
+
+You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
+
+Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
### 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
-No covered work shall be deemed part of an effective technological
-measure under any applicable law fulfilling obligations under article
-11 of the WIPO copyright treaty adopted on 20 December 1996, or
-similar laws prohibiting or restricting circumvention of such
-measures.
+No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
-When you convey a covered work, you waive any legal power to forbid
-circumvention of technological measures to the extent such
-circumvention is effected by exercising rights under this License with
-respect to the covered work, and you disclaim any intention to limit
-operation or modification of the work as a means of enforcing, against
-the work's users, your or third parties' legal rights to forbid
-circumvention of technological measures.
+When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
### 4. Conveying Verbatim Copies.
-You may convey verbatim copies of the Program's source code as you
-receive it, in any medium, provided that you conspicuously and
-appropriately publish on each copy an appropriate copyright notice;
-keep intact all notices stating that this License and any
-non-permissive terms added in accord with section 7 apply to the code;
-keep intact all notices of the absence of any warranty; and give all
-recipients a copy of this License along with the Program.
+You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
-You may charge any price or no price for each copy that you convey,
-and you may offer support or warranty protection for a fee.
+You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
### 5. Conveying Modified Source Versions.
-You may convey a work based on the Program, or the modifications to
-produce it from the Program, in the form of source code under the
-terms of section 4, provided that you also meet all of these
-conditions:
-
-- a) The work must carry prominent notices stating that you modified
- it, and giving a relevant date.
-- b) The work must carry prominent notices stating that it is
- released under this License and any conditions added under
- section 7. This requirement modifies the requirement in section 4
- to "keep intact all notices".
-- c) You must license the entire work, as a whole, under this
- License to anyone who comes into possession of a copy. This
- License will therefore apply, along with any applicable section 7
- additional terms, to the whole of the work, and all its parts,
- regardless of how they are packaged. This License gives no
- permission to license the work in any other way, but it does not
- invalidate such permission if you have separately received it.
-- d) If the work has interactive user interfaces, each must display
- Appropriate Legal Notices; however, if the Program has interactive
- interfaces that do not display Appropriate Legal Notices, your
- work need not make them do so.
-
-A compilation of a covered work with other separate and independent
-works, which are not by their nature extensions of the covered work,
-and which are not combined with it such as to form a larger program,
-in or on a volume of a storage or distribution medium, is called an
-"aggregate" if the compilation and its resulting copyright are not
-used to limit the access or legal rights of the compilation's users
-beyond what the individual works permit. Inclusion of a covered work
-in an aggregate does not cause this License to apply to the other
-parts of the aggregate.
+You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
+
+- a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
+- b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices".
+- c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
+- d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
+
+A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
### 6. Conveying Non-Source Forms.
-You may convey a covered work in object code form under the terms of
-sections 4 and 5, provided that you also convey the machine-readable
-Corresponding Source under the terms of this License, in one of these
-ways:
-
-- a) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by the
- Corresponding Source fixed on a durable physical medium
- customarily used for software interchange.
-- b) Convey the object code in, or embodied in, a physical product
- (including a physical distribution medium), accompanied by a
- written offer, valid for at least three years and valid for as
- long as you offer spare parts or customer support for that product
- model, to give anyone who possesses the object code either (1) a
- copy of the Corresponding Source for all the software in the
- product that is covered by this License, on a durable physical
- medium customarily used for software interchange, for a price no
- more than your reasonable cost of physically performing this
- conveying of source, or (2) access to copy the Corresponding
- Source from a network server at no charge.
-- c) Convey individual copies of the object code with a copy of the
- written offer to provide the Corresponding Source. This
- alternative is allowed only occasionally and noncommercially, and
- only if you received the object code with such an offer, in accord
- with subsection 6b.
-- d) Convey the object code by offering access from a designated
- place (gratis or for a charge), and offer equivalent access to the
- Corresponding Source in the same way through the same place at no
- further charge. You need not require recipients to copy the
- Corresponding Source along with the object code. If the place to
- copy the object code is a network server, the Corresponding Source
- may be on a different server (operated by you or a third party)
- that supports equivalent copying facilities, provided you maintain
- clear directions next to the object code saying where to find the
- Corresponding Source. Regardless of what server hosts the
- Corresponding Source, you remain obligated to ensure that it is
- available for as long as needed to satisfy these requirements.
-- e) Convey the object code using peer-to-peer transmission,
- provided you inform other peers where the object code and
- Corresponding Source of the work are being offered to the general
- public at no charge under subsection 6d.
-
-A separable portion of the object code, whose source code is excluded
-from the Corresponding Source as a System Library, need not be
-included in conveying the object code work.
-
-A "User Product" is either (1) a "consumer product", which means any
-tangible personal property which is normally used for personal,
-family, or household purposes, or (2) anything designed or sold for
-incorporation into a dwelling. In determining whether a product is a
-consumer product, doubtful cases shall be resolved in favor of
-coverage. For a particular product received by a particular user,
-"normally used" refers to a typical or common use of that class of
-product, regardless of the status of the particular user or of the way
-in which the particular user actually uses, or expects or is expected
-to use, the product. A product is a consumer product regardless of
-whether the product has substantial commercial, industrial or
-non-consumer uses, unless such uses represent the only significant
-mode of use of the product.
-
-"Installation Information" for a User Product means any methods,
-procedures, authorization keys, or other information required to
-install and execute modified versions of a covered work in that User
-Product from a modified version of its Corresponding Source. The
-information must suffice to ensure that the continued functioning of
-the modified object code is in no case prevented or interfered with
-solely because modification has been made.
-
-If you convey an object code work under this section in, or with, or
-specifically for use in, a User Product, and the conveying occurs as
-part of a transaction in which the right of possession and use of the
-User Product is transferred to the recipient in perpetuity or for a
-fixed term (regardless of how the transaction is characterized), the
-Corresponding Source conveyed under this section must be accompanied
-by the Installation Information. But this requirement does not apply
-if neither you nor any third party retains the ability to install
-modified object code on the User Product (for example, the work has
-been installed in ROM).
-
-The requirement to provide Installation Information does not include a
-requirement to continue to provide support service, warranty, or
-updates for a work that has been modified or installed by the
-recipient, or for the User Product in which it has been modified or
-installed. Access to a network may be denied when the modification
-itself materially and adversely affects the operation of the network
-or violates the rules and protocols for communication across the
-network.
-
-Corresponding Source conveyed, and Installation Information provided,
-in accord with this section must be in a format that is publicly
-documented (and with an implementation available to the public in
-source code form), and must require no special password or key for
-unpacking, reading or copying.
+You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
+
+- a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
+- b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
+- c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
+- d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
+- e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
+
+A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
+
+A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
+
+"Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
+
+If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
+
+The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
+
+Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
### 7. Additional Terms.
-"Additional permissions" are terms that supplement the terms of this
-License by making exceptions from one or more of its conditions.
-Additional permissions that are applicable to the entire Program shall
-be treated as though they were included in this License, to the extent
-that they are valid under applicable law. If additional permissions
-apply only to part of the Program, that part may be used separately
-under those permissions, but the entire Program remains governed by
-this License without regard to the additional permissions.
-
-When you convey a copy of a covered work, you may at your option
-remove any additional permissions from that copy, or from any part of
-it. (Additional permissions may be written to require their own
-removal in certain cases when you modify the work.) You may place
-additional permissions on material, added by you to a covered work,
-for which you have or can give appropriate copyright permission.
-
-Notwithstanding any other provision of this License, for material you
-add to a covered work, you may (if authorized by the copyright holders
-of that material) supplement the terms of this License with terms:
-
-- a) Disclaiming warranty or limiting liability differently from the
- terms of sections 15 and 16 of this License; or
-- b) Requiring preservation of specified reasonable legal notices or
- author attributions in that material or in the Appropriate Legal
- Notices displayed by works containing it; or
-- c) Prohibiting misrepresentation of the origin of that material,
- or requiring that modified versions of such material be marked in
- reasonable ways as different from the original version; or
-- d) Limiting the use for publicity purposes of names of licensors
- or authors of the material; or
-- e) Declining to grant rights under trademark law for use of some
- trade names, trademarks, or service marks; or
-- f) Requiring indemnification of licensors and authors of that
- material by anyone who conveys the material (or modified versions
- of it) with contractual assumptions of liability to the recipient,
- for any liability that these contractual assumptions directly
- impose on those licensors and authors.
-
-All other non-permissive additional terms are considered "further
-restrictions" within the meaning of section 10. If the Program as you
-received it, or any part of it, contains a notice stating that it is
-governed by this License along with a term that is a further
-restriction, you may remove that term. If a license document contains
-a further restriction but permits relicensing or conveying under this
-License, you may add to a covered work material governed by the terms
-of that license document, provided that the further restriction does
-not survive such relicensing or conveying.
-
-If you add terms to a covered work in accord with this section, you
-must place, in the relevant source files, a statement of the
-additional terms that apply to those files, or a notice indicating
-where to find the applicable terms.
-
-Additional terms, permissive or non-permissive, may be stated in the
-form of a separately written license, or stated as exceptions; the
-above requirements apply either way.
+"Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
+
+When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
+
+Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
+
+- a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
+- b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
+- c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
+- d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
+- e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
+- f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
+
+All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
+
+If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
+
+Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
### 8. Termination.
-You may not propagate or modify a covered work except as expressly
-provided under this License. Any attempt otherwise to propagate or
-modify it is void, and will automatically terminate your rights under
-this License (including any patent licenses granted under the third
-paragraph of section 11).
-
-However, if you cease all violation of this License, then your license
-from a particular copyright holder is reinstated (a) provisionally,
-unless and until the copyright holder explicitly and finally
-terminates your license, and (b) permanently, if the copyright holder
-fails to notify you of the violation by some reasonable means prior to
-60 days after the cessation.
-
-Moreover, your license from a particular copyright holder is
-reinstated permanently if the copyright holder notifies you of the
-violation by some reasonable means, this is the first time you have
-received notice of violation of this License (for any work) from that
-copyright holder, and you cure the violation prior to 30 days after
-your receipt of the notice.
-
-Termination of your rights under this section does not terminate the
-licenses of parties who have received copies or rights from you under
-this License. If your rights have been terminated and not permanently
-reinstated, you do not qualify to receive new licenses for the same
-material under section 10.
+You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
+
+However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
+
+Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
+
+Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
### 9. Acceptance Not Required for Having Copies.
-You are not required to accept this License in order to receive or run
-a copy of the Program. Ancillary propagation of a covered work
-occurring solely as a consequence of using peer-to-peer transmission
-to receive a copy likewise does not require acceptance. However,
-nothing other than this License grants you permission to propagate or
-modify any covered work. These actions infringe copyright if you do
-not accept this License. Therefore, by modifying or propagating a
-covered work, you indicate your acceptance of this License to do so.
+You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
### 10. Automatic Licensing of Downstream Recipients.
-Each time you convey a covered work, the recipient automatically
-receives a license from the original licensors, to run, modify and
-propagate that work, subject to this License. You are not responsible
-for enforcing compliance by third parties with this License.
-
-An "entity transaction" is a transaction transferring control of an
-organization, or substantially all assets of one, or subdividing an
-organization, or merging organizations. If propagation of a covered
-work results from an entity transaction, each party to that
-transaction who receives a copy of the work also receives whatever
-licenses to the work the party's predecessor in interest had or could
-give under the previous paragraph, plus a right to possession of the
-Corresponding Source of the work from the predecessor in interest, if
-the predecessor has it or can get it with reasonable efforts.
-
-You may not impose any further restrictions on the exercise of the
-rights granted or affirmed under this License. For example, you may
-not impose a license fee, royalty, or other charge for exercise of
-rights granted under this License, and you may not initiate litigation
-(including a cross-claim or counterclaim in a lawsuit) alleging that
-any patent claim is infringed by making, using, selling, offering for
-sale, or importing the Program or any portion of it.
+Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
+
+An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
+
+You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
### 11. Patents.
-A "contributor" is a copyright holder who authorizes use under this
-License of the Program or a work on which the Program is based. The
-work thus licensed is called the contributor's "contributor version".
-
-A contributor's "essential patent claims" are all patent claims owned
-or controlled by the contributor, whether already acquired or
-hereafter acquired, that would be infringed by some manner, permitted
-by this License, of making, using, or selling its contributor version,
-but do not include claims that would be infringed only as a
-consequence of further modification of the contributor version. For
-purposes of this definition, "control" includes the right to grant
-patent sublicenses in a manner consistent with the requirements of
-this License.
-
-Each contributor grants you a non-exclusive, worldwide, royalty-free
-patent license under the contributor's essential patent claims, to
-make, use, sell, offer for sale, import and otherwise run, modify and
-propagate the contents of its contributor version.
-
-In the following three paragraphs, a "patent license" is any express
-agreement or commitment, however denominated, not to enforce a patent
-(such as an express permission to practice a patent or covenant not to
-sue for patent infringement). To "grant" such a patent license to a
-party means to make such an agreement or commitment not to enforce a
-patent against the party.
-
-If you convey a covered work, knowingly relying on a patent license,
-and the Corresponding Source of the work is not available for anyone
-to copy, free of charge and under the terms of this License, through a
-publicly available network server or other readily accessible means,
-then you must either (1) cause the Corresponding Source to be so
-available, or (2) arrange to deprive yourself of the benefit of the
-patent license for this particular work, or (3) arrange, in a manner
-consistent with the requirements of this License, to extend the patent
-license to downstream recipients. "Knowingly relying" means you have
-actual knowledge that, but for the patent license, your conveying the
-covered work in a country, or your recipient's use of the covered work
-in a country, would infringe one or more identifiable patents in that
-country that you have reason to believe are valid.
-
-If, pursuant to or in connection with a single transaction or
-arrangement, you convey, or propagate by procuring conveyance of, a
-covered work, and grant a patent license to some of the parties
-receiving the covered work authorizing them to use, propagate, modify
-or convey a specific copy of the covered work, then the patent license
-you grant is automatically extended to all recipients of the covered
-work and works based on it.
-
-A patent license is "discriminatory" if it does not include within the
-scope of its coverage, prohibits the exercise of, or is conditioned on
-the non-exercise of one or more of the rights that are specifically
-granted under this License. You may not convey a covered work if you
-are a party to an arrangement with a third party that is in the
-business of distributing software, under which you make payment to the
-third party based on the extent of your activity of conveying the
-work, and under which the third party grants, to any of the parties
-who would receive the covered work from you, a discriminatory patent
-license (a) in connection with copies of the covered work conveyed by
-you (or copies made from those copies), or (b) primarily for and in
-connection with specific products or compilations that contain the
-covered work, unless you entered into that arrangement, or that patent
-license was granted, prior to 28 March 2007.
-
-Nothing in this License shall be construed as excluding or limiting
-any implied license or other defenses to infringement that may
-otherwise be available to you under applicable patent law.
+A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version".
+
+A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
+
+Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
+
+In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
+
+If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
+
+If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
+
+A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
+
+Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
### 12. No Surrender of Others' Freedom.
-If conditions are imposed on you (whether by court order, agreement or
-otherwise) that contradict the conditions of this License, they do not
-excuse you from the conditions of this License. If you cannot convey a
-covered work so as to satisfy simultaneously your obligations under
-this License and any other pertinent obligations, then as a
-consequence you may not convey it at all. For example, if you agree to
-terms that obligate you to collect a royalty for further conveying
-from those to whom you convey the Program, the only way you could
-satisfy both those terms and this License would be to refrain entirely
-from conveying the Program.
+If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
### 13. Use with the GNU Affero General Public License.
-Notwithstanding any other provision of this License, you have
-permission to link or combine any covered work with a work licensed
-under version 3 of the GNU Affero General Public License into a single
-combined work, and to convey the resulting work. The terms of this
-License will continue to apply to the part which is the covered work,
-but the special requirements of the GNU Affero General Public License,
-section 13, concerning interaction through a network will apply to the
-combination as such.
+Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
### 14. Revised Versions of this License.
-The Free Software Foundation may publish revised and/or new versions
-of the GNU General Public License from time to time. Such new versions
-will be similar in spirit to the present version, but may differ in
-detail to address new problems or concerns.
-
-Each version is given a distinguishing version number. If the Program
-specifies that a certain numbered version of the GNU General Public
-License "or any later version" applies to it, you have the option of
-following the terms and conditions either of that numbered version or
-of any later version published by the Free Software Foundation. If the
-Program does not specify a version number of the GNU General Public
-License, you may choose any version ever published by the Free
-Software Foundation.
-
-If the Program specifies that a proxy can decide which future versions
-of the GNU General Public License can be used, that proxy's public
-statement of acceptance of a version permanently authorizes you to
-choose that version for the Program.
-
-Later license versions may give you additional or different
-permissions. However, no additional obligations are imposed on any
-author or copyright holder as a result of your choosing to follow a
-later version.
+The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
+
+If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
+
+Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
### 15. Disclaimer of Warranty.
-THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
-APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
-HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT
-WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND
-PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
-DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR
-CORRECTION.
+THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
### 16. Limitation of Liability.
-IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
-WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR
-CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
-INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES
-ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT
-NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR
-LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM
-TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER
-PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
### 17. Interpretation of Sections 15 and 16.
-If the disclaimer of warranty and limitation of liability provided
-above cannot be given local legal effect according to their terms,
-reviewing courts shall apply local law that most closely approximates
-an absolute waiver of all civil liability in connection with the
-Program, unless a warranty or assumption of liability accompanies a
-copy of the Program in return for a fee.
+If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
## How to Apply These Terms to Your New Programs
-If you develop a new program, and you want it to be of the greatest
-possible use to the public, the best way to achieve this is to make it
-free software which everyone can redistribute and change under these
-terms.
+If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
-To do so, attach the following notices to the program. It is safest to
-attach them to the start of each source file to most effectively state
-the exclusion of warranty; and each file should have at least the
-"copyright" line and a pointer to where the full notice is found.
+To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
Copyright (C)
@@ -646,188 +230,86 @@ the exclusion of warranty; and each file should have at least the
You should have received a copy of the GNU General Public License
along with this program. If not, see .
-Also add information on how to contact you by electronic and paper
-mail.
+Also add information on how to contact you by electronic and paper mail.
-If the program does terminal interaction, make it output a short
-notice like this when it starts in an interactive mode:
+If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
Copyright (C)
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
-The hypothetical commands \`show w' and \`show c' should show the
-appropriate parts of the General Public License. Of course, your
-program's commands might be different; for a GUI interface, you would
-use an "about box".
+The hypothetical commands \`show w' and \`show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box".
-You should also get your employer (if you work as a programmer) or
-school, if any, to sign a "copyright disclaimer" for the program, if
-necessary. For more information on this, and how to apply and follow
-the GNU GPL, see .
+You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>.
-The GNU General Public License does not permit incorporating your
-program into proprietary programs. If your program is a subroutine
-library, you may consider it more useful to permit linking proprietary
-applications with the library. If this is what you want to do, use the
-GNU Lesser General Public License instead of this License. But first,
-please read .
+The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <https://www.gnu.org/philosophy/why-not-lgpl.html>.
# GNU LESSER GENERAL PUBLIC LICENSE
-*Version 3, 29 June 2007*
+_Version 3, 29 June 2007_
-Copyright (C) 2007 Free Software Foundation, Inc.
-
+Copyright (C) 2007 Free Software Foundation, Inc.
-Everyone is permitted to copy and distribute verbatim copies of this
-license document, but changing it is not allowed.
+Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
-This version of the GNU Lesser General Public License incorporates the
-terms and conditions of version 3 of the GNU General Public License,
-supplemented by the additional permissions listed below.
+This version of the GNU Lesser General Public License incorporates the terms and conditions of version 3 of the GNU General Public License, supplemented by the additional permissions listed below.
## 0. Additional Definitions.
-As used herein, "this License" refers to version 3 of the GNU Lesser
-General Public License, and the "GNU GPL" refers to version 3 of the
-GNU General Public License.
+As used herein, "this License" refers to version 3 of the GNU Lesser General Public License, and the "GNU GPL" refers to version 3 of the GNU General Public License.
-"The Library" refers to a covered work governed by this License, other
-than an Application or a Combined Work as defined below.
+"The Library" refers to a covered work governed by this License, other than an Application or a Combined Work as defined below.
-An "Application" is any work that makes use of an interface provided
-by the Library, but which is not otherwise based on the Library.
-Defining a subclass of a class defined by the Library is deemed a mode
-of using an interface provided by the Library.
+An "Application" is any work that makes use of an interface provided by the Library, but which is not otherwise based on the Library. Defining a subclass of a class defined by the Library is deemed a mode of using an interface provided by the Library.
-A "Combined Work" is a work produced by combining or linking an
-Application with the Library. The particular version of the Library
-with which the Combined Work was made is also called the "Linked
-Version".
+A "Combined Work" is a work produced by combining or linking an Application with the Library. The particular version of the Library with which the Combined Work was made is also called the "Linked Version".
-The "Minimal Corresponding Source" for a Combined Work means the
-Corresponding Source for the Combined Work, excluding any source code
-for portions of the Combined Work that, considered in isolation, are
-based on the Application, and not on the Linked Version.
+The "Minimal Corresponding Source" for a Combined Work means the Corresponding Source for the Combined Work, excluding any source code for portions of the Combined Work that, considered in isolation, are based on the Application, and not on the Linked Version.
-The "Corresponding Application Code" for a Combined Work means the
-object code and/or source code for the Application, including any data
-and utility programs needed for reproducing the Combined Work from the
-Application, but excluding the System Libraries of the Combined Work.
+The "Corresponding Application Code" for a Combined Work means the object code and/or source code for the Application, including any data and utility programs needed for reproducing the Combined Work from the Application, but excluding the System Libraries of the Combined Work.
## 1. Exception to Section 3 of the GNU GPL.
-You may convey a covered work under sections 3 and 4 of this License
-without being bound by section 3 of the GNU GPL.
+You may convey a covered work under sections 3 and 4 of this License without being bound by section 3 of the GNU GPL.
## 2. Conveying Modified Versions.
-If you modify a copy of the Library, and, in your modifications, a
-facility refers to a function or data to be supplied by an Application
-that uses the facility (other than as an argument passed when the
-facility is invoked), then you may convey a copy of the modified
-version:
+If you modify a copy of the Library, and, in your modifications, a facility refers to a function or data to be supplied by an Application that uses the facility (other than as an argument passed when the facility is invoked), then you may convey a copy of the modified version:
-- a) under this License, provided that you make a good faith effort
- to ensure that, in the event an Application does not supply the
- function or data, the facility still operates, and performs
- whatever part of its purpose remains meaningful, or
-- b) under the GNU GPL, with none of the additional permissions of
- this License applicable to that copy.
+- a) under this License, provided that you make a good faith effort to ensure that, in the event an Application does not supply the function or data, the facility still operates, and performs whatever part of its purpose remains meaningful, or
+- b) under the GNU GPL, with none of the additional permissions of this License applicable to that copy.
## 3. Object Code Incorporating Material from Library Header Files.
-The object code form of an Application may incorporate material from a
-header file that is part of the Library. You may convey such object
-code under terms of your choice, provided that, if the incorporated
-material is not limited to numerical parameters, data structure
-layouts and accessors, or small macros, inline functions and templates
-(ten or fewer lines in length), you do both of the following:
+The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following:
-- a) Give prominent notice with each copy of the object code that
- the Library is used in it and that the Library and its use are
- covered by this License.
-- b) Accompany the object code with a copy of the GNU GPL and this
- license document.
+- a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License.
+- b) Accompany the object code with a copy of the GNU GPL and this license document.
## 4. Combined Works.
-You may convey a Combined Work under terms of your choice that, taken
-together, effectively do not restrict modification of the portions of
-the Library contained in the Combined Work and reverse engineering for
-debugging such modifications, if you also do each of the following:
-
-- a) Give prominent notice with each copy of the Combined Work that
- the Library is used in it and that the Library and its use are
- covered by this License.
-- b) Accompany the Combined Work with a copy of the GNU GPL and this
- license document.
-- c) For a Combined Work that displays copyright notices during
- execution, include the copyright notice for the Library among
- these notices, as well as a reference directing the user to the
- copies of the GNU GPL and this license document.
+You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following:
+
+- a) Give prominent notice with each copy of the Combined Work that the Library is used in it and that the Library and its use are covered by this License.
+- b) Accompany the Combined Work with a copy of the GNU GPL and this license document.
+- c) For a Combined Work that displays copyright notices during execution, include the copyright notice for the Library among these notices, as well as a reference directing the user to the copies of the GNU GPL and this license document.
- d) Do one of the following:
- - 0) Convey the Minimal Corresponding Source under the terms of
- this License, and the Corresponding Application Code in a form
- suitable for, and under terms that permit, the user to
- recombine or relink the Application with a modified version of
- the Linked Version to produce a modified Combined Work, in the
- manner specified by section 6 of the GNU GPL for conveying
- Corresponding Source.
- - 1) Use a suitable shared library mechanism for linking with
- the Library. A suitable mechanism is one that (a) uses at run
- time a copy of the Library already present on the user's
- computer system, and (b) will operate properly with a modified
- version of the Library that is interface-compatible with the
- Linked Version.
-- e) Provide Installation Information, but only if you would
- otherwise be required to provide such information under section 6
- of the GNU GPL, and only to the extent that such information is
- necessary to install and execute a modified version of the
- Combined Work produced by recombining or relinking the Application
- with a modified version of the Linked Version. (If you use option
- 4d0, the Installation Information must accompany the Minimal
- Corresponding Source and Corresponding Application Code. If you
- use option 4d1, you must provide the Installation Information in
- the manner specified by section 6 of the GNU GPL for conveying
- Corresponding Source.)
+ - 0. Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.
+ - 1. Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version.
+- e) Provide Installation Information, but only if you would otherwise be required to provide such information under section 6 of the GNU GPL, and only to the extent that such information is necessary to install and execute a modified version of the Combined Work produced by recombining or relinking the Application with a modified version of the Linked Version. (If you use option 4d0, the Installation Information must accompany the Minimal Corresponding Source and Corresponding Application Code. If you use option 4d1, you must provide the Installation Information in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.)
## 5. Combined Libraries.
-You may place library facilities that are a work based on the Library
-side by side in a single library together with other library
-facilities that are not Applications and are not covered by this
-License, and convey such a combined library under terms of your
-choice, if you do both of the following:
+You may place library facilities that are a work based on the Library side by side in a single library together with other library facilities that are not Applications and are not covered by this License, and convey such a combined library under terms of your choice, if you do both of the following:
-- a) Accompany the combined library with a copy of the same work
- based on the Library, uncombined with any other library
- facilities, conveyed under the terms of this License.
-- b) Give prominent notice with the combined library that part of it
- is a work based on the Library, and explaining where to find the
- accompanying uncombined form of the same work.
+- a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities, conveyed under the terms of this License.
+- b) Give prominent notice with the combined library that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work.
## 6. Revised Versions of the GNU Lesser General Public License.
-The Free Software Foundation may publish revised and/or new versions
-of the GNU Lesser General Public License from time to time. Such new
-versions will be similar in spirit to the present version, but may
-differ in detail to address new problems or concerns.
-
-Each version is given a distinguishing version number. If the Library
-as you received it specifies that a certain numbered version of the
-GNU Lesser General Public License "or any later version" applies to
-it, you have the option of following the terms and conditions either
-of that published version or of any later version published by the
-Free Software Foundation. If the Library as you received it does not
-specify a version number of the GNU Lesser General Public License, you
-may choose any version of the GNU Lesser General Public License ever
-published by the Free Software Foundation.
-
-If the Library as you received it specifies that a proxy can decide
-whether future versions of the GNU Lesser General Public License shall
-apply, that proxy's public statement of acceptance of any version is
-permanent authorization for you to choose that version for the
-Library.
+The Free Software Foundation may publish revised and/or new versions of the GNU Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Library as you received it specifies that a certain numbered version of the GNU Lesser General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that published version or of any later version published by the Free Software Foundation. If the Library as you received it does not specify a version number of the GNU Lesser General Public License, you may choose any version of the GNU Lesser General Public License ever published by the Free Software Foundation.
+
+If the Library as you received it specifies that a proxy can decide whether future versions of the GNU Lesser General Public License shall apply, that proxy's public statement of acceptance of any version is permanent authorization for you to choose that version for the Library.
diff --git a/README.md b/README.md
index e497ba4..4fc6dde 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,8 @@
# EpilogLite Source Repository
-This repository contains the complete source code for the EpilogLite database engine, including test scripts.
+This repository contains the complete source code for the EpilogLite database engine, including test scripts.
-See the [on-line documentation](https://github.com/jeleniel/epiloglite/wiki) for more information about what EpilogLite is and how it works from a user's perspective. This [README.md](README.md) file is about the source code that goes into building EpilogLite, not about how EpilogLite is used.
+See the [on-line documentation](https://github.com/jeleniel/epiloglite/wiki) for more information about what EpilogLite is and how it works from a user's perspective. This [README.md](README.md) file is about the source code that goes into building EpilogLite, not about how EpilogLite is used.
## Version Control
@@ -14,17 +14,15 @@ Bug reports, enhancement requests, and documentation suggestions can be opened a
The preferred way to ask questions or make comments about EpilogLite is to visit the [EpilogLite Discussions](https://github.com/jeleniel/epiloglite/discussions).
-If you think you have found a bug that has security implications and
-you do not want to report it on the public forum, you can send a private
-email to security at neurodivergentnetworking dot org.
+If you think you have found a bug that has security implications and you do not want to report it on the public forum, you can send a private email to jeleniel at turkeyofman dot com.
## GNU LESSER GENERAL PUBLIC LICENSE
-The EpilogLite source code is released under the GNU Lesser General Public License 3.0 only. See [COPYING.md](COPYING.md) for details.
+The EpilogLite source code is released under the GNU Lesser General Public License 3.0 only. See [LICENSE.md](LICENSE.md) for details.
## Testing and Compiling
-Since this is a Rust application, the normal 'cargo' commands can be used to test or build the application.
+Since this is a Rust application, the normal 'cargo' commands can be used to test or build the application.
To execute the test suite run:
@@ -42,5 +40,4 @@ The compiled binaries will be in the 'target' folder after the build completes.
## How It All Fits Together
-EpilogLite is modular in design.
-See the [architectural description](design/ARCHITECTURE.md) for details. Other documents that are useful in helping to understand how EpilogLite works include the [file format](design/FILEFORMAT.md) description, the [virtual machine](design/VIRTUALMACHINE.md) that runs prepared statements, the description of [how transactions work](design/TRANSACTIONS.md), and the [overview of the query planner](design/QUERYPLANNER.md).
+EpilogLite is modular in design. See the [architectural description](design/ARCHITECTURE.md) for details. Other documents that are useful in helping to understand how EpilogLite works include the [file format](design/FILEFORMAT.md) description, the [virtual machine](design/VIRTUALMACHINE.md) that runs prepared statements, the description of [how transactions work](design/TRANSACTIONS.md), and the [overview of the query planner](design/QUERYPLANNER.md).
diff --git a/design/ARCHITECTURE.md b/design/ARCHITECTURE.md
index 5392321..9b683d8 100644
--- a/design/ARCHITECTURE.md
+++ b/design/ARCHITECTURE.md
@@ -1,5 +1,7 @@
# EpilogLite Architecture
+status: draft
+
## Introduction
EpilogLite is an implementation of the SQLite database library using pure Rust. This document describes the architecture of the EpilogLite library crate. The information here is useful to those who want to understand or modify the inner workings of EpilogLite.
@@ -48,7 +50,7 @@ classDiagram
sqlite3 --> database
database --> processor
-
+
processor --> virtual_machine
processor --> tokenizer
virtual_machine --> btree
@@ -65,7 +67,7 @@ classDiagram
### epiloglite
-The public interface is found in the `epiloglite` module. Functions are generally asynchronous.
+The public interface is found in the `epiloglite` module. Functions are generally asynchronous.
### sqlite
@@ -77,7 +79,7 @@ This module contains the components responsible for parsing and execution od SQL
#### epiloglite::command::processor
-This module coordinates the tokenization, parsing, and execution of SQL statements.
+This module coordinates the tokenization, parsing, and execution of SQL statements.
#### epiloglite::command::tokenizer
@@ -93,7 +95,7 @@ After the semantics have been assigned and a parse tree constructed the code gen
### epiloglite::command::virtual_machine
-The bytecode from the code generator is handed off to a virtual machine to be executed.
+The bytecode from the code generator is handed off to a virtual machine to be executed.
### epiloglite::persistence::btree
@@ -105,11 +107,11 @@ The B-Tree module requests information from the block storage in fixed size page
### epiloglite::os
-In order to provide portability across operating systems EpilogLite uses an abstract Virtual File System ("VFS"). The VFS provides methods for finding, opening, creating, modifying, and closing files on block storage. In addition, the OS Interface provides functions for other OS specific tasks, such as finding the current time, and generating randomness.
+In order to provide portability across operating systems EpilogLite uses an abstract Virtual File System ("VFS"). The VFS provides methods for finding, opening, creating, modifying, and closing files on block storage. In addition, the OS Interface provides functions for other OS specific tasks, such as finding the current time, and generating randomness.
### epiloglite::utility
-Memory allocation, string handling, data type conversion routines, and other utility functions are in the Utilities module.
+Memory allocation, string handling, data type conversion routines, and other utility functions are in the Utilities module.
## Tests
@@ -117,4 +119,4 @@ Tests are implemented in the same file as the components under test, in keeping
```rust
cargo test
-```
\ No newline at end of file
+```
diff --git a/design/FILEFORMAT.md b/design/FILEFORMAT.md
index ea366fc..99e9248 100644
--- a/design/FILEFORMAT.md
+++ b/design/FILEFORMAT.md
@@ -1,5 +1,7 @@
# EpilogLite ("EL") Database Format
+status: draft
+
## Overview
This document describes and defines the database format used by EpilogLite. Because EpilogLite is designed to be drop-in compatible with SQLite ("SL"), it is based on the [SQLite Database File Format 3.0.0](https://www.sqlite.org/fileformat2.html). Whenever EL extends the SL format, backwards compatibility is maintained unless noted otherwise.
diff --git a/design/QUERYPLANNER.md b/design/QUERYPLANNER.md
index 42d80e5..f36c855 100644
--- a/design/QUERYPLANNER.md
+++ b/design/QUERYPLANNER.md
@@ -1,288 +1,325 @@
+# The EpilogLite Query Optimizer Overview
-SQLite
-Small. Fast. Reliable.
-Choose any three.
+status: draft
- Home
- About
- Documentation
- Download
- License
- Support
- Purchase
- Search
+## Introduction
-The SQLite Query Optimizer Overview
-Table Of Contents
-1. Introduction
-
-This document provides an overview of how the query planner and optimizer for SQLite works.
+This document provides an overview of how the query planner and optimizer for EpilogLite works.
Given a single SQL statement, there might be dozens, hundreds, or even thousands of ways to implement that statement, depending on the complexity of the statement itself and of the underlying database schema. The task of the query planner is to select the algorithm that minimizes disk I/O and CPU overhead.
Additional background information is available in the indexing tutorial document. The Next Generation Query Planner document provides more detail on how the join order is chosen.
-2. WHERE Clause Analysis
+
+## WHERE Clause Analysis
Prior to analysis, the following transformations are made to shift all join constraints into the WHERE clause:
- All NATURAL joins are converted into joins with a USING clause.
- All USING clauses (including ones created by the previous step) are converted into equivalent ON clauses.
- All ON clauses (include ones created by the previous step) are added as new conjuncts (AND-connected terms) in the WHERE clause.
+- All NATURAL joins are converted into joins with a USING clause.
+- All USING clauses (including ones created by the previous step) are converted into equivalent ON clauses.
+- All ON clauses (including ones created by the previous step) are added as new conjuncts (AND-connected terms) in the WHERE clause.
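+
+For example, given two hypothetical tables t1(a, b) and t2(b, c) whose only shared column is b, the three steps above conceptually transform a NATURAL join like this:
+
+```sql
+-- Hypothetical tables t1(a, b) and t2(b, c); their only common column is b.
+SELECT * FROM t1 NATURAL JOIN t2 WHERE t1.a = 5;
+
+-- Step 1: the NATURAL join becomes a join with a USING clause:
+--   SELECT * FROM t1 JOIN t2 USING (b) WHERE t1.a = 5;
+-- Step 2: the USING clause becomes an equivalent ON clause:
+--   SELECT * FROM t1 JOIN t2 ON t1.b = t2.b WHERE t1.a = 5;
+-- Step 3: the ON clause constraint is added as a new conjunct of the WHERE clause:
+--   SELECT * FROM t1 JOIN t2 WHERE t1.a = 5 AND t1.b = t2.b;
+```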
-SQLite makes no distinction between join constraints that occur in the WHERE clause and constraints in the ON clause of an inner join, since that distinction does not affect the outcome. However, there is a difference between ON clause constraints and WHERE clause constraints for outer joins. Therefore, when SQLite moves an ON clause constraint from an outer join over to the WHERE clause it adds special tags to the Abstract Syntax Tree (AST) to indicate that the constraint came from an outer join and from which outer join it came. There is no way to add those tags in pure SQL text. Hence, the SQL input must use ON clauses on outer joins. But in the internal AST, all constraints are part of the WHERE clause, because having everything in one place simplifies processing.
+EpilogLite makes no distinction between join constraints that occur in the WHERE clause and constraints in the ON clause of an inner join, since that distinction does not affect the outcome. However, there is a difference between ON clause constraints and WHERE clause constraints for outer joins. Therefore, when EpilogLite moves an ON clause constraint from an outer join over to the WHERE clause it adds special tags to the Abstract Syntax Tree (AST) to indicate that the constraint came from an outer join and from which outer join it came. There is no way to add those tags in pure SQL text. Hence, the SQL input must use ON clauses on outer joins. But in the internal AST, all constraints are part of the WHERE clause, because having everything in one place simplifies processing.
After all constraints have been shifted into the WHERE clause, the WHERE clause is broken up into conjuncts (hereafter called "terms"). In other words, the WHERE clause is broken up into pieces separated from the others by an AND operator. If the WHERE clause is composed of constraints separated by the OR operator (disjuncts) then the entire clause is considered to be a single "term" to which the OR-clause optimization is applied.
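+
+As a hypothetical illustration (reusing the ex1 table from the index example below), the WHERE clause here is split into three terms: `a = 5`, `b IN (1, 2)`, and the parenthesized OR expression, which counts as a single term:
+
+```sql
+-- Three conjuncts ("terms") separated by AND; the parenthesized disjunction
+-- is a single term and is handled by the OR-clause optimization.
+SELECT * FROM ex1 WHERE a = 5 AND b IN (1, 2) AND (c = 3 OR d = 4);
+```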
All terms of the WHERE clause are analyzed to see if they can be satisfied using indexes. To be usable by an index a term must usually be of one of the following forms:
-
- column = expression
- column IS expression
- column > expression
- column >= expression
- column < expression
- column <= expression
- expression = column
- expression IS column
- expression > column
- expression >= column
- expression < column
- expression <= column
- column IN (expression-list)
- column IN (subquery)
- column IS NULL
- column LIKE pattern
- column GLOB pattern
+- `column = expression`
+- `column IS expression`
+- `column > expression`
+- `column >= expression`
+- `column < expression`
+- `column <= expression`
+- `expression = column`
+- `expression IS column`
+- `expression > column`
+- `expression >= column`
+- `expression < column`
+- `expression <= column`
+- `column IN (expression-list)`
+- `column IN (subquery)`
+- `column IS NULL`
+- `column LIKE pattern`
+- `column GLOB pattern`
If an index is created using a statement like this:
+```sql
CREATE INDEX idx_ex1 ON ex1(a,b,c,d,e,...,y,z);
+```
Then the index might be used if the initial columns of the index (columns a, b, and so forth) appear in WHERE clause terms. The initial columns of the index must be used with the = or IN or IS operators. The right-most column that is used can employ inequalities. For the right-most column of an index that is used, there can be up to two inequalities that must sandwich the allowed values of the column between two extremes.
It is not necessary for every column of an index to appear in a WHERE clause term in order for that index to be used. However, there cannot be gaps in the columns of the index that are used. Thus for the example index above, if there is no WHERE clause term that constrains column c, then terms that constrain columns a and b can be used with the index but not terms that constrain columns d through z. Similarly, index columns will not normally be used (for indexing purposes) if they are to the right of a column that is constrained only by inequalities. (See the skip-scan optimization below for the exception.)
In the case of indexes on expressions, whenever the word "column" is used in the foregoing text, one can substitute "indexed expression" (meaning a copy of the expression that appears in the CREATE INDEX statement) and everything will work the same.
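+
+For instance, a hypothetical index on an expression over the ex1 table behaves the same way, provided the WHERE clause term repeats the indexed expression exactly:
+
+```sql
+-- Hypothetical index on an expression:
+CREATE INDEX idx_ex1_lower_a ON ex1(lower(a));
+
+-- A term of this form can use idx_ex1_lower_a, because lower(a) is the indexed expression:
+-- ... WHERE lower(a) = 'hello'
+```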
-2.1. Index Term Usage Examples
+
+### Index Term Usage Examples
For the index above and WHERE clause like this:
+```sql
... WHERE a=5 AND b IN (1,2,3) AND c IS NULL AND d='hello'
+```
The first four columns a, b, c, and d of the index would be usable since those four columns form a prefix of the index and are all bound by equality constraints.
For the index above and WHERE clause like this:
+```sql
... WHERE a=5 AND b IN (1,2,3) AND c>12 AND d='hello'
+```
Only columns a, b, and c of the index would be usable. The d column would not be usable because it occurs to the right of c and c is constrained only by inequalities.
For the index above and WHERE clause like this:
+```sql
... WHERE a=5 AND b IN (1,2,3) AND d='hello'
+```
Only columns a and b of the index would be usable. The d column would not be usable because column c is not constrained and there can be no gaps in the set of columns that are usable by the index.
For the index above and WHERE clause like this:
+```sql
... WHERE b IN (1,2,3) AND c NOT NULL AND d='hello'
+```
The index is not usable at all because the left-most column of the index (column "a") is not constrained. Assuming there are no other indexes, the query above would result in a full table scan.
For the index above and WHERE clause like this:
+```sql
... WHERE a=5 OR b IN (1,2,3) OR c NOT NULL OR d='hello'
+```
The index is not usable because the WHERE clause terms are connected by OR instead of AND. This query would result in a full table scan. However, if three additional indexes were added that contained columns b, c, and d as their left-most columns, then the OR-clause optimization might apply.
-3. The BETWEEN Optimization
-If a term of the WHERE clause is of the following form:
+## The BETWEEN Optimization
+If a term of the WHERE clause is of the following form:
- expr1 BETWEEN expr2 AND expr3
+```sql
+expr1 BETWEEN expr2 AND expr3
+```
Then two "virtual" terms are added as follows:
-
- expr1 >= expr2 AND expr1 <= expr3
+```sql
+expr1 >= expr2 AND expr1 <= expr3
+```
Virtual terms are used for analysis only and do not cause any byte-code to be generated. If both virtual terms end up being used as constraints on an index, then the original BETWEEN term is omitted and the corresponding test is not performed on input rows. Thus if the BETWEEN term ends up being used as an index constraint no tests are ever performed on that term. On the other hand, the virtual terms themselves never cause tests to be performed on input rows. Thus if the BETWEEN term is not used as an index constraint and instead must be used to test input rows, the expr1 expression is only evaluated once.
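+
+As a concrete (hypothetical) example using the idx_ex1 index defined earlier, the BETWEEN term below is analyzed as the two virtual terms `b >= 10 AND b <= 20`, so both columns a and b of the index can be used:
+
+```sql
+-- The equality on a plus the two virtual inequalities on b satisfy the
+-- prefix-plus-right-most-inequality rule for idx_ex1.
+SELECT * FROM ex1 WHERE a = 5 AND b BETWEEN 10 AND 20;
+```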
-4. OR Optimizations
+
+## OR Optimizations
WHERE clause constraints that are connected by OR instead of AND can be handled in two different ways.
-4.1. Converting OR-connected constraint into an IN operator
-If a term consists of multiple subterms containing a common column name and separated by OR, like this:
+### Converting OR-connected constraint into an IN operator
+If a term consists of multiple subterms containing a common column name and separated by OR, like this:
- column = expr1 OR column = expr2 OR column = expr3 OR ...
+```sql
+column = expr1 OR column = expr2 OR column = expr3 OR ...
+```
Then that term is rewritten as follows:
-
- column IN (expr1,expr2,expr3,...)
+```sql
+column IN (expr1,expr2,expr3,...)
+```
The rewritten term then might go on to constrain an index using the normal rules for IN operators. Note that column must be the same column in every OR-connected subterm, although the column can occur on either the left or the right side of the = operator.
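+
+A short hypothetical example (a table t with an index on column x): the column may appear on either side of the = operator and the rewrite still applies:
+
+```sql
+-- Hypothetical table t with an index on column x.
+SELECT * FROM t WHERE x = 1 OR 2 = x OR x = 3;
+-- is analyzed as if it were written:
+SELECT * FROM t WHERE x IN (1, 2, 3);
+```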
-4.2. Evaluating OR constraints separately and taking the UNION of the result
-
-If and only if the previously described conversion of OR to an IN operator does not work, the second OR-clause optimization is attempted. Suppose the OR clause consists of multiple subterms as follows:
+### Evaluating OR constraints separately and taking the UNION of the result
- expr1 OR expr2 OR expr3
+If and only if the previously described conversion of OR to an IN operator does not work, the second OR-clause optimization is attempted. Suppose the OR clause consists of multiple subterms as follows:
-Individual subterms might be a single comparison expression like a=5 or x>y or they can be LIKE or BETWEEN expressions, or a subterm can be a parenthesized list of AND-connected sub-subterms. Each subterm is analyzed as if it were itself the entire WHERE clause in order to see if the subterm is indexable by itself. If every subterm of an OR clause is separately indexable then the OR clause might be coded such that a separate index is used to evaluate each term of the OR clause. One way to think about how SQLite uses separate indexes for each OR clause term is to imagine that the WHERE clause where rewritten as follows:
+```sql
+expr1 OR expr2 OR expr3
+```
+Individual subterms might be a single comparison expression like a=5 or x>y or they can be LIKE or BETWEEN expressions, or a subterm can be a parenthesized list of AND-connected sub-subterms. Each subterm is analyzed as if it were itself the entire WHERE clause in order to see if the subterm is indexable by itself. If every subterm of an OR clause is separately indexable then the OR clause might be coded such that a separate index is used to evaluate each term of the OR clause. One way to think about how EpilogLite uses separate indexes for each OR clause term is to imagine that the WHERE clause were rewritten as follows:
- rowid IN (SELECT rowid FROM table WHERE expr1
- UNION SELECT rowid FROM table WHERE expr2
- UNION SELECT rowid FROM table WHERE expr3)
+```sql
+ rowid IN (SELECT rowid FROM table WHERE expr1
+ UNION SELECT rowid FROM table WHERE expr2
+ UNION SELECT rowid FROM table WHERE expr3)
+```
The rewritten expression above is conceptual; WHERE clauses containing OR are not really rewritten this way. The actual implementation of the OR clause uses a mechanism that is more efficient and that works even for WITHOUT ROWID tables or tables in which the "rowid" is inaccessible. Nevertheless, the essence of the implementation is captured by the statement above: Separate indexes are used to find candidate result rows from each OR clause term and the final result is the union of those rows.
-Note that in most cases, SQLite will only use a single index for each table in the FROM clause of a query. The second OR-clause optimization described here is the exception to that rule. With an OR-clause, a different index might be used for each subterm in the OR-clause.
+Note that in most cases, EpilogLite will only use a single index for each table in the FROM clause of a query. The second OR-clause optimization described here is the exception to that rule. With an OR-clause, a different index might be used for each subterm in the OR-clause.
+
+For any given query, the fact that the OR-clause optimization described here can be used does not guarantee that it will be used. EpilogLite uses a cost-based query planner that estimates the CPU and disk I/O costs of various competing query plans and chooses the plan that it thinks will be the fastest. If there are many OR terms in the WHERE clause or if some of the indexes on individual OR-clause subterms are not very selective, then EpilogLite might decide that it is faster to use a different query algorithm, or even a full-table scan. Application developers can use the EXPLAIN QUERY PLAN prefix on a statement to get a high-level overview of the chosen query strategy.
-For any given query, the fact that the OR-clause optimization described here can be used does not guarantee that it will be used. SQLite uses a cost-based query planner that estimates the CPU and disk I/O costs of various competing query plans and chooses the plan that it thinks will be the fastest. If there are many OR terms in the WHERE clause or if some of the indexes on individual OR-clause subterms are not very selective, then SQLite might decide that it is faster to use a different query algorithm, or even a full-table scan. Application developers can use the EXPLAIN QUERY PLAN prefix on a statement to get a high-level overview of the chosen query strategy.
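+
+As a hypothetical check, suppose single-column indexes exist on columns b and d of ex1, one per OR subterm; prefixing the statement with EXPLAIN QUERY PLAN shows whether the planner chose per-subterm index lookups or a full table scan:
+
+```sql
+-- Hypothetical single-column indexes, one per OR subterm:
+CREATE INDEX idx_ex1_b ON ex1(b);
+CREATE INDEX idx_ex1_d ON ex1(d);
+
+-- The planner may probe both indexes and union the candidate rows,
+-- or fall back to a full table scan if that is estimated to be cheaper.
+EXPLAIN QUERY PLAN
+SELECT * FROM ex1 WHERE b = 5 OR d = 'hello';
+```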
-5. The LIKE Optimization
+## The LIKE Optimization
A WHERE-clause term that uses the LIKE or GLOB operator can sometimes be used with an index to do a range search, almost as if the LIKE or GLOB were an alternative to a BETWEEN operator. There are many conditions on this optimization:
- The right-hand side of the LIKE or GLOB must be either a string literal or a parameter bound to a string literal that does not begin with a wildcard character.
- It must not be possible to make the LIKE or GLOB operator true by having a numeric value (instead of a string or blob) on the left-hand side. This means that either:
- the left-hand side of the LIKE or GLOB operator is the name of an indexed column with TEXT affinity, or
- the right-hand side pattern argument does not begin with a minus sign ("-") or a digit.
- This constraint arises from the fact that numbers do not sort in lexicographical order. For example: 9<10 but '9'>'10'.
- The built-in functions used to implement LIKE and GLOB must not have been overloaded using the sqlite3_create_function() API.
- For the GLOB operator, the column must be indexed using the built-in BINARY collating sequence.
- For the LIKE operator, if case_sensitive_like mode is enabled then the column must indexed using BINARY collating sequence, or if case_sensitive_like mode is disabled then the column must indexed using built-in NOCASE collating sequence.
- If the ESCAPE option is used, the ESCAPE character must be ASCII, or a single-byte character in UTF-8.
+- The right-hand side of the LIKE or GLOB must be either a string literal or a parameter bound to a string literal that does not begin with a wildcard character.
+- It must not be possible to make the LIKE or GLOB operator true by having a numeric value (instead of a string or blob) on the left-hand side. This means that either:
+  - the left-hand side of the LIKE or GLOB operator is the name of an indexed column with TEXT affinity, or
+  - the right-hand side pattern argument does not begin with a minus sign ("-") or a digit.
+  This constraint arises from the fact that numbers do not sort in lexicographical order. For example: 9<10 but '9'>'10'.
+- The built-in functions used to implement LIKE and GLOB must not have been overloaded using the EpilogLite3_create_function() API.
+- For the GLOB operator, the column must be indexed using the built-in BINARY collating sequence.
+- For the LIKE operator, if case_sensitive_like mode is enabled then the column must be indexed using the BINARY collating sequence, or if case_sensitive_like mode is disabled then the column must be indexed using the built-in NOCASE collating sequence.
+- If the ESCAPE option is used, the ESCAPE character must be ASCII, or a single-byte character in UTF-8.
The LIKE operator has two modes that can be set by a pragma. The default mode is for LIKE comparisons to be insensitive to differences of case for latin1 characters. Thus, by default, the following expression is true:
-'a' LIKE 'A'
+```sql
+'a' LIKE 'A'
+```
If the case_sensitive_like pragma is enabled as follows:
+```sql
PRAGMA case_sensitive_like=ON;
+```
-Then the LIKE operator pays attention to case and the example above would evaluate to false. Note that case insensitivity only applies to latin1 characters - basically the upper and lower case letters of English in the lower 127 byte codes of ASCII. International character sets are case sensitive in SQLite unless an application-defined collating sequence and like() SQL function are provided that take non-ASCII characters into account. If an application-defined collating sequence and/or like() SQL function are provided, the LIKE optimization described here will never be taken.
+Then the LIKE operator pays attention to case and the example above would evaluate to false. Note that case insensitivity only applies to latin1 characters - basically the upper and lower case letters of English in the lower 127 byte codes of ASCII. International character sets are case sensitive in EpilogLite unless an application-defined collating sequence and like() SQL function are provided that take non-ASCII characters into account. If an application-defined collating sequence and/or like() SQL function are provided, the LIKE optimization described here will never be taken.
-The LIKE operator is case insensitive by default because this is what the SQL standard requires. You can change the default behavior at compile time by using the SQLITE_CASE_SENSITIVE_LIKE command-line option to the compiler.
+The LIKE operator is case insensitive by default because this is what the SQL standard requires. You can change the default behavior at compile time by using the EpilogLite_CASE_SENSITIVE_LIKE command-line option to the compiler.
The LIKE optimization might occur if the column named on the left of the operator is indexed using the built-in BINARY collating sequence and case_sensitive_like is turned on. Or the optimization might occur if the column is indexed using the built-in NOCASE collating sequence and the case_sensitive_like mode is off. These are the only two combinations under which LIKE operators will be optimized.
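+
+For example, with case_sensitive_like left at its default (off), one way to make a column eligible is to index it with the built-in NOCASE collating sequence (hypothetical table shown):
+
+```sql
+-- Hypothetical table with a TEXT column indexed using the NOCASE collation.
+CREATE TABLE contacts(name TEXT);
+CREATE INDEX idx_contacts_name ON contacts(name COLLATE NOCASE);
+
+-- With case_sensitive_like off, a prefix pattern on name can be optimized
+-- into a range scan of idx_contacts_name.
+SELECT * FROM contacts WHERE name LIKE 'smi%';
+```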
The GLOB operator is always case sensitive. The column on the left side of the GLOB operator must always use the built-in BINARY collating sequence or no attempt will be made to optimize that operator with indexes.
-The LIKE optimization will only be attempted if the right-hand side of the GLOB or LIKE operator is either literal string or a parameter that has been bound to a string literal. The string literal must not begin with a wildcard; if the right-hand side begins with a wildcard character then this optimization is not attempted. If the right-hand side is a parameter that is bound to a string, then this optimization is only attempted if the prepared statement containing the expression was compiled with sqlite3_prepare_v2() or sqlite3_prepare16_v2(). The LIKE optimization is not attempted if the right-hand side is a parameter and the statement was prepared using sqlite3_prepare() or sqlite3_prepare16().
+The LIKE optimization will only be attempted if the right-hand side of the GLOB or LIKE operator is either a literal string or a parameter that has been bound to a string literal. The string literal must not begin with a wildcard; if the right-hand side begins with a wildcard character then this optimization is not attempted. If the right-hand side is a parameter that is bound to a string, then this optimization is only attempted if the prepared statement containing the expression was compiled with EpilogLite3_prepare_v2() or EpilogLite3_prepare16_v2(). The LIKE optimization is not attempted if the right-hand side is a parameter and the statement was prepared using EpilogLite3_prepare() or EpilogLite3_prepare16().
Suppose the initial sequence of non-wildcard characters on the right-hand side of the LIKE or GLOB operator is x. We are using a single character to denote this non-wildcard prefix but the reader should understand that the prefix can consist of more than 1 character. Let y be the smallest string that is the same length as x but which compares greater than x. For example, if x is 'hello' then y would be 'hellp'. The LIKE and GLOB optimizations consist of adding two virtual terms like this:
-
- column >= x AND column < y
+```sql
+column >= x AND column < y
+```
Under most circumstances, the original LIKE or GLOB operator is still tested against each input row even if the virtual terms are used to constrain an index. This is because we do not know what additional constraints may be imposed by characters to the right of the x prefix. However, if there is only a single global wildcard to the right of x, then the original LIKE or GLOB test is disabled. In other words, if the pattern is like this:
+```sql
+column LIKE x%
- column LIKE x%
- column GLOB x*
+column GLOB x*
+```
then the original LIKE or GLOB tests are disabled when the virtual terms constrain an index because in that case we know that all of the rows selected by the index will pass the LIKE or GLOB test.
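+
+Putting the pieces together, suppose name is a hypothetical indexed TEXT column on a contacts table; a prefix pattern shows both the added virtual terms and the disabled original test:
+
+```sql
+-- For the pattern 'hello%', x is 'hello' and y is 'hellp', so the planner adds:
+--   name >= 'hello' AND name < 'hellp'
+-- Because the only wildcard is the trailing %, the original LIKE test is not
+-- re-applied to rows returned by the index range scan.
+SELECT * FROM contacts WHERE name LIKE 'hello%';
+```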
-Note that when the right-hand side of a LIKE or GLOB operator is a parameter and the statement is prepared using sqlite3_prepare_v2() or sqlite3_prepare16_v2() then the statement is automatically reparsed and recompiled on the first sqlite3_step() call of each run if the binding to the right-hand side parameter has changed since the previous run. This reparse and recompile is essentially the same action that occurs following a schema change. The recompile is necessary so that the query planner can examine the new value bound to the right-hand side of the LIKE or GLOB operator and determine whether or not to employ the optimization described above.
-6. The Skip-Scan Optimization
+Note that when the right-hand side of a LIKE or GLOB operator is a parameter and the statement is prepared using EpilogLite3_prepare_v2() or EpilogLite3_prepare16_v2() then the statement is automatically reparsed and recompiled on the first EpilogLite3_step() call of each run if the binding to the right-hand side parameter has changed since the previous run. This reparse and recompile is essentially the same action that occurs following a schema change. The recompile is necessary so that the query planner can examine the new value bound to the right-hand side of the LIKE or GLOB operator and determine whether or not to employ the optimization described above.
+
+## The Skip-Scan Optimization
-The general rule is that indexes are only useful if there are WHERE-clause constraints on the left-most columns of the index. However, in some cases, SQLite is able to use an index even if the first few columns of the index are omitted from the WHERE clause but later columns are included.
+The general rule is that indexes are only useful if there are WHERE-clause constraints on the left-most columns of the index. However, in some cases, EpilogLite is able to use an index even if the first few columns of the index are omitted from the WHERE clause but later columns are included.
Consider a table such as the following:
+```sql
CREATE TABLE people(
- name TEXT PRIMARY KEY,
- role TEXT NOT NULL,
- height INT NOT NULL, -- in cm
- CHECK( role IN ('student','teacher') )
+ name TEXT PRIMARY KEY,
+ role TEXT NOT NULL,
+ height INT NOT NULL, -- in cm
+ CHECK( role IN ('student','teacher') )
);
CREATE INDEX people_idx1 ON people(role, height);
+```
The people table has one entry for each person in a large organization. Each person is either a "student" or a "teacher", as determined by the "role" field. The table also records the height in centimeters of each person. The role and height are indexed. Notice that the left-most column of the index is not very selective - it only contains two possible values.
Now consider a query to find the names of everyone in the organization that is 180cm tall or taller:
+```sql
SELECT name FROM people WHERE height>=180;
+```
-Because the left-most column of the index does not appear in the WHERE clause of the query, one is tempted to conclude that the index is not usable here. However, SQLite is able to use the index. Conceptually, SQLite uses the index as if the query were more like the following:
+Because the left-most column of the index does not appear in the WHERE clause of the query, one is tempted to conclude that the index is not usable here. However, EpilogLite is able to use the index. Conceptually, EpilogLite uses the index as if the query were more like the following:
+```sql
SELECT name FROM people
WHERE role IN (SELECT DISTINCT role FROM people)
- AND height>=180;
+ AND height>=180;
+```
Or this:
+```sql
SELECT name FROM people WHERE role='teacher' AND height>=180
UNION ALL
SELECT name FROM people WHERE role='student' AND height>=180;
+```
-The alternative query formulations shown above are conceptual only. SQLite does not really transform the query. The actual query plan is like this: SQLite locates the first possible value for "role", which it can do by rewinding the "people_idx1" index to the beginning and reading the first record. SQLite stores this first "role" value in an internal variable that we will here call "$role". Then SQLite runs a query like: "SELECT name FROM people WHERE role=$role AND height>=180". This query has an equality constraint on the left-most column of the index and so the index can be used to resolve that query. Once that query is finished, SQLite then uses the "people_idx1" index to locate the next value of the "role" column, using code that is logically similar to "SELECT role FROM people WHERE role>$role LIMIT 1". This new "role" value overwrites the $role variable, and the process repeats until all possible values for "role" have been examined.
+The alternative query formulations shown above are conceptual only. EpilogLite does not really transform the query. The actual query plan is like this: EpilogLite locates the first possible value for "role", which it can do by rewinding the "people_idx1" index to the beginning and reading the first record. EpilogLite stores this first "role" value in an internal variable that we will here call "$role". Then EpilogLite runs a query like: "SELECT name FROM people WHERE role=$role AND height>=180". This query has an equality constraint on the left-most column of the index and so the index can be used to resolve that query. Once that query is finished, EpilogLite then uses the "people_idx1" index to locate the next value of the "role" column, using code that is logically similar to "SELECT role FROM people WHERE role>$role LIMIT 1". This new "role" value overwrites the $role variable, and the process repeats until all possible values for "role" have been examined.
We call this kind of index usage a "skip-scan" because the database engine is basically doing a full scan of the index but it optimizes the scan (making it less than "full") by occasionally skipping ahead to the next candidate value.
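+
+The stepping procedure described above can be sketched in SQL-flavored pseudocode (conceptual only; this is not the byte code that is actually generated):
+
+```sql
+$role = role value of the first entry in people_idx1
+while $role is not NULL do:
+  SELECT name FROM people WHERE role=$role AND height>=180
+  $role = (SELECT role FROM people WHERE role>$role LIMIT 1)
+end
+```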
-SQLite might use a skip-scan on an index if it knows that the first one or more columns contain many duplication values. If there are too few duplicates in the left-most columns of the index, then it would be faster to simply step ahead to the next value, and thus do a full table scan, than to do a binary search on an index to locate the next left-column value.
+EpilogLite might use a skip-scan on an index if it knows that the first one or more columns contain many duplicate values. If there are too few duplicates in the left-most columns of the index, then it would be faster to simply step ahead to the next value, and thus do a full table scan, than to do a binary search on an index to locate the next left-column value.
-The only way that SQLite can know that there are many duplicates in the left-most columns of an index is if the ANALYZE command has been run on the database. Without the results of ANALYZE, SQLite has to guess at the "shape" of the data in the table, and the default guess is that there are an average of 10 duplicates for every value in the left-most column of the index. Skip-scan only becomes profitable (it only gets to be faster than a full table scan) when the number of duplicates is about 18 or more. Hence, a skip-scan is never used on a database that has not been analyzed.
-7. Joins
+The only way that EpilogLite can know that there are many duplicates in the left-most columns of an index is if the ANALYZE command has been run on the database. Without the results of ANALYZE, EpilogLite has to guess at the "shape" of the data in the table, and the default guess is that there are an average of 10 duplicates for every value in the left-most column of the index. Skip-scan only becomes profitable (it only gets to be faster than a full table scan) when the number of duplicates is about 18 or more. Hence, a skip-scan is never used on a database that has not been analyzed.
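+
+A minimal sketch of enabling the optimization for the example above; the gathered statistics are what tell the planner that "role" has only a few distinct values:
+
+```sql
+ANALYZE;                                    -- gather index statistics, including for people_idx1
+SELECT name FROM people WHERE height>=180;  -- may now be satisfied by a skip-scan of people_idx1
+```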
-SQLite implements joins as nested loops. The default order of the nested loops in a join is for the left-most table in the FROM clause to form the outer loop and the right-most table to form the inner loop. However, SQLite will nest the loops in a different order if doing so will help it to select better indexes.
+## Joins
+
+EpilogLite implements joins as nested loops. The default order of the nested loops in a join is for the left-most table in the FROM clause to form the outer loop and the right-most table to form the inner loop. However, EpilogLite will nest the loops in a different order if doing so will help it to select better indexes.
Inner joins can be freely reordered. However, outer joins are neither commutative nor associative and hence will not be reordered. Inner joins to the left and right of an outer join might be reordered if the optimizer thinks that is advantageous, but outer joins are always evaluated in the order in which they occur.
-SQLite treats the CROSS JOIN operator specially. The CROSS JOIN operator is commutative, in theory. However, SQLite chooses to never reorder tables in a CROSS JOIN. This provides a mechanism by which the programmer can force SQLite to choose a particular loop nesting order.
+EpilogLite treats the CROSS JOIN operator specially. The CROSS JOIN operator is commutative, in theory. However, EpilogLite chooses to never reorder tables in a CROSS JOIN. This provides a mechanism by which the programmer can force EpilogLite to choose a particular loop nesting order.
-When selecting the order of tables in a join, SQLite uses an efficient polynomial-time algorithm graph algorithm described in the Next Generation Query Planner document. Because of this, SQLite is able to plan queries with 50- or 60-way joins in a matter of microseconds
+When selecting the order of tables in a join, EpilogLite uses an efficient polynomial-time graph algorithm described in the Next Generation Query Planner document. Because of this, EpilogLite is able to plan queries with 50- or 60-way joins in a matter of microseconds.
Join reordering is automatic and usually works well enough that programmers do not have to think about it, especially if ANALYZE has been used to gather statistics about the available indexes, though occasionally some hints from the programmer are needed. Consider, for example, the following schema:
+```sql
CREATE TABLE node(
- id INTEGER PRIMARY KEY,
- name TEXT
+ id INTEGER PRIMARY KEY,
+ name TEXT
);
CREATE INDEX node_idx ON node(name);
CREATE TABLE edge(
- orig INTEGER REFERENCES node,
- dest INTEGER REFERENCES node,
- PRIMARY KEY(orig, dest)
+ orig INTEGER REFERENCES node,
+ dest INTEGER REFERENCES node,
+ PRIMARY KEY(orig, dest)
);
CREATE INDEX edge_idx ON edge(dest,orig);
+```
The schema above defines a directed graph with the ability to store a name at each node. Now consider a query against this schema:
+```sql
SELECT *
- FROM edge AS e,
- node AS n1,
- node AS n2
+ FROM edge AS e,
+       node AS n1,
+       node AS n2
WHERE n1.name = 'alice'
- AND n2.name = 'bob'
- AND e.orig = n1.id
- AND e.dest = n2.id;
+ AND n2.name = 'bob'
+ AND e.orig = n1.id
+ AND e.dest = n2.id;
+```
-This query asks for is all information about edges that go from nodes labeled "alice" to nodes labeled "bob". The query optimizer in SQLite has basically two choices on how to implement this query. (There are actually six different choices, but we will only consider two of them here.) Pseudocode below demonstrating these two choices.
+This query asks for all information about edges that go from nodes labeled "alice" to nodes labeled "bob". The query optimizer in EpilogLite has basically two choices on how to implement this query. (There are actually six different choices, but we will only consider two of them here.) The pseudocode below demonstrates these two choices.
Option 1:
+```sql
foreach n1 where n1.name='alice' do:
- foreach n2 where n2.name='bob' do:
- foreach e where e.orig=n1.id and e.dest=n2.id
- return n1.*, n2.*, e.*
- end
- end
+ foreach n2 where n2.name='bob' do:
+    foreach e where e.orig=n1.id and e.dest=n2.id
+      return n1.*, n2.*, e.*
+    end
+ end
end
+```
Option 2:
+```sql
foreach n1 where n1.name='alice' do:
- foreach e where e.orig=n1.id do:
- foreach n2 where n2.id=e.dest and n2.name='bob' do:
- return n1.*, n2.*, e.*
- end
- end
+ foreach e where e.orig=n1.id do:
+    foreach n2 where n2.id=e.dest and n2.name='bob' do:
+      return n1.*, n2.*, e.*
+    end
+ end
end
+```
The same indexes are used to speed up every loop in both implementation options. The only difference in these two query plans is the order in which the loops are nested.
@@ -292,176 +329,203 @@ Let the number of alice nodes be M and the number of bob nodes be N. Consider tw
Now consider the case where M and N are both 3500. Alice nodes are abundant. This time suppose each of these nodes is connected by only one or two edges. Now option 2 is preferred. With option 2, the outer loop still has to run 3500 times, but the middle loop only runs once or twice for each outer loop and the inner loop will only run once for each middle loop, if at all. So the total number of iterations of the inner loop is around 7000. Option 1, on the other hand, has to run both its outer loop and its middle loop 3500 times each, resulting in 12 million iterations of the middle loop. Thus in the second scenario, option 2 is nearly 2000 times faster than option 1.
-So you can see that depending on how the data is structured in the table, either query plan 1 or query plan 2 might be better. Which plan does SQLite choose by default? As of version 3.6.18, without running ANALYZE, SQLite will choose option 2. If the ANALYZE command is run in order to gather statistics, a different choice might be made if the statistics indicate that the alternative is likely to run faster.
-7.1. Manual Control Of Join Order
+So you can see that depending on how the data is structured in the table, either query plan 1 or query plan 2 might be better. Which plan does EpilogLite choose by default? As of version 3.6.18, without running ANALYZE, EpilogLite will choose option 2. If the ANALYZE command is run in order to gather statistics, a different choice might be made if the statistics indicate that the alternative is likely to run faster.
+
+### Manual Control Of Join Order
-SQLite almost always picks the best join order automatically. It is very rare that a developer needs to intervene to give the query planner hints about the best join order. The best policy is to make use of PRAGMA optimize to ensure that the query planner has access to up-to-date statistics on the shape of the data in the database.
+EpilogLite almost always picks the best join order automatically. It is very rare that a developer needs to intervene to give the query planner hints about the best join order. The best policy is to make use of PRAGMA optimize to ensure that the query planner has access to up-to-date statistics on the shape of the data in the database.
-This section describes techniques by which developers can control the join order in SQLite, to work around any performance problems that may arise. However, the use of these techniques is not recommended, except as a last resort.
+This section describes techniques by which developers can control the join order in EpilogLite, to work around any performance problems that may arise. However, the use of these techniques is not recommended, except as a last resort.
-If you do encounter a situation where SQLite is picking a suboptimal join order even after running PRAGMA optimize, please report your situation on the SQLite Community Forum so that the SQLite maintainers can make new refinements to the query planner such that manual intervention is not required.
-7.1.1. Manual Control Of Query Plans Using SQLITE_STAT Tables
+If you do encounter a situation where EpilogLite is picking a suboptimal join order even after running PRAGMA optimize, please report your situation on the EpilogLite Community Forum so that the EpilogLite maintainers can make new refinements to the query planner such that manual intervention is not required.
-SQLite provides the ability for advanced programmers to exercise control over the query plan chosen by the optimizer. One method for doing this is to fudge the ANALYZE results in the sqlite_stat1 table.
-7.1.2. Manual Control of Query Plans using CROSS JOIN
+#### Manual Control Of Query Plans Using EpilogLite_STAT Tables
-Programmers can force SQLite to use a particular loop nesting order for a join by using the CROSS JOIN operator instead of just JOIN, INNER JOIN, NATURAL JOIN, or a "," join. Though CROSS JOINs are commutative in theory, SQLite chooses to never reorder the tables in a CROSS JOIN. Hence, the left table of a CROSS JOIN will always be in an outer loop relative to the right table.
+EpilogLite provides the ability for advanced programmers to exercise control over the query plan chosen by the optimizer. One method for doing this is to fudge the ANALYZE results in the EpilogLite_stat1 table.
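+
+A minimal sketch of this technique, assuming the statistics table keeps the (tbl, idx, stat) layout used by SQLite and using made-up numbers:
+
+```sql
+ANALYZE;                                    -- create and populate the statistics table
+UPDATE "EpilogLite_stat1"
+   SET stat = '1000000 2'                   -- pretend people_idx1 is extremely selective
+ WHERE tbl = 'people' AND idx = 'people_idx1';
+-- The doctored statistics take effect when they are next loaded, for example
+-- by a database connection opened after this change.
+```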
+
+#### Manual Control of Query Plans using CROSS JOIN
+
+Programmers can force EpilogLite to use a particular loop nesting order for a join by using the CROSS JOIN operator instead of just JOIN, INNER JOIN, NATURAL JOIN, or a "," join. Though CROSS JOINs are commutative in theory, EpilogLite chooses to never reorder the tables in a CROSS JOIN. Hence, the left table of a CROSS JOIN will always be in an outer loop relative to the right table.
In the following query, the optimizer is free to reorder the tables of FROM clause any way it sees fit:
+```sql
SELECT *
- FROM node AS n1,
- edge AS e,
- node AS n2
+ FROM node AS n1,
+       edge AS e,
+       node AS n2
WHERE n1.name = 'alice'
- AND n2.name = 'bob'
- AND e.orig = n1.id
- AND e.dest = n2.id;
+ AND n2.name = 'bob'
+ AND e.orig = n1.id
+ AND e.dest = n2.id;
+```
In the following logically equivalent formulation of the same query, the substitution of "CROSS JOIN" for the "," means that the order of tables must be N1, E, N2.
+```sql
SELECT *
- FROM node AS n1 CROSS JOIN
- edge AS e CROSS JOIN
- node AS n2
+ FROM node AS n1 CROSS JOIN
+       edge AS e CROSS JOIN
+       node AS n2
WHERE n1.name = 'alice'
- AND n2.name = 'bob'
- AND e.orig = n1.id
- AND e.dest = n2.id;
+ AND n2.name = 'bob'
+ AND e.orig = n1.id
+ AND e.dest = n2.id;
+```
In the latter query, the query plan must be option 2. Note that you must use the keyword "CROSS" in order to disable the table reordering optimization; INNER JOIN, NATURAL JOIN, JOIN, and other similar combinations work just like a comma join in that the optimizer is free to reorder tables as it sees fit. (Table reordering is also disabled on an outer join, but that is because outer joins are not associative or commutative. Reordering tables in OUTER JOIN changes the result.)
See "The Fossil NGQP Upgrade Case Study" for another real-world example of using CROSS JOIN to manually control the nesting order of a join. The query planner checklist found later in the same document provides further guidance on manual control of the query planner.
-8. Choosing Between Multiple Indexes
-Each table in the FROM clause of a query can use at most one index (except when the OR-clause optimization comes into play) and SQLite strives to use at least one index on each table. Sometimes, two or more indexes might be candidates for use on a single table. For example:
+## Choosing Between Multiple Indexes
+
+Each table in the FROM clause of a query can use at most one index (except when the OR-clause optimization comes into play) and EpilogLite strives to use at least one index on each table. Sometimes, two or more indexes might be candidates for use on a single table. For example:
+```sql
CREATE TABLE ex2(x,y,z);
CREATE INDEX ex2i1 ON ex2(x);
CREATE INDEX ex2i2 ON ex2(y);
SELECT z FROM ex2 WHERE x=5 AND y=6;
+```
For the SELECT statement above, the optimizer can use the ex2i1 index to lookup rows of ex2 that contain x=5 and then test each row against the y=6 term. Or it can use the ex2i2 index to lookup rows of ex2 that contain y=6 then test each of those rows against the x=5 term.
-When faced with a choice of two or more indexes, SQLite tries to estimate the total amount of work needed to perform the query using each option. It then selects the option that gives the least estimated work.
+When faced with a choice of two or more indexes, EpilogLite tries to estimate the total amount of work needed to perform the query using each option. It then selects the option that gives the least estimated work.
-To help the optimizer get a more accurate estimate of the work involved in using various indexes, the user may optionally run the ANALYZE command. The ANALYZE command scans all indexes of database where there might be a choice between two or more indexes and gathers statistics on the selectiveness of those indexes. The statistics gathered by this scan are stored in special database tables names shows names all begin with "sqlite_stat". The content of these tables is not updated as the database changes so after making significant changes it might be prudent to rerun ANALYZE. The results of an ANALYZE command are only available to database connections that are opened after the ANALYZE command completes.
+To help the optimizer get a more accurate estimate of the work involved in using various indexes, the user may optionally run the ANALYZE command. The ANALYZE command scans all indexes of the database where there might be a choice between two or more indexes and gathers statistics on the selectiveness of those indexes. The statistics gathered by this scan are stored in special database tables whose names all begin with "EpilogLite_stat". The content of these tables is not updated as the database changes, so after making significant changes it might be prudent to rerun ANALYZE. The results of an ANALYZE command are only available to database connections that are opened after the ANALYZE command completes.
-The various sqlite_statN tables contain information on how selective the various indexes are. For example, the sqlite_stat1 table might indicate that an equality constraint on column x reduces the search space to 10 rows on average, whereas an equality constraint on column y reduces the search space to 3 rows on average. In that case, SQLite would prefer to use index ex2i2 since that index is more selective.
-8.1. Disqualifying WHERE Clause Terms using Unary-"+"
+The various EpilogLite_statN tables contain information on how selective the various indexes are. For example, the EpilogLite_stat1 table might indicate that an equality constraint on column x reduces the search space to 10 rows on average, whereas an equality constraint on column y reduces the search space to 3 rows on average. In that case, EpilogLite would prefer to use index ex2i2 since that index is more selective.
-Note: Disqualifying WHERE clause terms this way is not recommended. This is a work-around. Only do this as a last resort to get the performance you need. If you find a situation where this work-around is necessary, please report the situation on the SQLite Community Forum so that the SQLite maintainers can try to improve the query planner such that the work-around is no longer required for your situation.
+### Disqualifying WHERE Clause Terms using Unary-"+"
+
+Note: Disqualifying WHERE clause terms this way is not recommended. This is a work-around. Only do this as a last resort to get the performance you need. If you find a situation where this work-around is necessary, please report the situation on the EpilogLite Community Forum so that the EpilogLite maintainers can try to improve the query planner such that the work-around is no longer required for your situation.
Terms of the WHERE clause can be manually disqualified for use with indexes by prepending a unary + operator to the column name. The unary + is a no-op and will not generate any byte code in the prepared statement. However, the unary + operator will prevent the term from constraining an index. So, in the example above, if the query were rewritten as:
+```sql
SELECT z FROM ex2 WHERE +x=5 AND y=6;
+```
The + operator on the x column will prevent that term from constraining an index. This would force the use of the ex2i2 index.
Note that the unary + operator also removes type affinity from an expression, and in some cases this can cause subtle changes in the meaning of an expression. In the example above, if column x has TEXT affinity then the comparison "x=5" will be done as text. The + operator removes the affinity. So the comparison "+x=5" will compare the text in column x with the numeric value 5 and will always be false.
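+
+A minimal sketch of that affinity difference, using a hypothetical one-column table:
+
+```sql
+CREATE TABLE aff(x TEXT);
+INSERT INTO aff VALUES('5');
+SELECT x  = 5 FROM aff;   -- 1: TEXT affinity is applied to 5, so '5' = '5'
+SELECT +x = 5 FROM aff;   -- 0: no affinity is applied, so text '5' is compared with integer 5
+```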
-8.2. Range Queries
+
+### Range Queries
Consider a slightly different scenario:
+```sql
CREATE TABLE ex2(x,y,z);
CREATE INDEX ex2i1 ON ex2(x);
CREATE INDEX ex2i2 ON ex2(y);
SELECT z FROM ex2 WHERE x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100;
+```
Further suppose that column x contains values spread out between 0 and 1,000,000 and column y contains values that span between 0 and 1,000. In that scenario, the range constraint on column x should reduce the search space by a factor of 10,000 whereas the range constraint on column y should reduce the search space by a factor of only 10. So the ex2i1 index should be preferred.
-SQLite will make this determination, but only if it has been compiled with SQLITE_ENABLE_STAT3 or SQLITE_ENABLE_STAT4. The SQLITE_ENABLE_STAT3 and SQLITE_ENABLE_STAT4 options causes the ANALYZE command to collect a histogram of column content in the sqlite_stat3 or sqlite_stat4 tables and to use this histogram to make a better guess at the best query to use for range constraints such as the above. The main difference between STAT3 and STAT4 is that STAT3 records histogram data for only the left-most column of an index whereas STAT4 records histogram data for all columns of an index. For single-column indexes, STAT3 and STAT4 work the same.
+EpilogLite will make this determination, but only if it has been compiled with EpilogLite_ENABLE_STAT3 or EpilogLite_ENABLE_STAT4. The EpilogLite_ENABLE_STAT3 and EpilogLite_ENABLE_STAT4 options cause the ANALYZE command to collect a histogram of column content in the EpilogLite_stat3 or EpilogLite_stat4 tables and to use this histogram to make a better guess at the best query plan to use for range constraints such as the above. The main difference between STAT3 and STAT4 is that STAT3 records histogram data for only the left-most column of an index whereas STAT4 records histogram data for all columns of an index. For single-column indexes, STAT3 and STAT4 work the same.
The histogram data is only useful if the right-hand side of the constraint is a simple compile-time constant or parameter and not an expression.
Another limitation of the histogram data is that it only applies to the left-most column of an index. Consider this scenario:
+```sql
CREATE TABLE ex3(w,x,y,z);
CREATE INDEX ex3i1 ON ex3(w, x);
CREATE INDEX ex3i2 ON ex3(w, y);
SELECT z FROM ex3 WHERE w=5 AND x BETWEEN 1 AND 100 AND y BETWEEN 1 AND 100;
+```
Here the inequalities are on columns x and y which are not the left-most index columns. Hence, the histogram data, which is collected only for the left-most column of each index, is useless in helping to choose between the range constraints on columns x and y.
-9. Covering Indexes
-When doing an indexed lookup of a row, the usual procedure is to do a binary search on the index to find the index entry, then extract the rowid from the index and use that rowid to do a binary search on the original table. Thus a typical indexed lookup involves two binary searches. If, however, all columns that were to be fetched from the table are already available in the index itself, SQLite will use the values contained in the index and will never look up the original table row. This saves one binary search for each row and can make many queries run twice as fast.
+## Covering Indexes
+
+When doing an indexed lookup of a row, the usual procedure is to do a binary search on the index to find the index entry, then extract the rowid from the index and use that rowid to do a binary search on the original table. Thus a typical indexed lookup involves two binary searches. If, however, all columns that were to be fetched from the table are already available in the index itself, EpilogLite will use the values contained in the index and will never look up the original table row. This saves one binary search for each row and can make many queries run twice as fast.
When an index contains all of the data needed for a query and when the original table never needs to be consulted, we call that index a "covering index".
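+
+A minimal sketch with a hypothetical schema: the index below holds both the constrained column and the selected column, so the query can be answered from the index alone:
+
+```sql
+CREATE TABLE parts(id INTEGER PRIMARY KEY, partno INT, price REAL, descr TEXT);
+CREATE INDEX parts_cover ON parts(partno, price);
+SELECT price FROM parts WHERE partno = 42;   -- parts_cover is a covering index for this query
+```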
-10. ORDER BY Optimizations
-SQLite attempts to use an index to satisfy the ORDER BY clause of a query when possible. When faced with the choice of using an index to satisfy WHERE clause constraints or satisfying an ORDER BY clause, SQLite does the same cost analysis described above and chooses the index that it believes will result in the fastest answer.
+## ORDER BY Optimizations
+
+EpilogLite attempts to use an index to satisfy the ORDER BY clause of a query when possible. When faced with the choice of using an index to satisfy WHERE clause constraints or satisfying an ORDER BY clause, EpilogLite does the same cost analysis described above and chooses the index that it believes will result in the fastest answer.
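+
+For example, with a hypothetical index on the height column of the people table defined earlier, rows can be read from the index in already-sorted order and the separate sorting pass can be skipped:
+
+```sql
+CREATE INDEX people_idx2 ON people(height);
+SELECT name FROM people ORDER BY height;   -- the index can supply rows in ORDER BY order
+```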
+
+EpilogLite will also attempt to use indexes to help satisfy GROUP BY clauses and the DISTINCT keyword. If the nested loops of the join can be arranged such that rows that are equivalent for the GROUP BY or for the DISTINCT are consecutive, then the GROUP BY or DISTINCT logic can determine if the current row is part of the same group or if the current row is distinct simply by comparing the current row to the previous row. This can be much faster than the alternative of comparing each row to all prior rows.
+
+### Partial ORDER BY via Index
-SQLite will also attempt to use indexes to help satisfy GROUP BY clauses and the DISTINCT keyword. If the nested loops of the join can be arranged such that rows that are equivalent for the GROUP BY or for the DISTINCT are consecutive, then the GROUP BY or DISTINCT logic can determine if the current row is part of the same group or if the current row is distinct simply by comparing the current row to the previous row. This can be much faster than the alternative of comparing each row to all prior rows.
-10.1. Partial ORDER BY via Index
+If a query contains an ORDER BY clause with multiple terms, it might be that EpilogLite can use indexes to cause rows to come out in the order of some prefix of the terms in the ORDER BY but that later terms in the ORDER BY are not satisfied. In that case, EpilogLite does block sorting. Suppose the ORDER BY clause has four terms and the natural order of the query results in rows appearing in order of the first two terms. As each row is output by the query engine and enters the sorter, the outputs in the current row corresponding to the first two terms of the ORDER BY are compared against the previous row. If they have changed, the current sort is finished and output and a new sort is started. This results in a slightly faster sort. Even bigger advantages are that many fewer rows need to be held in memory, reducing memory requirements, and outputs can begin to appear before the core query has run to completion.
-If a query contains an ORDER BY clause with multiple terms, it might be that SQLite can use indexes to cause rows to come out in the order of some prefix of the terms in the ORDER BY but that later terms in the ORDER BY are not satisfied. In that case, SQLite does block sorting. Suppose the ORDER BY clause has four terms and the natural order of the query results in rows appearing in order of the first two terms. As each row is output by the query engine and enters the sorter, the outputs in the current row corresponding to the first two terms of the ORDER BY are compared against the previous row. If they have changed, the current sort is finished and output and a new sort is started. This results in a slightly faster sort. Even bigger advantages are that many fewer rows need to be held in memory, reducing memory requirements, and outputs can begin to appear before the core query has run to completion.
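+
+A minimal sketch with a hypothetical schema: the index delivers rows already ordered by (a, b), so the sorter only has to order each (a, b) block by (c, d) and can emit each finished block immediately:
+
+```sql
+CREATE TABLE log(a INT, b INT, c INT, d INT);
+CREATE INDEX log_ab ON log(a, b);
+SELECT * FROM log ORDER BY a, b, c, d;   -- (a, b) ordering comes from the index; (c, d) is block-sorted
+```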
-11. Subquery Flattening
+## Subquery Flattening
When a subquery occurs in the FROM clause of a SELECT, the simplest behavior is to evaluate the subquery into a transient table, then run the outer SELECT against the transient table. Such a plan can be suboptimal since the transient table will not have any indexes and the outer query (which is likely a join) will be forced to either do a full table scan on the transient table or else construct a query-time index on the transient table, neither of which is likely to be particularly fast.
-To overcome this problem, SQLite attempts to flatten subqueries in the FROM clause of a SELECT. This involves inserting the FROM clause of the subquery into the FROM clause of the outer query and rewriting expressions in the outer query that refer to the result set of the subquery. For example:
+To overcome this problem, EpilogLite attempts to flatten subqueries in the FROM clause of a SELECT. This involves inserting the FROM clause of the subquery into the FROM clause of the outer query and rewriting expressions in the outer query that refer to the result set of the subquery. For example:
+```sql
SELECT t1.a, t2.b FROM t2, (SELECT x+y AS a FROM t1 WHERE z<100) WHERE a>5
+```
Would be rewritten using query flattening as:
+```sql
SELECT t1.x+t1.y AS a, t2.b FROM t2, t1 WHERE z<100 AND a>5
+```
There is a long list of conditions that must all be met in order for query flattening to occur. Some of the constraints are marked as obsolete. These obsolete constraints are retained in the documentation to preserve the relative positions of the other constraints.
Casual readers are not expected to understand all of these rules. The point here is that flattening rules are subtle and complex. There have been multiple bugs over the years caused by over-aggressive query flattening. On the other hand, performance of complex queries and/or queries involving views tends to suffer if query flattening is more conservative.
- (Obsolete)
- (Obsolete)
- If the subquery is the right operand of a LEFT JOIN then
- the subquery may not be a join, and
- the FROM clause of the subquery may not contain a virtual table, and
- the outer query may not be DISTINCT.
- The subquery is not DISTINCT.
- (Obsolete - subsumed into constraint 4)
- (Obsolete)
- The subquery has a FROM clause.
- The subquery does not use LIMIT or the outer query is not a join.
- The subquery does not use LIMIT or the outer query does not use aggregates.
- (Obsolete)
- The subquery and the outer query do not both have ORDER BY clauses.
- (Obsolete - subsumed into constraint 3)
- The subquery and outer query do not both use LIMIT.
- The subquery does not use OFFSET.
- If the outer query is part of a compound select, then the subquery may not have a LIMIT clause.
- If the outer query is an aggregate, then the subquery may not contain ORDER BY.
- If the sub-query is a compound SELECT, then
- all compound operators must be UNION ALL, and
- no terms with the subquery compound may be aggregate or DISTINCT, and
- every term within the subquery must have a FROM clause, and
- the outer query may not be an aggregateor DISTINCT query.
- the subquery may not contain window functions.
- the subquery must not be the right-hand side of a LEFT JOIN.
- either the subquery is the first element of the outer query or there are not RIGHT or FULL JOINs in any arm of the subquery.
- the corresponding result set expressions in all arms of the compound subquery must have the same affinity.
- The parent and sub-query may contain WHERE clauses. Subject to rules (11), (12) and (13), they may also contain ORDER BY, LIMIT and OFFSET clauses.
- If the sub-query is a compound select, then all terms of the ORDER by clause of the parent must be simple references to columns of the sub-query.
- If the subquery uses LIMIT then the outer query may not have a WHERE clause.
- If the sub-query is a compound select, then it must not use an ORDER BY clause.
- If the subquery uses LIMIT, then the outer query may not be DISTINCT.
- The subquery may not be a recursive CTE.
- If the outer query is a recursive CTE, then the sub-query may not be a compound query.
- (Obsolete)
- Neither the subquery nor the outer query may contain a window function in the result set nor the ORDER BY clause.
- The subquery may not be the right operand of a RIGHT or FULL OUTER JOIN.
- The subquery may not contain a FULL or RIGHT JOIN unless it is the first element of the parent query. Two subcases:
- the subquery is not a compound query.
- the subquery is a compound query and the RIGHT JOIN occurs in any arm of the compound query. (See also (17g)).
- The subquery is not a MATERIALIZED CTE.
+- (Obsolete)
+- (Obsolete)
+- If the subquery is the right operand of a LEFT JOIN then
+  - the subquery may not be a join, and
+  - the FROM clause of the subquery may not contain a virtual table, and
+  - the outer query may not be DISTINCT.
+- The subquery is not DISTINCT.
+- (Obsolete - subsumed into constraint 4)
+- (Obsolete)
+- The subquery has a FROM clause.
+- The subquery does not use LIMIT or the outer query is not a join.
+- The subquery does not use LIMIT or the outer query does not use aggregates.
+- (Obsolete)
+- The subquery and the outer query do not both have ORDER BY clauses.
+- (Obsolete - subsumed into constraint 3)
+- The subquery and outer query do not both use LIMIT.
+- The subquery does not use OFFSET.
+- If the outer query is part of a compound select, then the subquery may not have a LIMIT clause.
+- If the outer query is an aggregate, then the subquery may not contain ORDER BY.
+- If the sub-query is a compound SELECT, then
+  - all compound operators must be UNION ALL, and
+  - no terms within the subquery compound may be aggregate or DISTINCT, and
+  - every term within the subquery must have a FROM clause, and
+  - the outer query may not be an aggregate or DISTINCT query.
+  - the subquery may not contain window functions.
+  - the subquery must not be the right-hand side of a LEFT JOIN.
+  - either the subquery is the first element of the outer query or there are no RIGHT or FULL JOINs in any arm of the subquery.
+  - the corresponding result set expressions in all arms of the compound subquery must have the same affinity.
+- The parent and sub-query may contain WHERE clauses. Subject to rules (11), (12) and (13), they may also contain ORDER BY, LIMIT and OFFSET clauses.
+- If the sub-query is a compound select, then all terms of the ORDER BY clause of the parent must be simple references to columns of the sub-query.
+- If the subquery uses LIMIT then the outer query may not have a WHERE clause.
+- If the sub-query is a compound select, then it must not use an ORDER BY clause.
+- If the subquery uses LIMIT, then the outer query may not be DISTINCT.
+- The subquery may not be a recursive CTE.
+- If the outer query is a recursive CTE, then the sub-query may not be a compound query.
+- (Obsolete)
+- Neither the subquery nor the outer query may contain a window function in the result set nor the ORDER BY clause.
+- The subquery may not be the right operand of a RIGHT or FULL OUTER JOIN.
+- The subquery may not contain a FULL or RIGHT JOIN unless it is the first element of the parent query. Two subcases:
+  - the subquery is not a compound query.
+  - the subquery is a compound query and the RIGHT JOIN occurs in any arm of the compound query. (See also (17g)).
+- The subquery is not a MATERIALIZED CTE.
Query flattening is an important optimization when views are used as each use of a view is translated into a subquery.
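+
+For example, a hypothetical view over the t1 table used above expands into a FROM-clause subquery that the flattener can merge into the outer query:
+
+```sql
+CREATE VIEW small_t1(a, b) AS SELECT x+y, z FROM t1 WHERE z<100;
+SELECT a, b FROM small_t1 WHERE a>5;   -- evaluated as if the view body were written inline
+```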
-12. Subquery Co-routines
-SQLite implements FROM-clause subqueries in one of three ways:
+## Subquery Co-routines
- Flatten the subquery into its outer query
- Evaluate the subquery into a transient table that exists for the duration of the one SQL statement that is being evaluated, then run the outer query against that transient table.
- Evaluate the subquery in a co-routine that runs in parallel with the outer query, providing rows to the outer query as needed.
+EpilogLite implements FROM-clause subqueries in one of three ways:
+
+- Flatten the subquery into its outer query
+- Evaluate the subquery into a transient table that exists for the duration of the one SQL statement that is being evaluated, then run the outer query against that transient table.
+- Evaluate the subquery in a co-routine that runs in parallel with the outer query, providing rows to the outer query as needed.
This section describes the third technique: implementing the subquery as a co-routine.
@@ -472,134 +536,161 @@ When a subquery is implemented as a co-routine, byte-code is generated to implem
Co-routines are better than storing the complete result set of the subquery in a transient table because co-routines use less memory. With a co-routine, only a single row of the result needs to be remembered, whereas all rows of the result must be stored for a transient table. Also, because the co-routine does not need to run to completion before the outer query begins its work, the first rows of output can appear much sooner, and if the overall query is abandoned before it has finished, less work is done overall.
On the other hand, if the result of the subquery must be scanned multiple times (because, for example, it is just one table in a join) then it is better to use a transient table to remember the entire result of the subquery, in order to avoid computing the subquery more than once.
-12.1. Using Co-routines to Defer Work until after the Sorting
-As of SQLite version 3.21.0 (2017-10-24), the query planner will always prefer to use a co-routine to implement FROM-clause subqueries that contains an ORDER BY clause and that are not part of a join when the result set of the outer query is "complex". This feature allows applications to shift expensive computations from before the sorter until after the sorter, which can result in faster operation. For example, consider this query:
+### Using Co-routines to Defer Work until after the Sorting
+
+As of EpilogLite version 3.21.0 (2017-10-24), the query planner will always prefer to use a co-routine to implement FROM-clause subqueries that contain an ORDER BY clause and that are not part of a join when the result set of the outer query is "complex". This feature allows applications to shift expensive computations from before the sorter until after the sorter, which can result in faster operation. For example, consider this query:
+```sql
SELECT expensive_function(a) FROM tab ORDER BY date DESC LIMIT 5;
+```
The goal of this query is to compute some value for the five most recent entries in the table. In the query above, the "expensive_function()" is invoked prior to the sort and thus is invoked on every row of the table, even rows that are ultimately omitted due to the LIMIT clause. A co-routine can be used to work around this:
+```sql
SELECT expensive_function(a) FROM (
- SELECT a FROM tab ORDER BY date DESC LIMIT 5
+ SELECT a FROM tab ORDER BY date DESC LIMIT 5
);
+```
In the revised query, the subquery implemented by a co-routine computes the five most recent values for "a". Those five values are passed from the co-routine up into the outer query where the "expensive_function()" is invoked on only the specific rows that the application cares about.
-The query planner in future versions of SQLite might grow smart enough to make transformations such as the above automatically, in both directions. That is to say, future versions of SQLite might transform queries of the first form into the second, or queries written the second way into the first. As of SQLite version 3.22.0 (2018-01-22), the query planner will flatten the subquery if the outer query does not make use of any user-defined functions or subqueries in its result set. For the examples shown above, however, SQLite implements each of the queries as written.
-13. The MIN/MAX Optimization
+The query planner in future versions of EpilogLite might grow smart enough to make transformations such as the above automatically, in both directions. That is to say, future versions of EpilogLite might transform queries of the first form into the second, or queries written the second way into the first. As of EpilogLite version 3.22.0 (2018-01-22), the query planner will flatten the subquery if the outer query does not make use of any user-defined functions or subqueries in its result set. For the examples shown above, however, EpilogLite implements each of the queries as written.
+
+## The MIN/MAX Optimization
Queries that contain a single MIN() or MAX() aggregate function whose argument is the left-most column of an index might be satisfied by doing a single index lookup rather than by scanning the entire table. Examples:
+```sql
SELECT MIN(x) FROM table;
SELECT MAX(x)+1 FROM table;
+```
-14. Automatic Query-Time Indexes
+## Automatic Query-Time Indexes
-When no indexes are available to aid the evaluation of a query, SQLite might create an automatic index that lasts only for the duration of a single SQL statement. Automatic indexes are also sometimes called "Query-time indexes". Since the cost of constructing the automatic or query-time index is O(NlogN) (where N is the number of entries in the table) and the cost of doing a full table scan is only O(N), an automatic index will only be created if SQLite expects that the lookup will be run more than logN times during the course of the SQL statement. Consider an example:
+When no indexes are available to aid the evaluation of a query, EpilogLite might create an automatic index that lasts only for the duration of a single SQL statement. Automatic indexes are also sometimes called "Query-time indexes". Since the cost of constructing the automatic or query-time index is O(NlogN) (where N is the number of entries in the table) and the cost of doing a full table scan is only O(N), an automatic index will only be created if EpilogLite expects that the lookup will be run more than logN times during the course of the SQL statement. Consider an example:
+```sql
CREATE TABLE t1(a,b);
CREATE TABLE t2(c,d);
-- Insert many rows into both t1 and t2
SELECT * FROM t1, t2 WHERE a=c;
+```
-In the query above, if both t1 and t2 have approximately N rows, then without any indexes the query will require O(N*N) time. On the other hand, creating an index on table t2 requires O(NlogN) time and using that index to evaluate the query requires an additional O(NlogN) time. In the absence of ANALYZE information, SQLite guesses that N is one million and hence it believes that constructing the automatic index will be the cheaper approach.
+In the query above, if both t1 and t2 have approximately N rows, then without any indexes the query will require O(N\*N) time. On the other hand, creating an index on table t2 requires O(NlogN) time and using that index to evaluate the query requires an additional O(NlogN) time. In the absence of ANALYZE information, EpilogLite guesses that N is one million and hence it believes that constructing the automatic index will be the cheaper approach.
An automatic query-time index might also be used for a subquery:
+```sql
CREATE TABLE t1(a,b);
CREATE TABLE t2(c,d);
-- Insert many rows into both t1 and t2
SELECT a, (SELECT d FROM t2 WHERE c=b) FROM t1;
+```
-In this example, the t2 table is used in a subquery to translate values of the t1.b column. If each table contains N rows, SQLite expects that the subquery will run N times, and hence it will believe it is faster to construct an automatic, transient index on t2 first and then use that index to satisfy the N instances of the subquery.
+In this example, the t2 table is used in a subquery to translate values of the t1.b column. If each table contains N rows, EpilogLite expects that the subquery will run N times, and hence it will believe it is faster to construct an automatic, transient index on t2 first and then use that index to satisfy the N instances of the subquery.
-The automatic indexing capability can be disabled at run-time using the automatic_index pragma. Automatic indexing is turned on by default, but this can be changed so that automatic indexing is off by default using the SQLITE_DEFAULT_AUTOMATIC_INDEX compile-time option. The ability to create automatic indexes can be completely disabled by compiling with the SQLITE_OMIT_AUTOMATIC_INDEX compile-time option.
+The automatic indexing capability can be disabled at run-time using the automatic_index pragma. Automatic indexing is turned on by default, but this can be changed so that automatic indexing is off by default using the EpilogLite_DEFAULT_AUTOMATIC_INDEX compile-time option. The ability to create automatic indexes can be completely disabled by compiling with the EpilogLite_OMIT_AUTOMATIC_INDEX compile-time option.
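+
+A minimal sketch of toggling the capability at run time:
+
+```sql
+PRAGMA automatic_index = OFF;   -- no query-time indexes for this connection
+-- ... run statements ...
+PRAGMA automatic_index = ON;    -- restore the default behavior
+```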
-In SQLite version 3.8.0 (2013-08-26) and later, an SQLITE_WARNING_AUTOINDEX message is sent to the error log every time a statement is prepared that uses an automatic index. Application developers can and should use these warnings to identify the need for new persistent indexes in the schema.
+In EpilogLite version 3.8.0 (2013-08-26) and later, an EpilogLite_WARNING_AUTOINDEX message is sent to the error log every time a statement is prepared that uses an automatic index. Application developers can and should use these warnings to identify the need for new persistent indexes in the schema.
-Do not confuse automatic indexes with the internal indexes (having names like "sqlite_autoindex_table_N") that are sometimes created to implement a PRIMARY KEY constraint or UNIQUE constraint. The automatic indexes described here exist only for the duration of a single query, are never persisted to disk, and are only visible to a single database connection. Internal indexes are part of the implementation of PRIMARY KEY and UNIQUE constraints, are long-lasting and persisted to disk, and are visible to all database connections. The term "autoindex" appears in the names of internal indexes for legacy reasons and does not indicate that internal indexes and automatic indexes are related.
-14.1. Hash Joins
+Do not confuse automatic indexes with the internal indexes (having names like "EpilogLite_autoindex_table_N") that are sometimes created to implement a PRIMARY KEY constraint or UNIQUE constraint. The automatic indexes described here exist only for the duration of a single query, are never persisted to disk, and are only visible to a single database connection. Internal indexes are part of the implementation of PRIMARY KEY and UNIQUE constraints, are long-lasting and persisted to disk, and are visible to all database connections. The term "autoindex" appears in the names of internal indexes for legacy reasons and does not indicate that internal indexes and automatic indexes are related.
+
+### Hash Joins
An automatic index is almost the same thing as a hash join. The only difference is that a B-Tree is used instead of a hash table. If you are willing to say that the transient B-Tree constructed for an automatic index is really just a fancy hash table, then a query that uses an automatic index is just a hash join.
-SQLite constructs a transient index instead of a hash table in this instance because it already has a robust and high performance B-Tree implementation at hand, whereas a hash-table would need to be added. Adding a separate hash table implementation to handle this one case would increase the size of the library (which is designed for use on low-memory embedded devices) for minimal performance gain. SQLite might be enhanced with a hash-table implementation someday, but for now it seems better to continue using automatic indexes in cases where client/server database engines might use a hash join.
-15. The Predicate Push-Down Optimization
+EpilogLite constructs a transient index instead of a hash table in this instance because it already has a robust and high performance B-Tree implementation at hand, whereas a hash-table would need to be added. Adding a separate hash table implementation to handle this one case would increase the size of the library (which is designed for use on low-memory embedded devices) for minimal performance gain. EpilogLite might be enhanced with a hash-table implementation someday, but for now it seems better to continue using automatic indexes in cases where client/server database engines might use a hash join.
+
+## The Predicate Push-Down Optimization
If a subquery cannot be flattened into the outer query, it might still be possible to enhance performance by "pushing down" WHERE clause terms from the outer query into the subquery. Consider an example:
+```sql
CREATE TABLE t1(a INT, b INT);
CREATE TABLE t2(x INT, y INT);
CREATE VIEW v1(a,b) AS SELECT DISTINCT a, b FROM t1;
SELECT x, y, b
- FROM t2 JOIN v1 ON (x=a)
+ FROM t2 JOIN v1 ON (x=a)
WHERE b BETWEEN 10 AND 20;
+```
The view v1 cannot be flattened because it is DISTINCT. It must instead be run as a subquery with the results being stored in a transient table, then the join is performed between t2 and the transient table. The push-down optimization pushes down the "b BETWEEN 10 AND 20" term into the view. This makes the transient table smaller, and helps the subquery to run faster if there is an index on t1.b. The resulting evaluation is like this:
+```sql
SELECT x, y, b
- FROM t2
- JOIN (SELECT DISTINCT a, b FROM t1 WHERE b BETWEEN 10 AND 20)
+ FROM t2
+ JOIN (SELECT DISTINCT a, b FROM t1 WHERE b BETWEEN 10 AND 20)
WHERE b BETWEEN 10 AND 20;
+```
The WHERE-clause push-down optimization cannot always be used. For example, if the subquery contains a LIMIT, then pushing down any part of the WHERE clause from the outer query could change the result of the inner query. There are other restrictions, explained in a comment in the source code on the pushDownWhereTerms() routine that implements this optimization.
-Do not confuse this optimization with the optimization by a similar name in MySQL. The MySQL push-down optimization changes the order of evaluation of WHERE-clause constraints such that those that can be evaluated using only the index and without having to find the corresponding table row are evaluated first, thus avoiding an unnecessary table row lookup if the constraint fails. For disambiguation, SQLite calls this the "MySQL push-down optimization". SQLite does do the MySQL push-down optimization too, in addition to the WHERE-clause push-down optimization. But the focus of this section is the WHERE-clause push-down optimization.
-16. The OUTER JOIN Strength Reduction Optimization
+Do not confuse this optimization with the optimization by a similar name in MySQL. The MySQL push-down optimization changes the order of evaluation of WHERE-clause constraints such that those that can be evaluated using only the index and without having to find the corresponding table row are evaluated first, thus avoiding an unnecessary table row lookup if the constraint fails. For disambiguation, EpilogLite calls this the "MySQL push-down optimization". EpilogLite does do the MySQL push-down optimization too, in addition to the WHERE-clause push-down optimization. But the focus of this section is the WHERE-clause push-down optimization.
+
+## The OUTER JOIN Strength Reduction Optimization
An OUTER JOIN (either a LEFT JOIN, a RIGHT JOIN, or a FULL JOIN) can sometimes be simplified. A LEFT or RIGHT JOIN can be converted into an ordinary (INNER) JOIN, or a FULL JOIN might be converted into either a LEFT or a RIGHT JOIN. This can happen if there are terms in the WHERE clause that guarantee the same result after simplification. For example, if any column in the right-hand table of the LEFT JOIN must be non-NULL in order for the WHERE clause to be true, then the LEFT JOIN is demoted to an ordinary JOIN.
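+
+A minimal sketch with hypothetical tables: the WHERE clause below can only be true when b.val is non-NULL, so every row that the LEFT JOIN would pad with NULLs is filtered out anyway and the join can be demoted to an ordinary JOIN:
+
+```sql
+SELECT a.id, b.val
+  FROM a LEFT JOIN b ON a.id = b.aid
+ WHERE b.val > 10;   -- behaves the same as an INNER JOIN here
+```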
The theorem prover that determines whether a join can be simplified is imperfect. It sometimes returns a false negative. In other words, it sometimes fails to prove that reducing the strength of an OUTER JOIN is safe when in fact it is safe. For example, the prover does not know that the datetime() SQL function will always return NULL if its first argument is NULL, and so it will not recognize that the LEFT JOIN in the following query could be strength-reduced:
+```sql
SELECT urls.url
- FROM urls
- LEFT JOIN
- (SELECT *
- FROM (SELECT url_id AS uid, max(retrieval_time) AS rtime
- FROM lookups GROUP BY 1 ORDER BY 1)
- WHERE uid IN (358341,358341,358341)
- ) recent
- ON u.source_seed_id = recent.xyz OR u.url_id = recent.xyz
- WHERE
- DATETIME(recent.rtime) > DATETIME('now', '-5 days');
+ FROM urls
+ LEFT JOIN
+ (SELECT *
+ FROM (SELECT url_id AS uid, max(retrieval_time) AS rtime
+ FROM lookups GROUP BY 1 ORDER BY 1)
+ WHERE uid IN (358341,358341,358341)
+ ) recent
+ ON u.source_seed_id = recent.xyz OR u.url_id = recent.xyz
+WHERE
+DATETIME(recent.rtime) > DATETIME('now', '-5 days');
+```
It is possible that future enhancements to the prover might enable it to recognize that NULL inputs to certain built-in functions always result in a NULL answer. However, not all built-in functions have that property (for example coalesce()) and, of course, the prover will never be able to reason about application-defined SQL functions.
-17. The Omit OUTER JOIN Optimization
+
+## The Omit OUTER JOIN Optimization
Sometimes a LEFT or RIGHT JOIN can be completely omitted from a query without changing the result. This can happen if all of the following are true:
- The query is not an aggregate
- Either the query is DISTINCT or else the ON or USING clause on the OUTER JOIN constrains the join such that it matches only a single row
- The right-hand table of the LEFT JOIN or the left-hand table of a RIGHT JOIN is not be used anywhere in the query outside of its own USING or ON clause.
+- The query is not an aggregate
+- Either the query is DISTINCT or else the ON or USING clause on the OUTER JOIN constrains the join such that it matches only a single row
+- The right-hand table of the LEFT JOIN or the left-hand table of a RIGHT JOIN is not used anywhere in the query outside of its own USING or ON clause.
OUTER JOIN elimination often comes up when OUTER JOINs are used inside of views, and then the view is used in such a way that none of the columns on the right-hand table of the LEFT JOIN or on the left-hand table of a RIGHT JOIN are referenced.
Here is a simple example of omitting a LEFT JOIN:
+```sql
CREATE TABLE t1(ipk INTEGER PRIMARY KEY, v1);
CREATE TABLE t2(ipk INTEGER PRIMARY KEY, v2);
CREATE TABLE t3(ipk INTEGER PRIMARY KEY, v3);
-SELECT v1, v3 FROM t1
- LEFT JOIN t2 ON (t1.ipk=t2.ipk)
- LEFT JOIN t3 ON (t1.ipk=t3.ipk)
+SELECT v1, v3 FROM t1
+ LEFT JOIN t2 ON (t1.ipk=t2.ipk)
+ LEFT JOIN t3 ON (t1.ipk=t3.ipk)
+```
The t2 table is completely unused in the query above, and so the query planner is able to implement the query as if it were written:
-SELECT v1, v3 FROM t1
- LEFT JOIN t3 ON (t1.ipk=t3.ipk)
+```sql
+SELECT v1, v3 FROM t1
+ LEFT JOIN t3 ON (t1.ipk=t3.ipk)
+```
-As of this writing, only LEFT JOINs are eliminated. This optimize has not yet been generalized to work with RIGHT JOINs as RIGHT JOIN is a relatively new addition to SQLite. That asymmetry will probably be corrected in a future release.
-18. The Constant Propagation Optimization
+As of this writing, only LEFT JOINs are eliminated. This optimization has not yet been generalized to work with RIGHT JOINs, as RIGHT JOIN is a relatively new addition to EpilogLite. That asymmetry will probably be corrected in a future release.
-When a WHERE clause contains two or more equality constraints connected by the AND operator such that all of the affinities of the various constraints are the same, then SQLite might use the transitive property of equality to construct new "virtual" constraints that can be used to simplify expressions and/or improve performance. This is called the "constant-propagation optimization".
+## The Constant Propagation Optimization
+
+When a WHERE clause contains two or more equality constraints connected by the AND operator such that all of the affinities of the various constraints are the same, then EpilogLite might use the transitive property of equality to construct new "virtual" constraints that can be used to simplify expressions and/or improve performance. This is called the "constant-propagation optimization".
For example, consider the following schema and query:
+```sql
CREATE TABLE t1(a INTEGER PRIMARY KEY, b INT, c INT);
SELECT * FROM t1 WHERE a=b AND b=5;
+```
-SQLite looks at the "a=b" and "b=5" constraints and deduces that if those two constraints are true, then it must also be the case that "a=5" is true. This means that the desired row can be looked up quickly using a value of 5 for the INTEGER PRIMARY KEY.
-
-This page last modified on 2024-07-24 12:16:13 UTC
+EpilogLite looks at the "a=b" and "b=5" constraints and deduces that if those two constraints are true, then it must also be the case that "a=5" is true. This means that the desired row can be looked up quickly using a value of 5 for the INTEGER PRIMARY KEY.
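+One way to observe the effect is SQLite's EXPLAIN QUERY PLAN (the exact output text may differ):
+```sql
+EXPLAIN QUERY PLAN SELECT * FROM t1 WHERE a=b AND b=5;
+-- Expected: a direct rowid lookup such as
+--   SEARCH t1 USING INTEGER PRIMARY KEY (rowid=?)
+-- rather than a full scan of t1.
+```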
diff --git a/design/TRANSACTIONS.md b/design/TRANSACTIONS.md
index 88034c3..b6dc2c2 100644
--- a/design/TRANSACTIONS.md
+++ b/design/TRANSACTIONS.md
@@ -1,68 +1,8 @@
+# Atomic Commit In SQLite
-SQLite
-Small. Fast. Reliable.
-Choose any three.
-
- Home
- About
- Documentation
- Download
- License
- Support
- Purchase
- Search
-
-Atomic Commit In SQLite
-Table Of Contents
-1. Introduction
-2. Hardware Assumptions
-3. Single File Commit
-3.1. Initial State
-3.2. Acquiring A Read Lock
-3.3. Reading Information Out Of The Database
-3.4. Obtaining A Reserved Lock
-3.5. Creating A Rollback Journal File
-3.6. Changing Database Pages In User Space
-3.7. Flushing The Rollback Journal File To Mass Storage
-3.8. Obtaining An Exclusive Lock
-3.9. Writing Changes To The Database File
-3.10. 0 Flushing Changes To Mass Storage
-3.11. 1 Deleting The Rollback Journal
-3.12. 2 Releasing The Lock
-4. Rollback
-4.1. When Something Goes Wrong...
-4.2. Hot Rollback Journals
-4.3. Obtaining An Exclusive Lock On The Database
-4.4. Rolling Back Incomplete Changes
-4.5. Deleting The Hot Journal
-4.6. Continue As If The Uncompleted Writes Had Never Happened
-5. Multi-file Commit
-5.1. Separate Rollback Journals For Each Database
-5.2. The Super-Journal File
-5.3. Updating Rollback Journal Headers
-5.4. Updating The Database Files
-5.5. Delete The Super-Journal File
-5.6. Clean Up The Rollback Journals
-6. Additional Details Of The Commit Process
-6.1. Always Journal Complete Sectors
-6.2. Dealing With Garbage Written Into Journal Files
-6.3. Cache Spill Prior To Commit
-7. Optimizations
-7.1. Cache Retained Between Transactions
-7.2. Exclusive Access Mode
-7.3. Do Not Journal Freelist Pages
-7.4. Single Page Updates And Atomic Sector Writes
-7.5. Filesystems With Safe Append Semantics
-7.6. Persistent Rollback Journals
-8. Testing Atomic Commit Behavior
-9. Things That Can Go Wrong
-9.1. Broken Locking Implementations
-9.2. Incomplete Disk Flushes
-9.3. Partial File Deletions
-9.4. Garbage Written Into Files
-9.5. Deleting Or Renaming A Hot Journal
-10. Future Directions And Conclusion
-1. Introduction
+status: draft
+
+## Introduction
An important feature of transactional databases like SQLite is "atomic commit". Atomic commit means that either all database changes within a single transaction occur or none of them occur. With atomic commit, it is as if many different writes to different sections of the database file occur instantaneously and simultaneously. Real hardware serializes writes to mass storage, and writing a single sector takes a finite amount of time. So it is impossible to truly write many different sectors of a database file simultaneously and/or instantaneously. But the atomic commit logic within SQLite makes it appear as if the changes for a transaction are all written instantaneously and simultaneously.
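+For example, given a hypothetical accounts table, the two updates below become durable together or not at all:
+```sql
+BEGIN;
+UPDATE accounts SET balance = balance - 100 WHERE id = 1;
+UPDATE accounts SET balance = balance + 100 WHERE id = 2;
+COMMIT;  -- after a crash, either both updates are present or neither is
+```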
@@ -71,7 +11,8 @@ SQLite has the important property that transactions appear to be atomic even if
This article describes the techniques used by SQLite to create the illusion of atomic commit.
The information in this article applies only when SQLite is operating in "rollback mode", or in other words when SQLite is not using a write-ahead log. SQLite still supports atomic commit when write-ahead logging is enabled, but it accomplishes atomic commit by a different mechanism from the one described in this article. See the write-ahead log documentation for additional information on how SQLite supports atomic commit in that context.
-2. Hardware Assumptions
+
+## Hardware Assumptions
Throughout this article, we will call the mass storage device "disk" even though the mass storage device might really be flash memory.
@@ -94,14 +35,16 @@ SQLite assumes that a file deletion is atomic from the point of view of a user p
SQLite assumes that the detection and/or correction of bit errors caused by cosmic rays, thermal noise, quantum fluctuations, device driver bugs, or other mechanisms, is the responsibility of the underlying hardware and operating system. SQLite does not add any redundancy to the database file for the purpose of detecting corruption or I/O errors. SQLite assumes that the data it reads is exactly the same data that it previously wrote.
By default, SQLite assumes that an operating system call to write a range of bytes will not damage or alter any bytes outside of that range even if a power loss or OS crash occurs during that write. We call this the "powersafe overwrite" property. Prior to version 3.7.9 (2011-11-01), SQLite did not assume powersafe overwrite. But with the standard sector size increasing from 512 to 4096 bytes on most disk drives, it has become necessary to assume powersafe overwrite in order to maintain historical performance levels and so powersafe overwrite is assumed by default in recent versions of SQLite. The powersafe overwrite assumption can be disabled at compile-time or at run-time if desired. See the powersafe overwrite documentation for further details.
-3. Single File Commit
+
+## Single File Commit
We begin with an overview of the steps SQLite takes in order to perform an atomic commit of a transaction against a single database file. The details of file formats used to guard against damage from power failures and techniques for performing an atomic commit across multiple databases are discussed in later sections.
-3.1. Initial State
+
+### Initial State
The state of the computer when a database connection is first opened is shown conceptually by the diagram at the right. The area of the diagram on the extreme right (labeled "Disk") represents information stored on the mass storage device. Each rectangle is a sector. The blue color represents that the sectors contain original data. The middle area is the operating systems disk cache. At the onset of our example, the cache is cold and this is represented by leaving the rectangles of the disk cache empty. The left area of the diagram shows the content of memory for the process that is using SQLite. The database connection has just been opened and no information has been read yet, so the user space is empty.
-3.2. Acquiring A Read Lock
+### Acquiring A Read Lock
Before SQLite can write to a database, it must first read the database to see what is there already. Even if it is just appending new data, SQLite still has to read in the database schema from the "sqlite_schema" table so that it can know how to parse the INSERT statements and discover where in the database file the new information should be stored.
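+The schema table consulted here is an ordinary table and can also be queried directly, for example:
+```sql
+SELECT type, name, tbl_name, rootpage, sql FROM sqlite_schema;
+```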
@@ -109,19 +52,19 @@ The first step toward reading from the database file is obtaining a shared lock
Notice that the shared lock is on the operating system disk cache, not on the disk itself. File locks really are just flags within the operating system kernel, usually. (The details depend on the specific OS layer interface.) Hence, the lock will instantly vanish if the operating system crashes or if there is a power loss. It is usually also the case that the lock will vanish if the process that created the lock exits.
-3.3. Reading Information Out Of The Database
+### Reading Information Out Of The Database
After the shared lock is acquired, we can begin reading information from the database file. In this scenario, we are assuming a cold cache, so information must first be read from mass storage into the operating system cache then transferred from operating system cache into user space. On subsequent reads, some or all of the information might already be found in the operating system cache and so only the transfer to user space would be required.
Usually only a subset of the pages in the database file are read. In this example we are showing three pages out of eight being read. In a typical application, a database will have thousands of pages and a query will normally only touch a small percentage of those pages.
-3.4. Obtaining A Reserved Lock
+### Obtaining A Reserved Lock
Before making changes to the database, SQLite first obtains a "reserved" lock on the database file. A reserved lock is similar to a shared lock in that both a reserved lock and shared lock allow other processes to read from the database file. A single reserved lock can coexist with multiple shared locks from other processes. However, there can only be a single reserved lock on the database file. Hence only a single process can be attempting to write to the database at one time.
The idea behind a reserved lock is that it signals that a process intends to modify the database file in the near future but has not yet started to make the modifications. And because the modifications have not yet started, other processes can continue to read from the database. However, no other process should also begin trying to write to the database.
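+In SQLite, a reserved lock can also be requested up front from SQL with BEGIN IMMEDIATE (the inventory table below is hypothetical):
+```sql
+BEGIN IMMEDIATE;                 -- acquire the reserved lock before any writes
+UPDATE inventory SET qty = qty - 1 WHERE part_id = 42;
+COMMIT;                          -- the lock is released once the commit completes
+```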
-3.5. Creating A Rollback Journal File
+### Creating A Rollback Journal File
Prior to making any changes to the database file, SQLite first creates a separate rollback journal file and writes into the rollback journal the original content of the database pages that are to be altered. The idea behind the rollback journal is that it contains all information needed to restore the database back to its original state.
@@ -129,31 +72,31 @@ The rollback journal contains a small header (shown in green in the diagram) tha
When a new file is created, most desktop operating systems (Windows, Linux, Mac OS X) will not actually write anything to disk. The new file is created in the operating systems disk cache only. The file is not created on mass storage until sometime later, when the operating system has a spare moment. This creates the impression to users that I/O is happening much faster than is possible when doing real disk I/O. We illustrate this idea in the diagram to the right by showing that the new rollback journal appears in the operating system disk cache only and not on the disk itself.
-3.6. Changing Database Pages In User Space
+### Changing Database Pages In User Space
After the original page content has been saved in the rollback journal, the pages can be modified in user memory. Each database connection has its own private copy of user space, so the changes that are made in user space are only visible to the database connection that is making the changes. Other database connections still see the information in operating system disk cache buffers which have not yet been changed. And so even though one process is busy modifying the database, other processes can continue to read their own copies of the original database content.
-3.7. Flushing The Rollback Journal File To Mass Storage
+### Flushing The Rollback Journal File To Mass Storage
The next step is to flush the content of the rollback journal file to nonvolatile storage. As we will see later, this is a critical step in insuring that the database can survive an unexpected power loss. This step also takes a lot of time, since writing to nonvolatile storage is normally a slow operation.
This step is usually more complicated than simply flushing the rollback journal to the disk. On most platforms two separate flush (or fsync()) operations are required. The first flush writes out the base rollback journal content. Then the header of the rollback journal is modified to show the number of pages in the rollback journal. Then the header is flushed to disk. The details on why we do this header modification and extra flush are provided in a later section of this paper.
-3.8. Obtaining An Exclusive Lock
+### Obtaining An Exclusive Lock
Prior to making changes to the database file itself, we must obtain an exclusive lock on the database file. Obtaining an exclusive lock is really a two-step process. First SQLite obtains a "pending" lock. Then it escalates the pending lock to an exclusive lock.
A pending lock allows other processes that already have a shared lock to continue reading the database file. But it prevents new shared locks from being established. The idea behind a pending lock is to prevent writer starvation caused by a large pool of readers. There might be dozens, even hundreds, of other processes trying to read the database file. Each process acquires a shared lock before it starts reading, reads what it needs, then releases the shared lock. If, however, there are many different processes all reading from the same database, it might happen that a new process always acquires its shared lock before the previous process releases its shared lock. And so there is never an instant when there are no shared locks on the database file and hence there is never an opportunity for the writer to seize the exclusive lock. A pending lock is designed to prevent that cycle by allowing existing shared locks to proceed but blocking new shared locks from being established. Eventually all shared locks will clear and the pending lock will then be able to escalate into an exclusive lock.
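+The same pending-to-exclusive escalation is available directly from SQL via BEGIN EXCLUSIVE, which attempts to take the exclusive lock at the start of the transaction:
+```sql
+BEGIN EXCLUSIVE;  -- go through the pending stage and take the exclusive lock now
+-- ... reads and writes ...
+COMMIT;
+```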
-3.9. Writing Changes To The Database File
+### Writing Changes To The Database File
Once an exclusive lock is held, we know that no other processes are reading from the database file and it is safe to write changes into the database file. Usually those changes only go as far as the operating systems disk cache and do not make it all the way to mass storage.
-3.10. 0 Flushing Changes To Mass Storage
+### Flushing Changes To Mass Storage
Another flush must occur to make sure that all the database changes are written into nonvolatile storage. This is a critical step to ensure that the database will survive a power loss without damage. However, because of the inherent slowness of writing to disk or flash memory, this step together with the rollback journal file flush in section 3.7 above takes up most of the time required to complete a transaction commit in SQLite.
-3.11. 1 Deleting The Rollback Journal
+### Deleting The Rollback Journal
After the database changes are all safely on the mass storage device, the rollback journal file is deleted. This is the instant where the transaction commits. If a power failure or system crash occurs prior to this point, then recovery processes to be described later make it appear as if no changes were ever made to the database file. If a power failure or system crash occurs after the rollback journal is deleted, then it appears as if all changes have been written to disk. Thus, SQLite gives the appearance of having made no changes to the database file or having made the complete set of changes to the database file depending on whether or not the rollback journal file exists.
@@ -163,63 +106,65 @@ The existence of a transaction depends on whether or not the rollback journal fi
The act of deleting a file is expensive on many systems. As an optimization, SQLite can be configured to truncate the journal file to zero bytes in length or overwrite the journal file header with zeros. In either case, the resulting journal file is no longer capable of rolling back and so the transaction still commits. Truncating a file to zero length, like deleting a file, is assumed to be an atomic operation from the point of view of a user process. Overwriting the header of the journal with zeros is not atomic, but if any part of the header is malformed the journal will not roll back. Hence, one can say that the commit occurs as soon as the header is sufficiently changed to make it invalid. Typically this happens as soon as the first byte of the header is zeroed.
-3.12. 2 Releasing The Lock
+### Releasing The Lock
The last step in the commit process is to release the exclusive lock so that other processes can once again start accessing the database file.
In the diagram at the right, we show that the information that was held in user space is cleared when the lock is released. This used to be literally true for older versions of SQLite. But more recent versions of SQLite keep the user space information in memory in case it might be needed again at the start of the next transaction. It is cheaper to reuse information that is already in local memory than to transfer the information back from the operating system disk cache or to read it off of the disk drive again. Prior to reusing the information in user space, we must first reacquire the shared lock and then we have to check to make sure that no other process modified the database file while we were not holding a lock. There is a counter in the first page of the database that is incremented every time the database file is modified. We can find out if another process has modified the database by checking that counter. If the database was modified, then the user space cache must be cleared and reread. But it is commonly the case that no changes have been made and the user space cache can be reused for a significant performance savings.
-4. Rollback
+## Rollback
An atomic commit is supposed to happen instantaneously. But the processing described above clearly takes a finite amount of time. Suppose the power to the computer were cut part way through the commit operation described above. In order to maintain the illusion that the changes were instantaneous, we have to "rollback" any partial changes and restore the database to the state it was in prior to the beginning of the transaction.
-4.1. When Something Goes Wrong...
+
+### When Something Goes Wrong...
Suppose the power loss occurred during step 3.10 above, while the database changes were being written to disk. After power is restored, the situation might be something like what is shown to the right. We were trying to change three pages of the database file but only one page was successfully written. Another page was partially written and a third page was not written at all.
The rollback journal is complete and intact on disk when the power is restored. This is a key point. The reason for the flush operation in step 3.7 is to make absolutely sure that all of the rollback journal is safely on nonvolatile storage prior to making any changes to the database file itself.
-4.2. Hot Rollback Journals
+### Hot Rollback Journals
The first time that any SQLite process attempts to access the database file, it obtains a shared lock as described in section 3.2 above. But then it notices that there is a rollback journal file present. SQLite then checks to see if the rollback journal is a "hot journal". A hot journal is a rollback journal that needs to be played back in order to restore the database to a sane state. A hot journal only exists when an earlier process was in the middle of committing a transaction when it crashed or lost power.
A rollback journal is a "hot" journal if all of the following are true:
- The rollback journal exists.
- The rollback journal is not an empty file.
- There is no reserved lock on the main database file.
- The header of the rollback journal is well-formed and in particular has not been zeroed out.
- The rollback journal does not contain the name of a super-journal file (see section 5.5 below) or if does contain the name of a super-journal, then that super-journal file exists.
+- The rollback journal exists.
+- The rollback journal is not an empty file.
+- There is no reserved lock on the main database file.
+- The header of the rollback journal is well-formed and in particular has not been zeroed out.
+- The rollback journal does not contain the name of a super-journal file (see section 5.5 below) or, if it does contain the name of a super-journal, then that super-journal file exists.
The presence of a hot journal is our indication that a previous process was trying to commit a transaction but it aborted for some reason prior to the completion of the commit. A hot journal means that the database file is in an inconsistent state and needs to be repaired (by rollback) prior to being used.
-4.3. Obtaining An Exclusive Lock On The Database
+### Obtaining An Exclusive Lock On The Database
The first step toward dealing with a hot journal is to obtain an exclusive lock on the database file. This prevents two or more processes from trying to rollback the same hot journal at the same time.
-4.4. Rolling Back Incomplete Changes
+### Rolling Back Incomplete Changes
Once a process obtains an exclusive lock, it is permitted to write to the database file. It then proceeds to read the original content of pages out of the rollback journal and write that content back to where it came from in the database file. Recall that the header of the rollback journal records the original size of the database file prior to the start of the aborted transaction. SQLite uses this information to truncate the database file back to its original size in cases where the incomplete transaction caused the database to grow. At the end of this step, the database should be the same size and contain the same information as it did before the start of the aborted transaction.
-4.5. Deleting The Hot Journal
+### Deleting The Hot Journal
After all information in the rollback journal has been played back into the database file (and flushed to disk in case we encounter yet another power failure), the hot rollback journal can be deleted.
As in section 3.11, the journal file might be truncated to zero length or its header might be overwritten with zeros as an optimization on systems where deleting a file is expensive. Either way, the journal is no longer hot after this step.
-4.6. Continue As If The Uncompleted Writes Had Never Happened
+### Continue As If The Uncompleted Writes Had Never Happened
The final recovery step is to reduce the exclusive lock back to a shared lock. Once this happens, the database is back in the state that it would have been if the aborted transaction had never started. Since all of this recovery activity happens completely automatically and transparently, it appears to the program using SQLite as if the aborted transaction had never begun.
-5. Multi-file Commit
+## Multi-file Commit
SQLite allows a single database connection to talk to two or more database files simultaneously through the use of the ATTACH DATABASE command. When multiple database files are modified within a single transaction, all files are updated atomically. In other words, either all of the database files are updated or else none of them are. Achieving an atomic commit across multiple database files is more complex than doing so for a single file. This section describes how SQLite works that bit of magic.
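+A sketch of such a transaction, using hypothetical tables spread across two database files:
+```sql
+ATTACH DATABASE 'archive.db' AS archive;
+BEGIN;
+INSERT INTO archive.history SELECT * FROM main.queue WHERE done = 1;
+DELETE FROM main.queue WHERE done = 1;
+COMMIT;  -- both files commit together, or neither does
+```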
-5.1. Separate Rollback Journals For Each Database
+
+### Separate Rollback Journals For Each Database
When multiple database files are involved in a transaction, each database has its own rollback journal and each database is locked separately. The diagram at the right shows a scenario where three different database files have been modified within one transaction. The situation at this step is analogous to the single-file transaction scenario at step 3.6. Each database file has a reserved lock. For each database, the original content of pages that are being changed have been written into the rollback journal for that database, but the content of the journals have not yet been flushed to disk. No changes have been made to the database file itself yet, though presumably there are changes being held in user memory.
For brevity, the diagrams in this section are simplified from those that came before. Blue color still signifies original content and pink still signifies new content. But the individual pages in the rollback journal and the database file are not shown and we are not making the distinction between information in the operating system cache and information that is on disk. All of these factors still apply in a multi-file commit scenario. They just take up a lot of space in the diagrams and they do not add any new information, so they are omitted here.
-5.2. The Super-Journal File
+### The Super-Journal File
The next step in a multi-file commit is the creation of a "super-journal" file. The name of the super-journal file is the same name as the original database filename (the database that was opened using the sqlite3_open() interface, not one of the ATTACHed auxiliary databases) with the text "-mjHHHHHHHH" appended where HHHHHHHH is a random 32-bit hexadecimal number. The random HHHHHHHH suffix changes for every new super-journal.
@@ -230,7 +175,8 @@ Unlike the rollback journals, the super-journal does not contain any original da
After the super-journal is constructed, its content is flushed to disk before any further actions are taken. On Unix, the directory that contains the super-journal is also synced in order to make sure the super-journal file will appear in the directory following a power failure.
The purpose of the super-journal is to ensure that multi-file transactions are atomic across a power-loss. But if the database files have other settings that compromise integrity across a power-loss event (such as PRAGMA synchronous=OFF or PRAGMA journal_mode=MEMORY) then the creation of the super-journal is omitted, as an optimization.
-5.3. Updating Rollback Journal Headers
+
+### Updating Rollback Journal Headers
The next step is to record the full pathname of the super-journal file in the header of every rollback journal. Space to hold the super-journal filename was reserved at the beginning of each rollback journal as the rollback journals were created.
@@ -238,32 +184,35 @@ The content of each rollback journal is flushed to disk both before and after th
This step is analogous to step 3.7 in the single-file commit scenario described above.
-5.4. Updating The Database Files
+### Updating The Database Files
Once all rollback journal files have been flushed to disk, it is safe to begin updating database files. We have to obtain an exclusive lock on all database files before writing the changes. After all the changes are written, it is important to flush the changes to disk so that they will be preserved in the event of a power failure or operating system crash.
This step corresponds to steps 3.8, 3.9, and 3.10 in the single-file commit scenario described previously.
-5.5. Delete The Super-Journal File
+### Delete The Super-Journal File
The next step is to delete the super-journal file. This is the point where the multi-file transaction commits. This step corresponds to step 3.11 in the single-file commit scenario where the rollback journal is deleted.
If a power failure or operating system crash occurs at this point, the transaction will not rollback when the system reboots even though there are rollback journals present. The difference is the super-journal pathname in the header of the rollback journal. Upon restart, SQLite only considers a journal to be hot and will only playback the journal if there is no super-journal filename in the header (which is the case for a single-file commit) or if the super-journal file still exists on disk.
-5.6. Clean Up The Rollback Journals
+### Clean Up The Rollback Journals
The final step in a multi-file commit is to delete the individual rollback journals and drop the exclusive locks on the database files so that other processes can see the changes. This corresponds to step 3.12 in the single-file commit sequence.
The transaction has already committed at this point so timing is not critical in the deletion of the rollback journals. The current implementation deletes a single rollback journal then unlocks the corresponding database file before proceeding to the next rollback journal. But in the future we might change this so that all rollback journals are deleted before any database files are unlocked. As long as the rollback journal is deleted before its corresponding database file is unlocked it does not matter in what order the rollback journals are deleted or the database files are unlocked.
-6. Additional Details Of The Commit Process
+
+## Additional Details Of The Commit Process
Section 3.0 above provides an overview of how atomic commit works in SQLite. But it glosses over a number of important details. The following subsections will attempt to fill in the gaps.
-6.1. Always Journal Complete Sectors
+
+### Always Journal Complete Sectors
When the original content of a database page is written into the rollback journal (as shown in section 3.5), SQLite always writes a complete sector of data, even if the page size of the database is smaller than the sector size. Historically, the sector size in SQLite has been hard coded to 512 bytes and since the minimum page size is also 512 bytes, this has never been an issue. But beginning with SQLite version 3.3.14, it is possible for SQLite to use mass storage devices with a sector size larger than 512 bytes. So, beginning with version 3.3.14, whenever any page within a sector is written into the journal file, all pages in that same sector are stored with it.
It is important to store all pages of a sector in the rollback journal in order to prevent database corruption following a power loss while writing the sector. Suppose that pages 1, 2, 3, and 4 are all stored in sector 1 and that page 2 is modified. In order to write the changes to page 2, the underlying hardware must also rewrite the content of pages 1, 3, and 4 since the hardware must write the complete sector. If this write operation is interrupted by a power outage, one or more of the pages 1, 3, or 4 might be left with incorrect data. Hence, to avoid lasting corruption to the database, the original content of all of those pages must be contained in the rollback journal.
-6.2. Dealing With Garbage Written Into Journal Files
+
+### Dealing With Garbage Written Into Journal Files
When data is appended to the end of the rollback journal, SQLite normally makes the pessimistic assumption that the file is first extended with invalid "garbage" data and that afterwards the correct data replaces the garbage. In other words, SQLite assumes that the file size is increased first and then afterwards the content is written into the file. If a power failure occurs after the file size has been increased but before the file content has been written, the rollback journal can be left containing garbage data. If after power is restored, another SQLite process sees the rollback journal containing the garbage data and tries to roll it back into the original database file, it might copy some of the garbage into the database file and thus corrupt the database file.
@@ -271,47 +220,55 @@ SQLite uses two defenses against this problem. In the first place, SQLite record
The previous paragraph describes what happens when the synchronous pragma setting is "full".
- PRAGMA synchronous=FULL;
+```sql
+PRAGMA synchronous=FULL;
+```
The default synchronous setting is full so the above is what usually happens. However, if the synchronous setting is lowered to "normal", SQLite only flushes the rollback journal once, after the page count has been written. This carries a risk of corruption because it might happen that the modified (non-zero) page count reaches the disk surface before all of the data does. The data will have been written first, but SQLite assumes that the underlying filesystem can reorder write requests and that the page count can be burned into oxide first even though its write request occurred last. So as a second line of defense, SQLite also uses a 32-bit checksum on every page of data in the rollback journal. This checksum is evaluated for each page while rolling back a journal, as described in section 4.4. If an incorrect checksum is seen, the rollback is abandoned. Note that the checksum does not guarantee that the page data is correct since there is a small but finite probability that the checksum might be right even if the data is corrupt. But the checksum does at least make such an error unlikely.
Note that the checksums in the rollback journal are not necessary if the synchronous setting is FULL. We only depend on the checksums when synchronous is lowered to NORMAL. Nevertheless, the checksums never hurt and so they are included in the rollback journal regardless of the synchronous setting.
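+For reference, the lowered setting discussed above is selected the same way:
+```sql
+PRAGMA synchronous=NORMAL;  -- one journal flush per commit; rollback then relies on the page checksums
+```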
-6.3. Cache Spill Prior To Commit
+
+### Cache Spill Prior To Commit
The commit process shown in section 3.0 assumes that all database changes fit in memory until it is time to commit. This is the common case. But sometimes a larger change will overflow the user-space cache prior to transaction commit. In those cases, the cache must spill to the database before the transaction is complete.
At the beginning of a cache spill, the status of the database connection is as shown in step 3.6. Original page content has been saved in the rollback journal and modifications of the pages exist in user memory. To spill the cache, SQLite executes steps 3.7 through 3.9. In other words, the rollback journal is flushed to disk, an exclusive lock is acquired, and changes are written into the database. But the remaining steps are deferred until the transaction really commits. A new journal header is appended to the end of the rollback journal (in its own sector) and the exclusive database lock is retained, but otherwise processing returns to step 3.6. When the transaction commits, or if another cache spill occurs, steps 3.7 and 3.9 are repeated. (Step 3.8 is omitted on second and subsequent passes since an exclusive database lock is already held due to the first pass.)
A cache spill causes the lock on the database file to escalate from reserved to exclusive. This reduces concurrency. A cache spill also causes extra disk flush or fsync operations to occur and these operations are slow, hence a cache spill can seriously reduce performance. For these reasons a cache spill is avoided whenever possible.
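+SQLite gives applications some control over this behavior through pragmas; a sketch, assuming the same pragmas are carried forward:
+```sql
+PRAGMA cache_size = -8000;   -- allow roughly 8000 KiB of page cache
+PRAGMA cache_spill = false;  -- do not spill mid-transaction; hold dirty pages until commit
+```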
-7. Optimizations
+
+## Optimizations
Profiling indicates that for most systems and in most circumstances SQLite spends most of its time doing disk I/O. It follows then that anything we can do to reduce the amount of disk I/O will likely have a large positive impact on the performance of SQLite. This section describes some of the techniques used by SQLite to try to reduce the amount of disk I/O to a minimum while still preserving atomic commit.
-7.1. Cache Retained Between Transactions
+
+### Cache Retained Between Transactions
Step 3.12 of the commit process shows that once the shared lock has been released, all user-space cache images of database content must be discarded. This is done because without a shared lock, other processes are free to modify the database file content and so any user-space image of that content might become obsolete. Consequently, each new transaction would begin by rereading data which had previously been read. This is not as bad as it sounds at first since the data being read is still likely in the operating systems file cache. So the "read" is really just a copy of data from kernel space into user space. But even so, it still takes time.
Beginning with SQLite version 3.3.14 a mechanism has been added to try to reduce the needless rereading of data. In newer versions of SQLite, the data in the user-space pager cache is retained when the lock on the database file is released. Later, after the shared lock is acquired at the beginning of the next transaction, SQLite checks to see if any other process has modified the database file. If the database has been changed in any way since the lock was last released, the user-space cache is erased at that point. But commonly the database file is unchanged and the user-space cache can be retained, and some unnecessary read operations can be avoided.
In order to determine whether or not the database file has changed, SQLite uses a counter in the database header (in bytes 24 through 27) which is incremented during every change operation. SQLite saves a copy of this counter prior to releasing its database lock. Then after acquiring the next database lock it compares the saved counter value against the current counter value and erases the cache if the values are different, or reuses the cache if they are the same.
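+SQLite also exposes a related change indicator to applications; a connection can poll it to learn whether some other connection has committed changes:
+```sql
+PRAGMA data_version;  -- the returned value changes when another connection modifies the database
+```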
-7.2. Exclusive Access Mode
+
+### Exclusive Access Mode
SQLite version 3.3.14 adds the concept of "Exclusive Access Mode". In exclusive access mode, SQLite retains the exclusive database lock at the conclusion of each transaction. This prevents other processes from accessing the database, but in many deployments only a single process is using a database so this is not a serious problem. The advantage of exclusive access mode is that disk I/O can be reduced in three ways:
- It is not necessary to increment the change counter in the database header for transactions after the first transaction. This will often save a write of page one to both the rollback journal and the main database file.
+- It is not necessary to increment the change counter in the database header for transactions after the first transaction. This will often save a write of page one to both the rollback journal and the main database file.
- No other processes can change the database so there is never a need to check the change counter and clear the user-space cache at the beginning of a transaction.
+- No other processes can change the database so there is never a need to check the change counter and clear the user-space cache at the beginning of a transaction.
- Each transaction can be committed by overwriting the rollback journal header with zeros rather than deleting the journal file. This avoids having to modify the directory entry for the journal file and it avoids having to deallocate disk sectors associated with the journal. Furthermore, the next transaction will overwrite existing journal file content rather than append new content and on most systems overwriting is much faster than appending.
+- Each transaction can be committed by overwriting the rollback journal header with zeros rather than deleting the journal file. This avoids having to modify the directory entry for the journal file and it avoids having to deallocate disk sectors associated with the journal. Furthermore, the next transaction will overwrite existing journal file content rather than append new content and on most systems overwriting is much faster than appending.
The third optimization, zeroing the journal file header rather than deleting the rollback journal file, does not depend on holding an exclusive lock at all times. This optimization can be set independently of exclusive lock mode using the journal_mode pragma as described in section 7.6 below.
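+In SQLite, exclusive access mode is selected with the locking_mode pragma (a sketch, assuming the same interface):
+```sql
+PRAGMA locking_mode = EXCLUSIVE;  -- the exclusive lock is taken on the next access and retained after COMMIT
+-- ... run transactions without re-acquiring locks ...
+PRAGMA locking_mode = NORMAL;     -- revert; the lock is released the next time the database file is accessed
+```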
-7.3. Do Not Journal Freelist Pages
+
+### Do Not Journal Freelist Pages
When information is deleted from an SQLite database, the pages used to hold the deleted information are added to a "freelist". Subsequent inserts will draw pages off of this freelist rather than expanding the database file.
Some freelist pages contain critical data; specifically the locations of other freelist pages. But most freelist pages contain nothing useful. These latter freelist pages are called "leaf" pages. We are free to modify the content of a leaf freelist page in the database without changing the meaning of the database in any way.
Because the content of leaf freelist pages is unimportant, SQLite avoids storing leaf freelist page content in the rollback journal in step 3.5 of the commit process. If a leaf freelist page is changed and that change does not get rolled back during a transaction recovery, the database is not harmed by the omission. Similarly, the content of a new freelist page is never written back into the database at step 3.9 nor read from the database at step 3.3. These optimizations can greatly reduce the amount of I/O that occurs when making changes to a database file that contains free space.
-7.4. Single Page Updates And Atomic Sector Writes
+
+### Single Page Updates And Atomic Sector Writes
Beginning in SQLite version 3.5.0, the new Virtual File System (VFS) interface contains a method named xDeviceCharacteristics which reports on special properties that the underlying mass storage device might have. Among the special properties that xDeviceCharacteristics might report is the ability to do an atomic sector write.
@@ -320,67 +277,84 @@ Recall that by default SQLite assumes that sector writes are linear but not atom
We believe that most modern disk drives implement atomic sector writes. When power is lost, the drive uses energy stored in capacitors and/or the angular momentum of the disk platter to provide power to complete any operation in progress. Nevertheless, there are so many layers in between the write system call and the on-board disk drive electronics that we take the safe approach in both Unix and w32 VFS implementations and assume that sector writes are not atomic. On the other hand, device manufacturers with more control over their filesystems might want to consider enabling the atomic write property of xDeviceCharacteristics if their hardware really does do atomic writes.
When sector writes are atomic and the page size of a database is the same as a sector size, and when there is a database change that only touches a single database page, then SQLite skips the whole journaling and syncing process and simply writes the modified page directly into the database file. The change counter in the first page of the database file is modified separately since no harm is done if power is lost before the change counter can be updated.
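+This optimization depends on the database page size matching the sector size; a sketch for a device with 4096-byte atomic sectors:
+```sql
+PRAGMA page_size = 4096;  -- takes effect for a new database, or...
+VACUUM;                   -- ...rebuild an existing database so the new page size applies
+```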
-7.5. Filesystems With Safe Append Semantics
+
+### Filesystems With Safe Append Semantics
Another optimization introduced in SQLite version 3.5.0 makes use of "safe append" behavior of the underlying disk. Recall that SQLite assumes that when data is appended to a file (specifically to the rollback journal) that the size of the file is increased first and that the content is written second. So if power is lost after the file size is increased but before the content is written, the file is left containing invalid "garbage" data. The xDeviceCharacteristics method of the VFS might, however, indicate that the filesystem implements "safe append" semantics. This means that the content is written before the file size is increased so that it is impossible for garbage to be introduced into the rollback journal by a power loss or system crash.
When safe append semantics are indicated for a filesystem, SQLite always stores the special value of -1 for the page count in the header of the rollback journal. The -1 page count value tells any process attempting to rollback the journal that the number of pages in the journal should be computed from the journal size. This -1 value is never changed. So that when a commit occurs, we save a single flush operation and a sector write of the first page of the journal file. Furthermore, when a cache spill occurs we no longer need to append a new journal header to the end of the journal; we can simply continue appending new pages to the end of the existing journal.
-7.6. Persistent Rollback Journals
+
+### Persistent Rollback Journals
Deleting a file is an expensive operation on many systems. So as an optimization, SQLite can be configured to avoid the delete operation of section 3.11. Instead of deleting the journal file in order to commit a transaction, the file is either truncated to zero bytes in length or its header is overwritten with zeros. Truncating the file to zero length saves having to make modifications to the directory containing the file since the file is not removed from the directory. Overwriting the header has the additional savings of not having to update the length of the file (in the "inode" on many systems) and not having to deal with newly freed disk sectors. Furthermore, at the next transaction the journal will be created by overwriting existing content rather than appending new content onto the end of a file, and overwriting is often much faster than appending.
SQLite can be configured to commit transactions by overwriting the journal header with zeros instead of deleting the journal file by setting the "PERSIST" journaling mode using the journal_mode PRAGMA. For example:
- PRAGMA journal_mode=PERSIST;
+```sql
+PRAGMA journal_mode=PERSIST;
+```
The use of persistent journal mode provides a noticeable performance improvement on many systems. Of course, the drawback is that the journal files remain on the disk, using disk space and cluttering directories, long after the transaction commits. The only safe way to delete a persistent journal file is to commit a transaction with journaling mode set to DELETE:
- PRAGMA journal_mode=DELETE;
- BEGIN EXCLUSIVE;
- COMMIT;
+```sql
+PRAGMA journal_mode=DELETE;
+BEGIN EXCLUSIVE;
+COMMIT;
+```
Beware of deleting persistent journal files by any other means since the journal file might be hot, in which case deleting it will corrupt the corresponding database file.
Beginning in SQLite version 3.6.4 (2008-10-15), the TRUNCATE journal mode is also supported:
- PRAGMA journal_mode=TRUNCATE;
+```sql
+PRAGMA journal_mode=TRUNCATE;
+```
In truncate journal mode, the transaction is committed by truncating the journal file to zero length rather than deleting the journal file (as in DELETE mode) or by zeroing the header (as in PERSIST mode). TRUNCATE mode shares the advantage of PERSIST mode that the directory that contains the journal file and database does not need to be updated. Hence truncating a file is often faster than deleting it. TRUNCATE has the additional advantage that it is not followed by a system call (ex: fsync()) to synchronize the change to disk. It might be safer if it did. But on many modern filesystems, a truncate is an atomic and synchronous operation and so we think that TRUNCATE will usually be safe in the face of power failures. If you are uncertain about whether or not TRUNCATE will be synchronous and atomic on your filesystem and it is important to you that your database survive a power loss or operating system crash that occurs during the truncation operation, then you might consider using a different journaling mode.
On embedded systems with synchronous filesystems, TRUNCATE results in slower behavior than PERSIST. The commit operation is the same speed. But subsequent transactions are slower following a TRUNCATE because it is faster to overwrite existing content than to append to the end of a file. New journal file entries will always be appended following a TRUNCATE but will usually overwrite with PERSIST.
-8. Testing Atomic Commit Behavior
+
+## Testing Atomic Commit Behavior
The developers of SQLite are confident that it is robust in the face of power failures and system crashes because the automatic test procedures do extensive checks on the ability of SQLite to recover from simulated power loss. We call these the "crash tests".
Crash tests in SQLite use a modified VFS that can simulate the kinds of filesystem damage that occur during a power loss or operating system crash. The crash-test VFS can simulate incomplete sector writes, pages filled with garbage data because a write has not completed, and out of order writes, all occurring at varying points during a test scenario. Crash tests execute transactions over and over, varying the time at which a simulated power loss occurs and the properties of the damage inflicted. Each test then reopens the database after the simulated crash and verifies that the transaction either occurred completely or not at all and that the database is in a completely consistent state.
The crash tests in SQLite have discovered a number of very subtle bugs (now fixed) in the recovery mechanism. Some of these bugs were very obscure and unlikely to have been found using only code inspection and analysis techniques. From this experience, the developers of SQLite feel confident that any other database system that does not use a similar crash test system likely contains undetected bugs that will lead to database corruption following a system crash or power failure.
-9. Things That Can Go Wrong
+
+## Things That Can Go Wrong
The atomic commit mechanism in SQLite has proven to be robust, but it can be circumvented by a sufficiently creative adversary or a sufficiently broken operating system implementation. This section describes a few of the ways in which an SQLite database might be corrupted by a power failure or system crash. (See also: How To Corrupt Your Database Files.)
-9.1. Broken Locking Implementations
+
+### Broken Locking Implementations
SQLite uses filesystem locks to make sure that only one process and database connection is trying to modify the database at a time. The filesystem locking mechanism is implemented in the VFS layer and is different for every operating system. SQLite depends on this implementation being correct. If something goes wrong and two or more processes are able to write the same database file at the same time, severe damage can result.
We have received reports of implementations of both Windows network filesystems and NFS in which locking was subtly broken. We can not verify these reports, but as locking is difficult to get right on a network filesystem we have no reason to doubt them. You are advised to avoid using SQLite on a network filesystem in the first place, since performance will be slow. But if you must use a network filesystem to store SQLite database files, consider using a secondary locking mechanism to prevent simultaneous writes to the same database even if the native filesystem locking mechanism malfunctions.
The versions of SQLite that come preinstalled on Apple Mac OS X computers contain a version of SQLite that has been extended to use alternative locking strategies that work on all network filesystems that Apple supports. These extensions used by Apple work great as long as all processes are accessing the database file in the same way. Unfortunately, the locking mechanisms do not exclude one another, so if one process is accessing a file using (for example) AFP locking and another process (perhaps on a different machine) is using dot-file locks, the two processes might collide because AFP locks do not exclude dot-file locks or vice versa.
-9.2. Incomplete Disk Flushes
+
+### Incomplete Disk Flushes
SQLite uses the fsync() system call on Unix and the FlushFileBuffers() system call on w32 in order to sync the file system buffers onto disk oxide as shown in step 3.7 and step 3.10. Unfortunately, we have received reports that neither of these interfaces works as advertised on many systems. We hear that FlushFileBuffers() can be completely disabled using registry settings on some Windows versions. Some historical versions of Linux contain versions of fsync() which are no-ops on some filesystems, we are told. Even on systems where FlushFileBuffers() and fsync() are said to be working, often the IDE disk controller lies and says that data has reached oxide while it is still held only in the volatile controller cache.
On the Mac, you can set this pragma:
- PRAGMA fullfsync=ON;
+```sql
+PRAGMA fullfsync=ON;
+```
Setting fullfsync on a Mac will guarantee that data really does get pushed out to the disk platter on a flush. But the implementation of fullfsync involves resetting the disk controller. And so not only is it profoundly slow, it also slows down other unrelated disk I/O. So its use is not recommended.
-9.3. Partial File Deletions
+
+### Partial File Deletions
SQLite assumes that file deletion is an atomic operation from the point of view of a user process. If power fails in the middle of a file deletion, then after power is restored SQLite expects to see either the entire file with all of its original data intact, or it expects not to find the file at all. Transactions may not be atomic on systems that do not work this way.
-9.4. Garbage Written Into Files
+
+### Garbage Written Into Files
SQLite database files are ordinary disk files that can be opened and written by ordinary user processes. A rogue process can open an SQLite database and fill it with corrupt data. Corrupt data might also be introduced into an SQLite database by bugs in the operating system or disk controller; especially bugs triggered by a power failure. There is nothing SQLite can do to defend against these kinds of problems.
-9.5. Deleting Or Renaming A Hot Journal
+
+### Deleting Or Renaming A Hot Journal
If a crash or power loss does occur and a hot journal is left on the disk, it is essential that the original database file and the hot journal remain on disk with their original names until the database file is opened by another SQLite process and rolled back. During recovery at step 4.2 SQLite locates the hot journal by looking for a file in the same directory as the database being opened and whose name is derived from the name of the file being opened. If either the original database file or the hot journal have been moved or renamed, then the hot journal will not be seen and the database will not be rolled back.
@@ -389,10 +363,9 @@ We suspect that a common failure mode for SQLite recovery happens like this: A p
If there are multiple (hard or symbolic) links to a database file, the journal will be created using the name of the link through which the file was opened. If a crash occurs and the database is opened again using a different link, the hot journal will not be located and no rollback will occur.
Sometimes a power failure will cause a filesystem to be corrupted such that recently changed filenames are forgotten and the file is moved into a "/lost+found" directory. When that happens, the hot journal will not be found and recovery will not occur. SQLite tries to prevent this by opening and syncing the directory containing the rollback journal at the same time it syncs the journal file itself. However, the movement of files into /lost+found can be caused by unrelated processes creating unrelated files in the same directory as the main database file. And since this is out from under the control of SQLite, there is nothing that SQLite can do to prevent it. If you are running on a system that is vulnerable to this kind of filesystem namespace corruption (most modern journalling filesystems are immune, we believe) then you might want to consider putting each SQLite database file in its own private subdirectory.
-10. Future Directions And Conclusion
+
+## Future Directions And Conclusion
Every now and then someone discovers a new failure mode for the atomic commit mechanism in SQLite and the developers have to put in a patch. This is happening less and less and the failure modes are becoming more and more obscure. But it would still be foolish to suppose that the atomic commit logic of SQLite is entirely bug-free. The developers are committed to fixing these bugs as quickly as they might be found.
The developers are also on the lookout for new ways to optimize the commit mechanism. The current VFS implementations for Unix (Linux and Mac OS X) and Windows make pessimistic assumptions about the behavior of those systems. After consultation with experts on how these systems work, we might be able to relax some of the assumptions on these systems and allow them to run faster. In particular, we suspect that most modern filesystems exhibit the safe append property and that many of them might support atomic sector writes. But until this is known for certain, SQLite will take the conservative approach and assume the worst.
-
-This page last modified on 2022-12-31 21:51:03 UTC
diff --git a/design/VIRTUALMACHINE.md b/design/VIRTUALMACHINE.md
index be939e6..7cbdbb9 100644
--- a/design/VIRTUALMACHINE.md
+++ b/design/VIRTUALMACHINE.md
@@ -1,4 +1,6 @@
-# The EpilogLite Bytecode Engine
+# The EpilogLite Bytecode Engine
+
+status: draft
## Executive Summary
@@ -42,13 +44,13 @@ The ResultRow opcode causes the bytecode engine to pause and the corresponding E
Every bytecode program has a fixed (but potentially large) number of registers. A single register can hold a variety of objects:
-- A NULL value
-- A signed 64-bit integer
-- An IEEE double-precision (64-bit) floating point number
-- An arbitrary length string
-- An arbitrary length BLOB
-- A RowSet object (See the RowSetAdd, RowSetRead, and RowSetTest opcodes)
-- A Frame object (Used by subprograms - see Program)
+- A NULL value
+- A signed 64-bit integer
+- An IEEE double-precision (64-bit) floating point number
+- An arbitrary length string
+- An arbitrary length BLOB
+- A RowSet object (See the RowSetAdd, RowSetRead, and RowSetTest opcodes)
+- A Frame object (Used by subprograms - see Program)
A register can also be "Undefined" meaning that it holds no value at all. Undefined is different from NULL. Depending on compile-time options, an attempt to read an undefined register will usually cause a run-time error. If the code generator (EpilogLite3_prepare_v2()) ever generates a prepared statement that reads an Undefined register, that is a bug in the code generator.
@@ -56,11 +58,11 @@ Registers are numbered beginning with 0. Most opcodes refer to at least one regi
The number of registers in a single prepared statement is fixed at compile-time. The content of all registers is cleared when a prepared statement is reset or finalized.
-The internal Mem object stores the value for a single register. The abstract EpilogLite3_value object that is exposed in the API is really just a Mem object or register.
+The internal Mem object stores the value for a single register. The abstract EpilogLite3_value object that is exposed in the API is really just a Mem object or register.
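
To make the register model concrete, here is a minimal Rust sketch of the kinds of values a register can hold, including the distinction between Undefined and NULL. The type and variant names are illustrative assumptions for this document only; they are not the actual Mem implementation, which carries considerably more state (encoding, flags, memory management).

```rust
/// Illustrative sketch only: the values a bytecode register might hold.
/// The real Mem object tracks much more (flags, encoding, allocation).
#[derive(Debug, Clone, PartialEq)]
enum RegisterValue {
    Undefined,        // holds no value at all; reading this is a code-generator bug
    Null,             // SQL NULL, which is a real value
    Integer(i64),     // signed 64-bit integer
    Real(f64),        // IEEE double-precision floating point
    Text(String),     // arbitrary length string
    Blob(Vec<u8>),    // arbitrary length BLOB
    // RowSet and Frame objects are omitted from this sketch.
}

fn main() {
    // The register file is a fixed-size array; every register starts out Undefined.
    let mut registers = vec![RegisterValue::Undefined; 4];
    registers[1] = RegisterValue::Null;         // e.g. "Null 0 1": r[1]=NULL
    registers[3] = RegisterValue::Integer(20);  // e.g. "Integer 20 3": r[3]=20
    assert_ne!(registers[0], registers[1]);     // Undefined is not the same as NULL
    println!("{registers:?}");
}
```

In this sketch, clearing the registers when a statement is reset or finalized would simply mean setting every element back to Undefined.
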
## B-Tree Cursors
-A prepared statement can have zero or more open cursors. Each cursor is identified by a small integer, which is usually the P1 parameter to the opcode that uses the cursor. There can be multiple cursors open on the same index or table. All cursors operate independently, even cursors pointing to the same indices or tables. The only way for the virtual machine to interact with a database file is through a cursor. Instructions in the virtual machine can create a new cursor (ex: OpenRead or OpenWrite), read data from a cursor (Column), advance the cursor to the next entry in the table (ex: Next or Prev), and so forth. All cursors are automatically closed when the prepared statement is reset or finalized.
+A prepared statement can have zero or more open cursors. Each cursor is identified by a small integer, which is usually the P1 parameter to the opcode that uses the cursor. There can be multiple cursors open on the same index or table. All cursors operate independently, even cursors pointing to the same indices or tables. The only way for the virtual machine to interact with a database file is through a cursor. Instructions in the virtual machine can create a new cursor (ex: OpenRead or OpenWrite), read data from a cursor (Column), advance the cursor to the next entry in the table (ex: Next or Prev), and so forth. All cursors are automatically closed when the prepared statement is reset or finalized.
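
As a rough illustration of the cursor model (hypothetical types, not EpilogLite's b-tree code), the sketch below keys cursors by the small integer that plays the role of P1 and shows that two cursors on the same table advance independently:

```rust
use std::collections::HashMap;

/// Toy stand-in for a b-tree cursor: just a position in an in-memory table.
struct Cursor {
    table: &'static [(&'static str, i64)],
    pos: usize,
}

impl Cursor {
    fn column_text(&self) -> &'static str { self.table[self.pos].0 }
    fn next(&mut self) { self.pos += 1; } // real cursors also report end-of-table
}

fn main() {
    static TBL1: &[(&str, i64)] = &[("a", 1), ("b", 2)];

    // Cursors are identified by small integers (the P1 operand of the opcode
    // that uses them) and operate independently, even on the same table.
    let mut cursors: HashMap<u32, Cursor> = HashMap::new();
    cursors.insert(0, Cursor { table: TBL1, pos: 0 }); // e.g. OpenRead with P1=0
    cursors.insert(1, Cursor { table: TBL1, pos: 0 }); // a second, independent cursor

    cursors.get_mut(&0).unwrap().next();               // advancing cursor 0 ...
    assert_eq!(cursors[&0].column_text(), "b");
    assert_eq!(cursors[&1].column_text(), "a");        // ... leaves cursor 1 where it was

    // Resetting or finalizing the statement closes every cursor; here that is
    // simply dropping the map.
    drop(cursors);
}
```
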
## Subroutines, Coroutines, and Subprograms
@@ -70,11 +72,11 @@ The Gosub opcode stores the current program counter into register P1 then jumps
The Yield opcode swaps the value of the program counter with the integer value in register P1. This opcode is used to implement coroutines. Coroutines are often used to implement subqueries from which content is pulled on an as-needed basis.
-Triggers need to be reentrant. Since bytecode subroutines are not reentrant a different mechanism must be used to implement triggers. Each trigger is implemented using a separate bytecode program with its own opcodes, program counter, and register set. The Program opcode invokes the trigger subprogram. The Program instruction allocates and initializes a fresh register set for each invocation of the subprogram, so subprograms can be reentrant and recursive. The Param opcode is used by subprograms to access content in registers of the calling bytecode program.
+Triggers need to be reentrant. Since bytecode subroutines are not reentrant a different mechanism must be used to implement triggers. Each trigger is implemented using a separate bytecode program with its own opcodes, program counter, and register set. The Program opcode invokes the trigger subprogram. The Program instruction allocates and initializes a fresh register set for each invocation of the subprogram, so subprograms can be reentrant and recursive. The Param opcode is used by subprograms to access content in registers of the calling bytecode program.
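
The program-counter bookkeeping behind Gosub, Return, and Yield is small enough to sketch directly. The following is a hedged illustration of the behavior described above, not the engine's code; in particular, the exact return-address convention is an assumption:

```rust
/// Illustrative only: `pc` is the program counter, `reg` a slice of integer registers.

fn gosub(pc: &mut usize, reg: &mut [i64], p1: usize, p2: usize) {
    reg[p1] = *pc as i64; // remember where we came from ...
    *pc = p2;             // ... then jump to the subroutine at address P2
}

fn ret(pc: &mut usize, reg: &[i64], p1: usize) {
    *pc = reg[p1] as usize + 1; // resume just after the Gosub (assumed convention)
}

fn yield_op(pc: &mut usize, reg: &mut [i64], p1: usize) {
    // Yield swaps the program counter with the integer in register P1, which is
    // how two coroutines hand control back and forth.
    let resume_at = reg[p1] as usize;
    reg[p1] = *pc as i64;
    *pc = resume_at;
}

fn main() {
    let mut reg = [0i64; 4];

    // A subroutine call: the Gosub at address 7 jumps to 20, Return comes back to 8.
    let mut pc = 7;
    gosub(&mut pc, &mut reg, 1, 20);
    assert_eq!(pc, 20);
    ret(&mut pc, &reg, 1);
    assert_eq!(pc, 8);

    // A coroutine swap: execution moves to address 15, and register 2 records
    // where the current routine resumes on the next Yield.
    let mut pc = 3;
    reg[2] = 15;
    yield_op(&mut pc, &mut reg, 2);
    assert_eq!((pc, reg[2]), (15, 3));
}
```
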
## Self-Altering Code
-Some opcodes are self-altering. For example, the Init opcode (which is always the first opcode in every bytecode program) increments its P1 operand. Subsequent Once opcodes compare their P1 operands to the P1 value for the Init opcode in order to determine if the one-time initialization code that follows should be skipped. Another example is the String8 opcode which converts its P4 operand from UTF-8 into the correct database string encoding, then converts itself into a String opcode.
+Some opcodes are self-altering. For example, the Init opcode (which is always the first opcode in every bytecode program) increments its P1 operand. Subsequent Once opcodes compare their P1 operands to the P1 value for the Init opcode in order to determine if the one-time initialization code that follows should be skipped. Another example is the String8 opcode which converts its P4 operand from UTF-8 into the correct database string encoding, then converts itself into a String opcode.
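
As a sketch of what such a self-rewrite looks like (hypothetical types and a pretend UTF-16 database encoding; not the actual opcode implementation), a String8 instruction can replace itself in the program array the first time it executes:

```rust
/// Illustrative only: a String8 opcode that rewrites itself into String on first use.
#[derive(Debug)]
enum Op {
    String8 { p4: &'static str, p2: usize }, // UTF-8 literal, destination register
    String { p4: Vec<u16>, p2: usize },      // already in the (pretend) database encoding
}

fn step(program: &mut [Op], pc: usize, reg: &mut Vec<Vec<u16>>) {
    // On first execution, convert the UTF-8 operand and rewrite the instruction in place.
    let rewrite = match &program[pc] {
        Op::String8 { p4, p2 } => Some((p4.encode_utf16().collect::<Vec<u16>>(), *p2)),
        _ => None,
    };
    if let Some((converted, dest)) = rewrite {
        program[pc] = Op::String { p4: converted, p2: dest };
    }
    // Every execution, including the first, now sees a plain String opcode.
    if let Op::String { p4, p2 } = &program[pc] {
        reg[*p2] = p4.clone();
    }
}

fn main() {
    let mut program = vec![Op::String8 { p4: "hello", p2: 0 }];
    let mut reg = vec![Vec::new(); 1];
    step(&mut program, 0, &mut reg); // converts, then self-rewrites
    step(&mut program, 0, &mut reg); // second run skips the conversion entirely
    println!("{:?}", program[0]);    // now Op::String { .. }
}
```
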
# Viewing The Bytecode
@@ -83,23 +85,23 @@ Every SQL statement that EpilogLite interprets results in a program for the virt
```shell
$ EpilogLite3 ex1.db
EpilogLite> explain delete from tbl1 where two<20;
-addr opcode p1 p2 p3 p4 p5 comment
+addr opcode p1 p2 p3 p4 p5 comment
---- ------------- ---- ---- ---- ------------- -- -------------
-0 Init 0 12 0 00 Start at 12
-1 Null 0 1 0 00 r[1]=NULL
+0 Init 0 12 0 00 Start at 12
+1 Null 0 1 0 00 r[1]=NULL
2 OpenWrite 0 2 0 3 00 root=2 iDb=0; tbl1
-3 Rewind 0 10 0 00
-4 Column 0 1 2 00 r[2]=tbl1.two
-5 Ge 3 9 2 (BINARY) 51 if r[2]>=r[3] goto 9
-6 Rowid 0 4 0 00 r[4]=rowid
-7 Once 0 8 0 00
-8 Delete 0 1 0 tbl1 02
-9 Next 0 4 0 01
-10 Noop 0 0 0 00
-11 Halt 0 0 0 00
+3 Rewind 0 10 0 00
+4 Column 0 1 2 00 r[2]=tbl1.two
+5 Ge 3 9 2 (BINARY) 51 if r[2]>=r[3] goto 9
+6 Rowid 0 4 0 00 r[4]=rowid
+7 Once 0 8 0 00
+8 Delete 0 1 0 tbl1 02
+9 Next 0 4 0 01
+10 Noop 0 0 0 00
+11 Halt 0 0 0 00
12 Transaction 0 1 1 0 01 usesStmtJournal=0
13 TableLock 0 2 1 tbl1 00 iDb=0 root=2 write=1
-14 Integer 20 3 0 00 r[3]=20
+14 Integer 20 3 0 00 r[3]=20
15 Goto 0 1 0 00
```
@@ -107,11 +109,11 @@ Any application can run an EXPLAIN query to get output similar to the above. How
When EpilogLite is compiled with the EpilogLite_DEBUG compile-time option, extra PRAGMA commands are available that are useful for debugging and for exploring the operation of the VDBE. For example the vdbe_trace pragma can be enabled to cause a disassembly of each VDBE opcode to be printed on standard output as the opcode is executed. These debugging pragmas include:
-- PRAGMA parser_trace
-- PRAGMA vdbe_addoptrace
-- PRAGMA vdbe_debug
-- PRAGMA vdbe_listing
-- PRAGMA vdbe_trace
+- PRAGMA parser_trace
+- PRAGMA vdbe_addoptrace
+- PRAGMA vdbe_debug
+- PRAGMA vdbe_listing
+- PRAGMA vdbe_trace
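
Conceptually, a vdbe_trace-style facility is just a flag checked by the dispatch loop. The sketch below is a hypothetical illustration of that idea, not EpilogLite's actual tracing code:

```rust
/// Illustrative only: a trace flag that disassembles each opcode before executing it.
#[derive(Debug)]
enum Op {
    Integer { value: i64, dest: usize },
    Halt,
}

struct Vm {
    trace: bool, // the effect of enabling something like the vdbe_trace pragma
    reg: Vec<i64>,
}

impl Vm {
    fn run(&mut self, program: &[Op]) {
        for (addr, op) in program.iter().enumerate() {
            if self.trace {
                // One disassembled opcode per line, printed as it is executed.
                println!("{addr:4}  {op:?}");
            }
            match op {
                Op::Integer { value, dest } => self.reg[*dest] = *value,
                Op::Halt => return,
            }
        }
    }
}

fn main() {
    let mut vm = Vm { trace: true, reg: vec![0; 4] };
    vm.run(&[Op::Integer { value: 20, dest: 3 }, Op::Halt]);
    assert_eq!(vm.reg[3], 20);
}
```
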
## The Opcodes
@@ -119,778 +121,778 @@ There are currently 190 opcodes defined by the virtual machine. All currently de
Remember: The VDBE opcodes are not part of the interface definition for EpilogLite. The number of opcodes and their names and meanings change from one release of EpilogLite to the next. The opcodes shown in the table below are valid for EpilogLite version 3.47.2 check-in 2aabe05e2e8ca dated 2024-12-07.
-- Opcode Name Description
-- Abortable Verify that an Abort can happen. Assert if an Abort at this point might cause database corruption. This opcode only appears in debugging builds.
+- Opcode Name Description
+- Abortable Verify that an Abort can happen. Assert if an Abort at this point might cause database corruption. This opcode only appears in debugging builds.
-- An Abort is safe if either there have been no writes, or if there is an active statement journal.
-- Add Add the value in register P1 to the value in register P2 and store the result in register P3. If either input is NULL, the result is NULL.
-- AddImm Add the constant P2 to the value in register P1. The result is always an integer.
+- An Abort is safe if either there have been no writes, or if there is an active statement journal.
+- Add Add the value in register P1 to the value in register P2 and store the result in register P3. If either input is NULL, the result is NULL.
+- AddImm Add the constant P2 to the value in register P1. The result is always an integer.
-- To force any register to be an integer, just add 0.
-- Affinity Apply affinities to a range of P2 registers starting with P1.
+- To force any register to be an integer, just add 0.
+- Affinity Apply affinities to a range of P2 registers starting with P1.
-- P4 is a string that is P2 characters long. The N-th character of the string indicates the column affinity that should be used for the N-th memory cell in the range.
-- AggFinal P1 is the memory location that is the accumulator for an aggregate or window function. Execute the finalizer function for an aggregate and store the result in P1.
+- P4 is a string that is P2 characters long. The N-th character of the string indicates the column affinity that should be used for the N-th memory cell in the range.
+- AggFinal P1 is the memory location that is the accumulator for an aggregate or window function. Execute the finalizer function for an aggregate and store the result in P1.
-- P2 is the number of arguments that the step function takes and P4 is a pointer to the FuncDef for this function. The P2 argument is not used by this opcode. It is only there to disambiguate functions that can take varying numbers of arguments. The P4 argument is only needed for the case where the step function was not previously called.
-- AggInverse Execute the xInverse function for an aggregate. The function has P5 arguments. P4 is a pointer to the FuncDef structure that specifies the function. Register P3 is the accumulator.
+- P2 is the number of arguments that the step function takes and P4 is a pointer to the FuncDef for this function. The P2 argument is not used by this opcode. It is only there to disambiguate functions that can take varying numbers of arguments. The P4 argument is only needed for the case where the step function was not previously called.
+- AggInverse Execute the xInverse function for an aggregate. The function has P5 arguments. P4 is a pointer to the FuncDef structure that specifies the function. Register P3 is the accumulator.
-- The P5 arguments are taken from register P2 and its successors.
-- AggStep Execute the xStep function for an aggregate. The function has P5 arguments. P4 is a pointer to the FuncDef structure that specifies the function. Register P3 is the accumulator.
+- The P5 arguments are taken from register P2 and its successors.
+- AggStep Execute the xStep function for an aggregate. The function has P5 arguments. P4 is a pointer to the FuncDef structure that specifies the function. Register P3 is the accumulator.
-- The P5 arguments are taken from register P2 and its successors.
-- AggStep1 Execute the xStep (if P1==0) or xInverse (if P1!=0) function for an aggregate. The function has P5 arguments. P4 is a pointer to the FuncDef structure that specifies the function. Register P3 is the accumulator.
+- The P5 arguments are taken from register P2 and its successors.
+- AggStep1 Execute the xStep (if P1==0) or xInverse (if P1!=0) function for an aggregate. The function has P5 arguments. P4 is a pointer to the FuncDef structure that specifies the function. Register P3 is the accumulator.
-- The P5 arguments are taken from register P2 and its successors.
+- The P5 arguments are taken from register P2 and its successors.
-- This opcode is initially coded as OP_AggStep0. On first evaluation, the FuncDef stored in P4 is converted into an EpilogLite3_context and the opcode is changed. In this way, the initialization of the EpilogLite3_context only happens once, instead of on each call to the step function.
-- AggValue Invoke the xValue() function and store the result in register P3.
+- This opcode is initially coded as OP_AggStep0. On first evaluation, the FuncDef stored in P4 is converted into an EpilogLite3_context and the opcode is changed. In this way, the initialization of the EpilogLite3_context only happens once, instead of on each call to the step function.
+- AggValue Invoke the xValue() function and store the result in register P3.
-- P2 is the number of arguments that the step function takes and P4 is a pointer to the FuncDef for this function. The P2 argument is not used by this opcode. It is only there to disambiguate functions that can take varying numbers of arguments. The P4 argument is only needed for the case where the step function was not previously called.
-- And Take the logical AND of the values in registers P1 and P2 and write the result into register P3.
+- P2 is the number of arguments that the step function takes and P4 is a pointer to the FuncDef for this function. The P2 argument is not used by this opcode. It is only there to disambiguate functions that can take varying numbers of arguments. The P4 argument is only needed for the case where the step function was not previously called.
+- And Take the logical AND of the values in registers P1 and P2 and write the result into register P3.
-- If either P1 or P2 is 0 (false) then the result is 0 even if the other input is NULL. A NULL and true or two NULLs give a NULL output.
-- AutoCommit Set the database auto-commit flag to P1 (1 or 0). If P2 is true, roll back any currently active btree transactions. If there are any active VMs (apart from this one), then a ROLLBACK fails. A COMMIT fails if there are active writing VMs or active VMs that use shared cache.
+- If either P1 or P2 is 0 (false) then the result is 0 even if the other input is NULL. A NULL and true or two NULLs give a NULL output.
+- AutoCommit Set the database auto-commit flag to P1 (1 or 0). If P2 is true, roll back any currently active btree transactions. If there are any active VMs (apart from this one), then a ROLLBACK fails. A COMMIT fails if there are active writing VMs or active VMs that use shared cache.
-- This instruction causes the VM to halt.
-- BeginSubrtn Mark the beginning of a subroutine that can be entered in-line or that can be called using Gosub. The subroutine should be terminated by an Return instruction that has a P1 operand that is the same as the P2 operand to this opcode and that has P3 set to 1. If the subroutine is entered in-line, then the Return will simply fall through. But if the subroutine is entered using Gosub, then the Return will jump back to the first instruction after the Gosub.
+- This instruction causes the VM to halt.
+- BeginSubrtn Mark the beginning of a subroutine that can be entered in-line or that can be called using Gosub. The subroutine should be terminated by a Return instruction that has a P1 operand that is the same as the P2 operand to this opcode and that has P3 set to 1. If the subroutine is entered in-line, then the Return will simply fall through. But if the subroutine is entered using Gosub, then the Return will jump back to the first instruction after the Gosub.
-- This routine works by loading a NULL into the P2 register. When the return address register contains a NULL, the Return instruction is a no-op that simply falls through to the next instruction (assuming that the Return opcode has a P3 value of 1). Thus if the subroutine is entered in-line, then the Return will cause in-line execution to continue. But if the subroutine is entered via Gosub, then the Return will cause a return to the address following the Gosub.
+- This routine works by loading a NULL into the P2 register. When the return address register contains a NULL, the Return instruction is a no-op that simply falls through to the next instruction (assuming that the Return opcode has a P3 value of 1). Thus if the subroutine is entered in-line, then the Return will cause in-line execution to continue. But if the subroutine is entered via Gosub, then the Return will cause a return to the address following the Gosub.
-- This opcode is identical to Null. It has a different name only to make the byte code easier to read and verify.
-- BitAnd Take the bit-wise AND of the values in register P1 and P2 and store the result in register P3. If either input is NULL, the result is NULL.
-- BitNot Interpret the content of register P1 as an integer. Store the ones-complement of the P1 value into register P2. If P1 holds a NULL then store a NULL in P2.
-- BitOr Take the bit-wise OR of the values in register P1 and P2 and store the result in register P3. If either input is NULL, the result is NULL.
-- Blob P4 points to a blob of data P1 bytes long. Store this blob in register P2. If P4 is a NULL pointer, then construct a zero-filled blob that is P1 bytes long in P2.
-- Cast Force the value in register P1 to be the type defined by P2.
+- This opcode is identical to Null. It has a different name only to make the byte code easier to read and verify.
+- BitAnd Take the bit-wise AND of the values in register P1 and P2 and store the result in register P3. If either input is NULL, the result is NULL.
+- BitNot Interpret the content of register P1 as an integer. Store the ones-complement of the P1 value into register P2. If P1 holds a NULL then store a NULL in P2.
+- BitOr Take the bit-wise OR of the values in register P1 and P2 and store the result in register P3. If either input is NULL, the result is NULL.
+- Blob P4 points to a blob of data P1 bytes long. Store this blob in register P2. If P4 is a NULL pointer, then construct a zero-filled blob that is P1 bytes long in P2.
+- Cast Force the value in register P1 to be the type defined by P2.
-- P2=='A' → BLOB
-- P2=='B' → TEXT
-- P2=='C' → NUMERIC
-- P2=='D' → INTEGER
-- P2=='E' → REAL
+- P2=='A' → BLOB
+- P2=='B' → TEXT
+- P2=='C' → NUMERIC
+- P2=='D' → INTEGER
+- P2=='E' → REAL
-- A NULL value is not changed by this routine. It remains NULL.
-- Checkpoint Checkpoint database P1. This is a no-op if P1 is not currently in WAL mode. Parameter P2 is one of EpilogLite_CHECKPOINT_PASSIVE, FULL, RESTART, or TRUNCATE. Write 1 or 0 into mem[P3] if the checkpoint returns EpilogLite_BUSY or not, respectively. Write the number of pages in the WAL after the checkpoint into mem[P3+1] and the number of pages in the WAL that have been checkpointed after the checkpoint completes into mem[P3+2]. However on an error, mem[P3+1] and mem[P3+2] are initialized to -1.
-- Clear Delete all contents of the database table or index whose root page in the database file is given by P1. But, unlike Destroy, do not remove the table or index from the database file.
+- A NULL value is not changed by this routine. It remains NULL.
+- Checkpoint Checkpoint database P1. This is a no-op if P1 is not currently in WAL mode. Parameter P2 is one of EpilogLite_CHECKPOINT_PASSIVE, FULL, RESTART, or TRUNCATE. Write 1 or 0 into mem\\[P3] if the checkpoint returns EpilogLite_BUSY or not, respectively. Write the number of pages in the WAL after the checkpoint into mem\\[P3+1] and the number of pages in the WAL that have been checkpointed after the checkpoint completes into mem\\[P3+2]. However on an error, mem\\[P3+1] and mem\\[P3+2] are initialized to -1.
+- Clear Delete all contents of the database table or index whose root page in the database file is given by P1. But, unlike Destroy, do not remove the table or index from the database file.
-- The table being cleared is in the main database file if P2==0. If P2==1 then the table to be cleared is in the auxiliary database file that is used to store tables create using CREATE TEMPORARY TABLE.
+- The table being cleared is in the main database file if P2==0. If P2==1 then the table to be cleared is in the auxiliary database file that is used to store tables created using CREATE TEMPORARY TABLE.
-- If the P3 value is non-zero, then the row change count is incremented by the number of rows in the table being cleared. If P3 is greater than zero, then the value stored in register P3 is also incremented by the number of rows in the table being cleared.
+- If the P3 value is non-zero, then the row change count is incremented by the number of rows in the table being cleared. If P3 is greater than zero, then the value stored in register P3 is also incremented by the number of rows in the table being cleared.
-- See also: Destroy
-- Close Close a cursor previously opened as P1. If P1 is not currently open, this instruction is a no-op.
-- ClrSubtype Clear the subtype from register P1.
-- CollSeq P4 is a pointer to a CollSeq object. If the next call to a user function or aggregate calls EpilogLite3GetFuncCollSeq(), this collation sequence will be returned. This is used by the built-in min(), max() and nullif() functions.
+- See also: Destroy
+- Close Close a cursor previously opened as P1. If P1 is not currently open, this instruction is a no-op.
+- ClrSubtype Clear the subtype from register P1.
+- CollSeq P4 is a pointer to a CollSeq object. If the next call to a user function or aggregate calls EpilogLite3GetFuncCollSeq(), this collation sequence will be returned. This is used by the built-in min(), max() and nullif() functions.
-- If P1 is not zero, then it is a register that a subsequent min() or max() aggregate will set to 1 if the current row is not the minimum or maximum. The P1 register is initialized to 0 by this instruction.
+- If P1 is not zero, then it is a register that a subsequent min() or max() aggregate will set to 1 if the current row is not the minimum or maximum. The P1 register is initialized to 0 by this instruction.
-- The interface used by the implementation of the aforementioned functions to retrieve the collation sequence set by this opcode is not available publicly. Only built-in functions have access to this feature.
-- Column Interpret the data that cursor P1 points to as a structure built using the MakeRecord instruction. (See the MakeRecord opcode for additional information about the format of the data.) Extract the P2-th column from this record. If there are less than (P2+1) values in the record, extract a NULL.
+- The interface used by the implementation of the aforementioned functions to retrieve the collation sequence set by this opcode is not available publicly. Only built-in functions have access to this feature.
+- Column Interpret the data that cursor P1 points to as a structure built using the MakeRecord instruction. (See the MakeRecord opcode for additional information about the format of the data.) Extract the P2-th column from this record. If there are less than (P2+1) values in the record, extract a NULL.
-- The value extracted is stored in register P3.
+- The value extracted is stored in register P3.
-- If the record contains fewer than P2 fields, then extract a NULL. Or, if the P4 argument is a P4_MEM use the value of the P4 argument as the result.
+- If the record contains fewer than P2 fields, then extract a NULL. Or, if the P4 argument is a P4_MEM use the value of the P4 argument as the result.
-- If the OPFLAG_LENGTHARG bit is set in P5 then the result is guaranteed to only be used by the length() function or the equivalent. The content of large blobs is not loaded, thus saving CPU cycles. If the OPFLAG_TYPEOFARG bit is set then the result will only be used by the typeof() function or the IS NULL or IS NOT NULL operators or the equivalent. In this case, all content loading can be omitted.
-- ColumnsUsed This opcode (which only exists if EpilogLite was compiled with EpilogLite_ENABLE_COLUMN_USED_MASK) identifies which columns of the table or index for cursor P1 are used. P4 is a 64-bit integer (P4_INT64) in which the first 63 bits are one for each of the first 63 columns of the table or index that are actually used by the cursor. The high-order bit is set if any column after the 64th is used.
-- Compare Compare two vectors of registers in reg(P1)..reg(P1+P3-1) (call this vector "A") and in reg(P2)..reg(P2+P3-1) ("B"). Save the result of the comparison for use by the next Jump instruct.
+- If the OPFLAG_LENGTHARG bit is set in P5 then the result is guaranteed to only be used by the length() function or the equivalent. The content of large blobs is not loaded, thus saving CPU cycles. If the OPFLAG_TYPEOFARG bit is set then the result will only be used by the typeof() function or the IS NULL or IS NOT NULL operators or the equivalent. In this case, all content loading can be omitted.
+- ColumnsUsed This opcode (which only exists if EpilogLite was compiled with EpilogLite_ENABLE_COLUMN_USED_MASK) identifies which columns of the table or index for cursor P1 are used. P4 is a 64-bit integer (P4_INT64) in which the first 63 bits are one for each of the first 63 columns of the table or index that are actually used by the cursor. The high-order bit is set if any column after the 64th is used.
+- Compare Compare two vectors of registers in reg(P1)..reg(P1+P3-1) (call this vector "A") and in reg(P2)..reg(P2+P3-1) ("B"). Save the result of the comparison for use by the next Jump instruction.
-- If P5 has the OPFLAG_PERMUTE bit set, then the order of comparison is determined by the most recent Permutation operator. If the OPFLAG_PERMUTE bit is clear, then register are compared in sequential order.
+- If P5 has the OPFLAG_PERMUTE bit set, then the order of comparison is determined by the most recent Permutation operator. If the OPFLAG_PERMUTE bit is clear, then registers are compared in sequential order.
-- P4 is a KeyInfo structure that defines collating sequences and sort orders for the comparison. The permutation applies to registers only. The KeyInfo elements are used sequentially.
+- P4 is a KeyInfo structure that defines collating sequences and sort orders for the comparison. The permutation applies to registers only. The KeyInfo elements are used sequentially.
-- The comparison is a sort comparison, so NULLs compare equal, NULLs are less than numbers, numbers are less than strings, and strings are less than blobs.
+- The comparison is a sort comparison, so NULLs compare equal, NULLs are less than numbers, numbers are less than strings, and strings are less than blobs.
-- This opcode must be immediately followed by an Jump opcode.
-- Concat Add the text in register P1 onto the end of the text in register P2 and store the result in register P3. If either the P1 or P2 text are NULL then store NULL in P3.
+- This opcode must be immediately followed by a Jump opcode.
+- Concat Add the text in register P1 onto the end of the text in register P2 and store the result in register P3. If either the P1 or P2 text are NULL then store NULL in P3.
-- P3 = P2 || P1
+- P3 = P2 || P1
-- It is illegal for P1 and P3 to be the same register. Sometimes, if P3 is the same register as P2, the implementation is able to avoid a memcpy().
-- Copy Make a copy of registers P1..P1+P3 into registers P2..P2+P3.
+- It is illegal for P1 and P3 to be the same register. Sometimes, if P3 is the same register as P2, the implementation is able to avoid a memcpy().
+- Copy Make a copy of registers P1..P1+P3 into registers P2..P2+P3.
-- If the 0x0002 bit of P5 is set then also clear the MEM_Subtype flag in the destination. The 0x0001 bit of P5 indicates that this Copy opcode cannot be merged. The 0x0001 bit is used by the query planner and does not come into play during query execution.
+- If the 0x0002 bit of P5 is set then also clear the MEM_Subtype flag in the destination. The 0x0001 bit of P5 indicates that this Copy opcode cannot be merged. The 0x0001 bit is used by the query planner and does not come into play during query execution.
-- This instruction makes a deep copy of the value. A duplicate is made of any string or blob constant. See also SCopy.
-- Count Store the number of entries (an integer value) in the table or index opened by cursor P1 in register P2.
+- This instruction makes a deep copy of the value. A duplicate is made of any string or blob constant. See also SCopy.
+- Count Store the number of entries (an integer value) in the table or index opened by cursor P1 in register P2.
-- If P3==0, then an exact count is obtained, which involves visiting every btree page of the table. But if P3 is non-zero, an estimate is returned based on the current cursor position.
-- CreateBtree Allocate a new b-tree in the main database file if P1==0 or in the TEMP database file if P1==1 or in an attached database if P1>1. The P3 argument must be 1 (BTREE_INTKEY) for a rowid table it must be 2 (BTREE_BLOBKEY) for an index or WITHOUT ROWID table. The root page number of the new b-tree is stored in register P2.
-- CursorHint Provide a hint to cursor P1 that it only needs to return rows that satisfy the Expr in P4. TK_REGISTER terms in the P4 expression refer to values currently held in registers. TK_COLUMN terms in the P4 expression refer to columns in the b-tree to which cursor P1 is pointing.
-- CursorLock Lock the btree to which cursor P1 is pointing so that the btree cannot be written by an other cursor.
-- CursorUnlock Unlock the btree to which cursor P1 is pointing so that it can be written by other cursors.
-- DecrJumpZero Register P1 must hold an integer. Decrement the value in P1 and jump to P2 if the new value is exactly zero.
-- DeferredSeek P1 is an open index cursor and P3 is a cursor on the corresponding table. This opcode does a deferred seek of the P3 table cursor to the row that corresponds to the current row of P1.
+- If P3==0, then an exact count is obtained, which involves visiting every btree page of the table. But if P3 is non-zero, an estimate is returned based on the current cursor position.
+- CreateBtree Allocate a new b-tree in the main database file if P1==0 or in the TEMP database file if P1==1 or in an attached database if P1>1. The P3 argument must be 1 (BTREE_INTKEY) for a rowid table; it must be 2 (BTREE_BLOBKEY) for an index or WITHOUT ROWID table. The root page number of the new b-tree is stored in register P2.
+- CursorHint Provide a hint to cursor P1 that it only needs to return rows that satisfy the Expr in P4. TK_REGISTER terms in the P4 expression refer to values currently held in registers. TK_COLUMN terms in the P4 expression refer to columns in the b-tree to which cursor P1 is pointing.
+- CursorLock Lock the btree to which cursor P1 is pointing so that the btree cannot be written by another cursor.
+- CursorUnlock Unlock the btree to which cursor P1 is pointing so that it can be written by other cursors.
+- DecrJumpZero Register P1 must hold an integer. Decrement the value in P1 and jump to P2 if the new value is exactly zero.
+- DeferredSeek P1 is an open index cursor and P3 is a cursor on the corresponding table. This opcode does a deferred seek of the P3 table cursor to the row that corresponds to the current row of P1.
-- This is a deferred seek. Nothing actually happens until the cursor is used to read a record. That way, if no reads occur, no unnecessary I/O happens.
+- This is a deferred seek. Nothing actually happens until the cursor is used to read a record. That way, if no reads occur, no unnecessary I/O happens.
-- P4 may be an array of integers (type P4_INTARRAY) containing one entry for each column in the P3 table. If array entry a(i) is non-zero, then reading column a(i)-1 from cursor P3 is equivalent to performing the deferred seek and then reading column i from P1. This information is stored in P3 and used to redirect reads against P3 over to P1, thus possibly avoiding the need to seek and read cursor P3.
-- Delete Delete the record at which the P1 cursor is currently pointing.
+- P4 may be an array of integers (type P4_INTARRAY) containing one entry for each column in the P3 table. If array entry a(i) is non-zero, then reading column a(i)-1 from cursor P3 is equivalent to performing the deferred seek and then reading column i from P1. This information is stored in P3 and used to redirect reads against P3 over to P1, thus possibly avoiding the need to seek and read cursor P3.
+- Delete Delete the record at which the P1 cursor is currently pointing.
-- If the OPFLAG_SAVEPOSITION bit of the P5 parameter is set, then the cursor will be left pointing at either the next or the previous record in the table. If it is left pointing at the next record, then the next Next instruction will be a no-op. As a result, in this case it is ok to delete a record from within a Next loop. If OPFLAG_SAVEPOSITION bit of P5 is clear, then the cursor will be left in an undefined state.
+- If the OPFLAG_SAVEPOSITION bit of the P5 parameter is set, then the cursor will be left pointing at either the next or the previous record in the table. If it is left pointing at the next record, then the next Next instruction will be a no-op. As a result, in this case it is ok to delete a record from within a Next loop. If OPFLAG_SAVEPOSITION bit of P5 is clear, then the cursor will be left in an undefined state.
-- If the OPFLAG_AUXDELETE bit is set on P5, that indicates that this delete is one of several associated with deleting a table row and all its associated index entries. Exactly one of those deletes is the "primary" delete. The others are all on OPFLAG_FORDELETE cursors or else are marked with the AUXDELETE flag.
+- If the OPFLAG_AUXDELETE bit is set on P5, that indicates that this delete is one of several associated with deleting a table row and all its associated index entries. Exactly one of those deletes is the "primary" delete. The others are all on OPFLAG_FORDELETE cursors or else are marked with the AUXDELETE flag.
-- If the OPFLAG_NCHANGE (0x01) flag of P2 (NB: P2 not P5) is set, then the row change count is incremented (otherwise not).
+- If the OPFLAG_NCHANGE (0x01) flag of P2 (NB: P2 not P5) is set, then the row change count is incremented (otherwise not).
-- If the OPFLAG_ISNOOP (0x40) flag of P2 (not P5!) is set, then the pre-update-hook for deletes is run, but the btree is otherwise unchanged. This happens when the Delete is to be shortly followed by an Insert with the same key, causing the btree entry to be overwritten.
+- If the OPFLAG_ISNOOP (0x40) flag of P2 (not P5!) is set, then the pre-update-hook for deletes is run, but the btree is otherwise unchanged. This happens when the Delete is to be shortly followed by an Insert with the same key, causing the btree entry to be overwritten.
-- P1 must not be pseudo-table. It has to be a real table with multiple rows.
+- P1 must not be a pseudo-table. It has to be a real table with multiple rows.
-- If P4 is not NULL then it points to a Table object. In this case either the update or pre-update hook, or both, may be invoked. The P1 cursor must have been positioned using NotFound prior to invoking this opcode in this case. Specifically, if one is configured, the pre-update hook is invoked if P4 is not NULL. The update-hook is invoked if one is configured, P4 is not NULL, and the OPFLAG_NCHANGE flag is set in P2.
+- If P4 is not NULL then it points to a Table object. In this case either the update or pre-update hook, or both, may be invoked. The P1 cursor must have been positioned using NotFound prior to invoking this opcode in this case. Specifically, if one is configured, the pre-update hook is invoked if P4 is not NULL. The update-hook is invoked if one is configured, P4 is not NULL, and the OPFLAG_NCHANGE flag is set in P2.
-- If the OPFLAG_ISUPDATE flag is set in P2, then P3 contains the address of the memory cell that contains the value that the rowid of the row will be set to by the update.
-- Destroy Delete an entire database table or index whose root page in the database file is given by P1.
+- If the OPFLAG_ISUPDATE flag is set in P2, then P3 contains the address of the memory cell that contains the value that the rowid of the row will be set to by the update.
+- Destroy Delete an entire database table or index whose root page in the database file is given by P1.
-- The table being destroyed is in the main database file if P3==0. If P3==1 then the table to be destroyed is in the auxiliary database file that is used to store tables create using CREATE TEMPORARY TABLE.
+- The table being destroyed is in the main database file if P3==0. If P3==1 then the table to be destroyed is in the auxiliary database file that is used to store tables created using CREATE TEMPORARY TABLE.
-- If AUTOVACUUM is enabled then it is possible that another root page might be moved into the newly deleted root page in order to keep all root pages contiguous at the beginning of the database. The former value of the root page that moved - its value before the move occurred - is stored in register P2. If no page movement was required (because the table being dropped was already the last one in the database) then a zero is stored in register P2. If AUTOVACUUM is disabled then a zero is stored in register P2.
+- If AUTOVACUUM is enabled then it is possible that another root page might be moved into the newly deleted root page in order to keep all root pages contiguous at the beginning of the database. The former value of the root page that moved - its value before the move occurred - is stored in register P2. If no page movement was required (because the table being dropped was already the last one in the database) then a zero is stored in register P2. If AUTOVACUUM is disabled then a zero is stored in register P2.
-- This opcode throws an error if there are any active reader VMs when it is invoked. This is done to avoid the difficulty associated with updating existing cursors when a root page is moved in an AUTOVACUUM database. This error is thrown even if the database is not an AUTOVACUUM db in order to avoid introducing an incompatibility between autovacuum and non-autovacuum modes.
+- This opcode throws an error if there are any active reader VMs when it is invoked. This is done to avoid the difficulty associated with updating existing cursors when a root page is moved in an AUTOVACUUM database. This error is thrown even if the database is not an AUTOVACUUM db in order to avoid introducing an incompatibility between autovacuum and non-autovacuum modes.
-- See also: Clear
-- Divide Divide the value in register P1 by the value in register P2 and store the result in register P3 (P3=P2/P1). If the value in register P1 is zero, then the result is NULL. If either input is NULL, the result is NULL.
-- DropIndex Remove the internal (in-memory) data structures that describe the index named P4 in database P1. This is called after an index is dropped from disk (using the Destroy opcode) in order to keep the internal representation of the schema consistent with what is on disk.
-- DropTable Remove the internal (in-memory) data structures that describe the table named P4 in database P1. This is called after a table is dropped from disk (using the Destroy opcode) in order to keep the internal representation of the schema consistent with what is on disk.
-- DropTrigger Remove the internal (in-memory) data structures that describe the trigger named P4 in database P1. This is called after a trigger is dropped from disk (using the Destroy opcode) in order to keep the internal representation of the schema consistent with what is on disk.
-- ElseEq This opcode must follow an Lt or Gt comparison operator. There can be zero or more OP_ReleaseReg opcodes intervening, but no other opcodes are allowed to occur between this instruction and the previous Lt or Gt.
+- See also: Clear
+- Divide Divide the value in register P1 by the value in register P2 and store the result in register P3 (P3=P2/P1). If the value in register P1 is zero, then the result is NULL. If either input is NULL, the result is NULL.
+- DropIndex Remove the internal (in-memory) data structures that describe the index named P4 in database P1. This is called after an index is dropped from disk (using the Destroy opcode) in order to keep the internal representation of the schema consistent with what is on disk.
+- DropTable Remove the internal (in-memory) data structures that describe the table named P4 in database P1. This is called after a table is dropped from disk (using the Destroy opcode) in order to keep the internal representation of the schema consistent with what is on disk.
+- DropTrigger Remove the internal (in-memory) data structures that describe the trigger named P4 in database P1. This is called after a trigger is dropped from disk (using the Destroy opcode) in order to keep the internal representation of the schema consistent with what is on disk.
+- ElseEq This opcode must follow an Lt or Gt comparison operator. There can be zero or more OP_ReleaseReg opcodes intervening, but no other opcodes are allowed to occur between this instruction and the previous Lt or Gt.
-- If the result of an Eq comparison on the same two operands as the prior Lt or Gt would have been true, then jump to P2. If the result of an Eq comparison on the two previous operands would have been false or NULL, then fall through.
-- EndCoroutine The instruction at the address in register P1 is a Yield. Jump to the P2 parameter of that Yield. After the jump, the value register P1 is left with a value such that subsequent OP_Yields go back to the this same EndCoroutine instruction.
+- If the result of an Eq comparison on the same two operands as the prior Lt or Gt would have been true, then jump to P2. If the result of an Eq comparison on the two previous operands would have been false or NULL, then fall through.
+- EndCoroutine The instruction at the address in register P1 is a Yield. Jump to the P2 parameter of that Yield. After the jump, register P1 is left with a value such that subsequent OP_Yields go back to this same EndCoroutine instruction.
-- See also: InitCoroutine
-- Eq Compare the values in register P1 and P3. If reg(P3)==reg(P1) then jump to address P2.
+- See also: InitCoroutine
+- Eq Compare the values in register P1 and P3. If reg(P3)==reg(P1) then jump to address P2.
-- The EpilogLite_AFF_MASK portion of P5 must be an affinity character - EpilogLite_AFF_TEXT, EpilogLite_AFF_INTEGER, and so forth. An attempt is made to coerce both inputs according to this affinity before the comparison is made. If the EpilogLite_AFF_MASK is 0x00, then numeric affinity is used. Note that the affinity conversions are stored back into the input registers P1 and P3. So this opcode can cause persistent changes to registers P1 and P3.
+- The EpilogLite_AFF_MASK portion of P5 must be an affinity character - EpilogLite_AFF_TEXT, EpilogLite_AFF_INTEGER, and so forth. An attempt is made to coerce both inputs according to this affinity before the comparison is made. If the EpilogLite_AFF_MASK is 0x00, then numeric affinity is used. Note that the affinity conversions are stored back into the input registers P1 and P3. So this opcode can cause persistent changes to registers P1 and P3.
-- Once any conversions have taken place, and neither value is NULL, the values are compared. If both values are blobs then memcmp() is used to determine the results of the comparison. If both values are text, then the appropriate collating function specified in P4 is used to do the comparison. If P4 is not specified then memcmp() is used to compare text string. If both values are numeric, then a numeric comparison is used. If the two values are of different types, then numbers are considered less than strings and strings are considered less than blobs.
+- Once any conversions have taken place, and neither value is NULL, the values are compared. If both values are blobs then memcmp() is used to determine the results of the comparison. If both values are text, then the appropriate collating function specified in P4 is used to do the comparison. If P4 is not specified then memcmp() is used to compare text strings. If both values are numeric, then a numeric comparison is used. If the two values are of different types, then numbers are considered less than strings and strings are considered less than blobs.
-- If EpilogLite_NULLEQ is set in P5 then the result of comparison is always either true or false and is never NULL. If both operands are NULL then the result of comparison is true. If either operand is NULL then the result is false. If neither operand is NULL the result is the same as it would be if the EpilogLite_NULLEQ flag were omitted from P5.
+- If EpilogLite_NULLEQ is set in P5 then the result of comparison is always either true or false and is never NULL. If both operands are NULL then the result of comparison is true. If either operand is NULL then the result is false. If neither operand is NULL the result is the same as it would be if the EpilogLite_NULLEQ flag were omitted from P5.
-- This opcode saves the result of comparison for use by the new Jump opcode.
-- Expire Cause precompiled statements to expire. When an expired statement is executed using EpilogLite3_step() it will either automatically reprepare itself (if it was originally created using EpilogLite3_prepare_v2()) or it will fail with EpilogLite_SCHEMA.
+- This opcode saves the result of comparison for use by the new Jump opcode.
+- Expire Cause precompiled statements to expire. When an expired statement is executed using EpilogLite3_step() it will either automatically reprepare itself (if it was originally created using EpilogLite3_prepare_v2()) or it will fail with EpilogLite_SCHEMA.
-- If P1 is 0, then all SQL statements become expired. If P1 is non-zero, then only the currently executing statement is expired.
+- If P1 is 0, then all SQL statements become expired. If P1 is non-zero, then only the currently executing statement is expired.
-- If P2 is 0, then SQL statements are expired immediately. If P2 is 1, then running SQL statements are allowed to continue to run to completion. The P2==1 case occurs when a CREATE INDEX or similar schema change happens that might help the statement run faster but which does not affect the correctness of operation.
-- Explain This is the same as Noop during normal query execution. The purpose of this opcode is to hold information about the query plan for the purpose of EXPLAIN QUERY PLAN output.
+- If P2 is 0, then SQL statements are expired immediately. If P2 is 1, then running SQL statements are allowed to continue to run to completion. The P2==1 case occurs when a CREATE INDEX or similar schema change happens that might help the statement run faster but which does not affect the correctness of operation.
+- Explain This is the same as Noop during normal query execution. The purpose of this opcode is to hold information about the query plan for the purpose of EXPLAIN QUERY PLAN output.
-- The P4 value is human-readable text that describes the query plan element. Something like "SCAN t1" or "SEARCH t2 USING INDEX t2x1".
+- The P4 value is human-readable text that describes the query plan element. Something like "SCAN t1" or "SEARCH t2 USING INDEX t2x1".
-- The P1 value is the ID of the current element and P2 is the parent element for the case of nested query plan elements. If P2 is zero then this element is a top-level element.
+- The P1 value is the ID of the current element and P2 is the parent element for the case of nested query plan elements. If P2 is zero then this element is a top-level element.
-- For loop elements, P3 is the estimated code of each invocation of this element.
+- For loop elements, P3 is the estimated cost of each invocation of this element.
-- As with all opcodes, the meanings of the parameters for Explain are subject to change from one release to the next. Applications should not attempt to interpret or use any of the information contained in the Explain opcode. The information provided by this opcode is intended for testing and debugging use only.
-- Filter Compute a hash on the key contained in the P4 registers starting with r[P3]. Check to see if that hash is found in the bloom filter hosted by register P1. If it is not present then maybe jump to P2. Otherwise fall through.
+- As with all opcodes, the meanings of the parameters for Explain are subject to change from one release to the next. Applications should not attempt to interpret or use any of the information contained in the Explain opcode. The information provided by this opcode is intended for testing and debugging use only.
+- Filter Compute a hash on the key contained in the P4 registers starting with r\\[P3]. Check to see if that hash is found in the bloom filter hosted by register P1. If it is not present then maybe jump to P2. Otherwise fall through.
-- False negatives are harmless. It is always safe to fall through, even if the value is in the bloom filter. A false negative causes more CPU cycles to be used, but it should still yield the correct answer. However, an incorrect answer may well arise from a false positive - if the jump is taken when it should fall through.
-- FilterAdd Compute a hash on the P4 registers starting with r[P3] and add that hash to the bloom filter contained in r[P1].
-- FinishSeek If cursor P1 was previously moved via DeferredSeek, complete that seek operation now, without further delay. If the cursor seek has already occurred, this instruction is a no-op.
-- FkCheck Halt with an EpilogLite_CONSTRAINT error if there are any unresolved foreign key constraint violations. If there are no foreign key constraint violations, this is a no-op.
+- False negatives are harmless. It is always safe to fall through, even if the value is in the bloom filter. A false negative causes more CPU cycles to be used, but it should still yield the correct answer. However, an incorrect answer may well arise from a false positive - if the jump is taken when it should fall through.
+- FilterAdd Compute a hash on the P4 registers starting with r\\[P3] and add that hash to the bloom filter contained in r\\[P1].
+- FinishSeek If cursor P1 was previously moved via DeferredSeek, complete that seek operation now, without further delay. If the cursor seek has already occurred, this instruction is a no-op.
+- FkCheck Halt with an EpilogLite_CONSTRAINT error if there are any unresolved foreign key constraint violations. If there are no foreign key constraint violations, this is a no-op.
-- FK constraint violations are also checked when the prepared statement exits. This opcode is used to raise foreign key constraint errors prior to returning results such as a row change count or the result of a RETURNING clause.
-- FkCounter Increment a "constraint counter" by P2 (P2 may be negative or positive). If P1 is non-zero, the database constraint counter is incremented (deferred foreign key constraints). Otherwise, if P1 is zero, the statement counter is incremented (immediate foreign key constraints).
-- FkIfZero This opcode tests if a foreign key constraint-counter is currently zero. If so, jump to instruction P2. Otherwise, fall through to the next instruction.
+- FK constraint violations are also checked when the prepared statement exits. This opcode is used to raise foreign key constraint errors prior to returning results such as a row change count or the result of a RETURNING clause.
+- FkCounter Increment a "constraint counter" by P2 (P2 may be negative or positive). If P1 is non-zero, the database constraint counter is incremented (deferred foreign key constraints). Otherwise, if P1 is zero, the statement counter is incremented (immediate foreign key constraints).
+- FkIfZero This opcode tests if a foreign key constraint-counter is currently zero. If so, jump to instruction P2. Otherwise, fall through to the next instruction.
-- If P1 is non-zero, then the jump is taken if the database constraint-counter is zero (the one that counts deferred constraint violations). If P1 is zero, the jump is taken if the statement constraint-counter is zero (immediate foreign key constraint violations).
-- Found If P4==0 then register P3 holds a blob constructed by MakeRecord. If P4>0 then register P3 is the first of P4 registers that form an unpacked record.
+- If P1 is non-zero, then the jump is taken if the database constraint-counter is zero (the one that counts deferred constraint violations). If P1 is zero, the jump is taken if the statement constraint-counter is zero (immediate foreign key constraint violations).
+- Found If P4==0 then register P3 holds a blob constructed by MakeRecord. If P4>0 then register P3 is the first of P4 registers that form an unpacked record.
-- Cursor P1 is on an index btree. If the record identified by P3 and P4 is a prefix of any entry in P1 then a jump is made to P2 and P1 is left pointing at the matching entry.
+- Cursor P1 is on an index btree. If the record identified by P3 and P4 is a prefix of any entry in P1 then a jump is made to P2 and P1 is left pointing at the matching entry.
-- This operation leaves the cursor in a state where it can be advanced in the forward direction. The Next instruction will work, but not the Prev instruction.
+- This operation leaves the cursor in a state where it can be advanced in the forward direction. The Next instruction will work, but not the Prev instruction.
-- See also: NotFound, NoConflict, NotExists. SeekGe
-- Function Invoke a user function (P4 is a pointer to an EpilogLite3_context object that contains a pointer to the function to be run) with arguments taken from register P2 and successors. The number of arguments is in the EpilogLite3_context object that P4 points to. The result of the function is stored in register P3. Register P3 must not be one of the function inputs.
+- See also: NotFound, NoConflict, NotExists, SeekGe
+- Function Invoke a user function (P4 is a pointer to an EpilogLite3_context object that contains a pointer to the function to be run) with arguments taken from register P2 and successors. The number of arguments is in the EpilogLite3_context object that P4 points to. The result of the function is stored in register P3. Register P3 must not be one of the function inputs.
-- P1 is a 32-bit bitmask indicating whether or not each argument to the function was determined to be constant at compile time. If the first argument was constant then bit 0 of P1 is set. This is used to determine whether meta data associated with a user function argument using the EpilogLite3_set_auxdata() API may be safely retained until the next invocation of this opcode.
+- P1 is a 32-bit bitmask indicating whether or not each argument to the function was determined to be constant at compile time. If the first argument was constant then bit 0 of P1 is set. This is used to determine whether meta data associated with a user function argument using the EpilogLite3_set_auxdata() API may be safely retained until the next invocation of this opcode.
-- See also: AggStep, AggFinal, PureFunc
-- Ge This works just like the Lt opcode except that the jump is taken if the content of register P3 is greater than or equal to the content of register P1. See the Lt opcode for additional information.
-- GetSubtype Extract the subtype value from register P1 and write that subtype into register P2. If P1 has no subtype, then P1 gets a NULL.
-- Gosub Write the current address onto register P1 and then jump to address P2.
-- Goto An unconditional jump to address P2. The next instruction executed will be the one at index P2 from the beginning of the program.
+- See also: AggStep, AggFinal, PureFunc
+- Ge This works just like the Lt opcode except that the jump is taken if the content of register P3 is greater than or equal to the content of register P1. See the Lt opcode for additional information.
+- GetSubtype Extract the subtype value from register P1 and write that subtype into register P2. If P1 has no subtype, then P1 gets a NULL.
+- Gosub Write the current address onto register P1 and then jump to address P2.
+- Goto An unconditional jump to address P2. The next instruction executed will be the one at index P2 from the beginning of the program.
-- The P1 parameter is not actually used by this opcode. However, it is sometimes set to 1 instead of 0 as a hint to the command-line shell that this Goto is the bottom of a loop and that the lines from P2 down to the current line should be indented for EXPLAIN output.
-- Gt This works just like the Lt opcode except that the jump is taken if the content of register P3 is greater than the content of register P1. See the Lt opcode for additional information.
-- Halt Exit immediately. All open cursors, etc are closed automatically.
+- The P1 parameter is not actually used by this opcode. However, it is sometimes set to 1 instead of 0 as a hint to the command-line shell that this Goto is the bottom of a loop and that the lines from P2 down to the current line should be indented for EXPLAIN output.
+- Gt This works just like the Lt opcode except that the jump is taken if the content of register P3 is greater than the content of register P1. See the Lt opcode for additional information.
+- Halt Exit immediately. All open cursors, etc are closed automatically.
-- P1 is the result code returned by EpilogLite3_exec(), EpilogLite3_reset(), or EpilogLite3_finalize(). For a normal halt, this should be EpilogLite_OK (0). For errors, it can be some other value. If P1!=0 then P2 will determine whether or not to rollback the current transaction. Do not rollback if P2==OE_Fail. Do the rollback if P2==OE_Rollback. If P2==OE_Abort, then back out all changes that have occurred during this execution of the VDBE, but do not rollback the transaction.
+- P1 is the result code returned by EpilogLite3_exec(), EpilogLite3_reset(), or EpilogLite3_finalize(). For a normal halt, this should be EpilogLite_OK (0). For errors, it can be some other value. If P1!=0 then P2 will determine whether or not to rollback the current transaction. Do not rollback if P2==OE_Fail. Do the rollback if P2==OE_Rollback. If P2==OE_Abort, then back out all changes that have occurred during this execution of the VDBE, but do not rollback the transaction.
-- If P3 is not zero and P4 is NULL, then P3 is a register that holds the text of an error message.
+- If P3 is not zero and P4 is NULL, then P3 is a register that holds the text of an error message.
-- If P3 is zero and P4 is not null then the error message string is held in P4.
+- If P3 is zero and P4 is not null then the error message string is held in P4.
-- P5 is a value between 1 and 4, inclusive, then the P4 error message string is modified as follows:
+- If P5 is a value between 1 and 4, inclusive, then the P4 error message string is modified as follows:
-- 1: NOT NULL constraint failed: P4 2: UNIQUE constraint failed: P4 3: CHECK constraint failed: P4 4: FOREIGN KEY constraint failed: P4
+- 1: NOT NULL constraint failed: P4 2: UNIQUE constraint failed: P4 3: CHECK constraint failed: P4 4: FOREIGN KEY constraint failed: P4
-- If P3 is zero and P5 is not zero and P4 is NULL, then everything after the ":" is omitted.
+- If P3 is zero and P5 is not zero and P4 is NULL, then everything after the ":" is omitted.
-- There is an implied "Halt 0 0 0" instruction inserted at the very end of every program. So a jump past the last instruction of the program is the same as executing Halt.
-- HaltIfNull Check the value in register P3. If it is NULL then Halt using parameter P1, P2, and P4 as if this were a Halt instruction. If the value in register P3 is not NULL, then this routine is a no-op. The P5 parameter should be 1.
-- IdxDelete The content of P3 registers starting at register P2 form an unpacked index key. This opcode removes that entry from the index opened by cursor P1.
+- There is an implied "Halt 0 0 0" instruction inserted at the very end of every program. So a jump past the last instruction of the program is the same as executing Halt.
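+- The P5-driven message rewriting can be pictured with the following minimal sketch; the function and parameter names are hypothetical, and result-code handling is omitted.
+
+```rust
+/// Build the constraint-failure message selected by P5, as described above.
+fn halt_message(p5: u8, p4: Option<&str>) -> Option<String> {
+    let prefix = match p5 {
+        1 => "NOT NULL constraint failed",
+        2 => "UNIQUE constraint failed",
+        3 => "CHECK constraint failed",
+        4 => "FOREIGN KEY constraint failed",
+        _ => return p4.map(str::to_owned), // P5 outside 1..=4: message left unchanged
+    };
+    match p4 {
+        Some(detail) => Some(format!("{}: {}", prefix, detail)),
+        None => Some(prefix.to_owned()), // P4 is NULL: everything after ":" omitted
+    }
+}
+```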
+- HaltIfNull Check the value in register P3. If it is NULL then Halt using parameter P1, P2, and P4 as if this were a Halt instruction. If the value in register P3 is not NULL, then this routine is a no-op. The P5 parameter should be 1.
+- IdxDelete The content of P3 registers starting at register P2 form an unpacked index key. This opcode removes that entry from the index opened by cursor P1.
-- If P5 is not zero, then raise an EpilogLite_CORRUPT_INDEX error if no matching index entry is found. This happens when running an UPDATE or DELETE statement and the index entry to be updated or deleted is not found. For some uses of IdxDelete (example: the EXCEPT operator) it does not matter that no matching entry is found. For those cases, P5 is zero. Also, do not raise this (self-correcting and non-critical) error if in writable_schema mode.
-- IdxGE The P4 register values beginning with P3 form an unpacked index key that omits the PRIMARY KEY. Compare this key value against the index that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID fields at the end.
+- If P5 is not zero, then raise an EpilogLite_CORRUPT_INDEX error if no matching index entry is found. This happens when running an UPDATE or DELETE statement and the index entry to be updated or deleted is not found. For some uses of IdxDelete (example: the EXCEPT operator) it does not matter that no matching entry is found. For those cases, P5 is zero. Also, do not raise this (self-correcting and non-critical) error if in writable_schema mode.
+- IdxGE The P4 register values beginning with P3 form an unpacked index key that omits the PRIMARY KEY. Compare this key value against the index that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID fields at the end.
-- If the P1 index entry is greater than or equal to the key value then jump to P2. Otherwise fall through to the next instruction.
-- IdxGT The P4 register values beginning with P3 form an unpacked index key that omits the PRIMARY KEY. Compare this key value against the index that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID fields at the end.
+- If the P1 index entry is greater than or equal to the key value then jump to P2. Otherwise fall through to the next instruction.
+- IdxGT The P4 register values beginning with P3 form an unpacked index key that omits the PRIMARY KEY. Compare this key value against the index that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID fields at the end.
-- If the P1 index entry is greater than the key value then jump to P2. Otherwise fall through to the next instruction.
-- IdxInsert Register P2 holds an SQL index key made using the MakeRecord instructions. This opcode writes that key into the index P1. Data for the entry is nil.
+- If the P1 index entry is greater than the key value then jump to P2. Otherwise fall through to the next instruction.
+- IdxInsert Register P2 holds an SQL index key made using the MakeRecord instructions. This opcode writes that key into the index P1. Data for the entry is nil.
-- If P4 is not zero, then it is the number of values in the unpacked key of reg(P2). In that case, P3 is the index of the first register for the unpacked key. The availability of the unpacked key can sometimes be an optimization.
+- If P4 is not zero, then it is the number of values in the unpacked key of reg(P2). In that case, P3 is the index of the first register for the unpacked key. The availability of the unpacked key can sometimes be an optimization.
-- If P5 has the OPFLAG_APPEND bit set, that is a hint to the b-tree layer that this insert is likely to be an append.
+- If P5 has the OPFLAG_APPEND bit set, that is a hint to the b-tree layer that this insert is likely to be an append.
-- If P5 has the OPFLAG_NCHANGE bit set, then the change counter is incremented by this instruction. If the OPFLAG_NCHANGE bit is clear, then the change counter is unchanged.
+- If P5 has the OPFLAG_NCHANGE bit set, then the change counter is incremented by this instruction. If the OPFLAG_NCHANGE bit is clear, then the change counter is unchanged.
-- If the OPFLAG_USESEEKRESULT flag of P5 is set, the implementation might run faster by avoiding an unnecessary seek on cursor P1. However, the OPFLAG_USESEEKRESULT flag must only be set if there have been no prior seeks on the cursor or if the most recent seek used a key equivalent to P2.
+- If the OPFLAG_USESEEKRESULT flag of P5 is set, the implementation might run faster by avoiding an unnecessary seek on cursor P1. However, the OPFLAG_USESEEKRESULT flag must only be set if there have been no prior seeks on the cursor or if the most recent seek used a key equivalent to P2.
-- This instruction only works for indices. The equivalent instruction for tables is Insert.
-- IdxLE The P4 register values beginning with P3 form an unpacked index key that omits the PRIMARY KEY or ROWID. Compare this key value against the index that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID on the P1 index.
+- This instruction only works for indices. The equivalent instruction for tables is Insert.
+- IdxLE The P4 register values beginning with P3 form an unpacked index key that omits the PRIMARY KEY or ROWID. Compare this key value against the index that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID on the P1 index.
-- If the P1 index entry is less than or equal to the key value then jump to P2. Otherwise fall through to the next instruction.
-- IdxLT The P4 register values beginning with P3 form an unpacked index key that omits the PRIMARY KEY or ROWID. Compare this key value against the index that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID on the P1 index.
+- If the P1 index entry is less than or equal to the key value then jump to P2. Otherwise fall through to the next instruction.
+- IdxLT The P4 register values beginning with P3 form an unpacked index key that omits the PRIMARY KEY or ROWID. Compare this key value against the index that P1 is currently pointing to, ignoring the PRIMARY KEY or ROWID on the P1 index.
-- If the P1 index entry is less than the key value then jump to P2. Otherwise fall through to the next instruction.
-- IdxRowid Write into register P2 an integer which is the last entry in the record at the end of the index key pointed to by cursor P1. This integer should be the rowid of the table entry to which this index entry points.
+- If the P1 index entry is less than the key value then jump to P2. Otherwise fall through to the next instruction.
+- IdxRowid Write into register P2 an integer which is the last entry in the record at the end of the index key pointed to by cursor P1. This integer should be the rowid of the table entry to which this index entry points.
-- See also: Rowid, MakeRecord.
-- If Jump to P2 if the value in register P1 is true. The value is considered true if it is numeric and non-zero. If the value in P1 is NULL then take the jump if and only if P3 is non-zero.
-- IfNoHope Register P3 is the first of P4 registers that form an unpacked record. Cursor P1 is an index btree. P2 is a jump destination. In other words, the operands to this opcode are the same as the operands to NotFound and IdxGT.
+- See also: Rowid, MakeRecord.
+- If Jump to P2 if the value in register P1 is true. The value is considered true if it is numeric and non-zero. If the value in P1 is NULL then take the jump if and only if P3 is non-zero.
+- IfNoHope Register P3 is the first of P4 registers that form an unpacked record. Cursor P1 is an index btree. P2 is a jump destination. In other words, the operands to this opcode are the same as the operands to NotFound and IdxGT.
-- This opcode is an optimization attempt only. If this opcode always falls through, the correct answer is still obtained, but extra work is performed.
+- This opcode is an optimization attempt only. If this opcode always falls through, the correct answer is still obtained, but extra work is performed.
-- A value of N in the seekHit flag of cursor P1 means that there exists a key P3:N that will match some record in the index. We want to know if it is possible for a record P3:P4 to match some record in the index. If it is not possible, we can skip some work. So if seekHit is less than P4, attempt to find out if a match is possible by running NotFound.
+- A value of N in the seekHit flag of cursor P1 means that there exists a key P3:N that will match some record in the index. We want to know if it is possible for a record P3:P4 to match some record in the index. If it is not possible, we can skip some work. So if seekHit is less than P4, attempt to find out if a match is possible by running NotFound.
-- This opcode is used in IN clause processing for a multi-column key. If an IN clause is attached to an element of the key other than the left-most element, and if there are no matches on the most recent seek over the whole key, then it might be that one of the key element to the left is prohibiting a match, and hence there is "no hope" of any match regardless of how many IN clause elements are checked. In such a case, we abandon the IN clause search early, using this opcode. The opcode name comes from the fact that the jump is taken if there is "no hope" of achieving a match.
+- This opcode is used in IN clause processing for a multi-column key. If an IN clause is attached to an element of the key other than the left-most element, and if there are no matches on the most recent seek over the whole key, then it might be that one of the key elements to the left is prohibiting a match, and hence there is "no hope" of any match regardless of how many IN clause elements are checked. In such a case, we abandon the IN clause search early, using this opcode. The opcode name comes from the fact that the jump is taken if there is "no hope" of achieving a match.
-- See also: NotFound, SeekHit
-- IfNot Jump to P2 if the value in register P1 is False. The value is considered false if it has a numeric value of zero. If the value in P1 is NULL then take the jump if and only if P3 is non-zero.
-- IfNotOpen If cursor P1 is not open or if P1 is set to a NULL row using the NullRow opcode, then jump to instruction P2. Otherwise, fall through.
-- IfNotZero Register P1 must contain an integer. If the content of register P1 is initially greater than zero, then decrement the value in register P1. If it is non-zero (negative or positive) and then also jump to P2. If register P1 is initially zero, leave it unchanged and fall through.
-- IfNullRow Check the cursor P1 to see if it is currently pointing at a NULL row. If it is, then set register P3 to NULL and jump immediately to P2. If P1 is not on a NULL row, then fall through without making any changes.
+- See also: NotFound, SeekHit
+- IfNot Jump to P2 if the value in register P1 is False. The value is considered false if it has a numeric value of zero. If the value in P1 is NULL then take the jump if and only if P3 is non-zero.
+- IfNotOpen If cursor P1 is not open or if P1 is set to a NULL row using the NullRow opcode, then jump to instruction P2. Otherwise, fall through.
+- IfNotZero Register P1 must contain an integer. If the content of register P1 is initially greater than zero, then decrement the value in register P1. If it is non-zero (negative or positive), then also jump to P2. If register P1 is initially zero, leave it unchanged and fall through.
+- IfNullRow Check the cursor P1 to see if it is currently pointing at a NULL row. If it is, then set register P3 to NULL and jump immediately to P2. If P1 is not on a NULL row, then fall through without making any changes.
-- If P1 is not an open cursor, then this opcode is a no-op.
-- IfPos Register P1 must contain an integer. If the value of register P1 is 1 or greater, subtract P3 from the value in P1 and jump to P2.
+- If P1 is not an open cursor, then this opcode is a no-op.
+- IfPos Register P1 must contain an integer. If the value of register P1 is 1 or greater, subtract P3 from the value in P1 and jump to P2.
-- If the initial value of register P1 is less than 1, then the value is unchanged and control passes through to the next instruction.
-- IfSizeBetween Let N be the approximate number of rows in the table or index with cursor P1 and let X be 10*log2(N) if N is positive or -1 if N is zero.
+- If the initial value of register P1 is less than 1, then the value is unchanged and control passes through to the next instruction.
+- IfSizeBetween Let N be the approximate number of rows in the table or index with cursor P1 and let X be 10\*log2(N) if N is positive or -1 if N is zero.
-- Jump to P2 if X is in between P3 and P4, inclusive.
-- IncrVacuum Perform a single step of the incremental vacuum procedure on the P1 database. If the vacuum has finished, jump to instruction P2. Otherwise, fall through to the next instruction.
-- Init Programs contain a single instance of this opcode as the very first opcode.
+- Jump to P2 if X is in between P3 and P4, inclusive.
+- IncrVacuum Perform a single step of the incremental vacuum procedure on the P1 database. If the vacuum has finished, jump to instruction P2. Otherwise, fall through to the next instruction.
+- Init Programs contain a single instance of this opcode as the very first opcode.
-- If tracing is enabled (by the EpilogLite3_trace()) interface, then the UTF-8 string contained in P4 is emitted on the trace callback. Or if P4 is blank, use the string returned by EpilogLite3_sql().
+- If tracing is enabled (by the EpilogLite3_trace() interface), then the UTF-8 string contained in P4 is emitted on the trace callback. Or if P4 is blank, use the string returned by EpilogLite3_sql().
-- If P2 is not zero, jump to instruction P2.
+- If P2 is not zero, jump to instruction P2.
-- Increment the value of P1 so that Once opcodes will jump the first time they are evaluated for this run.
+- Increment the value of P1 so that Once opcodes will jump the first time they are evaluated for this run.
-- If P3 is not zero, then it is an address to jump to if an EpilogLite_CORRUPT error is encountered.
-- InitCoroutine Set up register P1 so that it will Yield to the coroutine located at address P3.
+- If P3 is not zero, then it is an address to jump to if an EpilogLite_CORRUPT error is encountered.
+- InitCoroutine Set up register P1 so that it will Yield to the coroutine located at address P3.
-- If P2!=0 then the coroutine implementation immediately follows this opcode. So jump over the coroutine implementation to address P2.
+- If P2!=0 then the coroutine implementation immediately follows this opcode. So jump over the coroutine implementation to address P2.
-- See also: EndCoroutine
-- Insert Write an entry into the table of cursor P1. A new entry is created if it doesn't already exist or the data for an existing entry is overwritten. The data is the value MEM_Blob stored in register number P2. The key is stored in register P3. The key must be a MEM_Int.
+- See also: EndCoroutine
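+- A minimal sketch of the InitCoroutine set-up described above follows; the VM shape (registers holding instruction addresses as i64) is an assumption for illustration only.
+
+```rust
+/// Record the coroutine entry point in r[P1] and optionally skip its body.
+fn init_coroutine(regs: &mut [i64], pc: &mut usize, p1: usize, p2: usize, p3: usize) {
+    regs[p1] = p3 as i64; // a later Yield on r[P1] resumes at address P3
+    if p2 != 0 {
+        *pc = p2; // jump over the coroutine implementation that follows this opcode
+    }
+}
+```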
+- Insert Write an entry into the table of cursor P1. A new entry is created if it doesn't already exist or the data for an existing entry is overwritten. The data is the value MEM_Blob stored in register number P2. The key is stored in register P3. The key must be a MEM_Int.
-- If the OPFLAG_NCHANGE flag of P5 is set, then the row change count is incremented (otherwise not). If the OPFLAG_LASTROWID flag of P5 is set, then rowid is stored for subsequent return by the EpilogLite3_last_insert_rowid() function (otherwise it is unmodified).
+- If the OPFLAG_NCHANGE flag of P5 is set, then the row change count is incremented (otherwise not). If the OPFLAG_LASTROWID flag of P5 is set, then rowid is stored for subsequent return by the EpilogLite3_last_insert_rowid() function (otherwise it is unmodified).
-- If the OPFLAG_USESEEKRESULT flag of P5 is set, the implementation might run faster by avoiding an unnecessary seek on cursor P1. However, the OPFLAG_USESEEKRESULT flag must only be set if there have been no prior seeks on the cursor or if the most recent seek used a key equal to P3.
+- If the OPFLAG_USESEEKRESULT flag of P5 is set, the implementation might run faster by avoiding an unnecessary seek on cursor P1. However, the OPFLAG_USESEEKRESULT flag must only be set if there have been no prior seeks on the cursor or if the most recent seek used a key equal to P3.
-- If the OPFLAG_ISUPDATE flag is set, then this opcode is part of an UPDATE operation. Otherwise (if the flag is clear) then this opcode is part of an INSERT operation. The difference is only important to the update hook.
+- If the OPFLAG_ISUPDATE flag is set, then this opcode is part of an UPDATE operation. Otherwise (if the flag is clear) then this opcode is part of an INSERT operation. The difference is only important to the update hook.
-- Parameter P4 may point to a Table structure, or may be NULL. If it is not NULL, then the update-hook (EpilogLite3.xUpdateCallback) is invoked following a successful insert.
+- Parameter P4 may point to a Table structure, or may be NULL. If it is not NULL, then the update-hook (EpilogLite3.xUpdateCallback) is invoked following a successful insert.
-- (WARNING/TODO: If P1 is a pseudo-cursor and P2 is dynamically allocated, then ownership of P2 is transferred to the pseudo-cursor and register P2 becomes ephemeral. If the cursor is changed, the value of register P2 will then change. Make sure this does not cause any problems.)
+- (WARNING/TODO: If P1 is a pseudo-cursor and P2 is dynamically allocated, then ownership of P2 is transferred to the pseudo-cursor and register P2 becomes ephemeral. If the cursor is changed, the value of register P2 will then change. Make sure this does not cause any problems.)
-- This instruction only works on tables. The equivalent instruction for indices is IdxInsert.
-- Int64 P4 is a pointer to a 64-bit integer value. Write that value into register P2.
-- IntCopy Transfer the integer value held in register P1 into register P2.
+- This instruction only works on tables. The equivalent instruction for indices is IdxInsert.
+- Int64 P4 is a pointer to a 64-bit integer value. Write that value into register P2.
+- IntCopy Transfer the integer value held in register P1 into register P2.
-- This is an optimized version of SCopy that works only for integer values.
-- Integer The 32-bit integer value P1 is written into register P2.
-- IntegrityCk Do an analysis of the currently open database. Store in register (P1+1) the text of an error message describing any problems. If no problems are found, store a NULL in register (P1+1).
+- This is an optimized version of SCopy that works only for integer values.
+- Integer The 32-bit integer value P1 is written into register P2.
+- IntegrityCk Do an analysis of the currently open database. Store in register (P1+1) the text of an error message describing any problems. If no problems are found, store a NULL in register (P1+1).
-- The register (P1) contains one less than the maximum number of allowed errors. At most reg(P1) errors will be reported. In other words, the analysis stops as soon as reg(P1) errors are seen. Reg(P1) is updated with the number of errors remaining.
+- The register (P1) contains one less than the maximum number of allowed errors. At most reg(P1) errors will be reported. In other words, the analysis stops as soon as reg(P1) errors are seen. Reg(P1) is updated with the number of errors remaining.
-- The root page numbers of all tables in the database are integers stored in P4_INTARRAY argument.
+- The root page numbers of all tables in the database are integers stored in P4_INTARRAY argument.
-- If P5 is not zero, the check is done on the auxiliary database file, not the main database file.
+- If P5 is not zero, the check is done on the auxiliary database file, not the main database file.
-- This opcode is used to implement the integrity_check pragma.
-- IsNull Jump to P2 if the value in register P1 is NULL.
-- IsTrue This opcode implements the IS TRUE, IS FALSE, IS NOT TRUE, and IS NOT FALSE operators.
+- This opcode is used to implement the integrity_check pragma.
+- IsNull Jump to P2 if the value in register P1 is NULL.
+- IsTrue This opcode implements the IS TRUE, IS FALSE, IS NOT TRUE, and IS NOT FALSE operators.
-- Interpret the value in register P1 as a boolean value. Store that boolean (a 0 or 1) in register P2. Or if the value in register P1 is NULL, then the P3 is stored in register P2. Invert the answer if P4 is 1.
+- Interpret the value in register P1 as a boolean value. Store that boolean (a 0 or 1) in register P2. Or if the value in register P1 is NULL, then P3 is stored in register P2. Invert the answer if P4 is 1.
-- The logic is summarized like this:
+- The logic is summarized like this:
-- If P3==0 and P4==0 then r[P2] := r[P1] IS TRUE
-- If P3==1 and P4==1 then r[P2] := r[P1] IS FALSE
-- If P3==0 and P4==1 then r[P2] := r[P1] IS NOT TRUE
-- If P3==1 and P4==0 then r[P2] := r[P1] IS NOT FALSE
+- If P3==0 and P4==0 then r\\[P2] := r\\[P1] IS TRUE
+- If P3==1 and P4==1 then r\\[P2] := r\\[P1] IS FALSE
+- If P3==0 and P4==1 then r\\[P2] := r\\[P1] IS NOT TRUE
+- If P3==1 and P4==0 then r\\[P2] := r\\[P1] IS NOT FALSE
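+- The four cases above reduce to one small computation, sketched here with hypothetical names; `None` models a NULL register value.
+
+```rust
+/// IsTrue: interpret r[P1] as a boolean, fall back to P3 on NULL, invert when P4 == 1.
+fn is_true_result(reg_p1: Option<bool>, p3: bool, p4: bool) -> bool {
+    let base = reg_p1.unwrap_or(p3);
+    base ^ p4
+}
+
+// r[P2] := r[P1] IS TRUE      => is_true_result(v, false, false)
+// r[P2] := r[P1] IS FALSE     => is_true_result(v, true,  true)
+// r[P2] := r[P1] IS NOT TRUE  => is_true_result(v, false, true)
+// r[P2] := r[P1] IS NOT FALSE => is_true_result(v, true,  false)
+```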
-- IsType Jump to P2 if the type of a column in a btree is one of the types specified by the P5 bitmask.
+- IsType Jump to P2 if the type of a column in a btree is one of the types specified by the P5 bitmask.
-- P1 is normally a cursor on a btree for which the row decode cache is valid through at least column P3. In other words, there should have been a prior Column for column P3 or greater. If the cursor is not valid, then this opcode might give spurious results. The the btree row has fewer than P3 columns, then use P4 as the datatype.
+- P1 is normally a cursor on a btree for which the row decode cache is valid through at least column P3. In other words, there should have been a prior Column for column P3 or greater. If the cursor is not valid, then this opcode might give spurious results. If the btree row has fewer than P3 columns, then use P4 as the datatype.
-- If P1 is -1, then P3 is a register number and the datatype is taken from the value in that register.
+- If P1 is -1, then P3 is a register number and the datatype is taken from the value in that register.
-- P5 is a bitmask of data types. EpilogLite_INTEGER is the least significant (0x01) bit. EpilogLite_FLOAT is the 0x02 bit. EpilogLite_TEXT is 0x04. EpilogLite_BLOB is 0x08. EpilogLite_NULL is 0x10.
+- P5 is a bitmask of data types. EpilogLite_INTEGER is the least significant (0x01) bit. EpilogLite_FLOAT is the 0x02 bit. EpilogLite_TEXT is 0x04. EpilogLite_BLOB is 0x08. EpilogLite_NULL is 0x10.
-- WARNING: This opcode does not reliably distinguish between NULL and REAL when P1>=0. If the database contains a NaN value, this opcode will think that the datatype is REAL when it should be NULL. When P1<0 and the value is already stored in register P3, then this opcode does reliably distinguish between NULL and REAL. The problem only arises then P1>=0.
+- WARNING: This opcode does not reliably distinguish between NULL and REAL when P1>=0. If the database contains a NaN value, this opcode will think that the datatype is REAL when it should be NULL. When P1<0 and the value is already stored in register P3, then this opcode does reliably distinguish between NULL and REAL. The problem only arises when P1>=0.
-- Take the jump to address P2 if and only if the datatype of the value determined by P1 and P3 corresponds to one of the bits in the P5 bitmask.
-- JournalMode Change the journal mode of database P1 to P3. P3 must be one of the PAGER_JOURNALMODE_XXX values. If changing between the various rollback modes (delete, truncate, persist, off and memory), this is a simple operation. No IO is required.
+- Take the jump to address P2 if and only if the datatype of the value determined by P1 and P3 corresponds to one of the bits in the P5 bitmask.
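+- The datatype test reduces to a single bitmask check, sketched below with hypothetical constant names mirroring the P5 bit assignments listed above.
+
+```rust
+const TYPE_INTEGER: u8 = 0x01;
+const TYPE_FLOAT: u8 = 0x02;
+const TYPE_TEXT: u8 = 0x04;
+const TYPE_BLOB: u8 = 0x08;
+const TYPE_NULL: u8 = 0x10;
+
+/// Take the jump when the bit for the value's datatype is set in the P5 mask.
+fn is_type_jump(datatype_bit: u8, p5_mask: u8) -> bool {
+    (datatype_bit & p5_mask) != 0
+}
+
+// Example: jump if the column is INTEGER or NULL.
+// is_type_jump(TYPE_INTEGER, TYPE_INTEGER | TYPE_NULL) == true
+```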
+- JournalMode Change the journal mode of database P1 to P3. P3 must be one of the PAGER_JOURNALMODE_XXX values. If changing between the various rollback modes (delete, truncate, persist, off and memory), this is a simple operation. No IO is required.
-- If changing into or out of WAL mode the procedure is more complicated.
+- If changing into or out of WAL mode the procedure is more complicated.
-- Write a string containing the final journal-mode to register P2.
-- Jump Jump to the instruction at address P1, P2, or P3 depending on whether in the most recent Compare instruction the P1 vector was less than, equal to, or greater than the P2 vector, respectively.
+- Write a string containing the final journal-mode to register P2.
+- Jump Jump to the instruction at address P1, P2, or P3 depending on whether in the most recent Compare instruction the P1 vector was less than, equal to, or greater than the P2 vector, respectively.
-- This opcode must immediately follow an Compare opcode.
-- Last The next use of the Rowid or Column or Prev instruction for P1 will refer to the last entry in the database table or index. If the table or index is empty and P2>0, then jump immediately to P2. If P2 is 0 or if the table or index is not empty, fall through to the following instruction.
+- This opcode must immediately follow a Compare opcode.
+- Last The next use of the Rowid or Column or Prev instruction for P1 will refer to the last entry in the database table or index. If the table or index is empty and P2>0, then jump immediately to P2. If P2 is 0 or if the table or index is not empty, fall through to the following instruction.
-- This opcode leaves the cursor configured to move in reverse order, from the end toward the beginning. In other words, the cursor is configured to use Prev, not Next.
-- Le This works just like the Lt opcode except that the jump is taken if the content of register P3 is less than or equal to the content of register P1. See the Lt opcode for additional information.
-- LoadAnalysis Read the EpilogLite_stat1 table for database P1 and load the content of that table into the internal index hash table. This will cause the analysis to be used when preparing all subsequent queries.
-- Lt Compare the values in register P1 and P3. If reg(P3)0 then P3 is a register in the root frame of this VDBE that holds the largest previously generated record number. No new record numbers are allowed to be less than this value. When this value reaches its maximum, an EpilogLite_FULL error is generated. The P3 register is updated with the ' generated record number. This P3 mechanism is used to help implement the AUTOINCREMENT feature.
-- Next Advance cursor P1 so that it points to the next key/data pair in its table or index. If there are no more key/value pairs then fall through to the following instruction. But if the cursor advance was successful, jump immediately to P2.
+- If P3>0 then P3 is a register in the root frame of this VDBE that holds the largest previously generated record number. No new record numbers are allowed to be less than this value. When this value reaches its maximum, an EpilogLite_FULL error is generated. The P3 register is updated with the newly generated record number. This P3 mechanism is used to help implement the AUTOINCREMENT feature.
+- Next Advance cursor P1 so that it points to the next key/data pair in its table or index. If there are no more key/value pairs then fall through to the following instruction. But if the cursor advance was successful, jump immediately to P2.
-- The Next opcode is only valid following an SeekGT, SeekGE, or Rewind opcode used to position the cursor. Next is not allowed to follow SeekLT, SeekLE, or Last.
+- The Next opcode is only valid following a SeekGT, SeekGE, or Rewind opcode used to position the cursor. Next is not allowed to follow SeekLT, SeekLE, or Last.
-- The P1 cursor must be for a real table, not a pseudo-table. P1 must have been opened prior to this opcode or the program will segfault.
+- The P1 cursor must be for a real table, not a pseudo-table. P1 must have been opened prior to this opcode or the program will segfault.
-- The P3 value is a hint to the btree implementation. If P3==1, that means P1 is an SQL index and that this instruction could have been omitted if that index had been unique. P3 is usually 0. P3 is always either 0 or 1.
+- The P3 value is a hint to the btree implementation. If P3==1, that means P1 is an SQL index and that this instruction could have been omitted if that index had been unique. P3 is usually 0. P3 is always either 0 or 1.
-- If P5 is positive and the jump is taken, then event counter number P5-1 in the prepared statement is incremented.
+- If P5 is positive and the jump is taken, then event counter number P5-1 in the prepared statement is incremented.
-- See also: Prev
-- NoConflict If P4==0 then register P3 holds a blob constructed by MakeRecord. If P4>0 then register P3 is the first of P4 registers that form an unpacked record.
+- See also: Prev
+- NoConflict If P4==0 then register P3 holds a blob constructed by MakeRecord. If P4>0 then register P3 is the first of P4 registers that form an unpacked record.
-- Cursor P1 is on an index btree. If the record identified by P3 and P4 contains any NULL value, jump immediately to P2. If all terms of the record are not-NULL then a check is done to determine if any row in the P1 index btree has a matching key prefix. If there are no matches, jump immediately to P2. If there is a match, fall through and leave the P1 cursor pointing to the matching row.
+- Cursor P1 is on an index btree. If the record identified by P3 and P4 contains any NULL value, jump immediately to P2. If all terms of the record are not-NULL then a check is done to determine if any row in the P1 index btree has a matching key prefix. If there are no matches, jump immediately to P2. If there is a match, fall through and leave the P1 cursor pointing to the matching row.
-- This opcode is similar to NotFound with the exceptions that the branch is always taken if any part of the search key input is NULL.
+- This opcode is similar to NotFound with the exception that the branch is always taken if any part of the search key input is NULL.
-- This operation leaves the cursor in a state where it cannot be advanced in either direction. In other words, the Next and Prev opcodes do not work after this operation.
+- This operation leaves the cursor in a state where it cannot be advanced in either direction. In other words, the Next and Prev opcodes do not work after this operation.
-- See also: NotFound, Found, NotExists
-- Noop Do nothing. Continue downward to the next opcode.
-- Not Interpret the value in register P1 as a boolean value. Store the boolean complement in register P2. If the value in register P1 is NULL, then a NULL is stored in P2.
-- NotExists P1 is the index of a cursor open on an SQL table btree (with integer keys). P3 is an integer rowid. If P1 does not contain a record with rowid P3 then jump immediately to P2. Or, if P2 is 0, raise an EpilogLite_CORRUPT error. If P1 does contain a record with rowid P3 then leave the cursor pointing at that record and fall through to the next instruction.
+- See also: NotFound, Found, NotExists
+- Noop Do nothing. Continue downward to the next opcode.
+- Not Interpret the value in register P1 as a boolean value. Store the boolean complement in register P2. If the value in register P1 is NULL, then a NULL is stored in P2.
+- NotExists P1 is the index of a cursor open on an SQL table btree (with integer keys). P3 is an integer rowid. If P1 does not contain a record with rowid P3 then jump immediately to P2. Or, if P2 is 0, raise an EpilogLite_CORRUPT error. If P1 does contain a record with rowid P3 then leave the cursor pointing at that record and fall through to the next instruction.
-- The SeekRowid opcode performs the same operation but also allows the P3 register to contain a non-integer value, in which case the jump is always taken. This opcode requires that P3 always contain an integer.
+- The SeekRowid opcode performs the same operation but also allows the P3 register to contain a non-integer value, in which case the jump is always taken. This opcode requires that P3 always contain an integer.
-- The NotFound opcode performs the same operation on index btrees (with arbitrary multi-value keys).
+- The NotFound opcode performs the same operation on index btrees (with arbitrary multi-value keys).
-- This opcode leaves the cursor in a state where it cannot be advanced in either direction. In other words, the Next and Prev opcodes will not work following this opcode.
+- This opcode leaves the cursor in a state where it cannot be advanced in either direction. In other words, the Next and Prev opcodes will not work following this opcode.
-- See also: Found, NotFound, NoConflict, SeekRowid
-- NotFound If P4==0 then register P3 holds a blob constructed by MakeRecord. If P4>0 then register P3 is the first of P4 registers that form an unpacked record.
+- See also: Found, NotFound, NoConflict, SeekRowid
+- NotFound If P4==0 then register P3 holds a blob constructed by MakeRecord. If P4>0 then register P3 is the first of P4 registers that form an unpacked record.
-- Cursor P1 is on an index btree. If the record identified by P3 and P4 is not the prefix of any entry in P1 then a jump is made to P2. If P1 does contain an entry whose prefix matches the P3/P4 record then control falls through to the next instruction and P1 is left pointing at the matching entry.
+- Cursor P1 is on an index btree. If the record identified by P3 and P4 is not the prefix of any entry in P1 then a jump is made to P2. If P1 does contain an entry whose prefix matches the P3/P4 record then control falls through to the next instruction and P1 is left pointing at the matching entry.
-- This operation leaves the cursor in a state where it cannot be advanced in either direction. In other words, the Next and Prev opcodes do not work after this operation.
+- This operation leaves the cursor in a state where it cannot be advanced in either direction. In other words, the Next and Prev opcodes do not work after this operation.
-- See also: Found, NotExists, NoConflict, IfNoHope
-- NotNull Jump to P2 if the value in register P1 is not NULL.
-- Null Write a NULL into registers P2. If P3 greater than P2, then also write NULL into register P3 and every register in between P2 and P3. If P3 is less than P2 (typically P3 is zero) then only register P2 is set to NULL.
+- See also: Found, NotExists, NoConflict, IfNoHope
+- NotNull Jump to P2 if the value in register P1 is not NULL.
+- Null Write a NULL into register P2. If P3 is greater than P2, then also write NULL into register P3 and every register in between P2 and P3. If P3 is less than P2 (typically P3 is zero) then only register P2 is set to NULL.
-- If the P1 value is non-zero, then also set the MEM_Cleared flag so that NULL values will not compare equal even if EpilogLite_NULLEQ is set on Ne or Eq.
-- NullRow Move the cursor P1 to a null row. Any Column operations that occur while the cursor is on the null row will always write a NULL.
+- If the P1 value is non-zero, then also set the MEM_Cleared flag so that NULL values will not compare equal even if EpilogLite_NULLEQ is set on Ne or Eq.
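+- The register range cleared by Null can be sketched as follows; registers are modeled as `Option` values and the MEM_Cleared flag is not shown.
+
+```rust
+/// Write NULL into r[P2], and into r[P2..=P3] when P3 > P2.
+fn op_null(regs: &mut [Option<i64>], p2: usize, p3: usize) {
+    let end = if p3 > p2 { p3 } else { p2 }; // P3 <= P2 (often 0): only r[P2] is cleared
+    for r in p2..=end {
+        regs[r] = None;
+    }
+}
+```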
+- NullRow Move the cursor P1 to a null row. Any Column operations that occur while the cursor is on the null row will always write a NULL.
-- If cursor P1 is not previously opened, open it now to a special pseudo-cursor that always returns NULL for every column.
-- Offset Store in register r[P3] the byte offset into the database file that is the start of the payload for the record at which that cursor P1 is currently pointing.
+- If cursor P1 is not previously opened, open it now to a special pseudo-cursor that always returns NULL for every column.
+- Offset Store in register r\\[P3] the byte offset into the database file that is the start of the payload for the record at which cursor P1 is currently pointing.
-- P2 is the column number for the argument to the EpilogLite_offset() function. This opcode does not use P2 itself, but the P2 value is used by the code generator. The P1, P2, and P3 operands to this opcode are the same as for Column.
+- P2 is the column number for the argument to the EpilogLite_offset() function. This opcode does not use P2 itself, but the P2 value is used by the code generator. The P1, P2, and P3 operands to this opcode are the same as for Column.
-- This opcode is only available if EpilogLite is compiled with the -DEpilogLite_ENABLE_OFFSET_SQL_FUNC option.
-- OffsetLimit This opcode performs a commonly used computation associated with LIMIT and OFFSET processing. r[P1] holds the limit counter. r[P3] holds the offset counter. The opcode computes the combined value of the LIMIT and OFFSET and stores that value in r[P2]. The r[P2] value computed is the total number of rows that will need to be visited in order to complete the query.
+- This opcode is only available if EpilogLite is compiled with the -DEpilogLite_ENABLE_OFFSET_SQL_FUNC option.
+- OffsetLimit This opcode performs a commonly used computation associated with LIMIT and OFFSET processing. r\\[P1] holds the limit counter. r\\[P3] holds the offset counter. The opcode computes the combined value of the LIMIT and OFFSET and stores that value in r\\[P2]. The r\\[P2] value computed is the total number of rows that will need to be visited in order to complete the query.
-- If r[P3] is zero or negative, that means there is no OFFSET and r[P2] is set to be the value of the LIMIT, r[P1].
+- If r\\[P3] is zero or negative, that means there is no OFFSET and r\\[P2] is set to be the value of the LIMIT, r\\[P1].
-- if r[P1] is zero or negative, that means there is no LIMIT and r[P2] is set to -1.
+- if r\\[P1] is zero or negative, that means there is no LIMIT and r\\[P2] is set to -1.
-- Otherwise, r[P2] is set to the sum of r[P1] and r[P3].
-- Once Fall through to the next instruction the first time this opcode is encountered on each invocation of the byte-code program. Jump to P2 on the second and all subsequent encounters during the same invocation.
+- Otherwise, r\\[P2] is set to the sum of r\\[P1] and r\\[P3].
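+- Those three rules amount to the following small computation, shown as a hedged sketch; the overflow fallback to -1 is an assumption rather than documented behavior.
+
+```rust
+/// Combined LIMIT/OFFSET row count: r[P2] = offset_limit(r[P1], r[P3]).
+fn offset_limit(limit: i64, offset: i64) -> i64 {
+    if limit <= 0 {
+        -1 // no LIMIT: every row may need to be visited
+    } else if offset <= 0 {
+        limit // no OFFSET: only LIMIT rows need to be visited
+    } else {
+        limit.checked_add(offset).unwrap_or(-1) // otherwise the sum, guarding overflow
+    }
+}
+```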
+- Once Fall through to the next instruction the first time this opcode is encountered on each invocation of the byte-code program. Jump to P2 on the second and all subsequent encounters during the same invocation.
-- Top-level programs determine first invocation by comparing the P1 operand against the P1 operand on the Init opcode at the beginning of the program. If the P1 values differ, then fall through and make the P1 of this opcode equal to the P1 of Init. If P1 values are the same then take the jump.
+- Top-level programs determine first invocation by comparing the P1 operand against the P1 operand on the Init opcode at the beginning of the program. If the P1 values differ, then fall through and make the P1 of this opcode equal to the P1 of Init. If P1 values are the same then take the jump.
-- For subprograms, there is a bitmask in the VdbeFrame that determines whether or not the jump should be taken. The bitmask is necessary because the self-altering code trick does not work for recursive triggers.
-- OpenAutoindex This opcode works the same as OpenEphemeral. It has a different name to distinguish its use. Tables created using by this opcode will be used for automatically created transient indices in joins.
-- OpenDup Open a new cursor P1 that points to the same ephemeral table as cursor P2. The P2 cursor must have been opened by a prior OpenEphemeral opcode. Only ephemeral cursors may be duplicated.
+- For subprograms, there is a bitmask in the VdbeFrame that determines whether or not the jump should be taken. The bitmask is necessary because the self-altering code trick does not work for recursive triggers.
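+- For the top-level case, the self-altering P1 check described above can be sketched like this; the function shape is hypothetical.
+
+```rust
+/// Returns true when the Once jump should be taken during this invocation.
+fn once_should_jump(once_p1: &mut u32, init_p1: u32) -> bool {
+    if *once_p1 == init_p1 {
+        true // P1 values match: already executed this run, take the jump
+    } else {
+        *once_p1 = init_p1; // first encounter: copy Init's P1 and fall through
+        false
+    }
+}
+```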
+- OpenAutoindex This opcode works the same as OpenEphemeral. It has a different name to distinguish its use. Tables created by this opcode will be used for automatically created transient indices in joins.
+- OpenDup Open a new cursor P1 that points to the same ephemeral table as cursor P2. The P2 cursor must have been opened by a prior OpenEphemeral opcode. Only ephemeral cursors may be duplicated.
-- Duplicate ephemeral cursors are used for self-joins of materialized views.
-- OpenEphemeral Open a new cursor P1 to a transient table. The cursor is always opened read/write even if the main database is read-only. The ephemeral table is deleted automatically when the cursor is closed.
+- Duplicate ephemeral cursors are used for self-joins of materialized views.
+- OpenEphemeral Open a new cursor P1 to a transient table. The cursor is always opened read/write even if the main database is read-only. The ephemeral table is deleted automatically when the cursor is closed.
-- If the cursor P1 is already opened on an ephemeral table, the table is cleared (all content is erased).
+- If the cursor P1 is already opened on an ephemeral table, the table is cleared (all content is erased).
-- P2 is the number of columns in the ephemeral table. The cursor points to a BTree table if P4==0 and to a BTree index if P4 is not 0. If P4 is not NULL, it points to a KeyInfo structure that defines the format of keys in the index.
+- P2 is the number of columns in the ephemeral table. The cursor points to a BTree table if P4==0 and to a BTree index if P4 is not 0. If P4 is not NULL, it points to a KeyInfo structure that defines the format of keys in the index.
-- The P5 parameter can be a mask of the BTREE_* flags defined in btree.h. These flags control aspects of the operation of the btree. The BTREE_OMIT_JOURNAL and BTREE_SINGLE flags are added automatically.
+- The P5 parameter can be a mask of the BTREE\_\* flags defined in btree.h. These flags control aspects of the operation of the btree. The BTREE_OMIT_JOURNAL and BTREE_SINGLE flags are added automatically.
-- If P3 is positive, then reg[P3] is modified slightly so that it can be used as zero-length data for Insert. This is an optimization that avoids an extra Blob opcode to initialize that register.
-- OpenPseudo Open a new cursor that points to a fake table that contains a single row of data. The content of that one row is the content of memory register P2. In other words, cursor P1 becomes an alias for the MEM_Blob content contained in register P2.
+- If P3 is positive, then reg\\[P3] is modified slightly so that it can be used as zero-length data for Insert. This is an optimization that avoids an extra Blob opcode to initialize that register.
+- OpenPseudo Open a new cursor that points to a fake table that contains a single row of data. The content of that one row is the content of memory register P2. In other words, cursor P1 becomes an alias for the MEM_Blob content contained in register P2.
-- A pseudo-table created by this opcode is used to hold a single row output from the sorter so that the row can be decomposed into individual columns using the Column opcode. The Column opcode is the only cursor opcode that works with a pseudo-table.
+- A pseudo-table created by this opcode is used to hold a single row output from the sorter so that the row can be decomposed into individual columns using the Column opcode. The Column opcode is the only cursor opcode that works with a pseudo-table.
-- P3 is the number of fields in the records that will be stored by the pseudo-table. If P2 is 0 or negative then the pseudo-cursor will return NULL for every column.
-- OpenRead Open a read-only cursor for the database table whose root page is P2 in a database file. The database file is determined by P3. P3==0 means the main database, P3==1 means the database used for temporary tables, and P3>1 means used the corresponding attached database. Give the new cursor an identifier of P1. The P1 values need not be contiguous but all P1 values should be small integers. It is an error for P1 to be negative.
+- P3 is the number of fields in the records that will be stored by the pseudo-table. If P2 is 0 or negative then the pseudo-cursor will return NULL for every column.
+- OpenRead Open a read-only cursor for the database table whose root page is P2 in a database file. The database file is determined by P3. P3==0 means the main database, P3==1 means the database used for temporary tables, and P3>1 means use the corresponding attached database. Give the new cursor an identifier of P1. The P1 values need not be contiguous but all P1 values should be small integers. It is an error for P1 to be negative.
-- Allowed P5 bits:
+- Allowed P5 bits:
-- 0x02 OPFLAG_SEEKEQ: This cursor will only be used for equality lookups (implemented as a pair of opcodes SeekGE/IdxGT of SeekLE/IdxLT)
+- 0x02 OPFLAG_SEEKEQ: This cursor will only be used for equality lookups (implemented as a pair of opcodes SeekGE/IdxGT or SeekLE/IdxLT)
-- The P4 value may be either an integer (P4_INT32) or a pointer to a KeyInfo structure (P4_KEYINFO). If it is a pointer to a KeyInfo object, then table being opened must be an index b-tree where the KeyInfo object defines the content and collating sequence of that index b-tree. Otherwise, if P4 is an integer value, then the table being opened must be a table b-tree with a number of columns no less than the value of P4.
+- The P4 value may be either an integer (P4_INT32) or a pointer to a KeyInfo structure (P4_KEYINFO). If it is a pointer to a KeyInfo object, then the table being opened must be an index b-tree where the KeyInfo object defines the content and collating sequence of that index b-tree. Otherwise, if P4 is an integer value, then the table being opened must be a table b-tree with a number of columns no less than the value of P4.
-- See also: OpenWrite, ReopenIdx
-- OpenWrite Open a read/write cursor named P1 on the table or index whose root page is P2 (or whose root page is held in register P2 if the OPFLAG_P2ISREG bit is set in P5 - see below).
+- See also: OpenWrite, ReopenIdx
+- OpenWrite Open a read/write cursor named P1 on the table or index whose root page is P2 (or whose root page is held in register P2 if the OPFLAG_P2ISREG bit is set in P5 - see below).
-- The P4 value may be either an integer (P4_INT32) or a pointer to a KeyInfo structure (P4_KEYINFO). If it is a pointer to a KeyInfo object, then table being opened must be an index b-tree where the KeyInfo object defines the content and collating sequence of that index b-tree. Otherwise, if P4 is an integer value, then the table being opened must be a table b-tree with a number of columns no less than the value of P4.
+- The P4 value may be either an integer (P4_INT32) or a pointer to a KeyInfo structure (P4_KEYINFO). If it is a pointer to a KeyInfo object, then the table being opened must be an index b-tree where the KeyInfo object defines the content and collating sequence of that index b-tree. Otherwise, if P4 is an integer value, then the table being opened must be a table b-tree with a number of columns no less than the value of P4.
-- Allowed P5 bits:
+- Allowed P5 bits:
-- 0x02 OPFLAG_SEEKEQ: This cursor will only be used for equality lookups (implemented as a pair of opcodes SeekGE/IdxGT of SeekLE/IdxLT)
-- 0x08 OPFLAG_FORDELETE: This cursor is used only to seek and subsequently delete entries in an index btree. This is a hint to the storage engine that the storage engine is allowed to ignore. The hint is not used by the official EpilogLite b*tree storage engine, but is used by COMDB2.
-- 0x10 OPFLAG_P2ISREG: Use the content of register P2 as the root page, not the value of P2 itself.
+- 0x02 OPFLAG_SEEKEQ: This cursor will only be used for equality lookups (implemented as a pair of opcodes SeekGE/IdxGT or SeekLE/IdxLT)
+- 0x08 OPFLAG_FORDELETE: This cursor is used only to seek and subsequently delete entries in an index btree. This is a hint to the storage engine that the storage engine is allowed to ignore. The hint is not used by the official EpilogLite b\*tree storage engine, but is used by COMDB2.
+- 0x10 OPFLAG_P2ISREG: Use the content of register P2 as the root page, not the value of P2 itself.
-- This instruction works like OpenRead except that it opens the cursor in read/write mode.
+- This instruction works like OpenRead except that it opens the cursor in read/write mode.
-- See also: OpenRead, ReopenIdx
-- Or Take the logical OR of the values in register P1 and P2 and store the answer in register P3.
+- See also: OpenRead, ReopenIdx
+- Or Take the logical OR of the values in register P1 and P2 and store the answer in register P3.
-- If either P1 or P2 is nonzero (true) then the result is 1 (true) even if the other input is NULL. A NULL and false or two NULLs give a NULL output.
-- Pagecount Write the current number of pages in database P1 to memory cell P2.
-- Param This opcode is only ever present in sub-programs called via the Program instruction. Copy a value currently stored in a memory cell of the calling (parent) frame to cell P2 in the current frames address space. This is used by trigger programs to access the new.* and old.* values.
+- If either P1 or P2 is nonzero (true) then the result is 1 (true) even if the other input is NULL. A NULL and false or two NULLs give a NULL output.
+- Pagecount Write the current number of pages in database P1 to memory cell P2.
+- Param This opcode is only ever present in sub-programs called via the Program instruction. Copy a value currently stored in a memory cell of the calling (parent) frame to cell P2 in the current frame's address space. This is used by trigger programs to access the new.\* and old.\* values.
-- The address of the cell in the parent frame is determined by adding the value of the P1 argument to the value of the P1 argument to the calling Program instruction.
-- ParseSchema Read and parse all entries from the schema table of database P1 that match the WHERE clause P4. If P4 is a NULL pointer, then the entire schema for P1 is reparsed.
+- The address of the cell in the parent frame is determined by adding the value of the P1 argument to the value of the P1 argument to the calling Program instruction.
+- ParseSchema Read and parse all entries from the schema table of database P1 that match the WHERE clause P4. If P4 is a NULL pointer, then the entire schema for P1 is reparsed.
-- This opcode invokes the parser to create a new virtual machine, then runs the new virtual machine. It is thus a re-entrant opcode.
-- Permutation Set the permutation used by the Compare operator in the next instruction. The permutation is stored in the P4 operand.
+- This opcode invokes the parser to create a new virtual machine, then runs the new virtual machine. It is thus a re-entrant opcode.
+- Permutation Set the permutation used by the Compare operator in the next instruction. The permutation is stored in the P4 operand.
-- The permutation is only valid for the next opcode which must be an Compare that has the OPFLAG_PERMUTE bit set in P5.
+- The permutation is only valid for the next opcode, which must be a Compare that has the OPFLAG_PERMUTE bit set in P5.
-- The first integer in the P4 integer array is the length of the array and does not become part of the permutation.
-- Prev Back up cursor P1 so that it points to the previous key/data pair in its table or index. If there is no previous key/value pairs then fall through to the following instruction. But if the cursor backup was successful, jump immediately to P2.
+- The first integer in the P4 integer array is the length of the array and does not become part of the permutation.
+- Prev Back up cursor P1 so that it points to the previous key/data pair in its table or index. If there are no previous key/value pairs, then fall through to the following instruction. But if the cursor backup was successful, jump immediately to P2.
-- The Prev opcode is only valid following an SeekLT, SeekLE, or Last opcode used to position the cursor. Prev is not allowed to follow SeekGT, SeekGE, or Rewind.
+- The Prev opcode is only valid following a SeekLT, SeekLE, or Last opcode used to position the cursor. Prev is not allowed to follow SeekGT, SeekGE, or Rewind.
-- The P1 cursor must be for a real table, not a pseudo-table. If P1 is not open then the behavior is undefined.
+- The P1 cursor must be for a real table, not a pseudo-table. If P1 is not open then the behavior is undefined.
-- The P3 value is a hint to the btree implementation. If P3==1, that means P1 is an SQL index and that this instruction could have been omitted if that index had been unique. P3 is usually 0. P3 is always either 0 or 1.
+- The P3 value is a hint to the btree implementation. If P3==1, that means P1 is an SQL index and that this instruction could have been omitted if that index had been unique. P3 is usually 0. P3 is always either 0 or 1.
-- If P5 is positive and the jump is taken, then event counter number P5-1 in the prepared statement is incremented.
-- Program Execute the trigger program passed as P4 (type P4_SUBPROGRAM).
+- If P5 is positive and the jump is taken, then event counter number P5-1 in the prepared statement is incremented.
+- Program Execute the trigger program passed as P4 (type P4_SUBPROGRAM).
-- P1 contains the address of the memory cell that contains the first memory cell in an array of values used as arguments to the sub-program. P2 contains the address to jump to if the sub-program throws an IGNORE exception using the RAISE() function. P2 might be zero, if there is no possibility that an IGNORE exception will be raised. Register P3 contains the address of a memory cell in this (the parent) VM that is used to allocate the memory required by the sub-vdbe at runtime.
+- P1 contains the address of the memory cell that contains the first memory cell in an array of values used as arguments to the sub-program. P2 contains the address to jump to if the sub-program throws an IGNORE exception using the RAISE() function. P2 might be zero, if there is no possibility that an IGNORE exception will be raised. Register P3 contains the address of a memory cell in this (the parent) VM that is used to allocate the memory required by the sub-vdbe at runtime.
-- P4 is a pointer to the VM containing the trigger program.
+- P4 is a pointer to the VM containing the trigger program.
-- If P5 is non-zero, then recursive program invocation is enabled.
-- PureFunc Invoke a user function (P4 is a pointer to an EpilogLite3_context object that contains a pointer to the function to be run) with arguments taken from register P2 and successors. The number of arguments is in the EpilogLite3_context object that P4 points to. The result of the function is stored in register P3. Register P3 must not be one of the function inputs.
+- If P5 is non-zero, then recursive program invocation is enabled.
+- PureFunc Invoke a user function (P4 is a pointer to an EpilogLite3_context object that contains a pointer to the function to be run) with arguments taken from register P2 and successors. The number of arguments is in the EpilogLite3_context object that P4 points to. The result of the function is stored in register P3. Register P3 must not be one of the function inputs.
-- P1 is a 32-bit bitmask indicating whether or not each argument to the function was determined to be constant at compile time. If the first argument was constant then bit 0 of P1 is set. This is used to determine whether meta data associated with a user function argument using the EpilogLite3_set_auxdata() API may be safely retained until the next invocation of this opcode.
+- P1 is a 32-bit bitmask indicating whether or not each argument to the function was determined to be constant at compile time. If the first argument was constant then bit 0 of P1 is set. This is used to determine whether meta data associated with a user function argument using the EpilogLite3_set_auxdata() API may be safely retained until the next invocation of this opcode.
-- This opcode works exactly like Function. The only difference is in its name. This opcode is used in places where the function must be purely non-deterministic. Some built-in date/time functions can be either deterministic of non-deterministic, depending on their arguments. When those function are used in a non-deterministic way, they will check to see if they were called using PureFunc instead of Function, and if they were, they throw an error.
+- This opcode works exactly like Function. The only difference is in its name. This opcode is used in places where the function must be purely deterministic. Some built-in date/time functions can be either deterministic or non-deterministic, depending on their arguments. When those functions are used in a non-deterministic way, they will check to see if they were called using PureFunc instead of Function, and if they were, they throw an error.
-- See also: AggStep, AggFinal, Function
-- ReadCookie Read cookie number P3 from database P1 and write it into register P2. P3==1 is the schema version. P3==2 is the database format. P3==3 is the recommended pager cache size, and so forth. P1==0 is the main database file and P1==1 is the database file used to store temporary tables.
+- See also: AggStep, AggFinal, Function
+- ReadCookie Read cookie number P3 from database P1 and write it into register P2. P3==1 is the schema version. P3==2 is the database format. P3==3 is the recommended pager cache size, and so forth. P1==0 is the main database file and P1==1 is the database file used to store temporary tables.
-- There must be a read-lock on the database (either a transaction must be started or there must be an open cursor) before executing this instruction.
-- Real P4 is a pointer to a 64-bit floating point value. Write that value into register P2.
-- RealAffinity If register P1 holds an integer convert it to a real value.
+- There must be a read-lock on the database (either a transaction must be started or there must be an open cursor) before executing this instruction.
+- Real P4 is a pointer to a 64-bit floating point value. Write that value into register P2.
+- RealAffinity If register P1 holds an integer convert it to a real value.
-- This opcode is used when extracting information from a column that has REAL affinity. Such column values may still be stored as integers, for space efficiency, but after extraction we want them to have only a real value.
-- ReleaseReg Release registers from service. Any content that was in the the registers is unreliable after this opcode completes.
+- This opcode is used when extracting information from a column that has REAL affinity. Such column values may still be stored as integers, for space efficiency, but after extraction we want them to have only a real value.
+- ReleaseReg Release registers from service. Any content that was in the registers is unreliable after this opcode completes.
-- The registers released will be the P2 registers starting at P1, except if bit ii of P3 set, then do not release register P1+ii. In other words, P3 is a mask of registers to preserve.
+- The registers released will be the P2 registers starting at P1, except that if bit ii of P3 is set, register P1+ii is not released. In other words, P3 is a mask of registers to preserve.
-- Releasing a register clears the Mem.pScopyFrom pointer. That means that if the content of the released register was set using SCopy, a change to the value of the source register for the SCopy will no longer generate an assertion fault in EpilogLite3VdbeMemAboutToChange().
+- Releasing a register clears the Mem.pScopyFrom pointer. That means that if the content of the released register was set using SCopy, a change to the value of the source register for the SCopy will no longer generate an assertion fault in EpilogLite3VdbeMemAboutToChange().
-- If P5 is set, then all released registers have their type set to MEM_Undefined so that any subsequent attempt to read the released register (before it is reinitialized) will generate an assertion fault.
+- If P5 is set, then all released registers have their type set to MEM_Undefined so that any subsequent attempt to read the released register (before it is reinitialized) will generate an assertion fault.
-- P5 ought to be set on every call to this opcode. However, there are places in the code generator will release registers before their are used, under the (valid) assumption that the registers will not be reallocated for some other purpose before they are used and hence are safe to release.
+- P5 ought to be set on every call to this opcode. However, there are places where the code generator releases registers before they are used, under the (valid) assumption that the registers will not be reallocated for some other purpose before they are used and hence are safe to release.
-- This opcode is only available in testing and debugging builds. It is not generated for release builds. The purpose of this opcode is to help validate the generated bytecode. This opcode does not actually contribute to computing an answer.
-- Remainder Compute the remainder after integer register P2 is divided by register P1 and store the result in register P3. If the value in register P1 is zero the result is NULL. If either operand is NULL, the result is NULL.
-- ReopenIdx The ReopenIdx opcode works like OpenRead except that it first checks to see if the cursor on P1 is already open on the same b-tree and if it is this opcode becomes a no-op. In other words, if the cursor is already open, do not reopen it.
+- This opcode is only available in testing and debugging builds. It is not generated for release builds. The purpose of this opcode is to help validate the generated bytecode. This opcode does not actually contribute to computing an answer.
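+- A minimal sketch of the P3 preserve-mask logic described above (hypothetical Rust over a simplified register array, not the real VDBE structures):
+
+```rust
+// Hypothetical sketch: release P2 registers starting at P1, but skip register
+// P1+ii whenever bit ii of the P3 mask is set.
+fn release_registers(regs: &mut [Option<i64>], p1: usize, p2: usize, p3: u32) {
+    for ii in 0..p2 {
+        if p3 & (1 << ii) == 0 {
+            regs[p1 + ii] = None; // released content is unreliable afterwards
+        }
+    }
+}
+
+fn main() {
+    let mut regs = vec![Some(7); 8];
+    release_registers(&mut regs, 2, 4, 0b0010); // preserve register 2+1 only
+    assert_eq!(regs[3], Some(7));
+    assert!(regs[2].is_none() && regs[4].is_none() && regs[5].is_none());
+}
+```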
+- Remainder Compute the remainder after integer register P2 is divided by register P1 and store the result in register P3. If the value in register P1 is zero the result is NULL. If either operand is NULL, the result is NULL.
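+- The NULL and divide-by-zero rules for Remainder can be summarized in a short sketch (hypothetical Rust, using Option to stand in for NULL):
+
+```rust
+// Hypothetical sketch: r[P3] = r[P2] % r[P1], with NULL propagation and a
+// zero divisor producing NULL rather than an error.
+fn remainder(p1: Option<i64>, p2: Option<i64>) -> Option<i64> {
+    match (p1, p2) {
+        (Some(0), _) | (None, _) | (_, None) => None,
+        (Some(divisor), Some(dividend)) => Some(dividend % divisor),
+    }
+}
+
+fn main() {
+    assert_eq!(remainder(Some(3), Some(10)), Some(1)); // 10 % 3
+    assert_eq!(remainder(Some(0), Some(10)), None);    // divisor is zero
+    assert_eq!(remainder(None, Some(10)), None);       // NULL operand
+}
+```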
+- ReopenIdx The ReopenIdx opcode works like OpenRead except that it first checks to see if the cursor on P1 is already open on the same b-tree and, if it is, this opcode becomes a no-op. In other words, if the cursor is already open, do not reopen it.
-- The ReopenIdx opcode may only be used with P5==0 or P5==OPFLAG_SEEKEQ and with P4 being a P4_KEYINFO object. Furthermore, the P3 value must be the same as every other ReopenIdx or OpenRead for the same cursor number.
+- The ReopenIdx opcode may only be used with P5==0 or P5==OPFLAG_SEEKEQ and with P4 being a P4_KEYINFO object. Furthermore, the P3 value must be the same as every other ReopenIdx or OpenRead for the same cursor number.
-- Allowed P5 bits:
+- Allowed P5 bits:
-- 0x02 OPFLAG_SEEKEQ: This cursor will only be used for equality lookups (implemented as a pair of opcodes SeekGE/IdxGT of SeekLE/IdxLT)
+- 0x02 OPFLAG_SEEKEQ: This cursor will only be used for equality lookups (implemented as a pair of opcodes SeekGE/IdxGT or SeekLE/IdxLT)
-- See also: OpenRead, OpenWrite
-- ResetCount The value of the change counter is copied to the database handle change counter (returned by subsequent calls to EpilogLite3_changes()). Then the VMs internal change counter resets to 0. This is used by trigger programs.
-- ResetSorter Delete all contents from the ephemeral table or sorter that is open on cursor P1.
+- See also: OpenRead, OpenWrite
+- ResetCount The value of the change counter is copied to the database handle change counter (returned by subsequent calls to EpilogLite3_changes()). Then the VM's internal change counter resets to 0. This is used by trigger programs.
+- ResetSorter Delete all contents from the ephemeral table or sorter that is open on cursor P1.
-- This opcode only works for cursors used for sorting and opened with OpenEphemeral or SorterOpen.
-- ResultRow The registers P1 through P1+P2-1 contain a single row of results. This opcode causes the EpilogLite3_step() call to terminate with an EpilogLite_ROW return code and it sets up the EpilogLite3_stmt structure to provide access to the r(P1)..r(P1+P2-1) values as the result row.
-- Return Jump to the address stored in register P1. If P1 is a return address register, then this accomplishes a return from a subroutine.
+- This opcode only works for cursors used for sorting and opened with OpenEphemeral or SorterOpen.
+- ResultRow The registers P1 through P1+P2-1 contain a single row of results. This opcode causes the EpilogLite3_step() call to terminate with an EpilogLite_ROW return code and it sets up the EpilogLite3_stmt structure to provide access to the r(P1)..r(P1+P2-1) values as the result row.
+- Return Jump to the address stored in register P1. If P1 is a return address register, then this accomplishes a return from a subroutine.
-- If P3 is 1, then the jump is only taken if register P1 holds an integer values, otherwise execution falls through to the next opcode, and the Return becomes a no-op. If P3 is 0, then register P1 must hold an integer or else an assert() is raised. P3 should be set to 1 when this opcode is used in combination with BeginSubrtn, and set to 0 otherwise.
+- If P3 is 1, then the jump is only taken if register P1 holds an integer value; otherwise execution falls through to the next opcode, and the Return becomes a no-op. If P3 is 0, then register P1 must hold an integer or else an assert() is raised. P3 should be set to 1 when this opcode is used in combination with BeginSubrtn, and set to 0 otherwise.
-- The value in register P1 is unchanged by this opcode.
+- The value in register P1 is unchanged by this opcode.
-- P2 is not used by the byte-code engine. However, if P2 is positive and also less than the current address, then the "EXPLAIN" output formatter in the CLI will indent all opcodes from the P2 opcode up to be not including the current Return. P2 should be the first opcode in the subroutine from which this opcode is returning. Thus the P2 value is a byte-code indentation hint. See tag-20220407a in wherecode.c and shell.c.
-- Rewind The next use of the Rowid or Column or Next instruction for P1 will refer to the first entry in the database table or index. If the table or index is empty, jump immediately to P2. If the table or index is not empty, fall through to the following instruction.
+- P2 is not used by the byte-code engine. However, if P2 is positive and also less than the current address, then the "EXPLAIN" output formatter in the CLI will indent all opcodes from the P2 opcode up to but not including the current Return. P2 should be the first opcode in the subroutine from which this opcode is returning. Thus the P2 value is a byte-code indentation hint. See tag-20220407a in wherecode.c and shell.c.
+- Rewind The next use of the Rowid or Column or Next instruction for P1 will refer to the first entry in the database table or index. If the table or index is empty, jump immediately to P2. If the table or index is not empty, fall through to the following instruction.
-- If P2 is zero, that is an assertion that the P1 table is never empty and hence the jump will never be taken.
+- If P2 is zero, that is an assertion that the P1 table is never empty and hence the jump will never be taken.
-- This opcode leaves the cursor configured to move in forward order, from the beginning toward the end. In other words, the cursor is configured to use Next, not Prev.
-- RowCell P1 and P2 are both open cursors. Both must be opened on the same type of table - intkey or index. This opcode is used as part of copying the current row from P2 into P1. If the cursors are opened on intkey tables, register P3 contains the rowid to use with the new record in P1. If they are opened on index tables, P3 is not used.
+- This opcode leaves the cursor configured to move in forward order, from the beginning toward the end. In other words, the cursor is configured to use Next, not Prev.
+- RowCell P1 and P2 are both open cursors. Both must be opened on the same type of table - intkey or index. This opcode is used as part of copying the current row from P2 into P1. If the cursors are opened on intkey tables, register P3 contains the rowid to use with the new record in P1. If they are opened on index tables, P3 is not used.
-- This opcode must be followed by either an Insert or InsertIdx opcode with the OPFLAG_PREFORMAT flag set to complete the insert operation.
-- RowData Write into register P2 the complete row content for the row at which cursor P1 is currently pointing. There is no interpretation of the data. It is just copied onto the P2 register exactly as it is found in the database file.
+- This opcode must be followed by either an Insert or InsertIdx opcode with the OPFLAG_PREFORMAT flag set to complete the insert operation.
+- RowData Write into register P2 the complete row content for the row at which cursor P1 is currently pointing. There is no interpretation of the data. It is just copied onto the P2 register exactly as it is found in the database file.
-- If cursor P1 is an index, then the content is the key of the row. If cursor P2 is a table, then the content extracted is the data.
+- If cursor P1 is an index, then the content is the key of the row. If cursor P1 is a table, then the content extracted is the data.
-- If the P1 cursor must be pointing to a valid row (not a NULL row) of a real table, not a pseudo-table.
+- The P1 cursor must be pointing to a valid row (not a NULL row) of a real table, not a pseudo-table.
-- If P3!=0 then this opcode is allowed to make an ephemeral pointer into the database page. That means that the content of the output register will be invalidated as soon as the cursor moves - including moves caused by other cursors that "save" the current cursors position in order that they can write to the same table. If P3==0 then a copy of the data is made into memory. P3!=0 is faster, but P3==0 is safer.
+- If P3!=0 then this opcode is allowed to make an ephemeral pointer into the database page. That means that the content of the output register will be invalidated as soon as the cursor moves - including moves caused by other cursors that "save" the current cursors position in order that they can write to the same table. If P3==0 then a copy of the data is made into memory. P3!=0 is faster, but P3==0 is safer.
-- If P3!=0 then the content of the P2 register is unsuitable for use in OP_Result and any OP_Result will invalidate the P2 register content. The P2 register content is invalidated by opcodes like Function or by any use of another cursor pointing to the same table.
-- Rowid Store in register P2 an integer which is the key of the table entry that P1 is currently point to.
+- If P3!=0 then the content of the P2 register is unsuitable for use in OP_Result and any OP_Result will invalidate the P2 register content. The P2 register content is invalidated by opcodes like Function or by any use of another cursor pointing to the same table.
+- Rowid Store in register P2 an integer which is the key of the table entry that P1 is currently pointing to.
-- P1 can be either an ordinary table or a virtual table. There used to be a separate OP_VRowid opcode for use with virtual tables, but this one opcode now works for both table types.
-- RowSetAdd Insert the integer value held by register P2 into a RowSet object held in register P1.
+- P1 can be either an ordinary table or a virtual table. There used to be a separate OP_VRowid opcode for use with virtual tables, but this one opcode now works for both table types.
+- RowSetAdd Insert the integer value held by register P2 into a RowSet object held in register P1.
-- An assertion fails if P2 is not an integer.
-- RowSetRead Extract the smallest value from the RowSet object in P1 and put that value into register P3. Or, if RowSet object P1 is initially empty, leave P3 unchanged and jump to instruction P2.
-- RowSetTest Register P3 is assumed to hold a 64-bit integer value. If register P1 contains a RowSet object and that RowSet object contains the value held in P3, jump to register P2. Otherwise, insert the integer in P3 into the RowSet and continue on to the next opcode.
+- An assertion fails if P2 is not an integer.
+- RowSetRead Extract the smallest value from the RowSet object in P1 and put that value into register P3. Or, if RowSet object P1 is initially empty, leave P3 unchanged and jump to instruction P2.
+- RowSetTest Register P3 is assumed to hold a 64-bit integer value. If register P1 contains a RowSet object and that RowSet object contains the value held in P3, jump to register P2. Otherwise, insert the integer in P3 into the RowSet and continue on to the next opcode.
-- The RowSet object is optimized for the case where sets of integers are inserted in distinct phases, which each set contains no duplicates. Each set is identified by a unique P4 value. The first set must have P4==0, the final set must have P4==-1, and for all other sets must have P4>0.
+- The RowSet object is optimized for the case where sets of integers are inserted in distinct phases, in which each set contains no duplicates. Each set is identified by a unique P4 value. The first set must have P4==0, the final set must have P4==-1, and all other sets must have P4>0.
-- This allows optimizations: (a) when P4==0 there is no need to test the RowSet object for P3, as it is guaranteed not to contain it, (b) when P4==-1 there is no need to insert the value, as it will never be tested for, and (c) when a value that is part of set X is inserted, there is no need to search to see if the same value was previously inserted as part of set X (only if it was previously inserted as part of some other set).
-- Savepoint Open, release or rollback the savepoint named by parameter P4, depending on the value of P1. To open a new savepoint set P1==0 (SAVEPOINT_BEGIN). To release (commit) an existing savepoint set P1==1 (SAVEPOINT_RELEASE). To rollback an existing savepoint set P1==2 (SAVEPOINT_ROLLBACK).
-- SCopy Make a shallow copy of register P1 into register P2.
+- This allows optimizations: (a) when P4==0 there is no need to test the RowSet object for P3, as it is guaranteed not to contain it, (b) when P4==-1 there is no need to insert the value, as it will never be tested for, and (c) when a value that is part of set X is inserted, there is no need to search to see if the same value was previously inserted as part of set X (only if it was previously inserted as part of some other set).
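+- A sketch of optimizations (a) and (b) over a simplified set type (hypothetical Rust, not the real RowSet structure); optimization (c) is omitted for brevity:
+
+```rust
+use std::collections::BTreeSet;
+
+// Hypothetical sketch: P4==0 marks the first set (no membership test needed),
+// P4==-1 marks the final set (no insert needed). Returns true when the VM
+// would jump to P2 because the value was already present.
+fn rowset_test(rowset: &mut BTreeSet<i64>, value: i64, p4: i64) -> bool {
+    let already_present = if p4 == 0 { false } else { rowset.contains(&value) };
+    if !already_present && p4 != -1 {
+        rowset.insert(value);
+    }
+    already_present
+}
+
+fn main() {
+    let mut rs = BTreeSet::new();
+    assert!(!rowset_test(&mut rs, 42, 0)); // first phase: inserted, no jump
+    assert!(rowset_test(&mut rs, 42, 1));  // later phase: duplicate, jump to P2
+}
+```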
+- Savepoint Open, release or rollback the savepoint named by parameter P4, depending on the value of P1. To open a new savepoint set P1==0 (SAVEPOINT_BEGIN). To release (commit) an existing savepoint set P1==1 (SAVEPOINT_RELEASE). To rollback an existing savepoint set P1==2 (SAVEPOINT_ROLLBACK).
+- SCopy Make a shallow copy of register P1 into register P2.
-- This instruction makes a shallow copy of the value. If the value is a string or blob, then the copy is only a pointer to the original and hence if the original changes so will the copy. Worse, if the original is deallocated, the copy becomes invalid. Thus the program must guarantee that the original will not change during the lifetime of the copy. Use Copy to make a complete copy.
-- SeekEnd Position cursor P1 at the end of the btree for the purpose of appending a new entry onto the btree.
+- This instruction makes a shallow copy of the value. If the value is a string or blob, then the copy is only a pointer to the original and hence if the original changes so will the copy. Worse, if the original is deallocated, the copy becomes invalid. Thus the program must guarantee that the original will not change during the lifetime of the copy. Use Copy to make a complete copy.
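+- The hazard described above is essentially pointer aliasing. A rough illustration (hypothetical Rust, using Rc/RefCell to stand in for shared register storage):
+
+```rust
+use std::cell::RefCell;
+use std::rc::Rc;
+
+// Hypothetical illustration: an SCopy-style copy shares the original's
+// storage, so changes to the source show through the copy, whereas a
+// Copy-style deep copy is independent of the source.
+fn main() {
+    let original = Rc::new(RefCell::new(String::from("abc")));
+
+    let shallow = Rc::clone(&original);   // like SCopy: shares storage
+    let deep = original.borrow().clone(); // like Copy: owns its own data
+
+    original.borrow_mut().push_str("def");
+
+    assert_eq!(*shallow.borrow(), "abcdef"); // shallow copy tracks the change
+    assert_eq!(deep, "abc");                 // deep copy is unaffected
+}
+```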
+- SeekEnd Position cursor P1 at the end of the btree for the purpose of appending a new entry onto the btree.
-- It is assumed that the cursor is used only for appending and so if the cursor is valid, then the cursor must already be pointing at the end of the btree and so no changes are made to the cursor.
-- SeekGE If cursor P1 refers to an SQL table (B-Tree that uses integer keys), use the value in register P3 as the key. If cursor P1 refers to an SQL index, then P3 is the first in an array of P4 registers that are used as an unpacked index key.
+- It is assumed that the cursor is used only for appending and so if the cursor is valid, then the cursor must already be pointing at the end of the btree and so no changes are made to the cursor.
+- SeekGE If cursor P1 refers to an SQL table (B-Tree that uses integer keys), use the value in register P3 as the key. If cursor P1 refers to an SQL index, then P3 is the first in an array of P4 registers that are used as an unpacked index key.
-- Reposition cursor P1 so that it points to the smallest entry that is greater than or equal to the key value. If there are no records greater than or equal to the key and P2 is not zero, then jump to P2.
+- Reposition cursor P1 so that it points to the smallest entry that is greater than or equal to the key value. If there are no records greater than or equal to the key and P2 is not zero, then jump to P2.
-- If the cursor P1 was opened using the OPFLAG_SEEKEQ flag, then this opcode will either land on a record that exactly matches the key, or else it will cause a jump to P2. When the cursor is OPFLAG_SEEKEQ, this opcode must be followed by an IdxLE opcode with the same arguments. The IdxGT opcode will be skipped if this opcode succeeds, but the IdxGT opcode will be used on subsequent loop iterations. The OPFLAG_SEEKEQ flags is a hint to the btree layer to say that this is an equality search.
+- If the cursor P1 was opened using the OPFLAG_SEEKEQ flag, then this opcode will either land on a record that exactly matches the key, or else it will cause a jump to P2. When the cursor is OPFLAG_SEEKEQ, this opcode must be followed by an IdxLE opcode with the same arguments. The IdxGT opcode will be skipped if this opcode succeeds, but the IdxGT opcode will be used on subsequent loop iterations. The OPFLAG_SEEKEQ flag is a hint to the btree layer to say that this is an equality search.
-- This opcode leaves the cursor configured to move in forward order, from the beginning toward the end. In other words, the cursor is configured to use Next, not Prev.
+- This opcode leaves the cursor configured to move in forward order, from the beginning toward the end. In other words, the cursor is configured to use Next, not Prev.
-- See also: Found, NotFound, SeekLt, SeekGt, SeekLe
-- SeekGT If cursor P1 refers to an SQL table (B-Tree that uses integer keys), use the value in register P3 as a key. If cursor P1 refers to an SQL index, then P3 is the first in an array of P4 registers that are used as an unpacked index key.
+- See also: Found, NotFound, SeekLt, SeekGt, SeekLe
+- SeekGT If cursor P1 refers to an SQL table (B-Tree that uses integer keys), use the value in register P3 as a key. If cursor P1 refers to an SQL index, then P3 is the first in an array of P4 registers that are used as an unpacked index key.
-- Reposition cursor P1 so that it points to the smallest entry that is greater than the key value. If there are no records greater than the key and P2 is not zero, then jump to P2.
+- Reposition cursor P1 so that it points to the smallest entry that is greater than the key value. If there are no records greater than the key and P2 is not zero, then jump to P2.
-- This opcode leaves the cursor configured to move in forward order, from the beginning toward the end. In other words, the cursor is configured to use Next, not Prev.
+- This opcode leaves the cursor configured to move in forward order, from the beginning toward the end. In other words, the cursor is configured to use Next, not Prev.
-- See also: Found, NotFound, SeekLt, SeekGe, SeekLe
-- SeekHit Increase or decrease the seekHit value for cursor P1, if necessary, so that it is no less than P2 and no greater than P3.
+- See also: Found, NotFound, SeekLt, SeekGe, SeekLe
+- SeekHit Increase or decrease the seekHit value for cursor P1, if necessary, so that it is no less than P2 and no greater than P3.
-- The seekHit integer represents the maximum of terms in an index for which there is known to be at least one match. If the seekHit value is smaller than the total number of equality terms in an index lookup, then the IfNoHope opcode might run to see if the IN loop can be abandoned early, thus saving work. This is part of the IN-early-out optimization.
+- The seekHit integer represents the maximum number of terms in an index for which there is known to be at least one match. If the seekHit value is smaller than the total number of equality terms in an index lookup, then the IfNoHope opcode might run to see if the IN loop can be abandoned early, thus saving work. This is part of the IN-early-out optimization.
-- P1 must be a valid b-tree cursor.
-- SeekLE If cursor P1 refers to an SQL table (B-Tree that uses integer keys), use the value in register P3 as a key. If cursor P1 refers to an SQL index, then P3 is the first in an array of P4 registers that are used as an unpacked index key.
+- P1 must be a valid b-tree cursor.
+- SeekLE If cursor P1 refers to an SQL table (B-Tree that uses integer keys), use the value in register P3 as a key. If cursor P1 refers to an SQL index, then P3 is the first in an array of P4 registers that are used as an unpacked index key.
-- Reposition cursor P1 so that it points to the largest entry that is less than or equal to the key value. If there are no records less than or equal to the key and P2 is not zero, then jump to P2.
+- Reposition cursor P1 so that it points to the largest entry that is less than or equal to the key value. If there are no records less than or equal to the key and P2 is not zero, then jump to P2.
-- This opcode leaves the cursor configured to move in reverse order, from the end toward the beginning. In other words, the cursor is configured to use Prev, not Next.
+- This opcode leaves the cursor configured to move in reverse order, from the end toward the beginning. In other words, the cursor is configured to use Prev, not Next.
-- If the cursor P1 was opened using the OPFLAG_SEEKEQ flag, then this opcode will either land on a record that exactly matches the key, or else it will cause a jump to P2. When the cursor is OPFLAG_SEEKEQ, this opcode must be followed by an IdxLE opcode with the same arguments. The IdxGE opcode will be skipped if this opcode succeeds, but the IdxGE opcode will be used on subsequent loop iterations. The OPFLAG_SEEKEQ flags is a hint to the btree layer to say that this is an equality search.
+- If the cursor P1 was opened using the OPFLAG_SEEKEQ flag, then this opcode will either land on a record that exactly matches the key, or else it will cause a jump to P2. When the cursor is OPFLAG_SEEKEQ, this opcode must be followed by an IdxLE opcode with the same arguments. The IdxGE opcode will be skipped if this opcode succeeds, but the IdxGE opcode will be used on subsequent loop iterations. The OPFLAG_SEEKEQ flag is a hint to the btree layer to say that this is an equality search.
-- See also: Found, NotFound, SeekGt, SeekGe, SeekLt
-- SeekLT If cursor P1 refers to an SQL table (B-Tree that uses integer keys), use the value in register P3 as a key. If cursor P1 refers to an SQL index, then P3 is the first in an array of P4 registers that are used as an unpacked index key.
+- See also: Found, NotFound, SeekGt, SeekGe, SeekLt
+- SeekLT If cursor P1 refers to an SQL table (B-Tree that uses integer keys), use the value in register P3 as a key. If cursor P1 refers to an SQL index, then P3 is the first in an array of P4 registers that are used as an unpacked index key.
-- Reposition cursor P1 so that it points to the largest entry that is less than the key value. If there are no records less than the key and P2 is not zero, then jump to P2.
+- Reposition cursor P1 so that it points to the largest entry that is less than the key value. If there are no records less than the key and P2 is not zero, then jump to P2.
-- This opcode leaves the cursor configured to move in reverse order, from the end toward the beginning. In other words, the cursor is configured to use Prev, not Next.
+- This opcode leaves the cursor configured to move in reverse order, from the end toward the beginning. In other words, the cursor is configured to use Prev, not Next.
-- See also: Found, NotFound, SeekGt, SeekGe, SeekLe
-- SeekRowid P1 is the index of a cursor open on an SQL table btree (with integer keys). If register P3 does not contain an integer or if P1 does not contain a record with rowid P3 then jump immediately to P2. Or, if P2 is 0, raise an EpilogLite_CORRUPT error. If P1 does contain a record with rowid P3 then leave the cursor pointing at that record and fall through to the next instruction.
+- See also: Found, NotFound, SeekGt, SeekGe, SeekLe
+- SeekRowid P1 is the index of a cursor open on an SQL table btree (with integer keys). If register P3 does not contain an integer or if P1 does not contain a record with rowid P3 then jump immediately to P2. Or, if P2 is 0, raise an EpilogLite_CORRUPT error. If P1 does contain a record with rowid P3 then leave the cursor pointing at that record and fall through to the next instruction.
-- The NotExists opcode performs the same operation, but with NotExists the P3 register must be guaranteed to contain an integer value. With this opcode, register P3 might not contain an integer.
+- The NotExists opcode performs the same operation, but with NotExists the P3 register must be guaranteed to contain an integer value. With this opcode, register P3 might not contain an integer.
-- The NotFound opcode performs the same operation on index btrees (with arbitrary multi-value keys).
+- The NotFound opcode performs the same operation on index btrees (with arbitrary multi-value keys).
-- This opcode leaves the cursor in a state where it cannot be advanced in either direction. In other words, the Next and Prev opcodes will not work following this opcode.
+- This opcode leaves the cursor in a state where it cannot be advanced in either direction. In other words, the Next and Prev opcodes will not work following this opcode.
-- See also: Found, NotFound, NoConflict, SeekRowid
-- SeekScan This opcode is a prefix opcode to SeekGE. In other words, this opcode must be immediately followed by SeekGE. This constraint is checked by assert() statements.
+- See also: Found, NotFound, NoConflict, SeekRowid
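+- The control flow of SeekRowid can be sketched over a simplified table (hypothetical Rust; a sorted map stands in for the table b-tree and an enum for the loosely typed register):
+
+```rust
+use std::collections::BTreeMap;
+
+#[allow(dead_code)]
+enum Value { Null, Int(i64), Text(String) }
+
+enum Outcome { FallThrough, JumpToP2, CorruptError }
+
+// Hypothetical sketch: fall through only when P3 holds an integer rowid that
+// exists in the table; otherwise jump to P2, or report corruption if P2==0.
+fn seek_rowid(table: &BTreeMap<i64, String>, p3: &Value, p2: i32) -> Outcome {
+    match p3 {
+        Value::Int(rowid) if table.contains_key(rowid) => Outcome::FallThrough,
+        _ if p2 == 0 => Outcome::CorruptError,
+        _ => Outcome::JumpToP2,
+    }
+}
+
+fn main() {
+    let mut t = BTreeMap::new();
+    t.insert(5, "row five".to_string());
+    assert!(matches!(seek_rowid(&t, &Value::Int(5), 12), Outcome::FallThrough));
+    assert!(matches!(seek_rowid(&t, &Value::Text("5".into()), 12), Outcome::JumpToP2));
+}
+```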
+- SeekScan This opcode is a prefix opcode to SeekGE. In other words, this opcode must be immediately followed by SeekGE. This constraint is checked by assert() statements.
-- This opcode uses the P1 through P4 operands of the subsequent SeekGE. In the text that follows, the operands of the subsequent SeekGE opcode are denoted as SeekOP.P1 through SeekOP.P4. Only the P1, P2 and P5 operands of this opcode are also used, and are called This.P1, This.P2 and This.P5.
+- This opcode uses the P1 through P4 operands of the subsequent SeekGE. In the text that follows, the operands of the subsequent SeekGE opcode are denoted as SeekOP.P1 through SeekOP.P4. Only the P1, P2 and P5 operands of this opcode are also used, and are called This.P1, This.P2 and This.P5.
-- This opcode helps to optimize IN operators on a multi-column index where the IN operator is on the later terms of the index by avoiding unnecessary seeks on the btree, substituting steps to the next row of the b-tree instead. A correct answer is obtained if this opcode is omitted or is a no-op.
+- This opcode helps to optimize IN operators on a multi-column index where the IN operator is on the later terms of the index by avoiding unnecessary seeks on the btree, substituting steps to the next row of the b-tree instead. A correct answer is obtained if this opcode is omitted or is a no-op.
-- The SeekGE.P3 and SeekGE.P4 operands identify an unpacked key which is the desired entry that we want the cursor SeekGE.P1 to be pointing to. Call this SeekGE.P3/P4 row the "target".
+- The SeekGE.P3 and SeekGE.P4 operands identify an unpacked key which is the desired entry that we want the cursor SeekGE.P1 to be pointing to. Call this SeekGE.P3/P4 row the "target".
-- If the SeekGE.P1 cursor is not currently pointing to a valid row, then this opcode is a no-op and control passes through into the SeekGE.
+- If the SeekGE.P1 cursor is not currently pointing to a valid row, then this opcode is a no-op and control passes through into the SeekGE.
-- If the SeekGE.P1 cursor is pointing to a valid row, then that row might be the target row, or it might be near and slightly before the target row, or it might be after the target row. If the cursor is currently before the target row, then this opcode attempts to position the cursor on or after the target row by invoking EpilogLite3BtreeStep() on the cursor between 1 and This.P1 times.
+- If the SeekGE.P1 cursor is pointing to a valid row, then that row might be the target row, or it might be near and slightly before the target row, or it might be after the target row. If the cursor is currently before the target row, then this opcode attempts to position the cursor on or after the target row by invoking EpilogLite3BtreeStep() on the cursor between 1 and This.P1 times.
-- The This.P5 parameter is a flag that indicates what to do if the cursor ends up pointing at a valid row that is past the target row. If This.P5 is false (0) then a jump is made to SeekGE.P2. If This.P5 is true (non-zero) then a jump is made to This.P2. The P5==0 case occurs when there are no inequality constraints to the right of the IN constraint. The jump to SeekGE.P2 ends the loop. The P5!=0 case occurs when there are inequality constraints to the right of the IN operator. In that case, the This.P2 will point either directly to or to setup code prior to the IdxGT or IdxGE opcode that checks for loop terminate.
+- The This.P5 parameter is a flag that indicates what to do if the cursor ends up pointing at a valid row that is past the target row. If This.P5 is false (0) then a jump is made to SeekGE.P2. If This.P5 is true (non-zero) then a jump is made to This.P2. The P5==0 case occurs when there are no inequality constraints to the right of the IN constraint. The jump to SeekGE.P2 ends the loop. The P5!=0 case occurs when there are inequality constraints to the right of the IN operator. In that case, This.P2 will point either directly to, or to setup code prior to, the IdxGT or IdxGE opcode that checks for loop termination.
-- Possible outcomes from this opcode:
+- Possible outcomes from this opcode:
-- If the cursor is initially not pointed to any valid row, then fall through into the subsequent SeekGE opcode.
+- If the cursor is initially not pointed to any valid row, then fall through into the subsequent SeekGE opcode.
-- If the cursor is left pointing to a row that is before the target row, even after making as many as This.P1 calls to EpilogLite3BtreeNext(), then also fall through into SeekGE.
+- If the cursor is left pointing to a row that is before the target row, even after making as many as This.P1 calls to EpilogLite3BtreeNext(), then also fall through into SeekGE.
-- If the cursor is left pointing at the target row, either because it was at the target row to begin with or because one or more EpilogLite3BtreeNext() calls moved the cursor to the target row, then jump to This.P2..,
+- If the cursor is left pointing at the target row, either because it was at the target row to begin with or because one or more EpilogLite3BtreeNext() calls moved the cursor to the target row, then jump to This.P2.
-- If the cursor started out before the target row and a call to to EpilogLite3BtreeNext() moved the cursor off the end of the index (indicating that the target row definitely does not exist in the btree) then jump to SeekGE.P2, ending the loop.
+- If the cursor started out before the target row and a call to EpilogLite3BtreeNext() moved the cursor off the end of the index (indicating that the target row definitely does not exist in the btree) then jump to SeekGE.P2, ending the loop.
-- If the cursor ends up on a valid row that is past the target row (indicating that the target row does not exist in the btree) then jump to SeekOP.P2 if This.P5==0 or to This.P2 if This.P5>0.
+- If the cursor ends up on a valid row that is past the target row (indicating that the target row does not exist in the btree) then jump to SeekOP.P2 if This.P5==0 or to This.P2 if This.P5>0.
-- Sequence Find the next available sequence number for cursor P1. Write the sequence number into register P2. The sequence number on the cursor is incremented after this instruction.
-- SequenceTest P1 is a sorter cursor. If the sequence counter is currently zero, jump to P2. Regardless of whether or not the jump is taken, increment the the sequence value.
-- SetCookie Write the integer value P3 into cookie number P2 of database P1. P2==1 is the schema version. P2==2 is the database format. P2==3 is the recommended pager cache size, and so forth. P1==0 is the main database file and P1==1 is the database file used to store temporary tables.
+- Sequence Find the next available sequence number for cursor P1. Write the sequence number into register P2. The sequence number on the cursor is incremented after this instruction.
+- SequenceTest P1 is a sorter cursor. If the sequence counter is currently zero, jump to P2. Regardless of whether or not the jump is taken, increment the sequence value.
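+- A small sketch of the two sequence opcodes above, over a cursor-local counter (hypothetical Rust, not the real cursor type):
+
+```rust
+// Hypothetical sketch of Sequence and SequenceTest on a per-cursor counter.
+struct SorterCursor { sequence: u64 }
+
+// Sequence: hand out the current value (written to register P2), then increment.
+fn op_sequence(cur: &mut SorterCursor) -> u64 {
+    let value = cur.sequence;
+    cur.sequence += 1;
+    value
+}
+
+// SequenceTest: jump to P2 only while the counter is still zero, then increment.
+fn op_sequence_test(cur: &mut SorterCursor) -> bool {
+    let jump = cur.sequence == 0;
+    cur.sequence += 1;
+    jump
+}
+
+fn main() {
+    let mut cur = SorterCursor { sequence: 0 };
+    assert!(op_sequence_test(&mut cur));  // counter was zero: take the jump
+    assert!(!op_sequence_test(&mut cur)); // no longer zero: fall through
+    assert_eq!(op_sequence(&mut cur), 2);
+}
+```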
+- SetCookie Write the integer value P3 into cookie number P2 of database P1. P2==1 is the schema version. P2==2 is the database format. P2==3 is the recommended pager cache size, and so forth. P1==0 is the main database file and P1==1 is the database file used to store temporary tables.
-- A transaction must be started before executing this opcode.
+- A transaction must be started before executing this opcode.
-- If P2 is the SCHEMA_VERSION cookie (cookie number 1) then the internal schema version is set to P3-P5. The "PRAGMA schema_version=N" statement has P5 set to 1, so that the internal schema version will be different from the database schema version, resulting in a schema reset.
-- SetSubtype Set the subtype value of register P2 to the integer from register P1. If P1 is NULL, clear the subtype from p2.
-- ShiftLeft Shift the integer value in register P2 to the left by the number of bits specified by the integer in register P1. Store the result in register P3. If either input is NULL, the result is NULL.
-- ShiftRight Shift the integer value in register P2 to the right by the number of bits specified by the integer in register P1. Store the result in register P3. If either input is NULL, the result is NULL.
-- SoftNull Set register P1 to have the value NULL as seen by the MakeRecord instruction, but do not free any string or blob memory associated with the register, so that if the value was a string or blob that was previously copied using SCopy, the copies will continue to be valid.
-- Sort This opcode does exactly the same thing as Rewind except that it increments an undocumented global variable used for testing.
+- If P2 is the SCHEMA_VERSION cookie (cookie number 1) then the internal schema version is set to P3-P5. The "PRAGMA schema_version=N" statement has P5 set to 1, so that the internal schema version will be different from the database schema version, resulting in a schema reset.
+- SetSubtype Set the subtype value of register P2 to the integer from register P1. If P1 is NULL, clear the subtype from P2.
+- ShiftLeft Shift the integer value in register P2 to the left by the number of bits specified by the integer in register P1. Store the result in register P3. If either input is NULL, the result is NULL.
+- ShiftRight Shift the integer value in register P2 to the right by the number of bits specified by the integer in register P1. Store the result in register P3. If either input is NULL, the result is NULL.
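+- Both shift opcodes share the same NULL rule, sketched below (hypothetical Rust; out-of-range shift counts are not modeled here):
+
+```rust
+// Hypothetical sketch of ShiftLeft: r[P3] = r[P2] << r[P1], NULL-propagating.
+// Assumes 0 <= shift amount < 64; the engine's handling of other counts is
+// not modeled in this sketch. ShiftRight is the same with >> in place of <<.
+fn shift_left(p1: Option<i64>, p2: Option<i64>) -> Option<i64> {
+    match (p1, p2) {
+        (Some(amount), Some(value)) => Some(value << amount),
+        _ => None, // either input NULL => result NULL
+    }
+}
+
+fn main() {
+    assert_eq!(shift_left(Some(3), Some(1)), Some(8));
+    assert_eq!(shift_left(None, Some(1)), None);
+}
+```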
+- SoftNull Set register P1 to have the value NULL as seen by the MakeRecord instruction, but do not free any string or blob memory associated with the register, so that if the value was a string or blob that was previously copied using SCopy, the copies will continue to be valid.
+- Sort This opcode does exactly the same thing as Rewind except that it increments an undocumented global variable used for testing.
-- Sorting is accomplished by writing records into a sorting index, then rewinding that index and playing it back from beginning to end. We use the Sort opcode instead of Rewind to do the rewinding so that the global variable will be incremented and regression tests can determine whether or not the optimizer is correctly optimizing out sorts.
-- SorterCompare P1 is a sorter cursor. This instruction compares a prefix of the record blob in register P3 against a prefix of the entry that the sorter cursor currently points to. Only the first P4 fields of r[P3] and the sorter record are compared.
+- Sorting is accomplished by writing records into a sorting index, then rewinding that index and playing it back from beginning to end. We use the Sort opcode instead of Rewind to do the rewinding so that the global variable will be incremented and regression tests can determine whether or not the optimizer is correctly optimizing out sorts.
+- SorterCompare P1 is a sorter cursor. This instruction compares a prefix of the record blob in register P3 against a prefix of the entry that the sorter cursor currently points to. Only the first P4 fields of r\\[P3] and the sorter record are compared.
-- If either P3 or the sorter contains a NULL in one of their significant fields (not counting the P4 fields at the end which are ignored) then the comparison is assumed to be equal.
+- If either P3 or the sorter contains a NULL in one of their significant fields (not counting the P4 fields at the end which are ignored) then the comparison is assumed to be equal.
-- Fall through to next instruction if the two records compare equal to each other. Jump to P2 if they are different.
-- SorterData Write into register P2 the current sorter data for sorter cursor P1. Then clear the column header cache on cursor P3.
+- Fall through to the next instruction if the two records compare equal to each other. Jump to P2 if they are different.
+- SorterData Write into register P2 the current sorter data for sorter cursor P1. Then clear the column header cache on cursor P3.
-- This opcode is normally used to move a record out of the sorter and into a register that is the source for a pseudo-table cursor created using OpenPseudo. That pseudo-table cursor is the one that is identified by parameter P3. Clearing the P3 column cache as part of this opcode saves us from having to issue a separate NullRow instruction to clear that cache.
-- SorterInsert Register P2 holds an SQL index key made using the MakeRecord instructions. This opcode writes that key into the sorter P1. Data for the entry is nil.
-- SorterNext This opcode works just like Next except that P1 must be a sorter object for which the SorterSort opcode has been invoked. This opcode advances the cursor to the next sorted record, or jumps to P2 if there are no more sorted records.
-- SorterOpen This opcode works like OpenEphemeral except that it opens a transient index that is specifically designed to sort large tables using an external merge-sort algorithm.
+- This opcode is normally used to move a record out of the sorter and into a register that is the source for a pseudo-table cursor created using OpenPseudo. That pseudo-table cursor is the one that is identified by parameter P3. Clearing the P3 column cache as part of this opcode saves us from having to issue a separate NullRow instruction to clear that cache.
+- SorterInsert Register P2 holds an SQL index key made using the MakeRecord instructions. This opcode writes that key into the sorter P1. Data for the entry is nil.
+- SorterNext This opcode works just like Next except that P1 must be a sorter object for which the SorterSort opcode has been invoked. This opcode advances the cursor to the next sorted record, or jumps to P2 if there are no more sorted records.
+- SorterOpen This opcode works like OpenEphemeral except that it opens a transient index that is specifically designed to sort large tables using an external merge-sort algorithm.
-- If argument P3 is non-zero, then it indicates that the sorter may assume that a stable sort considering the first P3 fields of each key is sufficient to produce the required results.
-- SorterSort After all records have been inserted into the Sorter object identified by P1, invoke this opcode to actually do the sorting. Jump to P2 if there are no records to be sorted.
+- If argument P3 is non-zero, then it indicates that the sorter may assume that a stable sort considering the first P3 fields of each key is sufficient to produce the required results.
+- SorterSort After all records have been inserted into the Sorter object identified by P1, invoke this opcode to actually do the sorting. Jump to P2 if there are no records to be sorted.
-- This opcode is an alias for Sort and Rewind that is used for Sorter objects.
-- SqlExec Run the SQL statement or statements specified in the P4 string.
+- This opcode is an alias for Sort and Rewind that is used for Sorter objects.
+- SqlExec Run the SQL statement or statements specified in the P4 string.
-- The P1 parameter is a bitmask of options:
+- The P1 parameter is a bitmask of options:
-- 0x0001 Disable Auth and Trace callbacks while the statements in P4 are running.
+- 0x0001 Disable Auth and Trace callbacks while the statements in P4 are running.
-- 0x0002 Set db->nAnalysisLimit to P2 while the statements in P4 are running.
-- String The string value P4 of length P1 (bytes) is stored in register P2.
+- 0x0002 Set db->nAnalysisLimit to P2 while the statements in P4 are running.
+- String The string value P4 of length P1 (bytes) is stored in register P2.
-- If P3 is not zero and the content of register P3 is equal to P5, then the datatype of the register P2 is converted to BLOB. The content is the same sequence of bytes, it is merely interpreted as a BLOB instead of a string, as if it had been CAST. In other words:
+- If P3 is not zero and the content of register P3 is equal to P5, then the datatype of the register P2 is converted to BLOB. The content is the same sequence of bytes, it is merely interpreted as a BLOB instead of a string, as if it had been CAST. In other words:
-- if( P3!=0 and reg[P3]==P5 ) reg[P2] := CAST(reg[P2] as BLOB)
-- String8 P4 points to a nul terminated UTF-8 string. This opcode is transformed into a String opcode before it is executed for the first time. During this transformation, the length of string P4 is computed and stored as the P1 parameter.
-- Subtract Subtract the value in register P1 from the value in register P2 and store the result in register P3. If either input is NULL, the result is NULL.
-- TableLock Obtain a lock on a particular table. This instruction is only used when the shared-cache feature is enabled.
+- if( P3!=0 and reg\\[P3]==P5 ) reg\\[P2] := CAST(reg\\[P2] as BLOB)
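+- The same conditional re-typing, sketched over a simplified register value (hypothetical Rust, not the engine's real register type):
+
+```rust
+// Hypothetical sketch of the String opcode: store P4 as TEXT in r[P2], then,
+// if P3!=0 and r[P3]==P5, flip the datatype of r[P2] to BLOB (same bytes).
+#[derive(Debug, PartialEq)]
+enum Value { Int(i64), Text(Vec<u8>), Blob(Vec<u8>) }
+
+fn op_string(regs: &mut Vec<Value>, p2: usize, p3: usize, p4: &str, p5: i64) {
+    regs[p2] = Value::Text(p4.as_bytes().to_vec());
+    if p3 != 0 && regs[p3] == Value::Int(p5) {
+        if let Value::Text(bytes) = std::mem::replace(&mut regs[p2], Value::Int(0)) {
+            regs[p2] = Value::Blob(bytes); // same byte sequence, now tagged as a BLOB
+        }
+    }
+}
+
+fn main() {
+    let mut regs = vec![Value::Int(0), Value::Int(0), Value::Int(7)];
+    op_string(&mut regs, 1, 2, "abc", 7);
+    assert_eq!(regs[1], Value::Blob(b"abc".to_vec()));
+}
+```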
+- String8 P4 points to a nul terminated UTF-8 string. This opcode is transformed into a String opcode before it is executed for the first time. During this transformation, the length of string P4 is computed and stored as the P1 parameter.
+- Subtract Subtract the value in register P1 from the value in register P2 and store the result in register P3. If either input is NULL, the result is NULL.
+- TableLock Obtain a lock on a particular table. This instruction is only used when the shared-cache feature is enabled.
-- P1 is the index of the database in EpilogLite3.aDb[] of the database on which the lock is acquired. A readlock is obtained if P3==0 or a write lock if P3==1.
+- P1 is the index in EpilogLite3.aDb\\[] of the database on which the lock is acquired. A read lock is obtained if P3==0 or a write lock if P3==1.
-- P2 contains the root-page of the table to lock.
+- P2 contains the root-page of the table to lock.
-- P4 contains a pointer to the name of the table being locked. This is only used to generate an error message if the lock cannot be obtained.
-- Trace Write P4 on the statement trace output if statement tracing is enabled.
+- P4 contains a pointer to the name of the table being locked. This is only used to generate an error message if the lock cannot be obtained.
+- Trace Write P4 on the statement trace output if statement tracing is enabled.
-- Operand P1 must be 0x7fffffff and P2 must positive.
-- Transaction Begin a transaction on database P1 if a transaction is not already active. If P2 is non-zero, then a write-transaction is started, or if a read-transaction is already active, it is upgraded to a write-transaction. If P2 is zero, then a read-transaction is started. If P2 is 2 or more then an exclusive transaction is started.
+- Operand P1 must be 0x7fffffff and P2 must be positive.
+- Transaction Begin a transaction on database P1 if a transaction is not already active. If P2 is non-zero, then a write-transaction is started, or if a read-transaction is already active, it is upgraded to a write-transaction. If P2 is zero, then a read-transaction is started. If P2 is 2 or more then an exclusive transaction is started.
-- P1 is the index of the database file on which the transaction is started. Index 0 is the main database file and index 1 is the file used for temporary tables. Indices of 2 or more are used for attached databases.
+- P1 is the index of the database file on which the transaction is started. Index 0 is the main database file and index 1 is the file used for temporary tables. Indices of 2 or more are used for attached databases.
-- If a write-transaction is started and the Vdbe.usesStmtJournal flag is true (this flag is set if the Vdbe may modify more than one row and may throw an ABORT exception), a statement transaction may also be opened. More specifically, a statement transaction is opened iff the database connection is currently not in autocommit mode, or if there are other active statements. A statement transaction allows the changes made by this VDBE to be rolled back after an error without having to roll back the entire transaction. If no error is encountered, the statement transaction will automatically commit when the VDBE halts.
+- If a write-transaction is started and the Vdbe.usesStmtJournal flag is true (this flag is set if the Vdbe may modify more than one row and may throw an ABORT exception), a statement transaction may also be opened. More specifically, a statement transaction is opened iff the database connection is currently not in autocommit mode, or if there are other active statements. A statement transaction allows the changes made by this VDBE to be rolled back after an error without having to roll back the entire transaction. If no error is encountered, the statement transaction will automatically commit when the VDBE halts.
-- If P5!=0 then this opcode also checks the schema cookie against P3 and the schema generation counter against P4. The cookie changes its value whenever the database schema changes. This operation is used to detect when that the cookie has changed and that the current process needs to reread the schema. If the schema cookie in P3 differs from the schema cookie in the database header or if the schema generation counter in P4 differs from the current generation counter, then an EpilogLite_SCHEMA error is raised and execution halts. The EpilogLite3_step() wrapper function might then reprepare the statement and rerun it from the beginning.
-- TypeCheck Apply affinities to the range of P2 registers beginning with P1. Take the affinities from the Table object in P4. If any value cannot be coerced into the correct type, then raise an error.
+- If P5!=0 then this opcode also checks the schema cookie against P3 and the schema generation counter against P4. The cookie changes its value whenever the database schema changes. This operation is used to detect that the cookie has changed and that the current process needs to reread the schema. If the schema cookie in P3 differs from the schema cookie in the database header or if the schema generation counter in P4 differs from the current generation counter, then an EpilogLite_SCHEMA error is raised and execution halts. The EpilogLite3_step() wrapper function might then reprepare the statement and rerun it from the beginning.
+- TypeCheck Apply affinities to the range of P2 registers beginning with P1. Take the affinities from the Table object in P4. If any value cannot be coerced into the correct type, then raise an error.
-- This opcode is similar to Affinity except that this opcode forces the register type to the Table column type. This is used to implement "strict affinity".
+- This opcode is similar to Affinity except that this opcode forces the register type to the Table column type. This is used to implement "strict affinity".
-- GENERATED ALWAYS AS ... STATIC columns are only checked if P3 is zero. When P3 is non-zero, no type checking occurs for static generated columns. Virtual columns are computed at query time and so they are never checked.
+- GENERATED ALWAYS AS ... STATIC columns are only checked if P3 is zero. When P3 is non-zero, no type checking occurs for static generated columns. Virtual columns are computed at query time and so they are never checked.
-- Preconditions:
+- Preconditions:
-- P2 should be the number of non-virtual columns in the table of P4.
-- Table P4 should be a STRICT table.
+- P2 should be the number of non-virtual columns in the table of P4.
+- Table P4 should be a STRICT table.
-- If any precondition is false, an assertion fault occurs.
-- Vacuum Vacuum the entire database P1. P1 is 0 for "main", and 2 or more for an attached database. The "temp" database may not be vacuumed.
+- If any precondition is false, an assertion fault occurs.
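+- A loose sketch of the strict-affinity check (hypothetical Rust with simplified value and column types; the real coercion rules live in the engine):
+
+```rust
+// Hypothetical sketch: every register must already match, or be losslessly
+// coercible to, its column's declared type, otherwise the statement fails.
+#[allow(dead_code)]
+#[derive(Debug, PartialEq)]
+enum Value { Null, Int(i64), Real(f64), Text(String) }
+
+#[derive(Clone, Copy)]
+enum ColType { Integer, Real, Text }
+
+fn type_check(regs: &mut [Value], cols: &[ColType]) -> Result<(), String> {
+    for (reg, col) in regs.iter_mut().zip(cols.iter()) {
+        let coerced = match (*col, &*reg) {
+            (_, Value::Null) => None, // NULLs are left to constraint checking
+            (ColType::Integer, Value::Int(_)) => None,
+            (ColType::Real, Value::Real(_)) => None,
+            (ColType::Real, Value::Int(i)) => Some(Value::Real(*i as f64)),
+            (ColType::Text, Value::Text(_)) => None,
+            _ => return Err("value does not match its declared column type".to_string()),
+        };
+        if let Some(new_value) = coerced {
+            *reg = new_value; // e.g. an INTEGER stored into a REAL column
+        }
+    }
+    Ok(())
+}
+
+fn main() {
+    let mut regs = vec![Value::Int(1), Value::Int(2), Value::Text("x".into())];
+    let cols = [ColType::Integer, ColType::Real, ColType::Text];
+    assert!(type_check(&mut regs, &cols).is_ok());
+    assert_eq!(regs[1], Value::Real(2.0));
+    assert!(type_check(&mut [Value::Text("oops".into())], &[ColType::Integer]).is_err());
+}
+```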
+- Vacuum Vacuum the entire database P1. P1 is 0 for "main", and 2 or more for an attached database. The "temp" database may not be vacuumed.
-- If P2 is not zero, then it is a register holding a string which is the file into which the result of vacuum should be written. When P2 is zero, the vacuum overwrites the original database.
-- Variable Transfer the values of bound parameter P1 into register P2
-- VBegin P4 may be a pointer to an EpilogLite3_vtab structure. If so, call the xBegin method for that table.
+- If P2 is not zero, then it is a register holding a string which is the file into which the result of vacuum should be written. When P2 is zero, the vacuum overwrites the original database.
+- Variable Transfer the value of bound parameter P1 into register P2.
+- VBegin P4 may be a pointer to an EpilogLite3_vtab structure. If so, call the xBegin method for that table.
-- Also, whether or not P4 is set, check that this is not being called from within a callback to a virtual table xSync() method. If it is, the error code will be set to EpilogLite_LOCKED.
-- VCheck P4 is a pointer to a Table object that is a virtual table in schema P1 that supports the xIntegrity() method. This opcode runs the xIntegrity() method for that virtual table, using P3 as the integer argument. If an error is reported back, the table name is prepended to the error message and that message is stored in P2. If no errors are seen, register P2 is set to NULL.
-- VColumn Store in register P3 the value of the P2-th column of the current row of the virtual-table of cursor P1.
+- Also, whether or not P4 is set, check that this is not being called from within a callback to a virtual table xSync() method. If it is, the error code will be set to EpilogLite_LOCKED.
+- VCheck P4 is a pointer to a Table object that is a virtual table in schema P1 that supports the xIntegrity() method. This opcode runs the xIntegrity() method for that virtual table, using P3 as the integer argument. If an error is reported back, the table name is prepended to the error message and that message is stored in P2. If no errors are seen, register P2 is set to NULL.
+- VColumn Store in register P3 the value of the P2-th column of the current row of the virtual-table of cursor P1.
-- If the VColumn opcode is being used to fetch the value of an unchanging column during an UPDATE operation, then the P5 value is OPFLAG_NOCHNG. This will cause the EpilogLite3_vtab_nochange() function to return true inside the xColumn method of the virtual table implementation. The P5 column might also contain other bits (OPFLAG_LENGTHARG or OPFLAG_TYPEOFARG) but those bits are unused by VColumn.
-- VCreate P2 is a register that holds the name of a virtual table in database P1. Call the xCreate method for that table.
-- VDestroy P4 is the name of a virtual table in database P1. Call the xDestroy method of that table.
-- VFilter P1 is a cursor opened using VOpen. P2 is an address to jump to if the filtered result set is empty.
+- If the VColumn opcode is being used to fetch the value of an unchanging column during an UPDATE operation, then the P5 value is OPFLAG_NOCHNG. This will cause the EpilogLite3_vtab_nochange() function to return true inside the xColumn method of the virtual table implementation. The P5 column might also contain other bits (OPFLAG_LENGTHARG or OPFLAG_TYPEOFARG) but those bits are unused by VColumn.
+- VCreate P2 is a register that holds the name of a virtual table in database P1. Call the xCreate method for that table.
+- VDestroy P4 is the name of a virtual table in database P1. Call the xDestroy method of that table.
+- VFilter P1 is a cursor opened using VOpen. P2 is an address to jump to if the filtered result set is empty.
-- P4 is either NULL or a string that was generated by the xBestIndex method of the module. The interpretation of the P4 string is left to the module implementation.
+- P4 is either NULL or a string that was generated by the xBestIndex method of the module. The interpretation of the P4 string is left to the module implementation.
-- This opcode invokes the xFilter method on the virtual table specified by P1. The integer query plan parameter to xFilter is stored in register P3. Register P3+1 stores the argc parameter to be passed to the xFilter method. Registers P3+2..P3+1+argc are the argc additional parameters which are passed to xFilter as argv. Register P3+2 becomes argv[0] when passed to xFilter.
+- This opcode invokes the xFilter method on the virtual table specified by P1. The integer query plan parameter to xFilter is stored in register P3. Register P3+1 stores the argc parameter to be passed to the xFilter method. Registers P3+2..P3+1+argc are the argc additional parameters which are passed to xFilter as argv. Register P3+2 becomes argv\\[0] when passed to xFilter.
-- A jump is made to P2 if the result set after filtering would be empty.
-- VInitIn Set register P2 to be a pointer to a ValueList object for cursor P1 with cache register P3 and output register P3+1. This ValueList object can be used as the first argument to EpilogLite3_vtab_in_first() and EpilogLite3_vtab_in_next() to extract all of the values stored in the P1 cursor. Register P3 is used to hold the values returned by EpilogLite3_vtab_in_first() and EpilogLite3_vtab_in_next().
-- VNext Advance virtual table P1 to the next row in its result set and jump to instruction P2. Or, if the virtual table has reached the end of its result set, then fall through to the next instruction.
-- VOpen P4 is a pointer to a virtual table object, an EpilogLite3_vtab structure. P1 is a cursor number. This opcode opens a cursor to the virtual table and stores that cursor in P1.
-- VRename P4 is a pointer to a virtual table object, an EpilogLite3_vtab structure. This opcode invokes the corresponding xRename method. The value in register P1 is passed as the zName argument to the xRename method.
-- VUpdate P4 is a pointer to a virtual table object, an EpilogLite3_vtab structure. This opcode invokes the corresponding xUpdate method. P2 values are contiguous memory cells starting at P3 to pass to the xUpdate invocation. The value in register (P3+P2-1) corresponds to the p2th element of the argv array passed to xUpdate.
+- A jump is made to P2 if the result set after filtering would be empty.
+- VInitIn Set register P2 to be a pointer to a ValueList object for cursor P1 with cache register P3 and output register P3+1. This ValueList object can be used as the first argument to EpilogLite3_vtab_in_first() and EpilogLite3_vtab_in_next() to extract all of the values stored in the P1 cursor. Register P3 is used to hold the values returned by EpilogLite3_vtab_in_first() and EpilogLite3_vtab_in_next().
+- VNext Advance virtual table P1 to the next row in its result set and jump to instruction P2. Or, if the virtual table has reached the end of its result set, then fall through to the next instruction.
+- VOpen P4 is a pointer to a virtual table object, an EpilogLite3_vtab structure. P1 is a cursor number. This opcode opens a cursor to the virtual table and stores that cursor in P1.
+- VRename P4 is a pointer to a virtual table object, an EpilogLite3_vtab structure. This opcode invokes the corresponding xRename method. The value in register P1 is passed as the zName argument to the xRename method.
+- VUpdate P4 is a pointer to a virtual table object, an EpilogLite3_vtab structure. This opcode invokes the corresponding xUpdate method. P2 values are contiguous memory cells starting at P3 to pass to the xUpdate invocation. The value in register (P3+P2-1) corresponds to the P2-th element of the argv array passed to xUpdate.
-- The xUpdate method will do a DELETE or an INSERT or both. The argv[0] element (which corresponds to memory cell P3) is the rowid of a row to delete. If argv[0] is NULL then no deletion occurs. The argv[1] element is the rowid of the new row. This can be NULL to have the virtual table select the new rowid for itself. The subsequent elements in the array are the values of columns in the new row.
+- The xUpdate method will do a DELETE or an INSERT or both. The argv\\[0] element (which corresponds to memory cell P3) is the rowid of a row to delete. If argv\\[0] is NULL then no deletion occurs. The argv\\[1] element is the rowid of the new row. This can be NULL to have the virtual table select the new rowid for itself. The subsequent elements in the array are the values of columns in the new row.
-- If P2==1 then no insert is performed. argv[0] is the rowid of a row to delete.
+- If P2==1 then no insert is performed. argv\\[0] is the rowid of a row to delete.
-- P1 is a boolean flag. If it is set to true and the xUpdate call is successful, then the value returned by EpilogLite3_last_insert_rowid() is set to the value of the rowid for the row just inserted.
+- P1 is a boolean flag. If it is set to true and the xUpdate call is successful, then the value returned by EpilogLite3_last_insert_rowid() is set to the value of the rowid for the row just inserted.
-- P5 is the error actions (OE_Replace, OE_Fail, OE_Ignore, etc) to apply in the case of a constraint failure on an insert or update.
-- Yield Swap the program counter with the value in register P1. This has the effect of yielding to a coroutine.
+- P5 is the error action (OE_Replace, OE_Fail, OE_Ignore, etc.) to apply in the case of a constraint failure on an insert or update.
+- Yield Swap the program counter with the value in register P1. This has the effect of yielding to a coroutine.
-- If the coroutine that is launched by this instruction ends with Yield or Return then continue to the next instruction. But if the coroutine launched by this instruction ends with EndCoroutine, then jump to P2 rather than continuing with the next instruction.
+- If the coroutine that is launched by this instruction ends with Yield or Return then continue to the next instruction. But if the coroutine launched by this instruction ends with EndCoroutine, then jump to P2 rather than continuing with the next instruction.
-- See also: InitCoroutine
-- ZeroOrNull If both registers P1 and P3 are NOT NULL, then store a zero in register P2. If either registers P1 or P3 are NULL then put a NULL in register P2.
+- See also: InitCoroutine
+- ZeroOrNull If both registers P1 and P3 are NOT NULL, then store a zero in register P2. If either register P1 or P3 is NULL, then put a NULL in register P2.
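As a concrete illustration of the register semantics in the last opcode above, here is a minimal Rust sketch of the ZeroOrNull behaviour. The `Value` enum and the register slice are assumptions made purely for illustration; they are not the actual EpilogLite VDBE types.

```rust
// Minimal sketch of the ZeroOrNull opcode described above.
// `Value` and the register layout are illustrative assumptions,
// not the real EpilogLite VDBE types.
#[derive(Clone, Debug, PartialEq)]
enum Value {
    Null,
    Integer(i64),
}

/// If both registers P1 and P3 are not NULL, store integer 0 in P2;
/// otherwise store NULL in P2.
fn zero_or_null(registers: &mut [Value], p1: usize, p2: usize, p3: usize) {
    registers[p2] = if registers[p1] == Value::Null || registers[p3] == Value::Null {
        Value::Null
    } else {
        Value::Integer(0)
    };
}

fn main() {
    let mut regs = vec![Value::Integer(7), Value::Null, Value::Null];
    zero_or_null(&mut regs, 0, 1, 2); // P3 is NULL, so P2 becomes NULL
    assert_eq!(regs[1], Value::Null);
}
```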
diff --git a/design/sql_syntax/Comments/Multi-line Comment.md b/design/sql_syntax/Comments/Multi-line Comment.md
new file mode 100644
index 0000000..b145dd1
--- /dev/null
+++ b/design/sql_syntax/Comments/Multi-line Comment.md
@@ -0,0 +1,18 @@
+---
+regex: '"/\*(?:[^\*]|\*[^/])*(?:\**/|$)"'
+title: Multi-line Comment
+---
+
+# Multi-line Comment
+
+```mermaid
+graph TB
+ st(( ))
+ stop(( ))
+
+ st --> open["/*"]
+ open --> not_close[/"Not */"\]
+ not_close --> not_close
+ not_close --> close["*/"]
+ close --> stop
+```
diff --git a/design/sql_syntax/Comments/Single Line Comment.md b/design/sql_syntax/Comments/Single Line Comment.md
new file mode 100644
index 0000000..08f72cf
--- /dev/null
+++ b/design/sql_syntax/Comments/Single Line Comment.md
@@ -0,0 +1,18 @@
+---
+regex: '"//[^\n]*"'
+title: Single Line Comment
+---
+
+# Single Line Comment
+
+```mermaid
+graph TB
+ st(( ))
+ stop(( ))
+
+ st --> open["//"]
+ open --> not_nl[/"Not newline"\]
+ not_nl --> not_nl
+ not_nl --> close["newline or EOF"]
+ close --> stop
+```
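The two comment regexes above (single-line and multi-line) are natural candidates for lexer token rules. Below is a minimal sketch of how they could be wired into a logos-derived token enum, in the same `#[token(...)]`/`#[regex(...)]` style used by the keyword table later in this change. The enum name, the whitespace-skip rule, the logos 0.13+ API, and the slightly simplified multi-line pattern are all assumptions for illustration, not the actual EpilogLite lexer.

```rust
// Illustrative only: a tiny logos (0.13+) lexer covering the two comment
// forms documented above. The real EpilogLite tokenizer may differ.
use logos::Logos;

#[derive(Logos, Debug, PartialEq)]
#[logos(skip r"[ \t\r\n\f]+")] // assumption: skip whitespace between tokens
enum CommentToken {
    // Single-line comment: "//" up to (but not including) the newline.
    #[regex(r"//[^\n]*")]
    SingleLine,

    // Multi-line comment: "/*" ... "*/". Simplified to require a closing "*/";
    // the design regex above additionally tolerates an unterminated comment at EOF.
    #[regex(r"/\*(?:[^*]|\*[^/])*\*/")]
    MultiLine,
}

fn main() {
    for token in CommentToken::lexer("// one\n/* two\nlines */") {
        println!("{:?}", token); // prints Ok(SingleLine), then Ok(MultiLine)
    }
}
```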
diff --git a/design/sql_syntax/Expressions/Aggregate.md b/design/sql_syntax/Expressions/Aggregate.md
new file mode 100644
index 0000000..681ba9e
--- /dev/null
+++ b/design/sql_syntax/Expressions/Aggregate.md
@@ -0,0 +1,44 @@
+---
+characters: [",", "(", ")", "*"]
+expressions: [Expression, Filter Clause, Ordering Term]
+identifiers: [Aggregate Function Name]
+keywords: [BY, DISTINCT, ORDER]
+title: Aggregate
+---
+
+# Aggregate
+
+```mermaid
+graph TB
+ st(( ))
+ lparen("(")
+ rparen(")")
+ stop(( ))
+ st --> aggregate_function([Aggregate Function Name])
+ aggregate_function --> lparen
+ lparen --> DISTINCT
+ lparen --> expression>Expression]
+ lparen --> ast("*")
+ lparen --> rparen
+ DISTINCT --> expression
+ expression -->|#quot;,#quot;| expression
+ expression --> order_by[ORDER BY]
+ expression --> rparen
+ order_by --> ordering_term>Ordering Term]
+ ordering_term -->|#quot;,#quot;| ordering_term
+ ordering_term --> rparen
+ ast --> rparen
+ rparen --> filter_clause>Filter Clause]
+ rparen --> stop
+ filter_clause --> stop
+```
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
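To relate the railroad diagram above to an implementation, here is one possible, purely hypothetical Rust shape for a parsed aggregate call. The type names and the `String` stand-ins for `Expression` and `Ordering Term` are assumptions to keep the sketch self-contained; they are not the EpilogLite parser's actual types.

```rust
// Hypothetical AST for the Aggregate rule drawn above. `Expression` and
// `OrderingTerm` are stubbed as strings purely to keep the sketch compiling.
type Expression = String;
type OrderingTerm = String;

#[derive(Debug)]
enum AggregateArgs {
    Star,                         // aggregate(*)
    Expressions(Vec<Expression>), // aggregate([DISTINCT] expr, expr, ...)
    None,                         // aggregate()
}

#[derive(Debug)]
struct AggregateCall {
    function: String,            // Aggregate Function Name, e.g. "count"
    distinct: bool,              // optional DISTINCT before the expressions
    args: AggregateArgs,
    order_by: Vec<OrderingTerm>, // optional ORDER BY inside the parentheses
    filter: Option<Expression>,  // optional trailing Filter Clause
}

fn main() {
    // count(DISTINCT name ORDER BY name) FILTER (WHERE active)
    let call = AggregateCall {
        function: "count".into(),
        distinct: true,
        args: AggregateArgs::Expressions(vec!["name".into()]),
        order_by: vec!["name".into()],
        filter: Some("active".into()),
    };
    println!("{:?}", call);
}
```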
diff --git a/design/sql_syntax/Expressions/Column Definition.md b/design/sql_syntax/Expressions/Column Definition.md
new file mode 100644
index 0000000..f7b296b
--- /dev/null
+++ b/design/sql_syntax/Expressions/Column Definition.md
@@ -0,0 +1,15 @@
+---
+title: Column Definition
+---
+
+# Column Definition
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Common Table Expression.md b/design/sql_syntax/Expressions/Common Table Expression.md
new file mode 100644
index 0000000..a1df557
--- /dev/null
+++ b/design/sql_syntax/Expressions/Common Table Expression.md
@@ -0,0 +1,15 @@
+---
+title: Common Table Expression
+---
+
+# Common Table Expression
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Date Value.md b/design/sql_syntax/Expressions/Date Value.md
new file mode 100644
index 0000000..064a94c
--- /dev/null
+++ b/design/sql_syntax/Expressions/Date Value.md
@@ -0,0 +1,15 @@
+---
+title: Date Value
+---
+
+# Date Value
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Expression.md b/design/sql_syntax/Expressions/Expression.md
new file mode 100644
index 0000000..70b0367
--- /dev/null
+++ b/design/sql_syntax/Expressions/Expression.md
@@ -0,0 +1,155 @@
+---
+characters: [",", ";", ".", "(", ")"]
+expressions: [Binary Operator, Bind Value, Expression, Filter Clause, Function Arguments, Literal Value, Over Clause, Raise Function, Table Function, Unary Operator]
+identifiers: [Collation Name, Column Name, Function Name, Schema Name, Table Name, Type Name]
+keywords: [AND, AS, BETWEEN, CASE, CAST, COLLATE, DISTINCT, ELSE, END, ESCAPE, EXISTS, FROM, GLOB, IN, IS, ISNULL, LIKE, MATCH, NOT, NOTNULL, NULL, REGEXP, THEN, WHEN]
+statements: [SELECT]
+title: Expression
+---
+
+# Expression
+
+```mermaid
+graph LR
+ st(( ))
+ stop(( ))
+
+ st --> literal>Literal Value]
+ literal --> stop
+
+ st --> bind>Bind Value]
+ bind --> stop
+
+ st --> schema_name([Schema Name])
+ st --> table_name([Table Name])
+ st --> column_name([Column Name])
+ schema_name -->|#quot;.#quot;| table_name
+ table_name -->|#quot;.#quot;| column_name
+ column_name --> stop
+
+ st --> unary>Unary Operator]
+ unary --> unary_expression>Expression]
+ unary_expression --> stop
+
+ st --> expr_l>Expression]
+ expr_l --> binary>Binary Operator]
+ binary --> bin_expr_r>Expression]
+ bin_expr_r --> stop
+
+ st --> function_name([Function Name])
+ function_name -->|"#quot;(#quot;"| function_arguments>Function Arguments]
+ function_arguments -->|"#quot;)#quot;"| filter_clause>Filter Clause]
+ function_arguments -->|"#quot;)#quot;"| stop
+ filter_clause --> over_clause>Over Clause]
+ filter_clause --> stop
+ over_clause --> stop
+
+ st -->|"#quot;(#quot;"| expression>Expression]
+ expression -->|"#quot;)#quot;"| stop
+
+ st --> CAST
+ CAST -->|"#quot;(#quot;"| cast_expression>Expression]
+ cast_expression --> AS
+ AS --> type_name([Type Name])
+ type_name -->|"#quot;)#quot;"| stop
+
+ expr_l --> COLLATE
+ COLLATE --> collate_name([Collation Name])
+ collate_name --> stop
+
+ expr_l --> NOT
+ expr_l --> j0((+))
+ NOT --> j0
+ j0 --> LIKE
+ j0 --> GLOB
+ j0 --> REGEXP
+ j0 --> MATCH
+ LIKE --> like_expr>Expression]
+ like_expr --> ESCAPE
+ like_expr --> stop
+ ESCAPE --> escape_expr>Expression]
+ escape_expr --> stop
+ GLOB --> comp_expr_r>Expression]
+ REGEXP --> comp_expr_r
+ MATCH --> comp_expr_r
+ comp_expr_r --> stop
+
+ expr_l --> ISNULL
+ expr_l --> NOTNULL
+ NOT --> NULL
+ ISNULL --> stop
+ NOTNULL --> stop
+ NULL --> stop
+
+ expr_l --> IS
+ IS --> distinct_clause[DISTINCT FROM]
+ IS --> distinct_expr_r>Expression]
+ NOT --> distinct_clause
+ NOT --> distinct_expr_r
+ distinct_clause --> distinct_expr_r
+ distinct_expr_r --> stop
+
+ expr_l --> NOT
+ expr_l --> BETWEEN
+ NOT --> BETWEEN
+ BETWEEN --> bet_expr_l>Expression]
+ bet_expr_l --> AND
+ AND --> bet_expr_r>Expression]
+ bet_expr_r --> stop
+
+ expr_l --> IN
+ NOT --> IN
+ IN --> in_lparen("(")
+ IN --> in_schema_name([Schema Name])
+ IN --> in_table_name([Table Name])
+ IN --> table_function>Table Function]
+ in_lparen --> in_rparen(")")
+ in_rparen --> stop
+ in_lparen --> in_select_statement{{Select Statement}}
+ in_lparen --> in_expr>Expression]
+ in_select_statement -->|#quot;,#quot;| in_select_statement
+ in_select_statement -->|#quot;,#quot;| in_expr
+ in_select_statement --> in_rparen
+ in_expr -->|#quot;,#quot;| in_select_statement
+ in_expr --> in_rparen
+ in_expr -->|#quot;,#quot;| in_expr
+ in_schema_name -->|#quot;.#quot;| in_table_name
+ in_schema_name -->|#quot;.#quot;| table_function
+ in_table_name --> stop
+ table_function -->|"#quot;(#quot;"| in_expr2>Expression]
+ in_expr2 -->|#quot;,#quot;| in_expr2
+ in_expr2 -->|"#quot;)#quot;"| stop
+
+ st --> EXISTS
+ st -->|"#quot;(#quot;"| exists_select_statement{{Select Statement}}
+ NOT --> EXISTS
+ EXISTS --> exists_select_statement
+ exists_select_statement -->|"#quot;)#quot;"| stop
+
+ st --> CASE
+ CASE --> case_expr>Case Expression]
+ CASE --> WHEN
+ case_expr --> WHEN
+ WHEN --> when_expr>Expression]
+ when_expr --> THEN
+ THEN --> then_expr>Expression]
+ then_expr -->|#quot;,#quot;| WHEN
+ then_expr --> ELSE
+ then_expr --> END
+ ELSE --> else_expr>Expression]
+ else_expr --> END
+ END --> stop
+
+ st --> raise>Raise Function]
+ raise --> stop
+```
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
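The Expression diagram above packs many alternatives into one rule. As a rough orientation aid, the following hypothetical Rust enum lists the same branches as variants. The names and `String` placeholders are assumptions for illustration only; subqueries, OVER clauses, and other details shown in the diagram are elided.

```rust
// Hypothetical expression tree mirroring the branches of the Expression
// diagram above. Names and String placeholders are illustrative only.
#[derive(Debug)]
enum Expr {
    Literal(String),                                           // Literal Value
    Bind(String),                                              // Bind Value
    Column { schema: Option<String>, table: Option<String>, name: String },
    Unary { op: String, operand: Box<Expr> },                  // Unary Operator
    Binary { op: String, left: Box<Expr>, right: Box<Expr> },  // Binary Operator
    Function { name: String, args: Vec<Expr> },                // FILTER / OVER elided
    Cast { expr: Box<Expr>, type_name: String },               // CAST ( expr AS type )
    Collate { expr: Box<Expr>, collation: String },            // expr COLLATE name
    Like { negated: bool, left: Box<Expr>, right: Box<Expr>, escape: Option<Box<Expr>> },
    IsNull { negated: bool, expr: Box<Expr> },                 // ISNULL / NOTNULL / NOT NULL
    Between { negated: bool, expr: Box<Expr>, low: Box<Expr>, high: Box<Expr> },
    InList { negated: bool, expr: Box<Expr>, items: Vec<Expr> }, // IN (...); subquery form elided
    Exists { negated: bool },                                  // EXISTS ( select ); select elided
    Case { operand: Option<Box<Expr>>, whens: Vec<(Expr, Expr)>, otherwise: Option<Box<Expr>> },
    Raise(String),                                             // Raise Function
}

fn main() {
    // name NOT BETWEEN 'a' AND 'm'
    let e = Expr::Between {
        negated: true,
        expr: Box::new(Expr::Column { schema: None, table: None, name: "name".into() }),
        low: Box::new(Expr::Literal("'a'".into())),
        high: Box::new(Expr::Literal("'m'".into())),
    };
    println!("{:?}", e);
}
```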
diff --git a/design/sql_syntax/Expressions/Filter Clause.md b/design/sql_syntax/Expressions/Filter Clause.md
new file mode 100644
index 0000000..b45a12f
--- /dev/null
+++ b/design/sql_syntax/Expressions/Filter Clause.md
@@ -0,0 +1,27 @@
+---
+characters: ["(", ")"]
+expressions: [Expression]
+keywords: [FILTER]
+title: Filter Clause
+---
+
+# Filter Clause
+
+```mermaid
+graph TB
+ st(( ))
+ stop(( ))
+ st --> FILTER
+ FILTER -->|"#quot;(#quot;"| expression>Expression]
+ expression -->|"#quot;)#quot;"| stop
+```
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Function.md b/design/sql_syntax/Expressions/Function.md
new file mode 100644
index 0000000..5485458
--- /dev/null
+++ b/design/sql_syntax/Expressions/Function.md
@@ -0,0 +1,34 @@
+---
+characters: [",", "(", ")", "*"]
+expressions: [Expression]
+identifiers: [Function Name]
+title: Function
+---
+
+# Function
+
+```mermaid
+graph TB
+ st(( ))
+ stop(( ))
+
+ st --> function_name([Function Name])
+ function_name --> lparen("(")
+ lparen --> expression>Expression]
+ lparen --> ast(*)
+ lparen --> rparen(")")
+ expression --> |#quot;,#quot;| expression
+ expression --> rparen
+ ast --> rparen
+ rparen --> stop
+```
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Module Argument.md b/design/sql_syntax/Expressions/Module Argument.md
new file mode 100644
index 0000000..99c0617
--- /dev/null
+++ b/design/sql_syntax/Expressions/Module Argument.md
@@ -0,0 +1,15 @@
+---
+title: Module Argument
+---
+
+# Module Argument
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Qualified Table Name.md b/design/sql_syntax/Expressions/Qualified Table Name.md
new file mode 100644
index 0000000..77a16a5
--- /dev/null
+++ b/design/sql_syntax/Expressions/Qualified Table Name.md
@@ -0,0 +1,15 @@
+---
+title: Qualified Table Name
+---
+
+# Qualified Table Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Returning Clause.md b/design/sql_syntax/Expressions/Returning Clause.md
new file mode 100644
index 0000000..dee5e4f
--- /dev/null
+++ b/design/sql_syntax/Expressions/Returning Clause.md
@@ -0,0 +1,15 @@
+---
+title: Returning Clause
+---
+
+# Returning Clause
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Table Constraint.md b/design/sql_syntax/Expressions/Table Constraint.md
new file mode 100644
index 0000000..31cc1d1
--- /dev/null
+++ b/design/sql_syntax/Expressions/Table Constraint.md
@@ -0,0 +1,15 @@
+---
+title: Table Constraint
+---
+
+# Table Constraint
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Table Options.md b/design/sql_syntax/Expressions/Table Options.md
new file mode 100644
index 0000000..d2ffbd0
--- /dev/null
+++ b/design/sql_syntax/Expressions/Table Options.md
@@ -0,0 +1,15 @@
+---
+title: Table Options
+---
+
+# Table Options
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Time Value.md b/design/sql_syntax/Expressions/Time Value.md
new file mode 100644
index 0000000..6d1ca52
--- /dev/null
+++ b/design/sql_syntax/Expressions/Time Value.md
@@ -0,0 +1,15 @@
+---
+title: Time Value
+---
+
+# Time Value
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Expressions/Upsert Clause.md b/design/sql_syntax/Expressions/Upsert Clause.md
new file mode 100644
index 0000000..19fda0f
--- /dev/null
+++ b/design/sql_syntax/Expressions/Upsert Clause.md
@@ -0,0 +1,15 @@
+---
+title: Upsert Clause
+---
+
+# Upsert Clause
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(expressions, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Aggregate Function Name.md b/design/sql_syntax/Identifiers/Aggregate Function Name.md
new file mode 100644
index 0000000..0c98e16
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Aggregate Function Name.md
@@ -0,0 +1,16 @@
+---
+keywords: [avg, count, group_concat, max, min, string_agg, sum, total]
+title: Aggregate Function Name
+---
+
+# Aggregate Function Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Collation Name.md b/design/sql_syntax/Identifiers/Collation Name.md
new file mode 100644
index 0000000..04e81e5
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Collation Name.md
@@ -0,0 +1,15 @@
+---
+title: Collation Name
+---
+
+# Collation Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Column Name.md b/design/sql_syntax/Identifiers/Column Name.md
new file mode 100644
index 0000000..8e66021
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Column Name.md
@@ -0,0 +1,15 @@
+---
+title: Column Name
+---
+
+# Column Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Function Name.md b/design/sql_syntax/Identifiers/Function Name.md
new file mode 100644
index 0000000..943846e
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Function Name.md
@@ -0,0 +1,120 @@
+---
+keywords: [abs, changes, char, coalesce, concat, concat_ws, format, glob, hex, ifnull, iif, instr, last_insert_rowid, length, like, likelihood, likely, load_extension, lower, ltrim, max, min, nullif, octet_length, printf, quote, random, randomblob, replace, round, rtrim, sign, soundex, sqlite_compileoption_get, sqlite_compileoption_used, sqlite_offset, sqlite_source, sqlite_version, substr, substring, total_changes, trim, typeof, unhex, unicode, unlikely, upper, zeroblob]
+title: Function Name
+---
+
+# Function Name
+
+```mermaid
+graph LR
+ st(( ))
+ stop(( ))
+
+ st --> abs
+ st --> changes
+ st --> char
+ st --> coalesce
+ st --> concat
+ st --> concat_ws
+ st --> format
+ st --> glob
+ st --> hex
+ st --> ifnull
+ st --> iif
+ st --> instr
+ st --> last_insert_rowid
+ st --> length
+ st --> like
+ st --> likelihood
+ st --> likely
+ st --> load_extension
+ st --> lower
+ st --> ltrim
+ st --> max
+ st --> min
+ st --> nullif
+ st --> octet_length
+ st --> printf
+ st --> quote
+ st --> random
+ st --> randomblob
+ st --> replace
+ st --> round
+ st --> rtrim
+ st --> sign
+ st --> soundex
+ st --> sqlite_compileoption_get
+ st --> sqlite_compileoption_used
+ st --> sqlite_offset
+ st --> sqlite_source
+ st --> sqlite_version
+ st --> substr
+ st --> substring
+ st --> total_changes
+ st --> trim
+ st --> typeof
+ st --> unhex
+ st --> unicode
+ st --> unlikely
+ st --> upper
+ st --> zeroblob
+
+ abs --> stop
+ changes --> stop
+ char --> stop
+ coalesce --> stop
+ concat --> stop
+ concat_ws --> stop
+ format --> stop
+ glob --> stop
+ hex --> stop
+ ifnull --> stop
+ iif --> stop
+ instr --> stop
+ last_insert_rowid --> stop
+ length --> stop
+ like --> stop
+ likelihood --> stop
+ likely --> stop
+ load_extension --> stop
+ lower --> stop
+ ltrim --> stop
+ max --> stop
+ min --> stop
+ nullif --> stop
+ octet_length --> stop
+ printf --> stop
+ quote --> stop
+ random --> stop
+ randomblob --> stop
+ replace --> stop
+ round --> stop
+ rtrim --> stop
+ sign --> stop
+ soundex --> stop
+ sqlite_compileoption_get --> stop
+ sqlite_compileoption_used --> stop
+ sqlite_offset --> stop
+ sqlite_source --> stop
+ sqlite_version --> stop
+ substr --> stop
+ substring --> stop
+ total_changes --> stop
+ trim --> stop
+ typeof --> stop
+ unhex --> stop
+ unicode --> stop
+ unlikely --> stop
+ upper --> stop
+ zeroblob --> stop
+```
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Index Name.md b/design/sql_syntax/Identifiers/Index Name.md
new file mode 100644
index 0000000..f8cea72
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Index Name.md
@@ -0,0 +1,15 @@
+---
+title: Index Name
+---
+
+# Index Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Module Name.md b/design/sql_syntax/Identifiers/Module Name.md
new file mode 100644
index 0000000..6dc5574
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Module Name.md
@@ -0,0 +1,15 @@
+---
+title: Module Name
+---
+
+# Module Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Save Point Name.md b/design/sql_syntax/Identifiers/Save Point Name.md
new file mode 100644
index 0000000..3b1ac43
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Save Point Name.md
@@ -0,0 +1,15 @@
+---
+title: Save Point Name
+---
+
+# Save Point Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Schema Name.md b/design/sql_syntax/Identifiers/Schema Name.md
new file mode 100644
index 0000000..9f551d1
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Schema Name.md
@@ -0,0 +1,15 @@
+---
+title: Schema Name
+---
+
+# Schema Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Table Name.md b/design/sql_syntax/Identifiers/Table Name.md
new file mode 100644
index 0000000..0199f56
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Table Name.md
@@ -0,0 +1,15 @@
+---
+title: Table Name
+---
+
+# Table Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/Trigger Name.md b/design/sql_syntax/Identifiers/Trigger Name.md
new file mode 100644
index 0000000..00264aa
--- /dev/null
+++ b/design/sql_syntax/Identifiers/Trigger Name.md
@@ -0,0 +1,15 @@
+---
+title: Trigger Name
+---
+
+# Trigger Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Identifiers/View Name.md b/design/sql_syntax/Identifiers/View Name.md
new file mode 100644
index 0000000..f176573
--- /dev/null
+++ b/design/sql_syntax/Identifiers/View Name.md
@@ -0,0 +1,15 @@
+---
+title: View Name
+---
+
+# View Name
+
+## Used by
+
+```dataview
+TABLE WITHOUT ID
+ split(file.path,"/")[length(split(file.path,"/"))-2] as Type,
+ file.link AS Element
+FROM "ba-Projects/EpilogLite/sql_syntax"
+WHERE contains(identifiers, this.file.name)
+```
diff --git a/design/sql_syntax/Index.md b/design/sql_syntax/Index.md
new file mode 100644
index 0000000..223e073
--- /dev/null
+++ b/design/sql_syntax/Index.md
@@ -0,0 +1,279 @@
+---
+title: Index
+---
+
+# Index
+
+## Statements
+
+
+
+
+| Statement | Uses expressions | Uses keywords | Uses identifiers | Uses characters |
+| --------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
+| [ALTER TABLE](<./Statements/ALTER TABLE.md>) | - [Column Definition](<./Expressions/Column Definition.md>)
| - ADD
- ALTER
- COLUMN
- DROP
- RENAME
- TABLE
- TO
| - [Column Name](<./Identifiers/Column Name.md>)
- [Schema Name](<./Identifiers/Schema Name.md>)
- [Table Name](<./Identifiers/Table Name.md>)
| |
+| [ANALYZE](<./Statements/ANALYZE.md>) | | | - [Index Name](<./Identifiers/Index Name.md>)
- [Schema Name](<./Identifiers/Schema Name.md>)
- [Table Name](<./Identifiers/Table Name.md>)
| |
+| [ATTACH](<./Statements/ATTACH.md>) | - [Expression](<./Expressions/Expression.md>)
| | - [Schema Name](<./Identifiers/Schema Name.md>)
| |
+| [BEGIN TRANSACTION](<./Statements/BEGIN TRANSACTION.md>) | | - BEGIN
- DEFERRED
- EXCLUSIVE
- IMMEDIATE
- TRANSACTION
| | |
+| [COMMIT TRANSACTION](<./Statements/COMMIT TRANSACTION.md>) | | | | |
+| [CREATE](<./Statements/CREATE.md>) | - [Column Definition](<./Expressions/Column Definition.md>)
- [Expression](<./Expressions/Expression.md>)
- [Module Argument](<./Expressions/Module Argument.md>)
- [Table Constraint](<./Expressions/Table Constraint.md>)
- [Table Options](<./Expressions/Table Options.md>)
| - AFTER
- AS
- BEFORE
- BEGIN
- CREATE
- EACH
- END
- EXISTS
- FOR
- IF
- INDEX
- INSTEAD
- NOT
- OF
- ON
- ROW
- SELECT
- TABLE
- TEMP
- TEMPORARY
- TRIGGER
- UNIQUE
- USING
- VIEW
- VIRTUAL
- WHEN
- WHERE
| - [Column Name](<./Identifiers/Column Name.md>)
- [Index Name](<./Identifiers/Index Name.md>)
- [Module Name](<./Identifiers/Module Name.md>)
- [Schema Name](<./Identifiers/Schema Name.md>)
- [Table Name](<./Identifiers/Table Name.md>)
- [Trigger Name](<./Identifiers/Trigger Name.md>)
- [View Name](<./Identifiers/View Name.md>)
| |
+| [DELETE](<./Statements/DELETE.md>) | - [Common Table Expression](<./Expressions/Common Table Expression.md>)
- [Expression](<./Expressions/Expression.md>)
- [Qualified Table Name](<./Expressions/Qualified Table Name.md>)
- [Returning Clause](<./Expressions/Returning Clause.md>)
| - DELETE
- FROM
- RECURSIVE
- WHERE
- WITH
| | |
+| [DETACH](<./Statements/DETACH.md>) | | | - [Schema Name](<./Identifiers/Schema Name.md>)
| |
+| [DROP](<./Statements/DROP.md>) | | - DROP
- EXISTS
- IF
- INDEX
- TABLE
- TRIGGER
- VIEW
| - [Index Name](<./Identifiers/Index Name.md>)
- [Schema Name](<./Identifiers/Schema Name.md>)
- [Table Name](<./Identifiers/Table Name.md>)
- [Trigger Name](<./Identifiers/Trigger Name.md>)
- [View Name](<./Identifiers/View Name.md>)
| |
+| [END TRANSACTION](<./Statements/END TRANSACTION.md>) | | | | |
+| [EXPLAIN](<./Statements/EXPLAIN.md>) | | | | |
+| [INSERT](<./Statements/INSERT.md>) | - [Common Table Expression](<./Expressions/Common Table Expression.md>)
- [Expression](<./Expressions/Expression.md>)
- [Upsert Clause](<./Expressions/Upsert Clause.md>)
| - ABORT
- DEFAULT
- FAIL
- IGNORE
- INSERT
- INTO
- OR
- RECURSIVE
- REPLACE
- ROLLBACK
- VALUES
- WITH
| - [Alias](Identifiers/Alias)
- [Column Name](<./Identifiers/Column Name.md>)
- [Schema Name](<./Identifiers/Schema Name.md>)
- [Table Name](<./Identifiers/Table Name.md>)
| |
+| [PRAGMA](<./Statements/PRAGMA.md>) | - [Pragma Value](Expressions/Pragma%20Value)
| | - [Pragma Name](Identifiers/Pragma%20Name)
- [Schema Name](<./Identifiers/Schema Name.md>)
| |
+| [REINDEX](<./Statements/REINDEX.md>) | | | - [Collation Name](<./Identifiers/Collation Name.md>)
- [Index Name](<./Identifiers/Index Name.md>)
- [Schema Name](<./Identifiers/Schema Name.md>)
- [Table Name](<./Identifiers/Table Name.md>)
| |
+| [RELEASE](<./Statements/RELEASE.md>) | | | - [Save Point Name](<./Identifiers/Save Point Name.md>)
| |
+| [ROLLBACK TRANSACTION](<./Statements/ROLLBACK TRANSACTION.md>) | | - ROLLBACK
- SAVEPOINT
- TO
- TRANSACTION
| - [Save Point Name](<./Identifiers/Save Point Name.md>)
| |
+| [SAVEPOINT](<./Statements/SAVEPOINT.md>) | | | - [Save Point Name](<./Identifiers/Save Point Name.md>)
| |
+| [SELECT](<./Statements/SELECT.md>) | - [Common Table Expression](<./Expressions/Common Table Expression.md>)
- [Compound Operator](Expressions/Compound%20Operator)
- [Expression](<./Expressions/Expression.md>)
- [Join Clause](Expressions/Join%20Clause)
- [Ordering Term](Expressions/Ordering%20Term)
- [Window Definition](Expressions/Window%20Definition)
| - ALL
- AS
- BY
- DISTINCT
- FROM
- GROUP
- HAVING
- LIMIT
- OFFSET
- ORDER
- RECURSIVE
- SELECT
- VALUES
- WHERE
- WINDOW
- WITH
| - [Column Name](<./Identifiers/Column Name.md>)
- [Subquery](Identifiers/Subquery)
- [Table Name](<./Identifiers/Table Name.md>)
- [Window Name](Identifiers/Window%20Name)
| |
+| [UPDATE](<./Statements/UPDATE.md>) | - [Column Name List](Expressions/Column%20Name%20List)
- [Common Table Expression](<./Expressions/Common Table Expression.md>)
- [Expression](<./Expressions/Expression.md>)
- [Join Clause](Expressions/Join%20Clause)
- [Qualified Table Name](<./Expressions/Qualified Table Name.md>)
- [Returning Clause](<./Expressions/Returning Clause.md>)
- [Subquery](Expressions/Subquery)
| - ABORT
- FAIL
- FROM
- IGNORE
- OR
- RECURSIVE
- REPLACE
- ROLLBACK
- SET
- UPDATE
- WHERE
- WITH
| - [Column Name](<./Identifiers/Column Name.md>)
- [Table Name](<./Identifiers/Table Name.md>)
| |
+| [VACUUM](<./Statements/VACUUM.md>) | | | - [File Name](Identifiers/File%20Name)
- [Schema Name](<./Identifiers/Schema Name.md>)
| |
+
+
+## Expressions
+
+
+
+
+| Expression | Used by |
+| ------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [Binary Operator](Expressions/Binary%20Operator) | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| [Bind Value](Expressions/Bind%20Value) | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| [Column Definition](<./Expressions/Column Definition.md>) | - [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
|
+| [Column Name List](Expressions/Column%20Name%20List) | - [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| [Common Table Expression](<./Expressions/Common Table Expression.md>) | - [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| [Compound Operator](Expressions/Compound%20Operator) | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+| [Expression](<./Expressions/Expression.md>) | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Expression: Expression](<./Expressions/Expression.md>)
- [Expression: Filter Clause](<./Expressions/Filter Clause.md>)
- [Expression: Function](<./Expressions/Function.md>)
- [Statement: ATTACH](<./Statements/ATTACH.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| [Filter Clause](<./Expressions/Filter Clause.md>) | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Expression: Expression](<./Expressions/Expression.md>)
|
+| [Function Arguments](Expressions/Function%20Arguments) | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| [Join Clause](Expressions/Join%20Clause) | - [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| [Literal Value](Expressions/Literal%20Value) | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| [Module Argument](<./Expressions/Module Argument.md>) | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| [Ordering Term](Expressions/Ordering%20Term) | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
|
+| [Over Clause](Expressions/Over%20Clause) | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| [Pragma Value](Expressions/Pragma%20Value) | - [Statement: PRAGMA](<./Statements/PRAGMA.md>)
|
+| [Qualified Table Name](<./Expressions/Qualified Table Name.md>) | - [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| [Raise Function](Expressions/Raise%20Function) | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| [Returning Clause](<./Expressions/Returning Clause.md>) | - [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| [Subquery](Expressions/Subquery) | - [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| [Table Constraint](<./Expressions/Table Constraint.md>) | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| [Table Function](Expressions/Table%20Function) | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| [Table Options](<./Expressions/Table Options.md>) | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| [Unary Operator](Expressions/Unary%20Operator) | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| [Upsert Clause](<./Expressions/Upsert Clause.md>) | - [Statement: INSERT](<./Statements/INSERT.md>)
|
+| [Window Definition](Expressions/Window%20Definition) | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+
+
+## Tokens
+
+### Keywords
+
+
+
+
+| Keyword | Code | Used by |
+| ------------------------- | --------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| \- | `#[token("\-")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| ABORT | `#[token("ABORT")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| abs | `#[token("abs")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| ADD | `#[token("ADD")]` | - [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
|
+| AFTER | `#[token("AFTER")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| ALL | `#[token("ALL")]` | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+| ALTER | `#[token("ALTER")]` | - [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
|
+| ANALYZE | `#[token("ANALYZE")]` | - [Statement: ANALYZE](<./Statements/ANALYZE.md>)
|
+| AND | `#[token("AND")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| AS | `#[token("AS")]` | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: ATTACH](<./Statements/ATTACH.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
|
+| ATTACH | `#[token("ATTACH")]` | - [Statement: ATTACH](<./Statements/ATTACH.md>)
|
+| avg | `#[token("avg")]` | - [Identifier: Aggregate Function Name](<./Identifiers/Aggregate Function Name.md>)
|
+| BEFORE | `#[token("BEFORE")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| BEGIN | `#[token("BEGIN")]` | - [Statement: BEGIN TRANSACTION](<./Statements/BEGIN TRANSACTION.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
|
+| BETWEEN | `#[token("BETWEEN")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| BY | `#[token("BY")]` | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
|
+| CASE | `#[token("CASE")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| CAST | `#[token("CAST")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| changes | `#[token("changes")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| char | `#[token("char")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| coalesce | `#[token("coalesce")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| COLLATE | `#[token("COLLATE")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| COLUMN | `#[token("COLUMN")]` | - [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
|
+| COMMIT | `#[token("COMMIT")]` | - [Statement: COMMIT TRANSACTION](<./Statements/COMMIT TRANSACTION.md>)
- [Statement: END TRANSACTION](<./Statements/END TRANSACTION.md>)
|
+| concat | `#[token("concat")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| concat_ws | `#[token("concat_ws")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| count | `#[token("count")]` | - [Identifier: Aggregate Function Name](<./Identifiers/Aggregate Function Name.md>)
|
+| CREATE | `#[token("CREATE")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| DATABASE | `#[token("DATABASE")]` | - [Statement: ATTACH](<./Statements/ATTACH.md>)
- [Statement: DETACH](<./Statements/DETACH.md>)
|
+| DEFAULT | `#[token("DEFAULT")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
|
+| DEFERRED | `#[token("DEFERRED")]` | - [Statement: BEGIN TRANSACTION](<./Statements/BEGIN TRANSACTION.md>)
|
+| DELETE | `#[token("DELETE")]` | - [Statement: DELETE](<./Statements/DELETE.md>)
|
+| DETACH | `#[token("DETACH")]` | - [Statement: DETACH](<./Statements/DETACH.md>)
|
+| DISTINCT | `#[token("DISTINCT")]` | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
|
+| DROP | `#[token("DROP")]` | - [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
|
+| EACH | `#[token("EACH")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| ELSE | `#[token("ELSE")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| END | `#[token("END")]` | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: COMMIT TRANSACTION](<./Statements/COMMIT TRANSACTION.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: END TRANSACTION](<./Statements/END TRANSACTION.md>)
|
+| ESCAPE | `#[token("ESCAPE")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| EXCLUSIVE | `#[token("EXCLUSIVE")]` | - [Statement: BEGIN TRANSACTION](<./Statements/BEGIN TRANSACTION.md>)
|
+| EXISTS | `#[token("EXISTS")]` | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
|
+| EXPLAIN | `#[token("EXPLAIN")]` | - [Statement: EXPLAIN](<./Statements/EXPLAIN.md>)
|
+| FAIL | `#[token("FAIL")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| FILTER | `#[token("FILTER")]` | - [Expression: Filter Clause](<./Expressions/Filter Clause.md>)
|
+| FOR | `#[token("FOR")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| format | `#[token("format")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| FROM | `#[token("FROM")]` | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| glob | `#[token("glob")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| GLOB | `#[token("GLOB")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| GROUP | `#[token("GROUP")]` | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+| group_concat | `#[token("group_concat")]` | - [Identifier: Aggregate Function Name](<./Identifiers/Aggregate Function Name.md>)
|
+| HAVING | `#[token("HAVING")]` | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+| hex | `#[token("hex")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| IF | `#[token("IF")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
|
+| ifnull | `#[token("ifnull")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| IGNORE | `#[token("IGNORE")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| iif | `#[token("iif")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| IMMEDIATE | `#[token("IMMEDIATE")]` | - [Statement: BEGIN TRANSACTION](<./Statements/BEGIN TRANSACTION.md>)
|
+| IN | `#[token("IN")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| INDEX | `#[token("INDEX")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
|
+| INSERT | `#[token("INSERT")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
|
+| INSTEAD | `#[token("INSTEAD")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| instr | `#[token("instr")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| INTO | `#[token("INTO")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: VACUUM](<./Statements/VACUUM.md>)
|
+| IS | `#[token("IS")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| ISNULL | `#[token("ISNULL")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| last_insert_rowid | `#[token("last_insert_rowid")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| length | `#[token("length")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| like | `#[token("like")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| LIKE | `#[token("LIKE")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| likelihood | `#[token("likelihood")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| likely | `#[token("likely")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| LIMIT | `#[token("LIMIT")]` | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+| load_extension | `#[token("load_extension")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| lower | `#[token("lower")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| ltrim | `#[token("ltrim")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| MATCH | `#[token("MATCH")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| max | `#[token("max")]` | - [Identifier: Aggregate Function Name](<./Identifiers/Aggregate Function Name.md>)
- [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| min | `#[token("min")]` | - [Identifier: Aggregate Function Name](<./Identifiers/Aggregate Function Name.md>)
- [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| NOT | `#[token("NOT")]` | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
|
+| NOTNULL | `#[token("NOTNULL")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| nullif | `#[token("nullif")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| octet_length | `#[token("octet_length")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| OF | `#[token("OF")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| OFFSET | `#[token("OFFSET")]` | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+| ON | `#[token("ON")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| OR | `#[token("OR")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| ORDER | `#[token("ORDER")]` | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
|
+| PLAN | `#[token("PLAN")]` | - [Statement: EXPLAIN](<./Statements/EXPLAIN.md>)
|
+| PRAGMA | `#[token("PRAGMA")]` | - [Statement: PRAGMA](<./Statements/PRAGMA.md>)
|
+| printf | `#[token("printf")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| QUERY | `#[token("QUERY")]` | - [Statement: EXPLAIN](<./Statements/EXPLAIN.md>)
|
+| quote | `#[token("quote")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| random | `#[token("random")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| randomblob | `#[token("randomblob")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| RECURSIVE | `#[token("RECURSIVE")]` | - [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| REGEXP | `#[token("REGEXP")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| REINDEX | `#[token("REINDEX")]` | - [Statement: REINDEX](<./Statements/REINDEX.md>)
|
+| RELEASE | `#[token("RELEASE")]` | - [Statement: RELEASE](<./Statements/RELEASE.md>)
|
+| RENAME | `#[token("RENAME")]` | - [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
|
+| replace | `#[token("replace")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| REPLACE | `#[token("REPLACE")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| ROLLBACK | `#[token("ROLLBACK")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: ROLLBACK TRANSACTION](<./Statements/ROLLBACK TRANSACTION.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| round | `#[token("round")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| ROW | `#[token("ROW")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| rtrim | `#[token("rtrim")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| SAVEPOINT | `#[token("SAVEPOINT")]` | - [Statement: RELEASE](<./Statements/RELEASE.md>)
- [Statement: ROLLBACK TRANSACTION](<./Statements/ROLLBACK TRANSACTION.md>)
- [Statement: SAVEPOINT](<./Statements/SAVEPOINT.md>)
|
+| SELECT | `#[token("SELECT")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
|
+| SET | `#[token("SET")]` | - [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| sign | `#[token("sign")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| soundex | `#[token("soundex")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| sqlite_compileoption_get | `#[token("sqlite_compileoption_get")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| sqlite_compileoption_used | `#[token("sqlite_compileoption_used")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| sqlite_offset | `#[token("sqlite_offset")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| sqlite_source | `#[token("sqlite_source")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| sqlite_version | `#[token("sqlite_version")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| string_agg | `#[token("string_agg")]` | - [Identifier: Aggregate Function Name](<./Identifiers/Aggregate Function Name.md>)
|
+| substr | `#[token("substr")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| substring | `#[token("substring")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| sum | `#[token("sum")]` | - [Identifier: Aggregate Function Name](<./Identifiers/Aggregate Function Name.md>)
|
+| TABLE | `#[token("TABLE")]` | - [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
|
+| TEMP | `#[token("TEMP")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| TEMPORARY | `#[token("TEMPORARY")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| THEN | `#[token("THEN")]` | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| TO | `#[token("TO")]` | - [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
- [Statement: ROLLBACK TRANSACTION](<./Statements/ROLLBACK TRANSACTION.md>)
|
+| total | `#[token("total")]` | - [Identifier: Aggregate Function Name](<./Identifiers/Aggregate Function Name.md>)
|
+| total_changes | `#[token("total_changes")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| TRANSACTION | `#[token("TRANSACTION")]` | - [Statement: BEGIN TRANSACTION](<./Statements/BEGIN TRANSACTION.md>)
- [Statement: COMMIT TRANSACTION](<./Statements/COMMIT TRANSACTION.md>)
- [Statement: END TRANSACTION](<./Statements/END TRANSACTION.md>)
- [Statement: ROLLBACK TRANSACTION](<./Statements/ROLLBACK TRANSACTION.md>)
|
+| TRIGGER | `#[token("TRIGGER")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
|
+| trim | `#[token("trim")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| typeof | `#[token("typeof")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| unhex | `#[token("unhex")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| unicode | `#[token("unicode")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| UNIQUE | `#[token("UNIQUE")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| unlikely | `#[token("unlikely")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| UPDATE | `#[token("UPDATE")]` | - [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| upper | `#[token("upper")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+| USING | `#[token("USING")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| VACUUM | `#[token("VACUUM")]` | - [Statement: VACUUM](<./Statements/VACUUM.md>)
|
+| VALUES | `#[token("VALUES")]` | - [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
|
+| VIEW | `#[token("VIEW")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
|
+| VIRTUAL | `#[token("VIRTUAL")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| WHEN | `#[token("WHEN")]` | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
|
+| WHERE | `#[token("WHERE")]` | - [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| WINDOW | `#[token("WINDOW")]` | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+| WITH | `#[token("WITH")]` | - [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| zeroblob | `#[token("zeroblob")]` | - [Identifier: Function Name](<./Identifiers/Function Name.md>)
|
+
+
+## Identifiers
+
+
+
+
+| Identifier | Used by |
+| ------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [Aggregate Function Name](<./Identifiers/Aggregate Function Name.md>) | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
|
+| [Alias](<./Identifiers/Alias.md>) | - [Statement: INSERT](<./Statements/INSERT.md>)
|
+| [Collation Name](<./Identifiers/Collation Name.md>) | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: REINDEX](<./Statements/REINDEX.md>)
|
+| [Column Name](<./Identifiers/Column Name.md>) | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| [File Name](<./Identifiers/File Name.md>) | - [Statement: VACUUM](<./Statements/VACUUM.md>)
|
+| [Function Name](<./Identifiers/Function Name.md>) | - [Expression: Expression](<./Expressions/Expression.md>)
- [Expression: Function](<./Expressions/Function.md>)
|
+| [Index Name](<./Identifiers/Index Name.md>) | - [Statement: ANALYZE](<./Statements/ANALYZE.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
- [Statement: REINDEX](<./Statements/REINDEX.md>)
|
+| [Module Name](<./Identifiers/Module Name.md>) | - [Statement: CREATE](<./Statements/CREATE.md>)
|
+| [Pragma Name](<./Identifiers/Pragma Name.md>) | - [Statement: PRAGMA](<./Statements/PRAGMA.md>)
|
+| [Save Point Name](<./Identifiers/Save Point Name.md>) | - [Statement: RELEASE](<./Statements/RELEASE.md>)
- [Statement: ROLLBACK TRANSACTION](<./Statements/ROLLBACK TRANSACTION.md>)
- [Statement: SAVEPOINT](<./Statements/SAVEPOINT.md>)
|
+| [Schema Name](<./Identifiers/Schema Name.md>) | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
- [Statement: ANALYZE](<./Statements/ANALYZE.md>)
- [Statement: ATTACH](<./Statements/ATTACH.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DETACH](<./Statements/DETACH.md>)
- [Statement: DROP](<./Statements/DROP.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: PRAGMA](<./Statements/PRAGMA.md>)
- [Statement: REINDEX](<./Statements/REINDEX.md>)
- [Statement: VACUUM](<./Statements/VACUUM.md>)
|
+| [Subquery](<./Identifiers/Subquery.md>) | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+| [Table Name](<./Identifiers/Table Name.md>) | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
- [Statement: ANALYZE](<./Statements/ANALYZE.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: REINDEX](<./Statements/REINDEX.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| [Trigger Name](<./Identifiers/Trigger Name.md>) | - [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
|
+| [Type Name](<./Identifiers/Type Name.md>) | - [Expression: Expression](<./Expressions/Expression.md>)
|
+| [View Name](<./Identifiers/View Name.md>) | - [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
|
+| [Window Name](<./Identifiers/Window Name.md>) | - [Statement: SELECT](<./Statements/SELECT.md>)
|
+
+
+## Comments
+
+
+
+
+| Type | Code |
+| ------------------------------------------------------------------------------------------ | --------------------------------------------- |
+| [Multi-line Comment](<./Comments/Multi-line Comment.md>) | `#[regex(r"/\*(?:[^\*]\|\*[^/])*(?:\**/\|$)")]` |
+| [Single Line Comment](<./Comments/Single Line Comment.md>) | `#[regex(r"//[^\n]*")]` |
+
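+For illustration, text matched by each pattern (the single-line form uses `//`, as defined by the regex above):
+
+```sql
+/* a multi-line
+   comment */
+// a single-line comment
+```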
+
+## Characters
+
+
+
+
+| Character | Code | Used by |
+| --------- | --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| , | `#[token(",")]` | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Expression: Expression](<./Expressions/Expression.md>)
- [Expression: Function](<./Expressions/Function.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+| ; | `#[token(";")]` | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
- [Statement: ANALYZE](<./Statements/ANALYZE.md>)
- [Statement: ATTACH](<./Statements/ATTACH.md>)
- [Statement: BEGIN TRANSACTION](<./Statements/BEGIN TRANSACTION.md>)
- [Statement: COMMIT TRANSACTION](<./Statements/COMMIT TRANSACTION.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DELETE](<./Statements/DELETE.md>)
- [Statement: DETACH](<./Statements/DETACH.md>)
- [Statement: DROP](<./Statements/DROP.md>)
- [Statement: END TRANSACTION](<./Statements/END TRANSACTION.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: PRAGMA](<./Statements/PRAGMA.md>)
- [Statement: REINDEX](<./Statements/REINDEX.md>)
- [Statement: RELEASE](<./Statements/RELEASE.md>)
- [Statement: ROLLBACK TRANSACTION](<./Statements/ROLLBACK TRANSACTION.md>)
- [Statement: SAVEPOINT](<./Statements/SAVEPOINT.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
- [Statement: VACUUM](<./Statements/VACUUM.md>)
|
+| . | `#[token(".")]` | - [Expression: Expression](<./Expressions/Expression.md>)
- [Statement: ALTER TABLE](<./Statements/ALTER TABLE.md>)
- [Statement: ANALYZE](<./Statements/ANALYZE.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: DROP](<./Statements/DROP.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: PRAGMA](<./Statements/PRAGMA.md>)
- [Statement: REINDEX](<./Statements/REINDEX.md>)
|
+| ( | `#[token("(")]` | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Expression: Expression](<./Expressions/Expression.md>)
- [Expression: Filter Clause](<./Expressions/Filter Clause.md>)
- [Expression: Function](<./Expressions/Function.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: PRAGMA](<./Statements/PRAGMA.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
|
+| ) | `#[token(")")]` | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Expression: Expression](<./Expressions/Expression.md>)
- [Expression: Filter Clause](<./Expressions/Filter Clause.md>)
- [Expression: Function](<./Expressions/Function.md>)
- [Statement: CREATE](<./Statements/CREATE.md>)
- [Statement: INSERT](<./Statements/INSERT.md>)
- [Statement: PRAGMA](<./Statements/PRAGMA.md>)
- [Statement: SELECT](<./Statements/SELECT.md>)
|
+| * | `#[token("*")]` | - [Expression: Aggregate](<./Expressions/Aggregate.md>)
- [Expression: Function](<./Expressions/Function.md>)
|
+| = | `#[token("=")]` | - [Statement: PRAGMA](<./Statements/PRAGMA.md>)
- [Statement: UPDATE](<./Statements/UPDATE.md>)
|
+
diff --git a/design/sql_syntax/Statements/ALTER TABLE.md b/design/sql_syntax/Statements/ALTER TABLE.md
new file mode 100644
index 0000000..cb89a7e
--- /dev/null
+++ b/design/sql_syntax/Statements/ALTER TABLE.md
@@ -0,0 +1,42 @@
+---
+characters: [";", "."]
+expressions: [Column Definition]
+identifiers: [Column Name, Schema Name, Table Name]
+keywords: [ADD, ALTER, COLUMN, DROP, RENAME, TABLE, TO]
+title: ALTER TABLE
+---
+
+# ALTER TABLE
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+ st --> ALTER
+ ALTER --> TABLE
+ TABLE --> schema_name([Schema Name])
+ TABLE --> table_name([Table Name])
+ schema_name -->|#quot;.#quot;| table_name
+ table_name --> RENAME
+ table_name --> ADD
+ table_name --> DROP
+ RENAME --> TO
+ RENAME --> COLUMN
+ RENAME --> column_name([Column Name])
+ COLUMN --> column_name
+ TO --> new_table_name([Table Name])
+ new_table_name --> semi
+ column_name --> TO2["TO"]
+ TO2 --> new_column_name([Column Name])
+ new_column_name --> semi
+ ADD --> COLUMN2["COLUMN"]
+ ADD --> column_definition>Column Definition]
+ COLUMN2 --> column_definition
+ column_definition --> semi
+ DROP --> COLUMN3["COLUMN"]
+ DROP --> column_name2([Column Name])
+ COLUMN3 --> column_name2
+ column_name2 --> semi
+```
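+
+For illustration, a few statements that follow the paths above (table and column names are placeholders):
+
+```sql
+ALTER TABLE main.users RENAME TO customers;
+ALTER TABLE customers RENAME COLUMN name TO full_name;
+ALTER TABLE customers ADD COLUMN age INTEGER;
+ALTER TABLE customers DROP COLUMN age;
+```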
diff --git a/design/sql_syntax/Statements/ANALYZE.md b/design/sql_syntax/Statements/ANALYZE.md
new file mode 100644
index 0000000..b130715
--- /dev/null
+++ b/design/sql_syntax/Statements/ANALYZE.md
@@ -0,0 +1,25 @@
+---
+characters: [";", "."]
+identifiers: [Index Name, Schema Name, Table Name]
+keywords: [ANALYZE]
+title: ANALYZE
+---
+
+# ANALYZE
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+ st --> ANALYZE
+ ANALYZE --> schema_name([Schema Name])
+ ANALYZE --> index_name([Index Name])
+ ANALYZE --> table_name([Table Name])
+ schema_name -->|#quot;.#quot;| index_name
+ schema_name -->|#quot;.#quot;| table_name
+ schema_name --> semi
+ table_name --> semi
+ index_name --> semi
+```
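+
+For illustration (schema, table, and index names are placeholders):
+
+```sql
+ANALYZE main;
+ANALYZE users;
+ANALYZE main.idx_users_name;
+```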
diff --git a/design/sql_syntax/Statements/ATTACH.md b/design/sql_syntax/Statements/ATTACH.md
new file mode 100644
index 0000000..a16d20a
--- /dev/null
+++ b/design/sql_syntax/Statements/ATTACH.md
@@ -0,0 +1,24 @@
+---
+characters: [";"]
+expressions: [Expression]
+identifiers: [Schema Name]
+keywords: [AS, ATTACH, DATABASE]
+title: ATTACH
+---
+
+# ATTACH
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+ st --> ATTACH
+ ATTACH --> DATABASE
+ ATTACH --> expression>Expression]
+ DATABASE --> expression
+ expression --> AS
+ AS --> schema_name([Schema Name])
+ schema_name --> semi
+```
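+
+For illustration (the file name expression here is a placeholder string literal):
+
+```sql
+ATTACH DATABASE 'archive.db' AS archive;
+ATTACH 'archive.db' AS archive;
+```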
diff --git a/design/sql_syntax/Statements/BEGIN TRANSACTION.md b/design/sql_syntax/Statements/BEGIN TRANSACTION.md
new file mode 100644
index 0000000..eb07d12
--- /dev/null
+++ b/design/sql_syntax/Statements/BEGIN TRANSACTION.md
@@ -0,0 +1,23 @@
+---
+characters: [";"]
+keywords: [BEGIN, DEFERRED, EXCLUSIVE, IMMEDIATE, TRANSACTION]
+title: BEGIN TRANSACTION
+---
+
+# BEGIN TRANSACTION
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+ st --> BEGIN
+ BEGIN --> DEFERRED
+ BEGIN --> IMMEDIATE
+ BEGIN --> EXCLUSIVE
+ DEFERRED --> TRANSACTION
+ IMMEDIATE --> TRANSACTION
+ EXCLUSIVE --> TRANSACTION
+ TRANSACTION --> semi
+```
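+
+For illustration (as diagrammed, one of the three modes precedes TRANSACTION):
+
+```sql
+BEGIN DEFERRED TRANSACTION;
+BEGIN IMMEDIATE TRANSACTION;
+```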
diff --git a/design/sql_syntax/Statements/COMMIT TRANSACTION.md b/design/sql_syntax/Statements/COMMIT TRANSACTION.md
new file mode 100644
index 0000000..51fccae
--- /dev/null
+++ b/design/sql_syntax/Statements/COMMIT TRANSACTION.md
@@ -0,0 +1,20 @@
+---
+aliases: [COMMIT TRANSACTION]
+characters: [";"]
+keywords: [COMMIT, END, TRANSACTION]
+linter-yaml-title-alias: COMMIT TRANSACTION
+title: COMMIT TRANSACTION
+---
+
+# COMMIT TRANSACTION
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+ st --> COMMIT
+ COMMIT --> TRANSACTION
+ TRANSACTION --> semi
+```
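+
+For illustration:
+
+```sql
+COMMIT TRANSACTION;
+```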
diff --git a/design/sql_syntax/Statements/CREATE.md b/design/sql_syntax/Statements/CREATE.md
new file mode 100644
index 0000000..e5121d5
--- /dev/null
+++ b/design/sql_syntax/Statements/CREATE.md
@@ -0,0 +1,141 @@
+---
+characters: [",", ";", ".", "(", ")"]
+expressions: [Column Definition, Expression, Module Argument, Table Constraint, Table Options]
+identifiers: [Column Name, Index Name, Module Name, Schema Name, Table Name, Trigger Name, View Name]
+keywords: [AFTER, AS, BEFORE, BEGIN, CREATE, EACH, END, EXISTS, FOR, IF, INDEX, INSTEAD, NOT, OF, ON, ROW, SELECT, TABLE, TEMP, TEMPORARY, TRIGGER, UNIQUE, USING, VIEW, VIRTUAL, WHEN, WHERE]
+statements: [DELETE, INSERT, SELECT, UPDATE]
+title: CREATE
+---
+
+# CREATE
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+ st --> CREATE
+ CREATE --> UNIQUE
+ CREATE --> INDEX
+ UNIQUE --> INDEX
+ INDEX --> index_exists[IF NOT EXISTS]
+ INDEX --> index_schema_name([Schema Name])
+ INDEX --> index_name([Index Name])
+ index_exists --> index_schema_name([Schema Name])
+ index_exists --> index_name
+ index_schema_name -->|#quot;.#quot;| index_name
+ index_name --> ON
+ ON --> index_table_name([Table Name])
+ index_table_name --> index_table_lparen("(")
+ index_table_lparen --> index_column_name([Column Name])
+ index_column_name -->|#quot;,#quot;| index_column_name
+ index_column_name --> index_table_rparen(")")
+ index_table_rparen --> WHERE
+ index_table_rparen --> semi
+ WHERE --> expression>Expression]
+ expression --> semi
+
+ CREATE --> TEMP
+ CREATE --> TEMPORARY
+ CREATE --> TABLE
+ TEMP --> TABLE
+ TEMPORARY --> TABLE
+ TABLE --> table_exists_clause[IF NOT EXISTS]
+ TABLE --> table_schema_name([Schema Name])
+ TABLE --> table_name([Table Name])
+ table_exists_clause --> table_schema_name
+ table_exists_clause --> table_name
+ table_schema_name -->|#quot;.#quot;| table_name
+ table_name --> AS
+ table_name --> table_lparen("(")
+ AS --> select_statement{{Select Statement}}
+ select_statement --> semi
+ table_lparen --> column_definition>Column Definition]
+ column_definition -->|#quot;,#quot;| column_definition
+ column_definition -->|#quot;,#quot;| table_constraint>Table Constraint]
+ column_definition --> table_rparen(")")
+ table_constraint -->|#quot;,#quot;| table_constraint
+ table_constraint --> table_rparen
+ table_rparen --> table_options>Table Options]
+ table_rparen --> semi
+ table_options --> semi
+
+  TEMP --> TRIGGER
+  TEMPORARY --> TRIGGER
+  CREATE --> TRIGGER
+ TRIGGER --> trigger_exists[IF NOT EXISTS]
+ TRIGGER --> trigger_schema_name([Schema Name])
+ TRIGGER --> trigger_name([Trigger Name])
+ trigger_exists --> trigger_schema_name
+ trigger_exists --> trigger_name
+ trigger_schema_name -->|#quot;.#quot;| trigger_name
+ trigger_name --> BEFORE
+ trigger_name --> AFTER
+ trigger_name --> instead[INSTEAD OF]
+ trigger_name --> j0((+))
+ BEFORE --> j0
+ AFTER --> j0
+ instead --> j0
+ j0 --> DELETE
+ j0 --> INSERT
+ j0 --> UPDATE
+ DELETE --> trigger_on[ON]
+ INSERT --> trigger_on
+ UPDATE --> trigger_on
+ UPDATE --> OF
+ OF --> update_column_name([Column Name])
+ update_column_name -->|#quot;,#quot;| update_column_name
+ update_column_name --> trigger_on
+ trigger_on --> trigger_table_name([Table Name])
+ trigger_table_name --> for_each_row[FOR EACH ROW]
+ trigger_table_name --> WHEN
+ trigger_table_name --> BEGIN
+ for_each_row --> WHEN
+ for_each_row --> BEGIN
+ WHEN --> trigger_when_expression>Expression]
+ trigger_when_expression --> BEGIN
+ BEGIN --> update_statement{{Update Statement}}
+ BEGIN --> insert_statement{{Insert Statement}}
+ BEGIN --> delete_statement{{Delete Statement}}
+ BEGIN --> trigger_select_statement{{Select Statement}}
+ update_statement --> END
+ insert_statement --> END
+ delete_statement --> END
+ trigger_select_statement --> END
+ END --> semi
+
+ TEMP --> VIEW
+ TEMPORARY --> VIEW
+ CREATE --> VIEW
+ VIEW --> view_exists[IF NOT EXISTS]
+ VIEW --> view_schema_name([Schema Name])
+  VIEW --> view_name([View Name])
+ view_exists --> view_schema_name
+ view_exists --> view_name
+ view_schema_name -->|#quot;.#quot;| view_name
+ view_name --> view_as[AS]
+ view_name --> view_lparen("(")
+ view_as --> view_select_statement{{Select Statement}}
+ view_lparen --> view_column_name([Column Name])
+ view_column_name -->|#quot;,#quot;| view_column_name
+ view_column_name --> view_rparen(")")
+ view_rparen --> view_as
+ view_select_statement --> semi
+
+ CREATE --> VIRTUAL
+ VIRTUAL --> v_table[TABLE]
+ v_table --> v_exists[IF NOT EXISTS]
+ v_table --> v_schema_name([Schema Name])
+ v_table --> v_table_name([Table Name])
+ v_exists --> v_schema_name
+ v_exists --> v_table_name
+ v_schema_name -->|#quot;.#quot;| v_table_name
+ v_table_name --> USING
+ USING --> module_name([Module Name])
+ module_name --> semi
+ module_name --> m_lparen("(")
+ m_lparen --> m_argument>Module Argument]
+ m_argument -->|#quot;,#quot;| m_argument
+ m_argument --> m_rparen(")")
+ m_rparen --> semi
+```
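+
+For illustration, one statement per branch of the diagram (object names are placeholders, and fts5 stands in for any module name):
+
+```sql
+CREATE TABLE IF NOT EXISTS main.users (id INTEGER, name TEXT);
+CREATE UNIQUE INDEX idx_users_name ON users (name) WHERE name IS NOT NULL;
+CREATE TEMP VIEW active_users AS SELECT id, name FROM users;
+CREATE TRIGGER users_ai AFTER INSERT ON users BEGIN UPDATE users SET active = 1; END;
+CREATE VIRTUAL TABLE notes USING fts5(body);
+```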
diff --git a/design/sql_syntax/Statements/DELETE.md b/design/sql_syntax/Statements/DELETE.md
new file mode 100644
index 0000000..72341a9
--- /dev/null
+++ b/design/sql_syntax/Statements/DELETE.md
@@ -0,0 +1,32 @@
+---
+characters: [",", ";"]
+expressions: [Common Table Expression, Expression, Qualified Table Name, Returning Clause]
+keywords: [DELETE, FROM, RECURSIVE, WHERE, WITH]
+title: DELETE
+---
+
+# DELETE
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+ st --> WITH
+ st --> DELETE
+ WITH --> RECURSIVE
+ WITH --> common_table_expression
+ RECURSIVE --> common_table_expression>Common Table Expression]
+ common_table_expression -->|#quot;,#quot;| common_table_expression
+ common_table_expression --> DELETE
+ DELETE --> FROM
+ FROM --> qualified_table_name>Qualified Table Name]
+ qualified_table_name --> WHERE
+ qualified_table_name --> returning_clause>Returning Clause]
+ qualified_table_name --> semi
+ WHERE --> expression>Expression]
+ expression --> returning_clause
+ expression --> semi
+ returning_clause --> semi
+```
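+
+For illustration (table and column names are placeholders):
+
+```sql
+DELETE FROM logs;
+DELETE FROM users WHERE active = 0;
+WITH stale AS (SELECT id FROM sessions) DELETE FROM sessions WHERE expired = 1;
+```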
diff --git a/design/sql_syntax/Statements/DETACH.md b/design/sql_syntax/Statements/DETACH.md
new file mode 100644
index 0000000..398dfae
--- /dev/null
+++ b/design/sql_syntax/Statements/DETACH.md
@@ -0,0 +1,21 @@
+---
+characters: [";"]
+identifiers: [Schema Name]
+keywords: [DATABASE, DETACH]
+title: DETACH
+---
+
+# DETACH
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+ st --> DETACH
+ DETACH --> DATABASE
+ DETACH --> schema_name([Schema Name])
+ DATABASE --> schema_name
+ schema_name --> semi
+```
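+
+For illustration (the schema name is a placeholder):
+
+```sql
+DETACH DATABASE archive;
+DETACH archive;
+```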
diff --git a/design/sql_syntax/Statements/DROP.md b/design/sql_syntax/Statements/DROP.md
new file mode 100644
index 0000000..a77b13d
--- /dev/null
+++ b/design/sql_syntax/Statements/DROP.md
@@ -0,0 +1,54 @@
+---
+characters: [";", "."]
+identifiers: [Index Name, Schema Name, Table Name, Trigger Name, View Name]
+keywords: [DROP, EXISTS, IF, INDEX, TABLE, TRIGGER, VIEW]
+title: DROP
+---
+
+# DROP
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> DROP
+
+ DROP --> INDEX
+ INDEX --> index_exists[IF EXISTS]
+ INDEX --> index_schema_name([Schema Name])
+ INDEX --> index_name([Index Name])
+ index_exists --> index_schema_name
+ index_exists --> index_name
+ index_schema_name -->|#quot;.#quot;| index_name
+ index_name --> semi
+
+ DROP --> TABLE
+ TABLE --> table_exists[IF EXISTS]
+ TABLE --> table_schema_name([Schema Name])
+ TABLE --> table_name([Table Name])
+ table_exists --> table_schema_name
+ table_exists --> table_name
+ table_schema_name -->|#quot;.#quot;|table_name
+ table_name --> semi
+
+ DROP --> TRIGGER
+ TRIGGER --> trigger_exists[IF EXISTS]
+ TRIGGER --> trigger_schema_name([Schema Name])
+ TRIGGER --> trigger_name([Trigger Name])
+ trigger_exists --> trigger_schema_name
+ trigger_exists --> trigger_name
+ trigger_schema_name -->|#quot;.#quot;| trigger_name
+ trigger_name --> semi
+
+ DROP --> VIEW
+ VIEW --> view_exists[IF EXISTS]
+ VIEW --> view_schema_name([Schema Name])
+ VIEW --> view_name([View Name])
+ view_exists --> view_schema_name
+ view_exists --> view_name
+ view_schema_name -->|#quot;.#quot;| view_name
+ view_name --> semi
+```
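+
+For illustration, one statement per branch (object names are placeholders):
+
+```sql
+DROP INDEX IF EXISTS main.idx_users_name;
+DROP TABLE users;
+DROP TRIGGER IF EXISTS users_ai;
+DROP VIEW active_users;
+```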
diff --git a/design/sql_syntax/Statements/END TRANSACTION.md b/design/sql_syntax/Statements/END TRANSACTION.md
new file mode 100644
index 0000000..c2d4839
--- /dev/null
+++ b/design/sql_syntax/Statements/END TRANSACTION.md
@@ -0,0 +1,21 @@
+---
+aliases: [COMMIT or END TRANSACTION]
+characters: [";"]
+keywords: [COMMIT, END, TRANSACTION]
+linter-yaml-title-alias: COMMIT or END TRANSACTION
+title: COMMIT or END TRANSACTION
+---
+
+# COMMIT or END TRANSACTION
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> END
+ END --> TRANSACTION
+ TRANSACTION --> semi
+```
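+
+For illustration:
+
+```sql
+END TRANSACTION;
+```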
diff --git a/design/sql_syntax/Statements/EXPLAIN.md b/design/sql_syntax/Statements/EXPLAIN.md
new file mode 100644
index 0000000..0456dd8
--- /dev/null
+++ b/design/sql_syntax/Statements/EXPLAIN.md
@@ -0,0 +1,55 @@
+---
+keywords: [EXPLAIN, PLAN, QUERY]
+statements: [ALTER TABLE, ANALYZE, ATTACH, BEGIN TRANSACTION, COMMIT or END TRANSACTION, CREATE, DELETE, DETACH, DROP, INSERT, PRAGMA, REINDEX, RELEASE, ROLLBACK TRANSACTION, SAVEPOINT, SELECT, VACUUM]
+title: EXPLAIN
+---
+
+# EXPLAIN
+
+```mermaid
+graph LR
+ st(( ))
+ stop(( ))
+
+ st --> EXPLAIN
+ EXPLAIN --> QUERY
+ EXPLAIN --> j0((+))
+ QUERY --> PLAN
+ PLAN --> j0
+ j0 --> alter_table_statement{{ALTER TABLE Statement}}
+  j0 --> analyze_statement{{ANALYZE Statement}}
+ j0 --> attach_statement{{ATTACH Statement}}
+ j0 --> begin_transaction_statement{{BEGIN TRANSACTION Statement}}
+ j0 --> commit_transaction_statement{{COMMIT TRANSACTION Statement}}
+ j0 --> end_transaction_statement{{END TRANSACTION Statement}}
+ j0 --> create_statement{{CREATE Statement}}
+ j0 --> delete_statement{{DELETE Statement}}
+ j0 --> detach_statement{{DETACH Statement}}
+ j0 --> drop_statement{{DROP Statement}}
+ j0 --> insert_statement{{INSERT Statement}}
+ j0 --> pragma_statement{{PRAGMA Statement}}
+ j0 --> reindex_statement{{REINDEX Statement}}
+ j0 --> release_statement{{RELEASE Statement}}
+ j0 --> rollback_transaction_statement{{ROLLBACK Statement}}
+ j0 --> savepoint_statement{{SAVEPOINT Statement}}
+ j0 --> select_statement{{SELECT Statement}}
+ j0 --> vacuum_statement{{VACUUM Statement}}
+ alter_table_statement --> stop
+ analyze_statement --> stop
+ attach_statement --> stop
+ begin_transaction_statement --> stop
+ commit_transaction_statement --> stop
+ end_transaction_statement --> stop
+ create_statement --> stop
+ delete_statement --> stop
+ detach_statement --> stop
+ drop_statement --> stop
+ insert_statement --> stop
+ pragma_statement --> stop
+ reindex_statement --> stop
+ release_statement --> stop
+ rollback_transaction_statement --> stop
+ savepoint_statement --> stop
+ select_statement --> stop
+ vacuum_statement --> stop
+```
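+
+For illustration, wrapping a SELECT (the inner statement can be any of the statements listed above):
+
+```sql
+EXPLAIN SELECT name FROM users WHERE id = 1;
+EXPLAIN QUERY PLAN SELECT name FROM users WHERE id = 1;
+```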
diff --git a/design/sql_syntax/Statements/INSERT.md b/design/sql_syntax/Statements/INSERT.md
new file mode 100644
index 0000000..3506f52
--- /dev/null
+++ b/design/sql_syntax/Statements/INSERT.md
@@ -0,0 +1,85 @@
+---
+characters: [",", ";", ".", "(", ")"]
+expressions: [Common Table Expression, Expression, Returning Clause, Upsert Clause]
+identifiers: [Alias, Column Name, Schema Name, Table Name]
+keywords: [ABORT, AS, DEFAULT, FAIL, IGNORE, INSERT, INTO, OR, RECURSIVE, REPLACE, ROLLBACK, VALUES, WITH]
+statements: [SELECT]
+title: INSERT
+---
+
+# INSERT
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> WITH
+ st --> REPLACE
+ st --> INSERT
+
+ WITH --> RECURSIVE
+ WITH --> common_table_expression>Common Table Expression]
+ RECURSIVE --> common_table_expression
+ common_table_expression -->|#quot;,#quot;| common_table_expression
+ common_table_expression --> REPLACE
+ common_table_expression --> INSERT
+
+ REPLACE --> INTO
+ INSERT --> INTO
+ INSERT --> OR
+ OR --> ABORT
+ OR --> FAIL
+ OR --> IGNORE
+ OR --> REPLACE
+ OR --> ROLLBACK
+ ABORT --> INTO
+ FAIL --> INTO
+ IGNORE --> INTO
+ ROLLBACK --> INTO
+
+ INTO --> schema_name([Schema Name])
+ INTO --> table_name([Table Name])
+ schema_name -->|#quot;.#quot;| table_name
+
+ table_name --> AS
+ table_name --> column_lparen("(")
+ table_name --> j0((+))
+
+ AS --> alias([Alias])
+ alias --> column_lparen("(")
+ alias --> j0
+
+ column_lparen --> column_name([Column Name])
+ column_name -->|#quot;,#quot;| column_name
+ column_name --> column_rparen(")")
+ column_rparen --> j0
+
+ j0 --> VALUES
+ j0 --> select_statement{{Select Statement}}
+ j0 --> default_clause[DEFAULT VALUES]
+
+ VALUES --> values_lparen("(")
+ values_lparen --> expression>Expression]
+ expression -->|#quot;,#quot;| expression
+ expression --> values_rparen(")")
+ values_rparen -->|#quot;,#quot;| values_lparen
+ values_rparen --> j1((+))
+ values_rparen --> j2((+))
+
+ j1 --> upsert_clause>Upsert Clause]
+ upsert_clause --> j2
+
+ select_statement --> j1
+ select_statement --> j2
+
+ j2 --> returning_clause>Returning Clause]
+ j2 --> semi
+
+ default_clause --> j2
+
+
+ returning_clause --> semi
+```
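+
+For illustration (table, column, and alias names are placeholders):
+
+```sql
+INSERT INTO users (name) VALUES ('Ada'), ('Grace');
+INSERT OR IGNORE INTO main.users AS u (id, name) VALUES (1, 'Ada');
+INSERT INTO users DEFAULT VALUES;
+REPLACE INTO users (id, name) VALUES (1, 'Ada');
+```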
diff --git a/design/sql_syntax/Statements/PRAGMA.md b/design/sql_syntax/Statements/PRAGMA.md
new file mode 100644
index 0000000..04a4b0a
--- /dev/null
+++ b/design/sql_syntax/Statements/PRAGMA.md
@@ -0,0 +1,27 @@
+---
+characters: [";", ".", "(", ")", "="]
+expressions: [Pragma Value]
+identifiers: [Pragma Name, Schema Name]
+keywords: [PRAGMA]
+title: PRAGMA
+---
+
+# PRAGMA
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> PRAGMA
+ PRAGMA --> schema_name([Schema Name])
+ PRAGMA --> pragma_name([Pragma Name])
+ schema_name -->|#quot;.#quot;| pragma_name
+ pragma_name --> semi
+ pragma_name -->|#quot;=#quot;| equal_pragma_value>Pragma Value]
+ pragma_name -->|"#quot;(#quot;"| paren_pragma_value>Pragma Value]
+ equal_pragma_value --> semi
+  paren_pragma_value -->|"#quot;)#quot;"| semi
+```
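+
+For illustration (the pragma names shown stand in for whatever pragmas the engine defines):
+
+```sql
+PRAGMA main.user_version;
+PRAGMA user_version = 5;
+PRAGMA table_info(users);
+```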
diff --git a/design/sql_syntax/Statements/REINDEX.md b/design/sql_syntax/Statements/REINDEX.md
new file mode 100644
index 0000000..a51927e
--- /dev/null
+++ b/design/sql_syntax/Statements/REINDEX.md
@@ -0,0 +1,31 @@
+---
+characters: [";", "."]
+identifiers: [Collation Name, Index Name, Schema Name, Table Name]
+keywords: [REINDEX]
+title: REINDEX
+---
+
+# REINDEX
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> REINDEX
+ REINDEX --> semi
+ REINDEX --> collation_name([Collation Name])
+ REINDEX --> schema_name([Schema Name])
+ REINDEX --> table_name([Table Name])
+ REINDEX --> index_name([Index Name])
+
+ collation_name --> semi
+
+ schema_name -->|#quot;.#quot;| table_name
+ schema_name -->|#quot;.#quot;| index_name
+
+ table_name --> semi
+ index_name --> semi
+```
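+
+For illustration (collation, table, and index names are placeholders):
+
+```sql
+REINDEX;
+REINDEX nocase;
+REINDEX main.users;
+REINDEX idx_users_name;
+```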
diff --git a/design/sql_syntax/Statements/RELEASE.md b/design/sql_syntax/Statements/RELEASE.md
new file mode 100644
index 0000000..522ea63
--- /dev/null
+++ b/design/sql_syntax/Statements/RELEASE.md
@@ -0,0 +1,22 @@
+---
+characters: [";"]
+identifiers: [Save Point Name]
+keywords: [RELEASE, SAVEPOINT]
+title: RELEASE
+---
+
+# RELEASE
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> RELEASE
+ RELEASE --> SAVEPOINT
+ RELEASE --> savepoint_name([Save Point Name])
+ SAVEPOINT --> savepoint_name
+ savepoint_name --> semi
+```
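+
+For illustration (the savepoint name is a placeholder):
+
+```sql
+RELEASE SAVEPOINT sp1;
+RELEASE sp1;
+```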
diff --git a/design/sql_syntax/Statements/ROLLBACK TRANSACTION.md b/design/sql_syntax/Statements/ROLLBACK TRANSACTION.md
new file mode 100644
index 0000000..79420d2
--- /dev/null
+++ b/design/sql_syntax/Statements/ROLLBACK TRANSACTION.md
@@ -0,0 +1,26 @@
+---
+characters: [";"]
+identifiers: [Save Point Name]
+keywords: [ROLLBACK, SAVEPOINT, TO, TRANSACTION]
+title: ROLLBACK TRANSACTION
+---
+
+# ROLLBACK TRANSACTION
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> ROLLBACK
+ ROLLBACK --> TRANSACTION
+ ROLLBACK --> TO
+ TRANSACTION --> TO
+ TRANSACTION --> semi
+ TO --> SAVEPOINT
+ TO --> savepoint_name([Save Point Name])
+ SAVEPOINT --> savepoint_name
+ savepoint_name --> semi
+```
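+
+For illustration (the savepoint name is a placeholder):
+
+```sql
+ROLLBACK TRANSACTION;
+ROLLBACK TO sp1;
+ROLLBACK TRANSACTION TO SAVEPOINT sp1;
+```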
diff --git a/design/sql_syntax/Statements/SAVEPOINT.md b/design/sql_syntax/Statements/SAVEPOINT.md
new file mode 100644
index 0000000..36f7c80
--- /dev/null
+++ b/design/sql_syntax/Statements/SAVEPOINT.md
@@ -0,0 +1,20 @@
+---
+characters: [";"]
+identifiers: [Save Point Name]
+keywords: [SAVEPOINT]
+title: SAVEPOINT
+---
+
+# SAVEPOINT
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> SAVEPOINT
+ SAVEPOINT --> savepoint_name([Save Point Name])
+ savepoint_name --> semi
+```
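+
+For illustration (the savepoint name is a placeholder):
+
+```sql
+SAVEPOINT sp1;
+```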
diff --git a/design/sql_syntax/Statements/SELECT.md b/design/sql_syntax/Statements/SELECT.md
new file mode 100644
index 0000000..4f3c6ea
--- /dev/null
+++ b/design/sql_syntax/Statements/SELECT.md
@@ -0,0 +1,118 @@
+---
+characters: [",", ";", "(", ")"]
+expressions: [Common Table Expression, Compound Operator, Expression, Join Clause, Ordering Term, Result Column, Window Definition]
+identifiers: [Column Name, Schema Name, Subquery, Table Name, Window Name]
+keywords: [ALL, AS, BY, DISTINCT, FROM, GROUP, HAVING, LIMIT, OFFSET, ORDER, RECURSIVE, SELECT, VALUES, WHERE, WINDOW, WITH]
+title: SELECT
+---
+
+# SELECT
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> WITH
+ st --> SELECT
+ st --> VALUES
+
+ WITH --> RECURSIVE
+ WITH --> common_table_expression>Common Table Expression]
+ RECURSIVE --> common_table_expression
+
+ common_table_expression -->|#quot;,#quot;| common_table_expression
+ common_table_expression --> j0((+))
+
+ j0 --> SELECT
+ j0 --> VALUES
+
+ SELECT --> result_column>Result Column]
+ SELECT --> DISTINCT
+ SELECT --> ALL
+ DISTINCT --> result_column
+ ALL --> result_column
+
+ result_column -->|#quot;,#quot;| result_column
+ result_column --> j1((+))
+
+ j1 --> FROM
+ j1 --> j2((+))
+
+ j2 --> WHERE
+ j2 --> j3((+))
+
+ j3 --> GROUP
+ j3 --> HAVING
+ j3 --> j4((+))
+
+ j4 --> WINDOW
+ j4 --> j5((+))
+
+ j5 --> compound_operator>Compound Operator]
+ j5 --> order_clause[ORDER BY]
+ j5 --> j6((+))
+
+ j6 --> LIMIT
+ j6 --> semi
+
+  FROM --> from_schema_name([Schema Name])
+  FROM --> from_table_name([Table Name])
+ FROM --> from_subquery>Subquery]
+ FROM --> join_clause>Join Clause]
+
+  from_schema_name -->|#quot;.#quot;| from_table_name
+
+ from_table_name -->|#quot;,#quot;| from_schema_name
+ from_table_name -->|#quot;,#quot;| from_table_name
+ from_table_name -->|#quot;,#quot;| from_subquery
+ from_table_name --> j2
+
+ from_subquery -->|#quot;,#quot;| from_schema_name
+ from_subquery -->|#quot;,#quot;| from_table_name
+ from_subquery -->|#quot;,#quot;| from_subquery
+ from_subquery --> j2
+
+ join_clause --> j2
+
+ WHERE --> where_expression>Expression]
+ where_expression --> j3
+
+  GROUP --> GROUP_BY["BY"]
+ GROUP_BY --> by_expression>Expression]
+ by_expression -->|#quot;,#quot;| by_expression
+ by_expression --> HAVING
+ by_expression --> j4
+
+ HAVING --> hav_expression>Expression]
+ hav_expression --> j4
+
+ WINDOW --> window_name([Window Name])
+ window_name --> AS
+ AS --> window_definition>Window Definition]
+ window_definition -->|#quot;,#quot;| window_name
+ window_definition --> j5
+
+ VALUES -->|"#quot;(#quot;"| values_expression>Expression]
+ values_expression -->|#quot;,#quot;| values_expression
+ values_expression -->|"#quot;),(#quot;"| values_expression
+ values_expression -->|"#quot;)#quot;"| j5
+
+ compound_operator --> j0
+
+  order_clause --> ordering_term>Ordering Term]
+ ordering_term -->|#quot;,#quot;| ordering_term
+ ordering_term --> j6
+
+ LIMIT --> limit_expression>Expression]
+ limit_expression --> OFFSET
+ limit_expression -->|#quot;,#quot;| limit_expression2>Expression]
+ limit_expression --> semi
+ limit_expression2 --> semi
+
+  OFFSET --> offset_expression>Expression]
+ offset_expression --> semi
+
+```
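+
+For illustration, a few queries that follow the paths above (table and column names are placeholders):
+
+```sql
+SELECT name, email FROM users WHERE active = 1 ORDER BY name LIMIT 10 OFFSET 20;
+SELECT DISTINCT country FROM users GROUP BY country HAVING count(*) > 1;
+WITH recent AS (SELECT id FROM logins) SELECT * FROM recent;
+VALUES (1, 'a'), (2, 'b');
+```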
diff --git a/design/sql_syntax/Statements/UPDATE.md b/design/sql_syntax/Statements/UPDATE.md
new file mode 100644
index 0000000..46f5fd1
--- /dev/null
+++ b/design/sql_syntax/Statements/UPDATE.md
@@ -0,0 +1,81 @@
+---
+characters: [",", ";", ".", "="]
+expressions: [Column Name List, Common Table Expression, Expression, Join Clause, Qualified Table Name, Returning Clause, Subquery]
+identifiers: [Column Name, Schema Name, Table Name]
+keywords: [ABORT, FAIL, FROM, IGNORE, OR, RECURSIVE, REPLACE, ROLLBACK, SET, UPDATE, WHERE, WITH]
+title: UPDATE
+---
+
+# UPDATE
+
+```mermaid
+graph TB
+ st(( ))
+ semi(;)
+ stop(( ))
+ semi --> stop
+
+ st --> WITH
+ st --> UPDATE
+
+ WITH --> RECURSIVE
+ WITH --> common_table_expression>Common Table Expression]
+ RECURSIVE --> common_table_expression
+ common_table_expression -->|#quot;,#quot;| common_table_expression
+ common_table_expression --> UPDATE
+
+ UPDATE --> qualified_table_name>Qualified Table Name]
+ UPDATE --> OR
+ OR --> ABORT
+ OR --> FAIL
+ OR --> IGNORE
+ OR --> REPLACE
+ OR --> ROLLBACK
+ ABORT --> qualified_table_name
+ FAIL --> qualified_table_name
+ IGNORE --> qualified_table_name
+ REPLACE --> qualified_table_name
+ ROLLBACK --> qualified_table_name
+ qualified_table_name --> SET
+
+ SET --> column_name([Column Name])
+ SET --> column_name_list>Column Name List]
+ column_name -->|#quot;=#quot;| column_expression>Expression]
+ column_name_list --> |#quot;=#quot;| column_expression
+ column_expression -->|#quot;,#quot;| column_name
+ column_expression -->|#quot;,#quot;| column_name_list
+ column_expression --> FROM
+ column_expression --> WHERE
+ column_expression --> returning_clause>Returning Clause]
+ column_expression --> semi
+
+ FROM --> schema_name([Schema Name])
+ FROM --> table_name([Table Name])
+ FROM --> subquery>Subquery]
+ FROM --> join_clause>Join Clause]
+ FROM --> returning_clause
+
+ schema_name -->|#quot;.#quot;| table_name
+ table_name -->|#quot;,#quot;| schema_name
+ table_name -->|#quot;,#quot;| table_name
+ table_name -->|#quot;,#quot;| subquery
+ table_name --> WHERE
+ table_name --> returning_clause
+ table_name --> semi
+
+ subquery -->|#quot;,#quot;| schema_name
+ subquery -->|#quot;,#quot;| table_name
+ subquery -->|#quot;,#quot;| subquery
+ subquery --> WHERE
+ subquery --> returning_clause
+ subquery --> semi
+
+ join_clause --> WHERE
+ join_clause --> semi
+
+ WHERE --> where_expression>Expression]
+ where_expression --> returning_clause
+ where_expression --> semi
+
+ returning_clause --> semi
+```
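+
+For illustration (table and column names are placeholders):
+
+```sql
+UPDATE users SET active = 0 WHERE last_login IS NULL;
+UPDATE OR IGNORE users SET name = 'Ada', active = 1;
+WITH flagged AS (SELECT id FROM reports) UPDATE users SET active = 0 FROM flagged WHERE users.id = flagged.id;
+```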
diff --git a/design/sql_syntax/Statements/VACUUM.md b/design/sql_syntax/Statements/VACUUM.md
new file mode 100644
index 0000000..1fcd63f
--- /dev/null
+++ b/design/sql_syntax/Statements/VACUUM.md
@@ -0,0 +1,22 @@
+---
+characters: [";"]
+identifiers: [File Name, Schema Name]
+keywords: [INTO, VACUUM]
+title: VACUUM
+---
+
+# VACUUM
+
+```mermaid
+graph TB
+ st(("B0"))
+ semi(((";")))
+ st --> VACUUM
+ VACUUM --> schema_name([Schema Name])
+ VACUUM --> INTO
+ schema_name --> INTO
+ schema_name --> semi
+ INTO --> file_name([File Name])
+ file_name --> semi
+```
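+
+For illustration (as diagrammed, a schema name or an INTO clause follows VACUUM; names are placeholders):
+
+```sql
+VACUUM main;
+VACUUM INTO 'backup.db';
+VACUUM main INTO 'backup.db';
+```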
+
diff --git a/manifest b/manifest
deleted file mode 100644
index e69de29..0000000
diff --git a/manifest.uuid b/manifest.uuid
deleted file mode 100644
index e69de29..0000000