
[RFC] Interacting with DDlog programs #11

Open
mihaibudiu opened this issue Oct 8, 2021 · 18 comments
Labels
rfc Request for Comments

Comments

@mihaibudiu

Interacting with DDlog programs

A DDlog program is compiled into a library. The task of sending data to and receiving data from the DDlog program is left to applications built on top.

Interaction with a DDlog program happens either through the CLI or through API calls. CLI operations are in fact interpreted by the CLI tool and converted into API calls. The semantics of these API calls are currently not well specified.

Here are things that should be clearly specified:

  • which API calls can fail and when. For example, inserting a row that would violate a table's primary-key constraint can fail. What other operations can fail?
  • what is the state of the DDlog computation after a failure? The interaction with DDlog is performed by executing multiple commands. If a failure occurs, which commands have affected the DDlog runtime state and which haven't? Does committing a transaction after a failure cause any insertions or deletions to occur? Should a failure imply an automatic rollback? (The sketch after this list makes these questions concrete.)
  • The spec indicates that a transaction commit call can only fail if there is no transaction in progress. Is there any other reason a transaction commit can fail? If yes, what is the state of the system after the failure? Same questions apply to rollback.
  • When can indexes be safely read? Can indexes be read while a transaction is in progress? Is there a race between index access and transaction commits performed by a separate thread?
  • Does the order of operations within a set of commands matter? Does a set of commands that deletes a record and then re-inserts it mean the same thing as no command at all, or does the result depend on the relation contents?
  • Some API calls must be invoked only within a transaction, but for others it is not clear whether they can be invoked at any time.
  • I suspect that the order of API calls performed between a start and end within a transaction matters. The spec indicates that updates are buffered and applied at commit time. Are API calls such as ddlog_clear_relation also buffered?
  • There seems to be no way to report back why a certain apply_commands call has failed. Should some error metadata be returned on error?
  • Are operations like deletion of a missing record or insertion of an existing record in a relation errors or no-ops? Related to this question, perhaps there should be several kinds of input relations: some which apply distinct automatically to input relations, some which assume that the external operations always preserve the relation as a table (and which may have undefined results if this protocol is not obeyed by the environment), and some which return errors when the relation would become inconsistent.
  • Must commands such as ddlog_dump_input_snapshot be run within a transaction? If not, how do they interact with concurrent transactions that may be in progress?
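
To make the failure-semantics questions concrete, here is a minimal sketch of the kind of session interface under discussion. All names are illustrative (they loosely mirror the C API functions such as ddlog_transaction_start and ddlog_apply_updates) and are not the actual generated API; the doc comments mark where the spec is currently silent.

```rust
/// A single change to an input relation (illustrative only).
pub enum Update {
    Insert { relation: String, record: String },
    Delete { relation: String, record: String },
}

/// Hypothetical session interface; method names loosely mirror the C API
/// but are not its actual signatures. The doc comments record the open
/// questions from the list above.
pub trait DDlogSession {
    type Error;

    fn transaction_start(&mut self) -> Result<(), Self::Error>;

    /// Can fail, e.g. on a primary-key violation. Open question: after a
    /// failure, which of the updates in this batch (if any) remain buffered?
    fn apply_updates(&mut self, updates: Vec<Update>) -> Result<(), Self::Error>;

    /// The spec says this fails only when no transaction is in progress.
    /// Open question: are there other failure modes, and do they imply an
    /// automatic rollback?
    fn transaction_commit(&mut self) -> Result<(), Self::Error>;

    fn transaction_rollback(&mut self) -> Result<(), Self::Error>;
}
```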
@mihaibudiu mihaibudiu added the rfc Request for Comments label Oct 8, 2021
@mihaibudiu
Author

This is related to vmware/differential-datalog#372

@ryzhyk
Contributor

ryzhyk commented Oct 11, 2021

Some more thoughts to add to this list:

  • In DDlog-1 input relations with and without primary keys behave differently. Adding a duplicate key or deleting a non-existing key from a relation with a primary key causes an error. Duplicate insertions and deletions are no-ops in relations without a key. Although motivated by real use cases, this asymmetry is confusing. Each API has its advantages: the failing API gives the caller more information about existing keys, while the non-failing API doesn't require error handling. Perhaps the solution is to support a unified API that combines the advantages of both designs: the apply_changes() method never fails and returns lists of successfully and unsuccessfully applied updates, rather than a single status flag. Or perhaps the API should allow the caller to choose failure semantics for each invocation (ignore duplicates or fail when encountering a duplicate). (A rough sketch of one possible shape for this follows the list.)
  • We currently use a private snapshot of the state of input relations to implement set semantics (ignore or fail on duplicates as discussed above). We should use DD's upsert operator instead. In fact this is something we should fix in DDlog-1 too: Use the DD upsert operator instead of maintaining input state in DDlog (differential-datalog#1101).
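
A rough sketch of what such a unified apply_changes() could look like; DuplicatePolicy and ApplyOutcome are hypothetical names introduced here for illustration, not existing DDlog types:

```rust
/// Per-invocation failure semantics, as suggested above (hypothetical).
pub enum DuplicatePolicy {
    /// Duplicate insertions and deletions of missing keys are silently ignored.
    Ignore,
    /// Duplicate insertions and deletions of missing keys are reported back.
    Fail,
}

/// Outcome of applying a batch of updates: instead of a single status flag,
/// the caller learns which updates took effect and which were rejected, and why.
pub struct ApplyOutcome<U, E> {
    pub applied: Vec<U>,
    pub rejected: Vec<(U, E)>,
}

pub trait InputRelation<U, E> {
    /// Never fails as a whole; per-update failures are reported in the outcome.
    fn apply_changes(&mut self, updates: Vec<U>, policy: DuplicatePolicy) -> ApplyOutcome<U, E>;
}
```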

@mihaibudiu
Author

mihaibudiu commented Oct 11, 2021

  • A completely symmetric API would actually add a Delta, the same as the result from a transaction commit_dump_changes. We could add this API.
  • Regarding enforcing invariants on the input collections, that could be done in three ways (as I alluded to above):
    • add a distinct after each input relation. Then each delta is by definition correct (modulo primary keys).
    • specify a preprocessing module logically before DDlog, which maintains the input table and rejects illegal updates
    • enforce no invariants and assume that the users' updates are all legal (e.g., because they come from a real DB or another DDlog instance, which enforces the invariants itself).
  • The problem with your proposal about indicating which updates fail is that the updates actually don't commute. So skipping some will give a completely different semantics to the update set.
  • Another interesting possibility is to add an explicit transaction manager module, which accumulates all changes, applies the transaction, and automatically commits or rolls back. It also serializes all transactions. This would make usage much simpler for users. It would not solve one important issue (which may have no solution at all, in fact) about transactions that mix updates with lookups: we cannot guarantee transactional semantics for such read/write transactions, only for write-only transactions. (A rough sketch of such a manager follows this list.)
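
A rough sketch of such a transaction manager, reusing the illustrative DDlogSession trait and Update type sketched in the issue description (everything here is hypothetical, not the current API):

```rust
use std::sync::Mutex;

/// Hypothetical transaction manager: it serializes transactions with a mutex,
/// applies the accumulated changes, and commits on success or rolls back
/// automatically on any failure.
pub struct TransactionManager<S> {
    session: Mutex<S>,
}

impl<S: DDlogSession> TransactionManager<S> {
    pub fn new(session: S) -> Self {
        Self { session: Mutex::new(session) }
    }

    /// Runs a whole write-only transaction; the caller never observes an
    /// intermediate state. Read/write transactions are out of scope, as noted above.
    pub fn transact(&self, updates: Vec<Update>) -> Result<(), S::Error> {
        let mut s = self.session.lock().unwrap();
        s.transaction_start()?;
        let result = match s.apply_updates(updates) {
            Ok(()) => s.transaction_commit(),
            Err(e) => Err(e),
        };
        if result.is_err() {
            // Best-effort automatic rollback; the original error is what gets reported.
            let _ = s.transaction_rollback();
        }
        result
    }
}
```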

@mihaibudiu
Author

DDlog is a streaming database, but by adding indexes and their lookup APIs, and primary keys, we convert it into something that is more similar to a traditional DB. But it is not really a traditional DB, so people who expect it to behave as one may be surprised.

@ryzhyk
Contributor

ryzhyk commented Oct 11, 2021

add a distinct after each input relation. Then each delta is by definition correct (modulo primary keys).

This won't work: insert->insert->delete behaves like an insert, whereas the "expected" behavior (assuming the user expects set semantics) is a no-op.

enforce no invariants and assume that the users' updates are all legal (e.g., because they come from a real DB or another DDlog instance, which enforces the invariants itself).

From experience, many users struggle with this, which is why the current semantics was introduced in the first place. This is the same issue that Frank talks about in the upsert blog.

specify a preprocessing module logically before DDlog, which maintains the input table and rejects illegal updates

This preprocessing module will end up maintaining a private snapshot of the input state, as we do now. Upserts avoid this overhead.

@mihaibudiu
Author

If you have the apply(delta) API then the distinct will work fine (but will not handle the primary keys).

@ryzhyk
Contributor

ryzhyk commented Oct 11, 2021

The problem with your proposal about indicating which updates fail is that the updates actually don't commute. So skipping some will give a completely different semantics to the update set.

It is up to the user to roll back a transaction after a failed update.

@mihaibudiu
Author

If all failed transactions are supposed to be rolled back, why not do it automatically?

@ryzhyk
Contributor

ryzhyk commented Oct 11, 2021

If you have the apply(delta) API then the distinct will work fine (but will not handle the primary keys).

Not sure I understand. How will apply(delta) solve the insert->insert->delete problem? Maybe I don't understand what apply(delta) means.

@mihaibudiu
Author

And if they are not supposed to be rolled back, the state of the DB after a failed transaction should be clearly defined.

@ryzhyk
Contributor

ryzhyk commented Oct 11, 2021

If all failed transactions are supposed to be rolled back, why not do it automatically?

They are not. It's up to the client. And yes, the state needs to be clearly defined.

@mihaibudiu
Author

If you have the apply(delta) API then the distinct will work fine (but will not handle the primary keys).

Not sure I understand. How will apply(delta) solve the insert->insert->delete problem? Maybe I don't understand what apply(delta) means.

By essentially defining the semantics of an update in this way: take a delta, add it to the input table, and apply a distinct. It is not a traditional DB view, but it is clear.
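
A minimal model of that semantics, assuming a relation is a weighted multiset of records (all names and types here are illustrative):

```rust
use std::collections::{HashMap, HashSet};

/// Weighted input table, record -> weight (illustrative only).
type InputTable = HashMap<String, i64>;
/// A delta is a list of (record, weight) pairs: +1 for insert, -1 for delete.
type Delta = Vec<(String, i64)>;

/// apply(delta): add the delta to the input table, keeping weights.
fn apply_delta(table: &mut InputTable, delta: Delta) {
    for (record, weight) in delta {
        *table.entry(record).or_insert(0) += weight;
    }
    table.retain(|_, w| *w != 0);
}

/// The view after `distinct`: every record with positive weight appears once.
fn distinct(table: &InputTable) -> HashSet<String> {
    table
        .iter()
        .filter(|(_, w)| **w > 0)
        .map(|(r, _)| r.clone())
        .collect()
}
```

In this model an insert -> insert -> delete of the same record leaves the table at weight +1, so the record is still visible in the distinct view; that is the behavior discussed in the following comments.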

@ryzhyk
Contributor

ryzhyk commented Oct 11, 2021

If you have the apply(delta) API then the distinct will work fine (but will not handle the primary keys).

Not sure I understand. How will apply(delta) solve the insert->insert->delete problem? Maybe I don't understand what apply(delta) means.

By essentially defining the semantics of an update in this way: take a delta, add it to the input table, and apply a distinct. It is not a traditional DB view, but it is clear.

I see. This still doesn't solve the insert->insert->delete problem though if each operation happens in a separate transaction.

@Kixiron
Contributor

Kixiron commented Oct 11, 2021

We could draw inspiration from Materialize: the way it handles internal or user-produced errors is by producing parallel error tables (read: relations) for outputs, allowing it to incrementally process errors (and to incrementally fix them as well). The basic structure is that one relation is filled with Ok values and one with Errs; if the Err table is non-empty, then the Ok table has an indeterminate value. Ideally this could also allow rollback to happen if engineered correctly, and it should automatically address insert->insert->delete since it'd maintain weights like DD does normally.
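
A very small sketch of that structure, assuming an output is modelled as a pair of collections (names are illustrative; this is not Materialize's or DDlog's actual API):

```rust
/// Sketch of the "parallel error table" idea: each output relation is split
/// into an Ok collection and an Err collection that are maintained together.
struct Output<T, E> {
    ok: Vec<T>,
    err: Vec<E>,
}

impl<T, E> Output<T, E> {
    fn from_results(results: impl IntoIterator<Item = Result<T, E>>) -> Self {
        let mut out = Output { ok: Vec::new(), err: Vec::new() };
        for r in results {
            match r {
                Ok(v) => out.ok.push(v),
                Err(e) => out.err.push(e),
            }
        }
        out
    }

    /// Consumers should treat `ok` as indeterminate whenever `err` is non-empty.
    fn is_valid(&self) -> bool {
        self.err.is_empty()
    }
}
```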

@ryzhyk
Contributor

ryzhyk commented Oct 11, 2021

Yeah, status tables are a nice way to report errors incrementally, especially if we want to support a larger class of consistency constraints.

The problem with insert->insert->delete though is that we don't want to maintain weights in the normal way. Clients that rely on the upsert semantics expect the second insert to be a no-op, so that the last delete destroys the record. This fundamentally requires keeping a snapshot of the input tables, and DD's upsert operator should do this reasonably efficiently.
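
A minimal sketch of that upsert behavior with the input snapshot made explicit (illustrative only; DD's actual upsert operator does this more efficiently and incrementally):

```rust
use std::collections::HashSet;

/// Input relation with upsert/set semantics, backed by an explicit snapshot.
#[derive(Default)]
struct UpsertRelation {
    snapshot: HashSet<String>,
}

impl UpsertRelation {
    /// Returns the weight change to forward into the dataflow: +1 or 0.
    fn insert(&mut self, key: &str) -> i64 {
        if self.snapshot.insert(key.to_string()) { 1 } else { 0 }
    }

    /// Returns the weight change to forward into the dataflow: -1 or 0.
    fn delete(&mut self, key: &str) -> i64 {
        if self.snapshot.remove(key) { -1 } else { 0 }
    }
}
```

With this semantics insert("x"), insert("x"), delete("x") forwards +1, 0, -1, so the record ends up deleted, which is what upsert clients expect.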

@mihaibudiu
Author

mihaibudiu commented Oct 12, 2021

Reading this paper:

McSherry, Lattuada, Schwarzkopf, and Roscoe, "Shared Arrangements: Practical Inter-Query Sharing for Streaming Dataflows", http://www.vldb.org/pvldb/vol13/p1793-mcsherry.pdf

suggests an interesting solution: shared arrangements for all inputs. This could be a compilation option.
This solution could provide several benefits:

  • the shared arrangements maintain the input collections and can be used to detect whether deltas applied are legal. (They do not solve all transaction questions, like what happens to reads and their consistency).
  • the shared arrangements are in fact accumulators. Thus they can be implemented on a reliable medium (e.g., secondary storage, or a distributed reliable implementation using Paxos or something equivalent). This would immediately provide fault-tolerance support by supplying the rehydration mechanism.
  • having the full collection would also enable dynamic queries that could be installed at runtime
  • and, as suggested by the paper, this could allow sharing data between different queries

In this model a DDlog computation is really a two-stage process: input arrangements followed by the actual dataflow graph.
The input arrangements have a rich API on both sides: to the outside to perform transactions, and to the inside to supply deltas and to replay data. The dataflow graph inside has a simple API, where everything is a delta.
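
A rough sketch of this two-stage structure, with all names hypothetical (for simplicity each record is assumed to appear at most once per delta):

```rust
use std::collections::HashMap;

/// Inner API: the dataflow graph only ever sees weighted deltas.
trait Dataflow {
    fn ingest(&mut self, delta: Vec<(String, i64)>);
}

/// Outer API: the input arrangement owns the accumulated input collection,
/// validates transactions against it (and could persist it for rehydration),
/// and forwards legal deltas to the dataflow behind it.
struct InputArrangement<D: Dataflow> {
    collection: HashMap<String, i64>,
    dataflow: D,
}

impl<D: Dataflow> InputArrangement<D> {
    /// Apply a transaction's delta if it is legal with respect to the current
    /// collection; otherwise reject it without touching the dataflow.
    fn transact(&mut self, delta: Vec<(String, i64)>) -> Result<(), String> {
        for (record, weight) in &delta {
            let current = self.collection.get(record).copied().unwrap_or(0);
            if current + weight < 0 {
                return Err(format!("illegal delete of missing record {record}"));
            }
        }
        for (record, weight) in &delta {
            *self.collection.entry(record.clone()).or_insert(0) += weight;
        }
        self.dataflow.ingest(delta);
        Ok(())
    }
}
```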

@Kixiron
Contributor

Kixiron commented Oct 13, 2021

That's pretty much what the upsert stuff that Leon's talking about is

@mihaibudiu
Author

I thought about this more and I hope I have a design. It's not final, but I hope it clarifies some dimensions. I will write a document about it.
