This section introduces some of the basic concepts involved in creating
+ontologies for DSP projects, by means of a relatively simple example
+project. Before reading this document, it will be helpful to have some
+familiarity with the basic concepts explained in knora-base.
+
DSP-API comes with two example projects, called incunabula and
+images-demo. Here we will consider the incunabula example, which is
+a reduced version of a real research project on early printed books. It
+is designed to store an image of each page of each book, as well as RDF
+data about books, pages, their contents, and relationships between them.
+
The Incunabula Ontology
+
Here we will just focus on some of the main aspects of the ontology. An
+ontology file typically begins by defining prefixes for the IRIs of
+other ontologies that will be referred to. First there are some prefixes
+for ontologies that are very commonly used in RDF:
The rdf, rdfs, and owl ontologies contain basic properties that
+are used to define ontology entities. The xsd ontology contains
+definitions of literal data types such as string and integer. (For
+more information about these ontologies, see the references in
+knora-base.) The foaf ontology contains classes and properties for
+representing people. The dcterms ontology represents Dublin
+Core metadata.
The knora-base ontology contains DSP-API's core abstractions, and is
+described in knora-base. The salsah-gui ontology includes properties
+that DSP projects must use to enable SALSAH, DSP-API's generic virtual
+research environment.
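In Turtle, these prefix definitions look like this (the W3C, FOAF, and Dublin Core IRIs are standard; the salsah-gui IRI is assumed to follow the same pattern as knora-base):

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix knora-base: <http://www.knora.org/ontology/knora-base#> .
@prefix salsah-gui: <http://www.knora.org/ontology/salsah-gui#> .
```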
+
For convenience, we can use the empty prefix to refer to the
+incunabula ontology itself:
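For example (the exact ontology IRI, including the project shortcode 0803, is illustrative):

```turtle
@prefix : <http://www.knora.org/ontology/0803/incunabula#> .
```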
However, outside the ontology file, it would make more sense to define
+an incunabula prefix to refer to the incunabula ontology.
+
Properties
+
All the content produced by a DSP project must be stored in Knora
+resources (see incunabula-resource-classes). Resources have properties
+that point to different parts of their contents; for example, the
+incunabula project contains books, which have properties like title.
Every property that points to a DSP value must be a subproperty of
knora-base:hasValue, and every property that points to another Knora
resource must be a subproperty of knora-base:hasLinkTo.
+
Here is the definition of the incunabula:title property:
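Reconstructed from the triples discussed in this section (only the German label is shown; the original definition also includes labels in other languages), the definition looks roughly like this:

```turtle
:title rdf:type owl:ObjectProperty ;

    rdfs:subPropertyOf knora-base:hasValue , dcterms:title ;

    rdfs:label "Titel"@de ;    # labels in other languages omitted here

    knora-base:subjectClassConstraint :book ;

    knora-base:objectClassConstraint knora-base:TextValue ;

    salsah-gui:guiElement salsah-gui:SimpleText ;

    salsah-gui:guiAttribute "size=80" , "maxlength=255" .
```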
The definition of incunabula:title consists of a list of triples, all
+of which have :title as their subject. To avoid repeating :title for
+each triple, Turtle syntax allows us to use a semicolon (;) to
+separate triples that have the same subject. Moreover, some triples also
+have the same predicate; a comma (,) is used to avoid repeating the
+predicate. The definition of :title says:
+
+
rdf:type owl:ObjectProperty: It is an owl:ObjectProperty. There are
+ two kinds of OWL properties: object properties and datatype properties.
+ Object properties point to objects, which have IRIs and
+ can have their own properties. Datatype properties point to literal
+ values, such as strings and integers.
+
rdfs:subPropertyOf knora-base:hasValue, dcterms:title: It is a
+ subproperty of knora-base:hasValue and dcterms:title. Since the
+ objects of this property will be Knora values, it must be a
+ subproperty of knora-base:hasValue. To facilitate searches, we
+ have also chosen to make it a subproperty of dcterms:title. In the
+ DSP-API v2, if you do a search for resources that have a certain
+ dcterms:title, and there is a resource with a matching
+ incunabula:title, the search results could include that resource.
+
rdfs:label "Titel"@de, etc.: It has the specified labels in
+ various languages. These are needed, for example, by user
+ interfaces, to prompt the user to enter a value.
+
knora-base:subjectClassConstraint :book: The subject of the
+ property must be an incunabula:book.
+
knora-base:objectClassConstraint knora-base:TextValue: The object
+ of this property must be a knora-base:TextValue (which is a
+ subclass of knora-base:Value).
+
salsah-gui:guiElement salsah-gui:SimpleText: When SALSAH asks a
+ user to enter a value for this property, it should use a simple text
+ field.
+
salsah-gui:guiAttribute "size=80" , "maxlength=255": The SALSAH
+ text field for entering a value for this property should be 80
+ characters wide, and should accept at most 255 characters.
+
+
The incunabula ontology contains several other property definitions
+that are basically similar. Note that different subclasses of Value
+are used. For example, incunabula:pubdate, which represents the
+publication date of a book, points to a knora-base:DateValue. The
+DateValue class stores a date range, with a specified degree of
+precision and a preferred calendar system for display.
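A sketch of the pubdate definition, reconstructed from this description (the label and the GUI element are illustrative assumptions):

```turtle
:pubdate rdf:type owl:ObjectProperty ;

    rdfs:subPropertyOf knora-base:hasValue ;

    rdfs:label "Datum der Herausgabe"@de ;    # illustrative label

    knora-base:subjectClassConstraint :book ;

    knora-base:objectClassConstraint knora-base:DateValue ;

    salsah-gui:guiElement salsah-gui:Date .   # assumed GUI element
```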
+
A property can point to a Knora resource instead of to a Knora value.
+For example, in the incunabula ontology, there are resources
+representing pages and books, and each page is part of some book. This
+relationship is expressed using the property incunabula:partOf:
+
:partOf rdf:type owl:ObjectProperty ;
+
+ rdfs:subPropertyOf knora-base:isPartOf ;
+
+ rdfs:label "ist ein Teil von"@de ,
+ "est un part de"@fr ,
+ "e una parte di"@it ,
+ "is a part of"@en ;
+
+ rdfs:comment """Diese Property bezeichnet eine Verbindung zu einer anderen Resource, in dem ausgesagt wird, dass die vorliegende Resource ein integraler Teil der anderen Resource ist. Zum Beispiel ist eine Buchseite ein integraler Bestandteil genau eines Buches."""@de ;
+
+ knora-base:subjectClassConstraint :page ;
+
+ knora-base:objectClassConstraint :book ;
+
+ salsah-gui:guiElement salsah-gui:Searchbox .
+
+
The key things to notice here are:
+
+
rdfs:subPropertyOf knora-base:isPartOf: The knora-base ontology provides a generic isPartOf property to express
part-whole relationships. A project may use knora-base:isPartOf directly, but creating a subproperty such as
incunabula:partOf makes it possible to customise the property further, e.g. by giving it a more descriptive label.
+ It is important to note that knora-base:isPartOf is a subproperty of knora-base:hasLinkTo. Any property that
+ points to a knora-base:Resource must be a subproperty of knora-base:hasLinkTo. Such a
+ property is called a link property.
+
knora-base:objectClassConstraint :book: The object of this property must be a member of the class incunabula:book,
+ which, as we will see below, is a subclass of knora-base:Resource.
+
salsah-gui:guiElement salsah-gui:Searchbox: When SALSAH prompts a user to select the book that a page is part of, it
+ should provide a search box enabling the user to find the desired book.
+
+
Because incunabula:partOf is a link property, it must always be
accompanied by a link value property, which enables Knora to store
+metadata about each link that is created with the link property. This
+metadata includes the date and time when the link was created, its
+owner, the permissions it grants, and whether it has been deleted.
+Storing this metadata allows Knora to authorise users to see or modify
+the link, as well as to query a previous state of a repository in which
+a deleted link had not yet been deleted. (The ability to query previous
+states of a repository is planned for DSP-API version 2.)
+
The name of a link property and its link value property must be related
+by the following naming convention: to determine the name of the link
+value property, add the word Value to the name of the link property.
+Hence, the incunabula ontology defines the property partOfValue:
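A sketch of that definition, reconstructed from the description here (the subproperty relationship to knora-base:isPartOfValue is assumed, following the same pattern as partOf):

```turtle
:partOfValue rdf:type owl:ObjectProperty ;

    rdfs:subPropertyOf knora-base:isPartOfValue ;

    knora-base:subjectClassConstraint :page ;

    knora-base:objectClassConstraint knora-base:LinkValue .
```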
As a link value property, incunabula:partOfValue must point to a
+knora-base:LinkValue. The LinkValue class is an RDF reification of
+a triple (in this case, the triple that links a page to a book). For
+more details about this, see knora-base-linkvalue.
+
Note that the property incunabula:hasAuthor points to a
+knora-base:TextValue, because the incunabula project represents
+authors simply by their names. A more complex project could represent
+each author as a resource, in which case incunabula:hasAuthor would
+need to be a subproperty of knora-base:hasLinkTo.
+
Resource Classes
+
The two main resource classes in the incunabula ontology are book
+and page. Here is incunabula:book:
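An abridged sketch of the class definition, reconstructed from the discussion in this section (labels, comments, and most other cardinalities are omitted; the guiOrder values are illustrative):

```turtle
:book rdf:type owl:Class ;

    rdfs:subClassOf knora-base:Resource , [
        rdf:type owl:Restriction ;
        owl:onProperty :title ;
        owl:minCardinality "1"^^xsd:nonNegativeInteger ;
        salsah-gui:guiOrder "1"^^xsd:nonNegativeInteger
    ] , [
        rdf:type owl:Restriction ;
        owl:onProperty :pubdate ;
        owl:maxCardinality "1"^^xsd:nonNegativeInteger ;
        salsah-gui:guiOrder "2"^^xsd:nonNegativeInteger
    ] , [
        rdf:type owl:Restriction ;
        owl:onProperty :hasAuthor ;
        owl:minCardinality "0"^^xsd:nonNegativeInteger ;
        salsah-gui:guiOrder "3"^^xsd:nonNegativeInteger
    ] .
```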
Like every Knora resource class, incunabula:book is a subclass of
+knora-base:Resource. It is also a subclass of a number of other
+classes of type owl:Restriction, which are defined in square brackets,
+using Turtle's syntax for anonymous blank nodes. Each owl:Restriction
+specifies a cardinality for a property that is allowed in resources of
+type incunabula:book. A cardinality is indeed a kind of restriction:
+it means that a resource of this type may have, or must have, a certain
+number of instances of the specified property. For example,
+incunabula:book has cardinalities saying that a book must have at
least one title and at most one publication date. In DSP-API
version 1, the word 'occurrence' is used instead of 'cardinality'.
+
The OWL cardinalities supported by Knora are described in
+OWL Cardinalities.
+
Note that incunabula:book specifies a cardinality of
+owl:minCardinality 0 on the property incunabula:hasAuthor. At first
+glance, this might seem as if it serves no purpose, since it says that
+the property is optional and can have any number of instances. You may
+be wondering whether this cardinality could simply be omitted from the
+definition of incunabula:book. However, Knora requires every property
+of a resource to have some cardinality in the resource's class. This is
+because Knora uses the cardinalities to determine which properties are
+possible for instances of the class, and the DSP-API relies on this
information. If there were no cardinality for incunabula:hasAuthor,
Knora would not allow a book to have an author.
+
Each owl:Restriction specifying a cardinality can include the predicate
salsah-gui:guiOrder, which tells the SALSAH GUI the order in which the
properties should be displayed.
The incunabula:page class is a subclass of
+knora-base:StillImageRepresentation, which is a subclass of
+knora-base:Representation, which is a subclass of
+knora-base:Resource. The class knora-base:Representation is used for
resources that contain metadata about files stored by Knora. It has
+different subclasses that can hold different types of files, including
+still images, audio, and video files. A given Representation can store
+metadata about several different files, as long as they are of the same
+type and are semantically equivalent, e.g. are different versions of the
+same image with different colorspaces, so that coordinates in one file
+will work in the other files.
+
In Knora, a subclass inherits the cardinalities defined in its
+superclasses. Let's look at the class hierarchy of incunabula:page,
+starting with knora-base:Representation:
+
:Representation rdf:type owl:Class ;
+
+ rdfs:subClassOf :Resource , [
+ rdf:type owl:Restriction ;
+ owl:onProperty :hasFileValue ;
+ owl:minCardinality "1"^^xsd:nonNegativeInteger
+ ] ;
+
+ rdfs:comment "A resource that can store one or more FileValues"@en .
+
+
This says that a Representation must have at least one instance of the
+property hasFileValue, which is defined like this:
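A sketch of that definition, reconstructed from the description that follows (written with the empty prefix referring to knora-base, as in the other snippets here):

```turtle
:hasFileValue rdf:type owl:ObjectProperty ;

    rdfs:subPropertyOf :hasValue ;

    :subjectClassConstraint :Representation ;

    :objectClassConstraint :FileValue .
```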
The subject of hasFileValue must be a Representation, and its object
+must be a FileValue. There are different subclasses of FileValue for
+different kinds of files, but we'll skip the details here.
+
This is the definition of knora-base:StillImageRepresentation:
+
:StillImageRepresentation rdf:type owl:Class ;
+
+ rdfs:subClassOf :Representation , [
+ rdf:type owl:Restriction ;
+ owl:onProperty :hasStillImageFileValue ;
+ owl:minCardinality "1"^^xsd:nonNegativeInteger
+ ] ;
+
+ rdfs:comment "A resource that can contain two-dimensional still image files"@en .
+
+
It must have at least one instance of the property
+hasStillImageFileValue, which is defined as follows:
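A sketch of that definition, reconstructed from the surrounding description (the object class name :StillImageFileValue is assumed):

```turtle
:hasStillImageFileValue rdf:type owl:ObjectProperty ;

    rdfs:subPropertyOf :hasFileValue ;

    :subjectClassConstraint :StillImageRepresentation ;

    :objectClassConstraint :StillImageFileValue .
```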
Because hasStillImageFileValue is a subproperty of hasFileValue, the
+cardinality on hasStillImageFileValue, defined in the subclass
+StillImageRepresentation, overrides the cardinality on hasFileValue,
defined in the superclass Representation. In other words, the more
general cardinality in the superclass is replaced by a more specific
cardinality in the subclass. Since incunabula:page is a subclass of
+StillImageRepresentation, it inherits the cardinality on
+hasStillImageFileValue. As a result, a page must have at least one
+image file value attached to it.
+
Here's another example of cardinality inheritance. The class
+knora-base:Resource has a cardinality for knora-base:seqnum. The
+idea is that resources of any type could be arranged in some sort of
+sequence. As we saw above, incunabula:page is a subclass of
+knora-base:Resource. But incunabula:page has its own cardinality for
+incunabula:seqnum, which is a subproperty of knora-base:seqnum. Once
+again, the subclass's cardinality on the subproperty replaces the
+superclass's cardinality on the superproperty: a page is allowed to have
+an incunabula:seqnum, but it is not allowed to have a
+knora-base:seqnum.
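The incunabula property involved might be sketched as follows (reconstructed for illustration; the object class constraint is assumed to be knora-base:IntValue, matching knora-base:seqnum):

```turtle
:seqnum rdf:type owl:ObjectProperty ;

    rdfs:subPropertyOf knora-base:seqnum ;

    knora-base:subjectClassConstraint :page ;

    knora-base:objectClassConstraint knora-base:IntValue .
```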
Currently, only a limited number of file formats can be uploaded to DSP.
Some metadata is extracted from the files during ingest, but the file formats are not validated.
Only image files are currently migrated into another format;
both the migrated version of the file and the original are kept.
+
The following table shows the accepted file formats:
Standoff markup
+is text markup that is stored separately from the content it describes. DSP-API's
+Standoff/RDF markup stores content as a simple Unicode string, and represents markup
+separately as RDF data. This approach has some advantages over commonly used markup systems
+such as XML:
+
First, XML and other hierarchical markup systems assume that a document is a hierarchy, and
+have difficulty representing non-hierarchical structures
+or multiple overlapping hierarchies. Standoff markup can easily represent these structures.
+
Second, markup languages are typically designed to be used in text files. But there is no
+standard system for searching and linking together many different text files containing
+markup. It is possible to do this in a non-standard way by using an XML database
+such as eXist, but this still does not allow for queries that include
+text as well as non-textual data not stored in XML.
+
By storing markup as RDF, DSP-API can search for markup structures in the same way as it
+searches for any RDF data structure. This makes it possible to do searches that combine
+text-related criteria with other sorts of criteria. For example, if persons and events are
+represented as resources, and texts are represented in Standoff/RDF, a text can contain
+tags representing links to persons or events. You could then search for a text that mentions a
+person who lived in the same city as another person who is the author of a text that mentions an
+event that occurred during a certain time period.
+
In DSP-API's Standoff/RDF, a tag is an RDF entity that is linked to a
+text value. Each tag points to a substring
+of the text, and has semantic properties of its own. You can define your own tag classes
+in your ontology by making subclasses of knora-base:StandoffTag, and attach your own
+properties to them. You can then search for those properties using DSP-API's search language,
+Gravsearch.
+
The built-in knora-base and standoff ontologies
+provide some basic tags that can be reused or extended. These include tags that represent
+DSP-API data types. For example, knora-base:StandoffDateTag represents a date in exactly the
+same way as a date value, i.e. as a
+calendar-independent astronomical date. You can use this tag as-is, or extend it by making
+a subclass, to represent dates in texts. Gravsearch includes built-in functionality for
+searching for these data type tags. For example, you can search for text containing a date that
+falls within a certain date range.
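For instance, a project could extend knora-base:StandoffDateTag in its ontology like this (a hypothetical sketch; the class name and label are invented for illustration):

```turtle
:StandoffBirthDateTag rdf:type owl:Class ;

    rdfs:subClassOf knora-base:StandoffDateTag ;

    rdfs:label "birth date tag"@en ;

    rdfs:comment "Represents a birth date mentioned in a text."@en .
```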
+
DSP-API supports automatic conversion between XML and Standoff/RDF. To make this work,
+Standoff/RDF stores the order of tags and their hierarchical relationships. You must define an
+XML-to-Standoff Mapping for your standoff tag classes and properties.
+Then you can import an XML document into DSP-API, which will store it as Standoff/RDF. The text and markup
+can then be searched using Gravsearch. When you retrieve the document, DSP-API converts it back to the
+original XML.
+
To represent overlapping or non-hierarchical markup in exported and imported XML, DSP-API supports
+CLIX tags.
+
Because XML-to-Standoff conversion has proved complicated and performs poorly,
the use of standoff with custom mappings is discouraged.
Improved integration of text with XML markup, particularly TEI-XML, is planned.
The DaSCH Service Platform (DSP) is
+a content management system for the long-term preservation and reuse of
+humanities data. It is designed to accommodate data with a complex internal
+structure, including data that could be stored in relational databases.
+
DSP aims to solve key problems in the long-term preservation and reuse
+of humanities data:
+
First, traditional archives preserve data, but do not facilitate reuse. Typically,
+only metadata can be searched, not the data itself. You have to first identify
+an information package that might be of interest, then download it, and only
+then can you find out what's really in it. This is time-consuming, and
+makes it impractical to reuse data from many different sources.
+
DSP solves this problem by keeping the data alive. You can query all the data
+in a DSP repository, not just the metadata. You can import thousands of databases into
+DSP, and run queries that search through all of them at once.
+
Another problem is that researchers use a multitude of different file formats, many of
+which are proprietary and quickly become obsolete. It is not practical to maintain
+all the programs that were used to create and read old files, or even
+all the operating systems that these programs ran on. Therefore, DSP only accepts a
+certain number of file formats.
+
+
Non-binary data is stored as
+ RDF, in a dedicated
+ database called a triplestore. RDF is an open, vendor-independent standard
+ that can express any data structure.
+
Binary media files (images, audio, and video) are converted to a few specialised
+ archival file formats and stored by Sipi,
+ with metadata stored in the triplestore.
+
+
DSP makes this data available for reuse via its generic, standards-based
+application programming interface DSP-API. A virtual research environment
+(VRE) can use DSP-API to query, link, and add to data
+from different research projects in a unified way.
+
Humanities-Focused Data Storage
+
Each project creates its own data model (or ontology), describing the types of
+items it wishes to store, using basic data types defined in Knora's
+base ontology.
+This gives projects the freedom to describe their data in a way that makes
+sense to them, while allowing DSP to support searching and linking across projects.
+
DSP has built-in support for data structures that are commonly needed in
+humanities data, and that present unique challenges for any type of database storage.
+
Calendar-Independent Dates
+
In the humanities, a date could be based on any sort of calendar (e.g.
+Gregorian, Julian, Islamic, or Hebrew). The DSP stores dates using a calendar-independent,
+astronomical representation, and converts between calendars as needed. This makes
+it possible to search for a date in one calendar, and get search results in other calendars.
+
Flexible, Searchable Text Markup
+
Commonly used text markup systems, such as TEI/XML,
+have to represent a text as a hierarchy, and therefore have trouble supporting
+overlapping markup. DSP supports Standoff/RDF markup: the markup is stored
+as RDF data, separately from the text, allowing for overlapping markup. The DSP
+can import any XML document (including TEI/XML) for storage as standoff/RDF,
+and can regenerate the original XML document at any time.
+
Powerful Searches
+
DSP-API provides a search language, Gravsearch,
+that is designed to meet the needs of humanities researchers. Gravsearch supports DSP-API's
+humanities-focused data structures, including calendar-independent dates and standoff markup, as well
+as fast full-text searches. This allows searches to combine text-related criteria with any other
+criteria. For example, you could search for a text that contains a certain word
+and also mentions a person who lived in the same city as another person who is the
+author of a text that mentions an event that occurred during a certain time period.
+
Access Control
+
The RDF standards do not include any concept of permissions. DSP-API's permission
+system allows project administrators and users to determine who can see or
+modify each item of data. DSP-API filters search results according to each
+user's permissions.
+
Data History
+
RDF does not have a concept of data history. DSP-API maintains all previous
+versions of each item of data. Ordinary searches return only the latest version,
+but you can
+obtain
+and
+cite
+an item as it was at any point in the past.
+
Data Consistency
+
RDF triplestores do not implement a standardised way of ensuring the consistency
of data in a repository. DSP-API ensures that all data is consistent, conforms
to the project-specific data models, and meets DSP-API's minimum requirements
+for interoperability and reusability of data.
+
Linked Open Data
+
DSP-API supports publishing data online as Linked Open Data,
+using open standards to allow interoperability between different repositories
+on the web.
The DSP ontologies provide a generic framework for describing humanities
+research data, allowing data from different projects to be combined, augmented,
+and reused.
+
Resource Description Framework (RDF)
+
DSP-API uses a hierarchy of ontologies based on the Resource Description
+Framework
+(RDF), RDF
+Schema (RDFS), and
+the Web Ontology Language
+(OWL). Both RDFS and OWL
+are expressed in RDF. RDF expresses information as a set of statements
+(called triples). A triple consists of a subject, a predicate, and an
+object:
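A minimal hypothetical example, written in Turtle (the names and the literal are invented for illustration):

```turtle
# subject                            predicate                           object
<http://www.example.org/rdf#book1>   <http://www.example.org/rdf#title>  "Paradise Lost" .
```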
+
+
The object may be either a literal value (such as a name or number) or
+another subject. Thus it is possible to create complex graphs that
+connect many subjects, like this:
+
+
In RDF, each subject and predicate has a unique, URL-like identifier
+called an Internationalized Resource Identifier
+(IRI). Within a given project,
+IRIs typically differ only in their last component (the "local part"),
+which is often the fragment following a # character. Such IRIs share a
+long "prefix". In Turtle and similar
+formats for writing RDF, a short prefix label can be defined to
+represent the long prefix. Then an IRI can be written as a prefix label
+and a local part, separated by a colon (:). For example, if the
+"example" project's long prefix is http://www.example.org/rdf#, and it
+contains subjects with IRIs like http://www.example.org/rdf#book, we
can define the prefix label ex to represent that prefix, and
write prefixed names for IRIs:
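For example:

```turtle
@prefix ex: <http://www.example.org/rdf#> .

# The full IRI http://www.example.org/rdf#book can now be written as ex:book.
```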
+
+
Built-in Ontologies and User-Created Ontologies
+
To ensure the interoperability of data produced by different projects,
+each project must describe its data model by creating one or more ontologies that
+extend Knora's built-in ontologies. The main built-in ontology in Knora
+is knora-base.
+
Shared Ontologies
+
Knora does not normally allow a project to use classes or properties defined in
+an ontology that belongs to another project. Each project must be free to change
+its own ontologies, but this is not possible if they have been used in ontologies
+or data created by other projects.
+
However, an ontology can be defined as shared, meaning that it can be used by
+multiple projects, and that its creators will not change it in ways that could
+affect other ontologies or data that are based on it. Specifically, in a shared
+ontology, existing classes and properties cannot safely be changed, but new ones
+can be added. (It is not even safe to add an optional cardinality to an existing
+class, because this could cause subclasses to violate the rule that a class cannot
+have a cardinality on property P as well as a cardinality on a subproperty of P;
+see Restrictions on Classes.)
The Knora base ontology is the main built-in Knora ontology. Each project that uses DSP-API must describe its data model
+by creating ontologies that extend this ontology.
+
The Knora base ontology is identified by the IRI http://www.knora.org/ontology/knora-base. In the DSP-API
+documentation in general, it is identified by the prefix knora-base, but for brevity, in this document, we use kb or
+omit the prefix entirely.
+
The Knora Data Model
+
The Knora data model is based on the observation that, in the humanities, a value or literal is often itself structured
+and can be highly complex. Moreover, a value may have its own metadata, such as its creation date, information about
+permissions, and so on. Therefore, the Knora base ontology describes structured value types that can store this type of
+metadata. In the diagram below, a book (ex:book2) has a title
+(identified by the predicate ex:title) and a publication date
+(ex:pubdate), each of which has some metadata.
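The structure sketched in the diagram can be written in Turtle roughly as follows (the value IRI and the literal are invented for illustration; kb:valueHasString and kb:valueCreationDate are knora-base properties):

```turtle
# The book points not to a literal, but to a structured value object.
ex:book2 ex:title ex:book2_title_1 .

# The value object carries the literal plus its own metadata.
ex:book2_title_1 rdf:type kb:TextValue ;
    kb:valueHasString "An example title" ;
    kb:valueCreationDate "2020-01-01T00:00:00Z"^^xsd:dateTime .
```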
+
+
Projects
+
In DSP-API, each item of data belongs to some particular project. Each project using DSP-API must define a
+kb:knoraProject, which has these properties (cardinalities are indicated in parentheses after each property name):
+
+
+
projectShortname (1): A short name that can be used to identify the project in configuration files and the like.
+
+
+
projectLongname (0-1): The full name of the project.
+
+
+
projectShortcode (1): A hexadecimal code that uniquely identifies the project. These codes are assigned to projects
+ by the DaSCH.
+
+
+
projectDescription (1-n): A description of the project.
+
+
+
Ontologies and resources are associated with a project by means of the
+kb:attachedToProject property, as described in Ontologies
and Properties of Resource. Users are associated with a project by means of
+the kb:isInProject property, as described in
+Users and Groups.
+
Ontologies
+
Each user-created ontology must be defined as an owl:Ontology with the properties rdfs:label
+and kb:attachedToProject.
Since DSP-API v20, the kb:lastModificationDate property is
also required.
+
Resources
+
All the content produced by a project (e.g. digitised primary source materials or research data) must be stored in
+objects that belong to subclasses of kb:Resource, so that DSP-API can query and update that content. Each project using
+the Knora base ontology must define its own OWL classes, derived from kb:Resource, to represent the types of data it
+deals with. A subclass of kb:Resource may additionally be a subclass of any other class, e.g. an industry-standard
+class such as foaf:Person; this can facilitate searches across projects.
+
Resources have properties that point to different parts of the content they contain. For example, a resource
+representing a book could have a property called hasAuthor, pointing to the author of the book. There are two possible
+kinds of content in a Knora resource: Knora values (see Values) or links to other resources (see
+Links Between Resources). Properties that point to Knora values must be subproperties
+of kb:hasValue, and properties that point to other resources must be subproperties of kb:hasLinkTo. Either of these
+two types of properties may also be a subproperty of any other property, e.g. an industry-standard property such
+as foaf:name; this can facilitate searches across projects. Each property definition must specify the types that its
+subjects and objects must belong to (see
+Constraints on the Types of Property Subjects and Objects
+for details).
+
Each user-created resource class definition must use OWL cardinality restrictions to specify the properties that
+resources of that class can have (see OWL Cardinalities for details).
+
Resources are not versioned; only their values are versioned (see
+Values).
+
Every resource is required to have an rdfs:label. The object of this property is an xsd:string, rather than a Knora
+value; hence it is not versioned. A user who has modify permission on a resource (see
+Authorisation) can change its label.
+
A resource can be marked as deleted; DSP-API does this by adding the predicate kb:isDeleted true to the resource. An
+optional kb:deleteComment may be added to explain why the resource has been marked as deleted. Deleted resources are
+normally hidden. They cannot be undeleted, because even though resources are not versioned, it is necessary to be able
+to find out when a resource was deleted. If desired, a new resource can be created by copying data from a deleted
+resource.
+
Properties of Resource
+
+
+
creationDate (1): The time when the resource was created.
+
+
+
attachedToUser (1): The user who owns the resource.
+
+
+
attachedToProject (1): The project that the resource is part of.
+
+
+
lastModificationDate (0-1): A timestamp indicating when the resource (or one of its values) was last modified.
+
+
+
seqnum (0-1): The sequence number of the resource, if it is part of an ordered group of resources, such as the pages
+ in a book.
+
+
+
isDeleted (1): Indicates whether the resource has been deleted.
+
+
+
deleteDate (0-1): If the resource has been deleted, indicates when it was deleted.
+
+
+
deleteComment (0-1): If the resource has been deleted, indicates why it was deleted.
+
+
+
Resources can have properties that point to other resources; see
+Links Between Resources. A resource grants permissions to groups of users;
+see Authorisation.
+
Representations
+
It is not practical to store all data in RDF. In particular, RDF is not a good storage medium for binary data such as
+images. Therefore, DSP-API stores such data outside the triplestore, in ordinary files. A resource can have metadata about
+a file attached to it. The technical term for such a resource in the Knora ontology is a Representation. For each file, there is
+a kb:FileValue in the triplestore containing metadata about the file (see FileValue). DSP-API
uses Sipi to store files, and provides ways to create file values.
+
A resource that has a file value must belong to one of the subclasses of
+kb:Representation. Its subclasses include:
+
+
+
StillImageRepresentation: A representation referring to a still image file which can be stored in Sipi or an external IIIF server.
+
+
+
MovingImageRepresentation: A representation containing a video file.
+
+
+
AudioRepresentation: A representation containing an audio file.
+
+
+
DDDrepresentation: A representation containing a 3D image file.
+
+
+
TextRepresentation: A representation containing a formatted text file, such as an XML file.
+
+
+
DocumentRepresentation: A representation containing a document (such as a PDF file) that is not a text file.
+
+
+
ArchiveRepresentation: A representation containing an archive file (such as a zip archive).
+
+
+
These classes can be used directly in data, but it is often better to make subclasses of them, to include metadata about
+the files being stored.
+
The base class of all these classes is Representation, which is not intended to be used directly. It has this
+property, which its subclasses override:
+
+
hasFileValue (1): Points to a file value.
+
+
There are two ways for a project to design classes for representations. The simpler way is to create a resource class
+that represents a thing in the world (such as ex:Painting) and also belongs to a subclass of Representation. This is
+adequate if the class can have only one type of file attached to it. For example, if paintings are represented only by
+still images, ex:Painting could be a subclass of StillImageRepresentation.
+
The more flexible approach, which is supported by DSP-API v2, is for each ex:Painting to
+link (using kb:hasRepresentation or a subproperty) to other resources containing files that represent the painting.
+Each of these other resources can extend a different subclass of Representation. For example, a painting could have a
+StillImageRepresentation as well as a DDDrepresentation.
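
Such an arrangement might look like this in data (a simplified sketch; the ex: and data: names are hypothetical, and the link values required for kb:hasRepresentation links are omitted for brevity):

```turtle
@prefix kb:   <http://www.knora.org/ontology/knora-base#> .
@prefix ex:   <http://www.example.org/ontology#> .
@prefix data: <http://www.example.org/data/> .

# The painting itself has no file attached; it links to its representations.
data:painting1 a ex:Painting ;
    kb:hasRepresentation data:image1 , data:model1 .

# A still image of the painting; ex:PaintingImage would be a project-defined
# subclass of kb:StillImageRepresentation.
data:image1 a ex:PaintingImage ;
    kb:hasFileValue data:image1-file .

# A 3D scan of the same painting; ex:PaintingScan would be a subclass of
# kb:DDDrepresentation.
data:model1 a ex:PaintingScan ;
    kb:hasFileValue data:model1-file .
```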
+
Standard Resource Classes
+
In general, each project must define its own subclasses of kb:Resource. However, the Knora base ontology
+provides some standard subclasses of kb:Resource, which are intended to be used by any project:
+
+
+
Region: Represents a region of a Representation (see Representations).
+
+
+
Annotation: Represents an annotation of a resource.
+ The hasComment property points to the text of the annotation, represented as a kb:TextValue.
+
+
+
LinkObj: Represents a link that connects two or more resources.
+ A LinkObj has a hasLinkTo property pointing to each resource that it connects, as well as a hasLinkToValue
+ property pointing to a reification of each of these direct links (
+ see Links Between Resources).
+ A LinkObj is more complex (and hence less convenient and readable) than a simple direct link, but it has the
+ advantage that it can be annotated using an Annotation. For improved readability, a project can make its own
+ subclasses of LinkObj with specific meanings.
+
+
+
Values
+
The Knora base ontology defines a set of OWL classes that are derived from kb:Value and represent different types of
+structured values found in humanities data. This set of classes may not be extended by user-created ontologies.
+
A value is always part of one particular resource, which points to it using some property derived from hasValue. For
+example, a user-created ontology could specify a Book class with a property hasSummary (derived from hasValue),
+and that property could have a knora-base:objectClassConstraint of TextValue. This would mean that the summary of
+each book is represented as a TextValue.
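
A sketch of such a definition in Turtle (the ex: names are illustrative, and the cardinality restrictions that a real resource class would declare are omitted):

```turtle
@prefix kb:   <http://www.knora.org/ontology/knora-base#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://www.example.org/ontology#> .

ex:Book a owl:Class ;
    rdfs:subClassOf kb:Resource .

# hasSummary is derived from kb:hasValue, and each of its objects
# must be a kb:TextValue.
ex:hasSummary a owl:ObjectProperty ;
    rdfs:subPropertyOf kb:hasValue ;
    kb:objectClassConstraint kb:TextValue .
```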
+
Knora values are versioned. Existing values are not modified. Instead, a new version of an existing value is created.
+The new version is linked to the old version via the previousValue property.
+
Since each value version has a different IRI, no single IRI always refers to the latest version of the value.
+Therefore, the latest version of each value carries a separate UUID, as the
+object of the property valueHasUUID. When a new version of the value is created, this UUID is moved to the new
+version. This makes it possible to cite the latest version of a value by searching for the UUID.
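
For example, after one update a book's summary might be stored as follows (a sketch; the IRIs and the ex:hasSummary property are hypothetical, and other required properties are omitted):

```turtle
@prefix kb:   <http://www.knora.org/ontology/knora-base#> .
@prefix ex:   <http://www.example.org/ontology#> .
@prefix data: <http://www.example.org/data/> .

# The resource points only to the current version of the value.
data:book1 ex:hasSummary data:book1-summary-v2 .

# The current version carries the citable UUID and links to its predecessor.
data:book1-summary-v2 a kb:TextValue ;
    kb:valueHasString "A revised summary." ;
    kb:valueHasUUID "0cd7a5a2-7a0b-4f5e-9a3a-2f6f1c5d8e90" ;
    kb:previousValue data:book1-summary-v1 .

# The previous version no longer has the UUID.
data:book1-summary-v1 a kb:TextValue ;
    kb:valueHasString "An earlier summary." .
```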
+
"Deleting" a value means marking it with kb:isDeleted. An optional kb:deleteComment may be added to explain why the
+value has been marked as deleted. Deleted values are normally hidden.
+
Most types of values are marked as deleted without creating a new version of the value. However, link values must be
+treated as a special case. Before a LinkValue can be marked as deleted, its reference count must be decremented to 0.
+Therefore, a new version of the LinkValue is made, with a reference count of 0, and it is this new version that is
+marked as deleted.
+
To simplify the enforcement of ontology constraints, and for consistency with resource updates, no new versions of a
+deleted value can be made; it is not possible to undelete. Instead, if desired, a new value can be created by copying
+data from a deleted value.
+
Properties of Value
+
+
+
valueCreationDate (1): The date and time when the value was created.
+
+
+
attachedToUser (1): The user who owns the value.
+
+
+
valueHasString (1): A human-readable string representation of the value's contents, which is available to DSP-API's
+ full-text search index.
+
+
+
valueHasOrder (0-1): A resource may have several properties of the same type with different values (which will be of
+ the same class), and it may be necessary to indicate an order in which these values occur. For example, a book may
+ have several authors which should appear in a defined order. Hence, valueHasOrder, when present, points to an
+ integer literal indicating the order of a given value relative to the other values of the same property. These
+ integers will not necessarily start at any particular number, and will not necessarily be consecutive.
+
+
+
previousValue (0-1): The previous version of the value.
+
+
+
valueHasUUID (0-1): The UUID that refers to all versions of the value. Only the latest version of the value has this
+ property.
+
+
+
isDeleted (1): Indicates whether the value has been deleted.
+
+
+
deleteDate (0-1): If the value has been deleted, indicates when it was deleted.
+
+
+
deleteComment (0-1): If the value has been deleted, indicates why it was deleted.
+
+
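As a sketch, a book with two ordered author values might look like this (the IRIs and the ex:hasAuthor property are hypothetical; most required properties are omitted):

```turtle
@prefix kb:   <http://www.knora.org/ontology/knora-base#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://www.example.org/ontology#> .
@prefix data: <http://www.example.org/data/> .

data:book1 ex:hasAuthor data:book1-author1 , data:book1-author2 .

# The valueHasOrder integers fix the display order of the two authors.
data:book1-author1 a kb:TextValue ;
    kb:valueHasString "First Author" ;
    kb:valueHasOrder 0 ;
    kb:valueCreationDate "2017-01-01T00:00:00Z"^^xsd:dateTime ;
    kb:isDeleted false .

data:book1-author2 a kb:TextValue ;
    kb:valueHasString "Second Author" ;
    kb:valueHasOrder 1 ;
    kb:valueCreationDate "2017-01-01T00:00:00Z"^^xsd:dateTime ;
    kb:isDeleted false .
```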
+
Each Knora value can grant permissions (see Authorisation).
+
Subclasses of Value
+
TextValue
+
Represents text, possibly including markup. The text is the object of the valueHasString property. A line break is
+represented as a Unicode line feed character (U+000A). The non-printing Unicode character
+INFORMATION SEPARATOR TWO (U+001E) can be used to separate words that are separated only by standoff markup (see
+below), so they are recognised as separate in a full-text search index.
+
valueHasMapping (0-1): Points to the mapping used to create the standoff markup and to convert it back to the
+ original XML. See Mapping to Create Standoff From XML.
+
+
+
A text value can have a specified language:
+
+
valueHasLanguage (0-1): An ISO 639-1 code, as a string, specifying the language of the text.
+
+
DateValue
+
Humanities data includes many different types of dates. A date has a specified calendar, and is always
+represented as a period with start and end points (which may be equal), each of which has a precision (DAY, MONTH,
+or YEAR). For the GREGORIAN and JULIAN calendars, an optional ERA indicator (BCE, CE, or BC, AD) can be
+added to the date; if no era is provided, the default era AD is assumed. Internally, the start and end points
+are stored as two Julian Day Numbers. This calendar-independent representation makes it possible to compare and search
+for dates regardless of the calendar in which they were entered. Properties:
+
+
+
valueHasCalendar (1): The name of the calendar in which the date should be displayed. Currently GREGORIAN,
+ JULIAN, and ISLAMIC civil calendars are supported.
+
+
+
valueHasStartJDN (1): The Julian Day Number of the start of the period (an xsd:integer).
+
+
+
valueHasStartPrecision (1): The precision of the start of the period.
+
+
+
valueHasEndJDN (1): The Julian Day Number of the end of the period (an xsd:integer).
+
+
+
valueHasEndPrecision (1): The precision of the end of the period.
+
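For example, the single day 1 January 2000 in the Gregorian calendar, whose Julian Day Number is 2451545, could be sketched as follows (the IRI is hypothetical):

```turtle
@prefix kb:   <http://www.knora.org/ontology/knora-base#> .
@prefix data: <http://www.example.org/data/> .

# A period that starts and ends on the same day, with DAY precision at both ends.
data:date1 a kb:DateValue ;
    kb:valueHasCalendar "GREGORIAN" ;
    kb:valueHasStartJDN 2451545 ;
    kb:valueHasStartPrecision "DAY" ;
    kb:valueHasEndJDN 2451545 ;
    kb:valueHasEndPrecision "DAY" .
```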
+
+
TimeValue
+
A Knora time value represents a precise moment in time in the Gregorian calendar. Since nanosecond precision can be
+included, it is suitable for use as a timestamp. Properties:
+
+
valueHasTimeStamp (1): An xsd:dateTimeStamp, stored as an xsd:dateTime (because SPARQL does not support
+ xsd:dateTimeStamp).
+
+
IntValue
+
Represents an integer. Property:
+
+
valueHasInteger (1): An xsd:integer.
+
+
ColorValue
+
+
valueHasColor (1): A string representing a color. The string encodes a color as hexadecimal RGB values, e.g.
+ #FF0000.
+
+
DecimalValue
+
Represents an arbitrary-precision decimal number. Property:
+
+
valueHasDecimal (1): An xsd:decimal.
+
+
UriValue
+
Represents a non-Knora URI. Property:
+
+
valueHasUri (1): An xsd:anyURI.
+
+
BooleanValue
+
Represents a boolean value. Property:
+
+
valueHasBoolean (1): An xsd:boolean.
+
+
GeomValue
+
Represents a geometrical object as a JSON string, using normalized coordinates. Property:
+
+
valueHasGeometry (1): A JSON string.
+
+
GeonameValue
+
Represents a geolocation, using the identifiers found at GeoNames. Property:
+
+
valueHasGeonameCode (1): The identifier of a geographical feature from GeoNames, represented
+ as an xsd:string.
+
+
IntervalValue
+
Represents a time interval, with precise start and end times on a timeline, e.g. relative to the beginning of an audio
+or video file. Properties:
+
+
+
valueHasIntervalStart (1): An xsd:decimal representing the start of the interval in seconds.
+
+
+
valueHasIntervalEnd (1): An xsd:decimal representing the end of the interval in seconds.
+
+
+
ListValue
+
Projects often need to define lists or hierarchies of categories that can be assigned to many different resources. Then,
+for example, a user interface can provide a drop-down menu to allow the user to assign a category to a resource.
+The ListValue class provides a way to represent these sorts of data structures. It can represent either a flat list or
+a tree.
+
A ListValue has this property:
+
+
valueHasListNode (1): Points to a ListNode.
+
+
Each ListNode can have the following properties:
+
+
+
isRootNode (0-1): Set to true if this is the root node.
+
+
+
hasSubListNode (0-n): Points to the node's child nodes, if any.
+
+
+
hasRootNode (0-1): Points to the root node of the list (absent if isRootNode is true).
+
+
+
listNodePosition (0-1): An integer indicating the node's position in the list of its siblings (absent
+ if isRootNode is true).
+
+
+
listNodeName (0-1): The node's human-readable name (absent if isRootNode is true).
+
+
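A small two-level list, and a value pointing into it, might be sketched as follows (the IRIs are hypothetical):

```turtle
@prefix kb:   <http://www.knora.org/ontology/knora-base#> .
@prefix data: <http://www.example.org/data/> .

# The root node of the list.
data:genres a kb:ListNode ;
    kb:isRootNode true ;
    kb:hasSubListNode data:genre-fiction , data:genre-poetry .

# The child nodes, each pointing back to the root.
data:genre-fiction a kb:ListNode ;
    kb:hasRootNode data:genres ;
    kb:listNodePosition 0 ;
    kb:listNodeName "fiction" .

data:genre-poetry a kb:ListNode ;
    kb:hasRootNode data:genres ;
    kb:listNodePosition 1 ;
    kb:listNodeName "poetry" .

# A ListValue assigns a category to a resource by pointing to one node.
data:book1-genre a kb:ListValue ;
    kb:valueHasListNode data:genre-poetry .
```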
+
FileValue
+
DSP-API can store certain kinds of data outside the triplestore, in files (see Representations). Each
+digital object that is stored outside the triplestore has associated metadata, which is stored in the triplestore in
+a kb:FileValue. The base class FileValue, which is not intended to be used directly, has these properties:
+
+
+
internalFilename (1): The name of the file as stored by Knora.
+
+
+
internalMimeType (1): The MIME type of the file as stored by Knora.
+
+
+
originalFilename (0-1): The original name of the file when it was uploaded to the DSP-API server.
+
+
+
originalMimeType (0-1): The original MIME type of the file when it was uploaded to the Knora API server.
+
+
+
isPreview (0-1): A boolean indicating whether the file is a preview, i.e. a small image representing the contents of
+ the file. A preview is always a StillImageAbstractFileValue, regardless of the type of the enclosing Representation.
+
+
+
The subclasses of FileValue, which are intended to be used directly in data, include:
+
+
+
StillImageAbstractFileValue: Contains metadata about a still image file, which can be either StillImageFileValue (an image stored in Sipi) or StillImageExternalFileValue (a reference to an image stored in an external IIIF service).
+
+
+
MovingImageFileValue: Contains metadata about a video file.
+
+
+
AudioFileValue: Contains metadata about an audio file.
+
+
+
DDDFileValue: Contains metadata about a 3D image file.
+
+
+
TextFileValue: Contains metadata about a text file.
+
+
+
DocumentFileValue: Contains metadata about a document (such as PDF) that is not a text file.
+
+
+
ArchiveFileValue: Contains metadata about an archive (such as a zip archive).
+
+
+
Each of these classes contains properties that are specific to the type of file it describes. For example, still image
+files have dimensions, video files have frame rates, and so on.
+
FileValue objects are versioned like other values, and the actual files stored by DSP-API are also versioned. Version 1
+of the DSP-API does not provide a way to retrieve a previous version of a file, but this feature will be added in a
+subsequent version of the API.
+
LinkValue
+
A LinkValue is an RDF "reification" containing metadata about a link between two resources. It is therefore a subclass
+of rdf:Statement as well as of Value. It has these properties:
+
rdf:subject (1)
+
: The resource that is the source of the link.
+
rdf:predicate (1)
+
: The link property.
+
rdf:object (1)
+
: The resource that is the target of the link.
+
valueHasRefCount (1)
+
: The reference count of the link. This is meaningful when the
+LinkValue describes resource references in Standoff text markup
+(see StandoffLinkTag). Otherwise, the reference count will always be 1 (if the link exists) or 0 (if
+it has been deleted).
+
ExternalResValue
+
Represents a resource that is not stored in the RDF triplestore managed by DSP-API, but instead resides in an external
+repository managed by some other software. The ExternalResValue contains the information that DSP-API needs in order to
+access the resource, assuming that a suitable gateway plugin is installed.
+
extResAccessInfo (1)
+
: The location of the repository containing the external resource
+(e.g. its URL).
+
extResId (1)
+
: The repository-specific ID of the external resource.
+
extResProvider (1)
+
: The name of the external provider of the resource.
+
Links Between Resources
+
A link between two resources is expressed, first of all, as a triple, in which the subject is the resource that is the
+source of the link, the predicate is a "link property" (a subproperty of kb:hasLinkTo), and the object is the resource
+that is the target of the link.
+
It is also useful to store metadata about links. For example, DSP-API needs to know who owns the link, who has permission
+to modify it, when it was created, and so on. Such metadata cannot simply describe the link property, because then it
+would refer to that property in general, not to any particular instance in which that property is used to connect two
+particular resources. To attach metadata to a specific link in RDF, it is necessary to create an RDF "reification". A
+reification makes statements about a particular triple (subject, predicate, object), in this case the triple that
+expresses the link between the resources. DSP-API uses reifications of type kb:LinkValue (described in
+LinkValue) to store metadata about links.
+
For example, suppose a project describes paintings that belong to collections. The project can define an ontology as
+follows (expressed here in Turtle format, and simplified for the purposes of
+illustration):
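
For instance (a reconstruction for illustration; the paintings prefix IRI and class names are assumed, and cardinality restrictions are omitted):

```turtle
@prefix kb:        <http://www.knora.org/ontology/knora-base#> .
@prefix owl:       <http://www.w3.org/2002/07/owl#> .
@prefix rdfs:      <http://www.w3.org/2000/01/rdf-schema#> .
@prefix paintings: <http://www.example.org/ontology/paintings#> .

# Two project-defined resource classes.
paintings:Painting a owl:Class ;
    rdfs:subClassOf kb:Resource .

paintings:Collection a owl:Class ;
    rdfs:subClassOf kb:Resource .
```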
To link the paintings to the collection, we must add a "link property"
+to the ontology. In this case, the link property will point from a painting to the collection it belongs to. Every link
+property must be a subproperty of kb:hasLinkTo.
We must then add a "link value property", which will point from a painting to a kb:LinkValue (described in
+LinkValue), which will contain metadata about the link between the painting and the collection. In
+particular, the link value specifies the creator of the link, the date when it was created, and the permissions that
+determine who can view or modify it. The name of the link value property is constructed using a simple naming
+convention: the word Value is appended to the name of the link property. In this case, since our link property is
+called
+:isInCollection, the link value property must be called
+:isInCollectionValue. Every link value property must be a subproperty of kb:hasLinkToValue.
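
A sketch of the link value property, followed by sample data that uses both properties (IRIs assumed for illustration):

```turtle
@prefix kb:        <http://www.knora.org/ontology/knora-base#> .
@prefix rdf:       <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix owl:       <http://www.w3.org/2002/07/owl#> .
@prefix rdfs:      <http://www.w3.org/2000/01/rdf-schema#> .
@prefix paintings: <http://www.example.org/ontology/paintings#> .
@prefix data:      <http://www.example.org/data/> .

# The link value property points from a painting to the reification.
paintings:isInCollectionValue a owl:ObjectProperty ;
    rdfs:subPropertyOf kb:hasLinkToValue ;
    kb:subjectClassConstraint paintings:Painting ;
    kb:objectClassConstraint kb:LinkValue .

# Sample data: the direct link and the reification describing it.
data:painting1 a paintings:Painting ;
    paintings:isInCollection data:collection1 ;
    paintings:isInCollectionValue data:linkvalue1 .

data:collection1 a paintings:Collection .

data:linkvalue1 a kb:LinkValue ;
    rdf:subject data:painting1 ;
    rdf:predicate paintings:isInCollection ;
    rdf:object data:collection1 ;
    kb:valueHasRefCount 1 .
```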
This creates a link (paintings:isInCollection) between the painting and the collection, along with a reification
+containing metadata about the link: the painting points both directly to the collection and, via the link value
+property, to a kb:LinkValue describing that link.
+
+
DSP-API allows a user to see a link if the requesting user has permission to see the source and target resources as well
+as the kb:LinkValue.
+
Part-Whole-Relations between Resources
+
isPartOf
+
A special case of linked resources are part-of related resources, i.e. a resource consisting of several other
+resources. To create a part-of relation between two resources, the resource that is part of another resource
+needs to have a property that is either kb:isPartOf or a subproperty thereof.
+kb:isPartOf itself is a subproperty of kb:hasLinkTo. As described above for link properties, a corresponding
+part-of value property is created automatically. This value property has the same name as the part-of property, with
+Value appended. For example, if an ontology data defines a property data:partOf, the corresponding value
+property is named data:partOfValue. This newly created property data:partOfValue is defined as a subproperty
+of kb:isPartOfValue.
+
Part-of relations are recommended for resources of type kb:StillImageRepresentation. In that case, the resource that
+is part of another resource needs to have a property kb:seqnum or a subproperty thereof, with an integer as value. A
+client can then use this information to leaf through the parts of the compound resource (e.g. to leaf through the pages
+of a book, as in this example).
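
A sketch of a page that is part of a book (the data: ontology and inst: instance names are hypothetical; the data:partOfValue reification is omitted for brevity):

```turtle
@prefix kb:   <http://www.knora.org/ontology/knora-base#> .
@prefix data: <http://www.example.org/ontology/data#> .
@prefix inst: <http://www.example.org/instances/> .

# The third page of a book, represented as a still image.
inst:page3 a kb:StillImageRepresentation ;
    data:partOf inst:book1 ;
    kb:seqnum 3 .
```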
+
Segment
+
DSP-API supports the creation of segment resources.
+A segment is a part of a resource which has a temporal extent;
+the segment is defined by a start and end time relative to the resource.
+Segments are modelled as resources of type kb:Segment,
+having the properties kb:isSegmentOf, a LinkValue pointing to the resource the segment is part of,
+and kb:hasSegmentBounds, an IntervalValue representing the temporal extent of the segment.
+However, kb:Segment is "abstract" and cannot be used directly in data.
+
Segments have a number of optional, generic properties to add additional information:
+kb:hasTitle (0-1), kb:hasDescription (0-n), kb:hasKeyword (0-n),
+kb:relatesTo/kb:relatesToValue (0-n), and kb:hasComment (0-1).
+
There are two concrete subclasses of kb:Segment: kb:AudioSegment for audio resources and kb:VideoSegment for video resources.
+
It is possible to create subclasses of kb:AudioSegment and kb:VideoSegment to add additional properties,
+but this is discouraged and may not be supported in future versions of DSP-API.
+Instead, instances of kb:Annotation pointing to the segment should be used to add additional information.
+
AudioSegment
+
Audio segments are defined by the following properties:
+
kb:isSegmentOf (1): A LinkValue pointing to the resource the segment is part of.
+
kb:hasSegmentBounds (1): An IntervalValue representing the temporal extent of the segment.
+
kb:hasTitle (0-1): A TextValue for adding a title or name to the segment.
+
kb:hasDescription (0-n): A TextValue for providing one or more descriptions of the segment.
+
kb:hasKeyword (0-n): A TextValue for adding one or more keywords to the segment.
+
kb:relatesTo/kb:relatesToValue (0-n): A LinkValue for relating the segment to another resource.
+
kb:hasComment (0-1): A TextValue for a comment on the segment.
+
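An audio segment might be sketched as follows (the IRIs are hypothetical; the kb:isSegmentOfValue reification and other required metadata are omitted):

```turtle
@prefix kb:   <http://www.knora.org/ontology/knora-base#> .
@prefix data: <http://www.example.org/data/> .

# A segment covering seconds 10.0 to 30.5 of an audio recording.
data:segment1 a kb:AudioSegment ;
    kb:isSegmentOf data:recording1 ;
    kb:hasSegmentBounds data:segment1-bounds .

data:segment1-bounds a kb:IntervalValue ;
    kb:valueHasIntervalStart 10.0 ;
    kb:valueHasIntervalEnd 30.5 .
```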
+
Text with Standoff Markup
+
DSP-API is designed to be able to store text with markup, which can indicate formatting and structure, as well as the
+complex observations involved in transcribing handwritten manuscripts. One popular way of representing text in the
+humanities is to encode it in XML using the Text Encoding
+Initiative (TEI)
+guidelines. In DSP-API, a TEI/XML document can be stored as a file with attached metadata, but this is not recommended,
+because it does not allow searches to be performed across multiple documents.
+
The recommended way to store text with markup in DSP-API is to use the built-in support for "standoff" markup, which
+is stored separately from the text. This has some advantages over embedded markup such as XML. While XML requires markup
+to have a hierarchical structure, and does not allow overlapping tags, standoff nodes do not have these limitations
+(see
+Using Standoff Properties for Marking-up Historical Documents in the Humanities).
+A standoff tag can be attached to any substring in the text by giving its start and end positions. Unlike in corpus
+linguistics, we do not use any tokenisation resulting in a form of predefined segmentation, which would limit the user's
+ability to freely annotate any ranges in the text.
For example, a text in which an italic range (characters 5-29) overlaps a bold range (characters 14-36)
+would require just two standoff tags: (italic, start=5, end=29) and (bold, start=14, end=36).
+
Moreover, standoff makes it possible to mark up the same text in different, possibly incompatible ways, allowing for
+different interpretations without making redundant copies of the text. In the Knora base ontology, any text value can
+have standoff tags.
+
By representing standoff as RDF triples, DSP-API makes markup searchable across multiple text documents in a repository.
+For example, if a repository contains documents in which references to persons are indicated in standoff, it is
+straightforward to find all the documents mentioning a particular person. DSP-API's standoff support is intended to make
+it possible to convert documents with embedded, hierarchical markup, such as TEI/XML, into RDF standoff and back again,
+with no data loss, thus bringing the benefits of RDF to existing TEI-encoded documents.
+
In the Knora base ontology, a TextValue can have one or more standoff tags. Each standoff tag indicates the start and
+end positions of a substring in the text that has a particular attribute. The OWL class
+kb:StandoffTag, which is the base class of all standoff node classes, has these properties:
+
+
standoffTagHasStart (1): The index of the first character in the text that has the attribute.
+
standoffTagHasEnd (1): The index of the last character in the text that has the attribute, plus 1.
+
standoffTagHasUUID (1): A UUID identifying this instance and those corresponding to it in later versions of
+ the TextValue it belongs to.
+ The UUID is a means to maintain a reference to a particular range of a text also when new versions are made and
+ standoff
+ tag IRIs change.
+
standoffTagHasOriginalXMLID (0-1): The original ID of the XML element that the standoff tag represents, if any.
+
standoffTagHasStartIndex (1): The start index of the standoff tag. Start indexes are numbered from 0 within the
+ context of a particular text.
+ When several standoff tags share the same start position, they can be nested correctly with this information when
+ transforming them to XML.
+
standoffTagHasEndIndex (1): The end index of the standoff tag. Start indexes are numbered from 0 within the context
+ of a particular text.
+ When several standoff tags share the same end position, they can be nested correctly with this information when
+ transforming
+ them to XML.
+
standoffTagHasStartParent (0-1): Points to the parent standoff tag. This corresponds to the original nesting of tags
+ in XML. If a standoff tag has no parent, it represents the XML root element.
+ If the original XML element is a CLIX tag, it represents the start of a virtual (non-syntactical) hierarchy.
+
standoffTagHasEndParent (0-1): Points to the parent standoff tag if the original XML element is a CLIX tag and
+ represents the end of a virtual (non-syntactical) hierarchy.
+
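As a sketch, an italic tag covering characters 5-29 of a text might be stored like this (the IRIs are hypothetical; StandoffItalicTag is one of the classes provided in standoff-onto.ttl, and kb:valueHasStandoff is assumed as the property linking a text value to its tags):

```turtle
@prefix kb:       <http://www.knora.org/ontology/knora-base#> .
@prefix standoff: <http://www.knora.org/ontology/standoff#> .
@prefix data:     <http://www.example.org/data/> .

data:textvalue1 a kb:TextValue ;
    kb:valueHasString "Example text with italic markup." ;
    kb:valueHasStandoff data:tag1 .

# The tag records its character range, its position among the tags of this
# text, and a UUID that survives new versions of the text value.
data:tag1 a standoff:StandoffItalicTag ;
    kb:standoffTagHasStart 5 ;
    kb:standoffTagHasEnd 29 ;
    kb:standoffTagHasStartIndex 0 ;
    kb:standoffTagHasUUID "2b252b3a-1c0f-4b4e-9d0a-6e2f3c4d5e6f" .
```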
+
The StandoffTag class is not used directly in RDF data; instead, its subclasses are used. A few subclasses are
+currently provided in standoff-onto.ttl, and more will be added to support TEI semantics.
+Projects are able to define their own custom standoff tag classes (direct subclasses of StandoffTag
+or one of the standoff data type classes or subclasses of one of the standoff classes defined in standoff-onto.ttl).
+
Subclasses of StandoffTag
+
Standoff Data Type Tags
+
A standoff data type tag associates data in some Knora value type with a substring in a text. Standoff data type tags
+are subclasses of ValueBase classes.
+
+
StandoffLinkTag Indicates that a substring refers to another kb:Resource. See StandoffLinkTag.
+
StandoffInternalReferenceTag Indicates that a substring refers to another standoff tag in the same text value.
+ See Internal Links in a TextValue.
+
StandoffUriTag Indicates that a substring is associated with a URI, which is stored in the same form that is used
+ for kb:UriValue. See UriValue.
+
StandoffDateTag Indicates that a substring represents a date, which is stored in the same form that is used
+ for kb:DateValue. See DateValue.
+
StandoffColorTag Indicates that a substring represents a color, which is stored in the same form that is used
+ for kb:ColorValue. See ColorValue.
+
StandoffIntegerTag Indicates that a substring represents an integer, which is stored in the same form that is used
+ for kb:IntValue. See IntValue.
+
StandoffDecimalTag Indicates that a substring represents a number with fractions, which is stored in the same form
+ that is used for kb:DecimalValue. See DecimalValue.
+
StandoffIntervalTag Indicates that a substring represents an interval, which is stored in the same form that is used
+ for kb:IntervalValue. See IntervalValue.
+
StandoffBooleanTag Indicates that a substring represents a Boolean, which is stored in the same form that is used
+ for kb:BooleanValue. See BooleanValue.
+
StandoffTimeTag Indicates that a substring represents a timestamp, which is stored in the same form that is used
+ for kb:TimeValue. See TimeValue.
+
+
StandoffLinkTag
+
A StandoffLinkTag indicates that a substring is associated with a Knora resource. For example, if a repository
+contains resources representing persons, a text could be marked up so that each time a person's name is mentioned,
+a StandoffLinkTag connects the name to the Knora resource describing that person. It has the following property:
+
standoffTagHasLink (1): The IRI of the resource that is referred to.
+
One of the design goals of the Knora base ontology is to make it easy and efficient to find out which resources contain
+references to a given resource. Direct links are easier and more efficient to query than indirect links. Therefore, when
+a text value contains a resource reference in its standoff nodes, DSP-API automatically creates a direct link between the
+containing resource and the target resource, along with an RDF reification (a kb:LinkValue) describing the link, as
+discussed in Links Between Resources. In this case, the link property is
+always kb:hasStandoffLinkTo, and the link value property (which points to the LinkValue) is always
+kb:hasStandoffLinkToValue.
+
DSP-API automatically updates direct links and reifications for standoff resource references when text values are
+updated.
+To do this, it keeps track of the number of text values in each resource that contain at least one standoff reference to
+a given target resource. It stores this number as the reference count of the LinkValue (see
+LinkValue) describing the direct link. Each time this number changes, it makes a new version of
+the LinkValue, with an updated reference count. When the reference count reaches zero, it removes the direct link and
+makes a new version of the LinkValue, marked with kb:isDeleted.
+
For example, if data:R1 is a resource with a text value in which the resource data:R2 is referenced, the repository
+could contain the following triples:
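
A sketch of such triples (the resource IRIs are illustrative):

```turtle
@prefix kb:          <http://www.knora.org/ontology/knora-base#> .
@prefix rdf:         <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix knora-admin: <http://www.knora.org/ontology/knora-admin#> .
@prefix data:        <http://www.example.org/data/> .

# The automatically created direct link and the property pointing to its reification.
data:R1 kb:hasStandoffLinkTo data:R2 ;
    kb:hasStandoffLinkToValue data:R1-linkvalue .

# The LinkValue's reference count is the number of text values in data:R1
# that contain at least one standoff reference to data:R2.
data:R1-linkvalue a kb:LinkValue ;
    rdf:subject data:R1 ;
    rdf:predicate kb:hasStandoffLinkTo ;
    rdf:object data:R2 ;
    kb:valueHasRefCount 1 ;
    kb:attachedToUser knora-admin:SystemUser .
```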
Link values created automatically for resource references in standoff are visible to all users, and the creator of these
+link values is always
+kb:SystemUser (see Users and Groups). The DSP-API server allows a user to see a standoff link if
+the user has permission to see the source and target resources.
+
Internal Links in a TextValue
+
Internal links in a TextValue can be represented using the data type standoff class StandoffInternalReferenceTag or
+a subclass of it. It has the following property:
+
standoffTagHasInternalReference (1): Points to a StandoffTag that belongs to the same TextValue. It has
+an objectClassConstraint of StandoffTag.
+
Mapping to Create Standoff From XML
+
A mapping allows for the conversion of an XML document to RDF-standoff and back. A mapping defines one-to-one relations
+between XML elements and attributes (with or without a class) on the one hand, and standoff classes and properties on
+the other (see XML to Standoff Mapping).
+
A mapping is represented by a kb:XMLToStandoffMapping which contains one or more kb:MappingElement.
+A kb:MappingElement maps an XML element (including attributes) to a standoff class and standoff properties. It has the
+following properties:
+
+
mappingHasXMLTagname (1): The name of the XML element that is mapped to a standoff class.
+
mappingHasXMLNamespace (1): The XML namespace of the XML element that is mapped to a standoff class. If no namespace
+ is given, noNamespace is used.
+
mappingHasXMLClass (1): The name of the class of the XML element. If it has no class, noClass is used.
+
mappingHasStandoffClass (1): The standoff class the XML element is mapped to.
+
mappingHasXMLAttribute (0-n): Maps XML attributes to standoff properties using MappingXMLAttribute. See below.
+
mappingHasStandoffDataTypeClass (0-1): Indicates the standoff data type class of the standoff class the XML element
+ is mapped to.
+
mappingElementRequiresSeparator (1): Indicates whether an invisible word separator should be inserted after the XML
+ element in the RDF-standoff representation.
+ Without the separator, once the markup is stripped, text segments that belonged to different elements could be
+ concatenated.
+
+
A MappingXMLAttribute has the following properties:
+
+
mappingHasXMLAttributename: The name of the XML attribute that is mapped to a standoff property.
+
mappingHasXMLNamespace: The namespace of the XML attribute that is mapped to a standoff property. If no namespace is
+ given, noNamespace is used.
+
mappingHasStandoffProperty: The standoff property the XML attribute is mapped to.
+
+
DSP-API includes a standard mapping used by the DSP APP. It has the
+IRI http://rdfh.ch/standoff/mappings/StandardMapping and defines mappings for a few elements used to write texts with
+simple markup.
+
Standoff in Digital Editions
+
DSP-API's standoff is designed to make it possible to convert XML documents to standoff and back. One application for
+this
+feature is an editing workflow in which an editor works in an XML editor, and the resulting XML documents are converted
+to standoff and stored in the DSP, where they can be searched and annotated.
+
If an editor wants to correct text that has been imported from XML into standoff, the text can be exported as XML,
+edited, and imported again. To preserve annotations on standoff tags across edits, each tag can automatically be given a
+UUID. In a future version of the Knora base ontology, it may be possible to create annotations that point to UUIDs
+rather than to IRIs. When a text is exported to XML, the UUIDs can be included in the XML. When the edited XML is
+imported again, it can be converted to new standoff tags with the same UUIDs. Annotations that applied to standoff tags
+in the previous version of the text will therefore also apply to equivalent tags in the new version.
+
When text is converted from XML into standoff, tags are also given indexes, which are numbered from 0 within the context
+of a particular text. This makes it possible to order tags that share the same position, and to preserve the hierarchy
+of the original XML document. An ordinary, hierarchical XML tag is converted to a standoff tag that has one index, as
+well as the index of its parent tag, if any. The Knora base ontology also supports non-hierarchical markup such as
+CLIX, which enables overlapping
+markup to be represented in XML. When non-hierarchical markup is converted to standoff, both the start position and the
+end position of the standoff tag have indexes and parent indexes.
+
To support these features, a standoff tag can have these additional properties:
+
+
standoffTagHasStartIndex (0-1): The index of the start position.
+
standoffTagHasEndIndex (0-1): The index of the end position, if this is a non-hierarchical tag.
+
standoffTagHasStartParent (0-1): The IRI of the tag, if any, that contains the start position.
+
standoffTagHasEndParent (0-1): The IRI of the tag, if any, that contains the end position, if this is a
+ non-hierarchical tag.
+
standoffTagHasUUID (0-1): A UUID that can be used to annotate a standoff tag that may be present in different
+ versions of a text,
+ or in different layers of a text (such as a diplomatic transcription and an edited critical text).
+
+
Querying Standoff in SPARQL
+
A future version of DSP-API may provide an API for querying standoff markup. In the meantime, it is possible to query it
+directly in SPARQL. For example, here is a SPARQL query (using RDFS inference) that finds all the text values that
+have a standoff date tag referring to Christmas Eve 2016, contained in a StandoffItalicTag:
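
Reconstructed here as a sketch (the standoff: prefix and StandoffItalicTag come from standoff-onto.ttl; kb:valueHasStandoff is assumed as the property linking a text value to its tags; Julian Day Number 2457747 corresponds to 24 December 2016):

```sparql
PREFIX kb: <http://www.knora.org/ontology/knora-base#>
PREFIX standoff: <http://www.knora.org/ontology/standoff#>

SELECT ?textValue WHERE {
    ?textValue kb:valueHasStandoff ?dateTag .

    ?dateTag a kb:StandoffDateTag ;
        kb:valueHasStartJDN ?start ;
        kb:valueHasEndJDN ?end ;
        kb:standoffTagHasStartParent ?parentTag .

    # The date tag's period must include 24 December 2016 (JDN 2457747).
    FILTER(?start <= 2457747 && ?end >= 2457747)

    ?parentTag a standoff:StandoffItalicTag .
}
```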
+
Users and Groups
+
Each DSP-API user is represented by an object belonging to the class
+kb:User, which is a subclass of foaf:Person, and has the following properties:
+
userid (1)
+
: A unique identifier that the user must provide when logging in.
+
password (1)
+
: A cryptographic hash of the user's password.
+
email (0-n)
+
: Email addresses belonging to the user.
+
isInProject (0-n)
+
: Projects that the user is a member of.
+
isInGroup (0-n)
+
: User-created groups that the user is a member of.
+
foaf:familyName (1)
+
: The user's family name.
+
foaf:givenName (1)
+
: The user's given name.
+
DSP-API's concept of access control is that an object (a resource or value) can grant permissions to groups of users (but
+not to individual users). There are several built-in groups:
+
knora-admin:UnknownUser
+
: Any user who has not logged into DSP-API is automatically assigned to this group.
+
knora-admin:KnownUser
+
: Any user who has logged into DSP-API is automatically assigned to this group.
+
knora-admin:ProjectMember
+
: When checking a user's permissions on an object, the user is automatically assigned to this group if she is a member
+of the project that the object belongs to.
+
knora-admin:Creator
+
: When checking a user's permissions on an object, the user is automatically assigned to this group if he is the
+creator of the object.
+
knora-admin:ProjectAdmin
+
: When checking a user's permissions on an object, the user is automatically assigned to this group if she is an
+administrator of the project that the object belongs to.
+
knora-admin:SystemAdmin
+
: The group of DSP-API system administrators.
+
A user-created ontology can define additional groups, which must belong to the OWL class knora-admin:UserGroup.
+
There is one built-in user, knora-admin:SystemUser, which is the creator of link values created automatically for
resource references in standoff markup (see StandoffLinkTag).
+
Permissions
+
Each resource or value can grant certain permissions to specified user groups. These permissions are represented as the
+object of the predicate kb:hasPermissions, which is required on every kb:Resource
+and on the current version of every kb:Value. The permissions attached to the current version of a value also apply to
+previous versions of the value. Value versions other than the current one do not have this predicate.
+
The following permissions can be granted:
+
+
Restricted view permission (RV): Allows a restricted view of the object, e.g. a view of an image with a watermark.

View permission (V): Allows an unrestricted view of the object. Having view permission on a resource only affects
 the user's ability to view information about the resource other than its values. To view a value, she must have view
 permission on the value itself.

Modify permission (M): For values, this permission allows a new version of a value to be created. For resources,
 this allows the user to create a new value (as opposed to a new version of an existing value), or to change
 information about the resource other than its values. When he wants to make a new version of a value, his permissions
 on the containing resource are not relevant. However, when he wants to change the target of a link, the old link must
 be deleted and a new one created, so he needs modify permission on the resource.

Delete permission (D): Allows the item to be marked as deleted.

Change rights permission (CR): Allows the permissions granted by the object to be changed.
+
+
Each permission in the above list implies all the permissions listed before it. A user's permission level on a particular
object is calculated in the following way:
+
+
Make a list of the groups that the user belongs to, including
+ Creator and/or ProjectMember if applicable.
+
Make a list of the permissions that she can obtain on the object, by iterating over the permissions that the object
+ grants. For each permission, if she is in the specified group, add the specified permission to the list of
+ permissions she can obtain.
+
From the resulting list, select the highest-level permission.
+
If the result is that she would have no permissions, give her whatever permission UnknownUser would have.
+
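The calculation above can be sketched in Python (a hypothetical helper for illustration, not DSP-API code), assuming the object's permissions have already been parsed into a mapping from permission abbreviation to the list of groups it is granted to:

```python
# Permission abbreviations in ascending order of power, as listed above.
LEVELS = ["RV", "V", "M", "D", "CR"]

def permission_level(user_groups, object_permissions):
    """Return the highest permission the user can obtain on the object,
    falling back to whatever knora-admin:UnknownUser would get, else None."""
    def highest(groups):
        obtainable = [perm for perm, granted in object_permissions.items()
                      if any(g in groups for g in granted)]
        return max(obtainable, key=LEVELS.index) if obtainable else None

    level = highest(user_groups)
    if level is None:
        # Step 4: a user with no permissions gets UnknownUser's permissions.
        level = highest({"knora-admin:UnknownUser"})
    return level

# An object granting V to unknown/known users and M to project members:
perms = {
    "V": ["knora-admin:UnknownUser", "knora-admin:KnownUser"],
    "M": ["knora-admin:ProjectMember"],
}
```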
+
To view a link between resources, a user needs permission to view the source and target resources. He also needs
+permission to view the
+LinkValue representing the link, unless the link property is
+hasStandoffLinkTo (see StandoffLinkTag).
+
The format of the object of kb:hasPermissions is as follows:
+
+
Each permission is represented by the one-letter or two-letter abbreviation given above.
+
Each permission abbreviation is followed by a space, then a comma-separated list of groups that the permission is
+ granted to.
+
The IRIs of built-in groups are shortened using the knora-admin
+ prefix.
+
Multiple permissions are separated by a vertical bar (|).
+
+
For example, if an object grants view permission to unknown and known users, and modify permission to project members,
+the resulting permission literal would be:
+
V knora-admin:UnknownUser,knora-admin:KnownUser|M knora-admin:ProjectMember
+
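A literal in this format can be decomposed mechanically; the following Python parser is a hypothetical sketch for illustration, not part of DSP-API:

```python
def parse_permission_literal(literal: str) -> dict:
    """Parse a kb:hasPermissions literal into {abbreviation: [group IRIs]}.

    Permissions are separated by '|'; each consists of an abbreviation,
    a space, and a comma-separated list of group IRIs.
    """
    result = {}
    for entry in literal.split("|"):
        abbreviation, groups = entry.split(" ", 1)
        result[abbreviation] = groups.split(",")
    return result

parsed = parse_permission_literal(
    "V knora-admin:UnknownUser,knora-admin:KnownUser|M knora-admin:ProjectMember"
)
```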
+
Consistency Checking
+
DSP-API tries to enforce repository consistency by checking constraints that are specified in the Knora base ontology and
+in user-created ontologies. Three types of consistency rules are enforced:
+
+
Cardinalities in OWL class definitions must be satisfied.
+
Constraints on the types of the subjects and objects of OWL object properties must be satisfied.
+
A datatype property may not have an empty string as an object.
+
+
OWL Cardinalities
+
As noted in Resources, each subclass of
+Resource must use OWL cardinality restrictions to specify the properties it can have. More specifically, a resource is
+allowed to have a property that is a subproperty of kb:hasValue or kb:hasLinkTo only if the resource's class has
+some cardinality for that property. Similarly, a value is allowed to have a subproperty of kb:valueHas
+only if the value's class has some cardinality for that property.
+
DSP-API supports, and attempts to enforce, the following cardinality constraints:
+
+
+
owl:cardinality 1 (exactly one, 1): A resource of this class must have exactly one instance of the specified property.

owl:minCardinality 1 (at least one, 1-n): A resource of this class must have at least one instance of the specified property.

owl:maxCardinality 1 (zero or one, 0-1): A resource of this class must have either zero or one instance of the specified property.

owl:minCardinality 0 (unbounded, 0-n): A resource of this class may have zero or more instances of the specified property.
+
+
+
DSP-API requires cardinalities to be defined using blank nodes (anonymous owl:Restriction nodes attached to the class via rdfs:subClassOf), as is done in knora-base.
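A sketch of this blank-node syntax, modeled on the Representation classes in knora-base (the exact cardinality values in the published ontology may differ):

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix :     <http://www.knora.org/ontology/knora-base#> .

# Representation requires at least one file value...
:Representation rdfs:subClassOf :Resource ,
        [ rdf:type owl:Restriction ;
          owl:onProperty :hasFileValue ;
          owl:minCardinality "1"^^xsd:nonNegativeInteger ] .

# ...and StillImageRepresentation narrows this to still-image file values.
:StillImageRepresentation rdfs:subClassOf :Representation ,
        [ rdf:type owl:Restriction ;
          owl:onProperty :hasStillImageFileValue ;
          owl:minCardinality "1"^^xsd:nonNegativeInteger ] .
```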
The cardinality of a link property must be the same as the cardinality of the corresponding link value property.
+
Each owl:Restriction may have the predicate salsah-gui:guiOrder to indicate the order in which properties should be
+displayed in a GUI
+(see The SALSAH GUI Ontology).
+
A resource class inherits cardinalities from its superclasses. This follows from the rules of
+RDFS inference. Also, in DSP-API, cardinalities in the subclass can
+override cardinalities that would otherwise be inherited from the superclass. Specifically, if a superclass has a
+cardinality on a property P, and a subclass has a cardinality on a subproperty of P, the subclass's cardinality
overrides the superclass's cardinality. In knora-base, for example,
hasStillImageFileValue is a subproperty of hasFileValue, so a cardinality on hasStillImageFileValue
overrides (i.e. replaces) an inherited cardinality on hasFileValue.
+
Note that, unlike cardinalities, predicates of properties are not inherited. If :foo rdfs:subPropertyOf :bar, this
does not mean that :foo inherits anything from :bar. Any predicates of :bar that are also needed by :foo must be
defined explicitly on :foo. This design decision was made because property predicate inheritance is not provided by
RDFS inference, and would make it more difficult to check the correctness of ontologies, while providing little
practical benefit.
+
For more information about OWL cardinalities, see
+the OWL 2 Primer.
+
Constraints on the Types of Property Subjects and Objects
+
When a user-created ontology defines a property, it must indicate the types that are allowed as objects (and, if
+possible, as subjects) of the property. This is done using the following Knora-specific properties:
+
subjectClassConstraint
+
: Specifies the class that subjects of the property must belong to. This constraint is recommended but not required.
+DSP-API will attempt to enforce this constraint.
+
objectClassConstraint
+
: If the property is an object property, specifies the class that objects of the property must belong to. Every
subproperty of
kb:hasValue or kb:hasLinkTo (i.e. every property of a resource that points to a kb:Value or to another resource)
is required to have this constraint, because DSP-API relies on it to know what type of object to expect for the property.
DSP-API will attempt to enforce this constraint.
+
objectDatatypeConstraint
+
: If the property is a datatype property, specifies the type of literals that can be objects of the property. DSP-API
+will not attempt to enforce this constraint, but it is useful for documentation purposes.
+
Note that it is possible for a subproperty to have a more restrictive constraint than its base property, by specifying a
subject or object class that is a subclass of the one specified in the base property. However, it is not possible for
the subproperty to make the base property's constraint less restrictive.
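For example, such constraints are declared directly on the property definition. The property below is hypothetical (an incunabula-style page number), shown only to illustrate the Turtle syntax:

```turtle
@prefix owl:        <http://www.w3.org/2002/07/owl#> .
@prefix rdf:        <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:       <http://www.w3.org/2000/01/rdf-schema#> .
@prefix knora-base: <http://www.knora.org/ontology/knora-base#> .
@prefix :           <http://www.knora.org/ontology/0803/incunabula#> .

# Hypothetical property: a page's number, stored as an integer value.
:pagenum rdf:type owl:ObjectProperty ;
    rdfs:subPropertyOf knora-base:hasValue ;
    knora-base:subjectClassConstraint :page ;
    knora-base:objectClassConstraint knora-base:IntValue .
```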
Summary of Restrictions on User-Created Ontologies
+
An ontology can refer to a Knora ontology in another project only if the other ontology is built-in or shared
+(see Shared Ontologies).
+
Restrictions on Classes
+
+
Each class must be a subclass of either kb:Resource or
+ kb:StandoffTag, but not both (note that this forbids user-created subclasses of kb:Value).
+
All the cardinalities that a class defines directly (i.e. does not inherit from kb:Resource) must be on properties
+ that are defined in the triplestore.
+
Within the cardinalities of a class, there must be a link value property for each link property and vice versa.
+
The cardinality of a link property must be the same as the cardinality of the corresponding link value property.
+
A cardinality on a property with a boolean value must be
+ owl:cardinality 1 or owl:maxCardinality 1.
+
Each class must be a subclass of all the classes that are subject class constraints of the properties in its
+ cardinalities.
+
If it's a resource class, all its directly defined cardinalities must be on Knora resource properties (subproperties
+ of kb:hasValue
+ or kb:hasLinkTo), and all its base classes with Knora IRIs must also be resource classes. A cardinality
+ on kb:resourceProperty or
+ kb:hasValue is forbidden. It must also have an rdfs:label.
+
If it's a standoff class, none of its cardinalities may be on Knora resource properties, and all its base classes with
+ Knora IRIs must also be standoff classes.
+
A class cannot have a cardinality on property P as well as a cardinality on a subproperty of P.
+
+
Restrictions on Properties
+
+
The property's subject class constraint, if provided, must be a subclass of kb:Resource or kb:StandoffTag, and
+ must be a subclass of the subject class constraints of all its base properties.
+
Its object class constraint, if provided, must be a subclass of the object class constraints of all its base
+ properties.
+
If the property is a Knora resource property, it must have an object class constraint and an rdfs:label.
+
It can't be a subproperty of both kb:hasValue and kb:hasLinkTo.
+
It can't be a subproperty of kb:hasFileValue.
+
Each of its base properties that has a Knora IRI must also be a Knora resource property.
+
+
Standardisation
+
The DaSCH intends to coordinate the standardisation of generally useful entities proposed in
user-created ontologies. We envisage a process in which two or more projects initiate a
public discussion on proposed entities to be shared. Once a consensus is reached, the
DaSCH would publish these entities in a shared ontology
(see Shared Ontologies).
+
Knora Ontology Versions
+
The Knora base ontology has the property kb:ontologyVersion, whose object is a string that indicates the deployed
+version of all the DSP-API built-in ontologies. This allows the
+repository update program to determine which repository updates are needed
+when DSP-API is upgraded.
The SALSAH GUI ontology provides entities that can be used in
+user-created ontologies to indicate to SALSAH (or to another GUI)
+how data should be entered and displayed.
+
The SALSAH GUI ontology is identified by the IRI
+http://www.knora.org/ontology/salsah-gui. In the Knora documentation
+in general, it is identified by the prefix salsah-gui, but for
+brevity, we omit the prefix in this document.
+
Properties
+
guiOrder
+
guiOrder can be attached to a cardinality
+in a resource class, to indicate the order in which properties
+should be displayed in the GUI. The object is a non-negative
+integer. For example, a property with guiOrder 0 would be
+displayed first, followed by a property with guiOrder 1, and so
+on.
+
guiElement
+
guiElement can be attached to a property definition to indicate which
+GUI element should be used to enter data for the property. This
+should be one of the individuals of class Guielement described
+below.
+
guiAttribute
+
guiAttribute can be attached to a property definition to provide attributes for
+the GUI element specified in guiElement. The objects of this
+predicate are written in a DSL with the following syntax:
+
object = attribute name, "=", attribute value ;
attribute name = identifier ;
identifier = letter, { letter } ;
attribute value = integer | decimal | percent | string | iri ;
percent = integer, "%" ;
iri = "<", string, ">" ;
+
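A hypothetical Python parser for this object syntax (illustrative only, not part of DSP-API) might classify each attribute value by the grammar's alternatives:

```python
import re

# attribute name = identifier = letter, { letter }
ATTR_RE = re.compile(r"^([A-Za-z]+)=(.+)$")

def parse_gui_attribute(s: str):
    """Parse a guiAttribute object into (name, (type, value))."""
    m = ATTR_RE.match(s)
    if not m:
        raise ValueError(f"invalid guiAttribute: {s!r}")
    name, raw = m.groups()
    if raw.startswith("<") and raw.endswith(">"):
        return name, ("iri", raw[1:-1])
    if raw.endswith("%") and raw[:-1].isdigit():
        return name, ("percent", int(raw[:-1]))
    if raw.isdigit() or (raw.startswith("-") and raw[1:].isdigit()):
        return name, ("integer", int(raw))
    try:
        return name, ("decimal", float(raw))
    except ValueError:
        return name, ("string", raw)
```

The list IRI used in the test below is a made-up placeholder.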
+
The attributes used with each GUI element are described below under
+Individuals.
+
guiAttributeDefinition
+
guiAttributeDefinition is used only in the salsah-gui ontology itself, as a predicate
+attached to instances of Guielement (see Individuals),
+to specify the attributes that can be given as objects of guiAttribute when a given
+Guielement is used. The objects of this predicate are written in
+a DSL with the following syntax:
+
object = attribute name, [ "(required)" ], ":", attribute type, [ enumerated values ] ;
enumerated values = "(", enumerated value, { "|", enumerated value }, ")" ;
attribute name = identifier ;
attribute type = "integer" | "decimal" | "percent" | "string" | "iri" ;
enumerated value = identifier ;
identifier = letter, { letter } ;
+
+
Enumerated values are allowed only if attribute type is string.
+If enumerated values are provided for an attribute, the attribute
+value given via guiAttribute must be one of the enumerated values.
+
Classes
+
Guielement
+
The instances of class Guielement are individuals representing GUI
+elements for data entry.
+
Individuals
+
Colorpicker
+
Colorpicker is a GUI element for selecting a color. A property definition that uses
+this element may also contain a guiAttribute predicate whose
+object is a string in the form "ncolors=N", where N is an
+integer specifying the number of colors to display.
+
Date
+
Date is a GUI element for selecting a date.
+
Geometry
+
Geometry is a GUI element for selecting the geometry of a two-dimensional
+region.
+
Geonames
+
Geonames is a GUI element for selecting a Geonames
+identifier.
+
Interval
+
Interval is a GUI element for selecting a time interval in an audio or video
+recording.
+
List
+
List is a GUI element for selecting an item in a hierarchical list (see
+ListValue). A property definition that
+uses this element must also contain this guiAttribute predicate:
+
"hlist=<LIST_IRI>", where LIST_IRI is the IRI of a
+knora-base:ListNode that is the root node of a hierarchical list.
+
Pulldown
+
Pulldown is a GUI element for selecting an item in a flat list (see
+ListValue) using a pull-down menu. A
+property definition that uses this element must also contain this
+guiAttribute predicate:
+
"hlist=<LIST_IRI>", where LIST_IRI is the IRI of a
+knora-base:ListNode that is the root node of a hierarchical list.
+
Radio
+
Radio is a GUI element for selecting an item in a flat list (see
+ListValue) using radio buttons. A property
+definition that uses this element must also contain this
+guiAttribute predicate:
+
"hlist=<LIST_IRI>", where LIST_IRI is the IRI of a
+knora-base:ListNode that is the root node of a hierarchical list.
+
Richtext
+
Richtext is a GUI element for editing multi-line formatted text.
+
Searchbox
+
Searchbox is a GUI element for searching for a resource by matching text in its rdfs:label.
+
SimpleText
+
SimpleText is a GUI element for editing a single line of unformatted text. A
+property definition that uses this element may also contain a
+guiAttribute predicate with one or both of the following objects:
+
+
"size=N", where N is an integer specifying the size of the
+ text field.
+
"maxlength=N", where N is an integer specifying the maximum
+ length of the string to be input.
+
+
Slider
+
Slider is a GUI element for choosing numerical values using a slider. A
+property definition that uses this element must also contain a
+guiAttribute predicate with both of the following objects:
+
+
"min=N", where N is an integer specifying the minimum value
+ of the input.
+
"max=N", where N is an integer specifying the maximum value
+ of the input.
+
+
Spinbox
+
Spinbox is a GUI element for choosing numerical values using a spinbox. A
+property definition that uses this element may also contain a
+guiAttribute predicate with one or both of the following objects:
+
+
"min=N", where N is an integer specifying the minimum value
+ of the input.
+
"max=N", where N is an integer specifying the maximum value
+ of the input.
+
+
Textarea
+
Textarea is a GUI element for editing multi-line unformatted text. A property
+definition that uses this element may also contain a guiAttribute
+predicate with one or more of the following objects:
+
+
"width=N", where N is a percentage of the window width (an
+ integer followed by %).
+
"cols=N", where N is an integer representing the number of
 columns in the text entry box.
+
"rows=N", where N is an integer specifying the height of the
+ text entry box in rows.
Additionally, each group can have an optional custom IRI (which must be a valid Knora IRI),
specified via the id field in the request body, as below:
+
{
  "id": "http://rdfh.ch/groups/00FF/a95UWs71KUklnFOe1rcw1w",
  "name": "GroupWithCustomIRI",
  "descriptions": [
    {
      "value": "A new group with a custom IRI",
      "language": "en"
    }
  ],
  "project": "http://rdfh.ch/projects/00FF",
  "status": true,
  "selfjoin": false
}
+
We provide an OpenAPI specification for certain endpoints and are working on providing this for all endpoints.
+The latest version is located at api.dasch.swiss/api/docs/docs.yaml.
+For an interactive documentation of all API endpoints, please visit api.dasch.swiss/api/docs/.
Checks if a list can be deleted (none of its nodes is used in data).

Input parameters:

p1 (path parameter, string, not nullable): The IRI of the list.

Response 200 OK:

{
  "listIri": "string",
  "canDeleteList": true
}
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
Update an existing default object access permission. The request may update the hasPermission value and/or any allowed combination of group, resource class, and property for the permission.
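As a rough illustration, a request body for such an update might pair a target group with a new permissions list. Every field name in this sketch (forGroup, hasPermissions, name, additionalInformation, permissionCode) is an assumption made for illustration only; the schema is the authoritative source for the real shape.

```python
import json

# Hypothetical request body for updating a default object access permission.
# All field names here are assumptions for illustration; consult the schema
# for the authoritative structure.
update_request = {
    "forGroup": "http://www.knora.org/ontology/knora-admin#ProjectMember",
    "hasPermissions": [
        {
            "name": "D",                     # illustrative permission name
            "additionalInformation": None,   # serialized as JSON null
            "permissionCode": 7,             # illustrative permission code
        }
    ],
}

body = json.dumps(update_request)
print(body)
```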
Returns all unique keywords for all projects as a list.

Response 200 OK

{
  "keywords": [
    "string"
  ]
}
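A minimal sketch of consuming this response in Python, assuming the documented shape (a JSON object with a "keywords" array of strings); the keyword values below are illustrative, since the real ones depend on the projects stored on the server:

```python
import json

# Illustrative response body following the documented shape; the actual
# keyword values are project-specific.
raw_response = '{"keywords": ["early printed books", "incunabula"]}'

payload = json.loads(raw_response)
keywords = payload["keywords"]  # a flat list of unique keyword strings

assert all(isinstance(k, str) for k in keywords)
print(sorted(keywords))
```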
!ATTENTION! Erase a project with the given shortcode.
This will permanently and irrecoverably remove the project and all of its assets.
Authorization: Requires system admin permissions.
Only available if the feature has been configured on the server side.

Input parameters

Parameter  | In     | Type    | Default | Nullable | Description
-----------|--------|---------|---------|----------|------------
httpAuth1  | header | string  | N/A     | No       | Basic authentication
httpAuth   | header | string  | N/A     | No       | JWT Bearer token
keepAssets | query  | boolean | False   | No       | If set to true, the assets in ingest will not be removed.
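The parameters above can be assembled into a request along these lines. The base URL, route, and shortcode in this sketch are placeholders and assumptions; only the keepAssets query parameter and the authentication headers come from the parameter list.

```python
from urllib.parse import urlencode

# Sketch of assembling the project-erase request described above. The base
# URL and route are assumptions for illustration; the shortcode is a
# placeholder, not a real project.
base_url = "http://localhost:3333"                 # assumed server address
shortcode = "0001"                                 # placeholder shortcode
query = urlencode({"keepAssets": "true"})          # keep ingested assets

url = f"{base_url}/admin/projects/shortcode/{shortcode}/erase?{query}"

# Either Basic authentication or a JWT Bearer token is accepted; the request
# must carry system admin credentials.
headers = {"Authorization": "Bearer <jwt-token>"}

print(url)
```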
The shortcode of a project. Must be a 4-digit hexadecimal string.

Response 200 OK

{
  "location": "string"
}
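The 4-digit hexadecimal rule for shortcodes can be checked with a small validator. This helper is illustrative only and not part of the API; it accepts both upper- and lower-case hex digits, since the rule above does not specify a case.

```python
import re

# A project shortcode must be a 4-digit hexadecimal string, per the rule
# stated above.
SHORTCODE_RE = re.compile(r"[0-9A-Fa-f]{4}")

def is_valid_shortcode(code: str) -> bool:
    """Return True if `code` is exactly four hexadecimal digits."""
    return SHORTCODE_RE.fullmatch(code) is not None

print(is_valid_shortcode("080E"))   # True: four hex digits
print(is_valid_shortcode("080"))    # False: too short
print(is_valid_shortcode("WXYZ"))   # False: not hexadecimal
```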
Resets the content of the triplestore. Only available if the configuration setting allowReloadOverHttp is set to true.

Input parameters

Parameter       | In    | Type    | Default | Nullable | Description
----------------|-------|---------|---------|----------|------------
prependDefaults | query | boolean | True    | No       | Prepend defaults to the data objects.

Request body

[
  {
    "path": "string",
    "name": "string"
  }
]
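The request body above is a JSON array of RDF data objects, each pairing a file path with a name; assembling one in Python might look like this. The path and name values here are purely illustrative placeholders, not real project data.

```python
import json

# Illustrative request body for the triplestore-reset endpoint: a JSON array
# of RDF data objects, each with a file path and a name. The concrete values
# are placeholders.
rdf_data_objects = [
    {
        "path": "data/example-project.ttl",             # placeholder file path
        "name": "http://www.example.org/data/project",  # placeholder name
    }
]

body = json.dumps(rdf_data_objects)
print(body)
```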
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the request body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the request body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the request body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the request body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
+⚠️This example has been generated automatically from the schema and it is not accurate. Refer to the schema for more information.
+
+Schema of the response body
+
The DSP Admin API makes it possible to administrate projects, users, user groups, permissions, and hierarchical lists.
+
RESTful API
+
The Knora Admin API is a RESTful API that allows reading and adding
+administrative resources from and to Knora, and changing their values,
+using HTTP requests. The data is submitted as JSON (request and
+response format). The HTTP methods are applied according to the
+widespread practice of RESTful APIs: GET for reading, POST for adding,
+PUT for changing resources and values, and DELETE to delete resources or
+values (see
+Using HTTP Methods for RESTful Services).
+
Knora IRIs in the Admin API
+
Every resource that is created or hosted by Knora is identified by a
+unique ID called an Internationalized Resource Identifier (IRI).
+The IRI is required for every API operation to identify the resource in question.
+A Knora IRI itself has the format of a URL.
+For some API operations (e.g. HTTP GET requests), the IRI has to be URL-encoded.
+
Unlike the DSP-API v2, the admin API uses internal IRIs, i.e. the actual IRIs
+that are stored in the triplestore (see Knora IRIs).
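
As an illustration, the URL-encoding mentioned above can be done with Python's standard library (the IRI shown is the example user IRI used below):

```python
from urllib.parse import quote

# Percent-encode a Knora IRI so it can be used as a path segment
# in an admin API request; safe="" also encodes ":" and "/".
iri = "http://rdfh.ch/users/root"
encoded = quote(iri, safe="")
print(encoded)  # http%3A%2F%2Frdfh.ch%2Fusers%2Froot
```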
+
Admin Path Segment
+
Every request to Admin API includes admin as a path segment, e.g.
+http://host/admin/users/iri/http%3A%2F%2Frdfh.ch%2Fusers%2Froot.
+
Admin API Response Format
+
If an API request is handled successfully, Knora responds
+with a 200 HTTP status code. The actual answer from Knora (the
+representation of the requested resource or information about the
+executed API operation) is sent in the HTTP body, encoded as JSON.
+
Placeholder host in sample URLs
+
Please note that all the sample URLs used in this documentation contain
+host as a placeholder. The placeholder host has to be replaced by
+the actual hostname (and port) of the server the Knora instance is
+running on.
+
Authentication
+
For all API operations that change resources or values, the
+client has to provide credentials (username and password) so that the
+API server can authenticate the user making the request. Credentials can
+be sent as a part of the HTTP header or as parts of the URL (see
+Authentication in Knora).
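
As a sketch, credentials can be packed into an HTTP Basic Authorization header as follows (the email and password are made up; see Authentication in Knora for the supported mechanisms):

```python
import base64

# Build an HTTP Basic Authorization header from hypothetical credentials.
email, password = "user@example.com", "test"
basic = base64.b64encode(f"{email}:{password}".encode()).decode()
headers = {"Authorization": f"Basic {basic}"}
```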
+
Admin API Endpoints
+
An overview of all admin API endpoints can be found here.
GET: /admin/lists[?projectIri=<projectIri>] : return all lists, optionally filtered by project
+
GET: /admin/lists/<listItemIri> : return a complete list with all children if the IRI of the list (i.e. root node) is given;
+ if the IRI of a child node is given, return the node with its immediate children
+
GET: /admin/lists/infos/<listIri> : return list information (without children)
+
GET: /admin/lists/nodes/<nodeIri> : return list node information (without children)
+
GET: /admin/lists/<listIri>/info : return basic list information (without children)
+
GET: /admin/lists/candelete/<listItemIri> : check if a list or one of its nodes is unused and can be deleted
+
POST: /admin/lists : create a new list
+
POST: /admin/lists/<parentNodeIri> : create a new child node under the supplied parent node IRI
+
PUT: /admin/lists/<listItemIri> : update node information (root or child)
+
PUT: /admin/lists/<listItemIri>/name : update the name of the node (root or child)
+
PUT: /admin/lists/<listItemIri>/labels : update the labels of the node (root or child)
+
PUT: /admin/lists/<listItemIri>/comments : update the comments of the node (root or child)
+
PUT: /admin/lists/<nodeIri>/position : update the position of a child node within its current parent
+ or by changing its parent node
+
DELETE: /admin/lists/<listItemIri> : delete a list (i.e. root node) or a child node and all its children, if not in use
+
DELETE: /admin/lists/comments/<nodeIri> : delete the comments of a node (child only)
+
List Item Operations
+
Get lists
+
+
Required permission: none
+
Return all lists optionally filtered by project
+
GET: /admin/lists[?projectIri=<projectIri>]
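
A minimal sketch of building this request URL, with host as a placeholder as elsewhere in this document (the helper name is our own):

```python
from urllib.parse import urlencode

def lists_url(project_iri=None):
    # Append the optional projectIri filter as a URL-encoded query parameter.
    base = "http://host/admin/lists"
    if project_iri is None:
        return base
    return base + "?" + urlencode({"projectIri": project_iri})

print(lists_url("http://rdfh.ch/projects/0001"))
# http://host/admin/lists?projectIri=http%3A%2F%2Frdfh.ch%2Fprojects%2F0001
```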
+
+
Get list
+
+
Required permission: none
+
Return complete list (or node) including basic information of the list (or child node),
+ listinfo (or nodeinfo), and all its children
+
GET: /admin/lists/<listIri>
+
+
Get list's information
+
+
Required permission: none
+
Return list information, listinfo (without children).
Additionally, each list can have an optional custom IRI (of Knora IRI form)
+specified by the id in the request body as below:
+
{
+  "id": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A",
+  "projectIri": "http://rdfh.ch/projects/0001",
+  "name": "a new list",
+  "labels": [{ "value": "New list with IRI", "language": "en" }],
+  "comments": [{ "value": "New comment", "language": "en" }]
+}
+
+
The response will contain the basic information of the list, listinfo and an empty list of its children, as below:
+
{
+  "list": {
+    "children": [],
+    "listinfo": {
+      "comments": [{ "value": "New comment", "language": "en" }],
+      "id": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A",
+      "isRootNode": true,
+      "labels": [{ "value": "New list with IRI", "language": "en" }],
+      "name": "a new list",
+      "projectIri": "http://rdfh.ch/projects/0001"
+    }
+  }
+}
+
Appends a new child node under the supplied nodeIri. If the supplied nodeIri
+ is the listIri, then a new child node is appended to the top level. If a position is given
+ for the new child node, the node will be created and inserted in the specified position; otherwise
+ the node is appended to the end of the parent's children.
+
POST: /admin/lists/<parentNodeIri>
+
BODY:
+
+
{
+  "parentNodeIri": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A",
+  "projectIri": "http://rdfh.ch/projects/0001",
+  "name": "a child",
+  "labels": [{ "value": "New List Node", "language": "en" }]
+}
+
+
Additionally, each child node can have an optional custom IRI (of Knora IRI
+form) specified by the id in the request body as below:
+
{
+  "id": "http://rdfh.ch/lists/0001/8u37MxBVMbX3XQ8-d31x6w",
+  "parentNodeIri": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A",
+  "projectIri": "http://rdfh.ch/projects/0001",
+  "name": "a child",
+  "labels": [{ "value": "New List Node", "language": "en" }]
+}
+
+
The response will contain the basic information of the node, nodeinfo, as below:
+
{
+  "nodeinfo": {
+    "comments": [],
+    "hasRootNode": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A",
+    "id": "http://rdfh.ch/lists/0001/8u37MxBVMbX3XQ8-d31x6w",
+    "labels": [{ "value": "New List Node", "language": "en" }],
+    "name": "a new child",
+    "position": 1
+  }
+}
+
+
The new node can be created and inserted in a specific position which must be given in the payload as shown below.
+If necessary, according to the given position, the sibling nodes will be shifted.
+Note that position cannot have a value higher than the number of existing children.
+
{
+  "parentNodeIri": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A",
+  "projectIri": "http://rdfh.ch/projects/0001",
+  "name": "Inserted new child",
+  "position": 0,
+  "labels": [{ "value": "New List Node", "language": "en" }]
+}
+
+
In case the new node should be appended to the list of current children, either position: -1 must be given in the
+payload or the position parameter must be left out of the payload.
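
The insertion rule above can be sketched as a small model that treats a parent's children as an ordered list (an illustration only, not DSP-API code):

```python
def insert_child(children, node, position=-1):
    # position -1 (or omitted) appends; otherwise insert and shift later siblings.
    if position == -1:
        children.append(node)
    elif 0 <= position <= len(children):
        children.insert(position, node)
    else:
        raise ValueError("position cannot exceed the number of existing children")
    return children

print(insert_child(["first", "second"], "inserted", 0))
# ['inserted', 'first', 'second']
```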
+
Update list's or node's information
+
The basic information of a list (or node) such as its labels, comments, name, or all of them can be updated.
+The parameters that must be updated together with the new value must be given in the JSON body of the request
+together with the IRI of the list and the IRI of the project it belongs to.
+
+
Required permission: SystemAdmin / ProjectAdmin
+
Required fields: listIri, projectIri
+
Update list information
+
PUT: /admin/lists/<listIri>
+
BODY:
+
+
{
+  "listIri": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A",
+  "projectIri": "http://rdfh.ch/projects/0001",
+  "name": "new name for the list",
+  "labels": [{ "value": "a new label for the list", "language": "en" }],
+  "comments": [{ "value": "a new comment for the list", "language": "en" }]
+}
+
+
The response will contain the basic information of the list, listinfo (or nodeinfo), without its children, as below:
+
{
+  "listinfo": {
+    "comments": [{ "value": "a new comment for the list", "language": "en" }],
+    "id": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A",
+    "isRootNode": true,
+    "labels": [{ "value": "a new label for the list", "language": "en" }],
+    "name": "new name for the list",
+    "projectIri": "http://rdfh.ch/projects/0001"
+  }
+}
+
+
If only the name of the list needs to be updated, it can be given in the body of the request as below:
There is no need to specify the project IRI, because it is automatically extracted using the given <listItemIri>.
+
Repositioning a child node
+
The position of an existing child node can be updated. The child node can be either repositioned within its
+current parent node, or can be added to another parent node in a specific position. The IRI of the parent node
+and the new position of the child node must be given in the request body.
+
If a node is supposed to be repositioned to the end of a parent node's children, give position: -1.
+
Suppose a parent node parentNode1 has five children in positions 0-4, to change the position of its child node
+childNode4 from its original position 3 to position 1 the request body should specify the IRI of its parent node
+and the new position as below:
Then the node childNode4 will be put in position 1, and its siblings will be shifted accordingly. The new position given
+in the request body cannot be the same as the child node's original position. If position: -1 is given, the node will
+be moved to the end of the children list, and its siblings will be shifted to the left. In case of repositioning the node
+within its current parent, the maximum permitted position is the length of its children list, i.e. in this example the
+highest allowed position is 4.
+
To reposition a child node childNode4 to another parent node parentNode2 in a specific position, for
+example position: 3, the IRI of the new parent node and the position at which the node must be placed within the
+children of parentNode2 must be given as:
In this case, the childNode4 is removed from the list of children of its old parent parentNode1 and its old
+siblings are shifted accordingly. Then the node childNode4 is added to the specified new parent, i.e. parentNode2, in
+the given position. The new siblings are shifted accordingly.
+
Note that the furthest the node can be placed is at the end of the list of the children of parentNode2. That means
+if parentNode2 had 3 children with positions 0-2, then childNode4 can be placed in position 0-3 within children
+of its new parent node. If the position: -1 is given, the node will be appended to the end of new parent's children,
+and new siblings will not be shifted.
+
Values less than -1 are not permitted for parameter position.
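
The repositioning and shifting rules can likewise be sketched over ordered children lists (an illustration only; the node names are made up):

```python
def reposition(node, old_children, new_children, position):
    # Moving within the same parent works too: pass the same list twice.
    if position < -1:
        raise ValueError("values less than -1 are not permitted")
    old_children.remove(node)                # old siblings shift left
    if position in (-1, len(new_children)):
        new_children.append(node)            # append to the end
    elif 0 <= position < len(new_children):
        new_children.insert(position, node)  # new siblings shift right
    else:
        raise ValueError("position is beyond the end of the children list")

parent1 = ["child0", "child1", "child2", "child3", "child4"]
reposition("child3", parent1, parent1, 1)
print(parent1)  # ['child0', 'child3', 'child1', 'child2', 'child4']
```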
+
+
Required permission: SystemAdmin / ProjectAdmin
+
Response: returns the updated parent node with all its children.
+
PUT: /admin/lists/<nodeIri>/position
+
+
Delete a list or a node
+
An entire list or a single node of it can be completely deleted, if not in use. Before deleting an entire list
+(i.e. root node), the data and ontologies are checked for any usage of the list or its children. If not in use, the list
+and all its children are deleted.
+
Similarly, before deleting a single node of a list, it is verified that neither the node itself nor any of its children are in use.
+If not in use, the node and all its children are deleted. Once a node is deleted, its parent node is updated by shifting the
+remaining child nodes with respect to the position of the deleted node.
+
+
Required permission: SystemAdmin / ProjectAdmin
+
+
Response:
+
+
If the IRI of the list (i.e. root node) is given, the IRI of the deleted list with a flag deleted: true is returned.
+
If the IRI of a child node is given, the updated parent node is returned.
+
+
+
+
DELETE: /admin/lists/<listItemIri>
+
+
+
Delete child node comments
+
Performing a DELETE request to route /admin/lists/comments/<nodeIri> deletes the comments of that node.
+As a response, simple JSON is returned:
For the management of users, projects, groups, lists, and permissions, the DSP-API, following a resource-centric
+approach, provides the following endpoints corresponding to the respective classes of objects that they have an
+effect on, namely:
deprecated, use /admin/permissions/doap/{permissionIri} instead
+
+
+
PUT: /admin/permissions/{doap_permissionIri}/property : deprecated, use /admin/permissions/doap/{permissionIri} instead
+
+
+
+
Permission Operations
+
Note: For the following operations, the requesting user must be either a systemAdmin or a projectAdmin.
+
Getting Permissions
+
+
+
GET: /admin/permissions/<projectIri> : return all permissions for a project.
+As a response, the IRI and the type of all permissions of a project are returned.
+
+
+
GET: /admin/permissions/ap/<projectIri>: return all administrative permissions
+for a project. As a response, all administrative_permissions of a project are returned.
+
+
+
GET: /admin/permissions/ap/<projectIri>/<groupIri>: return the administrative
+permissions for a project group. As a response, the administrative_permission defined
+for the group is returned.
+
+
+
GET: /admin/permissions/doap/<projectIri>: return all default object access
+permissions for a project. As a response, all default_object_access_permissions of a
+project are returned.
+
+
+
Creating New Administrative Permissions
+
+
POST: /admin/permissions/ap: create a new administrative permission. The type of
+permissions, the project and group to which the permission should be added must be
+included in the request body, for example:
In addition, in the body of the request, it is possible to specify a custom IRI (of
+DSP IRI form) for a permission through
+the @id attribute which will then be assigned to the permission; otherwise the permission will get a unique random IRI.
+A custom permission IRI must be http://rdfh.ch/permissions/PROJECT_SHORTCODE/ (where PROJECT_SHORTCODE
+is the shortcode of the project that the permission belongs to), plus a custom ID string. For example:
additionalInformation: should be left empty, otherwise it will be ignored.
+
name: indicates the type of the permission, which can be one of the following:
+
ProjectAdminAllPermission: gives the user the permission to do anything
+ on project level, i.e. create new groups, modify all
+ existing groups
+
ProjectAdminGroupAllPermission: gives the user the permission to modify
+ group info and group membership on all groups
+ belonging to the project.
+
ProjectAdminGroupRestrictedPermission: gives the user the permission to modify
+ group info and group membership on certain groups
+ belonging to the project.
+
ProjectAdminRightsAllPermission: gives the user the permission to change the
+ permissions on all objects belonging to the project
+ (e.g., default permissions attached to groups and
+ permissions on objects).
+
ProjectResourceCreateAllPermission: gives the permission to create resources
+ inside the project.
+
ProjectResourceCreateRestrictedPermission: gives restricted resource creation permission
+ inside the project.
+
+
+
permissionCode: should be left empty, otherwise it will be ignored.
+
+
Note that during the creation of a new project,
+a default set of administrative permissions is added to its ProjectAdmin and ProjectMember groups
+(See Default set of permissions for a new project).
+Therefore, it is not possible to create new administrative permissions
+for the ProjectAdmin and ProjectMember groups of a project.
+However, the default permissions set for these groups can be modified
+(See update permission).
+
Creating New Default Object Access Permissions
+
+
POST: /admin/permissions/doap : create a new default object access permission.
+A single instance of knora-admin:DefaultObjectAccessPermission must
+always reference a project, but can only reference either a group
+(knora-admin:forGroup property), a resource class
+(knora-admin:forResourceClass), a property (knora-admin:forProperty),
+or a combination of resource class and property. For example, to create a new
+default object access permission for a group of a project the request body would be
additionalInformation: To whom the permission should be granted: project members, known users, unknown users, etc.
+
name: indicates the type of the permission, which can be one of the following:
+
RV: restricted view permission (least privileged)
+
V: view permission
+
M: modify permission
+
D: delete permission
+
CR: change rights permission (most privileged)
+
+
+
permissionCode: The code assigned to a permission indicating its hierarchical level. These codes are as below:
+
1: for restricted view permission (least privileged)
+
2: for view permission
+
6: for modify permission
+
7: for delete permission
+
8: for change rights permission (most privileged)
+
+
+
+
Note that at least one of name or permissionCode must be provided. If one is missing, it will be extrapolated from the other.
+For example, if permissionCode=1 is given but name is left empty, name will be set to RV.
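
The name/permissionCode correspondence can be sketched as a lookup (an illustrative helper, not part of the API):

```python
CODE_TO_NAME = {1: "RV", 2: "V", 6: "M", 7: "D", 8: "CR"}
NAME_TO_CODE = {name: code for code, name in CODE_TO_NAME.items()}

def complete(name=None, permission_code=None):
    # Fill in whichever of the two values is missing.
    if name is None and permission_code is None:
        raise ValueError("at least name or permissionCode must be provided")
    if name is None:
        name = CODE_TO_NAME[permission_code]
    if permission_code is None:
        permission_code = NAME_TO_CODE[name]
    return name, permission_code

print(complete(permission_code=1))  # ('RV', 1)
```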
+
Similar to the previous case, a custom IRI can be assigned to a permission, specified by the id in the request body.
+The example below shows the request body to create a new default object access permission with a custom IRI defined for
+a resource class of a specific project:
Note that during the creation of a new project,
+a set of default object access permissions is created for its ProjectAdmin and ProjectMember groups
+(See Default set of permissions for a new project).
+Therefore, it is not possible to create new default object access permissions
+for the ProjectAdmin and ProjectMember groups of a project.
+However, the default permissions set for these groups can be modified; see below for more information.
+
Updating an existing Default Object Access Permission
+
+
PUT: /admin/permissions/doap/<doap_permissionIri> to change the attributes of an existing default object
+ access permission, identified by its IRI <doap_permissionIri>.
+
+
This is an example of a request body to update an existing default object access permission:
All attributes of the default object access permission are optional and may be combined.
+
+
Warning
+
Only certain combinations of attributes are allowed; exactly one of the following combinations must be used:
+
+
forGroup
+
forResourceClass
+
forProperty
+
forResourceClass and forProperty
+
+
+
If the combination of attributes is not allowed, the request will fail with a 400 Bad Request error.
+Any valid combination of attributes will replace the existing values.
PUT: /admin/permissions/<permissionIri>/group to change the group for which an administrative or a default object
+access permission, identified by its IRI <permissionIri>, is defined. The request body must contain the IRI of the new
+group as below:
When updating an administrative permission, its previous forGroup value will be replaced with the new one.
+When updating a default object access permission, if it originally had a forGroup value defined, it will be replaced
+with the new group. Otherwise, if the default object access permission was defined for a resource class or a property or
+the combination of both, the permission will be defined for the newly specified group and its previous
+forResourceClass and forProperty values will be deleted.
PUT: /admin/permissions/<permissionIri>/hasPermissions to change the scope of permissions assigned to an administrative
+ or a default object access permission identified by its IRI, <permissionIri>. The request body must contain the new set
+ of permission types as below:
Each permission item given in hasPermissions must contain the necessary parameters with respect to the type of the
+permission. For example, if you wish to change the scope of an administrative permission, follow the
+guidelines for the
+content of its hasPermissions property. Similarly, if you wish to change the scope of a default object access permission,
+follow the guidelines given about the content of its hasPermissions property.
+Either the name or the permissionCode must be present; it is not necessary to provide both.
+
The previous permission set is replaced by the new permission set. In order to remove a permission for a group
+entirely, you can provide a new set of permissions, leaving out the permission specification for the group.
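
Because the submitted set replaces the previous one, removing a group's permission amounts to filtering its entry out before resubmitting. A sketch (the group IRIs shown are the built-in knora-admin groups, used here for illustration):

```python
has_permissions = [
    {"name": "D", "permissionCode": 7,
     "additionalInformation": "http://www.knora.org/ontology/knora-admin#ProjectMember"},
    {"name": "V", "permissionCode": 2,
     "additionalInformation": "http://www.knora.org/ontology/knora-admin#KnownUser"},
]

def without_group(permissions, group_iri):
    # Keep every permission item except the one granted to group_iri.
    return [p for p in permissions if p["additionalInformation"] != group_iri]

new_set = without_group(has_permissions, "http://www.knora.org/ontology/knora-admin#KnownUser")
print([p["name"] for p in new_set])  # ['D']
```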
+
Deleting a Permission
+
+
DELETE: /admin/permissions/<permissionIri> to delete an administrative or a default object access permission. The
+IRI of the permission must be given in encoded form.
shortname (unique, 3-20 characters long; can contain lowercase and uppercase letters, numbers, and the special characters -
+and _; cannot start with a number or with one of the allowed special characters; must be in the form of an
+xsd:NCName and URL safe)
+
description (collection of descriptions as strings with language tag)
+
keywords (collection of keywords)
+
status (true, if project is active. false, if project is inactive)
+
selfjoin
+
+
Optional payload:
+
+
id (unique, custom DSP IRI, e.g. used for migrating a project from one server to another)
400 Bad Request if the project already exists or any of the provided properties is invalid.
+
401 Unauthorized if authorization failed.
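
The shortname constraints above can be approximated as a regular expression (our own sketch; the authoritative validation is done by the server):

```python
import re

# 3-20 characters; letters, digits, "-" and "_"; must start with a letter.
SHORTNAME_REGEX = re.compile(r"[A-Za-z][A-Za-z0-9_-]{2,19}")

def is_valid_shortname(shortname):
    return SHORTNAME_REGEX.fullmatch(shortname) is not None

print(is_valid_shortname("incunabula"))  # True
print(is_valid_shortname("1images"))     # False (starts with a number)
```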
+
+
Default set of RestrictedViewSize
+
Starting from the DSP 2023.10.02 release, the creation of a new project will also set the RestrictedViewSize to the default
+value, which is !512,512. It is possible to change the value using dedicated routes.
+
Default set of permissions for a new project
+
When a new project is created, the following default permissions are added to its admins and members:
+
+
+
ProjectAdmin group receives an administrative permission to do all project level operations
+ and to create resources within the new project.
+ This administrative permission is retrievable through its IRI:
+ http://rdfh.ch/permissions/[projectShortcode]/defaultApForAdmin
+
+
+
ProjectAdmin group also gets a default object access permission to change rights
+ (which includes delete, modify, view, and restricted view permissions) of any entity that belongs to the project.
+ This default object access permission is retrievable through its IRI:
+ http://rdfh.ch/permissions/[projectShortcode]/defaultDoapForAdmin
+
+
+
ProjectMember group receives an administrative permission to create resources within the new project.
+ This administrative permission is retrievable through its IRI:
+ http://rdfh.ch/permissions/[projectShortcode]/defaultApForMember
+
+
+
ProjectMember group also gets a default object access permission to delete
+ (which includes modify, view and restricted view permissions) of any entity that belongs to the project.
+ This default object access permission is retrievable through its IRI:
+ http://rdfh.ch/permissions/[projectShortcode]/defaultDoapForMember
+
+
+
Get Project by ID
+
The ID can be shortcode, shortname or IRI.
+
Permissions: No permissions required
+
Request definition:
+
+
GET /admin/projects/shortcode/{shortcode}
+
GET /admin/projects/shortname/{shortname}
+
GET /admin/projects/iri/{iri}
+
+
Description: Returns a single project identified by shortcode, shortname or IRI.
!d,d The returned image is scaled so that its width and height are not greater than d,
+ while maintaining the aspect ratio.
+
pct:n The width and height of the returned image is scaled to n percent
+ of the width and height of the original image. 1<= n <= 100.
+
+
If the watermark is set to true, the returned image will be watermarked; otherwise, the default size !128,128 is applied.
+It is only possible to set either the size or the watermark, not both at the same time.
+
Permissions: ProjectAdmin/SystemAdmin
+
Request definition:
+
+
POST /admin/projects/iri/{iri}/RestrictedViewSettings
+
POST /admin/projects/shortcode/{shortcode}/RestrictedViewSettings
+
+
Description: Sets the project's restricted view settings.
+
The endpoint accepts either a size or a watermark but not both.
This endpoint allows manipulation of the triplestore content.
+
POST admin/store/ResetTriplestoreContent resets the triplestore content, given that the allowReloadOverHttp
+configuration flag is set to true. This route is mostly used in tests.
A client can obtain an access token by sending a POST request with identifier and password in the body
+(e.g., {"identifier_type":"identifier_value", "password":"password_value"}) to the /v2/authentication route.
+The identifier_type can be iri, email, or username.
+If the credentials are valid, a JSON Web Token (JWT) will be sent back in the
+response (e.g., {"token": "eyJ0eXAiOiJ..."}). Additionally, for web browser clients, a session cookie
+containing the JWT is also created, of the form KnoraAuthentication=eyJ0eXAiOiJ....
+
To logout, the client sends a DELETE request to the same route /v2/authentication, supplying
+the access token in one of the three described ways. This invalidates the access token,
+so that further requests supplying the invalidated token will be rejected.
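A minimal sketch of this flow in Python. The endpoint paths and response shape come from this document; the credentials, the token value, and the use of a Bearer Authorization header are illustrative assumptions (no real HTTP call is made here):

```python
import json

# Hypothetical credentials; identifier_type can be "iri", "email", or "username".
login_request = json.dumps({"email": "user@example.org", "password": "secret"})

# POST login_request to /v2/authentication; on success the response body
# looks like {"token": "eyJ0eXAiOiJ..."}.
response_body = '{"token": "eyJ0eXAiOiJ..."}'  # stand-in for the real HTTP response
token = json.loads(response_body)["token"]

# Supply the token on subsequent requests, e.g. as a Bearer header
# (one of the supported ways of passing the access token):
headers = {"Authorization": f"Bearer {token}"}

# To log out, send DELETE to /v2/authentication with the same token,
# which invalidates it for further requests.
```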
+
Checking Credentials
+
To check the credentials, send a GET request to /v2/authentication with the credentials
+supplied as URL parameters or HTTP authentication headers as described before.
+
Usage Scenarios
+
+
Create a token by logging in, send the token on each subsequent request, and log out when finished.
The body of the request is a JSON-LD document in the
+complex API schema, specifying the resource's type, its rdfs:label, and its Knora resource properties
+and their values. The representation of the resource is the same as when it is returned in a GET request, except that
+its knora-api:attachedToUser is not given, and the resource IRI and those of its values can be optionally specified.
+The format of the values submitted is described in Creating and Editing Values.
+If there are multiple values for a property, these must be given in an array.
+
For example, here is a request to create a resource with various value types:
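A reduced sketch of such a request body, showing just an integer value and an unformatted text value (the project IRI and ontology prefixes are illustrative, matching the other examples in this document):

```jsonld
{
  "@type" : "anything:Thing",
  "rdfs:label" : "test thing",
  "knora-api:attachedToProject" : {
    "@id" : "http://rdfh.ch/projects/0001"
  },
  "anything:hasInteger" : {
    "@type" : "knora-api:IntValue",
    "knora-api:intValueAsInt" : 1
  },
  "anything:hasText" : {
    "@type" : "knora-api:TextValue",
    "knora-api:valueAsString" : "this is text without standoff"
  },
  "@context" : {
    "rdfs" : "http://www.w3.org/2000/01/rdf-schema#",
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
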
Permissions for the new resource can be given by adding knora-api:hasPermissions, a custom creation date can be
+specified by adding knora-api:creationDate
+(an xsd:dateTimeStamp), and the resource's creator can be specified
+by adding knora-api:attachedToUser. For example:
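A sketch of a creation request carrying custom permissions, a custom creation date, and a different creator (the user IRI and timestamp are illustrative):

```jsonld
{
  "@type" : "anything:Thing",
  "rdfs:label" : "test thing",
  "knora-api:attachedToProject" : {
    "@id" : "http://rdfh.ch/projects/0001"
  },
  "knora-api:attachedToUser" : {
    "@id" : "http://rdfh.ch/users/9XBCrDV3SRa7kS1WwynB4Q"
  },
  "knora-api:hasPermissions" : "CR knora-admin:Creator|V knora-admin:KnownUser",
  "knora-api:creationDate" : {
    "@type" : "xsd:dateTimeStamp",
    "@value" : "2019-01-09T15:45:54.502951Z"
  },
  "@context" : {
    "rdfs" : "http://www.w3.org/2000/01/rdf-schema#",
    "xsd" : "http://www.w3.org/2001/XMLSchema#",
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
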
To create a resource, the user must have permission to create resources of that class in that project.
+
The predicate knora-api:attachedToUser can be used to specify a creator other than the requesting user only if the
+requesting user is an administrator of the project or a system administrator. The specified creator must also have
+permission to create resources of that class in that project.
+
In addition to the creation date, it is possible to specify a custom IRI
+(of Knora IRI form) for a resource in the body of the request through the @id attribute, which will then be assigned
+to the resource; otherwise the resource will get a unique random IRI.
+
A custom resource IRI must be http://rdfh.ch/PROJECT_SHORTCODE/ (where PROJECT_SHORTCODE
+is the shortcode of the project that the resource belongs to) plus a custom ID string.
+
Similarly, it is possible to assign a custom IRI to the values using their @id attributes; if not given, random IRIs
+will be assigned to the values.
+
A custom value IRI must be the IRI of the containing resource, followed by /values/ and a custom ID string.
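The two IRI patterns above can be sketched as small helpers. The shortcode and generated IDs are illustrative; the ID string here is a base64url-encoded UUID without padding, matching the custom-ID format this document describes:

```python
import base64
import uuid

def make_custom_id() -> str:
    # Base64url-encode a UUID and strip the padding, giving a URL-safe ID string.
    return base64.urlsafe_b64encode(uuid.uuid4().bytes).decode("ascii").rstrip("=")

def custom_resource_iri(shortcode: str, custom_id: str) -> str:
    # http://rdfh.ch/PROJECT_SHORTCODE/ plus a custom ID string.
    return f"http://rdfh.ch/{shortcode}/{custom_id}"

def custom_value_iri(resource_iri: str, custom_id: str) -> str:
    # The containing resource's IRI, followed by /values/ and a custom ID string.
    return f"{resource_iri}/values/{custom_id}"

res_iri = custom_resource_iri("0001", make_custom_id())
val_iri = custom_value_iri(res_iri, make_custom_id())
```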
+
An optional custom UUID of a value can also be given by adding knora-api:valueHasUUID. Each custom UUID must
+be base64url-encoded without padding. Each value of the new resource
+can also have a custom creation date specified by adding knora-api:creationDate
+(an xsd:dateTimeStamp). For example:
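A sketch of a creation request that assigns a custom IRI to the resource and a custom IRI and UUID to one of its values (all ID strings here are illustrative):

```jsonld
{
  "@id" : "http://rdfh.ch/0001/oC30PGSdTICtnVF9316EPg",
  "@type" : "anything:Thing",
  "rdfs:label" : "test thing",
  "knora-api:attachedToProject" : {
    "@id" : "http://rdfh.ch/projects/0001"
  },
  "anything:hasInteger" : {
    "@id" : "http://rdfh.ch/0001/oC30PGSdTICtnVF9316EPg/values/IN4R19yYR0ygi3K2VEHpUQ",
    "@type" : "knora-api:IntValue",
    "knora-api:intValueAsInt" : 5,
    "knora-api:valueHasUUID" : "IN4R19yYR0ygi3K2VEHpUQ"
  },
  "@context" : {
    "rdfs" : "http://www.w3.org/2000/01/rdf-schema#",
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
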
You can modify the following metadata attached to a resource:
+
+
label
+
permissions
+
last modification date
+
+
To do this, use this route:
+
HTTP PUT to http://host/v2/resources
+
+
The request body is a JSON-LD object containing the following information about the resource:
+
+
@id: the resource's IRI
+
@type: the resource's class IRI
+
knora-api:lastModificationDate: an xsd:dateTimeStamp representing the last modification date that is currently
+ attached to the resource, if any. This is used to make sure that the resource has not been modified by someone else
+ since you last read it.
+
+
The submitted JSON-LD object must also contain one or more of the following predicates, representing the metadata you
+want to change:
+
+
rdfs:label: a string
+
knora-api:hasPermissions, in the format described
+ in Permissions
```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "rdfs:label" : "this is the new label",
  "knora-api:hasPermissions" : "CR knora-admin:Creator|M knora-admin:ProjectMember|V knora-admin:ProjectMember",
  "knora-api:lastModificationDate" : {
    "@type" : "xsd:dateTimeStamp",
    "@value" : "2017-11-20T15:55:17Z"
  },
  "knora-api:newModificationDate" : {
    "@type" : "xsd:dateTimeStamp",
    "@value" : "2018-12-21T16:56:18Z"
  },
  "@context" : {
    "rdf" : "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "rdfs" : "http://www.w3.org/2000/01/rdf-schema#",
    "xsd" : "http://www.w3.org/2001/XMLSchema#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
+
+
If you submit a knora-api:lastModificationDate that is different from the resource's actual last modification date,
+you will get an HTTP 409 (Conflict) error.
+
If you submit a knora-api:newModificationDate that is earlier than the resource's knora-api:lastModificationDate,
+you will get an HTTP 400 (Bad Request) error.
+
A successful response is an HTTP 200 (OK) status containing the resource's metadata.
+
Deleting a Resource
+
Knora does not normally delete resources; instead, it marks them as deleted, which means that they do not appear in
+normal query results.
+
To mark a resource as deleted, use this route:
+
HTTP POST to http://host/v2/resources/delete
+
+
The request body is a JSON-LD object containing the following information about the resource:
+
+
@id: the resource's IRI
+
@type: the resource's class IRI
+
knora-api:lastModificationDate: an xsd:dateTimeStamp representing the last modification date that is currently
+ attached to the resource, if any. This is used to make sure that the resource has not been modified by someone else
+ since you last read it.
+
+
```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "knora-api:lastModificationDate" : {
    "@type" : "xsd:dateTimeStamp",
    "@value" : "2019-02-05T17:05:35.776747Z"
  },
  "knora-api:deleteComment" : "This resource was created by mistake.",
  "@context" : {
    "rdf" : "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "rdfs" : "http://www.w3.org/2000/01/rdf-schema#",
    "xsd" : "http://www.w3.org/2001/XMLSchema#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
+
+
The optional property knora-api:deleteComment specifies a comment to be attached to the resource, explaining why it
+has been marked as deleted.
+
The optional property knora-api:deleteDate
+(an xsd:dateTimeStamp)
+indicates when the resource was marked as deleted; if not given, the current time is used.
+
The response is a JSON-LD document containing the predicate knora-api:result with a confirmation message.
+
Requesting Deleted Resources
+
Resources marked as deleted are not found in search queries. It is, however, possible to request them directly or via an
+ARK URL. In these cases, the API will not return the deleted resource, but instead a generic resource of type
+knora-base:DeletedResource. This resource is similar to the requested resource, having e.g. the same IRI,
+and contains the deletion date and optionally the deletion comment.
+
The response to requesting a deleted resource looks like the following example:
If resource A has a link to resource B, and resource
+B is later marked as deleted, A's link will still exist. DSP-API v2 will still return the link when A is queried,
+but without any information about B (except for B's IRI). If A's link is necessary to meet the requirements of a
+cardinality, marking B as deleted will not violate the cardinality.
+
The reason for this design is that A and B might be in different projects, and each project must retain control of
+its resources and be able to mark them as deleted, even if they are used by another project.
+
Erasing a Resource from the Triplestore
+
Normally, resources are not actually removed from the triplestore; they are only marked as deleted (see
+Deleting a Resource). However, sometimes it is necessary to erase a resource from the
+triplestore. To do so, use this route:
+
HTTP POST to http://host/v2/resources/erase
+
+
The request body is the same as for Deleting a Resource, except that knora-api:deleteComment
+is not relevant and will be ignored.
+
To do this, a user must be a system administrator or an administrator of the project containing the resource. The user's
+permissions on the resource are not otherwise checked.
+
A resource cannot be erased if any other resource has a link to it. Any such links must first be changed or marked as
+deleted (see Updating a Value and
+Deleting a Value). Then, when the resource is erased, the deleted link values that
+referred to it will also be erased.
+
This operation cannot be undone (except by restoring the repository from a backup), so use it with care.
To create a value in an existing resource, use this route:
+
HTTP POST to http://host/v2/values
+
+
The body of the request is a JSON-LD document in the
+complex API schema, specifying the resource's IRI and type, the resource property, and the
+content of the value. The representation of the value is the same as when it is returned in a GET request, except that
+its IRI and knora-api:attachedToUser are not given. For example, to create an integer value:
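A sketch of such a request body, using the resource IRI and ontology prefixes that appear in the other examples in this document:

```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "anything:hasInteger" : {
    "@type" : "knora-api:IntValue",
    "knora-api:intValueAsInt" : 4
  },
  "@context" : {
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
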
Each value can have an optional custom IRI (of Knora IRI form) specified by the @id
+attribute, a custom creation date specified by adding knora-api:valueCreationDate (an
+xsd:dateTimeStamp), or a custom UUID given by
+knora-api:valueHasUUID. Each custom UUID must be base64url-encoded, without padding. If a custom
+UUID is provided, it will be used in the value's IRI. If a custom IRI is given for the value, its UUID should match the
+given custom UUID. If a custom IRI is provided but no custom UUID, the UUID given in the IRI will be
+assigned to knora-api:valueHasUUID. A custom value IRI must be the IRI of the containing resource, followed by
+/values/ and a custom ID string. For example:
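A sketch of a value-creation request with a custom value IRI, UUID, and creation date (the ID strings and timestamp are illustrative):

```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "anything:hasInteger" : {
    "@id" : "http://rdfh.ch/0001/a-thing/values/IN4R19yYR0ygi3K2VEHpUQ",
    "@type" : "knora-api:IntValue",
    "knora-api:intValueAsInt" : 21,
    "knora-api:valueHasUUID" : "IN4R19yYR0ygi3K2VEHpUQ",
    "knora-api:valueCreationDate" : {
      "@type" : "xsd:dateTimeStamp",
      "@value" : "2020-06-04T12:58:54.502951Z"
    }
  },
  "@context" : {
    "xsd" : "http://www.w3.org/2001/XMLSchema#",
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
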
To create a value, the user must have modify permission on the containing resource.
+
The response is a JSON-LD document containing:
+
+
@id: the IRI of the value that was created.
+
@type: the value's type.
+
knora-api:valueHasUUID, the value's UUID, which remains stable across value versions
+ (except for link values, as explained below).
+
+
Creating a Link Between Resources
+
To create a link, you must create a knora-api:LinkValue, which represents metadata about the link. The property that
+connects the resource to the LinkValue is a link value property, whose name is constructed by adding Value to the
+name of the link property (see
+Links Between Resources). The triple representing the
+direct link between the resources is created automatically. For example, if the link property that should connect the
+resources is anything:hasOtherThing, we can create a link like this:
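A sketch of such a link-creation request, using the link value property anything:hasOtherThingValue and the target resource IRI that appears elsewhere in this document:

```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "anything:hasOtherThingValue" : {
    "@type" : "knora-api:LinkValue",
    "knora-api:linkValueHasTargetIri" : {
      "@id" : "http://rdfh.ch/0001/another-thing"
    }
  },
  "@context" : {
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
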
As with ordinary values, permissions on links can be specified by adding knora-api:hasPermissions.
+
The response is a JSON-LD document containing:
+
+
@id: the IRI of the value that was created.
+
@type: the value's type.
+
knora-api:valueHasUUID, the value's UUID, which remains stable across value versions, unless the link is changed to
+ point to a different resource, in which case it is considered a new link and gets a new UUID. Changing a link's
+ metadata, without changing its target, creates a new version of the link value with the same UUID.
+
+
Creating a Text Value Without Standoff Markup
+
Use the predicate knora-api:valueAsString of knora-api:TextValue:
+
```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "anything:hasText" : {
    "@type" : "knora-api:TextValue",
    "knora-api:valueAsString" : "This is a text without markup."
  },
  "@context" : {
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
+
+
Creating a Text Value with Standoff Markup
+
Currently, the only way to create a text value with standoff markup
+is to submit it in XML format using an XML-to-standoff mapping.
+See here for more details.
+
Creating a Text Value with Standard Mapping
+
To create a value with the standard mapping (http://rdfh.ch/standoff/mappings/StandardMapping), we can make an XML
+document like this:
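For instance, the XML document that appears in escaped form in the JSON-LD example below:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<text>
 This text links to another <a class="salsah-link" href="http://rdfh.ch/0001/another-thing">resource</a>.
</text>
```
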
This document can then be embedded in a JSON-LD request, using the predicate knora-api:textValueAsXml:
+
```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "anything:hasText" : {
    "@type" : "knora-api:TextValue",
    "knora-api:textValueAsXml" : "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<text>\n This text links to another <a class=\"salsah-link\" href=\"http://rdfh.ch/0001/another-thing\">resource</a>.\n</text>",
    "knora-api:textValueHasMapping" : {
      "@id" : "http://rdfh.ch/standoff/mappings/StandardMapping"
    }
  },
  "@context" : {
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
+
+
Note that quotation marks and line breaks in the XML must be escaped, and that the IRI of the mapping must be provided.
+
Creating a Text Value with a Custom Mapping
+
To create a text value with a custom mapping, the following steps are required:
+
+
Optionally, an XSL transformation resource (kb:XSLTransformation) can be created that may be defined as the default
+ transformation of the mapping.
+
The mapping resource (kb:XMLToStandoffMapping) must be created, if it does not already exist.
+
The text value can be created as in the example above, using the mapping resource IRI in kb:textValueHasMapping.
+
+
The kb:XSLTransformation resource is a subclass of kb:TextRepresentation, so it has a kb:hasTextFileValue pointing
+to a kb:TextFileValue
+which represents the XSLT file stored in SIPI. For more details, see Creating File Values.
+
The kb:XMLToStandoffMapping resource requires the mapping XML as
+specified here.
If an XSL transformation has been defined, the IRI of the transformation can be placed in the <defaultXSLTransformation>
+tag of the mapping XML.
+
If a default XSL transformation has been defined for the mapping, then requesting the text value will return both the
+kb:textValueAsXml and the kb:textValueAsHtml properties,
+where the XML can be used for editing the value, while the HTML can be used to display it.
+If no default transformation has been defined, only kb:textValueAsXml is returned.
+
Creating File Values
+
DSP-API supports the storage of certain types of data as files, using SIPI
+(see FileValue). DSP-API v2 currently supports using SIPI to store
+the following types of files:
+
+
Images: JPEG, JPEG2000, TIFF, or PNG which are stored internally as JPEG2000
+
Documents: PDF
+
Audio: MPEG or Waveform audio file format (.wav, .x-wav, .vnd.wave)
+
+
Support for other types of files will be added in the future.
+
The following sections describe the steps for creating a file value.
+
Files can be ingested into DSP using SIPI or DSP-INGEST (experimental).
+
Upload Files to SIPI
+
The first step is to upload one or more files to SIPI, using a multipart/form-data request, where sipihost
+represents the host and port on which SIPI is running:
+
HTTP POST to http://sipihost/upload?token=TOKEN
+
+
The token parameter must provide the JSON Web Token that DSP-API returned when the client logged in.
+Each body part in the request must contain a parameter filename, providing the file's original filename, which both
+DSP-API and SIPI will store; these filenames can be descriptive and need not be unique.
+
SIPI stores the file in a temporary location. If the file is an image, it is converted first to JPEG2000 format, and the
+converted file is stored.
+
SIPI then returns a JSON response that looks something like this:
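A sketch of such a response for two uploaded files (the host and all filenames are illustrative):

```json
{
  "uploadedFiles": [
    {
      "originalFilename": "manuscript-1234-page-1.tiff",
      "internalFilename": "3UIsXH9bP0j-BV0D4sN51Xz.jp2",
      "temporaryBaseIIIFUrl": "http://sipihost/tmp/3UIsXH9bP0j-BV0D4sN51Xz.jp2"
    },
    {
      "originalFilename": "manuscript-1234-page-2.tiff",
      "internalFilename": "2RvJg0QglJe-B45EOk96Gc9.jp2",
      "temporaryBaseIIIFUrl": "http://sipihost/tmp/2RvJg0QglJe-B45EOk96Gc9.jp2"
    }
  ]
}
```
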
In this example, we uploaded two files to SIPI, so uploadedFiles is an array with two elements. For each file, we
+have:
+
+
the originalFilename, which we submitted when uploading the file
+
the unique internalFilename that SIPI has randomly generated for the file
+
the temporaryBaseIIIFUrl, which we can use to construct an IIIF URL for previewing the file
+
+
In the case of an image file, the client may now wish to get a thumbnail of each uploaded image, to allow the user to
+confirm that the correct files have been uploaded. This can be done by adding IIIF parameters to temporaryBaseIIIFUrl.
+For example, to get a JPG thumbnail image that is 150 pixels wide, you would add /full/150,/0/default.jpg.
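A sketch of constructing such a preview URL; the base URL stands in for an illustrative SIPI response value:

```python
# temporaryBaseIIIFUrl as returned by SIPI (illustrative value)
temporary_base_iiif_url = "http://sipihost/tmp/3UIsXH9bP0j-BV0D4sN51Xz.jp2"

# IIIF Image API parameters: {region}/{size}/{rotation}/{quality}.{format}.
# The size "150," scales the image to a width of 150 pixels, preserving aspect ratio.
thumbnail_url = temporary_base_iiif_url + "/full/150,/0/default.jpg"
```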
+
Upload Files to DSP-INGEST
+
Support for DSP-INGEST is at an early stage and currently mainly intended for ingesting large amounts of data.
+When a file has been ingested through DSP-INGEST,
+it is necessary to send the header X-Asset-Ingested
+along with the request to create the file value resource in DSP-API.
+
Submit A File Value to DSP-API
+
A DSP-API Representation (i.e. a resource containing information about a file) must always have exactly one file value
+attached to it (see Representations). Therefore, a request
+to create a new file value must always be submitted as part of a request to create a new resource (see
+Creating a Resource). You can also update a file value in an existing
+Representation; see Updating a Value.
+
Instead of providing the file's complete metadata to DSP-API, you just provide the unique internal filename generated by
+SIPI.
+
Still Images
+
Still images may be stored in SIPI or in an external IIIF server.
+
Images stored in SIPI
+
Here is an example of a request to create a resource of class anything:ThingPicture with a still image stored in SIPI.
+The resource's class is a subclass of knora-api:StillImageRepresentation and therefore has the property knora-api:hasStillImageFileValue.
+The file value is of type knora-api:StillImageFileValue:
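A sketch of such a creation request, where knora-api:fileValueHasFilename carries the internal filename returned by SIPI (the label, project IRI, and filename are illustrative):

```jsonld
{
  "@type" : "anything:ThingPicture",
  "rdfs:label" : "test picture",
  "knora-api:attachedToProject" : {
    "@id" : "http://rdfh.ch/projects/0001"
  },
  "knora-api:hasStillImageFileValue" : {
    "@type" : "knora-api:StillImageFileValue",
    "knora-api:fileValueHasFilename" : "3UIsXH9bP0j-BV0D4sN51Xz.jp2"
  },
  "@context" : {
    "rdfs" : "http://www.w3.org/2000/01/rdf-schema#",
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
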
In the case of a knora-api:StillImageFileValue, DSP-API gets the rest of the file's metadata from SIPI.
+If the client's request to DSP-API is valid, DSP-API saves the file value in the triplestore and instructs SIPI to move the file to permanent storage.
+Otherwise, the temporary file that was stored by SIPI is deleted.
+
Images stored in an external IIIF server
+
In the case of a still image stored in an external IIIF server, the request is similar to the one above, but the file value is of type knora-api:StillImageExternalFileValue
+and the knora-api:stillImageFileValueHasExternalUrl property is used to provide the URL of the image in the IIIF server:
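A sketch of such a request, using knora-api:stillImageFileValueHasExternalUrl with an xsd:anyURI value as described in the note below (the label, project IRI, and external URL are illustrative):

```jsonld
{
  "@type" : "anything:ThingPicture",
  "rdfs:label" : "test picture",
  "knora-api:attachedToProject" : {
    "@id" : "http://rdfh.ch/projects/0001"
  },
  "knora-api:hasStillImageFileValue" : {
    "@type" : "knora-api:StillImageExternalFileValue",
    "knora-api:stillImageFileValueHasExternalUrl" : {
      "@type" : "xsd:anyURI",
      "@value" : "https://example.com/iiif/3UIsXH9bP0j-BV0D4sN51Xz.jp2/full/max/0/default.jpg"
    }
  },
  "@context" : {
    "rdfs" : "http://www.w3.org/2000/01/rdf-schema#",
    "xsd" : "http://www.w3.org/2001/XMLSchema#",
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
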
Note: For backwards compatibility, we support using knora-api:fileValueHasExternalUrl and
+knora-api:stillImageFileValueExternalFileValue properties if the value is submitted as a
+String literal, i.e. "knora-api:stillImageFileValueExternalFileValue" : "https://example.com/iiif/3UIsXH9bP0j-BV0D4sN51Xz.jp2/full/max/0/default.jpg".
+Support for String literals and the knora-api:fileValueHasExternalUrl property is
+deprecated and will be removed in the future.
+The knora-api:stillImageFileValueHasExternalUrl property with a xsd:anyURI type is
+correct and must be used for reading and writing.
+
PDF Documents
+
If you're submitting a PDF document, use the resource class knora-api:DocumentRepresentation, which has the property
+knora-api:hasDocumentFileValue, pointing to a knora-api:DocumentFileValue.
+
Text Files
+
For a text file, use knora-api:TextRepresentation, which has the property knora-api:hasTextFileValue, pointing to a
+knora-api:TextFileValue.
+
Archive Files
+
For an archive like zip, use knora-api:ArchiveRepresentation, which has the property knora-api:hasArchiveFileValue,
+pointing to a knora-api:ArchiveFileValue.
+
Updating a Value
+
To update a value, use this route:
+
HTTP PUT to http://host/v2/values
+
+
Updating a value means creating a new version of an existing value. The new version will have a different IRI. The
+request is the same as for creating a value, except that the @id of the current value version is given. For example,
+to update an integer value:
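A sketch of such an update request; the value's @id identifies the current version to be replaced (the value IRI is illustrative):

```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "anything:hasInteger" : {
    "@id" : "http://rdfh.ch/0001/a-thing/values/vp96riPIRnmQcbMhgpv_Rg",
    "@type" : "knora-api:IntValue",
    "knora-api:intValueAsInt" : 5
  },
  "@context" : {
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
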
The value can be given a comment by using knora-api:valueHasComment. To change only the comment of a value, you can
+resubmit the existing value with the updated comment.
+
Permissions can be specified by adding knora-api:hasPermissions. Otherwise, the new version has the same permissions
+as the previous one. To change the permissions on a value, the user must have change rights permission on the value.
+
To update only the permissions on a value, submit it with the new permissions and with its @id and @type but without
+any other content, like this:
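A sketch of a permissions-only update (the value IRI and permission string are illustrative):

```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "anything:hasInteger" : {
    "@id" : "http://rdfh.ch/0001/a-thing/values/vp96riPIRnmQcbMhgpv_Rg",
    "@type" : "knora-api:IntValue",
    "knora-api:hasPermissions" : "CR knora-admin:Creator|V knora-admin:KnownUser"
  },
  "@context" : {
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
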
A custom value IRI must be the IRI of the containing resource, followed by /values/ and a custom ID string.
+
The response is a JSON-LD document containing only @id and @type, returning the IRI and type of the new value
+version.
+
If you submit an outdated value ID in a request to update a value, the response will be an HTTP 404 (Not Found) error.
+
The response to a value update request contains:
+
+
@id: the IRI of the value that was created.
+
@type: the value's type.
+
knora-api:valueHasUUID, the value's UUID, which remains stable across value versions, unless the value is a link
+ value and is changed to point to a different resource, in which case it is considered a new link and gets a new UUID.
+
+
Deleting a Value
+
DSP-API does not normally delete values; instead, it marks them as deleted, which means that they do not appear in normal
+query results.
+
To mark a value as deleted, use this route:
+
HTTP POST to http://host/v2/values/delete
+
+
The request must include the resource's ID and type, the property that points from the resource to the value, and the
+value's ID and type. For example:
+
```jsonld
{
  "@id" : "http://rdfh.ch/0001/a-thing",
  "@type" : "anything:Thing",
  "anything:hasInteger" : {
    "@id" : "http://rdfh.ch/0001/a-thing/values/vp96riPIRnmQcbMhgpv_Rg",
    "@type" : "knora-api:IntValue",
    "knora-api:deleteComment" : "This value was created by mistake."
  },
  "@context" : {
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "anything" : "http://0.0.0.0:3333/ontology/0001/anything/v2#"
  }
}
```
+
+
The optional property knora-api:deleteComment specifies a comment to be attached to the value, explaining why it has
+been marked as deleted.
+
The optional property knora-api:deleteDate (an
+xsd:dateTimeStamp)
+specifies a custom timestamp indicating when the value was deleted. If not specified, the current time is used.
+
The response is a JSON-LD document containing the predicate knora-api:result with a confirmation message.
+
Requesting Deleted Values
+
Values marked as deleted are not found in search queries. However, when requesting a resource that has deleted values,
+these will show up as generic knora-api:DeletedValue values. Such a value is similar to the deleted value, having e.g.
+the same IRI, and contains the deletion date and optionally the deletion comment.
+
The response to requesting a resource with deleted values looks like the following example:
Every request to API v2 includes v2 as a path segment, e.g.
+http://host/v2/resources/http%3A%2F%2Frdfh.ch%2Fc5058f3a.
+Accordingly, requests using any other version of the API will require
+another path segment.
Our preferred format for data exchange is
+JSON-LD. JSON-LD allows the
+DSP-API server to provide responses that are relatively easy for
+automated processes to interpret, since their structure and semantics are
+explicitly defined. For example, each user-created Knora resource
+property is identified by an IRI, which can be dereferenced to get more
+information about it (e.g. its label in different languages). Moreover,
+each value has a type represented by an IRI. These are either standard
+RDF types (e.g. XSD datatypes) or more complex types whose IRIs can be
+dereferenced to get more information about their structure.
+
At the same time, JSON-LD responses are relatively easy for software
+developers to work with, and are more concise and easier to read than
+the equivalent XML. Items in a response can have human-readable names,
+which can nevertheless be expanded to full IRIs. Also, while a format such as
+Turtle just provides a
+set of RDF triples, an equivalent JSON-LD response can explicitly
+provide data in a hierarchical structure, with objects nested inside
+other objects.
+
Hierarchical vs. Flat JSON-LD
+
The client can choose between hierarchical and flat JSON-LD. In hierarchical
+JSON-LD, entities with IRIs are inlined (nested) where they are used. If the
+same entity is used in more than one place, it is inlined only once, and other
+uses just refer to its IRI. In Knora's flat JSON-LD, all entities with IRIs are located
+at the top level of the document (in a @graph if there is more than one of them).
+This setting does not affect blank nodes, which are always inlined (unlike in standard
+flat JSON-LD). DSP ontologies are always returned in the flat rendering; other kinds
+of responses default to hierarchical. To use this setting, submit the HTTP header
+X-Knora-JSON-LD-Rendering with the value hierarchical or flat.
+
Knora IRIs
+
Resources and entities are identified by IRIs. The format of these IRIs
+is explained in Knora IRIs.
+
API Schema
+
DSP-API v2 uses RDF data structures that are simpler than the ones
+actually stored in the triplestore, and more suitable for the development
+of client software. Thus we refer to the internal schema of data
+as it is stored in the triplestore, and to external schemas which
+are used to represent that data in API v2.
+
DSP-API v2 offers a complex schema and a simple one. The main difference
+is that the complex schema exposes the complexity of value objects, while
+the simple version does not. A client that needs to edit values must use the
+complex schema in order to obtain the IRI of each value. A client that reads
+but does not update data can use the simplified schema. The simple schema is
+mainly intended to facilitate interoperability with other RDF-based systems in the
+context of Linked Open Data. It is therefore designed to use the
+simplest possible datatypes and to require minimal knowledge of Knora.
+
In either case, the client deals only with data whose structure and
+semantics are defined by external DSP-API ontologies, which are distinct from
+the internal ontologies that are used to store data in the triplestore. The Knora
+API server automatically converts back and forth between these internal
+and external representations. This approach encapsulates the internals
+and adds a layer of abstraction to them.
+
IRIs representing ontologies and ontology entities are different in different
+schemas; see Knora IRIs.
+
Some API operations inherently determine the schema of the response. For
+example, if an ontology is requested using an IRI
+indicating the simple schema, the ontology will be returned in the simple schema (see
+Querying, Creating, and Updating Ontologies).
+
Other API operations can return data in either schema. In this case, the
+complex schema is used by default in the response, unless the request specifically
+asks for the simple schema. The client can specify the desired schema by using
+an HTTP header or a URL parameter:
+
+
the HTTP header X-Knora-Accept-Schema
+
the URL parameter schema
+
+
Both the HTTP header and the URL parameter accept the values simple or
+complex.
The IRIs used in Knora repositories and in the DSP-API v2 follow
+certain conventions.
+
Project Short-Codes
+
A project short-code is a hexadecimal number of at least four digits,
+assigned by the DaSCH to uniquely identify a
+Knora project regardless of where it is hosted. The IRIs of ontologies that
+are built into Knora do not contain shortcodes; these ontologies implicitly
+belong to the Knora system project.
+
A user-created ontology IRI must always include its project shortcode.
+
Project ID 0000 is reserved for shared ontologies
+(see Shared Ontologies).
+
The range of project IDs from 0001 to 00FF inclusive is reserved for
+local testing. Thus, the first useful project will be 0100.
+
In the beginning, Unil will use the IDs 0100 to 07FF, and Unibas
+0800 to 08FF.
+
IRIs for Ontologies and Ontology Entities
+
Internal Ontology IRIs
+
Knora makes a distinction between internal and external ontologies. Internal
+ontologies are used in the triplestore, while external ontologies are used in
+API v2. For each internal ontology, there is a corresponding external ontology. Some
+internal ontologies are built into Knora, while others are
+user-created. Knora automatically generates external
+ontologies based on user-created internal ontologies.
+
Each internal ontology has an IRI, which is also the IRI of the named
+graph that contains the ontology in the triplestore. An internal
+ontology IRI has the form:
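Based on the example that follows, the pattern is:

```
http://www.knora.org/ontology/PROJECT_SHORTCODE/ONTOLOGY_NAME
```
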
For example, the internal ontology IRI based on project code 0001 and ontology
+name example would be:
+
http://www.knora.org/ontology/0001/example
+
+
An ontology name must be a valid XML
+NCName and must be URL safe.
+The following names are reserved for built-in internal DSP ontologies:
+
+
knora-base
+
standoff
+
salsah-gui
+
+
Names starting with knora are reserved for future built-in Knora
+ontologies. A user-created ontology name may not start with the
+letter v followed by a digit, and may not contain these reserved
+words:
+
+
knora
+
ontology
+
simple
+
shared
+
+
External Ontology IRIs
+
Unlike internal ontology IRIs, external ontology IRIs are meant to be
+dereferenced as URLs. When an ontology IRI is dereferenced, the ontology
+itself can be served either in a machine-readable format or as
+human-readable documentation.
+
The IRI of an external Knora ontology has the form:
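Based on the entity IRIs shown later in this section, the pattern is (with HOST, PORT, and the API version identifier as described below):

```
http://HOST[:PORT]/ontology/PROJECT_SHORTCODE/ONTOLOGY_NAME/API_VERSION
```
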
For built-in and shared ontologies, the host is always api.knora.org. Otherwise,
+the hostname and port configured in application.conf under
+app.http.knora-api.host and app.http.knora-api.http-port are used
+(the port is omitted if it is 80).
+
This means that when a built-in or shared external ontology IRI is dereferenced,
+the ontology can be served by a DSP-API server running at
+api.knora.org. When the external IRI of a non-shared, project-specific ontology is
+dereferenced, the ontology can be served by the DSP-API server that
+hosts the project. During development and testing, this could be
+localhost.
+
The name of an external ontology is the same as the name of the
+corresponding internal ontology, with one exception: the external form
+of knora-base is called knora-api.
+
The API version identifier indicates not only the version of the API,
+but also an API 'schema'. The DSP-API v2 is available in two schemas:
+
+
A complex schema, which is suitable both for reading and for editing
+ data. The complex schema represents values primarily as complex
+ objects. Its version identifier is v2.
+
A simple schema, which is suitable for reading data but not for
+ editing it. The simple schema facilitates interoperability between
+ DSP ontologies and non-DSP ontologies, since it represents
+ values primarily as literals. Its version identifier is simple/v2.
+
+
Other schemas could be added in the future for more specific use cases.
+
When requesting an ontology, the client requests a particular schema.
+(This will also be true of most DSP-API v2 requests: the client will
+be able to specify which schema the response should be provided in.)
+
For example, suppose a DSP-API server is running at
+knora.example.org and hosts an ontology whose internal IRI is
+http://www.knora.org/ontology/0001/example. That ontology can then be
+requested using either of these IRIs:
+
+
http://knora.example.org/ontology/0001/example/v2 (in the complex schema)
+
http://knora.example.org/ontology/0001/example/simple/v2 (in the simple schema)
+
+
While the internal example ontology refers to definitions in
+knora-base, the external example ontology that is served by the API
+refers instead to a knora-api ontology, whose IRI depends on the
+schema being used:
+
+
http://api.knora.org/ontology/knora-api/v2 (in the complex schema)
+
http://api.knora.org/ontology/knora-api/simple/v2 (in the simple schema)
+
+
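The internal-to-external IRI mapping in this example can be sketched as a small helper. This is a hypothetical illustration under the assumptions stated above; it does not handle the knora-base → knora-api renaming exception.

```python
def external_ontology_iri(internal_iri: str, host: str,
                          schema: str = "complex") -> str:
    """Map an internal project ontology IRI to its external API v2 form.

    'host' is the configured hostname (with port, if not 80).
    """
    prefix = "http://www.knora.org/ontology/"
    if not internal_iri.startswith(prefix):
        raise ValueError(f"not an internal ontology IRI: {internal_iri}")
    path = internal_iri[len(prefix):]            # e.g. "0001/example"
    version = "v2" if schema == "complex" else "simple/v2"
    return f"http://{host}/ontology/{path}/{version}"
```

With the example above, `external_ontology_iri("http://www.knora.org/ontology/0001/example", "knora.example.org")` yields the complex-schema IRI, and passing `schema="simple"` yields the simple-schema one.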
Ontology Entity IRIs
+
DSP ontologies use 'hash namespaces' (see URI
+Namespaces).
+This means that the IRI of an ontology entity (a class or property
+definition) is constructed by adding a hash character (#) to the
+ontology IRI, followed by the name of the entity. In Knora, an entity
+name must be a valid XML
+NCName.
+Thus, if there is a class called ExampleThing in an ontology whose
+internal IRI is http://www.knora.org/ontology/0001/example, that class
+has the following IRIs:
+
+
http://www.knora.org/ontology/0001/example#ExampleThing (in the internal ontology)
+
http://HOST[:PORT]/ontology/0001/example/v2#ExampleThing (in the API v2 complex schema)
+
http://HOST[:PORT]/ontology/0001/example/simple/v2#ExampleThing (in the API v2 simple schema)
+
+
Shared Ontology IRIs
+
As explained in Shared Ontologies,
+a user-created ontology can be defined as shared, meaning that it can be used by
+multiple projects, and that its creators will not change it in ways that could
+affect other ontologies or data that are based on it.
+
There is currently one project for shared ontologies:
Its project code is 0000. Additional projects for shared ontologies may be supported
+in the future.
+
The internal and external IRIs of shared ontologies always use the hostname
+api.knora.org, and have an additional segment, shared, after ontology.
+The project code can be omitted, in which case the default shared ontology
+project, 0000, is assumed. The sample shared ontology, example-box, has these IRIs:
Knora generates IRIs for data that it creates in the triplestore. Each
+generated data IRI contains one or more UUID
+identifiers to make it unique. To keep data IRIs relatively short, each UUID is
+base64url-encoded, without padding;
+thus each UUID is a 22-character string. DSP-API supports UUID version 4 or 5.
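The encoding step described above can be reproduced with the standard library. This is a sketch of the encoding only; the surrounding IRI templates that Knora uses are a separate matter.

```python
import base64
import uuid

def encoded_uuid() -> str:
    """Encode a random version-4 UUID as unpadded base64url.

    16 bytes encode to 24 base64 characters with two '=' padding
    characters, so the unpadded result is always 22 characters long.
    """
    raw = uuid.uuid4().bytes
    return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")
```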
+
Data IRIs are not currently intended to be dereferenced as URLs.
+Instead, each Knora resource has a separate permalink.
+
A Knora value does not have a stable IRI throughout its version history.
+Each time a new version of a value is made, the new version gets a new IRI.
+Therefore, it would not make sense to publish Knora value IRIs. When designing
+ontologies for Knora projects, keep in mind that anything you want to be directly
+citable needs to be a resource, not a value.
+
The formats of generated data IRIs for different types of objects are as
+follows:
The response format uses prefixes to shorten IRIs, making them more
+human-readable. A client may wish to convert these to full IRIs for
+processing. This can be done with responses in JSON-LD by using a library
+that implements the JSON-LD API
+to compact the document with an empty JSON-LD @context.
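For illustration, the prefix-expansion part of that conversion can be sketched in a few lines. A real client should use a full JSON-LD library, since the complete compaction algorithm handles much more than this; the function name here is our own.

```python
def expand_prefixed_iri(value: str, context: dict) -> str:
    """Expand a prefixed name (e.g. 'knora-api:Resource') against a
    JSON-LD @context that maps prefixes to namespace IRIs.

    Simplified sketch: ignores keyword aliases, term definitions,
    and nested contexts.
    """
    prefix, sep, local = value.partition(":")
    if sep and prefix in context and isinstance(context[prefix], str):
        return context[prefix] + local
    return value
```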
+
Querying Ontology Metadata
+
Requests for ontology metadata can return information about more than one
+ontology, unlike other requests for ontology information. To get metadata
+about all ontologies:
+
HTTP GET to http://host/v2/ontologies/metadata
+
+
If you submit a project IRI in the X-Knora-Accept-Project header, only the
+ontologies for that project will be returned.
+
The response is in the complex API v2 schema. Sample response:
An ontology can be queried either by using an API route directly or by
+simply dereferencing the ontology IRI. The API route is as follows:
+
HTTP GET to http://host/v2/ontologies/allentities/ONTOLOGY_IRI
+
+
The ontology IRI must be URL-encoded, and may be in either the complex
+or the simple schema. The response will be in the same schema. For
+example, if the server is running on 0.0.0.0:3333, you can request
+the knora-api ontology in the complex schema as follows:
+
HTTP GET to http://0.0.0.0:3333/v2/ontologies/allentities/http%3A%2F%2Fapi.knora.org%2Fontology%2Fknora-api%2Fv2
+
+
By default, this returns the ontology in JSON-LD; to request Turtle
+or RDF/XML, add an HTTP Accept header
+(see Response Formats).
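The URL-encoding step can be done with the standard library; the following reproduces the request URL shown above.

```python
from urllib.parse import quote

ontology_iri = "http://api.knora.org/ontology/knora-api/v2"

# safe="" forces percent-encoding of ':' and '/' as well.
encoded = quote(ontology_iri, safe="")
request_url = f"http://0.0.0.0:3333/v2/ontologies/allentities/{encoded}"
```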
+
If the client dereferences a project-specific ontology IRI as a URL, the
+DSP-API server running on the hostname in the IRI will serve the
+ontology. For example, if the server is running on 0.0.0.0:3333, the
+IRI http://0.0.0.0:3333/ontology/00FF/images/simple/v2 can be
+dereferenced to request the images sample ontology in the simple
+schema.
+
If the client dereferences a built-in Knora ontology, such as
+http://api.knora.org/ontology/knora-api/simple/v2, there must be a
+DSP-API server running at api.knora.org that can serve the ontology.
+The DaSCH intends to run such a server. For
+testing, you can configure your local /etc/hosts file to resolve
+api.knora.org as localhost.
+
Differences Between Internal and External Ontologies
+
The external ontologies used by DSP-API v2 are different from the internal
+ontologies that are actually stored in the triplestore (see
+API Schema). In general, the external
+ontologies use simpler data structures, but they also provide additional
+information to make it easier for clients to use them. This is illustrated
+in the examples in the next sections.
+
The internal predicates knora-base:subjectClassConstraint and
+knora-base:objectClassConstraint (see
+Constraints on the Types of Property Subjects and Objects)
+are represented as knora-api:subjectType and knora-api:objectType in
+external ontologies.
+
JSON-LD Representation of an Ontology in the Simple Schema
+
The simple schema is suitable for client applications that need to read
+but not update data in Knora. For example, here is the response for the
+images sample ontology in the simple schema,
+http://0.0.0.0:3333/ontology/00FF/images/simple/v2 (simplified for
+clarity):
+
{
+"@id":"http://0.0.0.0:3333/ontology/00FF/images/simple/v2",
+"@type":"owl:Ontology",
+"rdfs:label":"The images demo ontology",
+"@graph":[{
+"@id":"images:bild",
+"@type":"owl:Class",
+"knora-api:resourceIcon":"bild.png",
+"rdfs:comment":"An image of the demo image collection",
+"rdfs:label":"Image",
+"rdfs:subClassOf":[{
+"@id":"knora-api:StillImageRepresentation"
+},{
+"@type":"owl:Restriction",
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:creationDate"
+}
+},{
+"@type":"owl:Restriction",
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasIncomingLink"
+}
+},{
+"@type":"owl:Restriction",
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasStandoffLinkTo"
+}
+},{
+"@type":"owl:Restriction",
+"owl:minCardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:hasStillImageFile"
+}
+},{
+"@type":"owl:Restriction",
+"owl:maxCardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:lastModificationDate"
+}
+},{
+"@type":"owl:Restriction",
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"rdfs:label"
+}
+},{
+"@type":"owl:Restriction",
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"images:description"
+}
+},{
+"@type":"owl:Restriction",
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"images:erfassungsdatum"
+}
+},{
+"@type":"owl:Restriction",
+"owl:maxCardinality":1,
+"owl:onProperty":{
+"@id":"images:urheber"
+}
+}]
+},{
+"@id":"images:description",
+"@type":"owl:DatatypeProperty",
+"knora-api:objectType":{
+"@id":"xsd:string"
+},
+"knora-api:subjectType":{
+"@id":"images:bild"
+},
+"rdfs:label":"Description",
+"rdfs:subPropertyOf":[{
+"@id":"knora-api:hasValue"
+},{
+"@id":"http://purl.org/dc/terms/description"
+}]
+},{
+"@id":"images:erfassungsdatum",
+"@type":"owl:DatatypeProperty",
+"knora-api:objectType":{
+"@id":"knora-api:Date"
+},
+"knora-api:subjectType":{
+"@id":"images:bild"
+},
+"rdfs:label":"Date of acquisition",
+"rdfs:subPropertyOf":[{
+"@id":"knora-api:hasValue"
+},{
+"@id":"http://purl.org/dc/terms/date"
+}]
+},{
+"@id":"images:firstname",
+"@type":"owl:DatatypeProperty",
+"knora-api:objectType":{
+"@id":"xsd:string"
+},
+"knora-api:subjectType":{
+"@id":"images:person"
+},
+"rdfs:comment":"First name of a person",
+"rdfs:label":"First name",
+"rdfs:subPropertyOf":{
+"@id":"knora-api:hasValue"
+}
+},{
+"@id":"images:lastname",
+"@type":"owl:DatatypeProperty",
+"knora-api:objectType":{
+"@id":"xsd:string"
+},
+"knora-api:subjectType":{
+"@id":"images:person"
+},
+"rdfs:comment":"Last name of a person",
+"rdfs:label":"Name",
+"rdfs:subPropertyOf":{
+"@id":"knora-api:hasValue"
+}
+},{
+"@id":"images:person",
+"@type":"owl:Class",
+"knora-api:resourceIcon":"person.png",
+"rdfs:comment":"Person",
+"rdfs:label":"Person",
+"rdfs:subClassOf":[{
+"@id":"knora-api:Resource"
+},{
+"@type":"owl:Restriction",
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:creationDate"
+}
+},{
+"@type":"owl:Restriction",
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasIncomingLink"
+}
+},{
+"@type":"owl:Restriction",
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasStandoffLinkTo"
+}
+},{
+"@type":"owl:Restriction",
+"owl:maxCardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:lastModificationDate"
+}
+},{
+"@type":"owl:Restriction",
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"rdfs:label"
+}
+},{
+"@type":"owl:Restriction",
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"images:lastname"
+}
+},{
+"@type":"owl:Restriction",
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"images:firstname"
+}
+}]
+},{
+"@id":"images:urheber",
+"@type":"owl:ObjectProperty",
+"knora-api:objectType":{
+"@id":"images:person"
+},
+"knora-api:subjectType":{
+"@id":"images:bild"
+},
+"rdfs:comment":"An entity primarily responsible for making the resource. Examples of a Creator include a person, an organization, or a service. Typically, the name of a Creator should be used to indicate the entity.",
+"rdfs:label":"Creator",
+"rdfs:subPropertyOf":{
+"@id":"knora-api:hasLinkTo"
+}
+}],
+"@context":{
+"rdf":"http://www.w3.org/1999/02/22-rdf-syntax-ns#",
+"images":"http://0.0.0.0:3333/ontology/00FF/images/simple/v2#",
+"knora-api":"http://api.knora.org/ontology/knora-api/simple/v2#",
+"owl":"http://www.w3.org/2002/07/owl#",
+"rdfs":"http://www.w3.org/2000/01/rdf-schema#",
+"xsd":"http://www.w3.org/2001/XMLSchema#"
+}
+}
+
+
The response format is an RDF graph. The top level object describes the ontology
+itself, providing its IRI (in the @id member) and its rdfs:label.
+The @graph member (see
+Named Graphs in the
+JSON-LD specification) contains an array of entities that belong to the
+ontology.
+
In a class definition, cardinalities for properties of the class are
+represented as in OWL, using objects of type owl:Restriction. The
+supported cardinalities are the ones indicated in
+OWL Cardinalities.
+
The class definitions include cardinalities that are directly defined on
+each class, as well as cardinalities inherited from base classes. For
+example, we can see cardinalities inherited from knora-api:Resource,
+such as knora-api:hasStandoffLinkTo and http://schema.org/name
+(which represents rdfs:label).
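A client can collect these cardinalities from a JSON-LD class definition. The following is a minimal sketch, assuming the response keys are still in the prefixed form shown above (i.e. the document has not been compacted against an empty @context); the function name is our own.

```python
def extract_cardinalities(class_def: dict) -> dict:
    """Map property IRIs to (predicate, value) pairs taken from the
    owl:Restriction objects in a class definition's rdfs:subClassOf."""
    cardinalities = {}
    entries = class_def.get("rdfs:subClassOf", [])
    if isinstance(entries, dict):        # single object rather than an array
        entries = [entries]
    for entry in entries:
        if entry.get("@type") != "owl:Restriction":
            continue                     # plain base-class reference
        prop = entry["owl:onProperty"]["@id"]
        for pred in ("owl:cardinality", "owl:minCardinality",
                     "owl:maxCardinality"):
            if pred in entry:
                cardinalities[prop] = (pred, entry[pred])
    return cardinalities
```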
+
In the simple schema, Knora value properties can be datatype properties.
+The knora-base:objectType of a Knora value property such as
+images:description is a literal datatype, in this case
+xsd:string. Moreover, images:description is a subproperty of
+the standard property dcterms:description, whose object can be a
+literal value. A client that understands rdfs:subPropertyOf, and is
+familiar with dcterms:description, can then work with
+images:description on the basis of its knowledge about
+dcterms:description.
+
By default, values for rdfs:label and rdfs:comment are returned only
+in the user's preferred language, or in the system default language. To
+obtain these values in all available languages, add the URL parameter
+?allLanguages=true. For example, with this parameter, the definition
+of images:description becomes:
To find out more about the knora-api entities used in the response,
+the client can request the knora-api ontology in the simple schema:
+http://api.knora.org/ontology/knora-api/simple/v2. For example,
+images:erfassungsdatum has a knora-api:objectType of
+knora-api:Date, which is a subtype of xsd:string with a
+Knora-specific, human-readable format. In the knora-api simple
+ontology, there is a definition of this type:
+
{
+"@id":"http://api.knora.org/ontology/knora-api/simple/v2",
+"@type":"owl:Ontology",
+"rdfs:label":"The knora-api ontology in the simple schema",
+"@graph":[{
+"@id":"knora-api:Date",
+"@type":"rdfs:Datatype",
+"rdfs:comment":"Represents a date as a period with different possible precisions.",
+"rdfs:label":"Date literal",
+"rdfs:subClassOf":{
+"@type":"rdfs:Datatype",
+"owl:onDatatype":{
+"@id":"xsd:string"
+},
+"owl:withRestrictions":{
+"xsd:pattern":"(GREGORIAN|JULIAN|ISLAMIC):\\d{1,4}(-\\d{1,2}(-\\d{1,2})?)?( BC| AD| BCE| CE)?(:\\d{1,4}(-\\d{1,2}(-\\d{1,2})?)?( BC| AD| BCE| CE)?)?"
+}
+}
+}],
+"@context":{
+"rdf":"http://www.w3.org/1999/02/22-rdf-syntax-ns#",
+"knora-api":"http://api.knora.org/ontology/knora-api/simple/v2#",
+"owl":"http://www.w3.org/2002/07/owl#",
+"rdfs":"http://www.w3.org/2000/01/rdf-schema#",
+"xsd":"http://www.w3.org/2001/XMLSchema#"
+}
+}
+
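The xsd:pattern in the definition above can be used directly to validate date literals on the client side. A sketch only; the server's own validation remains authoritative.

```python
import re

# The xsd:pattern from the knora-api:Date datatype definition above:
# a calendar, a start date with optional precision and era, and an
# optional end date for periods.
DATE_PATTERN = re.compile(
    r"(GREGORIAN|JULIAN|ISLAMIC):\d{1,4}(-\d{1,2}(-\d{1,2})?)?"
    r"( BC| AD| BCE| CE)?"
    r"(:\d{1,4}(-\d{1,2}(-\d{1,2})?)?( BC| AD| BCE| CE)?)?"
)

def is_knora_date(value: str) -> bool:
    """Check whether a string is a well-formed knora-api:Date literal."""
    return DATE_PATTERN.fullmatch(value) is not None
```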
+
JSON-LD Representation of an Ontology in the Complex Schema
+
The complex schema is suitable for client applications that need to
+update data in Knora. For example, here is the response for the images
+sample ontology in the complex schema, http://0.0.0.0:3333/ontology/00FF/images/v2
+(simplified for clarity):
+
{
+"@id":"http://0.0.0.0:3333/ontology/00FF/images/v2",
+"@type":"owl:Ontology",
+"knora-api:attachedToProject":{
+"@id":"http://rdfh.ch/projects/00FF"
+},
+"rdfs:label":"The images demo ontology",
+"@graph":[{
+"@id":"images:bild",
+"@type":"owl:Class",
+"knora-api:canBeInstantiated":true,
+"knora-api:isResourceClass":true,
+"knora-api:resourceIcon":"bild.png",
+"rdfs:comment":"An image of the demo image collection",
+"rdfs:label":"Image",
+"rdfs:subClassOf":[{
+"@id":"knora-api:StillImageRepresentation"
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:attachedToProject"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:attachedToUser"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:creationDate"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasIncomingLink"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:hasPermissions"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasStandoffLinkTo"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasStandoffLinkToValue"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:minCardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:hasStillImageFileValue"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:maxCardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:lastModificationDate"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"rdfs:label"
+}
+},{
+"@type":"owl:Restriction",
+"salsah-gui:guiOrder":3,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"images:description"
+}
+},{
+"@type":"owl:Restriction",
+"salsah-gui:guiOrder":8,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"images:erfassungsdatum"
+}
+},{
+"@type":"owl:Restriction",
+"salsah-gui:guiOrder":12,
+"owl:maxCardinality":1,
+"owl:onProperty":{
+"@id":"images:urheber"
+}
+},{
+"@type":"owl:Restriction",
+"salsah-gui:guiOrder":12,
+"owl:maxCardinality":1,
+"owl:onProperty":{
+"@id":"images:urheberValue"
+}
+}]
+},{
+"@id":"images:description",
+"@type":"owl:ObjectProperty",
+"knora-api:isEditable":true,
+"knora-api:isResourceProperty":true,
+"knora-api:objectType":{
+"@id":"knora-api:TextValue"
+},
+"knora-api:subjectType":{
+"@id":"images:bild"
+},
+"salsah-gui:guiAttribute":["rows=10","width=95%","wrap=soft"],
+"salsah-gui:guiElement":{
+"@id":"salsah-gui:Textarea"
+},
+"rdfs:label":"Description",
+"rdfs:subPropertyOf":[{
+"@id":"knora-api:hasValue"
+},{
+"@id":"http://purl.org/dc/terms/description"
+}]
+},{
+"@id":"images:erfassungsdatum",
+"@type":"owl:ObjectProperty",
+"knora-api:isEditable":true,
+"knora-api:isResourceProperty":true,
+"knora-api:objectType":{
+"@id":"knora-api:DateValue"
+},
+"knora-api:subjectType":{
+"@id":"images:bild"
+},
+"salsah-gui:guiElement":{
+"@id":"salsah-gui:Date"
+},
+"rdfs:label":"Date of acquisition",
+"rdfs:subPropertyOf":[{
+"@id":"knora-api:hasValue"
+},{
+"@id":"http://purl.org/dc/terms/date"
+}]
+},{
+"@id":"images:firstname",
+"@type":"owl:ObjectProperty",
+"knora-api:isEditable":true,
+"knora-api:isResourceProperty":true,
+"knora-api:objectType":{
+"@id":"knora-api:TextValue"
+},
+"knora-api:subjectType":{
+"@id":"images:person"
+},
+"salsah-gui:guiAttribute":["maxlength=32","size=32"],
+"salsah-gui:guiElement":{
+"@id":"salsah-gui:SimpleText"
+},
+"rdfs:comment":"First name of a person",
+"rdfs:label":"First name",
+"rdfs:subPropertyOf":{
+"@id":"knora-api:hasValue"
+}
+},{
+"@id":"images:lastname",
+"@type":"owl:ObjectProperty",
+"knora-api:isEditable":true,
+"knora-api:isResourceProperty":true,
+"knora-api:objectType":{
+"@id":"knora-api:TextValue"
+},
+"knora-api:subjectType":{
+"@id":"images:person"
+},
+"salsah-gui:guiAttribute":["maxlength=32","size=32"],
+"salsah-gui:guiElement":{
+"@id":"salsah-gui:SimpleText"
+},
+"rdfs:comment":"Last name of a person",
+"rdfs:label":"Name",
+"rdfs:subPropertyOf":{
+"@id":"knora-api:hasValue"
+}
+},{
+"@id":"images:person",
+"@type":"owl:Class",
+"knora-api:canBeInstantiated":true,
+"knora-api:isResourceClass":true,
+"knora-api:resourceIcon":"person.png",
+"rdfs:comment":"Person",
+"rdfs:label":"Person",
+"rdfs:subClassOf":[{
+"@id":"knora-api:Resource"
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:attachedToProject"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:attachedToUser"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:creationDate"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasIncomingLink"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:hasPermissions"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasStandoffLinkTo"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:minCardinality":0,
+"owl:onProperty":{
+"@id":"knora-api:hasStandoffLinkToValue"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:maxCardinality":1,
+"owl:onProperty":{
+"@id":"knora-api:lastModificationDate"
+}
+},{
+"@type":"owl:Restriction",
+"knora-api:isInherited":true,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"rdfs:label"
+}
+},{
+"@type":"owl:Restriction",
+"salsah-gui:guiOrder":0,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"images:lastname"
+}
+},{
+"@type":"owl:Restriction",
+"salsah-gui:guiOrder":1,
+"owl:cardinality":1,
+"owl:onProperty":{
+"@id":"images:firstname"
+}
+}]
+},{
+"@id":"images:urheber",
+"@type":"owl:ObjectProperty",
+"knora-api:isEditable":true,
+"knora-api:isLinkProperty":true,
+"knora-api:isResourceProperty":true,
+"knora-api:objectType":{
+"@id":"images:person"
+},
+"knora-api:subjectType":{
+"@id":"images:bild"
+},
+"salsah-gui:guiAttribute":"numprops=2",
+"salsah-gui:guiElement":{
+"@id":"salsah-gui:Searchbox"
+},
+"rdfs:comment":"An entity primarily responsible for making the resource. Examples of a Creator include a person, an organization, or a service. Typically, the name of a Creator should be used to indicate the entity.",
+"rdfs:label":"Creator",
+"rdfs:subPropertyOf":{
+"@id":"knora-api:hasLinkTo"
+}
+},{
+"@id":"images:urheberValue",
+"@type":"owl:ObjectProperty",
+"knora-api:isEditable":true,
+"knora-api:isLinkValueProperty":true,
+"knora-api:isResourceProperty":true,
+"knora-api:objectType":{
+"@id":"knora-api:LinkValue"
+},
+"knora-api:subjectType":{
+"@id":"images:bild"
+},
+"salsah-gui:guiAttribute":"numprops=2",
+"salsah-gui:guiElement":{
+"@id":"salsah-gui:Searchbox"
+},
+"rdfs:comment":"An entity primarily responsible for making the resource. Examples of a Creator include a person, an organization, or a service. Typically, the name of a Creator should be used to indicate the entity.",
+"rdfs:label":"Creator",
+"rdfs:subPropertyOf":{
+"@id":"knora-api:hasLinkToValue"
+}
+}],
+"@context":{
+"rdf":"http://www.w3.org/1999/02/22-rdf-syntax-ns#",
+"images":"http://0.0.0.0:3333/ontology/00FF/images/v2#",
+"knora-api":"http://api.knora.org/ontology/knora-api/v2#",
+"owl":"http://www.w3.org/2002/07/owl#",
+"salsah-gui":"http://api.knora.org/ontology/salsah-gui/v2#",
+"rdfs":"http://www.w3.org/2000/01/rdf-schema#",
+"xsd":"http://www.w3.org/2001/XMLSchema#"
+}
+}
+
+
In the complex schema, all Knora value properties are object properties,
+whose objects are IRIs, each of which uniquely identifies a value that
+contains metadata and can potentially be edited. The
+knora-base:objectType of a Knora value property such as
+images:description is a Knora value class, in this case
+knora-api:TextValue. Similarly, images:erfassungsdatum has a
+knora-api:objectType of knora-api:DateValue, which has a more
+complex structure than the knora-api:Date datatype shown in the
+previous section. A client can find out more about these value classes
+by requesting the knora-api ontology in the complex schema,
+http://api.knora.org/ontology/knora-api/v2.
+
Moreover, additional information is provided in the complex schema, to
+help clients that wish to create or update resources and values. A Knora
+resource class that can be instantiated is identified with the boolean
+properties knora-api:isResourceClass and
+knora-api:canBeInstantiated, to distinguish it from built-in abstract
+classes. Knora resource properties whose values can be edited by clients
+are identified with knora-api:isResourceProperty and
+knora-api:isEditable, to distinguish them from properties whose values
+are maintained automatically by Knora. Link value
+properties are shown along with link properties, because a client that
+updates links will need the IRIs of their link values. The predicate
+salsah-gui:guiOrder tells a GUI client in what order to display the
+properties of a class, and the predicates salsah-gui:guiElement and
+salsah-gui:guiAttribute specify how to configure a GUI element for
+editing the value of a property. For more information on the
+salsah-gui ontology, see The SALSAH GUI Ontology.
+
Querying a Class Definition
+
To get the definition of a class, use the following route:
+
HTTP GET to http://host/v2/ontologies/classes/CLASS_IRI
+
The ontology update API must ensure that the ontologies it creates are
+valid and consistent, and that existing data is not invalidated by a
+change to an ontology. To make this easier to enforce, the ontology
+update API allows only one entity to be created or modified at a time.
+It is not possible to submit an entire ontology all at once. Each
+update request is a JSON-LD document providing only the information that is
+relevant to the update.
+
Moreover, the API enforces the following rules:
+
+
An entity (i.e. a class or property) cannot be referred to until it has been created.
+
An entity cannot be modified or deleted if it is used in data,
+ except for changes to its rdfs:label or rdfs:comment.
+
An entity cannot be modified if another entity refers to it, with
+ one exception: a knora-api:subjectType or knora-api:objectType
+ that refers to a class will not prevent the class's cardinalities
+ from being modified.
+
+
Because of these rules, some operations have to be done in a specific
+order:
+
+
Properties have to be defined before they can be used in the
+ cardinalities of a class, but a property's knora-api:subjectType
+ cannot refer to a class that does not yet exist. The recommended
+ approach is to first create a class with no cardinalities, then
+ create the properties that it needs, then add cardinalities for
+ those properties to the class.
+
To delete a class along with its properties, the client must first
+ remove the cardinalities from the class, then delete the property
+ definitions, then delete the class definition.
+
+
When changing an existing ontology, the client must always supply the
+ontology's knora-api:lastModificationDate, which is returned in the
+response to each update or when querying the ontology.
+If user A attempts to update an ontology, but
+user B has already updated it since the last time user A received the
+ontology's knora-api:lastModificationDate, user A's update will be
+rejected with an HTTP 409 Conflict error. This means that it is possible
+for two different users to work concurrently on the same ontology, but
+this is discouraged since it is likely to lead to confusion.
+
An ontology can be created or updated only by a system administrator, or
+by a project administrator in the ontology's project.
+
Ontology updates always use the complex schema.
+
Creating a New Ontology
+
An ontology is always created within a particular project.
The ontology name must follow the rules given in
+Knora IRIs.
+
The ontology metadata can have an optional comment given in the request
+body as:
+
"rdfs:comment": "some comment",
+
+
If the ontology is to be shared by multiple projects, it must be
+created in the default shared ontologies project,
+http://www.knora.org/ontology/knora-base#DefaultSharedOntologiesProject,
+and the request must have this additional boolean property:
A successful response will be a JSON-LD document providing only the
+ontology's metadata, which includes the ontology's IRI. When the client
+makes further requests to create entities (classes and properties) in
+the ontology, it must construct entity IRIs by concatenating the
+ontology IRI, a # character, and the entity name. An entity name must
+be a valid XML NCName.
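The concatenation can be sketched as follows. The helper name is our own, and the NCName check is a simplified ASCII approximation of the XML production.

```python
import re

# Simplified ASCII approximation of an XML NCName.
NCNAME_RE = re.compile(r"[A-Za-z_][A-Za-z0-9._-]*$")

def make_entity_iri(ontology_iri: str, entity_name: str) -> str:
    """Concatenate the ontology IRI, '#', and a validated entity name."""
    if not NCNAME_RE.match(entity_name):
        raise ValueError(f"not a valid NCName: {entity_name!r}")
    return f"{ontology_iri}#{entity_name}"
```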
+
Changing an Ontology's Metadata
+
An ontology's metadata can be modified by updating its rdfs:label, its rdfs:comment,
+or both. The example below shows a request for changing the label of an ontology.
The request body can also contain a new label and a new comment for the ontology's metadata.
+A successful response will be a JSON-LD document providing only the
+ontology's metadata.
+
Deleting an Ontology's Comment
+
HTTP DELETE to http://host/v2/ontologies/comment/ONTOLOGY_IRI?lastModificationDate=ONTOLOGY_LAST_MODIFICATION_DATE
+
+
The ontology IRI and the ontology's last modification date must be
+URL-encoded.
+
A successful response will be a JSON-LD document containing the ontology's
+updated metadata.
+
Deleting an Ontology
+
An ontology can be deleted only if it is not used in data.
+
HTTP DELETE to http://host/v2/ontologies/ONTOLOGY_IRI?lastModificationDate=ONTOLOGY_LAST_MODIFICATION_DATE
+
+
The ontology IRI and the ontology's last modification date must be
+URL-encoded.
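Both values can be URL-encoded with the standard library. The timestamp below is a hypothetical example value, not taken from a real response.

```python
from urllib.parse import quote

ontology_iri = "http://0.0.0.0:3333/ontology/00FF/images/v2"
last_mod = "2017-12-19T15:23:42.166Z"   # hypothetical xsd:dateTimeStamp

# Build the DELETE URL with both components percent-encoded.
delete_url = ("http://0.0.0.0:3333/v2/ontologies/"
              + quote(ontology_iri, safe="")
              + "?lastModificationDate=" + quote(last_mod, safe=""))
```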
+
A successful response will be a JSON-LD document containing a
+confirmation message.
+
To check whether an ontology can be deleted:
+
HTTP GET to http://host/v2/ontologies/candeleteontology/ONTOLOGY_IRI
+
Values for rdfs:label must be submitted in at least
+one language, either as an object or as an array of objects.
+
Values for rdfs:comment are optional, but if they are provided, they must include a language code.
+
At least one base class must be provided, which can be
+knora-api:Resource or any of its subclasses.
+
A successful response will be a JSON-LD document providing the new class
+definition (but not any of the other entities in the ontology).
+
Creating a Class With Cardinalities
+
This can work if the new class will have cardinalities for properties
+that have no knora-api:subjectType, or if the new class will be a
+subclass of their knora-api:subjectType.
OWL_CARDINALITY_PREDICATE and OWL_CARDINALITY_VALUE must correspond
+to the supported combinations given in
+OWL Cardinalities. (The placeholder
+OWL_CARDINALITY_VALUE is shown here in quotes, but it should be an
+unquoted integer.)
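For example, a single cardinality restriction in the request body might look like the following, shown here as a Python dict before JSON serialization. The property `images:urheber` is taken from the sample ontology above; the surrounding class structure of the request is omitted.

```python
# One owl:Restriction as it appears inside rdfs:subClassOf in the
# request body; note the unquoted (native) integer value.
restriction = {
    "@type": "owl:Restriction",
    "owl:maxCardinality": 1,
    "owl:onProperty": {"@id": "images:urheber"},
}
```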
+
Values for rdfs:label must be submitted in at least
+one language, either as an object or as an array of objects.
+
Values for rdfs:comment are optional, but if they are provided, they must include a language code.
+
At least one base class must be provided.
+
When a cardinality on a link property is submitted, an identical cardinality
+on the corresponding link value property is automatically added (see
+Links Between Resources).
+
A successful response will be a JSON-LD document providing the new class
+definition (but not any of the other entities in the ontology).
+
Changing the Labels of a Class
+
This operation is permitted even if the class is used in data.
Values for rdfs:label must be submitted in at least one language,
+either as an object or as an array of objects. The submitted labels will
+replace the existing ones.
Values for rdfs:comment must be submitted in at least one language,
+either as an object or as an array of objects. The submitted comments
+will replace the existing ones.
Values for rdfs:label must be submitted in at least
+one language, either as an object or as an array of objects.
+
Values for rdfs:comment are optional, but if they are provided, they must include a language code.
+
At least one base property must be provided, which can be
+knora-api:hasValue, knora-api:hasLinkTo, or any of their
+subproperties, with the exception of file properties (subproperties of
+knora-api:hasFileValue) and link value properties (subproperties of
+knora-api:hasLinkToValue).
+
If the property is a link property, the corresponding link value property
+(see Links Between Resources)
+will automatically be created.
+
The property definition must specify its knora-api:objectType. If the
+new property is a subproperty of knora-api:hasValue, its
+knora-api:objectType must be one of the built-in subclasses of
+knora-api:Value, which are defined in the knora-api ontology in the
+complex schema. If the new property is a subproperty of
+knora-base:hasLinkTo, its knora-api:objectType must be a subclass of
+knora-api:Resource.
+
To improve consistency checking, it is recommended, but not required, to
+provide knora-api:subjectType, which must be a subclass of
+knora-api:Resource.
+
The predicates salsah-gui:guiElement and salsah-gui:guiAttribute are
+optional. If provided, the object of guiElement must be one of the OWL
+named individuals defined in
+The SALSAH GUI Ontology. Some GUI elements
+take required or optional attributes, which are provided as objects of
+salsah-gui:guiAttribute; see The SALSAH GUI Ontology
+for details.
+
A successful response will be a JSON-LD document providing the new
+property definition (but not any of the other entities in the ontology).
+
Changing the Labels of a Property
+
This operation is permitted even if the property is used in data.
+
HTTP PUT to http://host/v2/ontologies/properties
+
Changing the GUI Element and GUI Attributes of a Property

To remove the values of salsah-gui:guiElement and salsah-gui:guiAttribute from
+the property definition, submit the request without those predicates.
+
Adding Cardinalities to a Class
+
If the class (or any of its subclasses) is used in data, it is not allowed
to add a cardinality of owl:minCardinality greater than 0 or owl:cardinality 1 to the class.
+
HTTP POST to http://host/v2/ontologies/cardinalities
+
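The request body uses the same JSON-LD envelope as other ontology updates. A sketch with placeholders (ONTOLOGY_IRI, CLASS_IRI, PROPERTY_IRI, and the cardinality placeholders discussed below):

```jsonld
{
  "@id" : "ONTOLOGY_IRI",
  "@type" : "owl:Ontology",
  "knora-api:lastModificationDate" : {
    "@type" : "xsd:dateTimeStamp",
    "@value" : "ONTOLOGY_LAST_MODIFICATION_DATE"
  },
  "@graph" : [ {
    "@id" : "CLASS_IRI",
    "@type" : "owl:Class",
    "rdfs:subClassOf" : {
      "@type" : "owl:Restriction",
      "OWL_CARDINALITY_PREDICATE" : "OWL_CARDINALITY_VALUE",
      "owl:onProperty" : {
        "@id" : "PROPERTY_IRI"
      }
    }
  } ],
  "@context" : {
    "knora-api" : "http://api.knora.org/ontology/knora-api/v2#",
    "owl" : "http://www.w3.org/2002/07/owl#",
    "rdfs" : "http://www.w3.org/2000/01/rdf-schema#",
    "xsd" : "http://www.w3.org/2001/XMLSchema#"
  }
}
```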
OWL_CARDINALITY_PREDICATE and OWL_CARDINALITY_VALUE must correspond
+to the supported combinations given in
+OWL Cardinalities. (The placeholder
+OWL_CARDINALITY_VALUE is shown here in quotes, but it should be an
+unquoted integer.)
+
When a cardinality on a link property is submitted, an identical cardinality
+on the corresponding link value property is automatically added (see
+Links Between Resources).
+
A successful response will be a JSON-LD document providing the new class
+definition (but not any of the other entities in the ontology).
+
Replacing the Cardinalities of a Class
+
It is possible to replace all cardinalities on properties used by a class.
A successful request replaces all of the class's direct cardinalities with the
submitted ones: the existing cardinalities are removed from the class, and the
submitted cardinalities are added in their place. Consequently, if no cardinalities
are submitted (i.e. the request contains no rdfs:subClassOf), the class is left
with no cardinalities.
+
The request will fail if any of the "Pre-Update Checks" fails.
+A partial update of the ontology will not be performed.
+
Pre-Update Checks
+
+
Ontology Check
+
Any given cardinality on a property must be included in the existing cardinalities
 for the same property on the super-classes.

Any given cardinality on a property must include the effective cardinalities
 for the same property on all subclasses,
 taking into account the cardinalities they inherit through their class hierarchies.
+
+
+
Consistency Check with existing data
+
If instances of the class or any of its subclasses exist,
 these instances are checked for conformance to the given cardinality.
+
+
+
+
+
Subproperty handling for cardinality pre-update checks
+
The pre-update checks do not take subproperty relations between properties into account.
Every cardinality is checked only against the given property, not against its subproperties,
neither in the ontology check nor in the consistency check with existing data.
This means that it is currently necessary to keep the cardinalities on all subproperties
of a property in sync with the cardinalities on the superproperty.
+
+
HTTP PUT to http://host/v2/ontologies/cardinalities
+
OWL_CARDINALITY_PREDICATE and OWL_CARDINALITY_VALUE must correspond
+to the supported combinations given in
+OWL Cardinalities. (The placeholder
+OWL_CARDINALITY_VALUE is shown here in quotes, but it should be an
+unquoted integer.)
+
When a cardinality on a link property is submitted, an identical cardinality
+on the corresponding link value property is automatically added (see
+Links Between Resources).
+
A successful response will be a JSON-LD document providing the new class definition (but not any of the other entities in the ontology).
+If any of the "Pre-Update Checks" fail the endpoint will respond with a 400 Bad Request containing the reasons why the update failed.
+
The "Pre-Update Checks" are available on a dedicated endpoint.
+For a check whether a particular cardinality can be set on a class/property combination, use the following request:
+
HTTP GET to http://host/v2/ontologies/canreplacecardinalities/CLASS_IRI?propertyIri=PROPERTY_IRI&newCardinality=[0-1|1|1-n|0-n]
+
The ontologies/canreplacecardinalities/CLASS_IRI request only checks whether the class is in use.
+
Delete a single cardinality from a class
+
If a class is used in data, a cardinality may be deleted only if the property it
refers to is not used in the data, either in the class itself or in any of its
subclasses.
+
HTTP PATCH to http://host/v2/ontologies/cardinalities
+
OWL_CARDINALITY_PREDICATE and OWL_CARDINALITY_VALUE must correspond
+to the supported combinations given in
+OWL Cardinalities. (The placeholder
+OWL_CARDINALITY_VALUE is shown here in quotes, but it should be an
+unquoted integer.)
+
When a cardinality on a link property is submitted, an identical cardinality
+on the corresponding link value property is automatically added (see
+Links Between Resources).
+
A successful response will be a JSON-LD document providing the new class
+definition (but not any of the other entities in the ontology).
+
To check whether a class's cardinality can be deleted:
+
HTTP POST to http://host/v2/ontologies/candeletecardinalities
+
Changing the GUI Order of Cardinalities

HTTP PUT to http://host/v2/ontologies/guiorder

Only the cardinalities whose GUI order is to be changed need to be included
+in the request. The OWL_CARDINALITY_PREDICATE and OWL_CARDINALITY_VALUE
+are ignored; only the GUI_ORDER_VALUE is changed.
+
Deleting a Property
+
A property can be deleted only if no other ontology entity refers to it,
+and if it is not used in data.
+
HTTP DELETE to http://host/v2/ontologies/properties/PROPERTY_IRI?lastModificationDate=ONTOLOGY_LAST_MODIFICATION_DATE
+
+
The property IRI and the ontology's last modification date must be
+URL-encoded.
+
If the property is a link property, the corresponding link value property
+(see Links Between Resources)
+will automatically be deleted.
+
A successful response will be a JSON-LD document providing only the
+ontology's metadata.
+
To check whether a property can be deleted:
+
HTTP GET to http://host/v2/ontologies/candeleteproperty/PROPERTY_IRI
+
Permalinks

Knora provides a permanent, citable URL for each resource and value.
+These URLs use Archival Resource Key (ARK) Identifiers,
+and are designed to remain valid even if the resource itself is moved
+from one Knora repository to another.
+
Obtaining ARK URLs
+
In the complex schema, a resource or value
+is always returned with two ARK URLs: one that will always refer
+to the latest version of the resource or value (knora-api:arkUrl), and one that refers
+specifically to the version being returned (knora-api:versionArkUrl).
+For example:
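A sketch of the relevant part of such a JSON-LD response, with placeholder values (the ARK URLs actually returned by the API encode the UUID with additional escaping and a check digit):

```jsonld
"knora-api:arkUrl" : {
  "@type" : "xsd:anyURI",
  "@value" : "http://ark.dasch.swiss/ark:/72163/1/PROJECT/RESOURCE_UUID"
},
"knora-api:versionArkUrl" : {
  "@type" : "xsd:anyURI",
  "@value" : "http://ark.dasch.swiss/ark:/72163/1/PROJECT/RESOURCE_UUID.TIMESTAMP"
}
```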
The format of a Knora project ARK URL is as follows:
+
http://HOST/ark:/NAAN/VERSION/PROJECT
+
+
NAAN is a
+Name Assigning Authority Number,
+VERSION is the version number of the Knora ARK URL format (currently always 1),
+and PROJECT is the project's short-code.
+
For example, given a project with ID 0001, and using the DaSCH's ARK resolver
+hostname and NAAN, the ARK URL for the project itself is:
+
http://ark.dasch.swiss/ark:/72163/1/0001
+
+
This could redirect to a page describing the project.
+
ARK URLs for Resources
+
The format of a Knora resource ARK URL is as follows:

http://HOST/ark:/NAAN/VERSION/PROJECT/RESOURCE_UUID
NAAN is a
+Name Assigning Authority Number,
+VERSION is the version number of the Knora ARK URL format (currently always 1),
+PROJECT is the project's short-code,
+and RESOURCE_UUID is the resource's UUID.
+
For example, given the Knora resource IRI http://rdfh.ch/0001/0C-0L1kORryKzJAJxxRyRQ,
+and using the DaSCH's ARK resolver hostname and NAAN, the corresponding
+ARK URL without a timestamp is:

ARK URLs for Values

The format of a Knora value ARK URL is as follows:

http://HOST/ark:/NAAN/VERSION/PROJECT/RESOURCE_UUID/VALUE_UUID
NAAN is a
+Name Assigning Authority Number,
+VERSION is the version number of the Knora ARK URL format (currently always 1),
+PROJECT is the project's short-code,
+RESOURCE_UUID is the resource's UUID, and VALUE_UUID
+is the value's knora-api:valueHasUUID.
+
For example, given a value with knora-api:valueHasUUID "4OOf3qJUTnCDXlPNnygSzQ" in the resource
+http://rdfh.ch/0001/0C-0L1kORryKzJAJxxRyRQ, and using the DaSCH's ARK resolver
+hostname and NAAN, the corresponding ARK URL without a timestamp is:

Gravsearch: Virtual Graph Search
Gravsearch is intended to offer the advantages of SPARQL endpoints
+(particularly the ability to perform queries using complex search
+criteria) while avoiding their drawbacks in terms of performance and
+security (see The Enduring Myth of the SPARQL
+Endpoint).
+It also has the benefit of enabling clients to work with a simpler RDF
+data model than the one the API actually uses to store data in the
+triplestore and makes it possible to provide better error-checking.
+
Rather than being processed directly by the triplestore, a Gravsearch query
+is interpreted by the API, which enforces certain
+restrictions on the query, and implements paging and permission
+checking. The API server generates SPARQL based on the Gravsearch query
+submitted, queries the triplestore, filters the results according to the
+user's permissions, and returns each page of query results as an
+API response. Thus, Gravsearch is a hybrid between a RESTful API and a
+SPARQL endpoint.
+
A Gravsearch query conforms to a subset of the syntax of a SPARQL
+CONSTRUCT query, with
+some additional restrictions and functionality. In particular, the
+variable representing the top-level (or 'main') resource that will
+appear in each search result must be identified, statements must be
+included to specify the types of the entities being queried, OFFSET is
+used to control paging, and ORDER BY is used to sort the results.
+
It is certainly possible to write Gravsearch queries by hand, but we expect
+that in general, they will be automatically generated by client
+software, e.g. by a client user interface.
It is also possible to submit a Gravsearch query using HTTP GET. The entire
+query must be URL-encoded and included as the last element of the URL path:
+
HTTP GET to http://host/v2/searchextended/QUERY
+
+
The response to a Gravsearch query is an RDF graph, which can be requested in various
+formats (see Responses Describing Resources).
+
To request the number of results rather than the results themselves, you can
+do a count query:
+
HTTP POST to http://host/v2/searchextended/count
+
+
The response to a count query request is an object with one predicate,
+http://schema.org/numberOfItems, with an integer value.
+
If a Gravsearch query times out, a 504 Gateway Timeout will be returned.
+
Gravsearch and API Schemas
+
A Gravsearch query can be written in either of the two
+DSP-API v2 schemas. The simple schema
+is easier to work with, and is sufficient if you don't need to query
+anything below the level of a DSP-API value. If your query needs to refer to
+standoff markup, you must use the complex schema. Each query must use a single
+schema, with one exception (see Date Comparisons).
+
Gravsearch query results can be requested in the simple or complex schema;
+see API Schema.
+
All the examples below assume a DSP stack running locally. If you are accessing another stack, you can check
the IRI of the ontology you are targeting by requesting the ontology metadata.
+
Using the Simple Schema
+
To write a query in the simple schema, use the knora-api ontology in
+the simple schema, and use the simple schema for any other DSP ontologies
+the query refers to, e.g.:
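For example, for the incunabula ontology on a locally running stack:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/simple/v2#>
PREFIX incunabula: <http://0.0.0.0:3333/ontology/0803/incunabula/simple/v2#>
```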
To write a query in the complex schema, use the knora-api ontology in
+the complex schema, and use the complex schema for any other DSP ontologies
+the query refers to, e.g.:
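The corresponding complex-schema prefixes for the same ontologies are:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX incunabula: <http://0.0.0.0:3333/ontology/0803/incunabula/v2#>
```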
In the complex schema, DSP-API values are represented as objects belonging
+to subclasses of knora-api:Value, e.g. knora-api:TextValue, and have
+predicates of their own, which can be used in FILTER expressions
+(see Filtering on Values in the Complex Schema).
+
Main and Dependent Resources
+
The main resource is the top-level resource in a search result. Other
+resources that are in some way connected to the main resource are
+referred to as dependent resources. If the client asks for a resource A
+relating to a resource B, then all matches for A will be presented as
+main resources and those for B as dependent resources. The main resource
+must be represented by a variable, marked with knora-api:isMainResource,
+as explained under CONSTRUCT Clause.
+
Virtual incoming Links
+
Depending on the ontology design, a resource A points to B or vice versa.
+For example, a page A is part of a book B using the property incunabula:partOf.
+If A is marked as the main resource, then B is nested as a dependent resource
+in its link value incunabula:partOfValue. But in case B is marked as the main resource,
+B does not have a link value pointing to A because in fact B is pointed to by A.
+Instead, B has a virtual property knora-api:hasIncomingLink containing A's link value:
Note that the virtually inserted link value inverts the relation by using knora-api:linkValueHasSource.
+The source of the link is A and its target B is only represented by an IRI (knora-api:linkValueHasTargetIri)
+since B is the main resource.
+
Graph Patterns and Result Graphs
+
The WHERE clause of a Gravsearch query specifies a graph pattern. Each query
+result will match this graph pattern, and will have the form of a graph
+whose starting point is a main resource. The query's graph pattern, and
hence each query result graph, can span zero or more levels of relations
+between resources. For example, a query could request regions
+in images on pages of books written by a certain author, articles by
+authors who were students of a particular professor, or authors of texts
+that refer to events that took place within a certain date range.
+
Permission Checking
+
Each matching resource is returned with the values that the user has
+permission to see. If the user does not have permission to see a matching
+main resource, it is hidden in the results. If a user does not have
+permission to see a matching dependent resource, the link value is hidden.
+
Paging
+
Gravsearch results are returned in pages. The maximum number of main
+resources per page is determined by the API (and can be configured
+in application.conf via the setting app/v2/resources-sequence/results-per-page).
+If some resources have been filtered out because the user does not have
+permission to see them, a page could contain fewer results, or no results.
+If it is possible that more results are available in subsequent pages,
+the Gravsearch response will contain the predicate knora-api:mayHaveMoreResults
+with the boolean value true, otherwise it will not contain this predicate.
+Therefore, to retrieve all available results, the client must request each page
+one at a time, until the response does not contain knora-api:mayHaveMoreResults.
+
Inference
+
Gravsearch queries are understood to imply a subset of
+RDFS reasoning. This is done by the API by expanding the incoming query.
+
Specifically, if a statement pattern specifies a property, the pattern will
+also match subproperties of that property, and if a statement specifies that
+a subject has a particular rdf:type, the statement will also match subjects
+belonging to subclasses of that type.
+
If you know that reasoning will not return any additional results for
+your query, you can disable it by adding this line to the WHERE clause, which may improve query performance:
Every Gravsearch query is a valid SPARQL 1.1
+CONSTRUCT query.
+However, Gravsearch only supports a subset of the elements that can be used
+in a SPARQL Construct query, and a Gravsearch
+CONSTRUCT Clause has to indicate which variable
+is to be used for the main resource in each search result.
+
Supported SPARQL Syntax
+
The current version of Gravsearch accepts CONSTRUCT queries whose WHERE
+clauses use the following patterns, with the specified restrictions:
+
+
OPTIONAL: cannot be nested in a UNION.
+
UNION: cannot be nested in a UNION.
+
FILTER: may contain a complex expression using the Boolean
+ operators AND and OR, as well as comparison operators. The left
+ argument of a comparison operator must be a query variable.
+ A Knora ontology entity IRI used in a FILTER must be a property IRI.
+
FILTER NOT EXISTS
+
MINUS
+
OFFSET: the OFFSET is needed for paging. It does not actually
+ refer to the number of triples to be returned, but to the
+ requested page of results. The default value is 0, which refers
+ to the first page of results.
+
ORDER BY: In SPARQL, the result of a CONSTRUCT query is an
+ unordered set of triples. However, a Gravsearch query returns an
+ ordered list of resources, which can be ordered by the values of
+ specified properties. If the query is written in the complex schema,
+ items below the level of DSP-API values may not be used in ORDER BY.
+
BIND: The value assigned must be a DSP resource IRI.
+
+
Resources, Properties, and Values
+
Resources can be represented either by an IRI or by a variable, except for the
+main resource, which must be represented by a variable.
+
It is possible to do a Gravsearch query in which the IRI of the main resource
+is already known, e.g. to request specific information about that resource and
+perhaps about linked resources. In this case, the IRI of the main resource must
+be assigned to a variable using BIND. Note that BIND statements slow the query down,
+therefore we recommend that you do not use them unless you have to.
+
Properties can be represented by an IRI or a query variable. If a
+property is represented by a query variable, it can be restricted to
+certain property IRIs using a FILTER.
+
A Knora value (i.e. a value attached to a knora-api:Resource)
+must be represented as a query variable.
+
Filtering on Values
+
Filtering on Values in the Simple Schema
+
In the simple schema, a variable representing a DSP-API value can be used
+directly in a FILTER expression. For example:
+
?book incunabula:title ?title .
+FILTER(?title = "Zeitglöcklein des Lebens und Leidens Christi")
+
+
Here the type of ?title is xsd:string.
+
The following value types can be compared with literals in FILTER
+expressions in the simple schema:
+
+
Text values (xsd:string)
+
URI values (xsd:anyURI)
+
Integer values (xsd:integer)
+
Decimal values (xsd:decimal)
+
Boolean values (xsd:boolean)
+
Date values (knora-api:Date)
+
List values (knora-api:ListNode)
+
+
List values can only be searched for using the equal operator (=),
+performing an exact match on a list node's label. Labels can be given in different languages for a specific list node.
+If one of the given list node labels matches, it is considered a match.
+Note that in the simple schema, uniqueness is not guaranteed (as opposed to the complex schema).
+
A DSP-API value may not be represented as the literal object of a predicate;
+for example, this is not allowed:
+
?book incunabula:title "Zeitglöcklein des Lebens und Leidens Christi" .
+
+
Filtering on Values in the Complex Schema
+
In the complex schema, variables representing DSP-API values are not literals.
+You must add something to the query (generally a statement) to get a literal
+from a DSP-API value. For example:
+
?book incunabula:title ?title .
+?title knora-api:valueAsString "Zeitglöcklein des Lebens und Leidens Christi" .
+
+
Here the type of ?title is knora-api:TextValue. Note that no FILTER is needed
+in this example. But if you want to use a different comparison operator,
+you need a FILTER:
To match a date value in the complex schema, you must use the
+knora-api:toSimpleDate function in a FILTER
+(see Date Comparisons). The predicates of
+knora-api:DateValue (knora-api:dateValueHasStartYear, etc.) are not
+available in Gravsearch.
+
Date Comparisons
+
In the simple schema, you can compare a date value directly with a knora-api:Date
+in a FILTER:
In the complex schema, you must use the function knora-api:toSimpleDate,
+passing it the variable representing the date value. The date literal used
+in the comparison must still be a knora-api:Date in the simple schema.
+This is the only case in which you can use both schemas in a single query:
E.g. an exact date like GREGORIAN:2015-12-03 or a period like GREGORIAN:2015-12-03:2015-12-04.
Dates may also have month or year precision, e.g. ISLAMIC:1407-02 (the whole second month of the year 1407)
or JULIAN:1330 (the whole year 1330). An optional era indicator (BCE, CE, or BC, AD) can be appended to the
date; if no era is provided, the default era AD is assumed. An era can be given as GREGORIAN:1220 BC, or in a
range as GREGORIAN:600 BC:480 BC.
+
Searching for Matching Words
+
The function knora-api:matchText searches for matching words anywhere in a
+text value and is implemented using a full-text search index if available.
The first argument must represent a text value (a knora-api:TextValue in
+the complex schema, or an xsd:string in the simple schema). The second
+argument is a string literal containing the words to be matched, separated by spaces.
+The function supports the
+Lucene Query Parser syntax.
+Note that Lucene's default operator is a logical OR when submitting several search terms.
+
This function can only be used as the top-level expression in a FILTER.
+
For example, to search for titles that contain the words 'Zeitglöcklein' and
+'Lebens':
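A sketch of such a query in the complex schema (using AND explicitly, since Lucene's default operator is OR):

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX incunabula: <http://0.0.0.0:3333/ontology/0803/incunabula/v2#>

CONSTRUCT {
  ?book knora-api:isMainResource true .
  ?book incunabula:title ?title .
} WHERE {
  ?book a incunabula:book .
  ?book incunabula:title ?title .
  FILTER knora-api:matchText(?title, "Zeitglöcklein AND Lebens")
}
```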
To refer to standoff markup in text values, you must write your query in the complex
+schema.
+
A knora-api:TextValue can have the property
+knora-api:textValueHasStandoff, whose objects are the standoff markup
+tags in the text. You can match the tags you're interested in using
+rdf:type or other properties of each tag.
+
Matching Text in a Standoff Tag
+
The function knora-api:matchTextInStandoff searches for standoff tags containing certain terms.
+The implementation is optimised using the full-text search index if available. The
+function takes three arguments:
+
+
A variable representing a text value.
+
A variable representing a standoff tag.
+
A string literal containing space-separated search terms.
+
+
This function can only be used as the top-level expression in a FILTER.
+For example:
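A sketch, assuming a hypothetical project ontology letter: with a letter class and a hasText property:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX standoff: <http://api.knora.org/ontology/standoff/v2#>
PREFIX letter: <http://0.0.0.0:3333/ontology/0801/letter/v2#>   # hypothetical

CONSTRUCT {
  ?letter knora-api:isMainResource true .
  ?letter letter:hasText ?text .
} WHERE {
  ?letter a letter:letter .
  ?letter letter:hasText ?text .
  ?text knora-api:textValueHasStandoff ?paragraph .
  ?paragraph a standoff:StandoffParagraphTag .
  FILTER knora-api:matchTextInStandoff(?text, ?paragraph, "Grund Richtigkeit")
}
```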
Here we are looking for letters containing the words "Grund" and "Richtigkeit"
+within a single paragraph.
+
Matching Standoff Links
+
If you are only interested in specifying that a resource has some text
+value containing a standoff link to another resource, the most efficient
+way is to use the property knora-api:hasStandoffLinkTo, whose subjects and objects
+are resources. This property is automatically maintained by the API. For example:
Here we are looking for letters containing a link to the historian
Claude Jordan, who is identified by his Integrated Authority File (VIAF)
identifier 271899510.
+
However, if you need to specify the context in which the link tag occurs, you must
+use the function knora-api:standoffLink. It takes three arguments:
+
+
A variable or IRI representing the resource that is the source of the link.
+
A variable representing the standoff link tag.
+
A variable or IRI representing the resource that is the target of the link.
+
+
This function can only be used as the top-level expression in a FILTER.
+For example:
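A sketch using the same hypothetical letter: ontology; ?person stands for the target resource of the link:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX standoff: <http://api.knora.org/ontology/standoff/v2#>
PREFIX letter: <http://0.0.0.0:3333/ontology/0801/letter/v2#>   # hypothetical

CONSTRUCT {
  ?letter knora-api:isMainResource true .
} WHERE {
  ?letter a letter:letter .
  ?letter letter:hasText ?text .
  ?text knora-api:textValueHasStandoff ?linkTag .
  ?linkTag a knora-api:StandoffLinkTag .
  ?linkTag knora-api:standoffTagHasStartParent ?parent .
  ?parent a standoff:StandoffItalicTag .
  ?person a knora-api:Resource .
  FILTER knora-api:standoffLink(?letter, ?linkTag, ?person)
}
```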
This has the same effect as the previous example, except that because we are matching
+the link tag itself, we can specify that its immediate parent is a
+StandoffItalicTag.
+
If you actually want to get the target of the link (in this example, ?person)
+in the search results, you need to add a statement like
+?letter knora-api:hasStandoffLinkTo ?person . to the WHERE clause and to the
+CONSTRUCT clause:
You can use the knora-api:toSimpleDate function (see Date Comparisons)
+to match dates in standoff date tags, i.e. instances of knora-api:StandoffDateTag or
+of one of its subclasses. For example, here we are looking for a text containing
+an anything:StandoffEventTag (which is a project-specific subclass of knora-api:StandoffDateTag)
+representing an event that occurred sometime during the month of December 2016:
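A sketch, using the anything test ontology mentioned above (anything:hasText is assumed):

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX knora-api-simple: <http://api.knora.org/ontology/knora-api/simple/v2#>
PREFIX anything: <http://0.0.0.0:3333/ontology/0001/anything/v2#>

CONSTRUCT {
  ?thing knora-api:isMainResource true .
  ?thing anything:hasText ?text .
} WHERE {
  ?thing a anything:Thing .
  ?thing anything:hasText ?text .
  ?text knora-api:textValueHasStandoff ?standoffEventTag .
  ?standoffEventTag a anything:StandoffEventTag .
  FILTER(knora-api:toSimpleDate(?standoffEventTag) = "GREGORIAN:2016-12"^^knora-api-simple:Date)
}
```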
Suppose we want to search for a standoff date in a paragraph, but we know
+that the paragraph tag might not be the immediate parent of the date tag.
+For example, the date tag might be in an italics tag, which is in a paragraph
+tag. In that case, we can use the inferred property
+knora-api:standoffTagHasStartAncestor. We can modify the previous example to
+do this:
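A sketch of such a query (same assumptions as in the date-tag example above), with a paragraph-ancestor constraint added:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX knora-api-simple: <http://api.knora.org/ontology/knora-api/simple/v2#>
PREFIX standoff: <http://api.knora.org/ontology/standoff/v2#>
PREFIX anything: <http://0.0.0.0:3333/ontology/0001/anything/v2#>

CONSTRUCT {
  ?thing knora-api:isMainResource true .
  ?thing anything:hasText ?text .
} WHERE {
  ?thing a anything:Thing .
  ?thing anything:hasText ?text .
  ?text knora-api:textValueHasStandoff ?standoffEventTag .
  ?standoffEventTag a anything:StandoffEventTag .
  FILTER(knora-api:toSimpleDate(?standoffEventTag) = "GREGORIAN:2016-12"^^knora-api-simple:Date)
  # The paragraph tag need not be the immediate parent of the date tag:
  ?standoffEventTag knora-api:standoffTagHasStartAncestor ?paragraph .
  ?paragraph a standoff:StandoffParagraphTag .
}
```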
The rdfs:label of a resource is not a DSP-API value, but you can still search for it.
+This can be done in the same ways in the simple or complex schema:
+
Using a string literal object:
+
?book rdfs:label "Zeitglöcklein des Lebens und Leidens Christi" .
+
+
Using a variable and a FILTER:
+
?book rdfs:label ?label .
+FILTER(?label = "Zeitglöcklein des Lebens und Leidens Christi")
+
To match words in an rdfs:label using the full-text search index, use the
+knora-api:matchLabel function, which works like knora-api:matchText,
+except that the first argument is a variable representing a resource:
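For example, a sketch matching the word 'Zeitglöcklein' in a book's label:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX incunabula: <http://0.0.0.0:3333/ontology/0803/incunabula/v2#>

CONSTRUCT {
  ?book knora-api:isMainResource true .
} WHERE {
  ?book a incunabula:book .
  FILTER knora-api:matchLabel(?book, "Zeitglöcklein")
}
```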
A FILTER can compare a variable with another variable or IRI
+representing a resource. For example, to find a letter whose
+author and recipient are different persons:
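A sketch, assuming hypothetical hasAuthor and hasRecipient link properties in a letter: ontology:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX letter: <http://0.0.0.0:3333/ontology/0801/letter/v2#>   # hypothetical

CONSTRUCT {
  ?letter knora-api:isMainResource true .
} WHERE {
  ?letter a letter:letter .
  ?letter letter:hasAuthor ?author .
  ?letter letter:hasRecipient ?recipient .
  FILTER(?author != ?recipient)
}
```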
In the CONSTRUCT clause of a Gravsearch query, the variable representing the
+main resource must be indicated with knora-api:isMainResource true. Exactly
+one variable representing a resource must be marked in this way.
+
Any other statements in the CONSTRUCT clause must also be present in the WHERE
+clause. If a variable representing a resource or value is used in the WHERE
+clause but not in the CONSTRUCT clause, the matching resources or values
+will not be included in the results.
+
If the query is written in the complex schema, all variables in the CONSTRUCT
+clause must refer to DSP-API resources, DSP-API values, or properties. Data below
+the level of values may not be mentioned in the CONSTRUCT clause.
+
Predicates from the rdf, rdfs, and owl ontologies may not be used
+in the CONSTRUCT clause. The rdfs:label of each matching resource is always
+returned, so there is no need to mention it in the query.
+
Gravsearch by Example
+
In this section, we provide some sample queries of different complexity
+to illustrate the usage of Gravsearch.
+
Getting All the Components of a Compound Resource
+
In order to get all the components of a compound resource, the following
+Gravsearch query can be sent to the API.
+
In this case, the compound resource is an incunabula:book identified
+by the IRI http://rdfh.ch/0803/c5058f3a and the components are of
+type incunabula:page (test data for the Incunabula project). Since
+inference is assumed, we can use knora-api:StillImageRepresentation
+(incunabula:page is one of its subclasses). This makes the query more
generic and allows for reuse (for instance, when a client wants to query
different types of compound resources defined in different ontologies).
+
ORDER BY is used to sort the components by their sequence number.
+
OFFSET is set to 0 to get the first page of results.
+
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/simple/v2#>
+
+CONSTRUCT {
+ ?component knora-api:isMainResource true . # marking of the component searched for as the main resource, required
+ ?component knora-api:seqnum ?seqnum . # return the sequence number in the response
+ ?component knora-api:hasStillImageFileValue ?file . # return the StillImageFile in the response
+} WHERE {
+ ?component a knora-api:StillImageRepresentation . # restriction of the type of component
+ ?component knora-api:isPartOf <http://rdfh.ch/0803/c5058f3a> . # component relates to a compound resource via this property
+ ?component knora-api:seqnum ?seqnum . # component must have a sequence number
+ ?component knora-api:hasStillImageFileValue ?file . # component must have a StillImageFile
+}
+ORDER BY ASC(?seqnum) # order by sequence number, ascending
+OFFSET 0 # get first page of results
+
+
The incunabula:book with the IRI http://rdfh.ch/0803/c5058f3a has
+402 pages. (This result can be obtained by doing a count query; see
+Submitting Gravsearch Queries.)
+However, with OFFSET 0, only the first page of results is returned.
+The same query can be sent again with OFFSET 1 to get the next page of
+results, and so forth. When a page of results is not full (see settings
+in app/v2 in application.conf) or is empty, no more results are
+available.
+
By design, it is not possible for the client to get more than one page
+of results at a time; this is intended to prevent performance problems
+that would be caused by huge responses. A client that wants to download
+all the results of a query must request each page sequentially.
+
Let's assume the client is not interested in all of the book's pages,
but just in the first ten of them. In that case, the sequence number can be
+restricted using a FILTER that is added to the query's WHERE clause:
+
FILTER (?seqnum <= 10)
+
+
The first page starts with sequence number 1, so with this FILTER only
+the first ten pages are returned.
+
This query would be exactly the same in the complex schema, except for
+the expansion of the knora-api prefix:
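That is, in the complex schema the prefix declaration becomes:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
```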
If we remove the line ?book incunabula:title ?title . from the CONSTRUCT
+clause, so that the CONSTRUCT clause no longer mentions ?title, the response
+will contain the same matching resources, but the titles of those resources
+will not be included in the response.
+
Requesting a Graph Starting with a Known Resource
+
Here the IRI of the main resource is already known and we want specific information
+about it, as well as about related resources. In this case, the IRI of the main
+resource must be assigned to a variable using BIND:
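A sketch, requesting the title of the incunabula:book used in the earlier examples:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX incunabula: <http://0.0.0.0:3333/ontology/0803/incunabula/v2#>

CONSTRUCT {
  ?book knora-api:isMainResource true .
  ?book incunabula:title ?title .
} WHERE {
  BIND(<http://rdfh.ch/0803/c5058f3a> AS ?book)
  ?book a incunabula:book .
  ?book incunabula:title ?title .
}
```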
Searching for a List Value Referring to a Particular List Node
+
Since list nodes are represented by their IRI in the complex schema,
uniqueness is guaranteed (as opposed to the simple schema).
Also, all subnodes of the given list node are considered a match.
Gravsearch needs to be able to determine the types of the entities that
+query variables and IRIs refer to in the WHERE clause. In most cases, it can
+infer these from context and from the ontologies used. In particular, it needs to
+know:
+
+
The type of the subject and object of each statement.
+
The type that is expected as the object of each predicate.
+
+
Type Annotations
+
When one or more types cannot be inferred, Gravsearch will return an error message
+indicating the entities for which it could not determine types. The missing
information must then be given by adding type annotations to the query. This can always be done by
+adding statements with the predicate rdf:type. The subject must be a resource or value,
+and the object must either be knora-api:Resource (if the subject is a resource)
+or the subject's specific type (if it is a value).
+
For example, consider this query that uses a non-DSP property:
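A sketch of such a query, with no type annotations (dcterms:title is not a DSP property, so the types of its subject and object cannot be inferred):

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/simple/v2#>
PREFIX dcterms: <http://purl.org/dc/terms/>

CONSTRUCT {
  ?book knora-api:isMainResource true .
  ?book dcterms:title ?title .
} WHERE {
  ?book dcterms:title ?title .
}
```

Submitting it produces an error message like the following: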
The types of one or more entities could not be determined:
+ ?book, <http://purl.org/dc/terms/title>, ?title
+
+
To solve this problem, it is enough to specify the types of ?book and
+?title; the type of the expected object of dcterms:title can then be inferred
+from the type of ?title.
One or more entities have inconsistent types:
+
+<http://0.0.0.0:3333/ontology/0803/incunabula/simple/v2#pubdate>
+ knora-api:objectType <http://api.knora.org/ontology/knora-api/simple/v2#Date> ;
+ knora-api:objectType <http://www.w3.org/2001/XMLSchema#string> .
+
+?pubdate rdf:type <http://api.knora.org/ontology/knora-api/simple/v2#Date> ;
+ rdf:type <http://www.w3.org/2001/XMLSchema#string> .
+
+
This is because the incunabula ontology says that the object of incunabula:pubdate must be a knora-api:Date,
+but the FILTER expression compares ?pubdate with an xsd:string. The solution is to specify the
+type of the literal in the FILTER:
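For example (a sketch, assuming a comparison with the date JULIAN:1497-03-01):

```sparql
?book incunabula:pubdate ?pubdate .
FILTER(?pubdate = "JULIAN:1497-03-01"^^knora-api:Date)
```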
SPARQL is evaluated from the bottom up.
+A UNION block therefore opens a new scope, in which variables bound at
+higher levels are not necessarily in scope. This can cause unexpected results if queries
+are not carefully designed. Gravsearch tries to prevent this by rejecting queries in the
+following cases.
+
FILTER in UNION
+
A FILTER in a UNION block can only use variables that are bound in the same block, otherwise the query will be rejected. This query is invalid because ?text is not bound in the UNION block containing the FILTER where the variable is used:
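A sketch of such an invalid query (the letter: ontology and its properties are hypothetical); ?text is bound in the first UNION branch, but the FILTER in the second branch tries to use it:

```sparql
PREFIX knora-api: <http://api.knora.org/ontology/knora-api/v2#>
PREFIX letter: <http://0.0.0.0:3333/ontology/0801/letter/v2#>   # hypothetical

CONSTRUCT {
  ?letter knora-api:isMainResource true .
} WHERE {
  ?letter a letter:letter .
  {
    ?letter letter:hasText ?text .
  } UNION {
    ?letter letter:hasComment ?comment .
    # Invalid: ?text is bound in the other UNION block, not in this one
    FILTER knora-api:matchText(?text, "Grund")
  }
}
```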
A variable used in ORDER BY must be bound at the top level of the WHERE clause. This query is invalid, because ?int is not bound at the top level of the WHERE clause:
The query performance of triplestores, such as Fuseki, is highly dependent on the order of query
+patterns. To improve performance, Gravsearch automatically reorders the
+statement patterns in the WHERE clause according to their dependencies on each other, to minimise
+the number of possible matches for each pattern.
To retrieve an existing resource, the HTTP method GET has to be used.
+Reading resources may require authentication, since some resources may
+have restricted viewing permissions.
Operations for reading and searching resources can return responses in either the
+simple or the complex ontology schema. The complex schema is used by default.
+To receive a response in the simple schema, use the HTTP request header or URL
+parameter described in API Schema.
+
Each DSP-API v2 response describing one or more resources returns a
+single RDF graph. For example, a request for a single resource returns that
+resource and all its values. In a full-text search, the resource is returned with the
+values that matched the search criteria. A response to an extended search
+may represent a whole graph of interconnected resources.
+
In JSON-LD, if only one resource is returned, it is the top-level object;
+if more than one resource is returned, they are represented as an array
+of objects in the @graph member of the top-level object (see
+Named Graphs in the
+JSON-LD specification).
+
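As a sketch, a client can handle both cases uniformly; the dictionaries below are minimal stand-ins for real JSON-LD responses, not complete DSP-API output:

```python
def extract_resources(response: dict) -> list:
    """Return the resources of a DSP-API v2 JSON-LD response as a list.

    A single resource is the top-level object itself; multiple resources
    are the objects in the "@graph" array of the top-level object.
    """
    if "@graph" in response:
        return response["@graph"]
    return [response]

print(extract_resources({"@graph": [{"@id": "a"}, {"@id": "b"}]}))
# [{'@id': 'a'}, {'@id': 'b'}]
```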
In the complex schema, dependent resources, i.e. resources that are referred
+to by other resources on the top level, are nested in link value objects.
+If resources on the top level are referred to by other resources and
+these links are part of the response, virtual incoming links are generated;
+see Gravsearch: Virtual Graph Search.
+
See the interfaces Resource and ResourcesSequence in module
+ResourcesResponse (exists for both API schemas: ApiV2Simple and
+ApiV2WithValueObjects).
+
Requesting Text Markup as XML
+
When requesting a text value with standoff markup, there are three possibilities:
+
+
The text value uses standard mapping.
+
The text value uses a custom mapping which does not specify an XSL transformation.
+
The text value uses a custom mapping which specifies an XSL transformation.
+
+
In the first case, the mapping will be defined as:
where the content of <text> is a limited set of HTML tags that can be handled by CKEditor in DSP-APP.
+This allows for both displaying and editing the text value.
+
In the second and third case, kb:textValueHasMapping will point to the custom mapping
+that may or may not specify an XSL transformation.
+
If no transformation is specified (second case), the text value will be returned only as kb:textValueAsXml.
+This property will be a string containing the contents of the initially uploaded XML.
+
Note: The returned XML document is equivalent to the uploaded document, but it is not necessarily identical:
+the order of the attributes in an element may vary from the original.
+
In the third case, when a transformation is specified, both kb:textValueAsXml and kb:textValueAsHtml will be returned.
+kb:textValueAsHtml is the result of the XSL transformation applied to kb:textValueAsXml.
+The HTML representation is intended to display the text value in a human readable and properly styled way,
+while the XML representation can be used to update the text value.
+
Get the Representation of a Resource by IRI
+
Get a Full Representation of a Resource by IRI
+
A full representation of a resource can be obtained by making a GET
+request to the API, providing its IRI. Because a DSP IRI has the format
+of a URL, it has to be URL-encoded.
+
To get the resource with the IRI http://rdfh.ch/c5058f3a (a
+book from the sample Incunabula project, which is included in the DSP-API
+server's test data), make an HTTP GET request to the resources
+route (path segment resources in the API call) and append the
+URL-encoded IRI:
+
HTTP GET to http://host/v2/resources/http%3A%2F%2Frdfh.ch%2Fc5058f3a
+
+
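The URL-encoding step can be sketched with the standard library; the host name is a placeholder:

```python
from urllib.parse import quote

def resource_request_url(host: str, *iris: str) -> str:
    """Build a /v2/resources request URL from one or more resource IRIs.

    Each IRI is URL-encoded (safe="" so that ':' and '/' are escaped too),
    and several IRIs are separated by slashes.
    """
    encoded = "/".join(quote(iri, safe="") for iri in iris)
    return f"{host}/v2/resources/{encoded}"

print(resource_request_url("http://host", "http://rdfh.ch/c5058f3a"))
# http://host/v2/resources/http%3A%2F%2Frdfh.ch%2Fc5058f3a
```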
If necessary, several resources can be queried at the same time, their
+IRIs separated by slashes. Please note that the number of resources that
+can be queried in one request is limited. See the settings for
+app/v2 in application.conf.
+
More formally, the URL looks like this:
+
HTTP GET to http://host/v2/resources/resourceIRI(/anotherResourceIri)*
+
+
Get a Full Representation of a Version of a Resource by IRI
+
To get a specific past version of a resource, use the route described in
+Get a Full Representation of a Resource by IRI,
+and add the URL parameter ?version=TIMESTAMP, where TIMESTAMP is an
+xsd:dateTimeStamp in the
+UTC timezone. The timestamp can either be URL-encoded, or submitted with all
+punctuation (-, :, and .) removed (this is to accept timestamps
+from DSP's ARK URLs).
+
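The two accepted timestamp forms can be produced like this (a sketch; the function name and the sample timestamp are illustrative):

```python
import re
from urllib.parse import quote

def version_param(timestamp: str, ark_style: bool = False) -> str:
    """Format an xsd:dateTimeStamp for the ?version= URL parameter.

    The timestamp can either be URL-encoded, or submitted with all
    punctuation (-, :, and .) removed, as in DSP's ARK URLs.
    """
    if ark_style:
        return re.sub(r"[-:.]", "", timestamp)
    return quote(timestamp, safe="")

print(version_param("2019-02-12T08:05:10.351Z", ark_style=True))
# 20190212T080510351Z
```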
The resource will be returned with the values that it had at the specified
+time. Since DSP only versions values, not resource metadata (e.g.
+rdfs:label), the current metadata will be returned.
+
Each value will be returned with the permissions that are attached to
+the current version of the value
+(see Permissions).
+
The returned resource will include the predicate knora-api:versionDate,
+containing the timestamp that was submitted, and its knora-api:versionArkUrl
+(see Resource Permalinks) will contain the
+same timestamp.
+
Get a Value in a Resource
+
To get a specific value of a resource, use this route:
+
HTTP GET to http://host/v2/values/resourceIRI/valueUUID
+
+
The resource IRI must be URL-encoded. The path element valueUUID is the
+string object of the value's knora-api:valueHasUUID.
+
The value will be returned within its containing resource, in the same format
+as for Responses Describing Resources,
+but without any of the resource's other values.
+
Get a Version of a Value in a Resource
+
To get a particular version of a specific value of a resource, use the route
+described in Get a Value in a Resource,
+and add the URL parameter ?version=TIMESTAMP, where TIMESTAMP is an
+xsd:dateTimeStamp in the
+UTC timezone. The timestamp can either be URL-encoded, or submitted with all
+punctuation (-, :, and .) removed (this is to accept timestamps
+from DSP's ARK URLs).
+
The value will be returned within its containing resource, in the same format
+as for Responses Describing Resources,
+but without any of the resource's other values.
+
Since DSP only versions values, not resource metadata (e.g.
+rdfs:label), the current resource metadata will be returned.
+
The value will be returned with the permissions that are attached to
+its current version
+(see Permissions).
+
Get the Version History of a Resource
+
To get a list of the changes that have been made to a resource since its creation,
+use this route:
+
HTTP GET to http://host/v2/resources/history/resourceIRI[?startDate=START_DATE&endDate=END_DATE]
+
+
The resource IRI must be URL-encoded. The start and end dates are optional, and
+are URL-encoded timestamps in
+xsd:dateTimeStamp format.
+The start date is inclusive, and the end date is exclusive.
+If the start date is not provided, the resource's history since its creation is returned.
+If the end date is not provided, the resource's history up to the present is returned.
+
The response is a list of changes made to the resource, in reverse chronological order.
+Each entry has the properties knora-api:author (the IRI of the user who made the change) and
+knora-api:versionDate (the date when the change was made). For example:
The entries include all the dates when the resource's values were created or modified (within
+the requested date range), as well as the date when the resource was created (if the requested
+date range allows it). Each date is included only once. Since DSP only versions values, not
+resource metadata (e.g. rdfs:label), changes to a resource's metadata are not included in its
+version history.
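Building the history request with its optional date range can be sketched as follows; the host and IRI are placeholders:

```python
from urllib.parse import quote, urlencode

def history_url(host: str, resource_iri: str, start: str = None, end: str = None) -> str:
    """Build a resource version-history request URL.

    The start date (inclusive) and end date (exclusive) are optional
    xsd:dateTimeStamp values; urlencode takes care of URL-encoding them.
    """
    url = f"{host}/v2/resources/history/{quote(resource_iri, safe='')}"
    params = {}
    if start is not None:
        params["startDate"] = start
    if end is not None:
        params["endDate"] = end
    return url + ("?" + urlencode(params) if params else "")

print(history_url("http://host", "http://rdfh.ch/c5058f3a", start="2019-02-12T08:05:10Z"))
# http://host/v2/resources/history/http%3A%2F%2Frdfh.ch%2Fc5058f3a?startDate=2019-02-12T08%3A05%3A10Z
```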
In some cases, the client may only want to request the preview of a
+resource, which just provides its metadata (e.g. its IRI, rdfs:label,
+and type), without its values.
+
This works exactly like making a conventional resource request, using
+the path segment resourcespreview:
+
HTTP GET to http://host/v2/resourcespreview/resourceIRI(/anotherResourceIri)*
+
+
Get a Graph of Resources
+
DSP can return a graph of connections between resources, e.g. for generating a network diagram.
+
HTTP GET to http://host/v2/graph/resourceIRI[depth=Integer]
+[direction=outbound|inbound|both][excludeProperty=propertyIri]
+
+
The first parameter must be preceded by a question mark ?, any
+following parameter by an ampersand &.
+
+
depth must be at least 1. The maximum depth is a DSP configuration setting.
+ The default is 4.
+
direction specifies the direction of the links to be queried, i.e. links to
+ and/or from the given resource. The default is outbound.
+
excludeProperty is an optional link property to be excluded from the
+ results.
+
+
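The parameter rules above can be sketched like this; the host and resource IRI are placeholders:

```python
from urllib.parse import quote, urlencode

def graph_url(host: str, resource_iri: str, **params) -> str:
    """Build a /v2/graph request URL.

    Recognised parameters: depth (at least 1; default 4), direction
    (outbound, inbound, or both; default outbound), and excludeProperty
    (a link property IRI to exclude). urlencode produces the
    '?first&second' separators and encodes IRI values.
    """
    url = f"{host}/v2/graph/{quote(resource_iri, safe='')}"
    return url + ("?" + urlencode(params) if params else "")

print(graph_url("http://host", "http://rdfh.ch/c5058f3a", depth=2, direction="both"))
# http://host/v2/graph/http%3A%2F%2Frdfh.ch%2Fc5058f3a?depth=2&direction=both
```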
To accommodate large graphs, the graph response format is very concise, and is therefore
+simpler than the usual resources response format. Each resource is represented only by its IRI,
+class, and label. Direct links are shown instead of link values. For example:
DSP offers the possibility to search for resources by their
+rdfs:label. The use case for this search is to find a specific
+resource as you type. E.g., the user wants to get a list of resources
+whose rdfs:label contain some search terms separated by a whitespace
+character:
+
+
Zeit
+
Zeitg
+
...
+
Zeitglöcklein d
+
...
+
Zeitglöcklein des Lebens
+
+
With each character added to the last term, the selection gets more
+specific. The first term should contain at least three characters. To
+make this kind of "search as you type" possible, a wildcard character is
+automatically added to the last search term.
+
Characters provided by the user that have a special meaning in the Lucene Query Parser
+syntax need to be escaped. If a user wants to search for the string "Zeit-Glöcklein", she
+needs to type "Zeit\-Glöcklein". The special characters that need escaping are:
++, -, &, |, !, (, ), [, ], {, }, ^, ", ~, *, ?, :, \, /
+
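A minimal escaping helper for this character list (the function name is illustrative; the list above is taken as complete):

```python
import re

def escape_lucene(term: str) -> str:
    """Escape characters with a special meaning in Lucene Query Parser syntax:
    + - & | ! ( ) [ ] { } ^ " ~ * ? : \\ /
    """
    return re.sub(r'([+\-&|!(){}\[\]^"~*?:\\/])', r'\\\1', term)

print(escape_lucene("Zeit-Glöcklein"))  # Zeit\-Glöcklein
```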
HTTP GET to http://host/v2/searchbylabel/searchValue[limitToResourceClass=resourceClassIRI]
+[limitToProject=projectIRI][offset=Integer]
+
+
The first parameter must be preceded by a question mark ?, any
+following parameter by an ampersand &.
+
The default value for the parameter offset is 0, which returns the
+first page of search results. Subsequent pages of results can be fetched
+by increasing offset by one. The number of results per page is defined
+in app/v2 in application.conf.
+
For performance reasons, standoff markup is not queried for this route.
+
To request the number of results rather than the results themselves, you can
+do a count query:
+
HTTP GET to http://host/v2/searchbylabel/count/searchValue[limitToResourceClass=resourceClassIRI][limitToProject=projectIRI][offset=Integer]
+
+
The response to a count query request is an object with one predicate,
+http://schema.org/numberOfItems, with an integer value.
+
Full-text Search
+
DSP offers a full-text search that searches through all textual
+representations of values and rdfs:label of resources.
+Full-text search supports the
+Lucene Query Parser syntax.
+Note that Lucene's default operator is a logical OR when submitting several search terms.
+
The search index used by DSP transforms all text into lower case characters and splits text into tokens by whitespace.
+For example, if a text value is: The cake needs flour, sugar, and butter.,
+the tokens are the, cake, needs, flour,, sugar,, and, butter..
+Note that punctuation marks like , and . are left with the word where they occurred.
+Therefore, if you search for sugar you would have to use sugar* or sugar?
+to get results that contain sugar, or sugar. as well.
+The reason for this kind of tokenization is
+that some users need to be able to search explicitly for special characters including punctuation marks.
+
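The tokenization described above (lower-casing, splitting on whitespace, punctuation kept with its word) can be approximated in a few lines:

```python
def tokenize(text: str) -> list:
    """Approximate the search index tokenization: lower-case the text and
    split it into tokens by whitespace. Punctuation marks stay attached
    to the word where they occurred."""
    return text.lower().split()

print(tokenize("The cake needs flour, sugar, and butter."))
# ['the', 'cake', 'needs', 'flour,', 'sugar,', 'and', 'butter.']
```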
Alphabetic, numeric, symbolic, and diacritical Unicode characters
+which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block)
+are converted into their ASCII equivalents, if one exists, e.g. é or ä are converted into e and a.
+
Please note that the search terms have to be URL-encoded.
+
HTTP GET to http://host/v2/search/searchValue[limitToResourceClass=resourceClassIRI]
+[limitToStandoffClass=standoffClassIri][limitToProject=projectIRI][offset=Integer]
+
+
The first parameter has to be preceded by a question mark ?, any following parameter by an ampersand &.
+
A search value must have a minimum length of three characters (the default),
+as defined in search-value-min-length in application.conf.
+
A search term may contain wildcards. A ? represents a single character.
+It has to be URL-encoded as %3F since it has a special meaning in the URL syntax.
+For example, the term Uniform can be searched for like this:
+
HTTP GET to http://host/v2/search/Unif%3Frm
+
+
A * represents zero, one or multiple characters. For example, the term Uniform can be searched for like this:
+
HTTP GET to http://host/v2/search/Uni*m
+
+
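URL-encoding the search value handles the ? wildcard automatically, since quote turns ? into %3F; the host is a placeholder:

```python
from urllib.parse import quote

def search_url(host: str, search_value: str) -> str:
    """Build a full-text search URL. The '?' wildcard is encoded as %3F,
    so it is not mistaken for the start of URL query parameters."""
    return f"{host}/v2/search/{quote(search_value, safe='')}"

print(search_url("http://host", "Unif?rm"))
# http://host/v2/search/Unif%3Frm
```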
The default value for the parameter offset is 0 which returns the
+first page of search results. Subsequent pages of results can be fetched
+by increasing offset by one. The number of results per page is defined
+in results-per-page in application.conf.
+
If the parameter limitToStandoffClass is provided, DSP will look for search terms
+that are marked up with the indicated standoff class.
+
If the parameter returnFiles=true is provided, DSP will return any
+file value attached to each matching resource.
+
To request the number of results rather than the results themselves, you can
+do a count query:
+
HTTP GET to http://host/v2/search/count/searchValue[limitToResourceClass=resourceClassIRI][limitToStandoffClass=standoffClassIri][limitToProject=projectIRI][offset=Integer]
+
+
The first parameter has to be preceded by a question
+mark ?, any following parameter by an ampersand &.
+
The response to a count query request is an object with one predicate,
+http://schema.org/numberOfItems, with an integer value.
To convert standoff markup to TEI/XML, see TEI/XML.
+
IIIF Manifests
+
This is an experimental feature and may change.
+
To generate a IIIF manifest for a resource, containing
+the still image representations that have knora-api:isPartOf (or a subproperty)
+pointing to that resource:
+
HTTP GET to http://host/v2/resources/iiifmanifest/RESOURCE_IRI
+
+
Reading Resources by Class from a Project
+
To facilitate the development of tabular user interfaces for data entry, it is
+possible to get a paged list of all the resources belonging to a particular
+class in a given project, sorted by the value of a property:
+
HTTP GET to http://host/v2/resources?resourceClass=RESOURCE_CLASS_IRI&page=PAGE[&orderByProperty=PROPERTY_IRI]
+
+
This is useful only if the project does not contain a large amount of data;
+otherwise, you should use Gravsearch to search
+using more specific criteria.
+
The specified class and property are used without inference; they will not
+match subclasses or subproperties.
+
The HTTP header X-Knora-Accept-Project must be submitted; its value is
+a DSP project IRI. In the request URL, the values of resourceClass and orderByProperty
+are URL-encoded IRIs in the complex schema.
+The orderByProperty parameter is optional; if it is not supplied, resources will
+be sorted alphabetically by resource IRI (an arbitrary but consistent order).
+The value of page is a 0-based integer page number. Paging works as it does
+in Gravsearch.
+
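A sketch of building such a request; the project and class IRIs below are made up for illustration:

```python
from urllib.parse import urlencode

def resources_by_class_request(host: str, project_iri: str, class_iri: str,
                               page: int, order_by: str = None):
    """Build the URL and headers for paging through a class's resources.

    The class and property IRIs are URL-encoded IRIs in the complex
    schema; the project IRI goes into the X-Knora-Accept-Project header.
    """
    params = {"resourceClass": class_iri, "page": page}
    if order_by is not None:
        params["orderByProperty"] = order_by
    url = f"{host}/v2/resources?" + urlencode(params)
    headers = {"X-Knora-Accept-Project": project_iri}
    return url, headers

url, headers = resources_by_class_request(
    "http://host", "http://rdfh.ch/projects/0001",
    "http://example.org/onto/v2#Book", page=0)
print(url)
```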
Get the Full History of a Resource and its Values as Events
+
To get a list of the changes that have been made to a resource and its values since its creation as events ordered by
+date:
+
HTTP GET to http://host/v2/resources/resourceHistoryEvents/<resourceIRI>
+
+
The resource IRI must be URL-encoded. The response is a list of events describing changes made to the resource and its values,
+ in chronological order. Each entry has the properties:
+ knora-api:eventType (the type of the operation performed on a specific date. The operation can be either
+ createdResource, updatedResourceMetadata, deletedResource, createdValue, updatedValueContent,
+ updatedValuePermissions, or deletedValue.),
+knora-api:versionDate (the date when the change was made),
+knora-api:author (the IRI of the user who made the change),
+knora-api:eventBody (the information necessary to make the same request).
+
For example, the following response contains the list of events describing the version history of the resource
+http://rdfh.ch/0001/thing-with-history ordered by date:
Since the history of changes made to the metadata of a resource is not part of the resource's version history, there are no
+events describing changes to metadata elements like its rdfs:label or rdfs:comment.
+The only record of a change in a resource's metadata is the knora-api:lastModificationDate of the resource. Thus,
+when the event updatedResourceMetadata indicates a change in a resource's metadata, its knora-api:eventBody contains the
+payload needed to update the value of the resource's lastModificationDate; see
+modifying metadata of a resource.
+
Get the Full History of all Resources of a Project as Events
+
To get a list of the changes that have been made to the resources and their values of a project as events ordered by
+date:
+
HTTP GET to http://host/v2/resources/projectHistoryEvents/<projectIRI>
+
+
The project IRI must be URL-encoded. The response contains the resource history events of all resources that belong to
+the specified project.
Reading the User's Permissions on Resources and Values
+
In the complex API schema, each
+resource and value is returned with the predicate knora-api:userHasPermission.
+The object of this predicate is a string containing a permission code, which
+indicates the requesting user's maximum permission on the resource or value.
+These are the possible permission codes, in ascending order:
+
+
RV: restricted view permission (least privileged)
+
V: view permission
+
M: modify permission
+
D: delete permission
+
CR: change rights permission (most privileged)
+
+
Each permission implies all lesser permissions. For more details, see
+Permissions.
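Since each code implies all lesser permissions, a client can compare codes by their position in the ascending order above (a sketch; the function name is illustrative):

```python
# Permission codes in ascending order of privilege.
PERMISSION_ORDER = ["RV", "V", "M", "D", "CR"]

def implies(held: str, required: str) -> bool:
    """True if the held permission code grants at least the required one,
    because each permission implies all lesser permissions."""
    return PERMISSION_ORDER.index(held) >= PERMISSION_ORDER.index(required)

print(implies("D", "V"))   # True
print(implies("RV", "M"))  # False
```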
DSP-API's standard standoff mapping supports only a few HTML tags. In order to
+submit more complex XML markup, a custom mapping has to be
+created first. A mapping expresses the relations between XML
+elements and attributes, and their corresponding standoff classes and
+properties. The relations expressed in a mapping are one-to-one
+relations, so the XML can be recreated from the data in RDF. However,
+since HTML offers a very limited set of elements, custom mappings support
+the combination of element names and classes. In this way, the same
+element can be used several times in combination with another classname
+(please note that <a> without a class is a hyperlink whereas <a class="salsah-link"> is an internal link/standoff link).
+
With a mapping, a default XSL transformation may be provided to
+transform the XML to HTML before sending it back to the client. This is
+useful when the client is a web-browser expecting HTML (instead of XML).
+
Basic Structure of a Mapping
+
The mapping is written in XML itself (for a formal description, see
+webapi/src/resources/mappingXMLToStandoff.xsd). It has the following
+structure (the indentation corresponds to the nesting in XML):
+
+
<mapping>: the root element
+
<defaultXSLTransformation> (optional): the IRI of the
+ default XSL transformation to be applied to the XML when
+ reading it back from DSP-API. The XSL transformation is
+ expected to produce HTML. If given, the IRI has to refer to
+ a resource of type knora-base:XSLTransformation.
+
<mappingElement>: an element of the mapping (at least one)
+
<tag>: information about the XML element that is mapped to a standoff class
+
<name>: name of the XML element
+
<class>: value of the class attribute of
+ the XML element, if any. If the element has
+ no class attribute, the keyword noClass
+ has to be used.
+
<namespace>: the namespace the XML element
+ belongs to, if any. If the element does not
+ belong to a namespace, the keyword
+ noNamespace has to be used.
+
<separatesWords>: a Boolean value
+ indicating whether this tag separates words
+ in the text. Once an XML document is
+ converted to RDF-standoff the markup is
+ stripped from the text, possibly leading to
+ continuous text that has been separated by
+ tags before. For structural tags like
+ paragraphs etc., <separatesWords> can be
+ set to true in which case a special
+ separator is inserted in the text in the
+ RDF representation. In this way, words stay
+ separated and are represented in the
+ fulltext index as such.
+
+
+
<standoffClass>: information about the standoff class the XML element is mapped to
+
<classIri>: IRI of the standoff class the XML element is mapped to
+
<attributes>: XML attributes to be mapped to standoff properties (other than id or class), if any
+
<attribute>: an XML attribute to be mapped to a standoff property, may be repeated
+
<attributeName>: the name of the XML attribute
+
<namespace>: the namespace the attribute belongs to, if any.
+ If the attribute does not belong to a namespace, the keyword noNamespace has to be used.
+
<propertyIri>: the IRI of the standoff property the XML attribute is mapped to.
+
+
+
+
+
<datatype>: the data type of the standoff class, if any.
+
<type>: the IRI of the data type standoff class
+
<attributeName>: the name of the attribute holding the typed value in the expected standard format
Please note that the absence of an XML namespace and/or a class has to
+be explicitly stated using the keywords noNamespace and
+noClass. This is because we use XML Schema validation to ensure the one-to-one
+relations between XML elements and standoff classes. XML Schema validation's unique checks
+do not support optional values.
+
id and class Attributes
+
The id and class attributes are supported by default and do not have
+to be included in the mapping like other attributes. The id attribute
+identifies an element and must be unique in the document. id is an
+optional attribute. The class attribute allows for the reuse of an
+element in the mapping, i.e. the same element can be combined with
+different class names and mapped to different standoff classes (mapping
+element <class> in <tag>).
+
Respecting Cardinalities
+
A mapping from XML elements and attributes to standoff classes and
+standoff properties must respect the cardinalities defined in the
+ontology for those very standoff classes. If an XML element is mapped to
+a certain standoff class and this class requires a standoff property, an
+attribute must be defined for the XML element mapping to that very
+standoff property. Equally, all mappings for attributes of an XML
+element must have corresponding cardinalities for standoff properties
+defined for the standoff class the XML element maps to.
+
However, since an XML attribute may occur once at maximum, it makes
+sense to make the corresponding standoff property either required
+(owl:cardinality of one) or optional (owl:maxCardinality of one) in
+the ontology, but not to allow it more than once.
+
Standoff Data Types
+
DSP-API allows the use of all its value types as standoff data types
+(defined in knora-base.ttl):
+
+
knora-base:StandoffLinkTag: Represents a reference to a
+ resource (the IRI of the target resource must be submitted in the
+ data type attribute).
+
knora-base:StandoffInternalReferenceTag: Represents an internal
+ reference inside a document (the id of the target element inside the
+ same document must be indicated in the data type attribute); see
+ Internal References in an XML Document.
+
knora-base:StandoffUriTag: Represents a reference to a URI (the
+ URI of the target resource must be submitted in the data type
+ attribute).
+
knora-base:StandoffDateTag: Represents a date (a date
+ string must be submitted in the data type attribute, e.g.
+ GREGORIAN:2017-01-27).
+
knora-base:StandoffColorTag: Represents a color (a hexadecimal
+ RGB color string must be submitted in the data type attribute, e.g.
+ #0000FF).
+
knora-base:StandoffIntegerTag: Represents an integer (the integer
+ must be submitted in the data type attribute).
+
knora-base:StandoffDecimalTag: Represents a number with fractions
+ (the decimal number must be submitted in the data type attribute,
+ e.g. 1.1).
+
knora-base:StandoffIntervalTag: Represents an interval (two
+ decimal numbers separated with a comma must be submitted in the data
+ type attribute, e.g. 1.1,2.2).
+
knora-base:StandoffBooleanTag: Represents a Boolean value (true
+ or false must be submitted in the data type attribute).
+
knora-base:StandoffTimeTag: Represents a timestamp value (an xsd:dateTimeStamp
+ must be submitted in the data type attribute).
+
+
The basic idea is that parts of a text can be marked up in a way that
+allows using DSP-API's built-in data types. In order to do so, the typed
+values have to be provided in a standardized way in an attribute that
+has to be defined in the mapping.
+
Data type standoff classes are standoff classes with predefined
+properties (e.g., a knora-base:StandoffLinkTag has a
+knora-base:standoffTagHasLink and a knora-base:StandoffIntegerTag
+has a knora-base:valueHasInteger). Please note that data type standoff
+classes cannot be combined, i.e. a standoff class can only be the
+subclass of one data type standoff class. However, standoff data
+type classes can be subclassed and extended further by assigning
+properties to them (see below).
+
The following simple mapping illustrates this principle:
<datatype> must hold the IRI of a standoff data type class (see the
+list above). The <classIri> must be a subclass of this type or the
+type itself (the latter is not recommended, since semantics
+are missing: what is the meaning of the date?). In the example above,
+the standoff class is anything:StandoffEventTag which has the
+following definition in the ontology anything-onto.ttl:
+
anything:StandoffEventTag rdf:type owl:Class ;
+
+ rdfs:subClassOf knora-base:StandoffDateTag,
+ [
+ rdf:type owl:Restriction ;
+ owl:onProperty :standoffEventTagHasDescription ;
+ owl:cardinality "1"^^xsd:nonNegativeInteger
+ ] ;
+
+ rdfs:label "Represents an event in a TextValue"@en ;
+
+ rdfs:comment """Represents an event in a TextValue"""@en .
+
+
anything:StandoffEventTag is a subclass of
+knora-base:StandoffDateTag and therefore has the data type date. It
+also requires the standoff property
+anything:standoffEventTagHasDescription which is defined as an
+attribute in the mapping.
+
Once the mapping has been created, an XML like the following could be
+sent to DSP-API and converted to standoff:
The attribute holds the date in the format of a DSP-API date string (the
+format is also documented in the typescript type alias dateString in
+module basicMessageComponents. There you will also find documentation
+about the other types like color etc.). DSP-API date strings have this
+format: (GREGORIAN|JULIAN):YYYY[-MM[-DD]][:YYYY[-MM[-DD]]]. This allows
+for different formats as well as for imprecision and periods. Intervals
+are submitted as one attribute in the following format:
+interval-attribute="1.0,2.0" (two decimal numbers separated with a
+comma).
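The date string format can be checked with a simple pattern; this is a simplified sketch that covers only the calendar, year, month, and day parts shown above (range checks and further options of real DSP-API date strings are omitted):

```python
import re

# (GREGORIAN|JULIAN):YYYY[-MM[-DD]][:YYYY[-MM[-DD]]]
DATE_RE = re.compile(
    r"^(GREGORIAN|JULIAN):\d{4}(-\d{2}(-\d{2})?)?(:\d{4}(-\d{2}(-\d{2})?)?)?$"
)

def is_dsp_date(s: str) -> bool:
    """Check whether a string matches the simplified date pattern above."""
    return DATE_RE.match(s) is not None

print(is_dsp_date("GREGORIAN:2017-01-27"))  # True
print(is_dsp_date("JULIAN:1500:1510"))      # True (a period)
print(is_dsp_date("2017-01-27"))            # False (calendar missing)
```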
+
You will find a sample mapping with all the data types and a sample XML
+file in the test data:
+test_data/test_route/texts/mappingForHTML.xml and
+test_data/test_route/texts/HTML.xml.
+
Internal References in an XML Document
+
Internal references inside an XML document can be represented using the
+data type standoff class knora-base:StandoffInternalReferenceTag or a
+subclass of it. This class has a standoff property that points to a
+standoff node representing the target XML element when converted to RDF.
+
The following example shows the definition of a mapping element for an
+internal reference (for reasons of simplicity, only the mapping element
+for the element in question is depicted):
Predefined standoff classes may be used by various projects, each
+providing a custom mapping to be able to recreate the original XML from
+RDF. Predefined standoff classes may also be inherited and extended in
+project specific ontologies.
When mapping XML attributes to standoff properties, attention has to be
+paid to the properties' object constraints.
+
In the ontology, standoff property literals may have one of the
+following knora-base:objectDatatypeConstraint:
+
+
xsd:string
+
xsd:integer
+
xsd:boolean
+
xsd:decimal
+
xsd:anyURI
+
+
In XML, all attribute values are submitted as strings. However, these
+string representations need to be convertible to the types defined in
+the ontology. If they are not, the request will be rejected. It is
+recommended to enforce types on attributes by applying XML Schema
+validations (restrictions).
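Such a conversion step can be sketched as follows (an assumption about how a client might pre-validate attribute values; xsd:anyURI syntax checking is omitted):

```python
from decimal import Decimal

def to_xsd_boolean(s: str) -> bool:
    """Parse an xsd:boolean lexical value (true/false/1/0)."""
    if s in ("true", "1"):
        return True
    if s in ("false", "0"):
        return False
    raise ValueError(f"not an xsd:boolean: {s!r}")

# Converters for the objectDatatypeConstraint values listed above.
CONVERTERS = {
    "xsd:string": str,
    "xsd:integer": int,
    "xsd:boolean": to_xsd_boolean,
    "xsd:decimal": Decimal,
    "xsd:anyURI": str,
}

def convert_attribute(value: str, datatype: str):
    """Convert an XML attribute string to its constrained type; raises
    if the string is not convertible, as the API would reject it."""
    return CONVERTERS[datatype](value)

print(convert_attribute("42", "xsd:integer"))    # 42
print(convert_attribute("true", "xsd:boolean"))  # True
```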
+
Links (object property) to a knora-base:Resource can be represented
+using the data type standoff class knora-base:StandoffLinkTag,
+internal links using the data type standoff class
+knora-base:StandoffInternalReferenceTag.
+
Validating a Mapping and sending it to DSP-API
+
A mapping can be validated before sending it to DSP-API with the following
+XML Schema file: webapi/src/resources/mappingXMLToStandoff.xsd. Any
+mapping that does not conform to this XML Schema file will be rejected
+by DSP-API.
+
The mapping has to be sent as a multipart request to the standoff route
+using the path segment mapping:
+
HTTP POST http://host/v2/mapping
+
+
The multipart request consists of two named parts:
A successful response returns the IRI of the mapping. However, the IRI
+of a mapping is predictable: it consists of the project IRI followed by
+/mappings/ and the knora-api:mappingHasName submitted in the JSON-LD (if the name
+already exists, the request will be rejected). Once created, a mapping
+can be used to create TextValues in Knora. The formats are documented in
+the v2 typescript interfaces AddMappingRequest and AddMappingResponse
+in module MappingFormats.
DSP-API supports various ways of handling textual data:
+
Text in RDF
+
Textual data can be included directly in the data stored in DSP-API.
+This is the default way of handling text in the DSP.
+There are three ways of representing textual data in DSP-API,
+two of which are fully supported by DSP-APP and DSP-TOOLS.
+
Texts stored in RDF can be searched using both full-text search and structured queries.
+
Simple Text
+
If a text requires no formatting, it can simply be stored as a string in a knora-base:TextValue.
+This is sufficient in many cases, especially for shorter texts like names, titles, identifiers, etc.
+
Text with Formatting
+
For text requiring regular markup, knora-base:TextValue can be used
+in combination with the DSP's standard standoff markup.
+
This allows for the following markup:
+
+
structural markup
+
paragraphs
+
headings levels 1-6
+
ordered lists
+
unordered lists
+
tables
+
line breaks
+
horizontal rules
+
code blocks
+
block quotes
+
+
+
typographical markup
+
italics
+
bold
+
underline
+
strikethrough
+
subscript
+
superscript
+
+
+
semantic markup
+
links
+
DSP internal links
+
+
+
+
DSP-APP provides a text editor for conveniently editing text with standard standoff markup.
It is possible to create custom XML-to-standoff mappings,
+which allow for creating project-specific custom markup for text values.
+Details can be found here.
+
+
Info
+
Custom markup is not supported by DSP-TOOLS and is view-only in DSP-APP.
+Creating custom markup is relatively involved,
+so it should only be used by projects working with complex textual data.
+
+
File Based
+
Text files of various formats (Word, PDF, XML, etc.) can be uploaded to the media file server.
+For more details, see here.
+
This allows for easy upload and retrieval of the file.
+However, it does not allow for searching within the file content.
+
TEI XML
+
All text values in DSP-API using standoff markup can be converted to TEI XML as described here.
A mapping allows for the conversion of XML to standoff representation in RDF and back.
+In order to create a TextValue with markup,
+the text has to be provided in XML format,
+along with the IRI of the mapping that will be used to convert the markup to standoff.
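As a sketch (assuming the v2 properties knora-api:textValueAsXml and knora-api:textValueHasMapping), a value with markup could be submitted like this:

```python
import json

# XML document: the markup to be converted to standoff by the given mapping.
xml_text = ('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<text><p>This is <strong>bold</strong> text.</p></text>')

markup_value = {
    "@type": "knora-api:TextValue",
    "knora-api:textValueAsXml": xml_text,
    "knora-api:textValueHasMapping": {
        "@id": "http://rdfh.ch/standoff/mappings/StandardMapping"
    },
}

payload = json.dumps(markup_value)
```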
+
DSP-API offers a standard mapping with the IRI http://rdfh.ch/standoff/mappings/StandardMapping.
+The standard mapping covers the HTML elements and attributes
+supported by the GUI's text editor, CKEditor.
+(Please note that the HTML has to be encoded in strict XML syntax.
+CKEditor offers the possibility to define filter rules;
+these should reflect the elements supported by the mapping.)
+The standard mapping contains the following elements and attributes
+that are mapped to standoff classes and properties defined in the ontology:
<h1> to <h6> → standoff:StandoffHeader1Tag to standoff:StandoffHeader6Tag
+
<ol> → standoff:StandoffOrderedListTag
+
<ul> → standoff:StandoffUnorderedListTag
+
<li> → standoff:StandoffListElementTag
+
<tbody> → standoff:StandoffTableBodyTag
+
<thead> → standoff:StandoffTableHeaderTag
+
<table> → standoff:StandoffTableTag
+
<tr> → standoff:StandoffTableRowTag
+
<th> → standoff:StandoffTableHeaderCellTag
+
<td> → standoff:StandoffTableCellTag
+
<br> → standoff:StandoffBrTag
+
<hr> → standoff:StandoffLineTag
+
<pre> → standoff:StandoffPreTag
+
<cite> → standoff:StandoffCiteTag
+
<blockquote> → standoff:StandoffBlockquoteTag
+
<code> → standoff:StandoffCodeTag
+
+
The HTML produced by CKEditor is wrapped in an XML doctype and a pair of root tags <text>...</text>
+and then sent to the DSP-API.
+The XML sent to the GUI by the DSP-API is unwrapped accordingly.
+Although the GUI supports HTML5, it is treated as if it were XHTML in strict XML notation.
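The wrapping step can be sketched as follows (a minimal illustration of the described behaviour, not the GUI's actual code):

```python
def wrap_for_api(html_fragment: str) -> str:
    """Wrap CKEditor output in an XML prolog and a <text> root element."""
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            f'<text>{html_fragment}</text>')

def unwrap_from_api(xml_document: str) -> str:
    """Strip the prolog and root tags again for display in the GUI."""
    start = xml_document.index("<text>") + len("<text>")
    end = xml_document.rindex("</text>")
    return xml_document[start:end]

wrapped = wrap_for_api("<p>A <strong>bold</strong> word</p>")
```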
+
DSP-API offers a way to convert text with standard standoff markup to TEI XML.
+The conversion is based on the assumption that a whole resource is to be turned into a TEI document.
+There is a basic distinction between the body and the header of a TEI document.
+The resource's property that contains the text with standoff markup is mapped to the TEI document's body.
+Other properties of the resource may be mapped to the TEI header.
+
Standard Standoff to TEI Conversion
+
DSP-API offers a built-in conversion from standard standoff entities (defined in the standoff ontology) to TEI tags.
+
+
Note
+
TEI provides a wide range of elements and attributes
+whose meaning can vary depending on the markup practices of a project,
+whereas DSP standard standoff has a very limited tagset.
+This conversion is therefore opinionated by necessity
+and may not be appropriate for all projects.
+
+
In order to obtain a resource as a TEI document, the following request has to be performed.
+Please note that the URL parameters have to be URL-encoded.
+
HTTP GET to http://host/v2/tei/resourceIri?textProperty=textPropertyIri
+
+
In addition to the resource's IRI, the IRI of the property containing the text with standoff markup has to be submitted.
+This will be converted to the TEI body.
+Please note that the resource can only have one instance of this property and the text must have standoff markup.
+
The test data contain the resource http://rdfh.ch/0001/thing_with_richtext_with_markup
+with the text property http://0.0.0.0:3333/ontology/0001/anything/v2#hasRichtext
+that can be converted to TEI as follows:
+
HTTP GET to http://host/v2/tei/http%3A%2F%2Frdfh.ch%2F0001%2Fthing_with_richtext_with_markup?textProperty=http%3A%2F%2F0.0.0.0%3A3333%2Fontology%2F0001%2Fanything%2Fv2%23hasRichtext
+
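The URL-encoding of the two IRIs can be reproduced with Python's standard library:

```python
from urllib.parse import quote

resource_iri = "http://rdfh.ch/0001/thing_with_richtext_with_markup"
text_property = "http://0.0.0.0:3333/ontology/0001/anything/v2#hasRichtext"

# safe="" also percent-encodes "/" and ":", as required for path segments.
url = (f"http://host/v2/tei/{quote(resource_iri, safe='')}"
       f"?textProperty={quote(text_property, safe='')}")
```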
+
The response to this request is a TEI XML document:
The body of the TEI document contains the standoff markup as XML.
+The header contains some basic metadata about the resource, such as its rdfs:label and IRI.
+However, this might not be sufficient for more advanced use cases like digital edition projects.
+In that case, a custom conversion has to be performed (see below).
+
Custom Conversion
+
If a project defines its own standoff entities, a custom conversion can be provided (body of the TEI document).
+Also for the TEI header, a custom conversion can be provided.
+
For the custom conversion, additional configuration is required.
+
TEI body:
+
+
additional mapping from standoff to XML (URL parameter mappingIri)
+
XSL transformation to turn the XML into a valid TEI body (referred to by the mapping).
+
+
The mapping has to refer to a defaultXSLTransformation that transforms the XML that was created from standoff markup
+(see XML To Standoff Mapping).
+This step is necessary because the mapping assumes a one-to-one relation
+between standoff classes/properties and XML elements/attributes.
+For example, we may want to convert a standoff:StandoffItalicTag into TEI/XML.
+TEI expresses this as <hi rend="italic">...</hi>.
+In the mapping, the standoff:StandoffItalicTag may be mapped to a temporary XML element
+that is going to be converted to <hi rend="italic">...</hi> in a further step by the XSLT.
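What the XSLT does in this step can be illustrated in miniature (the temporary element name italic is a made-up example; a real project would express this as an XSLT template):

```python
import xml.etree.ElementTree as ET

def italic_to_tei(xml_str: str) -> str:
    """Rewrite a hypothetical temporary <italic> element (emitted by the
    mapping for standoff:StandoffItalicTag) into TEI <hi rend="italic">."""
    root = ET.fromstring(xml_str)
    for el in root.iter("italic"):
        el.tag = "hi"
        el.set("rend", "italic")
    return ET.tostring(root, encoding="unicode")
```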
+
For sample data, see webapi/_test_data/test_route/texts/beol/BEOLTEIMapping.xml (mapping)
+and webapi/_test_data/test_route/texts/beol/standoffToTEI.xsl.
+The standoff entities are defined in beol-onto.ttl.
+
TEI header:
+
+
Gravsearch template to query the resource's metadata; results are serialized to RDF/XML (URL parameter gravsearchTemplateIri)
+
XSL transformation to turn that RDF/XML into a valid TEI header (URL parameter teiHeaderXSLTIri)
+
+
The Gravsearch template is expected to be of type knora-base:TextRepresentation
+and to contain a placeholder $resourceIri that is to be replaced by the actual resource IRI.
+The template is expected to contain a query involving the text property (URL parameter textProperty)
+as well as further properties that are going to be mapped to the TEI header.
+It is a plain text file with the file extension .txt.
+
A Gravsearch template may look like this (see test_data/test_route/texts/beol/gravsearch.txt):
Note the placeholder BIND(<$resourceIri> as ?letter) that is going to be replaced
+by the IRI of the resource the request is performed for.
+The query asks for information about the letter's text beol:hasText and information about its author and recipient.
+This information is converted to the TEI header in the format required by correspSearch.
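Substituting the placeholder is a plain string replacement; a sketch (the template text is a shortened, hypothetical stand-in for the real file):

```python
# Shortened, hypothetical Gravsearch template containing the placeholder.
template = """CONSTRUCT {
    ?letter knora-api:isMainResource true .
} WHERE {
    BIND(<$resourceIri> as ?letter)
}"""

resource_iri = "http://rdfh.ch/0001/example_letter"
query = template.replace("$resourceIri", resource_iri)
```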
+
To write the XSLT, perform the Gravsearch query and request the data as RDF/XML using content negotiation
+(see Introduction).
+
The Gravsearch query's result may look like this (RDF/XML):
In order to convert the metadata (not the actual standoff markup),
+a knora-base:XSLTransformation has to be provided.
+For our example, it looks like this (see test_data/test_route/texts/beol/header.xsl):
You can use the functions knora-api:iaf and knora-api:dateformat in your own XSLT in case you want to support correspSearch.
+
The complete request looks like this:
+
HTTP GET request to http://host/v2/tei/resourceIri?textProperty=textPropertyIri&mappingIri=mappingIri&gravsearchTemplateIri=gravsearchTemplateIri&teiHeaderXSLTIri=teiHeaderXSLTIri
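The query string for the custom conversion can be assembled with urlencode, which takes care of percent-encoding each IRI parameter (host and IRIs below are placeholders):

```python
from urllib.parse import quote, urlencode

def tei_request_url(host, resource_iri, text_property,
                    mapping_iri, gravsearch_template_iri, tei_header_xslt_iri):
    """Build the /v2/tei request URL for a custom TEI conversion."""
    params = urlencode({
        "textProperty": text_property,
        "mappingIri": mapping_iri,
        "gravsearchTemplateIri": gravsearch_template_iri,
        "teiHeaderXSLTIri": tei_header_xslt_iri,
    })
    return f"http://{host}/v2/tei/{quote(resource_iri, safe='')}?{params}"

url = tei_request_url("host", "http://rdfh.ch/0001/a-letter",
                      "http://example.org/onto#hasText",
                      "http://rdfh.ch/projects/0001/mappings/teiMapping",
                      "http://rdfh.ch/projects/0001/gravsearchTemplate",
                      "http://rdfh.ch/projects/0001/headerXSLT")
```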
+
The instrumentation endpoints are running on a separate port (default 3339)
+defined in application.conf under the key: app.instrumentaion-server-config.port
+and can also be set through the environment variable: KNORA_INSTRUMENTATION_SERVER_PORT.
+
The exposed endpoints are:
+
+
/metrics - a metrics endpoint, backed by the ZIO metrics backend, exposing metrics in the Prometheus format
+
/health - provides information about the health state, see Health Endpoint
The metrics endpoint exposes metrics gathered through the ZIO metrics frontend in the Prometheus
+format. Additionally, ZIO runtime, JVM and ZIO-HTTP metrics are also exposed.
+
Configuration
+
The refresh interval is configured in application.conf under the key app.instrumentaion-server-config.interval,
+which is set to 5 seconds by default.
+
Example request
+
GET /metrics
+
Example response
+
# TYPE jvm_memory_pool_allocated_bytes_total counter
+# HELP jvm_memory_pool_allocated_bytes_total Some help
+jvm_memory_pool_allocated_bytes_total{pool="G1 Survivor Space"} 4828024.0 1671021037947
+# TYPE jvm_memory_pool_allocated_bytes_total counter
+# HELP jvm_memory_pool_allocated_bytes_total Some help
+jvm_memory_pool_allocated_bytes_total{pool="G1 Eden Space"} 3.3554432E7 1671021037947
+# TYPE zio_fiber_successes counter
+# HELP zio_fiber_successes Some help
+zio_fiber_successes 17.0 1671021037947
+# TYPE zio_fiber_lifetimes histogram
+# HELP zio_fiber_lifetimes Some help
+zio_fiber_lifetimes_bucket{le="1.0"} 17.0 1671021037947
+zio_fiber_lifetimes_bucket{le="2.0"} 17.0 1671021037947
+...
+
+
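The Prometheus text format shown above is line-oriented and easy to consume; for instance, the distinct metric names can be extracted like this:

```python
def metric_names(exposition: str) -> set:
    """Collect distinct metric names from a Prometheus text exposition,
    skipping # HELP / # TYPE comment lines."""
    names = set()
    for line in exposition.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # A sample line looks like: name{labels} value [timestamp]
        names.add(line.split("{")[0].split(" ")[0])
    return names

sample = (
    "# TYPE zio_fiber_successes counter\n"
    "zio_fiber_successes 17.0 1671021037947\n"
    "# TYPE zio_fiber_lifetimes histogram\n"
    'zio_fiber_lifetimes_bucket{le="1.0"} 17.0 1671021037947\n'
)
```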
ZIO-HTTP metrics
+
Metrics of all routes served by ZIO-HTTP (default: port 5555) are exposed through a default metrics middleware.
+However, instead of http_concurrent_requests_total etc. they are labeled zio_http_concurrent_requests_total etc.
+with zio prepended, so that they are clearly distinguishable while we still run ZIO-HTTP and Pekko-HTTP in parallel.
+
To prevent an excessive number of labels, it is considered good practice
+to replace dynamic path segments with slugs (e.g. /projects/shortcode/0000 with /projects/shortcode/:shortcode).
+This way, requesting different projects by identifier will add multiple values to the histogram of a single route,
+instead of creating a histogram for each project.
This is achieved by providing the middleware with a pathLabelMapper;
+when adding new routes, it is advisable to assert that this replacement works correctly for the newly added route.
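A pathLabelMapper essentially performs a substitution like the following sketch (the four-character hexadecimal shortcode pattern is an assumption for illustration):

```python
import re

def to_slug(path: str) -> str:
    """Replace a dynamic project-shortcode segment with a slug so that all
    requests for this route share one histogram."""
    return re.sub(r"/projects/shortcode/[0-9A-Fa-f]{4}\b",
                  "/projects/shortcode/:shortcode", path)
```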
All configuration for Knora is done in application.conf. Besides the Knora-specific
+configuration, it also contains configuration for the underlying Pekko library.
+
For optimal performance it is important to tune the configuration to the hardware used, mainly
+to the number of CPUs and cores per CPU.
+
The relevant sections for tuning are:
+
+
pekko.actor.deployment
+
knora-actor-dispatcher
+
knora-blocking-dispatcher
+
+
System Environment Variables
+
A number of core settings are additionally configurable through system environment variables. These are:
DSP's Docker images are published automatically through GitHub CI each time a
+pull request is merged into the main branch.
+
Each image is tagged with a version number, which is derived by
+using the result of git describe. The describe version is built from the
+last tag + number of commits since tag + short hash, e.g., 8.0.0-7-ga7827e9.
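A git describe version of this shape can be split into its parts like this (a sketch; a build made exactly on a tag yields just the tag, handled by the fallback):

```python
import re

def parse_describe(version: str):
    """Split e.g. '8.0.0-7-ga7827e9' into (tag, commits since tag, short hash)."""
    m = re.fullmatch(r"(.+)-(\d+)-g([0-9a-f]+)", version)
    if m:
        return m.group(1), int(m.group(2)), m.group(3)
    return version, 0, None  # built exactly on a tag
```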
ADR-0002 Change Cache Service Manager from Akka-Actor to ZLayer
+
Date: 2022-04-06
+
Status
+
Accepted
+
Context
+
The org.knora.webapi.store.cacheservice.CacheServiceManager was implemented as an Akka-Actor.
+
Decision
+
As part of the move from Akka to ZIO,
+it was decided that the CacheServiceManager
+and the whole implementation of the in-memory and Redis-backed cache
+would be refactored using ZIO.
+
Consequences
+
The usage from other actors stays the same. The actor messages and responses don't change.
ADR-0004 Change Triplestore Service Manager and Fuseki implementation to ZLayer
+
Date: 2022-05-23
+
Status
+
Accepted
+
Context
+
Both org.knora.webapi.store.triplestore.TriplestoreServiceManager
+and org.knora.webapi.store.triplestore.impl.TriplestoreServiceHttpConnectorImpl
+were implemented as Akka-Actors.
+
Decision
+
As part of the move from Akka to ZIO,
+it was decided that the TriplestoreServiceManager
+and the TriplestoreServiceHttpConnectorImpl
+would be refactored using ZIO.
+
Consequences
+
The usage from other actors stays the same. The actor messages and responses don't change.
ADR-0005 Change ResponderManager to a simple case class
+
Date: 2022-06-06
+
Status
+
Accepted
+
Context
+
The org.knora.webapi.responders.ResponderManager was implemented as an Akka-Actor.
+
Decision
+
In preparation of the move from Akka to ZIO, it was decided that the ResponderManager would be refactored using plain case classes.
+
Consequences
+
The actor messages and responses don't change.
+All calls made previously to the ResponderManager and the StorageManager
+are now changed to the ApplicationActor
+which will route the calls to either the ResponderManager
+or the StorageManager, based on the message type.
+The ApplicationActor is the only actor that is allowed to make calls
+to either the ResponderManager or the StorageManager.
+All requests from routes are now routed to the ApplicationActor.
The current routes use the Akka Http library.
+Because of changes to the licensing of the Akka framework,
+we want to move away from using Akka Http.
+This also fits the general strategic decision to use ZIO for the backend.
+
Decision
+
In preparation of the move from Akka to ZIO,
+it was decided that the routes should be ported to use the ZIO HTTP server / library instead of Akka Http.
+
Consequences
+
In a first step only the routes are going to be ported, one by one,
+to use ZIO HTTP instead of being routed through Akka Http.
+The Akka Actor System still remains and will be dealt with later.
In order to remove all Akka dependencies, we have to migrate the existing Responders to a ZIO-based
+implementation.
+This migration should be possible on a per-Responder basis, so that we do not do a single "big-bang" release with
+too much code changed at once.
+
Status Quo
+
The central and only Actor is the RoutingActor, which contains an instance of each Responder as a field.
+Each Responder needs an ActorRef to the RoutingActor and uses the Akka "ask pattern" for communication
+with the other Responders.
+This means a Responder can only be created inside the RoutingActor, because the RoutingActor must know
+every Responder in order to route messages, while each Responder needs the ActorRef in order to communicate with
+the other Responders.
+This leads to a circular dependency between the RoutingActor and all Akka-based Responders.
+
Goal
+
In the long term, Responders will no longer contain any Akka dependencies, and all implementations currently
+returning a Future will return a zio.Task.
+
The zio.Task is a very suitable replacement for the Future because:
+
+
a Future[A] will complete with either a value A or with a failure Throwable.
+
a zio.Task[A] will succeed with either a value A or fail with an error of type Throwable.
+
+
Ideally, all Responders would call the necessary components directly by invoking methods.
+However, this will not be possible in the beginning, as there are Responders that call each other, creating yet another
+circular dependency which we cannot simply recreate with ZLayer dependency injection.
+Hence, a message-like communication pattern through a central component, the MessageRelay, will be introduced, which
+can replace the existing Akka "ask pattern" one to one in ziofied components.
+
Solution
+
The MessageRelay is capable of relaying messages to subscribed MessageHandlers and replaces the existing Akka "ask
+pattern" with the RoutingActor.
+Messages which will have a MessageHandler implementation must extend the RelayedMessage trait so that these are
+routed to the MessageRelay from the RoutingActor.
+All other messages will be handled as before.
+
In ziofied Responders we can use the MessageRelay for communication with all other Responders in a similar fashion
+as the Akka "ask pattern" by invoking the method MessageRelay#ask(ResponderRequest): Task[Any].
+A special MessageHandler, the AppRouterRelayingMessageHandler, routes all messages which do not implement
+the RelayedMessage trait back to the RoutingActor.
+
In the long run we will prefer to invoke methods on the respective ziofied services directly.
+This is now already possible, for example, with the TriplestoreService, i.e. instead of
+calling MessageRelay#ask[SparqlSelectResult](SparqlSelectRequest) it is much easier and, more importantly, typesafe to
+call TriplestoreService#sparqlHttpSelect(String): UIO[SparqlSelectResult].
+
Communication between Akka based Responder and another Akka based Responder
+
Nothing changes with regard to existing communication patterns:
Communication between Akka based Responder and ziofied Responder
+
The AkkaResponder code remains unchanged and will still ask the ActorRef to the RoutingActor.
+The RoutingActor will forward the message to the MessageRelay and return its response to the AkkaResponder.
In preparation of the move from Akka to ZIO,
+it was decided that the Responders should be ported to return ZIOs and use the MessageRelay
+instead of Futures and the ActorRef to the RoutingActor.
+
Consequences
+
In a first step only the Responders are going to be ported, one by one, to use the above pattern.
+The Akka Actor System still remains, will be used in the test and will be removed in a later step.
+Due to the added indirections and the blocking nature of Unsafe.unsafe(implicit u => r.unsafe.run(effect))
+it is necessary to spin up more RoutingActor instances as otherwise deadlocks will occur.
+This should not be a problem as any shared state, e.g. caches,
+is not held within the RoutingActor or one of its contained Responder instances.
On 7 September 2022, Lightbend announced a
+license change for the Akka project,
+the TL;DR being that you will need a commercial license to use future versions of Akka (2.7+) in production
+if you exceed a certain revenue threshold.
+
For now, we have stayed on Akka 2.6, the latest version that is still available under the original license.
+Historically Akka has been incredibly stable, and combined with our limited use of features,
+we did not expect this to be a problem.
Will critical vulnerabilities and bugs be patched in 2.6.x?
+Yes, critical security updates and critical bugs will be patched in Akka v2.6.x
+ under the current Apache 2 license until September of 2023.
+
+
As a result, we will not receive further updates and we will never get support for Scala 3 for Akka.
Our migration to another HTTP server implementation is currently on hold,
+but we might want to switch to Pekko so that we can receive security updates and bugfixes.
+
The proof of concept implementation has been shared in the pull request
+here,
+allowing for further testing and validation of the proposed switch to Pekko.
+
Decision
+
We replace Akka and Akka/Http with Apache Pekko.
+
[Figures administration-fig1 to administration-fig6 (Graphviz sources): flowcharts for deriving a new resource's permissions from resource creation permissions and Default Object Access Permissions, for calculating the maximum permission a user has on a resource/value through group membership, and for checking Project Administration Permissions; plus class diagrams showing knora-base:AdministrativePermission and knora-base:DefaultObjectAccessPermission as subclasses of knora-base:Permission, with their properties knora-base:forProject, knora-base:forGroup, knora-base:forResourceClass, knora-base:forProperty, and knora-base:hasPermissions.]

Admin API Design
The default permissions when a project is created are described
+here.
+
DSP's concept of access control is that permissions
+can only be granted to groups and not to individual users.
+There are two distinct ways of granting permission.
+
+
An object (a resource or value) can grant permissions to groups of users.
+
Permissions can be granted directly to a group of users (not bound to a specific object).
+
+
There are six built-in groups:
+UnknownUser, KnownUser, Creator, ProjectMember, ProjectAdmin, and SystemAdmin.
+These groups can be used in the same way as normal user created groups for permission
+management, i.e. they can be used to give certain groups of users certain
+permissions, without the need to create those groups explicitly.
+
A user becomes implicitly a member of such a group by satisfying certain conditions:
+
+
+
knora-admin:UnknownUser:
+ Any user who has not logged into the DSP is automatically assigned to this group.
+ Group IRI: http://www.knora.org/ontology/knora-admin#UnknownUser
+
+
+
knora-admin:KnownUser:
+ Any user who has logged into the DSP is automatically assigned to this group.
+ Group IRI: http://www.knora.org/ontology/knora-admin#KnownUser
+
+
+
knora-admin:Creator:
+ When checking a user’s permissions on an object, the user is automatically assigned to this group
+ if they are the creator of the object.
+ Group IRI: http://www.knora.org/ontology/knora-admin#Creator
+
+
+
knora-admin:ProjectMember:
+ When checking a user’s permissions, the user is automatically assigned to this group
+ by being a member of a project designated by the knora-admin:isInProject property.
+ Group IRI: http://www.knora.org/ontology/knora-admin#ProjectMember
+
+
+
knora-admin:ProjectAdmin:
+ When checking a user's permission, the user is automatically assigned to this group
+ through the knora-admin:isInProjectAdminGroup property, which points to the project in question.
+ Group IRI: http://www.knora.org/ontology/knora-admin#ProjectAdmin
+
+
+
knora-admin:SystemAdmin:
+ Membership is received by setting the property knora-admin:isInSystemAdminGroup to true on a knora-admin:User.
+ Group IRI: http://www.knora.org/ontology/knora-admin#SystemAdmin
+
+
+
There are three kinds of permissions:
+
+
object access permissions, which contain permissions
+ that point from explicit objects (resources/values) to groups.
+
administrative permissions, which contain permissions
+ that are put on instances of knora-admin:Permission objects directly affecting groups.
+
default object access permissions which are also put on instances of knora-admin:Permission,
+ and which also directly affect groups.
+
+
Object Access Permissions
+
An object (resource / value) can grant the following permissions, which
+are stored in a compact format in a single string, which is the object
+of the predicate knora-base:hasPermissions:
+
+
Restricted view permission (RV): Allows a restricted view of
+ the object, e.g. a view of an image with a watermark.
+
View permission (V): Allows an unrestricted view of the
+ object. Having view permission on a resource only affects the
+ user's ability to view information about the resource other than
+ its values. To view a value, she must have view permission on the
+ value itself.
+
Modify permission (M): For values, this permission allows a
+ new version of a value to be created. For resources, this allows
+ the user to create a new value (as opposed to a new version of an
+ existing value), or to change information about the resource other
+ than its values. When he wants to make a new version of a value,
+ his permissions on the containing resource are not relevant.
+ However, when he wants to change the target of a link, the old
+ link must be deleted and a new one created, so he needs modify
+ permission on the resource.
+
Delete permission (D): Allows the item to be marked as deleted.
+
Change rights permission (CR): Allows the permissions granted by the object to be changed.
+
+
Each permission in the above list implies all the permissions listed
+before it.
+
A user's permission level on a particular object is calculated in
+the following way:
+
+
Make a list of the groups that the user belongs to, including
+ Creator and/or ProjectMember and/or ProjectAdmin if applicable.
+
Make a list of the permissions that she can obtain on the
+ object, by iterating over the permissions that the object
+ grants. For each permission, if she is in the specified group,
+ add the specified permission to the list of permissions she can
+ obtain.
+
From the resulting list, select the highest-level permission.
+
If the result is that she would have no permissions, give her
+ whatever permission UnknownUser would have.
+
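The calculation above can be sketched as follows (a minimal illustration of the described algorithm, not DSP-API's actual implementation; the fallback level for UnknownUser is passed in):

```python
# Permission levels from lowest to highest; each implies all lower ones.
LEVELS = ["RV", "V", "M", "D", "CR"]

def permission_level(user_groups, object_grants, unknown_user_level=None):
    """object_grants maps a permission abbreviation to the set of group IRIs
    it is granted to; returns the highest level the user obtains."""
    obtained = [perm for perm, groups in object_grants.items()
                if set(user_groups) & set(groups)]
    if not obtained:
        return unknown_user_level  # whatever UnknownUser would get
    return max(obtained, key=LEVELS.index)
```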
+
The format of the object of knora-base:hasPermissions is as
+follows:
+
+
Each permission is represented by the one-letter or two-letter
+ abbreviation given above.
+
Each permission abbreviation is followed by a space, then a
+ comma-separated list of groups that the permission is granted
+ to.
+
The IRIs of built-in groups are shortened using the knora-admin
+ prefix.
+
Multiple permissions are separated by a vertical bar (|).
+
+
For example, if an object grants view permission to unknown and known
+users, and modify permission to project members, the resulting
+permission literal would be: V knora-admin:UnknownUser,knora-admin:KnownUser|M knora-admin:ProjectMember.
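A literal in this format can be parsed back into its parts as follows (an illustrative sketch, not DSP-API's parser):

```python
def parse_permission_literal(literal: str) -> dict:
    """Parse a knora-base:hasPermissions string into
    {permission abbreviation: [group IRIs]}."""
    result = {}
    for part in literal.split("|"):
        # Each part is an abbreviation, a space, then comma-separated groups.
        abbreviation, groups = part.split(" ", 1)
        result[abbreviation] = groups.split(",")
    return result

parsed = parse_permission_literal(
    "V knora-admin:UnknownUser,knora-admin:KnownUser|M knora-admin:ProjectMember")
```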
+
Administrative Permissions
+
The following permissions can be set via instances of
+knora-admin:AdministrativePermission on any group belonging to a
+project. For users that are members of a number of groups with
+administrative permissions attached, the final set of permissions is
+additive and most permissive. The administrative permissions are stored
+in a compact format in a single string, which is the object of the
+predicate knora-base:hasPermissions attached to an instance of the
+knora-admin:AdministrativePermission class. The following permission
+values can be used:
+
+
Resource / Value Creation Permissions:
+ 1) ProjectResourceCreateAllPermission:
+ - description: gives the permission to create resources inside the project.
+ - usage: used as a value for knora-base:hasPermissions.
+ 2) ProjectResourceCreateRestrictedPermission:
+ - description: gives restricted resource creation permission inside the project.
+ - usage: used as a value for knora-base:hasPermissions.
+ - value: RestrictedProjectResourceCreatePermission
+ followed by a comma-separated list of ResourceClasses
+ the user should only be able to create instances of.
+
Project Administration Permissions:
+ 1) ProjectAdminAllPermission:
+ - description: gives the user the permission to do anything
+ on project level, i.e. create new groups, modify all
+ existing groups (group info, group membership,
+ resource creation permissions, project administration
+ permissions, and default permissions).
+ - usage: used as a value for knora-base:hasPermissions.
+ 2) ProjectAdminGroupAllPermission:
+ - description: gives the user the permission to modify
+ group info and group membership on all groups
+ belonging to the project.
+ - usage: used as a value for the knora-base:hasPermissions property.
+ 3) ProjectAdminGroupRestrictedPermission:
+ - description: gives the user the permission to modify
+ group info and group membership on certain groups
+ belonging to the project.
+ - usage: used as a value for knora-base:hasPermissions
+    - value: ProjectAdminGroupRestrictedPermission followed by
+      a comma-separated list of knora-admin:UserGroup.
+ 4) ProjectAdminRightsAllPermission:
+ - description: gives the user the permission to change the
+ permissions on all objects belonging to the project
+ (e.g., default permissions attached to groups and
+ permissions on objects).
+ - usage: used as a value for the knora-base:hasPermissions property.
+
+
+
The format of the object of knora-base:hasPermissions is as follows:
+
Each permission is represented by the name given above.
+
Each permission is followed by a space and then, if applicable, by a comma-separated list of IRIs, as defined above.
+
The IRIs of built-in values (e.g., built-in groups, resource
+ classes, etc.) are shortened using the knora-admin prefix.
+
Multiple permissions are separated by a vertical bar (|).
+
+
+
+
For example, if an administrative permission grants the
+knora-admin:ProjectMember group the permission to create all resources
+(ProjectResourceCreateAllPermission), the resulting compact-form
+permission literal would simply be ProjectResourceCreateAllPermission.
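The serialisation into the compact knora-base:hasPermissions string can be sketched as follows. The permission names mirror the values described above; the `AdminPermission` case class and the helper itself are illustrative, not DSP-API code:

```scala
// Hypothetical sketch: serialising administrative permissions into the
// compact knora-base:hasPermissions string: name, then an optional space and
// comma-separated IRI list, with multiple permissions joined by '|'.
final case class AdminPermission(name: String, iris: Seq[String] = Seq.empty)

def toCompactString(perms: Seq[AdminPermission]): String =
  perms
    .map { p =>
      if (p.iris.isEmpty) p.name
      else s"${p.name} ${p.iris.mkString(",")}" // name, space, comma-separated IRIs
    }
    .mkString("|")                              // multiple permissions joined by |

toCompactString(Seq(AdminPermission("ProjectResourceCreateAllPermission")))
// == "ProjectResourceCreateAllPermission"
```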
Default Object Access Permissions
+
Default object access permissions are used when new objects (resources
+and/or values) are created. They represent the object access permissions
+with which the new object will initially be outfitted. As with
+administrative permissions, these default object access permissions can
+be defined for any number of groups. Additionally, they can also be
+defined for resource classes and properties.
+
The following default object access permissions can be attached to
+groups, resource classes and/or properties via instances of
+knora-admin:DefaultObjectAccessPermission (described further below).
+These default object access permissions correspond to the object access
+permissions described earlier:
+
+
Default Restricted View Permission (RV):
+
description: any object created by a user in a group
+ holding this permission will initially carry this permission
+
value: RV followed by a comma-separated list of knora-admin:UserGroup
+
+
+
Default View Permission (V):
+
description: any object created by a user in a group
+ holding this permission will initially carry this permission
+
value: V followed by a comma-separated list of knora-admin:UserGroup
+
+
+
Default Modify Permission (M):
+
description: any object created by a user in a group
+ holding this permission will initially carry this permission
+
value: M followed by a comma-separated list of knora-admin:UserGroup
+
+
+
Default Delete Permission (D):
+
description: any object created by a user in a group
+ holding this permission will initially carry this permission
+
value: D followed by a comma-separated list of knora-admin:UserGroup
+
+
+
Default Change Rights Permission (CR):
+
description: any object created by a user in a group
+ holding this permission will initially carry this permission
+
value: CR followed by a comma-separated list of knora-admin:UserGroup
+
+
+
+
A single instance of knora-admin:DefaultObjectAccessPermission must
+always reference a project, and must additionally reference either a group
+(knora-admin:forGroup property), a resource class
+(knora-admin:forResourceClass), a property (knora-admin:forProperty),
+or a combination of resource class and property.
+
Example default object access permission instance:
This instance sets default object access permissions for the
+project member group of a project, giving change rights permission to the
+creator, modify permission to all project members, and view permission
+to known users. Furthermore, this implicitly applies to all resource
+classes and all their properties inside the project.
+
Permission Precedence Rules
+
For both administrative permissions and default object access
+permissions, the resulting permissions are derived by applying
+precedence rules when the user is a member of more than one
+group.
+
The following list is sorted by the permission precedence level in
+descending order:
+
+
permissions on knora-admin:ProjectAdmin (highest level)
+
permissions on combinations of resource class and property (own project)
+
permissions on properties (own project, when creating a Value)
+
permissions on resource classes (own project, when creating a Resource)
+
permissions on custom groups
+
permissions on knora-admin:ProjectMember
+
+
The permissions on resource classes / properties are only relevant for
+default object access permissions.
+
Administrative Permissions: When a user performs an operation
+requiring administrative permissions, then only the permissions from
+the highest level are taken into account. If a user is a member of
+more than one group on the same level (only possible for custom groups)
+then the defined permissions are summed up and all are taken into
+account.
+
Default Object Access Permissions: When a user creates a resource or
+value, then only the default object permissions from the highest
+level are applied. If a user is a member of more than one group on the
+same level (only possible for custom groups) then the defined
+permissions are summed up and the most permissive are applied.
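The core of the precedence rules can be sketched as follows. This is a simplified, hypothetical illustration: the resource-class and property levels are omitted, the merging of multiple custom groups on the same level is not shown, and the group names and map are illustrative input:

```scala
// Hypothetical sketch of the precedence rules described above: keep only the
// permissions defined at the highest precedence level that defines any.
// Simplified: resource-class/property levels and custom-group merging omitted.
val precedence: Seq[String] =
  Seq("knora-admin:ProjectAdmin", "custom-groups", "knora-admin:ProjectMember")

def effectivePermissions(
  permissionsPerGroup: Map[String, Set[String]]
): Set[String] =
  precedence.iterator
    .map(level => permissionsPerGroup.getOrElse(level, Set.empty))
    .find(_.nonEmpty)        // take the highest non-empty level only
    .getOrElse(Set.empty)

effectivePermissions(Map(
  "knora-admin:ProjectMember" -> Set("ProjectResourceCreateAllPermission"),
  "knora-admin:ProjectAdmin"  -> Set("ProjectAdminAllPermission")
))
// == Set("ProjectAdminAllPermission") -- the ProjectAdmin level wins
```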
+
If a user belongs to the SystemAdmin group but is not a member of
+the project, and thus not a member of any group belonging
+to the project, the default object access permissions from the
+ProjectAdmin or ProjectMember group are
+applied in the order of precedence. If no permissions are defined on
+either of these groups, then the resulting permission is CR knora-admin:Creator.
+
Implicit Permissions
+
The knora-admin:SystemAdmin group implicitly receives the following
+permissions:
+
+
ProjectAdminAllPermission for all projects.
+
ProjectResourceCreateAllPermission for all projects.
+
CR (change rights permission) on all objects in all projects.
+
+
These permissions are baked into the system, and cannot be changed.
+
Default Permissions Matrix for new Projects
+
The access control matrix defines the default operations that a
+subject (i.e., a User) who is a member of a built-in group (represented
+by row headers) is permitted to perform on an object (represented by
+column headers). The operation abbreviations used are defined
+as follows:
+
+
+
C: Create - the subject inside the group is allowed to create the object.
+
+
+
U: Update - the subject inside the group is allowed to update the object.
+
+
+
R: Read - the subject inside the group is allowed to read all information about the object.
+
+
+
D: Delete - the subject inside the group is allowed to delete the object.
+
+
+
P: Permission - the subject inside the group is allowed to change the permissions on the object.
+
+
+
-: none - none or not applicable
+
+
+
+
+
+
Built-In Group | Project | Group | User              | Resource             | Value
-------------- | ------- | ----- | ----------------- | -------------------- | ------------------
SystemAdmin    | CRUD    | CRUDP | CRUDP all         | CRUDP all            | CRUDP all
ProjectAdmin   | -RUD    | CRUDP | CRUDP +/- project | CRUDP (in project)   | CRUDP (in project)
ProjectMember  | ----    | ----- | -----             | CRU-- (in project)   | ----- (in project)
Creator        | ----    | ----- | -----             | ----- (own resource) | ----- (own value)
KnownUser      | C---    | C---- | CRUD- (own user)  | ----- (in project)   | ----- (in project)
+
+
+
+
Default Permissions for new Projects
+
The explicitly defined default permissions for a new project are as follows:
+
+
+
The knora-admin:ProjectAdmin group:
+
+
Administrative Permissions:
+
ProjectResourceCreateAllPermission.
+
ProjectAdminAllPermission.
+
+
+
Default Object Access Permissions:
+
CR for the knora-admin:ProjectAdmin group
+
D for the knora-admin:ProjectAdmin group
+
M for the knora-admin:ProjectAdmin group
+
V for the knora-admin:ProjectAdmin group
+
RV for the knora-admin:ProjectAdmin group
+
+
+
+
+
+
The knora-admin:ProjectMember group:
+
+
Administrative Permissions:
+
ProjectResourceCreateAllPermission.
+
+
+
Default Object Access Permissions:
+
M for the knora-admin:ProjectMember group
+
V for the knora-admin:ProjectMember group
+
RV for the knora-admin:ProjectMember group
+
+
+
+
+
+
Basic Workflows involving Permissions
+
Creating a new Resource
+
+
Accessing a Resource/Value
+
+
Project / Group Administration
+
+
Implementation
+
The requirements for defining default permissions imposed by all the
+different use cases are very broad. Potentially, we need to be able to
+define default permissions per project, per group, per resource class,
+per resource property, and all their possible combinations.
+
For this reason, we introduce the knora-admin:Permission class with two
+sub-classes, namely knora-admin:AdministrativePermission and
+knora-admin:DefaultObjectAccessPermission, whose instances carry
+all the necessary information.
+
Permission Class Hierarchy and Structure
+
The following graphs show the class hierarchy and the structure of each
+permission class.
The properties forProject and either of forGroup,
+forResourceClass, and forProperty together form a compound
+key that makes it possible to find existing permission instances addressing the
+same Project / Group / ResourceClass / Property combination, and thus
+to extend or change the attached permissions.
+
Administrative Permission Instances: For each group inside the
+project, there can be zero or one instance holding
+administrative permission information. Querying is straightforward,
+using the knora-admin:forProject and knora-admin:forGroup properties
+as the compound key.
+
Default Object Access Permission Instances: For each group, resource
+class, or property inside the project, there can be zero or one
+instance holding default object access permission information.
+Querying is straightforward, using the knora-admin:forProject and
+either the knora-admin:forGroup, knora-admin:forResourceClass, or
+knora-admin:forProperty property as part of the compound key.
+
Example Data stored in the permissions graph
+
Administrative permissions on a 'ProjectAdmin' group:
When the user's UserProfile is queried, all permissions for all
+projects and groups the user is a member of are also queried. This
+information is then stored as an easily accessible object inside the
+UserProfile, being readily available wherever needed. As this is a
+somewhat expensive operation, built-in caching mechanisms at different
+levels (e.g., UsersResponder, PermissionsResponder) are applied.
Knora must produce an ARK URL for each resource and each value. The ARK identifiers used
+by Knora must respect
+the draft ARK specification.
+The format of Knora’s ARK URLs must be able to change over
+time, while ensuring that previously generated ARK URLs still work.
VALUE_UUID: optionally, the knora-base:valueHasUUID of one of the
+ resource's values, normally a
+ base64url-encoded UUID, as described in
+ IRIs for Data.
+
+
+
TIMESTAMP: an optional timestamp indicating that the ARK URL represents
+ the state of the resource at a specific time in the past. The format
+ of the timestamp is an ISO 8601
+ date in Coordinated Universal Time (UTC), including date, time, and an optional
+ nano-of-second field (of at most 9 digits), without the characters -, :, and . (because
+ - and . are reserved characters in ARK, and : would have to be URL-encoded).
+ Example: 20180528T155203897Z.
+
+
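The timestamp format described above can be produced with the standard java.time API. This is a hypothetical sketch (the function name `arkTimestamp` is illustrative); it formats to millisecond precision only, whereas the spec allows up to nine nano-of-second digits:

```scala
// Hypothetical sketch: formatting a UTC instant as an ARK timestamp by using
// an ISO 8601 pattern without the '-', ':' and '.' separator characters.
import java.time.Instant
import java.time.ZoneOffset
import java.time.format.DateTimeFormatter

def arkTimestamp(instant: Instant): String =
  DateTimeFormatter
    .ofPattern("yyyyMMdd'T'HHmmssSSS'Z'") // no '-', ':' or '.' separators
    .withZone(ZoneOffset.UTC)
    .format(instant)

arkTimestamp(Instant.parse("2018-05-28T15:52:03.897Z"))
// == "20180528T155203897Z"
```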
+
Following the ARK ID spec, /
+represents object hierarchy
+and . represents an object variant.
+A value is thus contained in a resource, which is contained in its project,
+which is contained in a repository (represented by the URL version number).
+A timestamp is a type of variant.
+
Since sub-objects are optional, there is also implicitly an ARK URL
+for each project, as well as for the repository as a whole.
+
The RESOURCE_UUID and VALUE_UUID are processed as follows:
+
+
+
A check digit is calculated, using the algorithm in
+ the Scala class org.knora.webapi.util.Base64UrlCheckDigit, and appended
+ to the UUID.
+
+
+
Any - characters in the resulting string are replaced with =, because
+ base64url encoding uses -, which is a reserved character in ARK URLs.
+
+
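The second of the two steps above is a simple character replacement. This sketch shows only that step; the check-digit calculation (org.knora.webapi.util.Base64UrlCheckDigit) is omitted here, and the helper name is illustrative:

```scala
// Hypothetical sketch of the escaping step above: '-' is used by base64url
// encoding but is reserved in ARK URLs, so it is replaced with '='.
// The preceding check-digit step is omitted.
def escapeUuidForArk(uuidWithCheckDigit: String): String =
  uuidWithCheckDigit.replace('-', '=')

escapeUuidForArk("0C-0L1kORryKzJAJxxRyRQ")
// == "0C=0L1kORryKzJAJxxRyRQ"
```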
+
For example, given a project with ID 0001, and using the DaSCH's ARK resolver
+hostname and NAAN, the ARK URL for the project itself is:
+
http://ark.dasch.swiss/ark:/72163/1/0001
+
+
Given the Knora resource IRI http://rdfh.ch/0001/0C-0L1kORryKzJAJxxRyRQ,
+the corresponding ARK URL without a timestamp is:
Given a value with knora-api:valueHasUUID "4OOf3qJUTnCDXlPNnygSzQ" in the resource
+http://rdfh.ch/0001/0C-0L1kORryKzJAJxxRyRQ, and using the DaSCH's ARK resolver
+hostname and NAAN, the corresponding ARK URL without a timestamp is:
SmartIri converts Knora resource IRIs to ARK URLs. This conversion is invoked in ReadResourceV2.toJsonLD,
+when returning a resource's metadata in JSON-LD format.
Whenever possible, the same data structures are used to represent the same
+types of data, regardless of the API operation (reading, creating, or
+modifying). However, often more data is available in output than in input. For
+example, when a value is read from the triplestore, its IRI is
+available, but when it is being created, it does not yet have an IRI.
+
The implementation of API v2 therefore uses content wrappers. For each type,
+there is a class representing the lowest common denominator of the
+type: the data that will be present regardless of the API operation. For
+example, the trait ValueContentV2 represents a Knora value, regardless
+of whether it is received as input or returned as output. Case classes
+such as DateValueContentV2 and TextValueContentV2 implement this trait.
+
An instance of this lowest-common-denominator class, or "content class", can then
+be wrapped in an instance of an operation-specific class that carries additional
+data. For example, when a Knora value is returned from the triplestore, a
+ValueContentV2 is wrapped in a ReadValueV2, which additionally contains the
+value's IRI. When a value is created, it is wrapped in a CreateValueV2, which
+has the resource IRI and the property IRI, but not the value IRI.
+
A read wrapper can be wrapped in another read wrapper; for
+example, a ReadResourceV2 contains ReadValueV2 objects.
+
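The content-wrapper pattern can be sketched with simplified stand-in types. The real ValueContentV2 hierarchy is much richer; the types below (`ValueContent`, `IntegerContent`, `ReadValue`, `CreateValue`) are illustrative only:

```scala
// Minimal sketch of the content-wrapper pattern: a lowest-common-denominator
// content type, plus operation-specific wrappers carrying extra data.
sealed trait ValueContent                       // present in input and output
final case class IntegerContent(value: Int) extends ValueContent

// Read wrapper: adds data only available on output, e.g. the value IRI.
final case class ReadValue(valueIri: String, content: ValueContent)

// Create wrapper: adds data only available on input; no value IRI yet.
final case class CreateValue(
  resourceIri: String,
  propertyIri: String,
  content: ValueContent
)

val read = ReadValue("http://rdfh.ch/0001/abc/values/1", IntegerContent(42))
```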
In general, DSP-API v2 responders deal only with the internal schema.
+(The exception is OntologyResponderV2, which can return ontology information
+that exists only in an external schema.) Therefore, a content class needs
+to be able to convert itself from the internal schema to an external schema
+(when it is being used for output) and vice versa (when it is being used for
+input). Each content class should therefore extend KnoraContentV2, and
+thus have a toOntologySchema method for converting itself between internal and
+external schemas, in either direction:
+
/**
+ * A trait for content classes that can convert themselves between internal and external schemas.
+ *
+ * @tparam C the type of the content class that extends this trait.
+ */
+trait KnoraContentV2[C <: KnoraContentV2[C]] {
+ this: C =>
+ def toOntologySchema(targetSchema: OntologySchema): C
+}
+
+
Since read wrappers are used only for output, they only need to be able to convert
+themselves from the internal schema to an external schema. Each read wrapper class
+should extend KnoraReadV2, and thus have a method for doing this:
+
/**
+ * A trait for read wrappers that can convert themselves to external schemas.
+ *
+ * @tparam C the type of the read wrapper that extends this trait.
+ */
+trait KnoraReadV2[C <: KnoraReadV2[C]] {
+ this: C =>
+ def toOntologySchema(targetSchema: ApiV2Schema): C
+}
+
The code that converts Gravsearch queries into SPARQL queries, and processes the query results, needs to know the
+types of the entities that are used in the input query. As explained in
+Type Inference, these types can be inferred,
+or they can be specified in the query using type annotations.
+
Type inspection is implemented in the package org.knora.webapi.messages.util.search.gravsearch.types.
+The entry point to this package is GravsearchTypeInspectionRunner, which is instantiated by SearchResponderV2.
+The result of type inspection is a GravsearchTypeInspectionResult, in which each typeable entity in the input query is
+associated with a GravsearchEntityTypeInfo, which can be either:
+
+
A PropertyTypeInfo, which specifies the type of object that a property is expected to have.
+
A NonPropertyTypeInfo, which specifies the type of a variable, or the type of an IRI representing a resource or value.
+
+
Identifying Typeable Entities
+
After parsing a Gravsearch query, SearchResponderV2 calls GravsearchTypeInspectionRunner.inspectTypes, passing
+the WHERE clause of the input query. This method first identifies the entities whose types need to be determined. Each
+of these entities is represented as a TypeableEntity. To do this, GravsearchTypeInspectionRunner uses QueryTraverser
+to traverse the WHERE clause, collecting typeable entities in a visitor called TypeableEntityCollectingWhereVisitor.
+The entities that are considered to need type information are:
+
+
All variables.
+
All IRIs except for those that represent type annotations or types.
+
+
The Type Inspection Pipeline
+
GravsearchTypeInspectionRunner contains a pipeline of type inspectors, each of which extends GravsearchTypeInspector.
+There are two type inspectors in the pipeline:
+
+
AnnotationReadingGravsearchTypeInspector: reads
+ type annotations included in a Gravsearch query.
+
InferringGravsearchTypeInspector: infers the types of entities from the context in which they are used, as well
+ as from ontology information that it requests from OntologyResponderV2.
+
+
Each type inspector takes as input, and returns as output, an IntermediateTypeInspectionResult, which
+associates each TypeableEntity with zero or more types. Initially, each TypeableEntity has no types.
+Each type inspector adds whatever types it finds for each entity.
+
At the end of the pipeline, each entity should
+have exactly one type. Therefore, to keep only the most specific type for an entity,
+the method refineDeterminedTypes refines the determined types by removing those that are base classes of others. However,
+inconsistent types may be determined for an entity, for example when multiple resource class types
+are determined but one is not a base class of the others. From the following statement
+
{ ?document a beol:manuscript . } UNION { ?document a beol:letter .}
+
+
two inconsistent types can be inferred for ?document: beol:letter and beol:manuscript.
+In these cases, a sanitizer, sanitizeInconsistentResourceTypes, replaces the inconsistent resource types with
+their common base resource class (in the above example, it would be beol:writtenSource).
+
Lastly, an error is returned if
+
+
An entity's type could not be determined. The client must add a type annotation to make the query work.
+
Inconsistent types could not be sanitized (an entity appears to have more than one type). The client must correct the query.
+
+
If there are no errors, GravsearchTypeInspectionRunner converts the pipeline's output to a
+GravsearchTypeInspectionResult, in which each entity is associated with exactly one type.
+
AnnotationReadingGravsearchTypeInspector
+
This inspector uses QueryTraverser to traverse the WHERE clause, collecting type annotations in a visitor called
+AnnotationCollectingWhereVisitor. It then converts each annotation to a GravsearchEntityTypeInfo.
+
InferringGravsearchTypeInspector
+
This inspector first uses QueryTraverser to traverse the WHERE clause, assembling an index of
+usage information about typeable entities in a visitor called UsageIndexCollectingWhereVisitor. The UsageIndex contains,
+for example, an index of all the entities that are used as subjects, predicates, or objects, along with the
+statements in which they are used. It also contains sets of all the Knora class and property IRIs
+that are used in the WHERE clause. InferringGravsearchTypeInspector then asks OntologyResponderV2 for information
+about those classes and properties, as well as about the classes that are subject types or object types of those properties.
+
Next, the inspector runs inference rules (which extend InferenceRule) on each TypeableEntity. Each rule
+takes as input a TypeableEntity, the usage index, the ontology information, and the IntermediateTypeInspectionResult,
+and returns a new IntermediateTypeInspectionResult. For example, TypeOfObjectFromPropertyRule infers an entity's type
+if the entity is used as the object of a statement and the predicate's knora-api:objectType is known. For each TypeableEntity,
+if a type is inferred from a property, the entity and the inferred type are added to
+IntermediateTypeInspectionResult.entitiesInferredFromProperty.
+
The inference rules are run repeatedly, because the output of one rule may allow another rule to infer additional
+information. There are two pipelines of rules: a pipeline for the first iteration of type inference, and a
+pipeline for subsequent iterations. This is because some rules can return additional information if they are run
+more than once on the same entity, while others cannot.
+
The number of iterations is limited to InferringGravsearchTypeInspector.MAX_ITERATIONS, but in practice
+two iterations are sufficient for most realistic queries, and it is difficult to design a query that requires more than
+six iterations.
+
Transformation of a Gravsearch Query
+
A Gravsearch query submitted by the client is parsed by GravsearchParser and preprocessed by GravsearchTypeInspector
+to get type information about the elements used in the query (resources, values, properties etc.)
+and do some basic sanity checks.
+
In SearchResponderV2, two queries are generated from a given Gravsearch query: a prequery and a main query.
+
Query Transformers
+
The Gravsearch query is passed to QueryTraverser along with a query transformer. Query transformers are classes
+that implement traits supported by QueryTraverser:
+
+
WhereTransformer: instructions how to convert statements in the WHERE clause of a SPARQL query
+ (to generate the prequery's Where clause).
+
+
To improve query performance, this trait defines the method optimiseQueryPatterns, whose implementation can call
+private methods to optimise the generated SPARQL. For example, before the transformation of statements in the WHERE clause, the
+order of query patterns is optimised by moving LuceneQueryPatterns to the beginning and isDeleted statement patterns to the end of the WHERE clause.
+
+
AbstractPrequeryGenerator (extends WhereTransformer): converts a Gravsearch query into a prequery;
+ this one has two implementations for regular search queries and for count queries.
+
SelectTransformer (extends WhereTransformer): transforms a Select query into a Select query with simulated RDF inference.
+
ConstructTransformer: transforms a Construct query into a Construct query with simulated RDF inference.
+
+
Prequery
+
The purpose of the prequery is to get an ordered collection of results representing only the IRIs of one page of matching resources and values.
+Sort criteria can be submitted by the user, but even without sort criteria the result is always deterministic.
+This is necessary to support paging.
+A prequery is a SPARQL SELECT query.
+
The classes involved in generating prequeries can be found in org.knora.webapi.messages.util.search.gravsearch.prequery.
+
If the client submits a count query, the prequery returns the overall number of hits, but not the results themselves.
+
In a first step, before transforming the WHERE clause, query patterns are further optimised by removing
+the rdfs:type statement for entities whose type can be inferred from their use with a property IRI, since there is no need
+for explicit rdfs:type statements for them (unless the property IRI from which the type of an entity must be inferred
+because it uses type information that refers to entities in the Gravsearch query, and the generated SPARQL might
+have different entities.
+
Next, the Gravsearch query's WHERE clause is transformed and the prequery (SELECT and WHERE clause) is generated from this result.
+The transformation of the Gravsearch query's WHERE clause relies on the implementation of the abstract class AbstractPrequeryGenerator.
+
AbstractPrequeryGenerator contains members whose state is changed during the iteration over the statements of the input query.
+They can then be used to create the converted query.
+
+
mainResourceVariable: Option[QueryVariable]:
+ SPARQL variable representing the main resource of the input query.
+ Present in the prequery's SELECT clause.
+
dependentResourceVariables: mutable.Set[QueryVariable]:
+ a set of SPARQL variables representing dependent resources in the input query.
+ Used in an aggregation function in the prequery's SELECT clause (see below).
+
dependentResourceVariablesGroupConcat: Set[QueryVariable]:
+ a set of SPARQL variables representing an aggregation of dependent resources.
+ Present in the prequery's SELECT clause.
+
valueObjectVariables: mutable.Set[QueryVariable]:
+ a set of SPARQL variables representing value objects.
+ Used in an aggregation function in the prequery's SELECT clause (see below).
+
valueObjectVarsGroupConcat: Set[QueryVariable]:
+ a set of SPARQL variables representing an aggregation of value objects.
+ Present in the prequery's SELECT clause.
+
+
The variables mentioned above are present in the prequery's result rows because they are part of the prequery's SELECT clause.
+
The following example illustrates the handling of variables.
+The following Gravsearch query looks for pages with a sequence number of 10 that are part of a book:
The prequery's SELECT clause is built by
+NonTriplestoreSpecificGravsearchToPrequeryTransformer.getSelectColumns,
+based on the variables used in the input query's CONSTRUCT clause.
+The resulting SELECT clause looks as follows:
+
SELECT DISTINCT
+    ?page
+    (GROUP_CONCAT(DISTINCT(IF(BOUND(?book), STR(?book), "")); SEPARATOR='') AS ?book__Concat)
+    (GROUP_CONCAT(DISTINCT(IF(BOUND(?seqnum), STR(?seqnum), "")); SEPARATOR='') AS ?seqnum__Concat)
+    (GROUP_CONCAT(DISTINCT(IF(BOUND(?book__LinkValue), STR(?book__LinkValue), "")); SEPARATOR='') AS ?book__LinkValue__Concat)
+WHERE { ... }
+GROUP BY ?page
+ORDER BY ASC(?page)
+LIMIT 25
+
+
?page represents the main resource. When accessing the prequery's result rows, ?page contains the IRI of the main resource.
+The prequery's results are grouped by the main resource so that there is exactly one result row per matching main resource.
+?page is also used as a sort criterion, although none was defined in the input query.
+This is necessary to make paging work: results always have to be returned in the same order (the prequery is always deterministic).
+In this way, results can be fetched page by page using LIMIT and OFFSET.
+
Grouping by main resource requires other results to be aggregated using the function GROUP_CONCAT.
+?book is used as an argument of the aggregation function.
+The aggregation's result is accessible in the prequery's result rows as ?book__Concat.
+The variable ?book is bound to an IRI.
+Since more than one IRI could be bound to a variable representing a dependent resource, the results have to be aggregated.
+GROUP_CONCAT takes two arguments: a collection of strings (IRIs in our use case) and a separator
+(we use the non-printing Unicode character INFORMATION SEPARATOR ONE).
+When accessing ?book__Concat in the prequery's results containing the IRIs of dependent resources,
+the string has to be split with the separator used in the aggregation function.
+The result is a collection of IRIs representing dependent resources.
+The same logic applies to value objects.
+
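Splitting a GROUP_CONCAT result back into IRIs can be sketched as follows. The helper name `splitConcat` is illustrative; the separator is the non-printing character INFORMATION SEPARATOR ONE (U+001F) mentioned above, and empty strings (produced for unbound variables) are dropped:

```scala
// Hypothetical sketch: splitting a GROUP_CONCAT result on INFORMATION
// SEPARATOR ONE (U+001F) to recover the concatenated IRIs, dropping the
// empty strings produced for unbound variables.
val InformationSeparatorOne = '\u001F'

def splitConcat(concatenated: String): Seq[String] =
  concatenated.split(InformationSeparatorOne).toSeq.filter(_.nonEmpty)

splitConcat("http://rdfh.ch/0803/b1\u001Fhttp://rdfh.ch/0803/b2")
// == Seq("http://rdfh.ch/0803/b1", "http://rdfh.ch/0803/b2")
```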
Each GROUP_CONCAT checks whether the concatenated variable is bound in each result in the group; if a variable
+is unbound, we concatenate an empty string. This is necessary because, in Apache Jena (and perhaps other
+triplestores), "If GROUP_CONCAT has an unbound value in the list of values to concat, the overall result is 'error'"
+(see this Jena issue).
+
If the input query contains a UNION, and a variable is bound in one branch
+of the UNION and not in another branch, it is possible that the prequery
+will return more than one row per main resource. To deal with this situation,
+SearchResponderV2 merges rows that contain the same main resource IRI.
+
Main Query
+
The purpose of the main query is to get all requested information
+about the main resource, dependent resources, and value objects.
+The IRIs of those resources and value objects were returned by the prequery.
+Since the prequery only returns resources and value objects matching the input query's criteria,
+the main query can specifically ask for more detailed information on these resources and values
+without having to reconsider these criteria.
+
Generating the Main Query
+
The main query is a SPARQL CONSTRUCT query. Its generation is handled by the
+method GravsearchMainQueryGenerator.createMainQuery.
+It takes three arguments:
+mainResourceIris: Set[IriRef], dependentResourceIris: Set[IriRef], valueObjectIris: Set[IRI].
+
These sets are constructed based on information about variables representing
+dependent resources and value objects in the prequery, which is provided by
+NonTriplestoreSpecificGravsearchToPrequeryTransformer:
From the given IRIs, statements are
+generated that ask for complete information on exactly these resources and
+values. For any given resource IRI, only the values present in
+valueObjectIris are queried. This is achieved by using SPARQL's
+VALUES expression for the main resource and dependent resources, as well as
+for values.
+
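Generating such a VALUES block from a set of IRIs can be sketched as follows; the function name `valuesBlock` and the example IRIs are illustrative:

```scala
// Hypothetical sketch: generating a SPARQL VALUES block that restricts a
// variable to the IRIs returned by the prequery.
def valuesBlock(variable: String, iris: Seq[String]): String =
  iris.map(iri => s"<$iri>").mkString(s"VALUES $variable { ", " ", " }")

valuesBlock("?page", Seq("http://rdfh.ch/0803/r1", "http://rdfh.ch/0803/r2"))
// == "VALUES ?page { <http://rdfh.ch/0803/r1> <http://rdfh.ch/0803/r2> }"
```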
Processing the Main Query's results
+
To do the permission checking, the results of the main query are passed to
+ConstructResponseUtilV2.splitMainResourcesAndValueRdfData,
+which transforms a SparqlConstructResponse (a set of RDF triples)
+into a structure organized by main resource IRIs. In this structure, dependent
+resources and values are nested and can be accessed via their main resource,
+and resources and values that the user does not have permission to see are
+filtered out. As a result, a page of results may contain fewer than the maximum
+allowed number of results per page, even if more pages of results are available.
+
MainQueryResultProcessor.getRequestedValuesFromResultsWithFullGraphPattern
+then filters out values that the user did not explicitly ask for in the input
+query.
+
Finally, ConstructResponseUtilV2.createApiResponse transforms the query
+results into an API response (a ReadResourcesSequenceV2). If the number
+of main resources found (even if filtered out because of permissions) is equal
+to the maximum allowed page size, the predicate
+knora-api:mayHaveMoreResults: true is included in the response.
+
Inference
+
Gravsearch queries support a subset of RDFS reasoning
+(see Inference in the API documentation on Gravsearch).
+This is implemented as follows:
+
To simulate RDF inference, the API expands all rdfs:subClassOf and rdfs:subPropertyOf statements
+using UNION statements for all subclasses and subproperties from the ontologies
+(equivalent to rdfs:subClassOf* and rdfs:subPropertyOf*).
+Similarly, the API replaces knora-api:standoffTagHasStartAncestor with knora-base:standoffTagHasStartParent*.
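A minimal sketch of this expansion, assuming a precomputed map from each base class to its (transitive) subclasses; the helper and the serialized output are illustrative only:

```scala
// Simulate rdfs:subClassOf* by expanding one rdf:type statement into a UNION
// over the base class and all of its known subclasses.
def expandSubClasses(subject: String, baseClass: String,
                     subClassesOf: Map[String, Set[String]]): String = {
  val classes = subClassesOf.getOrElse(baseClass, Set.empty) + baseClass
  classes.toSeq.sorted.map(c => s"{ $subject rdf:type $c . }").mkString(" UNION ")
}
```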
+
Optimisation of generated SPARQL
+
The triplestore-specific transformers in SparqlTransformer.scala can run optimisations on the generated SPARQL, in
+the method optimiseQueryPatterns inherited from WhereTransformer. For example, moveLuceneToBeginning moves
+Lucene queries to the beginning of the block in which they occur.
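The kind of reordering performed by moveLuceneToBeginning can be sketched with simplified stand-in pattern types (the real transformer works on Knora's query-pattern AST):

```scala
sealed trait QueryPattern
case class LucenePattern(searchTerm: String) extends QueryPattern
case class StatementPattern(s: String, p: String, o: String) extends QueryPattern

// Move full-text (Lucene) patterns to the front of a block, keeping the
// relative order of the remaining statements unchanged.
def moveLuceneFirst(patterns: Seq[QueryPattern]): Seq[QueryPattern] = {
  val (lucene, rest) = patterns.partition(_.isInstanceOf[LucenePattern])
  lucene ++ rest
}
```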
+
Query Optimization by Topological Sorting of Statements
+
In Jena Fuseki, the performance of a query highly depends on the order of the query statements.
+For example, a query such as the one below:
takes a very long time with Fuseki. The performance of this query can be improved
+by moving up the statements with literal objects that are not dependent on any other statement:
Since users cannot be expected to know about the performance characteristics of triplestores in order to write efficient queries,
+an optimization method that automatically rearranges the statements of incoming queries has been implemented.
+Upon receiving the Gravsearch query, the algorithm converts the query to a graph. For each statement pattern,
+the subject of the statement is the origin node, the predicate is a directed edge, and the object
+is the target node. For the query above, this conversion would result in the following graph:
The algorithm returns the nodes of the graph ordered in several layers, where the
+root element ?letter is in layer 0, [?date, ?person1, ?person2] are in layer 1, [?gnd1, ?gnd2] in layer 2, and the
+leaf nodes [(DE-588)118531379, (DE-588)118696149] are given in the last layer (i.e. layer 3).
+According to Kahn's algorithm, there can be multiple valid topological orders. The graph in the example
+above has 24 valid topological orders. Here are two of them (nodes are ordered from left to right, from
+highest to lowest order):
From all valid topological orders, one is chosen based on certain criteria; for example, the leaf node should not
+belong to a statement that has predicate rdf:type, since that could match all resources of the specified type.
+Once the best order is chosen, it is used to re-arrange the query statements. Starting from the last leaf node, i.e.
+(DE-588)118696149, the method finds the statement pattern which has this node as its object, and brings this statement
+to the top of the query. This rearrangement continues so that the statements with the fewest dependencies on other
+statements are all brought to the top of the query. The resulting query is as follows:
Note that the position of the FILTER statements does not play a significant role in the optimization.
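The layering described above is what Kahn's algorithm produces. A minimal sketch over subject-to-object edges, assuming an acyclic graph:

```scala
import scala.collection.mutable

// Kahn's algorithm: repeatedly emit nodes with no remaining incoming edges.
// For statement patterns, edges run from each subject to its object.
def topologicalOrder(edges: Seq[(String, String)]): Seq[String] = {
  val nodes = edges.flatMap { case (s, o) => Seq(s, o) }.distinct
  val inDegree = mutable.Map(nodes.map(_ -> 0): _*)
  edges.foreach { case (_, o) => inDegree(o) += 1 }
  val queue = mutable.Queue(nodes.filter(n => inDegree(n) == 0): _*)
  val order = mutable.ArrayBuffer.empty[String]
  while (queue.nonEmpty) {
    val n = queue.dequeue()
    order += n
    edges.collect { case (`n`, o) => o }.foreach { o =>
      inDegree(o) -= 1
      if (inDegree(o) == 0) queue.enqueue(o)
    }
  }
  order.toSeq
}
```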
+
If a Gravsearch query contains statements in UNION, OPTIONAL, MINUS, or FILTER NOT EXISTS, they are reordered
+by defining a graph per block. For example, consider the following query with UNION:
This would result in one graph per block of the UNION. Each graph is then sorted, and the statements of its
+block are rearranged according to the topological order of its graph. This is the result:
The topological sorting algorithm can only be used for DAGs (directed acyclic graphs). However,
+a Gravsearch query can contain statements that result in a cyclic graph, e.g.:
Add any SPARQL templates you need to src/main/twirl/queries/sparql/v2,
+using the Twirl template
+engine.
+
Write Responder Request and Response Messages
+
Add a file to the org.knora.webapi.messages.v2.responder
+package, containing case classes for your responder's request and
+response messages. Add a trait that the responder's request messages
+extend. Each request message type should contain a UserADM.
+
Request and response messages should be designed following the patterns described
+in JSON-LD Parsing and Formatting. Each responder's
+request messages should extend a responder-specific trait, so that
+ResponderManager will know which responder to route those messages to.
+
Write a Responder
+
Write a Pekko actor class that extends org.knora.webapi.responders.Responder,
+and add it to the org.knora.webapi.responders.v2 package.
+
Give your responder a receive(msg: YourCustomType) method that handles each of your
+request message types by generating a Future containing a response message.
+
Add the path of your responder to the org.knora.webapi.responders package object,
+and add code to ResponderManager to instantiate the new responder. Then add a case to
+the receive method in ResponderManager, to match messages that extend your request
+message trait, and pass them to that responder's receive method.
+The responder's resulting Future must be passed to ActorUtil.future2Message.
+See Error Handling for details.
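The message-routing idea can be sketched without an actor system, using simplified, hypothetical message types; the real ResponderManager matches on the responder-specific trait and forwards the message to that responder's receive method:

```scala
// Each responder has a marker trait for its request messages.
trait ResponderRequestV2
trait OntologiesResponderRequestV2 extends ResponderRequestV2
case class GetOntologyRequestV2(ontologyIri: String) extends OntologiesResponderRequestV2

// Routing is a pattern match on the responder-specific trait.
def route(msg: ResponderRequestV2): String = msg match {
  case _: OntologiesResponderRequestV2 => "ontologiesResponderV2"
  case other                           => s"unhandled: $other"
}
```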
+
Write a Route
+
Add a class to the org.knora.webapi.routing.v2 package for your
+route, using the Pekko HTTP Routing DSL.
+See the routes in that package for examples. Typically, each route
+will construct a responder request message and pass it to
+RouteUtilV2.runRdfRouteWithFuture to handle the request.
+
Finally, add your route's knoraApiPath function to the apiRoutes member
+variable in KnoraService. Any exception thrown inside the route will
+be handled by the KnoraExceptionHandler, so that the correct client
+response (including the HTTP status code) will be returned.
Knora provides a utility object called JsonLDUtil, which wraps the
+titanium-json-ld Java library, and parses JSON-LD text to a
+Knora data structure called JsonLDDocument. These classes provide commonly needed
+functionality for extracting and validating data from JSON-LD documents, as well
+as for constructing new documents.
+
Parsing JSON-LD
+
A route that expects a JSON-LD request must first parse the JSON-LD using
+JsonLDUtil. For example, this is how ValuesRouteV2 parses a JSON-LD request to create a value:
This is done in a Future, because the processing of JSON-LD input
+could in itself involve sending messages to responders.
+
Each request message case class (in this case CreateValueRequestV2) has a companion object
+that implements the KnoraJsonLDRequestReaderV2 trait:
+
/**
+ * A trait for objects that can generate case class instances based on JSON-LD input.
+ *
+ * @tparam C the type of the case class that can be generated.
+ */
+trait KnoraJsonLDRequestReaderV2[C] {
+    /**
+     * Converts JSON-LD input into a case class instance.
+     *
+     * @param jsonLDDocument   the JSON-LD input.
+     * @param apiRequestID     the UUID of the API request.
+     * @param requestingUser   the user making the request.
+     * @param responderManager a reference to the responder manager.
+     * @param storeManager     a reference to the store manager.
+     * @param settings         the application settings.
+     * @param log              a logging adapter.
+     * @param timeout          a timeout for `ask` messages.
+     * @param executionContext an execution context for futures.
+     * @return a case class instance representing the input.
+     */
+    def fromJsonLD(jsonLDDocument: JsonLDDocument,
+                   apiRequestID: UUID,
+                   requestingUser: UserADM,
+                   responderManager: ActorRef,
+                   storeManager: ActorRef,
+                   settings: KnoraSettingsImpl,
+                   log: LoggingAdapter)(implicit timeout: Timeout,
+                                        executionContext: ExecutionContext): Future[C]
+}
+
+
This means that the companion object has a method fromJsonLD that takes a
+JsonLDDocument and returns an instance of the case class. The fromJsonLD method
+can use the functionality of the JsonLDDocument data structure for extracting
+and validating the content of the request. For example, JsonLDObject.requireStringWithValidation
+gets a required member of a JSON-LD object, and validates it using a function
+that is passed as an argument. Here is an example of getting and validating
+a SmartIri:
The validation function (in this case stringFormatter.toSmartIriWithErr) has to take
+two arguments: a string to be validated, and a function that throws an exception
+if the string is invalid. The return value of requireStringWithValidation is the
+return value of the validation function, which in this case is a SmartIri. If
+the string is invalid, requireStringWithValidation throws BadRequestException.
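A hedged mimic of this pattern (not the actual JsonLDObject API) shows how the required member, the validation function, and the error callback interact:

```scala
// Mimic: fetch a required member and validate it; the validation function
// receives the raw string and a callback that throws on invalid input.
def requireStringWithValidation[T](obj: Map[String, String], key: String)
                                  (validate: (String, () => Nothing) => T): T = {
  val raw = obj.getOrElse(key, throw new IllegalArgumentException(s"Missing member: $key"))
  validate(raw, () => throw new IllegalArgumentException(s"Invalid value for $key: $raw"))
}
```

Here the callback plays the role of the second argument described above: the validation function calls it when the string is invalid, and the resulting exception propagates to the caller.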
+
It is also possible to get and validate an optional JSON-LD object member:
Here JsonLDObject.maybeStringWithValidation returns an Option that contains
+the return value of the validation function (DateEraV2.parse) if it was given,
+otherwise None.
+
Returning a JSON-LD Response
+
Each API response is represented by a message class that extends
+KnoraJsonLDResponseV2, which has a method toJsonLDDocument that specifies
+the target ontology schema. The implementation of this method constructs a JsonLDDocument,
+in which all object keys are full IRIs (no prefixes are used), but in which
+the JSON-LD context also specifies the prefixes that will be used when the
+document is returned to the client. The function JsonLDUtil.makeContext
+is a convenient way to construct the JSON-LD context.
+
Since toJsonLDDocument has to return an object that uses the specified
+ontology schema, the recommended design is to separate schema conversion as much
+as possible from JSON-LD generation. As a first step, schema conversion (or at the very
+least, the conversion of Knora type IRIs to the target schema) can be done via an
+implementation of KnoraReadV2:
+
/**
+ * A trait for read wrappers that can convert themselves to external schemas.
+ *
+ * @tparam C the type of the read wrapper that extends this trait.
+ */
+trait KnoraReadV2[C <: KnoraReadV2[C]] {
+    this: C =>
+    def toOntologySchema(targetSchema: ApiV2Schema): C
+}
+
+
This means that the response message class has the method toOntologySchema, which returns
+a copy of the same message, with Knora type IRIs (and perhaps other content) adjusted
+for the target schema. (See Smart IRIs on how to convert Knora
+type IRIs to the target schema.)
+
The response message class could then have a private method called generateJsonLD, which
+generates a JsonLDDocument that has the correct structure for the target schema, like
+this:
Most routes complete by calling RouteUtilV2.runRdfRouteWithFuture, which calls
+the response message's toJsonLDDocument method. The runRdfRouteWithFuture function
+has a parameter that enables the route to select the schema that should be used in
+the response. It is up to each route to determine what the appropriate response schema
+should be. Some routes support only one response schema. Others allow the client
+to choose. To use the schema requested by the client, the route can call
+RouteUtilV2.getOntologySchema:
RouteUtilV2.runRdfRouteWithFuture implements
+HTTP content negotiation. After
+determining the client's preferred format, it asks the KnoraResponseV2 to convert
+itself into that format. KnoraResponseV2 has an abstract format method, whose implementations
+select the most efficient conversion between the response message's internal
+representation (which could be JSON-LD or Turtle) and the requested format.
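The format selection step can be sketched as follows; the media types correspond to the RDF serializations mentioned above, while the function name and the JSON-LD default are assumptions for the example:

```scala
// Pick a response serialization from the Accept header's media types,
// falling back to JSON-LD when nothing matches.
def chooseRdfFormat(acceptHeader: String): String =
  acceptHeader.split(",").map(_.trim.takeWhile(_ != ';')).collectFirst {
    case "application/ld+json" => "JSON-LD"
    case "text/turtle"         => "Turtle"
    case "application/rdf+xml" => "RDF/XML"
  }.getOrElse("JSON-LD")
```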
The core of Knora's ontology management logic is OntologyResponderV2.
+It is responsible for:
+
+
Loading ontologies from the triplestore when Knora starts.
+
Maintaining an ontology cache to improve performance.
+
Returning requested ontology entities from the cache. Requests for ontology
+ information never access the triplestore.
+
Creating and updating ontologies in response to API requests.
+
Ensuring that all user-created ontologies are consistent and conform to knora-base.
+
+
When Knora starts, it loads all ontologies from the triplestore into the ontology
+cache, storing them in suitable Scala data structures that include indexes of
+relations between entities (e.g. rdfs:subClassOf relations), to facilitate
+validity checks.
The ontology responder assumes that nothing except itself modifies ontologies
+in the triplestore while Knora is running. Therefore, the ontology cache is updated
+only when the ontology responder processes a request to update an ontology.
+
By design, the ontology responder can update only one ontology entity per request,
+to simplify the necessary validity checks. This requires the client to
+construct an ontology by submitting a sequence of requests in a certain order,
+as explained in
+Ontology Updates.
+
The ontology responder mainly works with ontologies in the internal schema.
+However, it knows that some entities in built-in ontologies have hard-coded
+definitions in external schemas, and it checks the relevant
+transformation rules and returns those entities directly when they are requested
+(see Generation of Ontologies in External Schemas).
As explained in API Schema,
+Knora can represent the same RDF data in different forms: an "internal schema"
+for use in the triplestore, and different "external schemas" for use in Knora
+API v2. Different schemas use different IRIs, as explained in
+Knora IRIs. Internally,
+Knora uses a SmartIri class to convert IRIs between
+schemas.
+
The data type representing a schema itself is OntologySchema, which
+uses the sealed trait
+pattern:
+
package org.knora.webapi
+
+/**
+ * Indicates the schema that a Knora ontology or ontology entity conforms to.
+ */
+sealed trait OntologySchema
+
+/**
+ * The schema of DSP ontologies and entities that are used in the triplestore.
+ */
+case object InternalSchema extends OntologySchema
+
+/**
+ * The schema of DSP ontologies and entities that are used in API v2.
+ */
+sealed trait ApiV2Schema extends OntologySchema
+
+/**
+ * The simple schema for representing DSP ontologies and entities. This schema represents values as literals
+ * when possible.
+ */
+case object ApiV2Simple extends ApiV2Schema
+
+/**
+ * The default (or complex) schema for representing DSP ontologies and entities. This
+ * schema always represents values as objects.
+ */
+case object ApiV2Complex extends ApiV2Schema
+
+/**
+ * A trait representing options that can be submitted to configure an ontology schema.
+ */
+sealed trait SchemaOption
+
+/**
+ * A trait representing options that affect the rendering of markup when text values are returned.
+ */
+sealed trait MarkupRendering extends SchemaOption
+
+/**
+ * Indicates that markup should be rendered as XML when text values are returned.
+ */
+case object MarkupAsXml extends MarkupRendering
+
+/**
+ * Indicates that markup should not be returned with text values, because it will be requested
+ * separately as standoff.
+ */
+case object MarkupAsStandoff extends MarkupRendering
+
+/**
+ * Indicates that no markup should be returned with text values. Used only internally.
+ */
+case object NoMarkup extends MarkupRendering
+
+/**
+ * Utility functions for working with schema options.
+ */
+object SchemaOptions {
+    /**
+     * A set of schema options for querying all standoff markup along with text values.
+     */
+    val ForStandoffWithTextValues: Set[SchemaOption] = Set(MarkupAsXml)
+
+    /**
+     * A set of schema options for querying standoff markup separately from text values.
+     */
+    val ForStandoffSeparateFromTextValues: Set[SchemaOption] = Set(MarkupAsStandoff)
+
+    /**
+     * Determines whether standoff should be queried when a text value is queried.
+     *
+     * @param targetSchema  the target API schema.
+     * @param schemaOptions the schema options submitted with the request.
+     * @return `true` if standoff should be queried.
+     */
+    def queryStandoffWithTextValues(targetSchema: ApiV2Schema, schemaOptions: Set[SchemaOption]): Boolean = {
+        targetSchema == ApiV2Complex && !schemaOptions.contains(MarkupAsStandoff)
+    }
+
+    /**
+     * Determines whether markup should be rendered as XML.
+     *
+     * @param targetSchema  the target API schema.
+     * @param schemaOptions the schema options submitted with the request.
+     * @return `true` if markup should be rendered as XML.
+     */
+    def renderMarkupAsXml(targetSchema: ApiV2Schema, schemaOptions: Set[SchemaOption]): Boolean = {
+        targetSchema == ApiV2Complex && !schemaOptions.contains(MarkupAsStandoff)
+    }
+
+    /**
+     * Determines whether markup should be rendered as standoff, separately from text values.
+     *
+     * @param targetSchema  the target API schema.
+     * @param schemaOptions the schema options submitted with the request.
+     * @return `true` if markup should be rendered as standoff.
+     */
+    def renderMarkupAsStandoff(targetSchema: ApiV2Schema, schemaOptions: Set[SchemaOption]): Boolean = {
+        targetSchema == ApiV2Complex && schemaOptions.contains(MarkupAsStandoff)
+    }
+}
+
+
This class hierarchy allows method declarations to restrict the schemas
+they accept. A method that can accept any schema can take a parameter of type
+OntologySchema, while a method that accepts only external schemas can take
+a parameter of type ApiV2Schema. For examples, see Content Wrappers.
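For illustration, here is a sketch using minimal stand-ins for the types above: because the parameter type is ApiV2Schema, calling the method with InternalSchema is a compile-time error (the returned URL segments are invented for the example):

```scala
// Minimal stand-ins for the schema types, to keep the sketch self-contained.
sealed trait OntologySchema
case object InternalSchema extends OntologySchema
sealed trait ApiV2Schema extends OntologySchema
case object ApiV2Simple extends ApiV2Schema
case object ApiV2Complex extends ApiV2Schema

// Accepts only external schemas; the match is exhaustive over ApiV2Schema.
def schemaSegment(schema: ApiV2Schema): String = schema match {
  case ApiV2Simple  => "simple/v2"
  case ApiV2Complex => "v2"
}
```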
+
Generation of Ontologies in External Schemas
+
Ontologies are stored only in the internal schema, and are converted on the fly
+to external schemas. For each external schema, there is a Scala object in
+org.knora.webapi.messages.v2.responder.ontologymessages that provides rules
+for this conversion:
+
+
KnoraApiV2SimpleTransformationRules for the API v2 simple schema
+
KnoraApiV2WithValueObjectsTransformationRules for the API v2 complex schema
+
+
Since these are Scala objects rather than classes, they are initialised before
+the Akka ActorSystem starts, and therefore need a special instance of
+Knora's StringFormatter class (see Smart IRIs).
+
Each of these rule objects implements this trait:
+
/**
+ * A trait for objects that provide rules for converting an ontology from the internal schema to an external schema.
+ * See also [[OntologyConstants.CorrespondingIris]].
+ */
+trait OntologyTransformationRules {
+    /**
+     * The metadata to be used for the transformed ontology.
+     */
+    val ontologyMetadata: OntologyMetadataV2
+
+    /**
+     * Properties to remove from the ontology before converting it to the target schema.
+     * See also [[OntologyConstants.CorrespondingIris]].
+     */
+    val internalPropertiesToRemove: Set[SmartIri]
+
+    /**
+     * Classes to remove from the ontology before converting it to the target schema.
+     */
+    val internalClassesToRemove: Set[SmartIri]
+
+    /**
+     * After the ontology has been converted to the target schema, these cardinalities must be
+     * added to the specified classes.
+     */
+    val externalCardinalitiesToAdd: Map[SmartIri, Map[SmartIri, KnoraCardinalityInfo]]
+
+    /**
+     * Classes that need to be added to the ontology after converting it to the target schema.
+     */
+    val externalClassesToAdd: Map[SmartIri, ReadClassInfoV2]
+
+    /**
+     * Properties that need to be added to the ontology after converting it to the target schema.
+     * See also [[OntologyConstants.CorrespondingIris]].
+     */
+    val externalPropertiesToAdd: Map[SmartIri, ReadPropertyInfoV2]
+}
+
+
These rules are applied to knora-base as well as to user-created ontologies.
+For example, knora-base:Resource has different cardinalities depending on its
+schema (knora-api:Resource has an additional cardinality on knora-api:hasIncomingLink),
+and this is therefore also true of its user-created subclasses. The transformation
+is implemented in two places:
+
+
In the implementations of the toOntologySchema method in classes defined in
+ OntologyMessagesV2.scala: ReadOntologyV2, ReadClassInfoV2, ClassInfoContentV2,
+ PropertyInfoContentV2, and OntologyMetadataV2.
+
In OntologyResponderV2.getEntityInfoResponseV2, which handles requests for
+ specific ontology entities. If the requested entity is hard-coded in a transformation
+ rule, this method returns the hard-coded external entity, otherwise it returns the relevant
+ internal entity.
DSP-API v2 requests and responses are RDF documents. Any API v2
+ response can be returned as JSON-LD,
+ Turtle,
+ or RDF/XML.
+
Each class or property used in a request or response has a definition in an ontology, which Knora can serve.
+
Response formats are reused for different requests whenever
+ possible, to minimise the number of different response formats a
+ client has to handle. For example, any request for one or more
+ resources (such as a search result, or a request for one specific
+ resource) returns a response in the same format.
+
Response size is limited by design. Large amounts of data must be
+ retrieved by requesting small pages of data, one after the other.
+
Responses that provide data are distinct from responses that provide
+ definitions (i.e. ontology entities). Data responses indicate which
+ types are used, and the client can request information about these
+ types separately.
+
+
API Schemas
+
The types used in the triplestore are not exposed directly in the API.
+Instead, they are mapped onto API 'schemas'. Two schemas are currently
+provided.
+
+
A complex schema, which is suitable both for reading and for editing
+ data. The complex schema represents values primarily as complex objects.
+
A simple schema, which is suitable for reading data but not for
+ editing it. The simple schema facilitates interoperability between
+ DSP ontologies and non-DSP ontologies, since it represents
+ values primarily as literals.
+
+
Each schema has its own type IRIs, which are derived from the ones used
+in the triplestore. For details of these different IRI formats, see
+Knora IRIs.
+
Implementation
+
JSON-LD Parsing and Formatting
+
Each API response is represented by a class that extends
+KnoraResponseV2, which has a method toJsonLDDocument that specifies
+the target schema. It is currently up to each route to determine what
+the appropriate response schema should be. Some routes will support only
+one response schema. Others will allow the client to choose, and there
+will be one or more standard ways for the client to specify the desired
+response schema.
+
A route calls RouteUtilV2.runRdfRoute, passing a request message and
+a response schema. When RouteUtilV2 gets the response message from the
+responder, it calls toJsonLDDocument on it, specifying that schema.
+The response message returns a JsonLDDocument, which is a simple data
+structure that is then converted to Java objects and passed to the
+JSON-LD Java library for formatting. In general, toJsonLDDocument is
+implemented in two stages: first the object converts itself to the
+target schema, and then the resulting object is converted to a
+JsonLDDocument.
+
A route that receives JSON-LD requests should use
+JsonLDUtil.parseJsonLD to convert each request to a JsonLDDocument.
Whenever possible, the same data structures are used for input and
+output. Often more data is available in output than in input. For
+example, when a value is read from the triplestore, its IRI is
+available, but when it is being created, it does not yet have an IRI. In
+such cases, there is a class like ValueContentV2, which represents the
+data that is used both for input and for output. When a value is read, a
+ValueContentV2 is wrapped in a ReadValueV2, which additionally
+contains the value's IRI. When a value is created, it is wrapped in a
+CreateValueV2, which has the resource IRI and the property IRI, but
+not the value IRI.
+
A Read* wrapper can be wrapped in another Read* wrapper; for
+example, a ReadResourceV2 contains ReadValueV2 objects.
+
Each *Content* class should extend KnoraContentV2 and thus have a
+toOntologySchema method for converting itself between internal and
+external schemas, in either direction.
+
Each Read* wrapper class should have a method for converting itself to
+JSON-LD in a particular external schema. If the Read* wrapper is a
+KnoraResponseV2, this method is toJsonLDDocument.
+
Smart IRIs
+
Usage
+
The SmartIri trait can be used to parse and validate IRIs, and in
+particular for converting Knora type IRIs between internal and external
+schemas. It validates each IRI it parses. To use it, import the
+following:
You can then use methods such as SmartIri.isKnoraApiV2EntityIri and
+SmartIri.getProjectCode to obtain information about the IRI. To
+convert it to another schema, call SmartIri.toOntologySchema.
+Converting a non-Knora IRI returns the same IRI.
+
If the IRI represents a Knora internal value class such as
+knora-base:TextValue, converting it to the ApiV2Simple schema will
+return the corresponding simplified type, such as xsd:string. But this
+conversion is not performed in the other direction (external to
+internal), since this would require knowledge of the context in which
+the IRI is being used.
+
The performance penalty for using a SmartIri instead of a string is
+very small. Instances are automatically cached once they are
+constructed. Parsing and caching a SmartIri instance takes about 10-20
+µs, and retrieving a cached SmartIri takes about 1 µs.
+
There is no advantage to using SmartIri for data IRIs, since they are
+not schema-specific (and are not cached). If a data IRI has been
+received from a client request, it is better just to validate it using
+StringFormatter.validateAndEscapeIri.
+
Smart IRI Implementation
+
The smart IRI implementation, SmartIriImpl, is nested in the
+StringFormatter class, because it uses Knora's
+hostname, which isn't available until the Akka ActorSystem has started.
+However, this means that the type of a SmartIriImpl instance is
+dependent on the instance of StringFormatter that constructed it.
+Therefore, instances of SmartIriImpl created by different instances of
+StringFormatter can't be compared directly.
+
There are in fact two instances of StringFormatter:
+
+
one returned by StringFormatter.getGeneralInstance which is
+ available after Akka has started and has the API server's hostname
+ (and can therefore provide SmartIri instances capable of parsing
+ IRIs containing that hostname). This instance is used throughout the
+ DSP-API server.
+
one returned by StringFormatter.getInstanceForConstantOntologies,
+ which is available before Akka has started, and is used only by the
+ hard-coded constant knora-api ontologies.
+
+
This is the reason for the existence of the SmartIri trait, which is a
+top-level definition and has its own equals and hashCode methods.
+Instances of SmartIri can thus be compared (e.g. to use them as unique
+keys in collections), regardless of which instance of StringFormatter
+created them.
DSP-API does not require the triplestore to perform inference.
+Triplestores implement inference quite differently, so taking advantage of it
+would require triplestore-specific code, which would be hard to maintain.
+Instead, the API simulates inference for each Gravsearch query, so that the expected results are returned.
+
Gravsearch queries currently need to do the following:
+
+
Given a base property, find triples using a subproperty as predicate, and
+ return the subproperty used in each case.
+
Given a base class, find triples using an instance of subclass as subject or
+ object, and return the subclass used in each case.
+
+
Without inference, this can be done using property path syntax.
Checks that the queried resource belongs to a subclass of knora-base:Resource.
+
Returns the class that the resource explicitly belongs to.
+
Finds the Knora values attached to the resource, and returns each value along with
+    the property that explicitly attaches it to the resource.
+
However, such a query is very inefficient.
+Instead, the API does inference on the query, so that the relevant information can be found in a timely manner.
+
For this, the query is analyzed to check which project ontologies are relevant to the query.
+If an ontology is not relevant to a query,
+then all class and property definitions of this ontology are disregarded for inference.
+
Then, each statement that requires inference (i.e. that could be phrased with property path syntax, as described above)
+is cross-referenced with the relevant ontologies,
+to see which property/class definitions would fit the statement according to the rules of RDF inference.
+And each of those definitions is added to the query as a separate UNION statement.
+
E.g.: Given the resource class B is a subclass of A and the property hasY is a subproperty of hasX,
+then the following query
Value versions are a linked list, starting with the current version. Each value points to
+the previous version via knora-base:previousValue. The resource points only to the current
+version.
+
Past value versions are queried in getResourcePropertiesAndValues.scala.txt, which can
+take a timestamp argument. Given the current value version, we must find the most recent
+past version that existed at the target date.
+
First, we get the set of previous values that were created on or before the target
+date:
The resulting versions are now possible values of ?valueObject. Next, out of this set
+of versions, we exclude all versions except for the most recent one. We do this by checking,
+for each ?valueObject, whether there is another version, ?otherValueObject, that is more
+recent and was also created before the target date. If such a version exists, we exclude
+the one we are looking at.
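The same selection logic can be sketched in plain Scala over the linked list of versions; previousValue mirrors knora-base:previousValue, and integer timestamps stand in for real dates:

```scala
case class ValueVersion(iri: String, creationDate: Int, previousValue: Option[ValueVersion])

// Walk the chain from the current version (newest first) and return the first
// version created on or before the target date, if any.
def versionAt(current: ValueVersion, targetDate: Int): Option[ValueVersion] = {
  val history = Iterator.iterate(Option(current))(_.flatMap(_.previousValue))
    .takeWhile(_.isDefined).flatten.toList
  history.find(_.creationDate <= targetDate)
}
```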
The DSP-API specific configuration and scripts for Sipi are in the
+sipi subdirectory of the DSP-API source tree. See the README.md for
+instructions on how to start Sipi with DSP-API.
+
Lua Scripts
+
DSP-API v2 uses custom Lua scripts to control Sipi. These scripts can be
+found in sipi/scripts in the DSP-API source tree.
+
Each of these scripts expects a JSON Web Token in the
+URL parameter token. In all cases, the token must be signed by DSP-API,
+it must have an expiration date and not have expired, its issuer must equal
+the hostname and port of the API, and its audience must include Sipi.
+The other contents of the expected tokens are described below.
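The token checks listed above can be sketched as a predicate over a simplified claim model (hypothetical types; in reality the checks run in the Lua scripts after the token's signature has been verified):

```scala
case class TokenClaims(issuer: String, audience: Set[String], expiresAt: Long)

// A token is acceptable if it has not expired, its issuer is the API's
// hostname and port, and its audience includes Sipi.
def tokenAcceptable(claims: TokenClaims, now: Long, apiHostAndPort: String): Boolean =
  claims.expiresAt > now &&
    claims.issuer == apiHostAndPort &&
    claims.audience.contains("Sipi")
```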
+
upload.lua
+
The upload.lua script is available at Sipi's upload route. It processes one
+or more file uploads submitted to Sipi. It converts uploaded images to JPEG 2000
+format, and stores them in Sipi's tmp directory. The usage of this script is described in
+Upload Files to Sipi.
+
upload_without_processing.lua
+
The upload_without_processing.lua script is available at Sipi's upload_without_processing route.
+It receives files submitted to Sipi but does not process them.
+Instead, it stores them as is in Sipi's tmp directory.
+
store.lua
+
The store.lua script is available at Sipi's store route. It moves a file
+from temporary to permanent storage. It expects an HTTP POST request containing
+application/x-www-form-urlencoded data with the parameters prefix (the
+project shortcode) and filename (the internal Sipi-generated filename of the file
+to be moved).
+
The JWT sent to this script must contain the key knora-data, whose value
+must be a JSON object containing:
+
+
permission: must be StoreFile
+
prefix: the project shortcode submitted in the form data
+
filename: the filename submitted in the form data
+
+
delete_temp_file.lua
+
The delete_temp_file.lua script is available at Sipi's delete_temp_file route.
+It is used only if DSP-API rejects a file value update request. It expects an
+HTTP DELETE request, with a filename as the last component of the URL.
+
The JWT sent to this script must contain the key knora-data, whose value
+must be a JSON object containing:
+
+
permission: must be DeleteTempFile
+
filename: must be the same as the filename submitted in the URL
+
+
clean_temp_dir.lua
+
The clean_temp_dir.lua script is available at Sipi's clean_temp_dir route.
+When called, it deletes old temporary files from tmp and (recursively) from any subdirectories.
+The maximum allowed age of temporary files can be set in Sipi's configuration file,
+using the parameter max_temp_file_age, which takes a value in seconds.
+
The clean_temp_dir route requires basic authentication.
+
SipiConnector
+
In DSP-API, the org.knora.webapi.iiif.SipiConnector handles all communication
+with Sipi. It blocks while processing each request, to ensure that the number of
+concurrent requests to Sipi is not greater than
+akka.actor.deployment./storeManager/iiifManager/sipiConnector.nr-of-instances.
+If it encounters an error, it returns SipiException.
+
The Image File Upload Workflow
+
+
The client uploads an image file to the upload route, which runs
+ upload.lua. The image is converted to JPEG 2000 and stored in Sipi's tmp
+ directory. In the response, the client receives the JPEG 2000's unique,
+ randomly generated filename.
+
The client submits a JSON-LD request to a DSP-API route (/v2/values or /v2/resources)
+ to create or change a file value. The request includes Sipi's internal filename.
+
During parsing of this JSON-LD request, a StillImageFileValueContentV2
+ is constructed to represent the file value. During the construction of this
+ object, a GetFileMetadataRequestV2 is sent to SipiConnector, which
+ uses Sipi's built-in knora.json route to get the rest of the file's
+ metadata.
+
A responder (ResourcesResponderV2 or ValuesResponderV2) validates
+ the request and updates the triplestore. (If it is ResourcesResponderV2,
+ it asks ValuesResponderV2 to generate SPARQL for the values.)
+
The responder that did the update calls ValueUtilV2.doSipiPostUpdate.
+ If the triplestore update was successful, this method sends
+ MoveTemporaryFileToPermanentStorageRequestV2 to SipiConnector, which
+ makes a request to Sipi's store route. Otherwise, the same method sends
+ DeleteTemporaryFileRequestV2 to SipiConnector, which makes a request
+ to Sipi's delete_temp_file route.
+
+
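The decision made in the last step can be sketched as follows (a Python illustration of the behaviour described above; the actual logic is in `ValueUtilV2.doSipiPostUpdate`, in Scala):

```python
def choose_sipi_request(triplestore_update_succeeded: bool) -> str:
    # Sketch of the post-update decision: a successful triplestore update
    # moves the temporary file to permanent storage; a failed one deletes it.
    if triplestore_update_succeeded:
        return "store"             # Sipi's store route
    return "delete_temp_file"      # Sipi's delete_temp_file route

print(choose_sipi_request(True))   # → store
```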
If the request to DSP-API cannot be parsed, the temporary file is not deleted
+immediately, but it will be deleted during the processing of a subsequent
+request by Sipi's upload route.
+
If Sipi's store route fails, DSP-API returns the SipiException to the client.
+In this case, manual intervention may be necessary to restore consistency
+between DSP-API and Sipi.
+
If Sipi's delete_temp_file route fails, the error is not returned to the client,
+because there is already a DSP-API error that needs to be returned to the client.
+In this case, the Sipi error is simply logged.
The SmartIri trait can be used to parse and validate IRIs, and in
particular to convert Knora type IRIs
between internal and external schemas. It validates each IRI it parses. To use it,
+import the following:
Then, if you have a string representing an IRI, you can convert
it to a SmartIri like this:

```scala
val propertyIri: SmartIri = "http://0.0.0.0:3333/ontology/0001/anything/v2#hasInteger".toSmartIri
```

If the IRI came from a request, use this method to throw a specific
exception if the IRI is invalid:

```scala
val propertyIri: SmartIri = propertyIriStr.toSmartIriWithErr(throw BadRequestException(s"Invalid property IRI: <$propertyIriStr>"))
```
+
You can then use methods such as SmartIri.isKnoraApiV2EntityIri and
+SmartIri.getProjectCode to obtain information about the IRI. To
+convert it to another schema, call SmartIri.toOntologySchema.
+Converting a non-Knora IRI returns the same IRI.
+
If the IRI represents a Knora internal value class such as
+knora-base:TextValue, converting it to the ApiV2Simple schema will
+return the corresponding simplified type, such as xsd:string. But this
+conversion is not performed in the other direction (external to
+internal), since this would require knowledge of the context in which
+the IRI is being used.
+
The performance penalty for using a SmartIri instead of a string is
+very small. Instances are automatically cached once they are
+constructed.
+
There is no advantage to using SmartIri for data IRIs, since they are
+not schema-specific (and are not cached). If a data IRI has been
+received from a client request, it is better just to validate it using
+StringFormatter.validateAndEscapeIri, and represent it as an
+org.knora.webapi.IRI (an alias for String).
+
Implementation
+
The smart IRI implementation, SmartIriImpl, is nested in the
+StringFormatter class, because it uses Knora's
+hostname, which isn't available until the Akka ActorSystem has started.
+However, this means that the Scala type of a SmartIriImpl instance is
+dependent on the instance of StringFormatter that constructed it.
+Therefore, instances of SmartIriImpl created by different instances of
+StringFormatter can't be compared directly.
+
There are in fact two instances of StringFormatter:
+
+
one returned by StringFormatter.getGeneralInstance, which is
+ available after Akka has started and has the API server's hostname
+ (and can therefore provide SmartIri instances capable of parsing
+ IRIs containing that hostname). This instance is used throughout the
+ DSP-API server.
+
one returned by StringFormatter.getInstanceForConstantOntologies,
+ which is available before Akka has started, and is used only by the
+ hard-coded constant knora-api ontologies (see
+ Generation of Ontologies in External Schemas).
+
+
This is the reason for the existence of the SmartIri trait, which is a
+top-level definition and has its own equals and hashCode methods.
+Instances of SmartIri can thus be compared (e.g. to use them as unique
+keys in collections), regardless of which instance of StringFormatter
+created them.
Markup should be stored as RDF, so it can be searched and analysed using the same tools that are used
+ with other data managed by Knora.
+
+
+
In particular, Gravsearch queries should be able
+ to specify search criteria that refer to the markup tags attached to a text, together with
+ any other search criteria relating to the resource that contains the text.
+
+
+
It should be possible to import any XML document into Knora, store the markup as standoff, and
+ at any time export the document as an equivalent XML document.
Since the number of standoff tags that can be attached to a text value is unlimited, standoff is queried
+in pages of a limited size, to avoid requesting huge SPARQL query results from the triplestore.
+
When ResourcesResponderV2 or SearchResponderV2 need to return a text value with all its markup,
+they first query the text value with at most one page of standoff. If the text value has more than one page of
+standoff, ConstructResponseUtilV2.makeTextValueContentV2 then sends a GetRemainingStandoffFromTextValueRequestV2
+message to StandoffResponderV2, which queries the rest of the standoff in the text value, one page at a time.
+The resulting standoff is concatenated together and returned.
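The paging scheme can be sketched as follows (a Python illustration with a hypothetical `query_page` function standing in for the SPARQL query; the real logic lives in `ConstructResponseUtilV2` and `StandoffResponderV2`):

```python
def get_all_standoff(query_page, max_start_index, page_size=100):
    # query_page(offset, limit) stands in for a SPARQL query that returns
    # the standoff tags whose start index lies in [offset, offset + limit).
    # knora-base:valueHasMaxStandoffStartIndex tells us when to stop.
    tags = []
    offset = 0
    while offset <= max_start_index:
        tags.extend(query_page(offset, page_size))
        offset += page_size
    return tags

# Simulated text value with 250 standoff tags, indexed 0..249.
all_tags = list(range(250))
result = get_all_standoff(lambda off, lim: all_tags[off:off + lim], max_start_index=249)
print(len(result))  # → 250
```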
+
To optimise query performance:
+
+
+
Each text value with standoff has the predicate knora-base:valueHasMaxStandoffStartIndex, so that when Knora
+ queries a page of standoff, it knows whether it has reached the last page.
+
+
+
The last path component of the IRI of a standoff tag is the integer object of its
+ knora-base:standoffTagHasStartIndex predicate. When querying standoff, it is necessary to convert
+ the IRI objects of knora-base:standoffTagHasStartParent and knora-base:standoffTagHasEndParent to
+ integer indexes (the start indexes of those tags). Including each tag's start index in its IRI makes it
+ unnecessary to query the parent tags to determine their start indexes.
+
+
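The second optimisation can be illustrated as follows (the IRI shape shown is a hypothetical example; the point is only that a parent tag's start index can be read directly from its IRI, without querying the parent):

```python
def start_index_from_iri(standoff_tag_iri: str) -> int:
    # The last path component of a standoff tag's IRI is the integer
    # object of its knora-base:standoffTagHasStartIndex predicate.
    return int(standoff_tag_iri.rsplit("/", 1)[1])

# Hypothetical standoff tag IRI, for illustration only:
parent_iri = "http://rdfh.ch/0001/a-thing/values/xyz/standoff/42"
print(start_index_from_iri(parent_iri))  # → 42
```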
+
Conversion Between Standoff and XML
+
XMLToStandoffUtil does the low-level conversion of documents between standoff and XML, using a simple
+data structure to represent standoff. This data structure knows nothing about RDF, and each standoff tag
+contains its XML element name and namespace and those of its attributes.
+
In DSP-API, it is possible to define mappings to
+control how standoff/RDF is converted to XML and vice versa. Different mappings can be used to convert the same
+standoff/RDF to different sorts of XML documents. StandoffTagUtilV2 converts between standoff/RDF and XML using
+mappings, delegating the lower-level work to XMLToStandoffUtil.
While knora-admin and salsah-gui have relatively flat class hierarchies,
knora-base contains very complicated, yet highly relevant, inheritance structures.
The following class diagrams model these structures.
For the sake of comprehensibility, the ontology has been split into multiple diagrams,
even though this obscures some of the connections between them.
+
+
Legend
+
dotted lines: the boxes are copies from another diagram.
In the context of DEV-1415: Domain Model
we attempted to gain a clear overview of the DSP's domain,
as implicitly modelled by the ontologies, code, validations, and documentation of DSP-API.

The following document gives a high-level overview of that domain.
+
+
Note
+
+
As a high-level overview, this document does not aim to be exhaustive.
+
Naming is kept as simple as possible,
 while consolidating the different naming schemes in use
 (ontologies, code, API),
 which means that no single naming scheme is strictly followed.
+
The split between V2 and Admin is arbitrary, as the two are intertwined within the system.
 It merely serves to organize the presented entities.
+
+
+
Domain Entities
+
The following diagrams visualize the top-level entities present in the DSP.
The attributes of these entities should be exhaustive.
Cardinalities and validation constraints are normally not depicted.
The indicated relationships are conceptual; in the actual system they are more complicated.
+
Admin
+
erDiagram
+ %% entities
+ User {
+ IRI id
+ string userName "unique"
+ string email "unique"
+ string givenName
+ string familyName
+ string password
+ string language "2 character ISO language code"
+ boolean status
+ boolean systemAdmin
+ }
+ Project {
+ IRI id
+ string shortcode "4 character hex"
+ string shortname "xsd:NCNAME"
+ string longname "optional"
+ langstring description
+ string keywords
+ boolean status
+ boolean selfjoin
+ string logo "optional"
+ string restrictedViewSize
+ string restrictedViewWatermark
+ }
+ Group {
+ IRI id
+ string name
+ langstring description
+ boolean status
+ boolean selfjoin
+ }
+ ListNode {
+ IRI id
+ IRI projectIri "only for root node"
+ langstring labels
+ langstring comments
+ string name
+ boolean isRootNode
+ integer listNodePosition
+ }
+ DefaultObjectAccessPermission {
+ IRI id
+ string hasPermission "the 'RV, V, M, D, CR' string"
+ }
+ AdministrativePermission {
+ IRI id
+ string hasPermission "a different string representation"
+ }
+ Property {}
+ ResourceClass {}
+
+ %% relations
+ User }|--|{ Project: "is member/admin of"
+ User }o--|{ Group: "is member of"
+ Group }o--|| Project: "belongs to"
+ ListNode }o--|| Project: "belongs to"
+ ListNode }o--o{ ListNode: "hasSubListNode"
+ ListNode |o--o| ListNode: "hasRootNode"
    AdministrativePermission }|--o| Project: "points to"
    AdministrativePermission }|--|{ Group: "points to"

    DefaultObjectAccessPermission }|--o{ Group: "points to"
    DefaultObjectAccessPermission }|--|| Project: "points to"
    DefaultObjectAccessPermission }|--o{ Property: "points to"
    DefaultObjectAccessPermission }|--o{ ResourceClass: "points to"
+
Apart from class and property definitions,
+knora-base and knora-admin provide a small number of class instances
+that should be present in any running DSP stack:
Authentication is the process of making sure that someone accessing
something is actually the person they claim to be. The process of
making sure that someone is authorised (i.e. has permission to access
something) is handled as described in Authorisation.
+
Implementation
+
Authentication in Knora is based on HTTP basic authentication,
URL parameters, JSON Web Tokens, and cookies. This means
that on every request (to any of the routes), credentials can be
sent either via the authorization header, URL parameters, or the cookie header.
+
All routes are always accessible, and if no credentials are
provided, a default user is assumed. If credentials are sent and they
are not correct (e.g. wrong username, incorrect password, expired
token), the request ends in an error message.
+
Skipping Authentication
+
Authentication can be skipped entirely, in which case a hardcoded
user (Test User) is assumed. To enable this, set
skip-authentication = true in application.conf.
Attention! GraphDB is no longer supported, so the parts of this
document that relate to it are obsolete.
+
Requirements
+
Knora is designed to prevent inconsistencies in RDF data,
+as far as is practical, in a triplestore-independent way (see
+Triplestore Updates). However, it is also
+useful to enforce consistency constraints in the triplestore itself, for
+two reasons:
+
+
To prevent inconsistencies resulting from bugs in the DSP-API server.
+
To prevent users from inserting inconsistent data directly into the triplestore, bypassing Knora.
+
+
The design of the knora-base ontology supports two ways of specifying
+constraints on data (see knora-base: Consistency Checking
+for details):
+
+
A property definition should specify the types that are allowed as
+ subjects and objects of the property, using
+ knora-base:subjectClassConstraint and (if it is an object
+ property) knora-base:objectClassConstraint. Every subproperty of
 knora-base:hasValue or knora-base:hasLinkTo (i.e. every
 property of a resource that points to a knora-base:Value or to
 another resource) is required to have this constraint, because the
+ DSP-API server relies on it to know what type of object to expect
+ for the property. Use of knora-base:subjectClassConstraint is
+ recommended but not required.
+
A class definition should use OWL cardinalities
+ (see OWL 2 Quick Reference Guide)
+ to indicate the properties that instances of the class are allowed to
+ have, and to constrain the number of objects that each property can
+ have. Subclasses of knora-base:Resource are required to have a
 cardinality for each subproperty of knora-base:hasValue or
 knora-base:hasLinkTo that resources of that class can have.
+
+
Specifically, consistency checking should prevent the following:
+
+
An object property or datatype property has a subject of the wrong
+ class, or an object property has an object of the wrong class
 (GraphDB's consistency checker cannot check the types of literals).
+
An object property has an object that does not exist (i.e. the
+ object is an IRI that is not used as the subject of any statements
+ in the repository). This can be treated as if the object is of the
+ wrong type (i.e. it can cause a violation of
+ knora-base:objectClassConstraint, because there is no compatible
+ rdf:type statement for the object).
+
A class has owl:cardinality 1 or owl:minCardinality 1 on an
+ object property or datatype property, and an instance of the class
+ does not have that property.
+
A class has owl:cardinality 1 or owl:maxCardinality 1 on an
+ object property or datatype property, and an instance of the class
+ has more than one object for that property.
+
An instance of knora-base:Resource has an object property pointing
+ to a knora-base:Value or to another Resource, and its class has
+ no cardinality for that property.
+
An instance of knora-base:Value has a subproperty of
+ knora-base:valueHas, and its class has no cardinality for that
+ property.
+
A datatype property has an empty string as an object.
+
+
Cardinalities in base classes are inherited by derived classes. Derived
+classes can override inherited cardinalities by making them more
+restrictive, i.e. by specifying a subproperty of the one specified in
+the original cardinality.
+
Instances of Resource and Value can be marked as deleted, using the
+property isDeleted. This must be taken into account as follows:
+
+
With owl:cardinality 1 or owl:maxCardinality 1, if the object of
+ the property can be marked as deleted, the property must not have
+ more than one object that has not been marked as deleted. In other
 words, it's OK if there is more than one object, as long as only one of
 them has knora-base:isDeleted false.
+
With owl:cardinality 1 or owl:minCardinality 1, the property
+ must have an object, but it's OK if the property's only object is
+ marked as deleted. We allow this because the subject and object may
+ have different owners, and it may not be feasible for them to
+ coordinate their work. The owner of the object should always be able
+ to mark it as deleted. (It could be useful to notify the owner of
+ the subject when this happens, but that is beyond the scope of
+ consistency checking.)
+
+
Design
+
Ontotext GraphDB provides a
+mechanism for checking the consistency of data in a repository each time
+an update transaction is committed. Knora provides GraphDB-specific
+consistency rules that take advantage of this feature to provide an
+extra layer of consistency checks, in addition to the checks that are
+implemented in Knora.
+
When a repository is created in GraphDB, a set of consistency rules can
+be provided, and GraphDB's consistency checker can be turned on to
+ensure that each update transaction respects these rules, as described
+in the section
+Reasoning
+of the GraphDB documentation. Like custom inference rules, consistency
+rules are defined in files with the .pie filename extension, in a
+GraphDB-specific syntax.
+
We have added rules to the standard RDFS inference rules file
+builtin_RdfsRules.pie, to create the file KnoraRules.pie. The .ttl
+configuration file that is used to create the repository must contain
+these settings:
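The settings in question are along these lines (a sketch based on GraphDB's repository template syntax; prefix declarations are omitted, the path is a placeholder, and the exact template syntax depends on the GraphDB version):

```
owlim:ruleset "/path/to/KnoraRules.pie" ;
owlim:check-for-inconsistencies "true" ;
```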
The path to KnoraRules.pie must be an absolute path. The scripts
+provided with Knora to create test repositories set this path
+automatically.
+
Consistency checking in GraphDB relies on reasoning. GraphDB's reasoning
+is
+Forward-chaining,
+which means that reasoning is applied to the contents of each update,
+before the update transaction is committed, and the inferred statements
+are added to the repository.
+
A GraphDB rules file can contain two types of rules: inference rules and
+consistency rules. Before committing an update transaction, GraphDB
+applies inference rules, then consistency rules. If any of the
+consistency rules are violated, the transaction is rolled back.
The premises are a pattern that tries to match statements found in the
+data. Optional constraints, which are enclosed in square brackets, make
+it possible to specify the premises more precisely, or to specify a
+named graph (see examples below). Consequences are the statements that
+will be inferred if the premises match. A line of hyphens separates
+premises from consequences.
The differences between inference rules and consistency rules are:
+
+
A consistency rule begins with Consistency instead of Id.
+
In a consistency rule, the consequences are optional. Instead of
+ representing statements to be inferred, they represent statements
+ that must exist if the premises are satisfied. In other words, if
+ the premises are satisfied and the consequences are not found, the
+ rule is violated.
+
If a consistency rule doesn't specify any consequences, and the
+ premises are satisfied, the rule is violated.
+
+
Rules use variable names for subjects, predicates, and objects, and they
+can use actual property names.
+
Empty string as object
+
If subject i has a predicate p whose object is an empty string, the
+constraint is violated:
+
Consistency: empty_string
+ i p ""
+ ------------------------------------
+
+
Subject and object class constraints
+
If subject i has a predicate p that requires a subject of type t,
+and i is not a t, the constraint is violated:
+
Consistency: subject_class_constraint
+ p <knora-base:subjectClassConstraint> t
+ i p j
+ ------------------------------------
+ i <rdf:type> t
+
+
If subject i has a predicate p that requires an object of type t,
+and the object of p is not a t, the constraint is violated:
+
Consistency: object_class_constraint
+ p <knora-base:objectClassConstraint> t
+ i p j
+ ------------------------------------
+ j <rdf:type> t
+
+
Cardinality constraints
+
A simple implementation of a consistency rule to check
+owl:maxCardinality 1, for objects that can be marked as deleted, could
+look like this:
+
Consistency: max_cardinality_1_with_deletion_flag
+ i <rdf:type> r
+ r <owl:maxCardinality> "1"^^xsd:nonNegativeInteger
+ r <owl:onProperty> p
+ i p j
+ i p k [Constraint j != k]
+ j <knora-base:isDeleted> "false"^^xsd:boolean
+ k <knora-base:isDeleted> "false"^^xsd:boolean
+ ------------------------------------
+
+
This means: if resource i belongs to a subclass of an owl:Restriction r
with owl:maxCardinality 1 on property p, and the resource has two
different objects for that property, neither of which is marked as
deleted, the rule is violated. Note that this takes advantage of the
fact that Resource and Value have owl:cardinality 1 on isDeleted
(isDeleted must be present even if false), so we do not need to check
whether i is actually something that can be marked as deleted.
+
However, this implementation would be much too slow. We therefore use
+two optimisations suggested by Ontotext:
+
+
Add custom inference rules to make tables (i.e. named graphs) of
+ pre-calculated information about the cardinalities on properties of
+ subjects, and use those tables to simplify the consistency rules.
+
Use the [Cut] constraint to avoid generating certain redundant
+ compiled rules (see Entailment
+ rules).
+
+
For example, to construct a table of subjects belonging to classes that
+have owl:maxCardinality 1 on some property p, we use the following
+custom inference rule:
+
Id: maxCardinality_1_table
+ i <rdf:type> r
+ r <owl:maxCardinality> "1"^^xsd:nonNegativeInteger
+ r <owl:onProperty> p
+ ------------------------------------
+ i p r [Context <onto:_maxCardinality_1_table>]
+
+
The constraint [Context <onto:_maxCardinality_1_table>] means that the
+inferred triples are added to the context (i.e. the named graph)
+http://www.ontotext.com/_maxCardinality_1_table. (Note that we have
+defined the prefix onto as http://www.ontotext.com/ in the
+Prefices section of the rules file.) As the GraphDB documentation on
+Rules
+explains:
+
+
If the context is provided, the statements produced as rule
+consequences are not ‘visible’ during normal query answering. Instead,
+they can only be used as input to this or other rules and only when
+the rule premise explicitly uses the given context.
+
+
Now, to find out whether a subject belongs to a class with that
+cardinality on a given property, we only need to match one triple. The
+revised implementation of the rule
+max_cardinality_1_with_deletion_flag is as follows:
+
Consistency: max_cardinality_1_with_deletion_flag
+ i p r [Context <onto:_maxCardinality_1_table>]
+ i p j [Constraint j != k]
+ i p k [Cut]
+ j <knora-base:isDeleted> "false"^^xsd:boolean
+ k <knora-base:isDeleted> "false"^^xsd:boolean
+ ------------------------------------
+
+
The constraint [Constraint j != k] means that the premises will be
+satisfied only if the variables j and k do not refer to the same
+thing.
+
With these optimisations, the rule is faster by several orders of
+magnitude.
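Semantically, each of these maximum-cardinality rules enforces the same condition, which can be sketched procedurally as follows (an illustration only, not the GraphDB implementation):

```python
def violates_max_cardinality_1(objects):
    # objects: the objects of property p on a subject i. For objects that
    # can be marked as deleted, isDeleted is always present (even if false);
    # objects that cannot be marked as deleted count as not deleted.
    not_deleted = [o for o in objects if not o.get("isDeleted", False)]
    return len(not_deleted) > 1

# Two objects, but one is marked as deleted: no violation.
print(violates_max_cardinality_1(
    [{"isDeleted": True}, {"isDeleted": False}]))  # → False
```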
+
Since properties whose objects can be marked as deleted must be handled
+differently to properties whose objects cannot be marked as deleted, the
+knora-base ontology provides a property called
+objectCannotBeMarkedAsDeleted. All properties in knora-base whose
+objects cannot take the isDeleted flag (including datatype properties)
+should be derived from this property. This is how it is used to check
+owl:maxCardinality 1 for objects that cannot be marked as deleted:
+
Consistency: max_cardinality_1_without_deletion_flag
+ i p r [Context <onto:_maxCardinality_1_table>]
+ p <rdfs:subPropertyOf> <knora-base:objectCannotBeMarkedAsDeleted>
+ i p j [Constraint j != k]
+ i p k [Cut]
+ ------------------------------------
+
+
To check owl:minCardinality 1, we do not care whether the object can
+be marked as deleted, so we can use this simple rule:
+
Consistency: min_cardinality_1_any_object
+ i p r [Context <onto:_minCardinality_1_table>]
+ ------------------------------------
+ i p j
+
+
This means: if a subject i belongs to a class that has
+owl:minCardinality 1 on property p, and i has no object for p,
+the rule is violated.
+
To check owl:cardinality 1, we need two rules: one that checks whether
+there are too few objects, and one that checks whether there are too
+many. To check whether there are too few objects, we don't care whether
+the objects can be marked as deleted, so the rule is the same as
+min_cardinality_1_any_object, except for the cardinality:
+
Consistency: cardinality_1_not_less_any_object
+ i p r [Context <onto:_cardinality_1_table>]
+ ------------------------------------
+ i p j
+
+
To check whether there are too many objects, we need to know whether the
+objects can be marked as deleted or not. In the case where the objects
+can be marked as deleted, the rule is the same as
+max_cardinality_1_with_deletion_flag, except for the cardinality:
+
Consistency: cardinality_1_not_greater_with_deletion_flag
+ i p r [Context <onto:_cardinality_1_table>]
+ i p j [Constraint j != k]
+ i p k [Cut]
+ j <knora-base:isDeleted> "false"^^xsd:boolean
+ k <knora-base:isDeleted> "false"^^xsd:boolean
+ ------------------------------------
+
+
In the case where the objects cannot be marked as deleted, the rule is
+the same as max_cardinality_1_without_deletion_flag, except for the
+cardinality:
+
Consistency: cardinality_1_not_greater_without_deletion_flag
    i p r [Context <onto:_cardinality_1_table>]
    p <rdfs:subPropertyOf> <knora-base:objectCannotBeMarkedAsDeleted>
    i p j [Constraint j != k]
    i p k [Cut]
    ------------------------------------
+
+
Knora allows a subproperty of knora-base:hasValue or
+knora-base:hasLinkTo to be a predicate of a resource only if the
+resource's class has some cardinality for the property. For convenience,
+knora-base:hasValue and knora-base:hasLinkTo are subproperties of
+knora-base:resourceProperty, which is used to check this constraint in
+the following rule:
+
Consistency: resource_prop_cardinality_any
+ i <knora-base:resourceProperty> j
+ ------------------------------------
+ i p j
+ i <rdf:type> r
+ r <owl:onProperty> p
+
+
If resource i has a subproperty of knora-base:resourceProperty, and
i is not a member of a subclass of an owl:Restriction r with a
+cardinality on that property (or on one of its base properties), the
+rule is violated.
+
A similar rule, value_prop_cardinality_any, ensures that if a value
+has a subproperty of knora-base:valueHas, the value's class has some
+cardinality for that property.
Creating, updating and deleting data models (ontologies)
+
Managing projects and users
+
Authentication of clients
+
Authorisation of clients' requests
+
+
DSP-API is developed with Scala and uses the
+Akka framework for message-based concurrency. It is
designed to work with the Apache Jena Fuseki triplestore,
which is compliant with the SPARQL 1.1 Protocol.
+For file storage, it uses Sipi.
+
DSP-API Versions
+
+
DSP-API v2: the latest DSP-API that should be used.
+
DSP-API v1: has been removed after a long period of deprecation.
+
+
There is also an Admin API for administrating DSP projects.
+
Error Handling
+
The error-handling design has these aims:
+
+
Simplify the error-handling code in actors as much as possible.
+
Produce error messages that clearly indicate the context in which
+ the error occurred (i.e. what the application was trying to do).
+
Ensure that clients receive an appropriate error message when an
+ error occurs.
+
Ensure that ask requests are properly terminated with an
+ akka.actor.Status.Failure message in the event of an error,
+ without which they will simply time out (see
+ Ask: Send and Receive Future).
+
When an actor encounters an error that isn't the client's fault (e.g.
+ a triplestore failure), log it, but don't do this with errors caused
+ by bad input.
+
When logging errors, include the full JVM stack trace.
+
+
A hierarchy of exception classes is defined in Exceptions.scala,
+representing different sorts of errors that could occur. The hierarchy
+has two main branches:
+
+
RequestRejectedException, an abstract class for errors that are
+ the client's fault. These errors are not logged.
+
InternalServerException, an abstract class for errors that are not
+ the client's fault. These errors are logged.
+
+
Exception classes in this hierarchy can be defined to include a wrapped
+cause exception. When an exception is logged, its stack trace will be
+logged along with the stack trace of its cause. It is therefore
+recommended that low-level code should catch low-level exceptions, and
+wrap them in one of our higher-level exceptions, in order to clarify the
+context in which the error occurred.
+
To simplify error-handling in responders, a utility method called
+future2Message is provided in ActorUtils. It is intended to be used
+in an actor's receive method to respond to messages in the ask
+pattern. If the responder's computation is successful, it is sent to the
+requesting actor as a response to the ask. If the computation fails,
+the exception representing the failure is wrapped in a Status.Failure,
+which is sent as a response to the ask. If the error is a subclass of
+RequestRejectedException, only the sender is notified of the error;
+otherwise, the error is also logged and rethrown (so that the
+KnoraExceptionHandler can handle the exception).
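The failure branch of this behaviour can be sketched as follows (a Python illustration of the rules described above, not the actual Scala code in `future2Message`):

```python
class RequestRejectedException(Exception):
    """An error that is the client's fault; not logged."""

class InternalServerException(Exception):
    """An error that is not the client's fault; logged."""

def on_failure(error, notify_sender, log):
    # The sender always receives the failure (as a Status.Failure).
    notify_sender(error)
    # Internal errors are additionally logged and rethrown, so a
    # higher-level handler can deal with them; client errors are not.
    if not isinstance(error, RequestRejectedException):
        log(error)
        raise error
```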
+
In many cases, we transform data from the triplestore into a Map
+object. To simplify checking for required values in these collections,
+the class ErrorHandlingMap is provided. You can wrap any Map in an
+ErrorHandlingMap. You must provide a function that will generate an
+error message when a required value is missing, and optionally a
+function that throws a particular exception. Rows of SPARQL query
+results are already returned in ErrorHandlingMap objects.
+
If you want to add a new exception class, see the comments in
+Exceptions.scala for instructions.
+
Transformation of Exception to Client Responses
+
The org.knora.webapi.KnoraExceptionHandler is brought implicitly into
scope by pekko-http, where it is registered and used to transform all
KnoraExceptions into HttpResponses. This handler handles only
exceptions thrown inside routes, not inside actors. However, the design
of reply-message passing from actors (using future2Message) ensures
that any exception thrown inside an actor reaches the route, where it
is handled.
+
API Routing
+
The API routes in the routing package are defined using the DSL
+provided by the
+pekko-http
+library. A routing function has to do the following:
+
+
Authenticate the client.
+
Figure out what the client is asking for.
+
Construct an appropriate request message and send it to the appropriate responder.
+
Return a result to the client.
+
+
To simplify the coding of routing functions, they are contained in
+objects that extend org.knora.webapi.routing.Authenticator. Each
+routing function performs the following operations:
+
+
Authenticator.getUserADM is called to authenticate the user.
+
The request parameters are interpreted and validated, and a request
+ message is constructed to send to the responder. If the request is
+ invalid, BadRequestException is thrown. If the request message is
+ requesting an update operation, it must include a UUID generated by
+ UUID.randomUUID, so the responder can obtain a write lock on the
+ resource being updated.
+
+
The routing function then passes the message to a function in an API-specific
+routing utility: RouteUtilV2 or RouteUtilADM.
+This utility function sends the message to ResponderManager (which
+forwards it to the relevant responder), returns a response to the client
+in the appropriate format, and handles any errors.
+
Logging
+
Logging in DSP-API is configurable through logback.xml, allowing
fine-grained control over which classes and objects are logged, and at what level.
+
The Akka Actors use Akka Logging
+while logging inside plain Scala Objects and Classes is done through
+Scala Logging.
DSP provides an API for parsing and formatting RDF data and
+for working with RDF graphs. This allows DSP developers to use a single,
+idiomatic Scala API as a façade for a Java RDF library.
+
Overview
+
The API is in the package org.knora.webapi.messages.util.rdf. It includes:
+
+
+
RdfModel, which represents a set of RDF graphs (a default graph and/or one or more named graphs).
+ A model can be constructed from scratch, modified, and searched.
+
+
+
RdfNode and its subclasses, which represent RDF nodes (IRIs, blank nodes, and literals).
+
+
+
Statement, which represents a triple or quad.
+
+
+
RdfNodeFactory, which creates nodes and statements.
+
+
+
RdfModelFactory, which creates empty RDF models.
+
+
+
RdfFormatUtil, which parses and formats RDF models.
+
+
+
JsonLDUtil, which provides specialised functionality for working
+ with RDF in JSON-LD format, and for converting between RDF models
+ and JSON-LD documents. RdfFormatUtil uses JsonLDUtil when appropriate.
+
+
+
ShaclValidator, which validates RDF models using SHACL shapes.
+
+
+
To work with RDF models, start with RdfFeatureFactory, which returns instances
+of RdfNodeFactory, RdfModelFactory, RdfFormatUtil, and ShaclValidator.
+JsonLDUtil does not need a feature factory.
+
To iterate efficiently over the statements in an RdfModel, use its iterator method.
+An RdfModel cannot be modified while you are iterating over it.
+If you are iterating to look for statements to modify, you can
+collect a Set of statements to remove and a Set of statements
+to add, and perform these update operations after you have finished
+the iteration.
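The collect-then-apply pattern can be sketched as follows. A mutable set of (subject, predicate, object) triples stands in for an RdfModel here (hypothetical; the real iterator yields Statement objects):

```scala
import scala.collection.mutable

// A mutable set of triples plays the role of an RdfModel in this sketch.
val model: mutable.Set[(String, String, String)] = mutable.Set(
  ("ex:book", "ex:title", "Old title"),
  ("ex:book", "ex:pages", "100")
)

// Collect the changes while iterating, and apply them only afterwards,
// so the model is never modified during iteration.
val toRemove = mutable.Set.empty[(String, String, String)]
val toAdd    = mutable.Set.empty[(String, String, String)]

for (stmt <- model if stmt._2 == "ex:title") {
  toRemove += stmt
  toAdd    += ((stmt._1, stmt._2, "New title"))
}

model --= toRemove
model ++= toAdd
```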
+
RDF stream processing
+
To read or write a large amount of RDF data without generating a large string
+object, you can use the stream processing methods in RdfFormatUtil.
+
To parse an InputStream to an RdfModel, use inputStreamToRdfModel.
+To format an RdfModel to an OutputStream, use rdfModelToOutputStream.
+
To parse RDF data from an InputStream and process it one statement at a time,
+you can write a class that implements the RdfStreamProcessor trait, and
+use it with the RdfFormatUtil.parseWithStreamProcessor method.
+Your RdfStreamProcessor can also send one statement at a time to a
+formatting stream processor, which knows how to write RDF to an OutputStream
+in a particular format. Use RdfFormatUtil.makeFormattingStreamProcessor to
+construct one of these.
+
SPARQL queries
+
In tests, it can be useful to run SPARQL queries to check the content of
+an RdfModel. To do this, use the RdfModel.asRepository method, which
+returns an RdfRepository that can run SELECT queries.
+
The configuration of the default graph depends on which underlying
+RDF library is used. If you are querying data in named graphs, use FROM
+or quad patterns rather than the default graph.
+
SHACL validation
+
On startup, graphs of SHACL shapes are loaded from Turtle files in a directory specified
+by app.shacl.shapes-dir in application.conf, and in subdirectories of
+that directory. To validate the default graph of an RdfModel using a graph of
+SHACL shapes, call ShaclValidator.validate, specifying the relative path of the
+Turtle file containing the graph of shapes.
+
Implementations
+
+
+
The Jena-based implementation, in package org.knora.webapi.messages.util.rdf.jenaimpl.
+
+
+
The RDF4J-based implementation, in package org.knora.webapi.messages.util.rdf.rdf4jimpl.
Support for the GraphDB and embedded Jena TDB triplestores has been deprecated
+since v20.1.1 of DSP-API.
+
The store module houses the different types of data stores supported by
+Knora. At the moment, only triplestores and IIIF servers (Sipi) are supported.
+The triplestore support is implemented in the
+org.knora.webapi.store.triplestore package and the IIIF server support in the
+org.knora.webapi.store.iiif package.
+
Lifecycle
+
At the top level, the store package houses the StoreManager actor,
+which is started when Knora starts. The StoreManager then starts the
+TriplestoreManager and IIIFManager, each of which in turn starts its own
+actor implementation.
+
Triplestores
+
Currently, the only supported triplestore is Apache Jena Fuseki, an HTTP-based triplestore.
+
HTTP-based triplestore support is implemented in the org.knora.webapi.triplestore.http package.
+
An HTTP-based triplestore is one that is accessed remotely over the HTTP
+protocol. HttpTriplestoreConnector supports the open source triplestore Apache Jena Fuseki.
+
IIIF Servers
+
Currently, only support for Sipi is implemented, in org.knora.webapi.store.iiif.SipiConnector.
Users must be able to edit the same data concurrently.
+
Each update must be atomic and leave the database in a consistent,
+meaningful state, respecting ontology constraints and permissions.
+
The application must not use any sort of long-lived locks, because they
+tend to hinder concurrent edits, and it is difficult to ensure that they
+are released when they are no longer needed. Instead, if a user requests
+an update based on outdated information (because another user has just
+changed something, and the first user has not found out yet), the update
+must not be performed, and the application must notify the user who
+requested it, suggesting that the user should check the relevant data
+and try again if necessary. (We may eventually provide functionality to
+help users merge edits in such a situation. The application can also
+encourage users to coordinate with one another when they are working on
+the same data, and may eventually provide functionality to facilitate
+this coordination.)
+
We can assume that each SPARQL update operation will run in its own
+database transaction with an isolation level of 'read committed'.
+We cannot assume that it is possible to run more than one SPARQL update
+in a single database transaction. (The SPARQL 1.1 Protocol does not provide a
+way to do this; currently it can be done only by embedding the triplestore in
+the application and using a vendor-specific API, which we cannot require in
+Knora.)
+
Permissions
+
To create a new value (as opposed to a new version of an existing
+value), the user must have permission to modify the containing resource.
+
To create a new version of an existing value, the user needs only to
+have permission to modify the current version of the
+value; no permissions on the resource are needed.
+
Since changing a link requires deleting the old link and creating a new
+one (as described in Linking), a user wishing
+to change a link must have modify permission on both the containing
+resource and the knora-base:LinkValue for the existing link.
+
When a new resource or value is created, it can be given default permissions
+specified in the project's admin data, or (only in API v2) custom permissions
+can be specified.
+
Ontology Constraints
+
Knora must not allow an update that would violate an ontology
+constraint.
+
When creating a new value (as opposed to adding a new version of an
+existing value), Knora must not allow the update if the containing
+resource's OWL class does not contain a cardinality restriction for the
+submitted property, or if the new value would violate the cardinality
+restriction.
+
It must also not allow the update if the type of the submitted value
+does not match the knora-base:objectClassConstraint of the property,
+or if the property has no knora-base:objectClassConstraint. In the
+case of a property that points to a resource, Knora must ensure that the
+target resource belongs to the OWL class specified in the property's
+knora-base:objectClassConstraint, or to a subclass of that class.
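The target-class check can be sketched as a walk up the subclass hierarchy. The class names and the subclass map below are illustrative sample data, not Knora's real ontology cache:

```scala
// Hypothetical sketch: the target resource's class must be the property's
// objectClassConstraint or one of its subclasses.
val superClassesOf: Map[String, Set[String]] = Map(
  "incunabula:page" -> Set("knora-base:StillImageRepresentation"),
  "knora-base:StillImageRepresentation" -> Set("knora-base:Resource")
)

// Recursively check whether cls is candidateSuper or a subclass of it.
def isSubClassOf(cls: String, candidateSuper: String): Boolean =
  cls == candidateSuper ||
    superClassesOf.getOrElse(cls, Set.empty).exists(isSubClassOf(_, candidateSuper))

def satisfiesObjectClassConstraint(targetClass: String, constraint: String): Boolean =
  isSubClassOf(targetClass, constraint)
```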
+
Duplicate and Redundant Values
+
When creating a new value, or changing an existing value, Knora checks
+whether the submitted value would duplicate an existing value for the
+same property in the resource. The definition of 'duplicate' depends on
+the type of value; it does not necessarily mean that the two values are
+strictly equal. For example, if two text values contain the same Unicode
+string, they are considered duplicates, even if they have different
+Standoff markup. If resource R has property P with value V1, and
+V1 is a duplicate of V2, the API server must not add another
+instance of property P with value V2. However, if the requesting
+user does not have permission to see V2, the duplicate is allowed,
+because forbidding it would reveal the contents of V2 to the user.
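The interaction between the duplicate check and permissions can be sketched as follows; the types and visibility model are hypothetical simplifications of Knora's permission system:

```scala
// Hypothetical sketch of the duplicate rule: a submitted value is rejected as
// a duplicate only if it duplicates an existing value that the requesting
// user has permission to see.
final case class ExistingValue(content: String, visibleTo: Set[String])

def rejectedAsDuplicate(submitted: String,
                        existing: Seq[ExistingValue],
                        requestingUser: String): Boolean =
  existing.exists(v => v.content == submitted && v.visibleTo.contains(requestingUser))
```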
+
When creating a new version of a value, Knora also checks whether the
+new version is redundant, given the existing value. The definition of
+'redundant' can in principle depend on the type of value, but in practice
+it means that the values are strictly equal: any change, however trivial,
+is allowed.
+
Versioning
+
Each Knora value (i.e. something belonging to an OWL class derived from
+knora-base:Value) is versioned. This means that once created, a value
+is never modified. Instead, 'changing' a value means creating a new
+version of the value --- actually a new value --- that points to the
+previous version using knora-base:previousValue. The versions of a
+value are a singly-linked list, pointing backwards into the past. When a
+new version of a value is made, the triple that points from the resource
+to the old version (using a subproperty of knora-base:hasValue) is
+removed, and a triple is added to point from the resource to the new
+version. Thus the resource always points only to the current version of
+the value, and the older versions are available only via the current
+version's knora-base:previousValue predicate.
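The singly-linked version history can be sketched like this; the IRIs, contents, and in-memory store are illustrative, not Knora's actual storage:

```scala
// Simplified sketch of value versioning: each version points back to its
// predecessor via previousValue, and the resource refers only to the
// current version.
final case class ValueVersion(iri: String, content: String, previousValue: Option[String])

val valueStore: Map[String, ValueVersion] = Map(
  "http://rdfh.ch/values/v1" -> ValueVersion("http://rdfh.ch/values/v1", "first", None),
  "http://rdfh.ch/values/v2" ->
    ValueVersion("http://rdfh.ch/values/v2", "second", Some("http://rdfh.ch/values/v1"))
)

// Walk the singly-linked version history backwards from the current version.
def history(iri: String): List[ValueVersion] =
  valueStore.get(iri) match {
    case Some(v) => v :: v.previousValue.map(history).getOrElse(Nil)
    case None    => Nil
  }
```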
+
Unlike values, resources (members of OWL classes derived from
+knora-base:Resource) are not versioned. The data that is attached to a
+resource, other than its values, can be modified.
+
Deleting
+
Knora does not actually delete resources or values; it only marks them
+as deleted. Deleted data is normally hidden. All resources and values
+must have the predicate knora-base:isDeleted, whose object is a
+boolean. If a resource or value has been marked as deleted, it has
+knora-base:isDeleted true and has a knora-base:deleteDate. An
+optional knora-base:deleteComment may be added to explain why the
+resource or value has been marked as deleted.
+
Normally, a value is marked as deleted without creating a new version of
+it. However, link values must be treated as a special case. Before a
+LinkValue can be marked as deleted, its reference count must be
+decremented to 0. Therefore, a new version of the LinkValue is made,
+with a reference count of 0, and it is this new version that is marked
+as deleted.
+
Since it is necessary to be able to find out when a resource was
+deleted, it is not possible to undelete a resource. Moreover, to
+simplify the checking of cardinality constraints, and for consistency
+with resources, it is not possible to undelete a value, and no new
+versions of a deleted value can be made. Instead, if desired, a new
+resource or value can be created by copying data from a deleted resource
+or value.
+
Linking
+
Links must be treated differently from other types of values. Knora needs
+to maintain information about the link, including permissions and a
+version history. Since the link does not have a unique IRI of its own,
+Knora uses RDF
+reifications for
+this purpose. Each link between two resources has exactly one
+(non-deleted) knora-base:LinkValue. The resource itself has a
+predicate that points to the LinkValue, using a naming convention in
+which the word Value is appended to the name of the link predicate to
+produce the link value predicate. For example, if a resource
+representing a book has a predicate called hasAuthor that points to
+another resource, it must also have a predicate called hasAuthorValue
+that points to the LinkValue in which information about the link is
+stored. To find a particular LinkValue, one can query it either by
+using its IRI (if known), or by using its rdf:subject,
+rdf:predicate, and rdf:object (and excluding link values that are
+marked as deleted).
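The naming convention for link value predicates is mechanical and can be expressed as a one-line function:

```scala
// The convention described above: the link value predicate is formed by
// appending "Value" to the name of the link predicate.
def linkValuePredicate(linkPredicate: String): String = linkPredicate + "Value"
```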
+
Like other values, link values are versioned. The link value predicate
+always points from the resource to the current version of the link
+value, and previous versions are available only via the current
+version's knora-base:previousValue predicate. Deleting a link means
+deleting the triple that links the two resources, and making a new
+version of the link value, marked with knora-base:isDeleted. A triple
+then points from the resource to this new, deleted version (using the
+link value property).
+
The API allows a link to be 'changed' so that it points to a different
+target resource. This is implemented as follows: the existing triple
+connecting the two resources is removed, and a new triple is added using
+the same link property and pointing to the new target resource. A new
+version of the old link's LinkValue is made, marked with
+knora-base:isDeleted. A new LinkValue is made for the new link. The
+new LinkValue has no connection to the old one.
+
When a resource contains knora-base:TextValue with Standoff markup
+that includes a reference to another resource, this reference is
+materialised as a direct link between the two resources, to make it
+easier to query. A special link property,
+knora-base:hasStandoffLinkTo, is used for this purpose. The
+corresponding link value property, knora-base:hasStandoffLinkToValue,
+points to a LinkValue. This LinkValue contains a reference count,
+indicated by knora-base:valueHasRefCount, that represents the number
+of text values in the containing resource that include one or more
+Standoff references to the specified target resource. Each time this
+number changes, a new version of this LinkValue is made. When the
+reference count reaches zero, the triple with
+knora-base:hasStandoffLinkTo is removed, and a new version of the
+LinkValue is made and marked with knora-base:isDeleted. If the same
+resource reference later appears again in a text value, a new triple is
+added using knora-base:hasStandoffLinkTo, and a new LinkValue is
+made, with no connection to the old one.
+
For consistency, every LinkValue contains a reference count. If the
+link property is not knora-base:hasStandoffLinkTo, the reference count
+will always be either 1 (if the link exists) or 0 (if it has been
+deleted, in which case the link value will also be marked with
+knora-base:isDeleted).
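The reference-count rule can be sketched as a small state transition; the case class is a hypothetical simplification of a LinkValue:

```scala
// Hypothetical sketch: the direct hasStandoffLinkTo triple exists exactly
// while the count is positive, and the LinkValue version created when the
// count reaches zero is marked deleted.
final case class LinkValueState(refCount: Int, isDeleted: Boolean)

def afterReferenceRemoved(lv: LinkValueState): LinkValueState = {
  require(lv.refCount > 0, "reference count is already zero")
  val newCount = lv.refCount - 1
  LinkValueState(newCount, isDeleted = newCount == 0)
}
```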
+
When a LinkValue is created for a standoff resource reference, it is
+given the same permissions as the text value containing the reference.
+
Design
+
Responsibilities of Responders
+
The resources responder (ResourcesResponderV2) has sole responsibility for generating SPARQL to
+create and update resources, and the values responder (ValuesResponderV2) has sole responsibility for generating
+SPARQL to create and update values. When a new resource is created with its values, the values responder
+generates SPARQL statements that can be included in the INSERT
+clause of a SPARQL update to create the values, and
+the resources responder adds these statements to the SPARQL update that
+creates the resource. This ensures that the resource and its values are
+created in a single SPARQL update operation, and hence in a single
+triplestore transaction.
+
Application-level Locking
+
The 'read committed' isolation level cannot prevent a scenario where two
+users want to add the same data at the same time. It is possible that
+both requests would do pre-update checks and simultaneously find that it
+is OK to add the data, and that both updates would then succeed,
+inserting redundant data and possibly violating ontology constraints.
+Therefore, Knora uses short-lived, application-level write locks on
+resources, to ensure that only one request at a time can update a given
+resource. Before each update, the application acquires a lock on a resource.
+To prevent deadlocks, Knora locks only one resource per API operation.
+It then does the pre-update checks and the update, then releases the
+lock. The lock implementation (in IriLocker) requires each API
+request message to include a random UUID, which is generated in the
+API Routing package. Using
+application-level locks allows us to do pre-update checks in their own
+transactions, and finally to do the SPARQL update in its own
+transaction.
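The locking discipline can be sketched as follows. This is a hypothetical, heavily simplified stand-in for IriLocker (the real implementation handles re-entrancy, expiry, and queuing): one lock per resource IRI, held by one request's UUID, released when the task completes.

```scala
import java.util.UUID
import scala.collection.mutable

// Hypothetical sketch of application-level locking: each lock on a resource
// IRI is held by one API request's UUID and released when the task
// (pre-update checks plus update) completes.
object SimpleIriLocker {
  private val locks = mutable.Map.empty[String, UUID]

  def runWithLock[T](apiRequestID: UUID, resourceIri: String)(task: => T): T = {
    synchronized {
      if (locks.contains(resourceIri))
        throw new IllegalStateException(s"$resourceIri is locked by another request")
      locks(resourceIri) = apiRequestID
    }
    try task
    finally synchronized { locks.remove(resourceIri) }
  }
}
```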
+
Ensuring Data Consistency
+
Knora enforces consistency constraints using three redundant mechanisms:
+
+
By doing pre-update checks using SPARQL SELECT queries and cached
+ ontology data.
+
By doing checks in the WHERE clauses of SPARQL updates.
+
Deprecated: By using GraphDB's built-in consistency checker (see
+ Consistency Checking).
+
+
We take the view that redundant consistency checks are a good thing.
+
Pre-update checks are SPARQL SELECT queries that are executed while
+holding an application-level lock on the resource to be updated. These
+checks should work with any triplestore, and can return helpful,
+Knora-specific error messages to the client if the request would violate
+a consistency constraint.
+
However, the SPARQL update itself is our only chance to do pre-update
+checks in the same transaction that will perform the update. The design
+of the SPARQL 1.1 Update
+standard makes it possible to ensure that if certain conditions are not
+met, the update will not be performed. In our SPARQL update code, each
+update contains a WHERE clause, possibly a DELETE clause, and an
+INSERT clause. The WHERE clause is executed first. It performs
+consistency checks and provides values for variables that are used in
+the DELETE and/or INSERT clauses. In our updates, if the
+expectations of the WHERE clause are not met (e.g. because the data to
+be updated does not exist), the WHERE clause should return no results;
+as a result, the update will not be performed.
+
Regardless of whether the update changes the contents of the
+triplestore, it returns nothing. If the update did nothing because the
+conditions of the WHERE clause were not met, the only way to find out is
+to do a SELECT afterwards. Moreover, in this case, there is no
+straightforward way to find out which condition was not met. This is
+one reason why Knora does pre-update checks using separate SELECT
+queries and/or cached ontology data, before performing the update.
+This makes it possible to return specific error messages to the user to
+indicate why an update cannot be performed.
+
Moreover, while some checks are easy to do in a SPARQL update, others
+are difficult, impractical, or impossible. Easy checks include checking
+whether a resource or value exists or is deleted, and checking that the
+knora-base:objectClassConstraint of a predicate matches the rdf:type
+of its intended object. Cardinality checks are not very difficult, but
+they perform poorly on Jena. Knora does not do permission checks in
+SPARQL, because its permission-checking algorithm is too complex to be
+implemented in SPARQL. For this reason, Knora's check for duplicate
+values cannot be done in SPARQL update code, because it relies on
+permission checks.
+
In a bulk import operation, which can create a large number of resources
+in a single SPARQL update, a WHERE clause can become very expensive
+for the triplestore, in terms of memory as well as execution time.
+Moreover, RDF4J (and hence GraphDB) uses a recursive algorithm to parse
+SPARQL queries with WHERE clauses, so the size of a WHERE clause is
+limited by the stack space available to the Java Virtual Machine.
+Therefore, in bulk import operations, Knora uses INSERT DATA, which
+does not involve a WHERE clause. Bulk imports thus rely on the first and
+third checks listed above.
+
SPARQL Update Examples
+
The following sample SPARQL update code is simpler than what Knora
+actually does. It is included here to illustrate the way Knora's SPARQL
+updates are structured and how concurrent updates are handled.
+
Finding a value IRI in a value's version history
+
We will need this query below. If a value is present in a resource
+property's version history, the query returns everything known about the
+value, or nothing otherwise:
The update request must contain the IRI of the most recent version of
+the value (http://rdfh.ch/c5058f3a/values/c3295339). If this is
+not in fact the most recent version (because someone else has done an
+update), this operation will do nothing (because the WHERE clause will
+return no rows). To find out whether the update succeeded, the
+application will then need to do a SELECT query using the query in
+Finding a value IRI in a value's version history.
+In the case of concurrent updates, there are two possibilities:
+
+
Users A and B are looking at version 1. User A submits an update and
+ it succeeds, creating version 2, which user A verifies using a
+ SELECT. User B then submits an update to version 1 but it fails,
+ because version 1 is no longer the latest version. User B's SELECT
+ will find that user B's new value IRI is absent from the value's
+ version history.
+
Users A and B are looking at version 1. User A submits an update and
+ it succeeds, creating version 2. Before User A has time to do a
+ SELECT, user B reads the new value and updates it again. Both users
+ then do a SELECT, and find that both their new value IRIs are
+ present in the value's version history.
This assumes that we know the current version of the value. If the
+version we have is not actually the current version, this query will
+return no rows.
Go to Docker preferences and increase the memory allocation.
+
+ The stack's memory usage is limited to ~20GB, though it should only use that much during heavy workloads. You should
+ be good to go in any case if you allocate 22GB or more.
+
+
Running the stack
+
With Docker installed and configured,
+
+
Run the following:
+
+
make init-db-test
+
+
+to create the knora-test repository and initialize it by loading some test data into the triplestore (Fuseki).
+
+
Start the entire knora-stack (fuseki (db), sipi, api, salsah1) with the following command:
+
+
make stack-up
+
+
Note: To delete the existing containers and for a clean start, before creating the knora-test repository explained
+in the first step above, run the following:
+
make stack-down-delete-volumes
+
+
This stops the knora-stack and deletes any created volumes (deletes the database!).
+
To only shut down the Knora-Stack without deleting the containers:
+
make stack-down
+
+
To restart the knora-api use the following command:
+
make stack-restart-api
+
+
If a change is made to knora-api code, only its image needs to be rebuilt. In that case, use
+
make stack-up-fast
+
+
+which starts the knora-stack but skips rebuilding most of the images (only the api image is rebuilt).
+
To work on Metadata, use
+
make stack-up-with-metadata
+
+
+which will add three example metadata sets to the projects anything, images and dokubib.
+This data can then be consumed
+from localhost:3333/v2/metadata/http%3A%2F%2Frdfh.ch%2Fprojects%2F0001, localhost:3333/v2/metadata/http%3A%2F%2Frdfh.ch%2Fprojects%2F00FF
+and localhost:3333/v2/metadata/http%3A%2F%2Frdfh.ch%2Fprojects%2F0804.
+
Managing Containers in Docker Dashboard
+
Docker Desktop is installed on your computer during the installation of Docker; it enables easy management of Docker
+containers and access to Docker Hub. To manage your Docker containers, Docker Desktop provides a dashboard.
+
+
In the Docker dashboard, you can see all the running containers and stop, start, restart, or completely delete them. For
+example, when you start the knora-stack as explained above, you will see the following in the Docker dashboard:
+
+
Access the logs
+
To read the log output of any container (db, api, etc.), click on the container in the dashboard and choose
+logs. The example below shows the logs of the database (db) container, which include the last SPARQL query sent to the
+triplestore.
+
+
Note that you can also print the log output directly from the command line. For example, the same logs of the
+database container can be printed using the following command:
+
make stack-logs-db
+
+
Similarly, the logs of the other containers can be printed by running make with stack-logs-api
+or stack-logs-sipi.
+These commands print and follow the logs; to print the logs without following, use the
+-no-follow version of the commands, for example:
+
make stack-logs-db-no-follow
+
+
Lastly, to print out the entire logs of the running knora-stack, use
+
make stack-logs
+
+
With the Docker plugin installed, you can attach a terminal to the docker container within VS Code. This will stream the
+docker logs to the terminal window of the editor.
+
+
The docker plugin also allows for a number of other useful features, like inspecting the container's file system or
+attaching a shell to the container.
+
Running the automated tests
+
To run all test targets, use the following in the command line:
+
make test-all
+
+
To run a single test from the command line, for example SearchV2R2RSpec,
+run the following:
+
sbt "webapi/testOnly *SearchV2R2RSpec*"
+
+
Note: to run tests, the api container must be stopped first!
+
Build and Publish Documentation
+
First, you need to install the requirements through:
+
make docs-install-requirements
+
+
Then, to build docs into the local site folder, run the following command:
+
make docs-build
+
+
At this point, you can serve the docs to view them locally using
+
make docs-serve
+
+
Lastly, to build and publish docs to Github Pages, use
+
make docs-publish
+
+
Build and Publish Docker Images
+
To build and publish all Docker images locally
+
make docker-build
+
+
To publish all Docker images to Dockerhub
+
make docker-publish
+
+
Continuous Integration
+
For continuous integration testing, we use GitHub Actions. Every commit
+pushed to the Git repository, and every pull request, triggers a build.
+Additionally, GitHub shows a small checkmark beside every commit,
+signaling the status of the build (successful, unsuccessful, ongoing).
+
The build that is executed on GitHub Actions is defined in the .github/workflows/*.yml files.
+
Webapi Server Startup-Flags
+
The Webapi-Server can be started with a number of flags.
+
loadDemoData - Flag
+
When the webapi-server is started with the loadDemoData flag, any existing
+data in the triplestore is removed at startup, and the data configured in
+application.conf under the app.triplestore.rdf-data key is loaded into the
+triplestore.
Fuseki - the triplestore supplied in the DSP-API GitHub repository.
+
Sipi by building from
+ source or using the docker
+ image
+
+
Knora Github Repository
+
git clone https://github.com/dasch-swiss/dsp-api
+
+
Triplestore
+
A number of triplestore implementations are available, including free
+software as well as
+proprietary options. DSP-API is designed to work with any
+standards-compliant triplestore. It is primarily tested with Apache Jena Fuseki.
+
Sipi
+
Build Sipi Docker Image
+
The Sipi Docker image needs to be built by hand, as it requires the
+Kakadu distribution.
+
To build the image and push it to Docker Hub, follow these steps:
Pushing the image to Docker Hub requires prior authentication with
+$ docker login. The user needs to be registered on hub.docker.com.
+Also, the user needs to be allowed to push to the dblabbasel
+organisation.
+
Running Sipi
+
To use the Docker image stored locally or on Docker Hub, type:
+
docker run --name sipi -d -p 1024:1024 daschswiss/sipi
+
+
This will create and start a Docker container with the daschswiss/sipi
+image in the background. The default behaviour is to start Sipi by
+calling the following command:
Within the build.sbt file, the Dependencies package is referenced, which is located in project/Dependencies.scala.
+All third party dependencies need to be declared there.
+
Referencing a third party library
+
There is an object Dependencies where each library should be declared in a val.
The first string corresponds to the group/organization in the library's Maven artefact,
+the second string corresponds to the artefact ID, and the third string defines the version.
+
The strings are combined with the % or %% operators, the latter appending the project's Scala binary version to the artefact ID.
+
It is also possible to use variables in these definitions, e.g. if multiple dependencies share a version number:
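A hypothetical sketch of such declarations (library names and version numbers here are purely illustrative) might look like this:

```scala
// Hypothetical declarations in project/Dependencies.scala: two Akka modules
// sharing one version variable, and a plain Java library declared with %.
val akkaVersion = "2.6.17"

val akkaActor  = "com.typesafe.akka" %% "akka-actor"  % akkaVersion
val akkaStream = "com.typesafe.akka" %% "akka-stream" % akkaVersion
val jodaTime   = "joda-time"          % "joda-time"   % "2.10.13"
```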
Assigning the dependencies to a specific subproject
+
For each SBT project, there is one Seq in the Dependencies object.
+In order to make use of the declared dependencies, they must be referred to in the Seq of the respective subproject.
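A minimal sketch, assuming a webapi subproject and illustrative dependency names:

```scala
// Hypothetical: the Seq of dependencies for a webapi subproject, referring
// to vals declared in the Dependencies object (names are illustrative).
val akkaActor = "com.typesafe.akka" %% "akka-actor" % "2.6.17"
val scalaTest = "org.scalatest"     %% "scalatest"  % "3.2.10"

val webapiLibraryDependencies = Seq(
  akkaActor,
  scalaTest % Test // test-only dependency
)
```

The subproject's settings in build.sbt then add this Seq to its libraryDependencies.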
When a new version of Knora requires an existing repository to be updated,
+ do this automatically when Knora starts, if possible.
+
+
+
Make the update process as fast as possible, with some indication of progress
+ as it runs.
+
+
+
Design
+
As explained in
+Knora Ontology Versions,
+the knora-base ontology contains a version string to ensure compatibility
+between a repository and a given version of Knora. The same version string
+is therefore hard-coded in the Knora source code, in the string constant
+org.knora.webapi.KnoraBaseVersion. For new pull requests, the format of this string
+is knora-base vN, where N is an integer that is incremented for
+each version.
+
During Knora's startup process, ApplicationActor sends an UpdateRepositoryRequest
+message to the StoreManager, which forwards it to TriplestoreManager, which delegates
+it to org.knora.webapi.store.triplestore.upgrade.RepositoryUpdater.
+
RepositoryUpdater does the following procedure:
+
+
+
Check the knora-base version string in the repository.
+
+
+
Consult org.knora.webapi.store.triplestore.upgrade.RepositoryUpdatePlan to see which
+ transformations are needed.
+
+
+
Download the entire repository from the triplestore into an N-Quads file.
+
+
+
Read the N-Quads file into an RdfModel.
+
+
+
Update the RdfModel by running the necessary transformations, and replacing the
+ built-in DSP ontologies with the current ones.
+
+
+
Save the RdfModel to a new N-Quads file.
+
+
+
Empty the repository in the triplestore.
+
+
+
Upload the transformed repository file to the triplestore.
+
+
+
To update the RdfModel, RepositoryUpdater runs a sequence of upgrade plugins, each of which
+is a class in org.knora.webapi.store.triplestore.upgrade.plugins and is registered
+in RepositoryUpdatePlan.
+
Design Rationale
+
We tried and rejected several other designs:
+
+
+
Running SPARQL updates in the triplestore: too slow, and no way to report
+ progress during the update.
+
+
+
Downloading the repository and transforming it in Python using
+ rdflib: too slow.
+
+
+
Downloading the repository and transforming it in C++ using
+ Redland: also too slow.
+
+
+
The Scala implementation is the fastest by far.
+
The whole repository is uploaded in a single transaction, rather than uploading one named
+graph at a time, because GraphDB's consistency checker can enforce dependencies between
+named graphs.
+
Adding an Upgrade Plugin
+
Each time a pull request introduces changes that are not compatible
+with existing data, the following must happen:
+
+
+
The knora-base version number must be incremented in knora-base.ttl and
+ in the string constant org.knora.webapi.KnoraBaseVersion.
+
+
+
A plugin must be added in the package org.knora.webapi.store.triplestore.upgrade.plugins,
+ to transform existing repositories so that they are compatible with the code changes
+ introduced in the pull request. Each new plugin must be registered
+ by adding it to the sequence returned by RepositoryUpdatePlan.makePluginsForVersions.
+
+
+
The order of version numbers (and the plugins) must correspond to the order in which the
+pull requests are merged.
+
An upgrade plugin is a Scala class that extends UpgradePlugin. The name of the plugin
+class should refer to the pull request that made the transformation necessary,
+using the format UpgradePluginPRNNNN, where NNNN is the number of the pull request.
+
A plugin's transform method takes an RdfModel (a mutable object representing
+the repository) and modifies it as needed.
+
Before transforming the data, a plugin can check whether a required manual transformation
+has been carried out. If the requirement is not met, the plugin can throw
+InconsistentRepositoryDataException to abort the upgrade process.
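The shape of a plugin can be sketched as follows. This is a simplified stand-in: the real UpgradePlugin.transform receives an RdfModel, whereas here a mutable set of (subject, predicate, object) triples plays that role, and the plugin and predicate names are hypothetical.

```scala
import scala.collection.mutable

// Simplified stand-in for the UpgradePlugin trait.
trait SimpleUpgradePlugin {
  def transform(model: mutable.Set[(String, String, String)]): Unit
}

// e.g. a plugin renaming a predicate that a (hypothetical) pull request changed
object UpgradePluginExample extends SimpleUpgradePlugin {
  def transform(model: mutable.Set[(String, String, String)]): Unit = {
    val outdated = model.filter(_._2 == "ex:oldPredicate").toSet
    model --= outdated
    model ++= outdated.map { case (s, _, o) => (s, "ex:newPredicate", o) }
  }
}
```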
+
Testing Update Plugins
+
Each plugin should have a unit test that extends UpgradePluginSpec. A typical
+test loads a file containing RDF test data into a RdfModel, runs the plugin,
+makes an RdfRepository containing the transformed RdfModel, and uses
+SPARQL to check the result.
Setup Visual Studio Code for development of DSP-API
+
To have full functionality, the Scala Metals plugin should be installed.
+
Additionally, a number of plugins can be installed for convenience, but are not required.
+Those include but are by no means limited to:
+
+
Docker - to attach to running docker containers
+
Stardog RDF grammar - TTL syntax highlighting
+
Lua
+
REST client
+
...
+
+
Formatter
+
As a formatter, we use Scalafmt.
+Metals automatically recognizes the formatting configuration in the .scalafmt.conf file in the root directory.
+VSCode should be configured to format automatically (e.g. on file save).
+
Running Tests
+
The tests can be run through make commands or through SBT.
+The most convenient way to run the tests is through VSCode.
+Metals recognizes ScalaTest suites and lets you run them in the test explorer:
+
+
Or with the setting "metals.testUserInterface": "Code Lenses" directly in the text:
+
+
Debugger
+
It is currently not possible to start the stack in debug mode.
+
Tests can be run in debug mode by running them as described above but choosing debug test instead of test.
Sipi is a high-performance media server written in C++,
+for serving and converting binary media files such as images and video. Sipi can
+efficiently convert between many different formats on demand, preserving
+embedded metadata, and implements the International Image
+Interoperability Framework (IIIF). DSP-API is designed
+to use Sipi for converting and serving media files.
DSP-API and Sipi (Simple Image Presentation Interface) are two
+complementary software projects. Whereas DSP-API deals with data that
+is written to and read from a triplestore (metadata and annotations),
+Sipi takes care of storing, converting and serving image files as well
+as other types of files such as audio, video, or documents (which it
+simply stores and serves without conversion).
+
DSP-API and Sipi stick to a clear division of responsibility regarding
+files: DSP-API knows about the names of files that are attached to
+resources as well as some metadata and is capable of creating the URLs
+for the client to request them from Sipi, but the whole handling of
+files (storing, naming, organization of the internal directory
+structure, format conversions, and serving) is taken care of by Sipi.
+
Adding Files to DSP
+
A file is first uploaded to Sipi, then its metadata is submitted to
+DSP. The implementation of this procedure is described in
+DSP-API and Sipi. Instructions for the client are given in
+Creating File Values.
+
Retrieving Files from Sipi
+
File URLs in API v2
+
In DSP-API v2, image file URLs are provided in IIIF format. In the simple
+ontology schema, a file value is simply
+an IIIF URL that can be used to retrieve the file from Sipi. In the complex schema,
+it is a StillImageFileValue with additional properties that the client can use to construct
+different IIIF URLs, e.g. at different resolutions. See the knora-api ontology for details.
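As a rough illustration, IIIF image URLs follow the pattern
+{scheme}://{server}/{prefix}/{identifier}/{region}/{size}/{rotation}/{quality}.{format};
+the hostname and file identifier below are hypothetical:

```
https://iiif.example.org/0001/incunabula_0000003328.jp2/full/max/0/default.jpg
```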
+
Authentication of Users with Sipi
+
File access is restricted to users who have the permission to view the resource that the file is attached to.
+In order to check whether a user has the permission to view a resource, Sipi needs to know the user's identity.
+The identity is provided by DSP-API in the form of a JWT (JSON Web Token).
+This token can be provided to Sipi in the following ways:
+
+
recommended - The Authorization header of the request as a Bearer type token.
+
deprecated - The value for a token query parameter of the request. This is unsafe, as the token is visible in the
+ URL.
+
deprecated - As a session cookie set by DSP-API. For the session cookie to be sent to Sipi, both the DSP-API
+ and Sipi endpoints need to
+ be under the same domain, e.g., api.example.com and iiif.example.com.
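For instance, the recommended Bearer variant looks like this (hostname,
+path, and token value are placeholders):

```
GET /0001/incunabula_0000003328.jp2/full/max/0/default.jpg HTTP/1.1
Host: iiif.example.com
Authorization: Bearer <jwt>
```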
The Lucene full-text index provided by the triplestore is used to perform full-text searches in DSP.
+
Lucene Query Parser Syntax
+
Full-text searches in DSP are based on Lucene.
+Therefore, full-text searches support the
+Lucene Query Parser Syntax.
+
A full-text search consists of a single word in the simplest case, but could also be composed of several words combined with
+Boolean operators.
+By default, Lucene combines two or more terms separated by space with a logical OR.
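For example (with illustrative search terms):

```
printing press        matches documents containing "printing" OR "press"
printing AND press    matches only documents containing both terms
"printing press"      matches the exact phrase
```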
XML files do not lend themselves to searching and linking. Knora's RDF storage
+is better suited to its goal of facilitating data reuse.
+
If your XML files represent text with markup (e.g. TEI/XML),
+the recommended approach is to allow Knora to store it as
+Standoff/RDF. This will allow both text and
+markup to be searched using Gravsearch. Knora
+can also regenerate, at any time, an XML document that is equivalent to the original one.
+
If you have XML that simply represents structured data (rather than text documents),
+we recommend converting it to Knora resources, which are stored as RDF.
Can a project use classes or properties defined in another project's ontology?
+
DSP-API does not allow this to be done with project-specific ontologies.
+Each project must be free to change its own ontologies, but this is not possible
+if they have been used in ontologies or data created by other projects.
+
However, an ontology can be defined as shared, meaning that it can be used by multiple
+projects, and that its creators promise not to change it in ways that could
+affect other ontologies or data that are based on it. See
+Shared Ontologies for details.
+
Why doesn't DSP-API use rdfs:domain and rdfs:range for consistency checking?
+
DSP-API's consistency checking uses specific properties, which are called
+knora-base:subjectClassConstraint and knora-base:objectClassConstraint in
+the knora-base ontology, and knora-api:subjectType and knora-api:objectType
+in the knora-api ontologies. These properties express restrictions on the
+possible subjects and objects of a property. If a property's subject or object
+does not conform to the specified restrictions, DSP-API considers it an error.
+
In contrast,
+the RDF Schema specification says
+that rdfs:domain and rdfs:range can be used to "infer additional information"
+about the subjects and objects of properties, rather than to enforce restrictions.
+This is, in fact, what RDFS reasoners do in practice. For example, consider these
+statements:
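The statements in question, reconstructed to match the discussion that
+follows (the example and data prefixes are illustrative):

```turtle
example:hasAuthor rdfs:range example:Person .

data:book example:hasAuthor data:oxygen .
```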
To an RDFS reasoner, the first statement means: if something is used as
+the object of example:hasAuthor, we can infer that it's an
+example:Person.
+
The second statement is a mistake; oxygen is not a person. But
+an RDFS reasoner would infer that data:oxygen is actually an
+example:Person, since it is used as the object of
+example:hasAuthor. Queries looking for persons would then get
+data:oxygen in their results, which would be incorrect.
+
Therefore, rdfs:domain and rdfs:range are not suitable for consistency
+checking.
+
DSP-API therefore uses its own properties, along with
+OWL cardinalities, which it interprets according to a "closed world"
+assumption. DSP-API performs its own consistency checks to enforce
+these restrictions. DSP-API repositories can also take advantage of
+triplestore-specific consistency checking mechanisms.
+
The constraint language SHACL may someday
+provide a standard, triplestore-independent way to implement consistency
+checks, if the obstacles to its adoption can be overcome
+(see Diverging views of SHACL).
+For further discussion of these issues, see
+SHACL and OWL Compared.
+
Can a user-created property be an owl:TransitiveProperty?
+
No, because in DSP-API, a resource controls its properties. This basic
+assumption is what allows DSP-API to enforce permissions and transaction
+integrity. The concept of a transitive property would break this assumption.
+
Consider a link property hasLinkToFoo that is defined as an owl:TransitiveProperty,
+and is used to link resource Foo1 to resource Foo2:
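In RDF, this link could be expressed by a triple like the following (the
+data prefix is illustrative):

```turtle
data:Foo1 example:hasLinkToFoo data:Foo2 .
```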
+
+
Suppose that Foo1 and Foo2 are owned by different users, and that
+the owner of Foo2 does not have permission to change Foo1.
+Now suppose that the owner of Foo2 adds a link from Foo2 to Foo3,
+using the transitive property:
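Expressed as an illustrative triple (the data prefix is hypothetical):

```turtle
data:Foo2 example:hasLinkToFoo data:Foo3 .
```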
+
+
Since the property is transitive, a link from Foo1 to Foo3 is now
+inferred. But this should not be allowed, because the owner of Foo2
+does not have permission to add a link to Foo1.
+
Moreover, even if the owner of Foo2 did have that permission, the inferred
+link would not have a knora-base:LinkValue (a reification), which every
+link must have. The LinkValue is what stores metadata about the creator
+of the link, its creation date, its permissions, and so on
+(see LinkValue).
+
Finally, if an update to a resource could modify another
+resource, this would violate DSP-API's model of transaction integrity, in which
+each transaction can modify only one resource
+(see Application-level Locking).
+DSP-API would then be unable to ensure that concurrent transactions do not
+interfere with each other.
+
General
+
Why should I use 0.0.0.0 instead of localhost when running the DSP stack locally?
+
When running locally with the default configuration, if you want authorization cookies
+to be shared between webapi and sipi, then both webapi and sipi must be accessed
+over 0.0.0.0; otherwise, the cookie will not be sent to sipi.
+
+If no authorization cookie sharing is necessary, then both 0.0.0.0 and localhost
+will work.
Generally, DSP-API is designed to be backward compatible.
+Whenever a new major version of DSP-API is released,
+the existing data is migrated to the new version automatically.
+The public REST API is also stable and should remain backward compatible.
+
However, when a feature appears not to be used,
+or if there are urgent technical reasons to change the API,
+we may decide to release breaking changes.
+In these instances, we try to provide a migration guide,
+in case some project or application is affected by the change.
+
If you experience any issues with the migration,
+please contact us via the DaSCH Help Center.
+
Migration Guides
+
+
+
Planned: Removal of knora-base:isSequenceOf and knora-base:hasSequenceBounds
+
If you have used knora-base:isSequenceOf and knora-base:hasSequenceBounds in your data,
+replace them with knora-base:isAudioSegmentOf or knora-base:isVideoSegmentOf respectively,
+together with knora-base:hasSegmentBounds.
+
Note that these replacement properties are only allowed
+on resources of type knora-base:AudioSegment and knora-base:VideoSegment,
+whereas previously knora-base:isSequenceOf could be added to any knora-base:Resource.
+This means that you will also have to change the type of the affected resources
+to knora-base:AudioSegment or knora-base:VideoSegment.
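A sketch of the migration for an audio resource (resource and value IRIs
+are illustrative):

```turtle
# Before (deprecated); the subject could be any knora-base:Resource:
data:part1 a knora-base:Resource ;
    knora-base:isSequenceOf data:audio1 ;
    knora-base:hasSequenceBounds data:bounds1 .

# After; the subject must be a segment class:
data:part1 a knora-base:AudioSegment ;
    knora-base:isAudioSegmentOf data:audio1 ;
    knora-base:hasSegmentBounds data:bounds1 .
```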
+
Deprecation Warnings
+
+
+
isSequenceOf and hasSequenceBounds
+
With the introduction of the new Segment concept in DSP-API v30.11.0,
+the previously existing properties knora-base:isSequenceOf and knora-base:hasSequenceBounds
+have been deprecated and will be removed in a future version.
+
If you are creating a new ontology,
+please do not use these properties anymore.
+Instead, use the newly introduced Segment type.
g;return r.subscribe(({action:o,reveal:n})=>{e.toggleAttribute("open",o==="open"),n&&e.scrollIntoView()}),Aa(e,t).pipe(y(o=>r.next(o)),_(()=>r.complete()),m(o=>F({ref:e},o)))})}var Ln=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:#0000}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel rect,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel rect{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color);stroke-width:.05rem}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}g #flowchart-circleEnd,g #flowchart-circleStart,g #flowchart-crossEnd,g #flowchart-crossStart,g #flowchart-pointEnd,g #flowchart-pointStart{stroke:none}g.classGroup line,g.classGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}defs 
#classDiagram-compositionEnd,defs #classDiagram-compositionStart,defs #classDiagram-dependencyEnd,defs #classDiagram-dependencyStart,defs #classDiagram-extensionEnd,defs #classDiagram-extensionStart{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}defs #classDiagram-aggregationEnd,defs #classDiagram-aggregationStart{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup .composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel,.nodeLabel p{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node circle.state-end,.node circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}defs 
#statediagram-barbEnd{stroke:var(--md-mermaid-edge-color)}.attributeBoxEven,.attributeBoxOdd{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityBox{fill:var(--md-mermaid-label-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityLabel{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.relationshipLabelBox{fill:var(--md-mermaid-label-bg-color);fill-opacity:1;background-color:var(--md-mermaid-label-bg-color);opacity:1}.relationshipLabel{fill:var(--md-mermaid-label-fg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}defs #ONE_OR_MORE_END *,defs #ONE_OR_MORE_START *,defs #ONLY_ONE_END *,defs #ONLY_ONE_START *,defs #ZERO_OR_MORE_END *,defs #ZERO_OR_MORE_START *,defs #ZERO_OR_ONE_END *,defs #ZERO_OR_ONE_START *{stroke:var(--md-mermaid-edge-color)!important}defs #ZERO_OR_MORE_END circle,defs #ZERO_OR_MORE_START circle{fill:var(--md-mermaid-label-bg-color)}.actor{fill:var(--md-mermaid-sequence-actor-bg-color);stroke:var(--md-mermaid-sequence-actor-border-color)}text.actor>tspan{fill:var(--md-mermaid-sequence-actor-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-mermaid-sequence-actor-line-color)}.actor-man circle,.actor-man line{fill:var(--md-mermaid-sequence-actorman-bg-color);stroke:var(--md-mermaid-sequence-actorman-line-color)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-sequence-message-line-color)}.note{fill:var(--md-mermaid-sequence-note-bg-color);stroke:var(--md-mermaid-sequence-note-border-color)}.loopText,.loopText>tspan,.messageText,.noteText>tspan{stroke:none;font-family:var(--md-mermaid-font-family)!important}.messageText{fill:var(--md-mermaid-sequence-message-fg-color)}.loopText,.loopText>tspan{fill:var(--md-mermaid-sequence-loop-fg-color)}.noteText>tspan{fill:var(--md-mermaid-sequence-note-fg-color)}#arrowhead 
path{fill:var(--md-mermaid-sequence-message-line-color);stroke:none}.loopLine{fill:var(--md-mermaid-sequence-loop-bg-color);stroke:var(--md-mermaid-sequence-loop-border-color)}.labelBox{fill:var(--md-mermaid-sequence-label-bg-color);stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-sequence-label-fg-color);font-family:var(--md-mermaid-font-family)}.sequenceNumber{fill:var(--md-mermaid-sequence-number-fg-color)}rect.rect{fill:var(--md-mermaid-sequence-box-bg-color);stroke:none}rect.rect+text.text{fill:var(--md-mermaid-sequence-box-fg-color)}defs #sequencenumber{fill:var(--md-mermaid-sequence-number-bg-color)!important}";var qr,ka=0;function Ha(){return typeof mermaid=="undefined"||mermaid instanceof Element?gt("https://unpkg.com/mermaid@10.7.0/dist/mermaid.min.js"):$(void 0)}function _n(e){return e.classList.remove("mermaid"),qr||(qr=Ha().pipe(y(()=>mermaid.initialize({startOnLoad:!1,themeCSS:Ln,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),B(1))),qr.subscribe(()=>ro(this,null,function*(){e.classList.add("mermaid");let t=`__mermaid_${ka++}`,r=E("div",{class:"mermaid"}),o=e.textContent,{svg:n,fn:i}=yield mermaid.render(t,o),a=r.attachShadow({mode:"closed"});a.innerHTML=n,e.replaceWith(r),i==null||i(a)})),qr.pipe(m(()=>({ref:e})))}var An=E("table");function Cn(e){return e.replaceWith(An),An.replaceWith(vn(e)),$({ref:e})}function $a(e){let t=e.find(r=>r.checked)||e[0];return T(...e.map(r=>d(r,"change").pipe(m(()=>P(`label[for="${r.id}"]`))))).pipe(q(P(`label[for="${t.id}"]`)),m(r=>({active:r})))}function kn(e,{viewport$:t,target$:r}){let o=P(".tabbed-labels",e),n=R(":scope > input",e),i=Nr("prev");e.append(i);let a=Nr("next");return e.append(a),H(()=>{let s=new g,c=s.pipe(ee(),oe(!0));Q([s,Ee(e)]).pipe(U(c),Me(1,de)).subscribe({next([{active:p},l]){let f=Ue(p),{width:u}=pe(p);e.style.setProperty("--md-indicator-x",`${f.x}px`),e.style.setProperty("--md-indicator-width",`${u}px`);let 
h=ir(o);(f.xh.x+l.width)&&o.scrollTo({left:Math.max(0,f.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),Q([et(o),Ee(o)]).pipe(U(c)).subscribe(([p,l])=>{let f=xt(o);i.hidden=p.x<16,a.hidden=p.x>f.width-l.width-16}),T(d(i,"click").pipe(m(()=>-1)),d(a,"click").pipe(m(()=>1))).pipe(U(c)).subscribe(p=>{let{width:l}=pe(o);o.scrollBy({left:l*p,behavior:"smooth"})}),r.pipe(U(c),b(p=>n.includes(p))).subscribe(p=>p.click()),o.classList.add("tabbed-labels--linked");for(let p of n){let l=P(`label[for="${p.id}"]`);l.replaceChildren(E("a",{href:`#${l.htmlFor}`,tabIndex:-1},...Array.from(l.childNodes))),d(l.firstElementChild,"click").pipe(U(c),b(f=>!(f.metaKey||f.ctrlKey)),y(f=>{f.preventDefault(),f.stopPropagation()})).subscribe(()=>{history.replaceState({},"",`#${l.htmlFor}`),l.click()})}return G("content.tabs.link")&&s.pipe(Le(1),ne(t)).subscribe(([{active:p},{offset:l}])=>{let f=p.innerText.trim();if(p.hasAttribute("data-md-switching"))p.removeAttribute("data-md-switching");else{let u=e.offsetTop-l.y;for(let w of R("[data-tabs]"))for(let A of R(":scope > input",w)){let Z=P(`label[for="${A.id}"]`);if(Z!==p&&Z.innerText.trim()===f){Z.setAttribute("data-md-switching",""),A.click();break}}window.scrollTo({top:e.offsetTop-u});let h=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([f,...h])])}}),s.pipe(U(c)).subscribe(()=>{for(let p of R("audio, video",e))p.pause()}),$a(n).pipe(y(p=>s.next(p)),_(()=>s.complete()),m(p=>F({ref:e},p)))}).pipe(ze(ae))}function Hn(e,{viewport$:t,target$:r,print$:o}){return T(...R(".annotate:not(.highlight)",e).map(n=>wn(n,{target$:r,print$:o})),...R("pre:not(.mermaid) > 
code",e).map(n=>On(n,{target$:r,print$:o})),...R("pre.mermaid",e).map(n=>_n(n)),...R("table:not([class])",e).map(n=>Cn(n)),...R("details",e).map(n=>Mn(n,{target$:r,print$:o})),...R("[data-tabs]",e).map(n=>kn(n,{viewport$:t,target$:r})),...R("[title]",e).filter(()=>G("content.tooltips")).map(n=>Ge(n)))}function Ra(e,{alert$:t}){return t.pipe(v(r=>T($(!0),$(!1).pipe(Ye(2e3))).pipe(m(o=>({message:r,active:o})))))}function $n(e,t){let r=P(".md-typeset",e);return H(()=>{let o=new g;return o.subscribe(({message:n,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=n}),Ra(e,t).pipe(y(n=>o.next(n)),_(()=>o.complete()),m(n=>F({ref:e},n)))})}function Pa({viewport$:e}){if(!G("header.autohide"))return $(!1);let t=e.pipe(m(({offset:{y:n}})=>n),Ke(2,1),m(([n,i])=>[nMath.abs(i-n.y)>100),m(([,[n]])=>n),Y()),o=We("search");return Q([e,o]).pipe(m(([{offset:n},i])=>n.y>400&&!i),Y(),v(n=>n?r:$(!1)),q(!1))}function Rn(e,t){return H(()=>Q([Ee(e),Pa(t)])).pipe(m(([{height:r},o])=>({height:r,hidden:o})),Y((r,o)=>r.height===o.height&&r.hidden===o.hidden),B(1))}function Pn(e,{header$:t,main$:r}){return H(()=>{let o=new g,n=o.pipe(ee(),oe(!0));o.pipe(X("active"),je(t)).subscribe(([{active:a},{hidden:s}])=>{e.classList.toggle("md-header--shadow",a&&!s),e.hidden=s});let i=fe(R("[title]",e)).pipe(b(()=>G("content.tooltips")),re(a=>Ge(a)));return r.subscribe(o),t.pipe(U(n),m(a=>F({ref:e},a)),$e(i.pipe(U(n))))})}function Ia(e,{viewport$:t,header$:r}){return pr(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:o}})=>{let{height:n}=pe(e);return{active:o>=n}}),X("active"))}function In(e,t){return H(()=>{let r=new g;r.subscribe({next({active:n}){e.classList.toggle("md-header__title--active",n)},complete(){e.classList.remove("md-header__title--active")}});let o=me(".md-content h1");return typeof o=="undefined"?L:Ia(o,t).pipe(y(n=>r.next(n)),_(()=>r.complete()),m(n=>F({ref:e},n)))})}function Fn(e,{viewport$:t,header$:r}){let 
o=r.pipe(m(({height:i})=>i),Y()),n=o.pipe(v(()=>Ee(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),X("bottom"))));return Q([o,n,t]).pipe(m(([i,{top:a,bottom:s},{offset:{y:c},size:{height:p}}])=>(p=Math.max(0,p-Math.max(0,a-c,i)-Math.max(0,p+c-s)),{offset:a-i,height:p,active:a-i<=c})),Y((i,a)=>i.offset===a.offset&&i.height===a.height&&i.active===a.active))}function Fa(e){let t=__md_get("__palette")||{index:e.findIndex(o=>matchMedia(o.getAttribute("data-md-color-media")).matches)},r=Math.max(0,Math.min(t.index,e.length-1));return $(...e).pipe(re(o=>d(o,"change").pipe(m(()=>o))),q(e[r]),m(o=>({index:e.indexOf(o),color:{media:o.getAttribute("data-md-color-media"),scheme:o.getAttribute("data-md-color-scheme"),primary:o.getAttribute("data-md-color-primary"),accent:o.getAttribute("data-md-color-accent")}})),B(1))}function jn(e){let t=R("input",e),r=E("meta",{name:"theme-color"});document.head.appendChild(r);let o=E("meta",{name:"color-scheme"});document.head.appendChild(o);let n=At("(prefers-color-scheme: light)");return H(()=>{let i=new g;return i.subscribe(a=>{if(document.body.setAttribute("data-md-color-switching",""),a.color.media==="(prefers-color-scheme)"){let s=matchMedia("(prefers-color-scheme: light)"),c=document.querySelector(s.matches?"[data-md-color-media='(prefers-color-scheme: light)']":"[data-md-color-media='(prefers-color-scheme: dark)']");a.color.scheme=c.getAttribute("data-md-color-scheme"),a.color.primary=c.getAttribute("data-md-color-primary"),a.color.accent=c.getAttribute("data-md-color-accent")}for(let[s,c]of Object.entries(a.color))document.body.setAttribute(`data-md-color-${s}`,c);for(let s=0;sa.key==="Enter"),ne(i,(a,s)=>s)).subscribe(({index:a})=>{a=(a+1)%t.length,t[a].click(),t[a].focus()}),i.pipe(m(()=>{let a=Te("header"),s=window.getComputedStyle(a);return 
o.content=s.colorScheme,s.backgroundColor.match(/\d+/g).map(c=>(+c).toString(16).padStart(2,"0")).join("")})).subscribe(a=>r.content=`#${a}`),i.pipe(Oe(ae)).subscribe(()=>{document.body.removeAttribute("data-md-color-switching")}),Fa(t).pipe(U(n.pipe(Le(1))),at(),y(a=>i.next(a)),_(()=>i.complete()),m(a=>F({ref:e},a)))})}function Un(e,{progress$:t}){return H(()=>{let r=new g;return r.subscribe(({value:o})=>{e.style.setProperty("--md-progress-value",`${o}`)}),t.pipe(y(o=>r.next({value:o})),_(()=>r.complete()),m(o=>({ref:e,value:o})))})}var Kr=jt(zr());function ja(e){e.setAttribute("data-md-copying","");let t=e.closest("[data-copy]"),r=t?t.getAttribute("data-copy"):e.innerText;return e.removeAttribute("data-md-copying"),r.trimEnd()}function Wn({alert$:e}){Kr.default.isSupported()&&new j(t=>{new Kr.default("[data-clipboard-target], [data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||ja(P(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(y(t=>{t.trigger.focus()}),m(()=>ge("clipboard.copied"))).subscribe(e)}function Dn(e,t){return e.protocol=t.protocol,e.hostname=t.hostname,e}function Ua(e,t){let r=new Map;for(let o of R("url",e)){let n=P("loc",o),i=[Dn(new URL(n.textContent),t)];r.set(`${i[0]}`,i);for(let a of R("[rel=alternate]",o)){let s=a.getAttribute("href");s!=null&&i.push(Dn(new URL(s),t))}}return r}function mr(e){return on(new URL("sitemap.xml",e)).pipe(m(t=>Ua(t,new URL(e))),he(()=>$(new Map)))}function Wa(e,t){if(!(e.target instanceof Element))return L;let r=e.target.closest("a");if(r===null)return L;if(r.target||e.metaKey||e.ctrlKey)return L;let o=new URL(r.href);return o.search=o.hash="",t.has(`${o}`)?(e.preventDefault(),$(new URL(r.href))):L}function Nn(e){let t=new Map;for(let r of R(":scope > *",e.head))t.set(r.outerHTML,r);return t}function Vn(e){for(let t of R("[href], [src]",e))for(let r of["href","src"]){let o=t.getAttribute(r);if(o&&!/^(?:[a-z]+:)?\/\//i.test(o)){t[r]=t[r];break}}return $(e)}function 
Da(e){for(let o of["[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...G("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let n=me(o),i=me(o,e);typeof n!="undefined"&&typeof i!="undefined"&&n.replaceWith(i)}let t=Nn(document);for(let[o,n]of Nn(e))t.has(o)?t.delete(o):document.head.appendChild(n);for(let o of t.values()){let n=o.getAttribute("name");n!=="theme-color"&&n!=="color-scheme"&&o.remove()}let r=Te("container");return Fe(R("script",r)).pipe(v(o=>{let n=e.createElement("script");if(o.src){for(let i of o.getAttributeNames())n.setAttribute(i,o.getAttribute(i));return o.replaceWith(n),new j(i=>{n.onload=()=>i.complete()})}else return n.textContent=o.textContent,o.replaceWith(n),L}),ee(),oe(document))}function zn({location$:e,viewport$:t,progress$:r}){let o=we();if(location.protocol==="file:")return L;let n=mr(o.base);$(document).subscribe(Vn);let i=d(document.body,"click").pipe(je(n),v(([c,p])=>Wa(c,p)),le()),a=d(window,"popstate").pipe(m(ve),le());i.pipe(ne(t)).subscribe(([c,{offset:p}])=>{history.replaceState(p,""),history.pushState(null,"",c)}),T(i,a).subscribe(e);let s=e.pipe(X("pathname"),v(c=>rn(c,{progress$:r}).pipe(he(()=>(st(c,!0),L)))),v(Vn),v(Da),le());return T(s.pipe(ne(e,(c,p)=>p)),e.pipe(X("pathname"),v(()=>e),X("hash")),e.pipe(Y((c,p)=>c.pathname===p.pathname&&c.hash===p.hash),v(()=>i),y(()=>history.back()))).subscribe(c=>{var p,l;history.state!==null||!c.hash?window.scrollTo(0,(l=(p=history.state)==null?void 0:p.y)!=null?l:0):(history.scrollRestoration="auto",Zo(c.hash),history.scrollRestoration="manual")}),e.subscribe(()=>{history.scrollRestoration="manual"}),d(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}),t.pipe(X("offset"),be(100)).subscribe(({offset:c})=>{history.replaceState(c,"")}),s}var Qn=jt(Kn());function Yn(e){let 
t=e.separator.split("|").map(n=>n.replace(/(\(\?[!=<][^)]+\))/g,"").length===0?"\uFFFD":n).join("|"),r=new RegExp(t,"img"),o=(n,i,a)=>`${i}${a}`;return n=>{n=n.replace(/[\s*+\-:~^]+/g," ").trim();let i=new RegExp(`(^|${e.separator}|)(${n.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return a=>(0,Qn.default)(a).replace(i,o).replace(/<\/mark>(\s+)]*>/img,"$1")}}function Ht(e){return e.type===1}function fr(e){return e.type===3}function Bn(e,t){let r=ln(e);return T($(location.protocol!=="file:"),We("search")).pipe(He(o=>o),v(()=>t)).subscribe(({config:o,docs:n})=>r.next({type:0,data:{config:o,docs:n,options:{suggest:G("search.suggest")}}})),r}function Gn({document$:e}){let t=we(),r=De(new URL("../versions.json",t.base)).pipe(he(()=>L)),o=r.pipe(m(n=>{let[,i]=t.base.match(/([^/]+)\/?$/);return n.find(({version:a,aliases:s})=>a===i||s.includes(i))||n[0]}));r.pipe(m(n=>new Map(n.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),v(n=>d(document.body,"click").pipe(b(i=>!i.metaKey&&!i.ctrlKey),ne(o),v(([i,a])=>{if(i.target instanceof Element){let s=i.target.closest("a");if(s&&!s.target&&n.has(s.href)){let c=s.href;return!i.target.closest(".md-version")&&n.get(c)===a?L:(i.preventDefault(),$(c))}}return L}),v(i=>{let{version:a}=n.get(i);return mr(new URL(i)).pipe(m(s=>{let p=ve().href.replace(t.base,"");return s.has(p.split("#")[0])?new URL(`../${a}/${p}`,t.base):new URL(i)}))})))).subscribe(n=>st(n,!0)),Q([r,o]).subscribe(([n,i])=>{P(".md-header__topic").appendChild(gn(n,i))}),e.pipe(v(()=>o)).subscribe(n=>{var a;let i=__md_get("__outdated",sessionStorage);if(i===null){i=!0;let s=((a=t.version)==null?void 0:a.default)||"latest";Array.isArray(s)||(s=[s]);e:for(let c of s)for(let p of n.aliases.concat(n.version))if(new RegExp(c,"i").test(p)){i=!1;break e}__md_set("__outdated",i,sessionStorage)}if(i)for(let s of ie("outdated"))s.hidden=!1})}function 
Ka(e,{worker$:t}){let{searchParams:r}=ve();r.has("q")&&(Be("search",!0),e.value=r.get("q"),e.focus(),We("search").pipe(He(i=>!i)).subscribe(()=>{let i=ve();i.searchParams.delete("q"),history.replaceState({},"",`${i}`)}));let o=vt(e),n=T(t.pipe(He(Ht)),d(e,"keyup"),o).pipe(m(()=>e.value),Y());return Q([n,o]).pipe(m(([i,a])=>({value:i,focus:a})),B(1))}function Jn(e,{worker$:t}){let r=new g,o=r.pipe(ee(),oe(!0));Q([t.pipe(He(Ht)),r],(i,a)=>a).pipe(X("value")).subscribe(({value:i})=>t.next({type:2,data:i})),r.pipe(X("focus")).subscribe(({focus:i})=>{i&&Be("search",i)}),d(e.form,"reset").pipe(U(o)).subscribe(()=>e.focus());let n=P("header [for=__search]");return d(n,"click").subscribe(()=>e.focus()),Ka(e,{worker$:t}).pipe(y(i=>r.next(i)),_(()=>r.complete()),m(i=>F({ref:e},i)),B(1))}function Xn(e,{worker$:t,query$:r}){let o=new g,n=Yo(e.parentElement).pipe(b(Boolean)),i=e.parentElement,a=P(":scope > :first-child",e),s=P(":scope > :last-child",e);We("search").subscribe(l=>s.setAttribute("role",l?"list":"presentation")),o.pipe(ne(r),Ir(t.pipe(He(Ht)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:a.textContent=f.length?ge("search.result.none"):ge("search.result.placeholder");break;case 1:a.textContent=ge("search.result.one");break;default:let u=ar(l.length);a.textContent=ge("search.result.other",u)}});let c=o.pipe(y(()=>s.innerHTML=""),v(({items:l})=>T($(...l.slice(0,10)),$(...l.slice(10)).pipe(Ke(4),jr(n),v(([f])=>f)))),m(hn),le());return c.subscribe(l=>s.appendChild(l)),c.pipe(re(l=>{let f=me("details",l);return typeof f=="undefined"?L:d(f,"toggle").pipe(U(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(b(fr),m(({data:l})=>l)).pipe(y(l=>o.next(l)),_(()=>o.complete()),m(l=>F({ref:e},l)))}function Qa(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=ve();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function Zn(e,t){let r=new 
g,o=r.pipe(ee(),oe(!0));return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),d(e,"click").pipe(U(o)).subscribe(n=>n.preventDefault()),Qa(e,t).pipe(y(n=>r.next(n)),_(()=>r.complete()),m(n=>F({ref:e},n)))}function ei(e,{worker$:t,keyboard$:r}){let o=new g,n=Te("search-query"),i=T(d(n,"keydown"),d(n,"focus")).pipe(Oe(ae),m(()=>n.value),Y());return o.pipe(je(i),m(([{suggest:s},c])=>{let p=c.split(/([\s-]+)/);if(s!=null&&s.length&&p[p.length-1]){let l=s[s.length-1];l.startsWith(p[p.length-1])&&(p[p.length-1]=l)}else p.length=0;return p})).subscribe(s=>e.innerHTML=s.join("").replace(/\s/g," ")),r.pipe(b(({mode:s})=>s==="search")).subscribe(s=>{switch(s.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(b(fr),m(({data:s})=>s)).pipe(y(s=>o.next(s)),_(()=>o.complete()),m(()=>({ref:e})))}function ti(e,{index$:t,keyboard$:r}){let o=we();try{let n=Bn(o.search,t),i=Te("search-query",e),a=Te("search-result",e);d(e,"click").pipe(b(({target:c})=>c instanceof Element&&!!c.closest("a"))).subscribe(()=>Be("search",!1)),r.pipe(b(({mode:c})=>c==="search")).subscribe(c=>{let p=Re();switch(c.type){case"Enter":if(p===i){let l=new Map;for(let f of R(":first-child [href]",a)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,h])=>h-u);f.click()}c.claim()}break;case"Escape":case"Tab":Be("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof p=="undefined")i.focus();else{let l=[i,...R(":not(details) > [href], summary, details[open] [href]",a)],f=Math.max(0,(Math.max(0,l.indexOf(p))+l.length+(c.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}c.claim();break;default:i!==Re()&&i.focus()}}),r.pipe(b(({mode:c})=>c==="global")).subscribe(c=>{switch(c.type){case"f":case"s":case"/":i.focus(),i.select(),c.claim();break}});let s=Jn(i,{worker$:n});return 
T(s,Xn(a,{worker$:n,query$:s})).pipe($e(...ie("search-share",e).map(c=>Zn(c,{query$:s})),...ie("search-suggest",e).map(c=>ei(c,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,qe}}function ri(e,{index$:t,location$:r}){return Q([t,r.pipe(q(ve()),b(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>Yn(o.config)(n.searchParams.get("h"))),m(o=>{var a;let n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let s=i.nextNode();s;s=i.nextNode())if((a=s.parentElement)!=null&&a.offsetHeight){let c=s.textContent,p=o(c);p.length>c.length&&n.set(s,p)}for(let[s,c]of n){let{childNodes:p}=E("span",null,c);s.replaceWith(...Array.from(p))}return{ref:e,nodes:n}}))}function Ya(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return Q([r,t]).pipe(m(([{offset:i,height:a},{offset:{y:s}}])=>(a=a+Math.min(n,Math.max(0,s-i))-n,{height:a,locked:s>=i+n})),Y((i,a)=>i.height===a.height&&i.locked===a.locked))}function Qr(e,o){var n=o,{header$:t}=n,r=to(n,["header$"]);let i=P(".md-sidebar__scrollwrap",e),{y:a}=Ue(i);return H(()=>{let s=new g,c=s.pipe(ee(),oe(!0)),p=s.pipe(Me(0,de));return p.pipe(ne(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*a}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),p.pipe(He()).subscribe(()=>{for(let l of R(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:h}=pe(f);f.scrollTo({top:u-h/2})}}}),fe(R("label[tabindex]",e)).pipe(re(l=>d(l,"click").pipe(Oe(ae),m(()=>l),U(c)))).subscribe(l=>{let f=P(`[id="${l.htmlFor}"]`);P(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),Ya(e,r).pipe(y(l=>s.next(l)),_(()=>s.complete()),m(l=>F({ref:e},l)))})}function oi(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return 
Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? 
\"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if 
(self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? 
arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} 
useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || 
exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener 
to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName === 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from 
https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) 
{\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = 
index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "/*\n * Copyright (c) 2016-2024 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n 
mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchEllipsis,\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? 
fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchEllipsis({ document$ })\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog 
Just use the partial observer directly.\n partialObserver = observerOrNext;\n }\n }\n\n // Wrap the partial observer to ensure it's a full observer, and\n // make sure proper error handling is accounted for.\n this.destination = new ConsumerObserver(partialObserver);\n }\n}\n\nfunction handleUnhandledError(error: any) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n captureError(error);\n } else {\n // Ideal path, we report this as an unhandled error,\n // which is thrown on a new call stack.\n reportUnhandledError(error);\n }\n}\n\n/**\n * An error handler used when no error handler was supplied\n * to the SafeSubscriber -- meaning no error handler was supplied\n * do the `subscribe` call on our observable.\n * @param err The error to handle\n */\nfunction defaultErrorHandler(err: any) {\n throw err;\n}\n\n/**\n * A handler for notifications that cannot be sent to a stopped subscriber.\n * @param notification The notification being sent\n * @param subscriber The stopped subscriber\n */\nfunction handleStoppedNotification(notification: ObservableNotification, subscriber: Subscriber) {\n const { onStoppedNotification } = config;\n onStoppedNotification && timeoutProvider.setTimeout(() => onStoppedNotification(notification, subscriber));\n}\n\n/**\n * The observer used as a stub for subscriptions where the user did not\n * pass any arguments to `subscribe`. Comes with the default error handling\n * behavior.\n */\nexport const EMPTY_OBSERVER: Readonly> & { closed: true } = {\n closed: true,\n next: noop,\n error: defaultErrorHandler,\n complete: noop,\n};\n", "/**\n * Symbol.observable or a string \"@@observable\". 
 * Used for interop
 *
 * @deprecated We will no longer be exporting this symbol in upcoming versions of RxJS.
 * Instead polyfill and use Symbol.observable directly *or* use https://www.npmjs.com/package/symbol-observable
 */
export const observable: string | symbol = (() => (typeof Symbol === 'function' && Symbol.observable) || '@@observable')();

// ---- util/identity.ts ----

/**
 * This function takes one parameter and just returns it. Simply put,
 * this is like `<T>(x: T): T => x`.
 *
 * ## Examples
 *
 * This is useful in some cases when using things like `mergeMap`
 *
 * ```ts
 * import { interval, take, map, range, mergeMap, identity } from 'rxjs';
 *
 * const source$ = interval(1000).pipe(take(5));
 *
 * const result$ = source$.pipe(
 *   map(i => range(i)),
 *   mergeMap(identity) // same as mergeMap(x => x)
 * );
 *
 * result$.subscribe({
 *   next: console.log
 * });
 * ```
 *
 * Or when you want to selectively apply an operator
 *
 * ```ts
 * import { interval, take, identity } from 'rxjs';
 *
 * const shouldLimit = () => Math.random() < 0.5;
 *
 * const source$ = interval(1000);
 *
 * const result$ = source$.pipe(shouldLimit() ? take(5) : identity);
 *
 * result$.subscribe({
 *   next: console.log
 * });
 * ```
 *
 * @param x Any value that is returned by this function
 * @returns The value passed as the first parameter to this function
 */
export function identity<T>(x: T): T {
  return x;
}

// ---- util/pipe.ts ----

import { identity } from './identity';
import { UnaryFunction } from '../types';

export function pipe(): typeof identity;
export function pipe<T, A>(fn1: UnaryFunction<T, A>): UnaryFunction<T, A>;
export function pipe<T, A, B>(fn1: UnaryFunction<T, A>, fn2: UnaryFunction<A, B>): UnaryFunction<T, B>;
export function pipe<T, A, B, C>(fn1: UnaryFunction<T, A>, fn2: UnaryFunction<A, B>, fn3: UnaryFunction<B, C>): UnaryFunction<T, C>;
export function pipe<T, A, B, C, D>(
  fn1: UnaryFunction<T, A>,
  fn2: UnaryFunction<A, B>,
  fn3: UnaryFunction<B, C>,
  fn4: UnaryFunction<C, D>
): UnaryFunction<T, D>;
export function pipe<T, A, B, C, D, E>(
  fn1: UnaryFunction<T, A>,
  fn2: UnaryFunction<A, B>,
  fn3: UnaryFunction<B, C>,
  fn4: UnaryFunction<C, D>,
  fn5: UnaryFunction<D, E>
): UnaryFunction<T, E>;
export function pipe<T, A, B, C, D, E, F>(
  fn1: UnaryFunction<T, A>,
  fn2: UnaryFunction<A, B>,
  fn3: UnaryFunction<B, C>,
  fn4: UnaryFunction<C, D>,
  fn5: UnaryFunction<D, E>,
  fn6: UnaryFunction<E, F>
): UnaryFunction<T, F>;
export function pipe<T, A, B, C, D, E, F, G>(
  fn1: UnaryFunction<T, A>,
  fn2: UnaryFunction<A, B>,
  fn3: UnaryFunction<B, C>,
  fn4: UnaryFunction<C, D>,
  fn5: UnaryFunction<D, E>,
  fn6: UnaryFunction<E, F>,
  fn7: UnaryFunction<F, G>
): UnaryFunction<T, G>;
export function pipe<T, A, B, C, D, E, F, G, H>(
  fn1: UnaryFunction<T, A>,
  fn2: UnaryFunction<A, B>,
  fn3: UnaryFunction<B, C>,
  fn4: UnaryFunction<C, D>,
  fn5: UnaryFunction<D, E>,
  fn6: UnaryFunction<E, F>,
  fn7: UnaryFunction<F, G>,
  fn8: UnaryFunction<G, H>
): UnaryFunction<T, H>;
export function pipe<T, A, B, C, D, E, F, G, H, I>(
  fn1: UnaryFunction<T, A>,
  fn2: UnaryFunction<A, B>,
  fn3: UnaryFunction<B, C>,
  fn4: UnaryFunction<C, D>,
  fn5: UnaryFunction<D, E>,
  fn6: UnaryFunction<E, F>,
  fn7: UnaryFunction<F, G>,
  fn8: UnaryFunction<G, H>,
  fn9: UnaryFunction<H, I>,
  ...fns: UnaryFunction<any, any>[]
):
UnaryFunction<T, unknown>;

/**
 * pipe() can be called on one or more functions, each of which can take one argument ("UnaryFunction")
 * and uses it to return a value.
 * It returns a function that takes one argument, passes it to the first UnaryFunction, and then
 * passes the result to the next one, passes that result to the next one, and so on.
 */
export function pipe(...fns: Array<UnaryFunction<any, any>>): UnaryFunction<any, any> {
  return pipeFromArray(fns);
}

/** @internal */
export function pipeFromArray<T, R>(fns: Array<UnaryFunction<T, R>>): UnaryFunction<T, R> {
  if (fns.length === 0) {
    return identity as UnaryFunction<any, any>;
  }

  if (fns.length === 1) {
    return fns[0];
  }

  return function piped(input: T): R {
    return fns.reduce((prev: any, fn: UnaryFunction<any, any>) => fn(prev), input as any);
  };
}

// ---- Observable.ts ----

import { Operator } from './Operator';
import { SafeSubscriber, Subscriber } from './Subscriber';
import { isSubscription, Subscription } from './Subscription';
import { TeardownLogic, OperatorFunction, Subscribable, Observer } from './types';
import { observable as Symbol_observable } from './symbol/observable';
import { pipeFromArray } from './util/pipe';
import { config } from './config';
import { isFunction } from './util/isFunction';
import { errorContext } from './util/errorContext';

/**
 * A representation of any set of values over any amount of time. This is the most basic building block
 * of RxJS.
 *
 * @class Observable<T>
 */
export class Observable<T> implements Subscribable<T> {
  /**
   * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.
   */
  source: Observable<any> | undefined;

  /**
   * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.
   */
  operator: Operator<any, T> | undefined;

  /**
   * @constructor
   * @param {Function} subscribe the function that is called when the Observable is
   * initially subscribed to.
   * This function is given a Subscriber, to which new values
   * can be `next`ed, or an `error` method can be called to raise an error, or
   * `complete` can be called to notify of a successful completion.
   */
  constructor(subscribe?: (this: Observable<T>, subscriber: Subscriber<T>) => TeardownLogic) {
    if (subscribe) {
      this._subscribe = subscribe;
    }
  }

  // HACK: Since TypeScript inherits static properties too, we have to
  // fight against TypeScript here so Subject can have a different static create signature
  /**
   * Creates a new Observable by calling the Observable constructor
   * @owner Observable
   * @method create
   * @param {Function} subscribe? the subscriber function to be passed to the Observable constructor
   * @return {Observable} a new observable
   * @nocollapse
   * @deprecated Use `new Observable()` instead. Will be removed in v8.
   */
  static create: (...args: any[]) => any = <T>(subscribe?: (subscriber: Subscriber<T>) => TeardownLogic) => {
    return new Observable<T>(subscribe);
  };

  /**
   * Creates a new Observable, with this Observable instance as the source, and the passed
   * operator defined as the new observable's operator.
   * @method lift
   * @param operator the operator defining the operation to take on the observable
   * @return a new observable with the Operator applied
   * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.
   * If you have implemented an operator using `lift`, it is recommended that you create an
   * operator by simply returning `new Observable()` directly. See "Creating new operators from
   * scratch" section here: https://rxjs.dev/guide/operators
   */
  lift<R>(operator?: Operator<T, R>): Observable<R> {
    const observable = new Observable<R>();
    observable.source = this;
    observable.operator = operator;
    return observable;
  }

  subscribe(observerOrNext?: Partial<Observer<T>> | ((value: T) => void)): Subscription;
  /** @deprecated Instead of passing separate callback arguments, use an observer argument. Signatures taking separate callback arguments will be removed in v8. Details: https://rxjs.dev/deprecations/subscribe-arguments */
  subscribe(next?: ((value: T) => void) | null, error?: ((error: any) => void) | null, complete?: (() => void) | null): Subscription;
  /**
   * Invokes an execution of an Observable and registers Observer handlers for notifications it will emit.
   *
   * Use it when you have all these Observables, but still nothing is happening.
   *
   * `subscribe` is not a regular operator, but a method that calls Observable's internal `subscribe` function. It
   * might be for example a function that you passed to Observable's constructor, but most of the time it is
   * a library implementation, which defines what will be emitted by an Observable, and when it will be emitted. This means
   * that calling `subscribe` is actually the moment when Observable starts its work, not when it is created, as is often
   * thought.
   *
   * Apart from starting the execution of an Observable, this method allows you to listen for values
   * that an Observable emits, as well as for when it completes or errors. You can achieve this in two
   * of the following ways.
   *
   * The first way is creating an object that implements the {@link Observer} interface. It should have methods
   * defined by that interface, but note that it should be just a regular JavaScript object, which you can create
   * yourself in any way you want (ES6 class, classic function constructor, object literal etc.). In particular, do
   * not attempt to use any RxJS implementation details to create Observers - you don't need them. Remember also
   * that your object does not have to implement all methods. If you find yourself creating a method that doesn't
   * do anything, you can simply omit it. Note however, if the `error` method is not provided and an error happens,
   * it will be thrown asynchronously. Errors thrown asynchronously cannot be caught using `try`/`catch`. Instead,
   * use the {@link onUnhandledError} configuration option or use a runtime handler (like `window.onerror` or
   * `process.on('error')`) to be notified of unhandled errors. Because of this, it's recommended that you provide
   * an `error` method to avoid missing thrown errors.
   *
   * The second way is to give up on the Observer object altogether and simply provide callback functions in place of its methods.
   * This means you can provide three functions as arguments to `subscribe`, where the first function is the equivalent
   * of a `next` method, the second of an `error` method and the third of a `complete` method. Just as in the case of an Observer,
   * if you do not need to listen for something, you can omit a function by passing `undefined` or `null`,
   * since `subscribe` recognizes these functions by where they were placed in the function call. When it comes
   * to the `error` function, as with an Observer, if not provided, errors emitted by an Observable will be thrown asynchronously.
   *
   * You can, however, subscribe with no parameters at all. This may be the case where you're not interested in terminal events
   * and you also handled emissions internally by using operators (e.g. using `tap`).
   *
   * Whichever style of calling `subscribe` you use, in both cases it returns a Subscription object.
   * This object allows you to call `unsubscribe` on it, which in turn will stop the work that an Observable does and will clean
   * up all resources that an Observable used. Note that cancelling a subscription will not call the `complete` callback
   * provided to the `subscribe` function, which is reserved for a regular completion signal that comes from an Observable.
   *
   * Remember that callbacks provided to `subscribe` are not guaranteed to be called asynchronously.
   * It is an Observable itself that decides when these functions will be called. For example {@link of}
   * by default emits all its values synchronously. Always check documentation for how a given Observable
   * will behave when subscribed and if its default behavior can be modified with a `scheduler`.
   *
   * #### Examples
   *
   * Subscribe with an {@link guide/observer Observer}
   *
   * ```ts
   * import { of } from 'rxjs';
   *
   * const sumObserver = {
   *   sum: 0,
   *   next(value) {
   *     console.log('Adding: ' + value);
   *     this.sum = this.sum + value;
   *   },
   *   error() {
   *     // We actually could just remove this method,
   *     // since we do not really care about errors right now.
   *   },
   *   complete() {
   *     console.log('Sum equals: ' + this.sum);
   *   }
   * };
   *
   * of(1, 2, 3) // Synchronously emits 1, 2, 3 and then completes.
   *   .subscribe(sumObserver);
   *
   * // Logs:
   * // 'Adding: 1'
   * // 'Adding: 2'
   * // 'Adding: 3'
   * // 'Sum equals: 6'
   * ```
   *
   * Subscribe with functions ({@link deprecations/subscribe-arguments deprecated})
   *
   * ```ts
   * import { of } from 'rxjs'
   *
   * let sum = 0;
   *
   * of(1, 2, 3).subscribe(
   *   value => {
   *     console.log('Adding: ' + value);
   *     sum = sum + value;
   *   },
   *   undefined,
   *   () => console.log('Sum equals: ' + sum)
   * );
   *
   * // Logs:
   * // 'Adding: 1'
   * // 'Adding: 2'
   * // 'Adding: 3'
   * // 'Sum equals: 6'
   * ```
   *
   * Cancel a subscription
   *
   * ```ts
   * import { interval } from 'rxjs';
   *
   * const subscription = interval(1000).subscribe({
   *   next(num) {
   *     console.log(num)
   *   },
   *   complete() {
   *     // Will not be called, even when cancelling subscription.
   *     console.log('completed!');
   *   }
   * });
   *
   * setTimeout(() => {
   *   subscription.unsubscribe();
   *   console.log('unsubscribed!');
   * }, 2500);
   *
   * // Logs:
   * // 0 after 1s
   * // 1 after 2s
   * // 'unsubscribed!' after 2.5s
   * ```
   *
   * @param {Observer|Function} observerOrNext (optional) Either an observer with methods to be called,
   * or the first of three possible handlers, which is the handler for each value emitted from the subscribed
   * Observable.
   * @param {Function} error (optional) A handler for a terminal event resulting from an error. If no error handler is provided,
   * the error will be thrown asynchronously as unhandled.
   * @param {Function} complete (optional) A handler for a terminal event resulting from successful completion.
   * @return {Subscription} a subscription reference to the registered handlers
   * @method subscribe
   */
  subscribe(
    observerOrNext?: Partial<Observer<T>> | ((value: T) => void) | null,
    error?: ((error: any) => void) | null,
    complete?: (() => void) | null
  ): Subscription {
    const subscriber = isSubscriber(observerOrNext) ? observerOrNext : new SafeSubscriber(observerOrNext, error, complete);

    errorContext(() => {
      const { operator, source } = this;
      subscriber.add(
        operator
          ? // We're dealing with a subscription in the
            // operator chain to one of our lifted operators.
            operator.call(subscriber, source)
          : source
          ? // If `source` has a value, but `operator` does not, something that
            // had intimate knowledge of our API, like our `Subject`, must have
            // set it.
            // We're going to just call `_subscribe` directly.
            this._subscribe(subscriber)
          : // In all other cases, we're likely wrapping a user-provided initializer
            // function, so we need to catch errors and handle them appropriately.
            this._trySubscribe(subscriber)
      );
    });

    return subscriber;
  }

  /** @internal */
  protected _trySubscribe(sink: Subscriber<T>): TeardownLogic {
    try {
      return this._subscribe(sink);
    } catch (err) {
      // We don't need to return anything in this case,
      // because it's just going to try to `add()` to a subscription
      // above.
      sink.error(err);
    }
  }

  /**
   * Used as a NON-CANCELLABLE means of subscribing to an observable, for use with
   * APIs that expect promises, like `async/await`. You cannot unsubscribe from this.
   *
   * **WARNING**: Only use this with observables you *know* will complete. If the source
   * observable does not complete, you will end up with a promise that is hung up, and
   * potentially all of the state of an async function hanging out in memory. To avoid
   * this situation, look into adding something like {@link timeout}, {@link take},
   * {@link takeWhile}, or {@link takeUntil} amongst others.
   *
   * #### Example
   *
   * ```ts
   * import { interval, take } from 'rxjs';
   *
   * const source$ = interval(1000).pipe(take(4));
   *
   * async function getTotal() {
   *   let total = 0;
   *
   *   await source$.forEach(value => {
   *     total += value;
   *     console.log('observable -> ' + value);
   *   });
   *
   *   return total;
   * }
   *
   * getTotal().then(
   *   total => console.log('Total: ' + total)
   * );
   *
   * // Expected:
   * // 'observable -> 0'
   * // 'observable -> 1'
   * // 'observable -> 2'
   * // 'observable -> 3'
   * // 'Total: 6'
   * ```
   *
   * @param next a handler for each value emitted by the observable
   * @return a promise that either resolves on observable completion or
   * rejects with the handled error
   */
  forEach(next: (value: T) => void): Promise<void>;

  /**
   * @param next a handler for each value emitted by the observable
   * @param promiseCtor a constructor function used to instantiate the Promise
   * @return a promise that either resolves on observable completion or
   * rejects with the handled error
   * @deprecated Passing a Promise constructor will no longer be available
   * in upcoming versions of RxJS. This is because it adds weight to the library, for very
   * little benefit. If you need this functionality, it is recommended that you either
   * polyfill Promise, or you create an adapter to convert the returned native promise
   * to whatever promise implementation you wanted. Will be removed in v8.
   */
  forEach(next: (value: T) => void, promiseCtor: PromiseConstructorLike): Promise<void>;

  forEach(next: (value: T) => void, promiseCtor?: PromiseConstructorLike): Promise<void> {
    promiseCtor = getPromiseCtor(promiseCtor);

    return new promiseCtor<void>((resolve, reject) => {
      const subscriber = new SafeSubscriber<T>({
        next: (value) => {
          try {
            next(value);
          } catch (err) {
            reject(err);
            subscriber.unsubscribe();
          }
        },
        error: reject,
        complete: resolve,
      });
      this.subscribe(subscriber);
    }) as Promise<void>;
  }

  /** @internal */
  protected _subscribe(subscriber: Subscriber<any>): TeardownLogic {
    return this.source?.subscribe(subscriber);
  }

  /**
   * An interop point defined by the es7-observable spec https://github.com/zenparsing/es-observable
   * @method Symbol.observable
   * @return {Observable} this instance of the observable
   */
  [Symbol_observable]() {
    return this;
  }

  /* tslint:disable:max-line-length */
  pipe(): Observable<T>;
  pipe<A>(op1: OperatorFunction<T, A>): Observable<A>;
  pipe<A, B>(op1: OperatorFunction<T, A>, op2: OperatorFunction<A, B>): Observable<B>;
  pipe<A, B, C>(op1: OperatorFunction<T, A>, op2: OperatorFunction<A, B>, op3: OperatorFunction<B, C>): Observable<C>;
  pipe<A, B, C, D>(
    op1: OperatorFunction<T, A>,
    op2: OperatorFunction<A, B>,
    op3: OperatorFunction<B, C>,
    op4: OperatorFunction<C, D>
  ): Observable<D>;
  pipe<A, B, C, D, E>(
    op1: OperatorFunction<T, A>,
    op2: OperatorFunction<A, B>,
    op3: OperatorFunction<B, C>,
    op4: OperatorFunction<C, D>,
    op5: OperatorFunction<D, E>
  ): Observable<E>;
  pipe<A, B, C, D, E, F>(
    op1: OperatorFunction<T, A>,
    op2: OperatorFunction<A, B>,
    op3: OperatorFunction<B, C>,
    op4: OperatorFunction<C, D>,
    op5: OperatorFunction<D, E>,
    op6: OperatorFunction<E, F>
  ): Observable<F>;
  pipe<A, B, C, D, E, F, G>(
    op1: OperatorFunction<T, A>,
    op2: OperatorFunction<A, B>,
    op3: OperatorFunction<B, C>,
    op4: OperatorFunction<C, D>,
    op5: OperatorFunction<D, E>,
    op6: OperatorFunction<E, F>,
    op7: OperatorFunction<F, G>
  ): Observable<G>;
  pipe<A, B, C, D, E, F, G, H>(
    op1: OperatorFunction<T, A>,
    op2: OperatorFunction<A, B>,
    op3: OperatorFunction<B, C>,
    op4: OperatorFunction<C, D>,
    op5: OperatorFunction<D, E>,
    op6: OperatorFunction<E, F>,
    op7: OperatorFunction<F, G>,
    op8: OperatorFunction<G, H>
  ): Observable<H>;
  pipe<A, B, C, D, E, F, G, H, I>(
    op1: OperatorFunction<T, A>,
    op2: OperatorFunction<A, B>,
    op3: OperatorFunction<B, C>,
    op4: OperatorFunction<C, D>,
    op5: OperatorFunction<D, E>,
    op6: OperatorFunction<E, F>,
    op7: OperatorFunction<F, G>,
    op8: OperatorFunction<G, H>,
    op9: OperatorFunction<H, I>
  ): Observable<I>;
  pipe<A, B, C, D, E, F, G, H, I>(
    op1: OperatorFunction<T, A>,
    op2: OperatorFunction<A, B>,
    op3: OperatorFunction<B, C>,
    op4: OperatorFunction<C, D>,
    op5: OperatorFunction<D, E>,
    op6: OperatorFunction<E, F>,
    op7: OperatorFunction<F, G>,
    op8: OperatorFunction<G, H>,
    op9: OperatorFunction<H, I>,
    ...operations: OperatorFunction<any, any>[]
  ): Observable<unknown>;
  /* tslint:enable:max-line-length */

  /**
   * Used to stitch together functional operators into a chain.
   * @method pipe
   * @return {Observable} the Observable result of all of the operators having
   * been called in the order they were passed in.
   *
   * ## Example
   *
   * ```ts
   * import { interval, filter, map, scan } from 'rxjs';
   *
   * interval(1000)
   *   .pipe(
   *     filter(x => x % 2 === 0),
   *     map(x => x + x),
   *     scan((acc, x) => acc + x)
   *   )
   *   .subscribe(x => console.log(x));
   * ```
   */
  pipe(...operations: OperatorFunction<any, any>[]): Observable<any> {
    return pipeFromArray(operations)(this);
  }

  /* tslint:disable:max-line-length */
  /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */
  toPromise(): Promise<T | undefined>;
  /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */
  toPromise(PromiseCtor: typeof Promise): Promise<T | undefined>;
  /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */
  toPromise(PromiseCtor: PromiseConstructorLike): Promise<T | undefined>;
  /* tslint:enable:max-line-length */

  /**
   * Subscribe to this Observable and get a Promise resolving on
   * `complete` with the last emission (if any).
   *
   * **WARNING**: Only use this with observables you *know* will complete. If the source
   * observable does not complete, you will end up with a promise that is hung up, and
   * potentially all of the state of an async function hanging out in memory. To avoid
   * this situation, look into adding something like {@link timeout}, {@link take},
   * {@link takeWhile}, or {@link takeUntil} amongst others.
   *
   * @method toPromise
   * @param [promiseCtor] a constructor function used to instantiate
   * the Promise
   * @return A Promise that resolves with the last value emitted, or
   * rejects on an error. If there were no emissions, the Promise
   * resolves with undefined.
   * @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise
   */
  toPromise(promiseCtor?: PromiseConstructorLike): Promise<T | undefined> {
    promiseCtor = getPromiseCtor(promiseCtor);

    return new promiseCtor((resolve, reject) => {
      let value: T | undefined;
      this.subscribe(
        (x: T) => (value = x),
        (err: any) => reject(err),
        () => resolve(value)
      );
    }) as Promise<T | undefined>;
  }
}

/**
 * Decides between a passed promise constructor from consuming code,
 * a default configured promise constructor, and the native promise
 * constructor and returns it. If nothing can be found, it will throw
 * an error.
 * @param promiseCtor The optional promise constructor passed by consuming code
 */
function getPromiseCtor(promiseCtor: PromiseConstructorLike | undefined) {
  return promiseCtor ?? config.Promise ?? Promise;
}

function isObserver<T>(value: any): value is Observer<T> {
  return value && isFunction(value.next) && isFunction(value.error) && isFunction(value.complete);
}

function isSubscriber