From 2b7d2398a3849b14d368c87738b12dd6cd1d4f4f Mon Sep 17 00:00:00 2001 From: Johannes Nussbaum <39048939+jnussbaum@users.noreply.github.com> Date: Tue, 19 Mar 2024 18:37:19 +0100 Subject: [PATCH] docs: Lint with markdownlint (#3128) Co-authored-by: Balduin Landolt <33053745+BalduinLandolt@users.noreply.github.com> --- .markdownlint.yml | 43 +++ docs/01-introduction/example-project.md | 3 +- docs/01-introduction/standoff-rdf.md | 3 +- docs/02-dsp-ontologies/knora-base.md | 25 +- docs/03-endpoints/api-admin/groups.md | 4 +- docs/03-endpoints/api-admin/index.md | 10 +- docs/03-endpoints/api-admin/introduction.md | 4 +- docs/03-endpoints/api-admin/lists.md | 140 +++++---- docs/03-endpoints/api-admin/overview.md | 10 +- docs/03-endpoints/api-admin/permissions.md | 37 ++- docs/03-endpoints/api-admin/projects.md | 54 ++-- docs/03-endpoints/api-admin/stores.md | 2 +- docs/03-endpoints/api-admin/users.md | 22 +- docs/03-endpoints/api-v2/authentication.md | 7 +- docs/03-endpoints/api-v2/editing-resources.md | 4 +- docs/03-endpoints/api-v2/editing-values.md | 12 +- docs/03-endpoints/api-v2/getting-lists.md | 12 +- docs/03-endpoints/api-v2/knora-iris.md | 85 +++-- .../api-v2/ontology-information.md | 63 ++-- docs/03-endpoints/api-v2/query-language.md | 6 +- .../api-v2/reading-and-searching-resources.md | 43 +-- .../api-v2/text/custom-standoff.md | 117 +++---- docs/03-endpoints/api-v2/text/overview.md | 5 +- docs/03-endpoints/api-v2/text/tei-xml.md | 54 +++- .../instrumentation/introduction.md | 5 +- .../04-publishing-deployment/configuration.md | 78 ++--- docs/04-publishing-deployment/publishing.md | 13 +- ...rvice-manager-from-akka-actor-to-zlayer.md | 5 +- ...nager-and-sipi-implementation-to-zlayer.md | 3 +- ...ger-and-fuseki-implementation-to-zlayer.md | 8 +- ...respondermanager-to-a-simple-case-class.md | 9 +- .../design/adr/ADR-0006-use-zio-http.md | 12 +- .../ADR-0007-zio-fication-of-responders.md | 27 +- .../adr/ADR-0008-replace-akka-with-pekko.md | 37 ++- .../design/api-admin/administration.md | 296 ++++++++---------- docs/05-internals/design/api-v2/gravsearch.md | 51 ++- docs/05-internals/design/api-v2/json-ld.md | 4 +- docs/05-internals/design/api-v2/overview.md | 62 ++-- .../design/api-v2/query-design.md | 20 +- docs/05-internals/design/api-v2/sipi.md | 1 + docs/05-internals/design/api-v2/smart-iris.md | 1 + .../domain/class-and-property-hierarchies.md | 4 +- .../domain/domain-entities-and-relations.md | 1 - .../design/principles/consistency-checking.md | 137 ++++---- .../development/building-and-running.md | 80 ++--- .../development/docker-cheat-sheet.md | 84 +++-- .../development/docker-compose.md | 4 +- .../generating-client-test-data.md | 4 +- docs/05-internals/development/overview.md | 22 +- docs/05-internals/development/testing.md | 3 +- .../05-internals/development/vscode-config.md | 4 +- docs/06-sipi/setup-sipi-for-dsp-api.md | 16 +- docs/Readme.md | 10 +- docs/architecture/README.md | 4 +- .../docs/http-request-flow-with-events.md | 1 + 55 files changed, 947 insertions(+), 824 deletions(-) create mode 100644 .markdownlint.yml diff --git a/.markdownlint.yml b/.markdownlint.yml new file mode 100644 index 0000000000..03ee33f47c --- /dev/null +++ b/.markdownlint.yml @@ -0,0 +1,43 @@ +--- + +# Config file for https://github.com/igorshubovych/markdownlint-cli + +# MD007/ul-indent - Unordered list indentation +MD007: + # Whether to indent the first level of the list + start_indented: false + # By how many spaces every next level must be indented. 
The default of 2 is not compatible with mkdocs! + indent: 4 + +# MD009/no-trailing-spaces - Trailing spaces +MD009: false + +# MD012/no-multiple-blanks - Multiple consecutive blank lines +MD012: false + +# MD013/line-length - Line length +MD013: + line_length: 120 + heading_line_length: 120 + code_block_line_length: 120 + # Include code blocks + code_blocks: true + # Include tables + tables: false + # Include headings + headings: true + headers: true + # Strict length checking + strict: false + # Stern length checking + stern: false + +# MD033/no-inline-html - Inline HTML +MD033: + allowed_elements: [br, center] + +# MD041/first-line-heading/first-line-h1 - First line in a file should be a top-level heading +MD041: false + +# MD045/no-alt-text - Images should have alternate text (alt text) +MD045: false diff --git a/docs/01-introduction/example-project.md b/docs/01-introduction/example-project.md index d39c95d216..cc5d5d77cd 100644 --- a/docs/01-introduction/example-project.md +++ b/docs/01-introduction/example-project.md @@ -117,7 +117,8 @@ have the same predicate; a comma (`,`) is used to avoid repeating the predicate. The definition of `:title` says: * `rdf:type owl:ObjectProperty`: It is an `owl:ObjectProperty`. There are - two kinds of OWL properties: object properties and datatype properties. Object properties point to objects, which have IRIs and + two kinds of OWL properties: object properties and datatype properties. + Object properties point to objects, which have IRIs and can have their own properties. Datatype properties point to literal values, such as strings and integers. * `rdfs:subPropertyOf knora-base:hasValue, dcterms:title`: It is a diff --git a/docs/01-introduction/standoff-rdf.md b/docs/01-introduction/standoff-rdf.md index 0fca7320a0..2bdf470dc9 100644 --- a/docs/01-introduction/standoff-rdf.md +++ b/docs/01-introduction/standoff-rdf.md @@ -55,5 +55,6 @@ original XML. To represent overlapping or non-hierarchical markup in exported and imported XML, DSP-API supports [CLIX](https://web.archive.org/web/20171222112655/http://conferences.idealliance.org/extreme/html/2004/DeRose01/EML2004DeRose01.html) tags. -As XML-to-Standoff has proved to be complicated and not very well performing, the use of standoff with custom mappings is discouraged. +As XML-to-Standoff has proved to be complicated and not very well performing, +the use of standoff with custom mappings is discouraged. Improved integration of text with XML mark up, particularly TEI-XML, is in planning. diff --git a/docs/02-dsp-ontologies/knora-base.md b/docs/02-dsp-ontologies/knora-base.md index 3eeb2a4bd6..56c0dd96f3 100644 --- a/docs/02-dsp-ontologies/knora-base.md +++ b/docs/02-dsp-ontologies/knora-base.md @@ -616,18 +616,19 @@ because it does not allow to perform searches across multiple documents. The recommended way to store text with markup in DSP-API is to use the built-in support for "standoff" markup, which is stored separately from the text. This has some advantages over embedded markup such as XML. While XML requires markup to have a hierarchical structure, and does not allow overlapping tags, standoff nodes do not have these limitations -( -see [Using Standoff Properties for Marking-up Historical Documents in the Humanities](https://doi.org/10.1515/itit-2015-0030)). +(see +[Using Standoff Properties for Marking-up Historical Documents in the Humanities](https://doi.org/10.1515/itit-2015-0030)). A standoff tag can be attached to any substring in the text by giving its start and end positions. 
Unlike in corpus linguistics, we do not use any tokenisation resulting in a form of predefined segmentation, which would limit the user's ability to freely annotate any ranges in the text. For example, suppose we have the following text: +```xml
This <i>sentence <b>has overlapping</i> visual</b> attributes.
+``` -This would require just two standoff tags: `(italic, start=5, end=29)` -and `(bold, start=14, end=36)`. +This would require just two standoff tags: `(italic, start=5, end=29)` and `(bold, start=14, end=36)`. Moreover, standoff makes it possible to mark up the same text in different, possibly incompatible ways, allowing for different interpretations without making redundant copies of the text. In the Knora base ontology, any text value can @@ -1011,17 +1012,17 @@ only if the value's class has some cardinality for that property. Knora supports, and attempts to enforce, the following cardinality constraints: -* `owl:cardinality 1` - : _Exactly One `1`_ - A resource of this class must have exactly one instance of the specified property. +- `owl:cardinality 1`: + _Exactly One `1`_ - A resource of this class must have exactly one instance of the specified property. -* `owl:minCardinality 1` - : _At Least One `1-n`_ - A resource of this class must have at least one instance of the specified property. +- `owl:minCardinality 1`: + _At Least One `1-n`_ - A resource of this class must have at least one instance of the specified property. -* `owl:maxCardinality 1` - : _Zero Or One `0-1`_ - A resource of this class must have either zero or one instance of the specified property. +- `owl:maxCardinality 1`: + _Zero Or One `0-1`_ - A resource of this class must have either zero or one instance of the specified property. -* `owl:minCardinality 0` - : _Unbounded `0-n`_ - A resource of this class may have zero or more instances of the specified property. +- `owl:minCardinality 0`: + _Unbounded `0-n`_ - A resource of this class may have zero or more instances of the specified property. Knora requires cardinalities to be defined using blank nodes, as in the following example from `knora-base`: diff --git a/docs/03-endpoints/api-admin/groups.md b/docs/03-endpoints/api-admin/groups.md index 4a9272d883..25126325db 100644 --- a/docs/03-endpoints/api-admin/groups.md +++ b/docs/03-endpoints/api-admin/groups.md @@ -94,7 +94,7 @@ specified by the `id` in the request body as below: } ``` -### Change Group Status: +### Change Group Status - Required permission: SystemAdmin / hasProjectAllAdminPermission - Changeable information: `status` @@ -108,7 +108,7 @@ specified by the `id` in the request body as below: } ``` -### Delete Group: +### Delete Group - Required permission: SystemAdmin / hasProjectAllAdminPermission - Remark: The same as changing the groups `status` to diff --git a/docs/03-endpoints/api-admin/index.md b/docs/03-endpoints/api-admin/index.md index f93493deff..245004bbb9 100644 --- a/docs/03-endpoints/api-admin/index.md +++ b/docs/03-endpoints/api-admin/index.md @@ -1,6 +1,6 @@ +We provide an [OpenAPI](https://spec.openapis.org/oas/latest.html) specification. +The latest version is located at [api.dasch.swiss/api/docs/docs.yaml](https://api.dasch.swiss/api/docs/docs.yaml). +For an interactive documentation of all API endpoints, +please visit [api.dasch.swiss/api/docs/](https://api.dasch.swiss/api/docs/). -We provide an [OpenAPI](https://spec.openapis.org/oas/latest.html) specification. The latest version is located at [api.dasch.swiss/api/docs/docs.yaml](https://api.dasch.swiss/api/docs/docs.yaml). -For an interactive documentation of all API endpoints, please visit [api.dasch.swiss/api/docs/](https://api.dasch.swiss/api/docs/). 
- - -[OAD(./docs/03-endpoints/generated-openapi/openapi-admin-api.yml)] +[OAD](../generated-openapi/openapi-admin-api.yml) diff --git a/docs/03-endpoints/api-admin/introduction.md b/docs/03-endpoints/api-admin/introduction.md index 4ad9f9b12b..3e369fa494 100644 --- a/docs/03-endpoints/api-admin/introduction.md +++ b/docs/03-endpoints/api-admin/introduction.md @@ -21,7 +21,9 @@ values (see ## Knora IRIs in the Admin API Every resource that is created or hosted by Knora is identified by a -unique ID called an Internationalized Resource Identifier ([IRI](https://tools.ietf.org/html/rfc3987)). The IRI is required for every API operation to identify the resource in question. A Knora IRI has itself the format of a URL. +unique ID called an Internationalized Resource Identifier ([IRI](https://tools.ietf.org/html/rfc3987)). +The IRI is required for every API operation to identify the resource in question. +A Knora IRI has itself the format of a URL. For some API operations, the IRI has to be URL-encoded (HTTP GET requests). Unlike the DSP-API v2, the admin API uses internal IRIs, i.e. the actual IRIs diff --git a/docs/03-endpoints/api-admin/lists.md b/docs/03-endpoints/api-admin/lists.md index 26fd56e2d8..f29579bb39 100644 --- a/docs/03-endpoints/api-admin/lists.md +++ b/docs/03-endpoints/api-admin/lists.md @@ -10,8 +10,8 @@ **List Item Operations:** - `GET: /admin/lists[?projectIri=]` : return all lists optionally filtered by project -- `GET: /admin/lists/` : return complete list with all children if IRI of the list (i.e. root node) is given -If IRI of the child node is given, return the node with its immediate children +- `GET: /admin/lists/` : return complete list with all children if IRI of the list (i.e. root node) is given. + If IRI of the child node is given, return the node with its immediate children - `GET: /admin/lists/infos/` : return list information (without children) - `GET: /admin/lists/nodes/` : return list node information (without children) - `GET: /admin/lists//info` : return list basic information (without children) @@ -25,8 +25,8 @@ If IRI of the child node is given, return the node with its immediate children - `PUT: /admin/lists//name` : update the name of the node (root or child) - `PUT: /admin/lists//labels` : update labels of the node (root or child) - `PUT: /admin/lists//comments` : update comments of the node (root or child) -- `PUT: /admin/lists//position` : update position of a child node within its current parent or by changing its -parent node +- `PUT: /admin/lists//position` : update position of a child node within its current parent + or by changing its parent node - `DELETE: /admin/lists/` : delete a list (i.e. 
root node) or a child node and all its children, if not used - `DELETE: /admin/lists/comments/` : delete comments of a node (child only) @@ -35,16 +35,16 @@ parent node ### Get lists - - Required permission: none - - Return all lists optionally filtered by project - - GET: `/admin/lists[?projectIri=]` +- Required permission: none +- Return all lists optionally filtered by project +- GET: `/admin/lists[?projectIri=]` ### Get list - - Required permission: none - - Return complete `list` (or `node`) including basic information of the list (or child node), `listinfo` (or `nodeinfo`), -and all its children - - GET: `/admin/lists/` +- Required permission: none +- Return complete `list` (or `node`) including basic information of the list (or child node), + `listinfo` (or `nodeinfo`), and all its children +- GET: `/admin/lists/` ### Get list's information @@ -81,11 +81,11 @@ List (root node or child node with all its children) can be deleted only if it ( ### Create new list - - Required permission: SystemAdmin / ProjectAdmin - - Required fields: `projectIri`, `labels`, `comments` - - POST: `/admin/lists` - - BODY: - +- Required permission: SystemAdmin / ProjectAdmin +- Required fields: `projectIri`, `labels`, `comments` +- POST: `/admin/lists` +- BODY: + ```json { "projectIri": "someprojectiri", @@ -94,7 +94,8 @@ List (root node or child node with all its children) can be deleted only if it ( } ``` -Additionally, each list can have an optional custom IRI (of [Knora IRI](../api-v2/knora-iris.md#iris-for-data) form) specified by the `id` in the request body as below: +Additionally, each list can have an optional custom IRI (of [Knora IRI](../api-v2/knora-iris.md#iris-for-data) form) +specified by the `id` in the request body as below: ```json { @@ -107,6 +108,7 @@ Additionally, each list can have an optional custom IRI (of [Knora IRI](../api-v ``` The response will contain the basic information of the list, `listinfo` and an empty list of its children, as below: + ```json { "list": { @@ -148,7 +150,8 @@ The response will contain the basic information of the list, `listinfo` and an e } ``` -Additionally, each child node can have an optional custom IRI (of [Knora IRI](../api-v2/knora-iris.md#iris-for-data) form) specified by the `id` in the request body as below: +Additionally, each child node can have an optional custom IRI (of [Knora IRI](../api-v2/knora-iris.md#iris-for-data) +form) specified by the `id` in the request body as below: ```json { "id": "http://rdfh.ch/lists/0001/8u37MxBVMbX3XQ8-d31x6w", @@ -160,6 +163,7 @@ Additionally, each child node can have an optional custom IRI (of [Knora IRI](.. ``` The response will contain the basic information of the node, `nodeinfo`, as below: + ```json { "nodeinfo": { @@ -177,9 +181,10 @@ The response will contain the basic information of the node, `nodeinfo`, as belo } } ``` -The new node can be created and inserted in a specific position which must be given in the payload as shown below. If necessary, -according to the given position, the sibling nodes will be shifted. Note that `position` cannot have a value higher than the -number of existing children. + +The new node can be created and inserted in a specific position which must be given in the payload as shown below. +If necessary, according to the given position, the sibling nodes will be shifted. +Note that `position` cannot have a value higher than the number of existing children. 
```json { "parentNodeIri": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A", @@ -192,28 +197,31 @@ number of existing children. In case the new node should be appended to the list of current children, either `position: -1` must be given in the payload or the `position` parameter must be left out of the payload. - + ### Update list's or node's information -The basic information of a list (or node) such as its labels, comments, name, or all of them can be updated. The parameters that -must be updated together with the new value must be given in the JSON body of the request together with the IRI of the -list and the IRI of the project it belongs to. - - - Required permission: SystemAdmin / ProjectAdmin - - Required fields: `listIri`, `projectIri` - - Update list information - - PUT: `/admin/lists/` - - BODY: - + +The basic information of a list (or node) such as its labels, comments, name, or all of them can be updated. +The parameters that must be updated together with the new value must be given in the JSON body of the request +together with the IRI of the list and the IRI of the project it belongs to. + +- Required permission: SystemAdmin / ProjectAdmin +- Required fields: `listIri`, `projectIri` +- Update list information +- PUT: `/admin/lists/` +- BODY: + ```json - { "listIri": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A", - "projectIri": "http://rdfh.ch/projects/0001", - "name": "new name for the list", - "labels": [{ "value": "a new label for the list", "language": "en"}], - "comments": [{ "value": "a new comment for the list", "language": "en"}] - } +{ + "listIri": "http://rdfh.ch/lists/0001/yWQEGXl53Z4C4DYJ-S2c5A", + "projectIri": "http://rdfh.ch/projects/0001", + "name": "new name for the list", + "labels": [{ "value": "a new label for the list", "language": "en"}], + "comments": [{ "value": "a new comment for the list", "language": "en"}] +} ``` The response will contain the basic information of the list, `listinfo` (or `nodeinfo`), without its children, as below: + ```json { "listinfo": { @@ -236,59 +244,63 @@ The response will contain the basic information of the list, `listinfo` (or `nod } } ``` + If only name of the list must be updated, it can be given as below in the body of the request: ```json - { - "listIri": "listIri", - "projectIri": "someprojectiri", - "name": "another name" - } +{ + "listIri": "listIri", + "projectIri": "someprojectiri", + "name": "another name" +} ``` -Alternatively, basic information `name`, `labels`, or `comments` of the root node (i.e. list) can be updated individually -as explained below. +Alternatively, basic information `name`, `labels`, or `comments` of the root node (i.e. list) +can be updated individually as explained below. ### Update list or node's name - - Required permission: SystemAdmin / ProjectAdmin - - Update name of the list (i.e. root node) or a child node whose IRI is specified by ``. - - PUT: `/admin/lists//name` - - BODY: - The new name of the node must be given in the body of the request as shown below: - ```json +- Required permission: SystemAdmin / ProjectAdmin +- Update name of the list (i.e. root node) or a child node whose IRI is specified by ``. +- PUT: `/admin/lists//name` +- BODY: The new name of the node must be given in the body of the request as shown below: + +```json { "name": "a new name" } ``` + There is no need to specify the project IRI because it is automatically extracted using the given ``. 
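For illustration, a complete request to the name-update route above might look like the following sketch; the host, credentials, and node IRI are placeholders (the IRI must be URL-encoded), and the body is the same as shown above:

```bash
# Sketch only: update the name of a list node; host, credentials, and node IRI are placeholders
curl --request PUT 'http://0.0.0.0:5555/admin/lists/http%3A%2F%2Frdfh.ch%2Flists%2F0001%2FyWQEGXl53Z4C4DYJ-S2c5A/name' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic cm9vdEBleGFtcGxlLmNvbTp0ZXN0' \
  --data '{ "name": "a new name" }'
```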
### Update list or node's labels - - Required permission: SystemAdmin / ProjectAdmin - - Update labels of the list (i.e. root node) or a child node whose IRI is specified by ``. - - PUT: `/admin/lists//labels` - - BODY: - The new set of labels of the node must be given in the body of the request as shown below: - ```json +- Required permission: SystemAdmin / ProjectAdmin +- Update labels of the list (i.e. root node) or a child node whose IRI is specified by ``. +- PUT: `/admin/lists//labels` +- BODY: The new set of labels of the node must be given in the body of the request as shown below: + +```json { "labels": [{"language": "se", "value": "nya märkningen"}] } ``` + There is no need to specify the project IRI because it is automatically extracted using the given ``. ### Update list or node's comments - - Required permission: SystemAdmin / ProjectAdmin - - Update comments of the list (i.e. root node) or a child node whose IRI is specified by ``. - - PUT: `/admin/lists//labels` - - BODY: - The new set of comments of the node must be given in the body of the request as shown below: - ```json +- Required permission: SystemAdmin / ProjectAdmin +- Update comments of the list (i.e. root node) or a child node whose IRI is specified by ``. +- PUT: `/admin/lists//labels` +- BODY: The new set of comments of the node must be given in the body of the request as shown below: + +```json { "comments": [{"language": "se", "value": "nya kommentarer"}] } ``` + There is no need to specify the project IRI because it is automatically extracted using the given ``. ### Repositioning a child node @@ -302,6 +314,7 @@ If a node is supposed to be repositioned to the end of a parent node's children, Suppose a parent node `parentNode1` has five children in positions 0-4, to change the position of its child node `childNode4` from its original position 3 to position 1 the request body should specify the IRI of its parent node and the new position as below: + ```json { "parentNodeIri": "", @@ -342,6 +355,7 @@ Values less than -1 are not permitted for parameter `position`. - Put `/admin/lists//position` ### Delete a list or a node + An entire list or a single node of it can be completely deleted, if not in use. Before deleting an entire list (i.e. root node), the data and ontologies are checked for any usage of the list or its children. If not in use, the list and all its children are deleted. 
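As a sketch, a delete request for the `DELETE: /admin/lists/<listItemIri>` route listed above could look as follows; the host, credentials, and node IRI are placeholders, and the IRI must be URL-encoded:

```bash
# Sketch only: delete a list or child node that is not in use; host, credentials, and IRI are placeholders
curl --request DELETE 'http://0.0.0.0:5555/admin/lists/http%3A%2F%2Frdfh.ch%2Flists%2F0001%2FyWQEGXl53Z4C4DYJ-S2c5A' \
  --header 'Authorization: Basic cm9vdEBleGFtcGxlLmNvbTp0ZXN0'
```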
diff --git a/docs/03-endpoints/api-admin/overview.md b/docs/03-endpoints/api-admin/overview.md index 8932b7dc72..a8178f6a92 100644 --- a/docs/03-endpoints/api-admin/overview.md +++ b/docs/03-endpoints/api-admin/overview.md @@ -9,11 +9,11 @@ For the management of *users*, *projects*, *groups*, *lists*, and *permissions*, centric approach, provides the following endpoints corresponding to the respective classes of objects that they have an effect on, namely: - - [Users endpoint](lists.md): `http://server:port/admin/users` - `knora-base:User` - - [Projects endpoint](projects.md): `http://server:port/admin/projects` - `knora-base:knoraProject` - - [Groups endpoint](groups.md): `http://server:port/admin/groups` - `knora-base:UserGroup` - - [Lists endpoint](lists.md): `http://server:port/admin/lists` - `knora-base:ListNode` - - [Permissions endpoint](permissions.md): `http://server:port/admin/permissions` - `knora-admin:Permission` +- [Users endpoint](lists.md): `http://server:port/admin/users` - `knora-base:User` +- [Projects endpoint](projects.md): `http://server:port/admin/projects` - `knora-base:knoraProject` +- [Groups endpoint](groups.md): `http://server:port/admin/groups` - `knora-base:UserGroup` +- [Lists endpoint](lists.md): `http://server:port/admin/lists` - `knora-base:ListNode` +- [Permissions endpoint](permissions.md): `http://server:port/admin/permissions` - `knora-admin:Permission` All information regarding users, projects, groups, lists and permissions is stored in the `http://www.knora.org/admin` named graph. diff --git a/docs/03-endpoints/api-admin/permissions.md b/docs/03-endpoints/api-admin/permissions.md index f5146558a3..552eacd005 100644 --- a/docs/03-endpoints/api-admin/permissions.md +++ b/docs/03-endpoints/api-admin/permissions.md @@ -68,7 +68,7 @@ the `@id` attribute which will then be assigned to the permission; otherwise the A custom permission IRI must be `http://rdfh.ch/permissions/PROJECT_SHORTCODE/` (where `PROJECT_SHORTCODE` is the shortcode of the project that the permission belongs to), plus a custom ID string. For example: -``` +```json "id": "http://rdfh.ch/permissions/0001/jKIYuaEUETBcyxpenUwRzQ", ``` @@ -95,8 +95,8 @@ As a response, the created administrative permission and its IRI are returned as permission types](../../05-internals/design/api-admin/administration.md#administrative-permissions). In summary, each permission should contain followings: - - `additionalInformation`: should be left empty, otherwise will be ignored. - - `name` : indicates the type of the permission that can be one of the followings: +- `additionalInformation`: should be left empty, otherwise will be ignored. +- `name` : indicates the type of the permission that can be one of the followings: - `ProjectAdminAllPermission`: gives the user the permission to do anything on project level, i.e. create new groups, modify all existing groups @@ -114,13 +114,16 @@ In summary, each permission should contain followings: inside the project. - `ProjectResourceCreateRestrictedPermission`: gives restricted resource creation permission inside the project. - - `permissionCode`: should be left empty, otherwise will be ignored. +- `permissionCode`: should be left empty, otherwise will be ignored. -Note that during the creation of a new project, a default set of administrative permissions are added to its ProjectAdmin and -ProjectMember groups (See [Default set of permissions for a new project](./projects.md#default-set-of-permissions-for-a-new-project)). 
-Therefore, it is not possible to create new administrative permissions for the ProjectAdmin and ProjectMember groups of -a project. However, the default permissions set for these groups can be modified (See [update permission](./permissions.md#updating-a-permissions-scope)). +Note that during the creation of a new project, +a default set of administrative permissions are added to its ProjectAdmin and ProjectMember groups +(See [Default set of permissions for a new project](./projects.md#default-set-of-permissions-for-a-new-project)). +Therefore, it is not possible to create new administrative permissions +for the ProjectAdmin and ProjectMember groups of a project. +However, the default permissions set for these groups can be modified +(See [update permission](./permissions.md#updating-a-permissions-scope)). ### Creating New Default Object Access Permissions @@ -153,14 +156,14 @@ default object access permission for a group of a project the request body would permission types](../../05-internals/design/api-admin/administration.md#default-object-access-permissions). In summary, each permission should contain followings: - - `additionalInformation`: To whom the permission should be granted: project members, known users, unknown users, etc. - - `name` : indicates the type of the permission that can be one of the followings. +- `additionalInformation`: To whom the permission should be granted: project members, known users, unknown users, etc. +- `name` : indicates the type of the permission that can be one of the followings. - `RV`: restricted view permission (least privileged) - `V`: view permission - `M` modify permission - `D`: delete permission - `CR`: change rights permission (most privileged) - - `permissionCode`: The code assigned to a permission indicating its hierarchical level. These codes are as below: +- `permissionCode`: The code assigned to a permission indicating its hierarchical level. These codes are as below: - `1`: for restricted view permission (least privileged) - `2`: for view permission - `6`: for modify permission @@ -212,10 +215,12 @@ The response contains the newly created permission and its IRI, as: } ``` -Note that during the creation of a new project, a set of default object access permissions are created for its -ProjectAdmin and ProjectMember groups (See [Default set of permissions for a new project](./projects.md#default-set-of-permissions-for-a-new-project)). -Therefore, it is not possible to create new default object access permissions for the ProjectAdmin and ProjectMember -groups of a project. However, the default permissions set for these groups can be modified; see below for more information. +Note that during the creation of a new project, +a set of default object access permissions are created for its ProjectAdmin and ProjectMember groups +(See [Default set of permissions for a new project](./projects.md#default-set-of-permissions-for-a-new-project)). +Therefore, it is not possible to create new default object access permissions +for the ProjectAdmin and ProjectMember groups of a project. +However, the default permissions set for these groups can be modified; see below for more information. ### Updating a Permission's Group @@ -228,6 +233,7 @@ group as below: "forGroup": "http://www.knora.org/ontology/knora-admin#ProjectMember" } ``` + When updating an administrative permission, its previous `forGroup` value will be replaced with the new one. 
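A full request carrying this body might look like the following sketch, assuming the group-update route of the permissions endpoint, `PUT /admin/permissions/<permissionIri>/group`; the host, credentials, and permission IRI below are placeholders, and the IRI must be URL-encoded:

```bash
# Sketch only: move a permission to another group; route, host, credentials, and permission IRI are illustrative
curl --request PUT 'http://0.0.0.0:5555/admin/permissions/http%3A%2F%2Frdfh.ch%2Fpermissions%2F0001%2FjKIYuaEUETBcyxpenUwRzQ/group' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Basic cm9vdEBleGFtcGxlLmNvbTp0ZXN0' \
  --data '{ "forGroup": "http://www.knora.org/ontology/knora-admin#ProjectMember" }'
```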
When updating a default object access permission, if it originally had a `forGroup` value defined, it will be replaced with the new group. Otherwise, if the default object access permission was defined for a resource class or a property or @@ -289,6 +295,7 @@ updating a default object access permission. The IRI of the new property must be "forProperty":"http://www.knora.org/ontology/00FF/images#titel" } ``` + Note that if the default object access permission was originally defined for a group, with this operation, the permission will be defined for the given property instead of the group. That means the value of the `forGroup` will be deleted. diff --git a/docs/03-endpoints/api-admin/projects.md b/docs/03-endpoints/api-admin/projects.md index 48e206f89b..466db4bdda 100644 --- a/docs/03-endpoints/api-admin/projects.md +++ b/docs/03-endpoints/api-admin/projects.md @@ -163,28 +163,32 @@ Errors: - `401 Unauthorized` if authorization failed. ### Default set of RestrictedViewSize + Starting from DSP 2023.10.02 release, the creation of new project will also set the `RestrictedViewSize` to default value, which is: `!512,512`. It is possible to change the value using [dedicated routes](#set-restricted-view-settings). -#### Default set of permissions for a new project: +#### Default set of permissions for a new project + When a new project is created, following default permissions are added to its admins and members: -- ProjectAdmin group receives an administrative permission to do all project level operations and to create resources -within the new project. This administrative permission is retrievable through its IRI: -`http://rdfh.ch/permissions/[projectShortcode]/defaultApForAdmin` +- ProjectAdmin group receives an administrative permission to do all project level operations + and to create resources within the new project. + This administrative permission is retrievable through its IRI: + `http://rdfh.ch/permissions/[projectShortcode]/defaultApForAdmin` -- ProjectAdmin group also gets a default object access permission to change rights (which includes delete, modify, view, -and restricted view permissions) of any entity that belongs to the project. This default object access permission is retrievable -through its IRI: -`http://rdfh.ch/permissions/[projectShortcode]/defaultDoapForAdmin` +- ProjectAdmin group also gets a default object access permission to change rights + (which includes delete, modify, view, and restricted view permissions) of any entity that belongs to the project. + This default object access permission is retrievable through its IRI: + `http://rdfh.ch/permissions/[projectShortcode]/defaultDoapForAdmin` -- ProjectMember group receives an administrative permission to create resources within the new project. This -administrative permission is retrievable through its IRI: -`http://rdfh.ch/permissions/[projectShortcode]/defaultApForMember` +- ProjectMember group receives an administrative permission to create resources within the new project. + This administrative permission is retrievable through its IRI: + `http://rdfh.ch/permissions/[projectShortcode]/defaultApForMember` -- ProjectMember group also gets a default object access permission to modify (which includes view and restricted view -permissions) of any entity that belongs to the project. 
This default object access permission is retrievable through its IRI: -`http://rdfh.ch/permissions/[projectShortcode]/defaultDoapForMember` +- ProjectMember group also gets a default object access permission to modify + (which includes view and restricted view permissions) of any entity that belongs to the project. + This default object access permission is retrievable through its IRI: + `http://rdfh.ch/permissions/[projectShortcode]/defaultDoapForMember` ### Get Project by ID @@ -322,8 +326,8 @@ Example response: Errors: - `400 Bad Request` - - if the provided IRI is not valid. - - if the provided payload is not valid. + - if the provided IRI is not valid. + - if the provided payload is not valid. - `404 Not Found` if no project with the provided IRI is found. @@ -562,6 +566,7 @@ NB: Permissions: SystemAdmin / ProjectAdmin Request definition: + - `GET /admin/projects/shortcode/{shortcode}/admin-members` - `GET /admin/projects/shortname/{shortname}/admin-members` - `GET /admin/projects/iri/{iri}/admin-members` @@ -756,6 +761,7 @@ Example response: Permissions: Request definition: + - `GET /admin/projects/iri/{iri}/Keywords` Description: returns the keywords of a single project @@ -824,10 +830,12 @@ Example response: Set how all still image resources of a projects should be displayed when viewed as restricted. This can be either a size restriction or a watermark. -For that, we support two of the (IIIF size)[https://iiif.io/api/image/3.0/#42-size] forms: +For that, we support two of the [IIIF size](https://iiif.io/api/image/3.0/#42-size) forms: - * `!d,d` The returned image is scaled so that the width and height of the returned image are not greater than d, while maintaining the aspect ratio. - * `pct:n` The width and height of the returned image is scaled to n percent of the width and height of the original image. 1<= n <= 100. +- `!d,d` The returned image is scaled so that the width and height of the returned image are not greater than d, + while maintaining the aspect ratio. +- `pct:n` The width and height of the returned image is scaled to n percent + of the width and height of the original image. 1<= n <= 100. If the watermark is set to `true`, the returned image will be watermarked, otherwise the default size `!128,128` is set. @@ -845,11 +853,13 @@ Description: Set the project's restricted view The endpoint accepts either a size or a watermark but not both. Size: + ```json { "size": "!512,512" } ``` Watermark: + ```json { "watermark": true } ``` @@ -857,23 +867,29 @@ Watermark: Examples : Request: + ```bash curl --request POST 'http://0.0.0.0:5555/admin/projects/iri/http%3A%2F%2Frdfh.ch%2Fprojects%2F0001/RestrictedViewSettings' \ --header 'Authorization: Basic cm9vdEBleGFtcGxlLmNvbTp0ZXN0' \ --data '{"size": "!512,512"} ``` + Response: + ```json { "size": "!512,512" } ``` Request: + ```bash curl --request POST 'http://0.0.0.0:5555/admin/projects/shortcode/0001/RestrictedViewSettings' \ --header 'Authorization: Basic cm9vdEBleGFtcGxlLmNvbTp0ZXN0' \ --data '{"watermark": true}' ``` + Response: + ```json { "watermark": true } ``` diff --git a/docs/03-endpoints/api-admin/stores.md b/docs/03-endpoints/api-admin/stores.md index a180eeed21..9770407a1e 100644 --- a/docs/03-endpoints/api-admin/stores.md +++ b/docs/03-endpoints/api-admin/stores.md @@ -7,5 +7,5 @@ This endpoint allows manipulation of the triplestore content. 
-` POST admin/store/ResetTriplestoreContent` resets the triplestore content, given that the `allowReloadOverHttp` +`POST admin/store/ResetTriplestoreContent` resets the triplestore content, given that the `allowReloadOverHttp` configuration flag is set to `true`. This route is mostly used in tests. diff --git a/docs/03-endpoints/api-admin/users.md b/docs/03-endpoints/api-admin/users.md index d109dc5e92..3ca59ae3dd 100644 --- a/docs/03-endpoints/api-admin/users.md +++ b/docs/03-endpoints/api-admin/users.md @@ -7,7 +7,7 @@ ## Endpoint Overview -**User Operations:** +### General User Operations - `GET: /admin/users` : return all users - `GET: /admin/users/[iri | email | username]/` : return single user identified by [IRI | email | username] @@ -17,14 +17,14 @@ - `PUT: /admin/users/iri//Status` : update user's status - `DELETE: /admin/users/iri/` : delete user (set status to false) -**User's project membership operations** +### Project membership operations - `GET: /admin/users/iri//project-memberships` : get user's project memberships - `POST: /admin/users/iri//project-memberships/` : add user to project (to ProjectMember group) - `DELETE: /admin/users/iri//project-memberships/` : remove user from project (to ProjectMember group) -**User's group membership operations** +### Group membership operations - `GET: /admin/users/iri//project-admin-memberships` : get user's ProjectAdmin group memberships - `POST: /admin/users/iri//project-admin-memberships/` : add user to ProjectAdmin group @@ -134,10 +134,10 @@ specified by the `id` in the request body as below: - PUT: `/admin/users/iri//Status` - BODY: -``` - { - "status": false // true or false - } +```json +{ + "status": false // true or false +} ``` ### Delete user (-\update user)** @@ -202,10 +202,10 @@ Note: In order to add a user to a project admin group, the user needs to be memb - PUT: `/admin/users/iri//SystemAdmin` - BODY: -``` - { - "systemAdmin": false - } +```json +{ + "systemAdmin": false +} ``` ## Example Data diff --git a/docs/03-endpoints/api-v2/authentication.md b/docs/03-endpoints/api-v2/authentication.md index 5e521b9b9b..5d473fa94e 100644 --- a/docs/03-endpoints/api-v2/authentication.md +++ b/docs/03-endpoints/api-v2/authentication.md @@ -8,12 +8,9 @@ Certain routes are secured and require authentication. When accessing any secured route we support three options for authentication: -- **Preferred method**: For each request an [Access Token](#Access-Token-/-Login-and-Logout) is sent in the HTTP - authorization - header with the +- **Preferred method**: For each request an Access Token is sent in the HTTP authorization header with the [HTTP bearer scheme](https://tools.ietf.org/html/rfc6750#section-2.1). -- **Deprecated method**: For each request an [Access Token](#Access-Token-/-Login-and-Logout) is provided as a cookie in - the HTTP request. +- **Deprecated method**: For each request an Access Token is provided as a cookie in the HTTP request. - **Deprecated method**: [HTTP basic authentication](https://en.wikipedia.org/wiki/Basic_access_authentication), where the username is the user's `email`. 
diff --git a/docs/03-endpoints/api-v2/editing-resources.md b/docs/03-endpoints/api-v2/editing-resources.md index 287b6b0fac..2cd24909c0 100644 --- a/docs/03-endpoints/api-v2/editing-resources.md +++ b/docs/03-endpoints/api-v2/editing-resources.md @@ -17,8 +17,8 @@ The body of the request is a JSON-LD document in the [complex API schema](introduction.md#api-schema), specifying the type,`rdfs:label`, and its Knora resource properties and their values. The representation of the resource is the same as when it is returned in a `GET` request, except that its `knora-api:attachedToUser` is not given, and the resource IRI and those of its values can be optionally specified. -The format of the values submitted is described in [Creating and Editing Values](editing-values.md). If there are multiple values for -a property, these must be given in an array. +The format of the values submitted is described in [Creating and Editing Values](editing-values.md). +If there are multiple values for a property, these must be given in an array. For example, here is a request to create a resource with various value types: diff --git a/docs/03-endpoints/api-v2/editing-values.md b/docs/03-endpoints/api-v2/editing-values.md index 6d984b8e2b..290b4fe789 100644 --- a/docs/03-endpoints/api-v2/editing-values.md +++ b/docs/03-endpoints/api-v2/editing-values.md @@ -241,12 +241,12 @@ Knora supports the storage of certain types of data as files, using [Sipi](https (see [FileValue](../../02-dsp-ontologies/knora-base.md#filevalue)). DSP-API v2 currently supports using Sipi to store the following types of files: -* Images: JPEG, JPEG2000, TIFF, or PNG which are stored internally as JPEG2000 -* Documents: PDF -* Audio: MPEG or Waveform audio file format (.wav, .x-wav, .vnd.wave) -* Text files: CSV, ODD, RNG, TXT, XLS, XLSX, XML, XSD, XSL -* Video files: MP4 -* Archive files: ZIP, TAR, GZIP +- Images: JPEG, JPEG2000, TIFF, or PNG which are stored internally as JPEG2000 +- Documents: PDF +- Audio: MPEG or Waveform audio file format (.wav, .x-wav, .vnd.wave) +- Text files: CSV, ODD, RNG, TXT, XLS, XLSX, XML, XSD, XSL +- Video files: MP4 +- Archive files: ZIP, TAR, GZIP Support for other types of files will be added in the future. diff --git a/docs/03-endpoints/api-v2/getting-lists.md b/docs/03-endpoints/api-v2/getting-lists.md index 1c5ee14e42..a6d0d7dea8 100644 --- a/docs/03-endpoints/api-v2/getting-lists.md +++ b/docs/03-endpoints/api-v2/getting-lists.md @@ -7,21 +7,25 @@ ## Getting a complete List -In order to request a complete list, make a HTTP GET request to the `lists` route appending the Iri of the list's root node (URL-encoded): +In order to request a complete list, make a HTTP GET request to the `lists` route, +appending the Iri of the list's root node (URL-encoded): ``` HTTP GET to http://host/v2/lists/listRootNodeIri ``` -Lists are only returned in the complex schema. The response to a list request is a `List` (see interface `List` in module `ListResponse`). +Lists are only returned in the complex schema. +The response to a list request is a `List` (see interface `List` in module `ListResponse`). ## Getting a single Node -In order to request a single node of a list, make a HTTP GET request to the `node` route appending the node's Iri (URL-encoded): +In order to request a single node of a list, make a HTTP GET request to the `node` route, +appending the node's Iri (URL-encoded): ``` HTTP GET to http://host/v2/node/nodeIri ``` -Nodes are only returned in the complex schema. 
The response to a node request is a `ListNode` (see interface `List` in module `ListResponse`). +Nodes are only returned in the complex schema. +The response to a node request is a `ListNode` (see interface `List` in module `ListResponse`). diff --git a/docs/03-endpoints/api-v2/knora-iris.md b/docs/03-endpoints/api-v2/knora-iris.md index fa56cfddcd..65de865dab 100644 --- a/docs/03-endpoints/api-v2/knora-iris.md +++ b/docs/03-endpoints/api-v2/knora-iris.md @@ -57,19 +57,19 @@ An ontology name must be a valid XML [NCName](https://www.w3.org/TR/xml-names/#NT-NCName) and must be URL safe. The following names are reserved for built-in internal DSP ontologies: - - `knora-base` - - `standoff` - - `salsah-gui` +- `knora-base` +- `standoff` +- `salsah-gui` Names starting with `knora` are reserved for future built-in Knora ontologies. A user-created ontology name may not start with the letter `v` followed by a digit, and may not contain these reserved words: - - `knora` - - `ontology` - - `simple` - - `shared` +- `knora` +- `ontology` +- `simple` +- `shared` ### External Ontology IRIs @@ -103,13 +103,13 @@ of `knora-base` is called `knora-api`. The API version identifier indicates not only the version of the API, but also an API 'schema'. The DSP-API v2 is available in two schemas: - - A complex schema, which is suitable both for reading and for editing - data. The complex schema represents values primarily as complex - objects. Its version identifier is `v2`. - - A simple schema, which is suitable for reading data but not for - editing it. The simple schema facilitates interoperability between - DSP ontologies and non-DSP ontologies, since it represents - values primarily as literals. Its version identifier is `simple/v2`. +- A complex schema, which is suitable both for reading and for editing + data. The complex schema represents values primarily as complex + objects. Its version identifier is `v2`. +- A simple schema, which is suitable for reading data but not for + editing it. The simple schema facilitates interoperability between + DSP ontologies and non-DSP ontologies, since it represents + values primarily as literals. Its version identifier is `simple/v2`. Other schemas could be added in the future for more specific use cases. @@ -122,20 +122,16 @@ For example, suppose a DSP-API server is running at `http://www.knora.org/ontology/0001/example`. 
That ontology can then be requested using either of these IRIs: - - `http://knora.example.org/ontology/0001/example/v2` (in the complex - schema) - - `http://knora.example.org/ontology/0001/example/simple/v2` (in the - simple schema) +- `http://knora.example.org/ontology/0001/example/v2` (in the complex schema) +- `http://knora.example.org/ontology/0001/example/simple/v2` (in the simple schema) While the internal `example` ontology refers to definitions in `knora-base`, the external `example` ontology that is served by the API refers instead to a `knora-api` ontology, whose IRI depends on the schema being used: - - `http://api.knora.org/ontology/knora-api/v2` (in the complex - schema) - - `http://api.knora.org/ontology/knora-api/simple/v2` (in the simple - schema) +- `http://api.knora.org/ontology/knora-api/v2` (in the complex schema) +- `http://api.knora.org/ontology/knora-api/simple/v2` (in the simple schema) ### Ontology Entity IRIs @@ -150,12 +146,9 @@ Thus, if there is a class called `ExampleThing` in an ontology whose internal IRI is `http://www.knora.org/ontology/0001/example`, that class has the following IRIs: - - `http://www.knora.org/ontology/0001/example#ExampleThing` (in the - internal ontology) - - `http://HOST[:PORT]/ontology/0001/example/v2#ExampleThing` (in the - API v2 complex schema) - - `http://HOST[:PORT]/ontology/0001/example/simple/v2#ExampleThing` - (in the API v2 simple schema) +- `http://www.knora.org/ontology/0001/example#ExampleThing` (in the internal ontology) +- `http://HOST[:PORT]/ontology/0001/example/v2#ExampleThing` (in the API v2 complex schema) +- `http://HOST[:PORT]/ontology/0001/example/simple/v2#ExampleThing` (in the API v2 simple schema) ### Shared Ontology IRIs @@ -178,9 +171,9 @@ The internal and external IRIs of shared ontologies always use the hostname The project code can be omitted, in which case the default shared ontology project, `0000`, is assumed. The sample shared ontology, `example-box`, has these IRIs: - - `http://www.knora.org/ontology/shared/example-box` (internal) - - `http://api.knora.org/ontology/shared/example-box/v2` (external, complex schema) - - `http://api.knora.org/ontology/shared/example-box/simple/v2` (external, simple schema) +- `http://www.knora.org/ontology/shared/example-box` (internal) +- `http://api.knora.org/ontology/shared/example-box/v2` (external, complex schema) +- `http://api.knora.org/ontology/shared/example-box/simple/v2` (external, simple schema) ## IRIs for Data @@ -202,18 +195,18 @@ citable, it needs to be a resource, not a value. The formats of generated data IRIs for different types of objects are as follows: - - Resource: `http://rdfh.ch/PROJECT_SHORTCODE/RESOURCE_UUID`. - - Value: - `http://rdfh.ch/PROJECT_SHORTCODE/RESOURCE_UUID/values/VALUE_UUID` - - Standoff tag: - `http://rdfh.ch/PROJECT_SHORTCODE/RESOURCE_UUID/values/VALUE_UUID/STANDOFF_UUID` - - XML-to-standoff mapping: - `http://rdfh.ch/projects/PROJECT_SHORTCODE/mappings/MAPPING_NAME` - - XML-to-standoff mapping element: - `http://rdfh.ch/projects/PROJECT_SHORTCODE/mappings/MAPPING_NAME/elements/MAPPING_ELEMENT_UUID` - - Project: `http://rdfh.ch/projects/PROJECT_UUID` - - Group: `http://rdfh.ch/groups/PROJECT_SHORTCODE/GROUP_UUID` - - Permission: - `http://rdfh.ch/permissions/PROJECT_SHORTCODE/PERMISSION_UUID` - - Lists: `http://rdfh.ch/lists/PROJECT_SHORTCODE/LIST_UUID` - - User: `http://rdfh.ch/users/USER_UUID` +- Resource: `http://rdfh.ch/PROJECT_SHORTCODE/RESOURCE_UUID`. 
+- Value: + `http://rdfh.ch/PROJECT_SHORTCODE/RESOURCE_UUID/values/VALUE_UUID` +- Standoff tag: + `http://rdfh.ch/PROJECT_SHORTCODE/RESOURCE_UUID/values/VALUE_UUID/STANDOFF_UUID` +- XML-to-standoff mapping: + `http://rdfh.ch/projects/PROJECT_SHORTCODE/mappings/MAPPING_NAME` +- XML-to-standoff mapping element: + `http://rdfh.ch/projects/PROJECT_SHORTCODE/mappings/MAPPING_NAME/elements/MAPPING_ELEMENT_UUID` +- Project: `http://rdfh.ch/projects/PROJECT_UUID` +- Group: `http://rdfh.ch/groups/PROJECT_SHORTCODE/GROUP_UUID` +- Permission: + `http://rdfh.ch/permissions/PROJECT_SHORTCODE/PERMISSION_UUID` +- Lists: `http://rdfh.ch/lists/PROJECT_SHORTCODE/LIST_UUID` +- User: `http://rdfh.ch/users/USER_UUID` diff --git a/docs/03-endpoints/api-v2/ontology-information.md b/docs/03-endpoints/api-v2/ontology-information.md index c056763c25..9669f3a125 100644 --- a/docs/03-endpoints/api-v2/ontology-information.md +++ b/docs/03-endpoints/api-v2/ontology-information.md @@ -1180,27 +1180,26 @@ relevant to the update. Moreover, the API enforces the following rules: - - An entity (i.e. a class or property) cannot be referred to until it - has been created. - - An entity cannot be modified or deleted if it is used in data, - except for changes to its `rdfs:label` or `rdfs:comment`. - - An entity cannot be modified if another entity refers to it, with - one exception: a `knora-api:subjectType` or `knora-api:objectType` - that refers to a class will not prevent the class's cardinalities - from being modified. +- An entity (i.e. a class or property) cannot be referred to until it has been created. +- An entity cannot be modified or deleted if it is used in data, + except for changes to its `rdfs:label` or `rdfs:comment`. +- An entity cannot be modified if another entity refers to it, with + one exception: a `knora-api:subjectType` or `knora-api:objectType` + that refers to a class will not prevent the class's cardinalities + from being modified. Because of these rules, some operations have to be done in a specific order: - - Properties have to be defined before they can be used in the - cardinalities of a class, but a property's `knora-api:subjectType` - cannot refer to a class that does not yet exist. The recommended - approach is to first create a class with no cardinalities, then - create the properties that it needs, then add cardinalities for - those properties to the class. - - To delete a class along with its properties, the client must first - remove the cardinalities from the class, then delete the property - definitions, then delete the class definition. +- Properties have to be defined before they can be used in the + cardinalities of a class, but a property's `knora-api:subjectType` + cannot refer to a class that does not yet exist. The recommended + approach is to first create a class with no cardinalities, then + create the properties that it needs, then add cardinalities for + those properties to the class. +- To delete a class along with its properties, the client must first + remove the cardinalities from the class, then delete the property + definitions, then delete the class definition. When changing an existing ontology, the client must always supply the ontology's `knora-api:lastModificationDate`, which is returned in the @@ -1797,7 +1796,8 @@ the property definition, submit the request without those predicates. 
### Adding Cardinalities to a Class -If the class (or any of its sub-classes) is used in data, it is not allowed to add cardinalities `owl:minCardinality` greater than 0 or `owl:cardinality 1` to the class. +If the class (or any of its sub-classes) is used in data, +it is not allowed to add cardinalities `owl:minCardinality` greater than 0 or `owl:cardinality 1` to the class. ``` HTTP POST to http://host/v2/ontologies/cardinalities @@ -1853,23 +1853,30 @@ definition (but not any of the other entities in the ontology). It is possible to replace all cardinalities on properties used by a class. If it succeeds the request will effectively replace all direct cardinalities of the class as specified. That is, it removes all the cardinalities from the class and replaces them with the submitted cardinalities. -Meaning that, if no cardinalities are submitted (i.e. the request contains no `rdfs:subClassOf`), the class is left with no cardinalities. +Meaning that, if no cardinalities are submitted (i.e. the request contains no `rdfs:subClassOf`), +the class is left with no cardinalities. The request will fail if any of the "Pre-Update Checks" fails. A partial update of the ontology will not be performed. #### Pre-Update Checks -* _Ontology Check_ - * Any given cardinality on a property must be included in any of the existing cardinalities for the same property of the super-classes. - * Any given cardinality on a property must include the effective cardinalities for the same property of all subclasses, taking into account the respective inherited cardinalities from the class hierarchy of the subclasses. -* _Consistency Check with existing data_ - * Given that instances of the class or any of its subclasses exist then these instances are checked if they conform to the given cardinality. +- _Ontology Check_ + - Any given cardinality on a property must be included in any of the existing cardinalities + for the same property of the super-classes. + - Any given cardinality on a property must include the effective cardinalities + for the same property of all subclasses, + taking into account the respective inherited cardinalities from the class hierarchy of the subclasses. +- _Consistency Check with existing data_ + - Given that instances of the class or any of its subclasses exist, + then these instances are checked if they conform to the given cardinality. !!! note "Subproperty handling for cardinality pre-update checks" The Pre-Update check does not take into account any `subproperty` relations between the properties. - Every cardinality is checked against only the given property and not its subproperties, neither in the ontology nor the consistency check with existing data. - This means that currently it is necessary to maintain the cardinalities on all subproperties of a property in sync with the cardinalities on the superproperty. + Every cardinality is checked against only the given property and not its subproperties, + neither in the ontology nor the consistency check with existing data. + This means that currently it is necessary to maintain the cardinalities on all subproperties of a property + in sync with the cardinalities on the superproperty. 
``` HTTP PUT to http://host/v2/ontologies/cardinalities @@ -1926,6 +1933,7 @@ HTTP GET to http://host/v2/ontologies/canreplacecardinalities/CLASS_IRI?property The response will look like this: Failure: + ```json { "knora-api:canDo": false, @@ -1956,6 +1964,7 @@ Failure: ``` Success: + ```json { "knora-api:canDo": true, @@ -1968,9 +1977,11 @@ Success: _Note_: The following check is still available but deprecated - use the more detailed check above. To check whether all class's cardinalities can be replaced: + ``` HTTP GET to http://host/v2/ontologies/canreplacecardinalities/CLASS_IRI ``` + The response will look like this: ```json diff --git a/docs/03-endpoints/api-v2/query-language.md b/docs/03-endpoints/api-v2/query-language.md index 03ddd0720d..f831570352 100644 --- a/docs/03-endpoints/api-v2/query-language.md +++ b/docs/03-endpoints/api-v2/query-language.md @@ -359,9 +359,9 @@ text markup (see [Matching Standoff Dates](#matching-standoff-dates)). Note that the given date value for comparison must have the following format: - ``` - (GREGORIAN|JULIAN|ISLAMIC):\d{1,4}(-\d{1,2}(-\d{1,2})?)?( BC| AD| BCE| CE)?(:\d{1,4}(-\d{1,2}(-\d{1,2})?)?( BC| AD| BCE| CE)?)? - ``` +``` +(GREGORIAN|JULIAN|ISLAMIC):\d{1,4}(-\d{1,2}(-\d{1,2})?)?( BC| AD| BCE| CE)?(:\d{1,4}(-\d{1,2}(-\d{1,2})?)?( BC| AD| BCE| CE)?)? +``` E.g. an exact date like `GREGORIAN:2015-12-03` or a period like `GREGORIAN:2015-12-03:2015-12-04`. Dates may also have month or year precision, e.g. `ISLAMIC:1407-02` (the whole month of december) or `JULIAN:1330` diff --git a/docs/03-endpoints/api-v2/reading-and-searching-resources.md b/docs/03-endpoints/api-v2/reading-and-searching-resources.md index ad0cd44504..45506f254e 100644 --- a/docs/03-endpoints/api-v2/reading-and-searching-resources.md +++ b/docs/03-endpoints/api-v2/reading-and-searching-resources.md @@ -73,7 +73,8 @@ the text value will only be available as `kb:textValueAsXml`, which will be of t where the content of `` is a limited set of HTML tags that can be handled by CKEditor in DSP-APP. This allows for both displaying and editing the text value. -In the second and third case, `kb:textValueHasMapping` will point to the custom mapping that may or may not specify an XSL transformation. +In the second and third case, `kb:textValueHasMapping` will point to the custom mapping +that may or may not specify an XSL transformation. If no transformation is specified (second case), the text value will be returned only as `kb:textValueAsXml`. This property will be a string containing the contents of the initially uploaded XML. @@ -83,7 +84,8 @@ the order of the attributes in one element may vary from the original. In the third case, when a transformation is specified, both `kb:textValueAsXml` and `kb:textValueAsHtml` will be returned. `kb:textValueAsHtml` is the result of the XSL transformation applied to `kb:textValueAsXml`. -The HTML representation is intended to display the text value in a human readable and properly styled way, while the XML representation can be used to update the text value. +The HTML representation is intended to display the text value in a human readable and properly styled way, +while the XML representation can be used to update the text value. ## Get the Representation of a Resource by IRI @@ -236,8 +238,8 @@ resource metadata (e.g. `rdfs:label`), changes to a resource's metadata are not version history. 
To request the resource as it was at each of these dates, see -[Get a Full Representation of a Version of a Resource by IRI](#get-a-full-representation-of-a-version-of-a-resource-by-iri). For consistency in citation, we recommend using these dates when -requesting resource versions. +[Get a Full Representation of a Version of a Resource by IRI](#get-a-full-representation-of-a-version-of-a-resource-by-iri). +For consistency in citation, we recommend using these dates when requesting resource versions. ### Get the preview of a resource by IRI @@ -254,8 +256,7 @@ HTTP GET to http://host/v2/resourcespreview/resourceIRI(/anotherResourceIri)* ## Get a Graph of Resources -DSP can return a graph of connections between resources, e.g. for generating -a network diagram. +DSP can return a graph of connections between resources, e.g. for generating a network diagram. ``` HTTP GET to http://host/v2/graph/resourceIRI[depth=Integer] @@ -336,12 +337,12 @@ resource as you type. E.g., the user wants to get a list of resources whose `rdfs:label` contain some search terms separated by a whitespace character: - - Zeit - - Zeitg - - ... - - Zeitglöcklein d - - ... - - Zeitglöcklein des Lebens +- Zeit +- Zeitg +- ... +- Zeitglöcklein d +- ... +- Zeitglöcklein des Lebens With each character added to the last term, the selection gets more specific. The first term should at least contain three characters. To @@ -390,11 +391,13 @@ The search index used by DSP transforms all text into lower case characters and For example, if a text value is: `The cake needs flour, sugar, and butter.`, the tokens are `the`, `cake`, `needs`, `flour,`, `sugar,`, `and`, `butter.`. Note that punctuation marks like `,` and `.` are left with the word where they occurred. -Therefore, if you search for `sugar` you would have to use `sugar*` or `sugar?` to get results that contain `sugar,` or `sugar.` as well. -The reason for this kind of tokenization is that some users need to be able to search explicitly for special characters including -punctuation marks. +Therefore, if you search for `sugar` you would have to use `sugar*` or `sugar?` +to get results that contain `sugar,` or `sugar.` as well. +The reason for this kind of tokenization is +that some users need to be able to search explicitly for special characters including punctuation marks. -Alphabetic, numeric, symbolic, and diacritical Unicode characters which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) +Alphabetic, numeric, symbolic, and diacritical Unicode characters +which are not in the first 127 ASCII characters (the "Basic Latin" Unicode block) are converted into their ASCII equivalents, if one exists, e.g. `é` or `ä` are converted into `e` and `a`. Please note that the search terms have to be URL-encoded. @@ -406,9 +409,12 @@ HTTP GET to http://host/v2/search/searchValue[limitToResourceClass=resourceClass The first parameter has to be preceded by a question mark `?`, any following parameter by an ampersand `&`. -A search value must have a minimal length of three characters (default value) as defined in `search-value-min-length` in `application.conf`. +A search value must have a minimal length of three characters (default value) +as defined in `search-value-min-length` in `application.conf`. -A search term may contain wildcards. A `?` represents a single character. It has to be URL-encoded as `%3F` since it has a special meaning in the URL syntax. For example, the term `Uniform` can be search for like this: +A search term may contain wildcards. 
A `?` represents a single character. +It has to be URL-encoded as `%3F` since it has a special meaning in the URL syntax. +For example, the term `Uniform` can be search for like this: ``` HTTP GET to http://host/v2/search/Unif%3Frm @@ -632,7 +638,6 @@ payload needed to update the value of the resource's `lastModificationDate`, see [modifying metadata of a resource](editing-resources.md#modifying-a-resources-metadata). - ### Get the Full History of all Resources of a Project as Events To get a list of the changes that have been made to the resources and their values of a project as events ordered by diff --git a/docs/03-endpoints/api-v2/text/custom-standoff.md b/docs/03-endpoints/api-v2/text/custom-standoff.md index c2525c5d89..b8fcf2014c 100644 --- a/docs/03-endpoints/api-v2/text/custom-standoff.md +++ b/docs/03-endpoints/api-v2/text/custom-standoff.md @@ -29,76 +29,46 @@ The mapping is written in XML itself (for a formal description, see structure (the indentation corresponds to the nesting in XML): - ``: the root element - - - ` (optional)`: the IRI of the - default XSL transformation to be applied to the XML when - reading it back from DSP-API. The XSL transformation is - expected to produce HTML. If given, the IRI has to refer to - a resource of type `knora-base:XSLTransformation`. - - - ``: an element of the mapping (at least - one) - - - ``: information about the XML element that - is mapped to a standoff class - - - ``: name of the XML element - - ``: value of the class attribute of - the XML element, if any. If the element has - no class attribute, the keyword `noClass` - has to be used. - - ``: the namespace the XML element - belongs to, if any. If the element does not - belong to a namespace, the keyword - `noNamespace` has to be used. - - ``: a Boolean value - indicating whether this tag separates words - in the text. Once an XML document is - converted to RDF-standoff the markup is - stripped from the text, possibly leading to - continuous text that has been separated by - tags before. For structural tags like - paragraphs etc., `` can be - set to `true` in which case a special - separator is inserted in the the text in the - RDF representation. In this way, words stay - separated and are represented in the - fulltext index as such. - - - ``: information about the - standoff class the XML element is mapped to - - - ``: IRI of the standoff class the - XML element is mapped to - - - ``: XML attributes to be - mapped to standoff properties (other - than `id` or `class`), if any - - - ``: an XML attribute - to be mapped to a standoff - property, may be repeated - - - ``: the name - of the XML attribute - - ``: the namespace - the attribute belongs to, if - any. If the attribute does - not belong to a namespace, - the keyword `noNamespace` - has to be used. - - ``: the IRI of - the standoff property the - XML attribute is mapped to. - - - ``: the data type of the - standoff class, if any. - - - ``: the IRI of the data type - standoff class - - ``: the name of the - attribute holding the typed value in - the expected standard format + - ` (optional)`: the IRI of the + default XSL transformation to be applied to the XML when + reading it back from DSP-API. The XSL transformation is + expected to produce HTML. If given, the IRI has to refer to + a resource of type `knora-base:XSLTransformation`. 
+ - ``: an element of the mapping (at least one) + - ``: information about the XML element that is mapped to a standoff class + - ``: name of the XML element + - ``: value of the class attribute of + the XML element, if any. If the element has + no class attribute, the keyword `noClass` + has to be used. + - ``: the namespace the XML element + belongs to, if any. If the element does not + belong to a namespace, the keyword + `noNamespace` has to be used. + - ``: a Boolean value + indicating whether this tag separates words + in the text. Once an XML document is + converted to RDF-standoff the markup is + stripped from the text, possibly leading to + continuous text that has been separated by + tags before. For structural tags like + paragraphs etc., `` can be + set to `true` in which case a special + separator is inserted in the text in the + RDF representation. In this way, words stay + separated and are represented in the + fulltext index as such. + - ``: information about the standoff class the XML element is mapped to + - ``: IRI of the standoff class the XML element is mapped to + - ``: XML attributes to be mapped to standoff properties (other than `id` or `class`), if any + - ``: an XML attribute to be mapped to a standoff property, may be repeated + - ``: the name of the XML attribute + - ``: the namespace the attribute belongs to, if any. + If the attribute does not belong to a namespace, the keyword `noNamespace` has to be used. + - ``: the IRI of the standoff property the XML attribute is mapped to. + - ``: the data type of the standoff class, if any. + - ``: the IRI of the data type standoff class + - ``: the name of the attribute holding the typed value in the expected standard format XML structure of a mapping: @@ -291,7 +261,8 @@ sent to DSP-API and converted to standoff: ```xml - We had a party on New Year's Eve. It was a lot of fun. + We had a party on New Year's Eve. + It was a lot of fun. ``` @@ -462,7 +433,9 @@ by DSP-API. The mapping has to be sent as a multipart request to the standoff route using the path segment `mapping`: - HTTP POST http://host/v2/mapping +``` +HTTP POST http://host/v2/mapping +``` The multipart request consists of two named parts: diff --git a/docs/03-endpoints/api-v2/text/overview.md b/docs/03-endpoints/api-v2/text/overview.md index d520575079..a8e6eb8fd2 100644 --- a/docs/03-endpoints/api-v2/text/overview.md +++ b/docs/03-endpoints/api-v2/text/overview.md @@ -58,8 +58,9 @@ which allows for creating project specific custom markup for text values. Details can be found [here](custom-standoff.md). !!! info - Custom markup is not supported by DSP-TOLS and is viewe-only in DSP-APP. - Creating custom markup is relatively involved, so that it should only be used by projects working with complex textual data. + Custom markup is not supported by DSP-TOOLS and is view-only in DSP-APP. + Creating custom markup is relatively involved, + so that it should only be used by projects working with complex textual data. ## File Based diff --git a/docs/03-endpoints/api-v2/text/tei-xml.md b/docs/03-endpoints/api-v2/text/tei-xml.md index 5644b4b854..eee08f6531 100644 --- a/docs/03-endpoints/api-v2/text/tei-xml.md +++ b/docs/03-endpoints/api-v2/text/tei-xml.md @@ -31,10 +31,13 @@ Please note that the URL parameters have to be URL-encoded. HTTP GET to http://host/v2/tei/resourceIri?textProperty=textPropertyIri ``` -In addition to the resource's Iri, the Iri of the property containing the text with standoff has to be submitted. This will be converted to the TEI body. 
+In addition to the resource's Iri, the Iri of the property containing the text with standoff has to be submitted. +This will be converted to the TEI body. Please note that the resource can only have one instance of this property and the text must have standoff markup. -The test data contain the resource `http://rdfh.ch/0001/thing_with_richtext_with_markup` with the text property `http://0.0.0.0:3333/ontology/0001/anything/v2#hasRichtext` that can be converted to TEI as follows: +The test data contain the resource `http://rdfh.ch/0001/thing_with_richtext_with_markup` +with the text property `http://0.0.0.0:3333/ontology/0001/anything/v2#hasRichtext` +that can be converted to TEI as follows: ``` HTTP GET to http://host/v2/tei/http%3A%2F%2Frdfh.ch%2F0001%2Fthing_with_richtext_with_markup?textProperty=http%3A%2F%2F0.0.0.0%3A3333%2Fontology%2F0001%2Fanything%2Fv2%23hasRichtext @@ -62,18 +65,25 @@ The response to this request is a TEI XML document: -

This is a test that contains marked up elements. This is interesting text in italics. This is boring text in italics.

+

+ This is a test that contains marked up elements. + This is interesting text in italics. + This is boring text in italics. +

``` -The body of the TEI document contains the standoff markup as XML. The header contains contains some basic metadata about the resource such as the `rdfs:label` an its IRI. However, this might not be sufficient for more advanced use cases like digital edition projects. +The body of the TEI document contains the standoff markup as XML. +The header contains contains some basic metadata about the resource such as the `rdfs:label` an its IRI. +However, this might not be sufficient for more advanced use cases like digital edition projects. In that case, a custom conversion has to be performed (see below). ## Custom Conversion -If a project defines its own standoff entities, a custom conversion can be provided (body of the TEI document). Also for the TEI header, a custom conversion can be provided. +If a project defines its own standoff entities, a custom conversion can be provided (body of the TEI document). +Also for the TEI header, a custom conversion can be provided. For the custom conversion, additional configuration is required. @@ -82,18 +92,29 @@ TEI body: - additional mapping from standoff to XML (URL parameter `mappingIri`) - XSL transformation to turn the XML into a valid TEI body (referred to by the mapping). -The mapping has to refer to a `defaultXSLTransformation` that transforms the XML that was created from standoff markup (see [XML To Standoff Mapping](custom-standoff.md)). This step is necessary because the mapping assumes a one to one relation between standoff classes and properties and XML elements and attributes. -For example, we may want to convert a `standoff:StandoffItalicTag` into TEI/XML. TEI expresses this as `...`. In the mapping, the `standoff:StandoffItalicTag` may be mapped to a a temporary XML element that is going to be converted to `...` in a further step by the XSLT. +The mapping has to refer to a `defaultXSLTransformation` that transforms the XML that was created from standoff markup +(see [XML To Standoff Mapping](custom-standoff.md)). +This step is necessary because the mapping assumes a one to one relation +between standoff classes and properties and XML elements and attributes. +For example, we may want to convert a `standoff:StandoffItalicTag` into TEI/XML. +TEI expresses this as `...`. +In the mapping, the `standoff:StandoffItalicTag` may be mapped to a temporary XML element +that is going to be converted to `...` in a further step by the XSLT. -For sample data, see `webapi/_test_data/test_route/texts/beol/BEOLTEIMapping.xml` (mapping) and `webapi/_test_data/test_route/texts/beol/standoffToTEI.xsl`. The standoff entities are defined in `beol-onto.ttl`. +For sample data, see `webapi/_test_data/test_route/texts/beol/BEOLTEIMapping.xml` (mapping) +and `webapi/_test_data/test_route/texts/beol/standoffToTEI.xsl`. +The standoff entities are defined in `beol-onto.ttl`. TEI header: - Gravsearch template to query the resources metadata, results are serialized to RDF/XML (URL parameter `gravsearchTemplateIri`) - XSL transformation to turn that RDF/XML into a valid TEI header (URL parameter `teiHeaderXSLTIri`) -The Gravsearch template is expected to be of type `knora-base:TextRepresentation` and to contain a placeholder `$resourceIri` that is to be replaced by the actual resource Iri. -The Gravsearch template is expected to contain a query involving the text property (URL parameter `textProperty`) and more properties that are going to be mapped to the TEI header. The Gravsearch template is a simple text file with the files extension `.txt`. 
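Putting the custom-conversion pieces together, a request might be assembled as sketched below.
This is an illustration only: the host and all IRIs are hypothetical placeholders,
while the parameter names (`textProperty`, `mappingIri`, `gravsearchTemplateIri`, `teiHeaderXSLTIri`)
are the ones introduced above, and every parameter value has to be URL-encoded.

```python
# Sketch of building a custom TEI conversion request; all IRIs are placeholders.
from urllib.parse import quote, urlencode

host = "http://host"                                                      # hypothetical
resource_iri = "http://rdfh.ch/0801/exampleLetter"                        # hypothetical
params = {
    "textProperty": "http://0.0.0.0:3333/ontology/0801/beol/v2#hasText",  # hypothetical
    "mappingIri": "http://rdfh.ch/projects/0801/mappings/BEOLTEIMapping", # hypothetical
    "gravsearchTemplateIri": "http://rdfh.ch/0801/gravsearchTemplate",    # hypothetical
    "teiHeaderXSLTIri": "http://rdfh.ch/0801/headerXSLT",                 # hypothetical
}
url = f"{host}/v2/tei/{quote(resource_iri, safe='')}?{urlencode(params)}"
print(url)  # an HTTP GET on this URL returns the TEI/XML document
```

The standard conversion shown earlier only needs `textProperty`;
the three additional parameters switch on the custom conversion.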
+The Gravsearch template is expected to be of type `knora-base:TextRepresentation` +and to contain a placeholder `$resourceIri` that is to be replaced by the actual resource Iri. +The Gravsearch template is expected to contain a query involving the text property (URL parameter `textProperty`) +and more properties that are going to be mapped to the TEI header. +The Gravsearch template is a simple text file with the files extension `.txt`. A Gravsearch template may look like this (see `test_data/test_route/texts/beol/gravsearch.txt`): @@ -184,10 +205,13 @@ PREFIX xsd: } ``` -Note the placeholder `BIND(<$resourceIri> as ?letter)` that is going to be replaced by the Iri of the resource the request is performed for. -The query asks for information about the letter's text `beol:hasText` and information about its author and recipient. This information is converted to the TEI header in the format required by [correspSearch](https://correspsearch.net). +Note the placeholder `BIND(<$resourceIri> as ?letter)` that is going to be replaced +by the Iri of the resource the request is performed for. +The query asks for information about the letter's text `beol:hasText` and information about its author and recipient. +This information is converted to the TEI header in the format required by [correspSearch](https://correspsearch.net). -To write the XSLT, do the Gravsearch query and request the data as RDF/XML using content negotiation (see [Introduction](../introduction.md)). +To write the XSLT, do the Gravsearch query and request the data as RDF/XML using content negotiation +(see [Introduction](../introduction.md)). The Gravsearch query's result may look like this (`RDF/XML`): @@ -258,7 +282,9 @@ The Gravsearch query's result may look like this (`RDF/XML`): ``` -In order to convert the metadata (not the actual standoff markup), a `knora-base:knora-base:XSLTransformation` has to be provided. For our example, it looks like this (see `test_data/test_route/texts/beol/header.xsl`): +In order to convert the metadata (not the actual standoff markup), +a `knora-base:knora-base:XSLTransformation` has to be provided. +For our example, it looks like this (see `test_data/test_route/texts/beol/header.xsl`): ```xml diff --git a/docs/03-endpoints/instrumentation/introduction.md b/docs/03-endpoints/instrumentation/introduction.md index 67002fefae..4c7b0f4fd4 100644 --- a/docs/03-endpoints/instrumentation/introduction.md +++ b/docs/03-endpoints/instrumentation/introduction.md @@ -10,5 +10,6 @@ defined in `application.conf` under the key: `app.instrumentaion-server-config.p and can also be set through the environment variable: `KNORA_INSTRUMENTATION_SERVER_PORT`. The exposed endpoints are: - - `/metrics` - a metrics endpoint, backed by the ZIO metrics backend exposing metrics in the prometheus format - - `/health` - provides information about the health state, see [Health Endpoint](./health.md) + +- `/metrics` - a metrics endpoint, backed by the ZIO metrics backend exposing metrics in the prometheus format +- `/health` - provides information about the health state, see [Health Endpoint](./health.md) diff --git a/docs/04-publishing-deployment/configuration.md b/docs/04-publishing-deployment/configuration.md index bd8bf93a05..815178547e 100644 --- a/docs/04-publishing-deployment/configuration.md +++ b/docs/04-publishing-deployment/configuration.md @@ -21,45 +21,45 @@ The relevant sections for tuning are: A number of core settings is additionally configurable through system environment variables. 
These are: -| key in application.conf | environment variable | default value | -|----------------------------------------|-------------------------------------------------|-----------------------| -| pekko.log-config-on-start | KNORA_AKKA_LOG_CONFIG_ON_START | off | -| pekko.loglevel | KNORA_AKKA_LOGLEVEL | INFO | -| pekko.stdout-loglevel | KNORA_AKKA_STDOUT_LOGLEVEL | INFO | -| app.print-extended-config | KNORA_WEBAPI_PRINT_EXTENDED_CONFIG | false | -| app.bcrypt-password-strength | KNORA_WEBAPI_BCRYPT_PASSWORD_STRENGTH | 12 | -| app.jwt.secret | KNORA_WEBAPI_JWT_SECRET_KEY | super-secret-key | -| app.jwt.expiration | KNORA_WEBAPI_JWT_LONGEVITY | 30 days | -| app.jwt.issuer | KNORA_WEBAPI_JWT_ISSUER | 0.0.0.0:3333 | -| app.dsp-ingest.audience | KNORA_WEBAPI_DSP_INGEST_AUDIENCE | http://localhost:3340 | -| app.dsp-ingest.base-url | KNORA_WEBAPI_DSP_INGEST_BASE_URL | http://localhost:3340 | -| app.cookie-domain | KNORA_WEBAPI_COOKIE_DOMAIN | localhost | -| app.allow-reload-over-http | KNORA_WEBAPI_ALLOW_RELOAD_OVER_HTTP | false | -| app.ark.resolver | KNORA_WEBAPI_ARK_RESOLVER_URL | http://0.0.0.0:3336 | -| app.ark.assigned-number | KNORA_WEBAPI_ARK_NAAN | 72163 | -| app.knora-api.internal-host | KNORA_WEBAPI_KNORA_API_INTERNAL_HOST | 0.0.0.0 | -| app.knora-api.internal-port | KNORA_WEBAPI_KNORA_API_INTERNAL_PORT | 3333 | -| app.knora-api.external-protocol | KNORA_WEBAPI_KNORA_API_EXTERNAL_PROTOCOL | http | -| app.knora-api.external-host | KNORA_WEBAPI_KNORA_API_EXTERNAL_HOST | 0.0.0.0 | -| app.knora-api.external-port | KNORA_WEBAPI_KNORA_API_EXTERNAL_PORT | 3333 | -| app.sipi.internal-protocol | KNORA_WEBAPI_SIPI_INTERNAL_PROTOCOL | http | -| app.sipi.internal-host | KNORA_WEBAPI_SIPI_INTERNAL_HOST | localhost | -| app.sipi.internal-port | KNORA_WEBAPI_SIPI_INTERNAL_PORT | 1024 | -| app.sipi.external-protocol | KNORA_WEBAPI_SIPI_EXTERNAL_PROTOCOL | http | -| app.sipi.external-host | KNORA_WEBAPI_SIPI_EXTERNAL_HOST | localhost | -| app.sipi.external-port | KNORA_WEBAPI_SIPI_EXTERNAL_PORT | 443 | -| app.ark.resolver | KNORA_WEBAPI_ARK_RESOLVER_URL | http://0.0.0.0:3336 | -| app.ark.assigned-number | KNORA_WEBAPI_ARK_NAAN | 72163 | -| app.salsah1.base-url | KNORA_WEBAPI_SALSAH1_BASE_URL | http://localhost:3335 | -| app.triplestore.dbtype | KNORA_WEBAPI_TRIPLESTORE_DBTYPE | fuseki | -| app.triplestore.use-https | KNORA_WEBAPI_TRIPLESTORE_USE_HTTPS | false | -| app.triplestore.host | KNORA_WEBAPI_TRIPLESTORE_HOST | localhost | -| app.triplestore.auto-init | KNORA_WEBAPI_TRIPLESTORE_AUTOINIT | false | -| app.triplestore.fuseki.port | KNORA_WEBAPI_TRIPLESTORE_FUSEKI_PORT | 3030 | -| app.triplestore.fuseki.repository-name | KNORA_WEBAPI_TRIPLESTORE_FUSEKI_REPOSITORY_NAME | knora-test | -| app.triplestore.fuseki.username | KNORA_WEBAPI_TRIPLESTORE_FUSEKI_USERNAME | admin | -| app.triplestore.fuseki.password | KNORA_WEBAPI_TRIPLESTORE_FUSEKI_PASSWORD | test | -| app.cache-service.enabled | KNORA_WEBAPI_CACHE_SERVICE_ENABLED | true | +| key in application.conf | environment variable | default value | +| -------------------------------------- | ----------------------------------------------- | ----------------------- | +| pekko.log-config-on-start | KNORA_AKKA_LOG_CONFIG_ON_START | off | +| pekko.loglevel | KNORA_AKKA_LOGLEVEL | INFO | +| pekko.stdout-loglevel | KNORA_AKKA_STDOUT_LOGLEVEL | INFO | +| app.print-extended-config | KNORA_WEBAPI_PRINT_EXTENDED_CONFIG | false | +| app.bcrypt-password-strength | KNORA_WEBAPI_BCRYPT_PASSWORD_STRENGTH | 12 | +| app.jwt.secret | KNORA_WEBAPI_JWT_SECRET_KEY | 
super-secret-key | +| app.jwt.expiration | KNORA_WEBAPI_JWT_LONGEVITY | 30 days | +| app.jwt.issuer | KNORA_WEBAPI_JWT_ISSUER | 0.0.0.0:3333 | +| app.dsp-ingest.audience | KNORA_WEBAPI_DSP_INGEST_AUDIENCE | | +| app.dsp-ingest.base-url | KNORA_WEBAPI_DSP_INGEST_BASE_URL | | +| app.cookie-domain | KNORA_WEBAPI_COOKIE_DOMAIN | localhost | +| app.allow-reload-over-http | KNORA_WEBAPI_ALLOW_RELOAD_OVER_HTTP | false | +| app.ark.resolver | KNORA_WEBAPI_ARK_RESOLVER_URL | | +| app.ark.assigned-number | KNORA_WEBAPI_ARK_NAAN | 72163 | +| app.knora-api.internal-host | KNORA_WEBAPI_KNORA_API_INTERNAL_HOST | 0.0.0.0 | +| app.knora-api.internal-port | KNORA_WEBAPI_KNORA_API_INTERNAL_PORT | 3333 | +| app.knora-api.external-protocol | KNORA_WEBAPI_KNORA_API_EXTERNAL_PROTOCOL | http | +| app.knora-api.external-host | KNORA_WEBAPI_KNORA_API_EXTERNAL_HOST | 0.0.0.0 | +| app.knora-api.external-port | KNORA_WEBAPI_KNORA_API_EXTERNAL_PORT | 3333 | +| app.sipi.internal-protocol | KNORA_WEBAPI_SIPI_INTERNAL_PROTOCOL | http | +| app.sipi.internal-host | KNORA_WEBAPI_SIPI_INTERNAL_HOST | localhost | +| app.sipi.internal-port | KNORA_WEBAPI_SIPI_INTERNAL_PORT | 1024 | +| app.sipi.external-protocol | KNORA_WEBAPI_SIPI_EXTERNAL_PROTOCOL | http | +| app.sipi.external-host | KNORA_WEBAPI_SIPI_EXTERNAL_HOST | localhost | +| app.sipi.external-port | KNORA_WEBAPI_SIPI_EXTERNAL_PORT | 443 | +| app.ark.resolver | KNORA_WEBAPI_ARK_RESOLVER_URL | | +| app.ark.assigned-number | KNORA_WEBAPI_ARK_NAAN | 72163 | +| app.salsah1.base-url | KNORA_WEBAPI_SALSAH1_BASE_URL | | +| app.triplestore.dbtype | KNORA_WEBAPI_TRIPLESTORE_DBTYPE | fuseki | +| app.triplestore.use-https | KNORA_WEBAPI_TRIPLESTORE_USE_HTTPS | false | +| app.triplestore.host | KNORA_WEBAPI_TRIPLESTORE_HOST | localhost | +| app.triplestore.auto-init | KNORA_WEBAPI_TRIPLESTORE_AUTOINIT | false | +| app.triplestore.fuseki.port | KNORA_WEBAPI_TRIPLESTORE_FUSEKI_PORT | 3030 | +| app.triplestore.fuseki.repository-name | KNORA_WEBAPI_TRIPLESTORE_FUSEKI_REPOSITORY_NAME | knora-test | +| app.triplestore.fuseki.username | KNORA_WEBAPI_TRIPLESTORE_FUSEKI_USERNAME | admin | +| app.triplestore.fuseki.password | KNORA_WEBAPI_TRIPLESTORE_FUSEKI_PASSWORD | test | +| app.cache-service.enabled | KNORA_WEBAPI_CACHE_SERVICE_ENABLED | true | ## Selectively Disabling Routes diff --git a/docs/04-publishing-deployment/publishing.md b/docs/04-publishing-deployment/publishing.md index fe5a09e310..67d06c71fd 100644 --- a/docs/04-publishing-deployment/publishing.md +++ b/docs/04-publishing-deployment/publishing.md @@ -10,12 +10,9 @@ DSP is published as a set of [Docker](https://www.docker.com) images under the The following Docker images are published: -- DSP-API: - - https://hub.docker.com/r/daschswiss/knora-api -- Sipi (includes DSP's specific Sipi scripts): - - https://hub.docker.com/r/daschswiss/knora-sipi -- DSP-APP: - - https://hub.docker.com/r/daschswiss/dsp-app +- [DSP-API](https://hub.docker.com/r/daschswiss/knora-api) +- [Sipi](https://hub.docker.com/r/daschswiss/knora-sipi) (includes DSP's specific Sipi scripts) +- [DSP-APP](https://hub.docker.com/r/daschswiss/dsp-app) DSP's Docker images are published automatically through Github CI each time a pull-request is merged into the `main` branch. @@ -27,11 +24,11 @@ using the result of `git describe`. 
The describe version is built from the The images can be published locally by running: ```bash -$ make docker-build +make docker-build ``` or to Dockerhub: ```bash -$ make docker-publish +make docker-publish ``` diff --git a/docs/05-internals/design/adr/ADR-0002-change-cache-service-manager-from-akka-actor-to-zlayer.md b/docs/05-internals/design/adr/ADR-0002-change-cache-service-manager-from-akka-actor-to-zlayer.md index 100159f335..cddd0e60a8 100644 --- a/docs/05-internals/design/adr/ADR-0002-change-cache-service-manager-from-akka-actor-to-zlayer.md +++ b/docs/05-internals/design/adr/ADR-0002-change-cache-service-manager-from-akka-actor-to-zlayer.md @@ -12,7 +12,10 @@ The `org.knora.webapi.store.cacheservice.CacheServiceManager` was implemented as ## Decision -As part of the move from `Akka` to `ZIO`, it was decided that the `CacheServiceManager` and the whole implementation of the in-memory and Redis backed cache is refactored using ZIO. +As part of the move from `Akka` to `ZIO`, +it was decided that the `CacheServiceManager` +and the whole implementation of the in-memory and Redis backed cache +is refactored using ZIO. ## Consequences diff --git a/docs/05-internals/design/adr/ADR-0003-change-iiif-service-manager-and-sipi-implementation-to-zlayer.md b/docs/05-internals/design/adr/ADR-0003-change-iiif-service-manager-and-sipi-implementation-to-zlayer.md index d42af8a57a..3759828c4b 100644 --- a/docs/05-internals/design/adr/ADR-0003-change-iiif-service-manager-and-sipi-implementation-to-zlayer.md +++ b/docs/05-internals/design/adr/ADR-0003-change-iiif-service-manager-and-sipi-implementation-to-zlayer.md @@ -13,7 +13,8 @@ where implemented as Akka-Actors ## Decision -As part of the move from `Akka` to `ZIO`, it was decided that the `IIIFServiceManager` and the `IIIFServiceSipiImpl` is refactored using ZIO. +As part of the move from `Akka` to `ZIO`, +it was decided that the `IIIFServiceManager` and the `IIIFServiceSipiImpl` is refactored using ZIO. ## Consequences diff --git a/docs/05-internals/design/adr/ADR-0004-change-triplestore-service-manager-and-fuseki-implementation-to-zlayer.md b/docs/05-internals/design/adr/ADR-0004-change-triplestore-service-manager-and-fuseki-implementation-to-zlayer.md index e7fb100090..ca32288fc3 100644 --- a/docs/05-internals/design/adr/ADR-0004-change-triplestore-service-manager-and-fuseki-implementation-to-zlayer.md +++ b/docs/05-internals/design/adr/ADR-0004-change-triplestore-service-manager-and-fuseki-implementation-to-zlayer.md @@ -8,12 +8,16 @@ Accepted ## Context -Both `org.knora.webapi.store.triplestore.TriplestoreServiceManager` and `org.knora.webapi.store.triplestore.impl.TriplestoreServiceHttpConnectorImpl` +Both `org.knora.webapi.store.triplestore.TriplestoreServiceManager` +and `org.knora.webapi.store.triplestore.impl.TriplestoreServiceHttpConnectorImpl` where implemented as Akka-Actors. ## Decision -As part of the move from `Akka` to `ZIO`, it was decided that the `TriplestoreServiceManager` and the `TriplestoreServiceHttpConnectorImpl` is refactored using ZIO. +As part of the move from `Akka` to `ZIO`, +it was decided that the `TriplestoreServiceManager` +and the `TriplestoreServiceHttpConnectorImpl` +is refactored using ZIO. 
## Consequences diff --git a/docs/05-internals/design/adr/ADR-0005-change-respondermanager-to-a-simple-case-class.md b/docs/05-internals/design/adr/ADR-0005-change-respondermanager-to-a-simple-case-class.md index 2bb1d6fd87..e85c38ef82 100644 --- a/docs/05-internals/design/adr/ADR-0005-change-respondermanager-to-a-simple-case-class.md +++ b/docs/05-internals/design/adr/ADR-0005-change-respondermanager-to-a-simple-case-class.md @@ -16,4 +16,11 @@ In preparation of the move from `Akka` to `ZIO`, it was decided that the `Respon ## Consequences -The actor messages and responses don't change. All calls made previously to the `ResponderManager` and the `StorageManager` are now changed to the `ApplicationActor` which will route the calls to either the `ResponderManager` or the `StorageManager` based on the message type. The `ApplicationActor` is the only actor that is allowed to make calls to either the `ResponderManager` or the `StorageManager`. All requests from routes are now routed to the `ApplicationActor`. +The actor messages and responses don't change. +All calls made previously to the `ResponderManager` and the `StorageManager` +are now changed to the `ApplicationActor` +which will route the calls to either the `ResponderManager` +or the `StorageManager`, based on the message type. +The `ApplicationActor` is the only actor that is allowed to make calls +to either the `ResponderManager` or the `StorageManager`. +All requests from routes are now routed to the `ApplicationActor`. diff --git a/docs/05-internals/design/adr/ADR-0006-use-zio-http.md b/docs/05-internals/design/adr/ADR-0006-use-zio-http.md index e443097fde..b338a50db0 100644 --- a/docs/05-internals/design/adr/ADR-0006-use-zio-http.md +++ b/docs/05-internals/design/adr/ADR-0006-use-zio-http.md @@ -8,13 +8,19 @@ Accepted ## Context -The current routes use the `Akka Http` library. Because of changes to the licensing of the `Akka` framework, we want to move away from using `Akka Http`. This also fits the general strategic decision to use ZIO for the backend. +The current routes use the `Akka Http` library. +Because of changes to the licensing of the `Akka` framework, +we want to move away from using `Akka Http`. +This also fits the general strategic decision to use ZIO for the backend. ## Decision -In preparation of the move from `Akka` to `ZIO`, it was decided that the routes should be ported to use the `ZIO HTTP` server / library instead of `Akka Http`. +In preparation of the move from `Akka` to `ZIO`, +it was decided that the routes should be ported to use the `ZIO HTTP` server / library instead of `Akka Http`. ## Consequences -In a first step only the routes are going to be ported, one by one, to use `ZIO HTTP` instead of being routed through `Akka Http`. The `Akka Actor System` still remains and will be dealt with later. +In a first step only the routes are going to be ported, one by one, +to use `ZIO HTTP` instead of being routed through `Akka Http`. +The `Akka Actor System` still remains and will be dealt with later. diff --git a/docs/05-internals/design/adr/ADR-0007-zio-fication-of-responders.md b/docs/05-internals/design/adr/ADR-0007-zio-fication-of-responders.md index d9bfde1c87..3bdaa537b0 100644 --- a/docs/05-internals/design/adr/ADR-0007-zio-fication-of-responders.md +++ b/docs/05-internals/design/adr/ADR-0007-zio-fication-of-responders.md @@ -54,10 +54,10 @@ the `RoutingActor`, this is the `AppRouterRelayingMessageHandler`. In the long run we will prefer to invoke methods on the respective ziofied services directly. 
This is now already possible for example with the `TriplestoreServive`, i.e. instead of -calling `MessageRelay#ask[SparqlSelectResul](SparqlSelectRequest)` it is much easier and more importantly *typesafe* to +calling `MessageRelay#ask[SparqlSelectResul](SparqlSelectRequest)` it is much easier and more importantly _typesafe_ to call `TriplestoreService#sparqlHttpSelect(String): UIO[SparqlSelectResult]`. -#### Communication between Akka based Responder and another Akka based Responder +### Communication between Akka based Responder and another Akka based Responder Nothing changes with regard to existing communication patterns: @@ -74,7 +74,7 @@ sequenceDiagram deactivate RoutingActor ``` -#### Communication between Akka based Responder and ziofied Responder +### Communication between Akka based Responder and ziofied Responder The `AkkaResponder` code remains unchanged and will still `ask` the `ActorRef` to the `RoutingActor`. The `RoutingActor` will forward the message to the `MessageRelay` and return its response to the `AkkaResponder`. @@ -97,7 +97,7 @@ sequenceDiagram deactivate RoutingActor ``` -#### Communication between ziofied Responder and Akka based Responder +### Communication between ziofied Responder and Akka based Responder The `AppRouterRelayingMessageHandler` route all messages which _do not_ implement the `RelayedMessage` trait to the `RoutingActor`. @@ -123,9 +123,9 @@ sequenceDiagram deactivate MessageRelay ``` -#### Communication between two ziofied Responders +### Communication between two ziofied Responders -#### Variant using the MessageRelay +### Variant using the MessageRelay ```mermaid sequenceDiagram @@ -141,7 +141,7 @@ sequenceDiagram deactivate MessageRelay ``` -#### Variant if other Responder is a direct dependency +### Variant if other Responder is a direct dependency ```mermaid sequenceDiagram @@ -154,10 +154,15 @@ sequenceDiagram ## Decision -In preparation of the move from `Akka` to `ZIO`, it was decided that the `Responders` should be ported to use return `ZIO`s and the `MessageRelay` instead of `Future`s and the `ActorRef` to the `RoutingActor`. +In preparation of the move from `Akka` to `ZIO`, +it was decided that the `Responders` should be ported to use return `ZIO`s and the `MessageRelay` +instead of `Future`s and the `ActorRef` to the `RoutingActor`. ## Consequences -In a first step only the `Responders` are going to be ported, one by one, to use the above pattern. The `Akka Actor System` still remains, will be used in the test and will be removed in a later step. -Due to the added indirections and the blocking nature of `Unsafe.unsafe(implicit u => r.unsafe.run(effect))` it is necessary to spin up more `RoutingActor` instances as otherwise deadlocks will occur. -This should not be a problem as any shared state, e.g. caches, is not held within the `RoutingActor` or one of its contained `Responder` instances. +In a first step only the `Responders` are going to be ported, one by one, to use the above pattern. +The `Akka Actor System` still remains, will be used in the test and will be removed in a later step. +Due to the added indirections and the blocking nature of `Unsafe.unsafe(implicit u => r.unsafe.run(effect))` +it is necessary to spin up more `RoutingActor` instances as otherwise deadlocks will occur. +This should not be a problem as any shared state, e.g. caches, +is not held within the `RoutingActor` or one of its contained `Responder` instances. 
diff --git a/docs/05-internals/design/adr/ADR-0008-replace-akka-with-pekko.md b/docs/05-internals/design/adr/ADR-0008-replace-akka-with-pekko.md index a5e1bc60a5..893d1c881f 100644 --- a/docs/05-internals/design/adr/ADR-0008-replace-akka-with-pekko.md +++ b/docs/05-internals/design/adr/ADR-0008-replace-akka-with-pekko.md @@ -5,34 +5,47 @@ Accepted -# Context +## Context -On 7. September 2022 Lightbend announced a [license change](https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka) for the Akka project, the TL;DR being that you will need a commercial license to use future versions of Akka (2.7+) in production if you exceed a certain revenue threshold. +On 7. September 2022 Lightbend announced a +[license change](https://www.lightbend.com/blog/why-we-are-changing-the-license-for-akka) for the Akka project, +the TL;DR being that you will need a commercial license to use future versions of Akka (2.7+) in production +if you exceed a certain revenue threshold. -*For now*, we have staid on Akka 2.6, the current latest version that is still available under the original license. Historically Akka has been incredibly stable, and combined with our -limited use of features, we did not expect this to be a problem. +*For now*, we have staid on Akka 2.6, the current latest version that is still available under the original license. +Historically Akka has been incredibly stable, and combined with our limited use of features, +we did not expect this to be a problem. However, the [last update of Akka 2.6 is announced to be in September 2023](https://www.lightbend.com/akka/license-faq). > **Will critical vulnerabilities and bugs be patched in 2.6.x?** -> Yes, critical security updates and critical bugs will be patched in Akka v2.6.x under the current Apache 2 license until September of 2023. +> Yes, critical security updates and critical bugs will be patched in Akka v2.6.x + under the current Apache 2 license until September of 2023. As a result, we will not receive further updates and we will never get support for Scala 3 for Akka. -# Proposal +## Proposal -[Apache Pekko](https://pekko.apache.org/) is based on the latest version of Akka in the v2.6.x series. It is currently an incubator project in the ASF. [All Akka modules currently in use in the dsp-api are already released and ported to pekko](https://pekko.apache.org/modules.html): [https://mvnrepository.com/artifact/org.apache.pekko](https://mvnrepository.com/artifact/org.apache.pekko) +[Apache Pekko](https://pekko.apache.org/) is based on the latest version of Akka in the v2.6.x series. +It is currently an incubator project in the ASF. +All Akka modules currently in use in the dsp-api are already released and ported to +[pekko](https://pekko.apache.org/modules.html): +[https://mvnrepository.com/artifact/org.apache.pekko](https://mvnrepository.com/artifact/org.apache.pekko) -The latest stable version [1.0.1](https://pekko.apache.org/docs/pekko/current/release-notes/index.html#1-0-1) is compatible with Akka v2.6.x series and meant to be a plug in replacement. +The latest stable version [1.0.1](https://pekko.apache.org/docs/pekko/current/release-notes/index.html#1-0-1) +is compatible with Akka v2.6.x series and meant to be a plug in replacement. Scala 3.3.0 is the minimum Scala 3 version supported. Scala 2.12 and 2.13 are still supported. 
-The migration guide: [https://pekko.apache.org/docs/pekko/current/project/migration-guides.html](https://pekko.apache.org/docs/pekko/current/project/migration-guides.html) +[The migration guide](https://pekko.apache.org/docs/pekko/current/project/migration-guides.html) -Our current migration to another http server implementation is on currently on hold but we might want to switch to Pekko so that we could receive security updates and bugfixes. +Our current migration to another http server implementation is currently on hold, +but we might want to switch to Pekko so that we could receive security updates and bugfixes. -The proof of concept implementation has been shared in the pull request [here](https://github.com/dasch-swiss/dsp-api/pull/2848), allowing for further testing and validation of the proposed switch to Pekko. +The proof of concept implementation has been shared in the pull request +[here](https://github.com/dasch-swiss/dsp-api/pull/2848), +allowing for further testing and validation of the proposed switch to Pekko. -# Decision +## Decision We replace Akka and Akka/Http with Apache Pekko. diff --git a/docs/05-internals/design/api-admin/administration.md b/docs/05-internals/design/api-admin/administration.md index 65f2a92c5e..a707fe08cf 100644 --- a/docs/05-internals/design/api-admin/administration.md +++ b/docs/05-internals/design/api-admin/administration.md @@ -13,7 +13,7 @@ The permissions API endpoint is described [here](../../../03-endpoints/api-admin The default permissions when a project is created are described [here](../../../03-endpoints/api-admin/projects.md#default-set-of-permissions-for-a-new-project). -DSP’s concept of access control is that permissions +DSP's concept of access control is that permissions can only be granted to groups and not to individual users. There are two distinct ways of granting permission. @@ -71,43 +71,41 @@ An object (resource / value) can grant the following permissions, which are stored in a compact format in a single string, which is the object of the predicate `knora-base:hasPermissions`: -1. **Restricted view permission (RV)**: Allows a restricted view of - the object, e.g. a view of an image with a watermark. -2. **View permission (V)**: Allows an unrestricted view of the - object. Having view permission on a resource only affects the - user’s ability to view information about the resource other than - its values. To view a value, she must have view permission on the - value itself. -3. **Modify permission (M)**: For values, this permission allows a - new version of a value to be created. For resources, this allows - the user to create a new value (as opposed to a new version of an - existing value), or to change information about the resource other - than its values. When he wants to make a new version of a value, - his permissions on the containing resource are not relevant. - However, when he wants to change the target of a link, the old - link must be deleted and a new one created, so he needs modify - permission on the resource. -4. **Delete permission (D)**: Allows the item to be marked as - deleted. -5. **Change rights permission (CR)**: Allows the permissions granted - by the object to be changed. +1. **Restricted view permission (RV)**: Allows a restricted view of + the object, e.g. a view of an image with a watermark. +2. **View permission (V)**: Allows an unrestricted view of the + object. Having view permission on a resource only affects the + user's ability to view information about the resource other than + its values. 
To view a value, she must have view permission on the + value itself. +3. **Modify permission (M)**: For values, this permission allows a + new version of a value to be created. For resources, this allows + the user to create a new value (as opposed to a new version of an + existing value), or to change information about the resource other + than its values. When he wants to make a new version of a value, + his permissions on the containing resource are not relevant. + However, when he wants to change the target of a link, the old + link must be deleted and a new one created, so he needs modify + permission on the resource. +4. **Delete permission (D)**: Allows the item to be marked as deleted. +5. **Change rights permission (CR)**: Allows the permissions granted by the object to be changed. Each permission in the above list implies all lower-numbered permissions. -A user’s permission level on a particular object is calculated in +A user's permission level on a particular object is calculated in the following way: -1. Make a list of the groups that the user belongs to, including - Creator and/or ProjectMember and/or ProjectAdmin if applicable. -2. Make a list of the permissions that she can obtain on the - object, by iterating over the permissions that the object - grants. For each permission, if she is in the specified group, - add the specified permission to the list of permissions she can - obtain. -3. From the resulting list, select the highest-level permission. -4. If the result is that she would have no permissions, give her - whatever permission *UnknownUser* would have. +1. Make a list of the groups that the user belongs to, including + Creator and/or ProjectMember and/or ProjectAdmin if applicable. +2. Make a list of the permissions that she can obtain on the + object, by iterating over the permissions that the object + grants. For each permission, if she is in the specified group, + add the specified permission to the list of permissions she can + obtain. +3. From the resulting list, select the highest-level permission. +4. If the result is that she would have no permissions, give her + whatever permission *UnknownUser* would have. The format of the object of `knora-base:hasPermissions` is as follows: @@ -123,10 +121,7 @@ follows: For example, if an object grants view permission to *unknown* and *known users*, and modify permission to *project members*, the resulting -permission literal would be: - : - - V knora-admin:UnknownUser,knora-admin:KnownUser|M knora-admin:ProjectMember +permission literal would be: `V knora-admin:UnknownUser,knora-admin:KnownUser|M knora-admin:ProjectMember`. ### Administrative Permissions @@ -140,75 +135,54 @@ predicate `knora-base:hasPermissions` attached to an instance of the `knora-admin:AdministrativePermission` class. The following permission values can be used: -1. Resource / Value Creation Permissions: - - 1) **ProjectResourceCreateAllPermission**: - - - description: gives the permission to create resources - inside the project. - - usage: used as a value for *knora-base:hasPermissions*. - - 2) **ProjectResourceCreateRestrictedPermission**: - - - description: gives restricted resource creation permission - inside the project. - - usage: used as a value for *knora-base:hasPermissions*. - - value: `RestrictedProjectResourceCreatePermission` - followed by a comma-separated list of *ResourceClasses* - the user should only be able to create instances of. - -2. 
Project Administration Permissions: - - 1) **ProjectAdminAllPermission**: - - - description: gives the user the permission to do anything - on project level, i.e. create new groups, modify all - existing groups (*group info*, *group membership*, - *resource creation permissions*, *project administration - permissions*, and *default permissions*). - - usage: used as a value for *knora-base:hasPermissions*. - - 2) **ProjectAdminGroupAllPermission**: - - - description: gives the user the permission to modify - *group info* and *group membership* on *all* groups - belonging to the project. - - usage: used as a value for the *knora-base:hasPermissions* - property. - - 3) **ProjectAdminGroupRestrictedPermission**: - - - description: gives the user the permission to modify - *group info* and *group membership* on *certain* groups - belonging to the project. - - usage: used as a value for *knora-base:hasPermissions* - - value: `ProjectGroupAdminRestrictedPermission` followed by - a comma-separated list of `knora-admin:UserGroup`. - - 4) **ProjectAdminRightsAllPermission**: - - - description: gives the user the permission to change the - *permissions* on all objects belonging to the project - (e.g., default permissions attached to groups and - permissions on objects). - - usage: used as a value for the *knora-base:hasPermissions* - property. +1. Resource / Value Creation Permissions: + 1) **ProjectResourceCreateAllPermission**: + - description: gives the permission to create resources inside the project. + - usage: used as a value for *knora-base:hasPermissions*. + 2) **ProjectResourceCreateRestrictedPermission**: + - description: gives restricted resource creation permission inside the project. + - usage: used as a value for *knora-base:hasPermissions*. + - value: `RestrictedProjectResourceCreatePermission` + followed by a comma-separated list of *ResourceClasses* + the user should only be able to create instances of. +2. Project Administration Permissions: + 1) **ProjectAdminAllPermission**: + - description: gives the user the permission to do anything + on project level, i.e. create new groups, modify all + existing groups (*group info*, *group membership*, + *resource creation permissions*, *project administration + permissions*, and *default permissions*). + - usage: used as a value for *knora-base:hasPermissions*. + 2) **ProjectAdminGroupAllPermission**: + - description: gives the user the permission to modify + *group info* and *group membership* on *all* groups + belonging to the project. + - usage: used as a value for the *knora-base:hasPermissions* property. + 3) **ProjectAdminGroupRestrictedPermission**: + - description: gives the user the permission to modify + *group info* and *group membership* on *certain* groups + belonging to the project. + - usage: used as a value for *knora-base:hasPermissions* + - value: `ProjectGroupAdminRestrictedPermission` followed by + a comma-separated list of `knora-admin:UserGroup`. + 4) **ProjectAdminRightsAllPermission**: + - description: gives the user the permission to change the + *permissions* on all objects belonging to the project + (e.g., default permissions attached to groups and + permissions on objects). + - usage: used as a value for the *knora-base:hasPermissions* property. The administrative permissions are stored in a compact format in a single string, which is the object of the predicate `knora-base:hasPermissions` attached to an instance of the `knora-admin:AdministrativePermission` class. 
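To make the compact format spelled out below more concrete, here is a minimal parsing sketch in Python.
The sample literal is hypothetical, and the actual DSP-API implementation is in Scala;
the sketch only assumes the rules given below
(permission name, optional comma-separated IRIs after a space, permissions separated by `|`).

```python
# Sketch of reading a compact knora-base:hasPermissions literal; not DSP-API code.
def parse_permission_literal(literal: str) -> dict[str, list[str]]:
    permissions: dict[str, list[str]] = {}
    for part in literal.split("|"):
        name, _, groups = part.strip().partition(" ")
        # The group/IRI list is optional; when present it is comma separated.
        permissions[name] = [g.strip() for g in groups.split(",") if g.strip()]
    return permissions

sample = "ProjectResourceCreateAllPermission|ProjectAdminGroupRestrictedPermission knora-admin:ProjectMember"
print(parse_permission_literal(sample))
# {'ProjectResourceCreateAllPermission': [],
#  'ProjectAdminGroupRestrictedPermission': ['knora-admin:ProjectMember']}
```

The same name, IRI list and `|` syntax is used for the object access permission literals shown earlier.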
- - The format of the object of `knora-base:hasPermissions` is as - follows: - - - Each permission is represented by the name given above. - - Each permission is followed by a space, then if applicable, by a - comma separated list of IRIs, as defined above. - - The IRIs of built-in values (e.g., built-in groups, resource - classes, etc.) are shortened using the knora-admin prefix - `knora-admin:`. - - Multiple permissions are separated by a vertical bar (|). +- The format of the object of `knora-base:hasPermissions` is as follows: + - Each permission is represented by the name given above. + - Each permission is followed by a space, then if applicable, by comma separated list of IRIs, as defined above. + - The IRIs of built-in values (e.g., built-in groups, resource + classes, etc.) are shortened using the knora-admin prefix `knora-admin:`. + - Multiple permissions are separated by a vertical bar (|). For example, if an administrative permission grants the `knora-admin:ProjectMember` group the permission to create all resources @@ -238,41 +212,26 @@ groups, resource classes and/or properties via instances of The default object access permissions correspond to the earlier described object access permission: -1. **Default Restricted View Permission (RV)**: - - - description: any object, created by a user inside a group - holding this permission, is restricted to carry this permission - - value: `RV` followed by a comma-separated list of - `knora-admin:UserGroup` - -2. **Default View Permission (V)**: - - - description: any object, created by a user inside a group - holding this permission, is restricted to carry this permission - - value: `V` followed by a comma-separated list of - `knora-admin:UserGroup` - -3. **Default Modify Permission (M)** accompanied by a list of groups. - - - description: any object, created by a user inside a group - holding this permission, is restricted to carry this permission - - value: `M` followed by a comma-separated list of - `knora-admin:UserGroup` - -4. **Default Delete Permission (D)** accompanied by a list of groups. - - - description: any object, created by a user inside a group - holding this permission, is restricted to carry this permission - - value: `D` followed by a comma-separated list of - `knora-admin:UserGroup` - -5. **Default Change Rights Permission (CR)** accompanied by a list of - groups. - - - description: any object, created by a user inside a group - holding this permission, is restricted to carry this permission - - value: `CR` followed by a comma-separated list of - `knora-admin:UserGroup` +1. **Default Restricted View Permission (RV)**: + - description: any object, created by a user inside a group + holding this permission, is restricted to carry this permission + - value: `RV` followed by a comma-separated list of `knora-admin:UserGroup` +2. **Default View Permission (V)**: + - description: any object, created by a user inside a group + holding this permission, is restricted to carry this permission + - value: `V` followed by a comma-separated list of `knora-admin:UserGroup` +3. **Default Modify Permission (M)** accompanied by a list of groups. + - description: any object, created by a user inside a group + holding this permission, is restricted to carry this permission + - value: `M` followed by a comma-separated list of `knora-admin:UserGroup` +4. **Default Delete Permission (D)** accompanied by a list of groups. 
+ - description: any object, created by a user inside a group + holding this permission, is restricted to carry this permission + - value: `D` followed by a comma-separated list of `knora-admin:UserGroup` +5. **Default Change Rights Permission (CR)** accompanied by a list of groups. + - description: any object, created by a user inside a group + holding this permission, is restricted to carry this permission + - value: `CR` followed by a comma-separated list of `knora-admin:UserGroup` A single instance of `knora-admin:DefaultObjectAccessPermission` must always reference a project, but can only reference **either** a group @@ -306,17 +265,14 @@ group. The following list is sorted by the permission precedence level in descending order: - - permissions on `knora-admin:ProjectAdmin` (highest level) - - permissions on resource classes and property combination (own - project) - - permissions on resource classes and property combination - (`knora-admin:SystemProject`) - - permissions on resource classes / properties (own project) - - permissions on resource classes / properties - (`knora-admin:SystemProject`) - - permissions on custom groups - - permissions on `knora-admin:ProjectMember` - - permissions on `knora-admin:KnownUser` (lowest level) +- permissions on `knora-admin:ProjectAdmin` (highest level) +- permissions on resource classes and property combination (own project) +- permissions on resource classes and property combination (`knora-admin:SystemProject`) +- permissions on resource classes / properties (own project) +- permissions on resource classes / properties (`knora-admin:SystemProject`) +- permissions on custom groups +- permissions on `knora-admin:ProjectMember` +- permissions on `knora-admin:KnownUser` (lowest level) The permissions on resource classes / properties are only relevant for default object access permissions. @@ -346,9 +302,9 @@ either of these groups, then the resulting permission will be `CR knora-admin:Cr The `knora-admin:SystemAdmin` group receives implicitly the following permissions: - - receives implicitly *ProjectAdminAllPermission* for all projects. - - receives implicitly *ProjectResourceCreateAllPermission* for all projects. - - receives implicitly *CR* on all objects from all projects. +- receives implicitly *ProjectAdminAllPermission* for all projects. +- receives implicitly *ProjectResourceCreateAllPermission* for all projects. +- receives implicitly *CR* on all objects from all projects. Theses permissions are baked into the system, and cannot be changed. @@ -360,17 +316,17 @@ by row headers), is permitted to perform on an *object* (represented by column headers). The different operation abbreviations used are defined as follows: - - *C*: *Create* - the subject inside the group is allowed to *create* the object. +- *C*: *Create* - the subject inside the group is allowed to *create* the object. - - *U*: *Update* - the subject inside the group is allowed to *update* the object. +- *U*: *Update* - the subject inside the group is allowed to *update* the object. - - *R*: *Read* - the subject inside the group is allowed to *read* **all** information about the object. +- *R*: *Read* - the subject inside the group is allowed to *read* **all** information about the object. - - *D*: *Delete* - the subject inside the group is allowed to *delete* the object. +- *D*: *Delete* - the subject inside the group is allowed to *delete* the object. - - *P*: *Permission* - the subject inside the group is allowed to change the *permissions* on the object. 
+- *P*: *Permission* - the subject inside the group is allowed to change the *permissions* on the object. - - *-*: *none* - none or not applicable +- *-*: *none* - none or not applicable | Built-In Group | Project | Group | User | Resource | Value | | ----------------- | ------- | ------- | ------------------- | ---------------------- | -------------------- | @@ -385,23 +341,23 @@ Default Permissions Matrix for new Projects The explicitly defined default permissions for a new project are as follows: - `knora-admin:ProjectAdmin` group: - - **Administrative Permissions:** - - *ProjectResourceCreateAllPermission*. - - *ProjectAdminAllPermission*. - - **Default Object Access Permissions:** - - *CR* for the *knora-admin:ProjectAdmin* group - - *D* for the *knora-admin:ProjectAdmin* group - - *M* for the *knora-admin:ProjectAdmin* group - - *V* for the *knora-admin:ProjectAdmin* group - - *RV* for the *knora-admin:ProjectAdmin* group + - **Administrative Permissions:** + - *ProjectResourceCreateAllPermission*. + - *ProjectAdminAllPermission*. + - **Default Object Access Permissions:** + - *CR* for the *knora-admin:ProjectAdmin* group + - *D* for the *knora-admin:ProjectAdmin* group + - *M* for the *knora-admin:ProjectAdmin* group + - *V* for the *knora-admin:ProjectAdmin* group + - *RV* for the *knora-admin:ProjectAdmin* group - The `knora-admin:ProjectMember` group: - - **Administrative Permissions:** - - *ProjectResourceCreateAllPermission*. - - **Default Object Access Permissions:** - - *M* for the *knora-admin:ProjectMember* group - - *V* for the *knora-admin:ProjectMember* group - - *RV* for the *knora-admin:ProjectMember* group + - **Administrative Permissions:** + - *ProjectResourceCreateAllPermission*. + - **Default Object Access Permissions:** + - *M* for the *knora-admin:ProjectMember* group + - *V* for the *knora-admin:ProjectMember* group + - *RV* for the *knora-admin:ProjectMember* group ## Basic Workflows involving Permissions @@ -525,6 +481,7 @@ either *knora-admin:forGroup*, *knora-admin:forResourceClass*, or knora-admin:forGroup ; knora-base:hasPermissions "ProjectGroupAdminRestrictedPermission "^^xsd:string . ``` + **Administrative permission restricting resource creation for a group:** ``` @@ -575,6 +532,7 @@ either *knora-admin:forGroup*, *knora-admin:forResourceClass*, or knora-base:hasPermissions "CR knora-admin:Creator,knora-admin:ProjectMember| V knora-admin:KnownUser,knora-admin:UnknownUser"^^xsd:string . ``` + **Default object access permission on a knora-admin property:** ``` diff --git a/docs/05-internals/design/api-v2/gravsearch.md b/docs/05-internals/design/api-v2/gravsearch.md index a4285cf03e..580f2ecd72 100644 --- a/docs/05-internals/design/api-v2/gravsearch.md +++ b/docs/05-internals/design/api-v2/gravsearch.md @@ -113,13 +113,15 @@ In `SearchResponderV2`, two queries are generated from a given Gravsearch query: The Gravsearch query is passed to `QueryTraverser` along with a query transformer. Query transformers are classes that implement traits supported by `QueryTraverser`: -- `WhereTransformer`: instructions how to convert statements in the WHERE clause of a SPARQL query (to generate the prequery's Where clause). +- `WhereTransformer`: instructions how to convert statements in the WHERE clause of a SPARQL query + (to generate the prequery's Where clause). To improve query performance, this trait defines the method `optimiseQueryPatterns` whose implementation can call private methods to optimise the generated SPARQL. 
For example, before transformation of statements in WHERE clause, query pattern orders must be optimised by moving `LuceneQueryPatterns` to the beginning and `isDeleted` statement patterns to the end of the WHERE clause. -- `AbstractPrequeryGenerator` (extends `WhereTransformer`): converts a Gravsearch query into a prequery; this one has two implementations for regular search queries and for count queries. +- `AbstractPrequeryGenerator` (extends `WhereTransformer`): converts a Gravsearch query into a prequery; + this one has two implementations for regular search queries and for count queries. - `SelectTransformer` (extends `WhereTransformer`): transforms a Select query into a Select query with simulated RDF inference. - `ConstructTransformer`: transforms a Construct query into a Construct query with simulated RDF inference. @@ -147,11 +149,21 @@ The transformation of the Gravsearch query's WHERE clause relies on the implemen `AbstractPrequeryGenerator` contains members whose state is changed during the iteration over the statements of the input query. They can then be used to create the converted query. -- `mainResourceVariable: Option[QueryVariable]`: SPARQL variable representing the main resource of the input query. Present in the prequery's SELECT clause. -- `dependentResourceVariables: mutable.Set[QueryVariable]`: a set of SPARQL variables representing dependent resources in the input query. Used in an aggregation function in the prequery's SELECT clause (see below). -- `dependentResourceVariablesGroupConcat: Set[QueryVariable]`: a set of SPARQL variables representing an aggregation of dependent resources. Present in the prequery's SELECT clause. -- `valueObjectVariables: mutable.Set[QueryVariable]`: a set of SPARQL variables representing value objects. Used in an aggregation function in the prequery's SELECT clause (see below). -- `valueObjectVarsGroupConcat: Set[QueryVariable]`: a set of SPARQL variables representing an aggregation of value objects. Present in the prequery's SELECT clause. +- `mainResourceVariable: Option[QueryVariable]`: + SPARQL variable representing the main resource of the input query. + Present in the prequery's SELECT clause. +- `dependentResourceVariables: mutable.Set[QueryVariable]`: + a set of SPARQL variables representing dependent resources in the input query. + Used in an aggregation function in the prequery's SELECT clause (see below). +- `dependentResourceVariablesGroupConcat: Set[QueryVariable]`: + a set of SPARQL variables representing an aggregation of dependent resources. + Present in the prequery's SELECT clause. +- `valueObjectVariables: mutable.Set[QueryVariable]`: + a set of SPARQL variables representing value objects. + Used in an aggregation function in the prequery's SELECT clause (see below). +- `valueObjectVarsGroupConcat: Set[QueryVariable]`: + a set of SPARQL variables representing an aggregation of value objects. + Present in the prequery's SELECT clause. The variables mentioned above are present in the prequery's result rows because they are part of the prequery's SELECT clause. @@ -213,7 +225,8 @@ The variable `?book` is bound to an IRI. Since more than one IRI could be bound to a variable representing a dependent resource, the results have to be aggregated. `GROUP_CONCAT` takes two arguments: a collection of strings (IRIs in our use case) and a separator (we use the non-printing Unicode character `INFORMATION SEPARATOR ONE`). 
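+
+As a rough illustration of the processing this implies, a concatenated column can later be decomposed
+again along the following lines (a minimal sketch only; the helper name and the separator constant are
+assumptions, not the identifiers actually used in the codebase):
+
+```scala
+// Sketch: split a GROUP_CONCAT column back into individual IRIs.
+// Assumes INFORMATION SEPARATOR ONE (U+001F) was used as the separator in the prequery.
+val groupConcatSeparator: Char = '\u001F'
+
+def splitConcatenatedIris(concatenated: String): Seq[String] =
+  if (concatenated.isEmpty) Seq.empty
+  else concatenated.split(groupConcatSeparator).toSeq
+```
+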
-When accessing `?book__Concat` in the prequery's results containing the IRIs of dependent resources, the string has to be split with the separator used in the aggregation function. +When accessing `?book__Concat` in the prequery's results containing the IRIs of dependent resources, +the string has to be split with the separator used in the aggregation function. The result is a collection of IRIs representing dependent resources. The same logic applies to value objects. @@ -229,10 +242,12 @@ will return more than one row per main resource. To deal with this situation, ### Main Query -The purpose of the main query is to get all requested information about the main resource, dependent resources, and value objects. +The purpose of the main query is to get all requested information +about the main resource, dependent resources, and value objects. The IRIs of those resources and value objects were returned by the prequery. Since the prequery only returns resources and value objects matching the input query's criteria, -the main query can specifically ask for more detailed information on these resources and values without having to reconsider these criteria. +the main query can specifically ask for more detailed information on these resources and values +without having to reconsider these criteria. #### Generating the Main Query @@ -278,10 +293,13 @@ to the maximum allowed page size, the predicate ## Inference -Gravsearch queries support a subset of RDFS reasoning (see [Inference](../../../03-endpoints/api-v2/query-language.md#inference) in the API documentation -on Gravsearch). This is implemented as follows: +Gravsearch queries support a subset of RDFS reasoning +(see [Inference](../../../03-endpoints/api-v2/query-language.md#inference) in the API documentation on Gravsearch). +This is implemented as follows: -To simulate RDF inference, the API expands all `rdfs:subClassOf` and `rdfs:subPropertyOf` statements using `UNION` statements for all subclasses and subproperties from the ontologies (equivalent to `rdfs:subClassOf*` and `rdfs:subPropertyOf*`). +To simulate RDF inference, the API expands all `rdfs:subClassOf` and `rdfs:subPropertyOf` statements +using `UNION` statements for all subclasses and subproperties from the ontologies +(equivalent to `rdfs:subClassOf*` and `rdfs:subPropertyOf*`). Similarly, the API replaces `knora-api:standoffTagHasStartAncestor` with `knora-base:standoffTagHasStartParent*`. @@ -293,7 +311,8 @@ Lucene queries to the beginning of the block in which they occur. ## Query Optimization by Topological Sorting of Statements -In Jena Fuseki, the performance of a query highly depends on the order of the query statements. For example, a query such as the one below: +In Jena Fuseki, the performance of a query highly depends on the order of the query statements. +For example, a query such as the one below: ```sparql PREFIX beol: @@ -342,7 +361,8 @@ The rest of the query then reads: ?letter beol:creationDate ?date . ``` -Since users cannot be expected to know about performance of triplestores in order to write efficient queries, an optimization method to automatically rearrange the statements of the given queries has been implemented. +Since users cannot be expected to know about performance of triplestores in order to write efficient queries, +an optimization method to automatically rearrange the statements of the given queries has been implemented. Upon receiving the Gravsearch query, the algorithm converts the query to a graph. 
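+
+A minimal, self-contained sketch of such a conversion and sorting step is shown below. It uses simplified
+hypothetical types rather than the actual statement pattern classes of the codebase, and it orders the
+nodes with Kahn's algorithm; the real implementation may differ in detail:
+
+```scala
+// Sketch: treat each statement pattern as a directed edge from its subject to its object
+// (predicate labels are omitted for brevity), then order the nodes topologically.
+// Assumes the resulting graph is acyclic.
+final case class Edge(from: String, to: String)
+
+def topologicalOrder(nodes: Set[String], edges: Seq[Edge]): Seq[String] = {
+  val inDegree = scala.collection.mutable.Map(nodes.toSeq.map(_ -> 0): _*)
+  edges.foreach(e => inDegree(e.to) += 1)
+
+  val queue  = scala.collection.mutable.Queue(nodes.filter(inDegree(_) == 0).toSeq: _*)
+  val sorted = scala.collection.mutable.ListBuffer.empty[String]
+
+  while (queue.nonEmpty) {
+    val node = queue.dequeue()
+    sorted += node
+    edges.filter(_.from == node).foreach { e =>
+      inDegree(e.to) -= 1
+      if (inDegree(e.to) == 0) queue.enqueue(e.to)
+    }
+  }
+  sorted.toSeq
+}
+```
+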
For each statement pattern, the subject of the statement is the origin node, the predicate is a directed edge, and the object is the target node. For the query above, this conversion would result in the following graph: @@ -411,6 +431,7 @@ UNION ?int knora-api:intValueAsInt 3 . } ``` + This would result in one graph per block of the `UNION`. Each graph is then sorted, and the statements of its block are rearranged according to the topological order of graph. This is the result: diff --git a/docs/05-internals/design/api-v2/json-ld.md b/docs/05-internals/design/api-v2/json-ld.md index 864883fd6d..ec8517051f 100644 --- a/docs/05-internals/design/api-v2/json-ld.md +++ b/docs/05-internals/design/api-v2/json-ld.md @@ -105,7 +105,9 @@ the string is invalid, `requireStringWithValidation` throws `BadRequestException It is also possible to get and validate an optional JSON-LD object member: ```scala -val maybeDateValueHasStartEra: Option[DateEraV2] = jsonLDObject.maybeStringWithValidation(OntologyConstants.KnoraApiV2Complex.DateValueHasStartEra, DateEraV2.parse) +val maybeDateValueHasStartEra: Option[DateEraV2] = jsonLDObject.maybeStringWithValidation( + OntologyConstants.KnoraApiV2Complex.DateValueHasStartEra, DateEraV2.parse +) ``` Here `JsonLDObject.maybeStringWithValidation` returns an `Option` that contains diff --git a/docs/05-internals/design/api-v2/overview.md b/docs/05-internals/design/api-v2/overview.md index 650c3eee3d..d8e5836d6d 100644 --- a/docs/05-internals/design/api-v2/overview.md +++ b/docs/05-internals/design/api-v2/overview.md @@ -7,23 +7,22 @@ ## General Principles - - DSP-API v2 requests and responses are RDF documents. Any API v2 - response can be returned as [JSON-LD](https://json-ld.org/spec/latest/json-ld/), - [Turtle](https://www.w3.org/TR/turtle/), - or [RDF/XML](https://www.w3.org/TR/rdf-syntax-grammar/). - - Each class or property used in a request or response has a - definition in an ontology, which Knora can serve. - - Response formats are reused for different requests whenever - possible, to minimise the number of different response formats a - client has to handle. For example, any request for one or more - resources (such as a search result, or a request for one specific - resource) returns a response in the same format. - - Response size is limited by design. Large amounts of data must be - retrieved by requesting small pages of data, one after the other. - - Responses that provide data are distinct from responses that provide - definitions (i.e. ontology entities). Data responses indicate which - types are used, and the client can request information about these - types separately. +- DSP-API v2 requests and responses are RDF documents. Any API v2 + response can be returned as [JSON-LD](https://json-ld.org/spec/latest/json-ld/), + [Turtle](https://www.w3.org/TR/turtle/), + or [RDF/XML](https://www.w3.org/TR/rdf-syntax-grammar/). +- Each class or property used in a request or response has a definition in an ontology, which Knora can serve. +- Response formats are reused for different requests whenever + possible, to minimise the number of different response formats a + client has to handle. For example, any request for one or more + resources (such as a search result, or a request for one specific + resource) returns a response in the same format. +- Response size is limited by design. Large amounts of data must be + retrieved by requesting small pages of data, one after the other. 
+- Responses that provide data are distinct from responses that provide + definitions (i.e. ontology entities). Data responses indicate which + types are used, and the client can request information about these + types separately. ## API Schemas @@ -31,13 +30,12 @@ The types used in the triplestore are not exposed directly in the API. Instead, they are mapped onto API 'schemas'. Two schemas are currently provided. - - A complex schema, which is suitable both for reading and for editing - data. The complex schema represents values primarily as complex - objects. - - A simple schema, which is suitable for reading data but not for - editing it. The simple schema facilitates interoperability between - DSP ontologies and non-DSP ontologies, since it represents - values primarily as literals. +- A complex schema, which is suitable both for reading and for editing + data. The complex schema represents values primarily as complex objects. +- A simple schema, which is suitable for reading data but not for + editing it. The simple schema facilitates interoperability between + DSP ontologies and non-DSP ontologies, since it represents + values primarily as literals. Each schema has its own type IRIs, which are derived from the ones used in the triplestore. For details of these different IRI formats, see @@ -169,14 +167,14 @@ Therefore, instances of `SmartIriImpl` created by different instances of There are in fact two instances of `StringFormatter`: - - one returned by `StringFormatter.getGeneralInstance` which is - available after Akka has started and has the API server's hostname - (and can therefore provide `SmartIri` instances capable of parsing - IRIs containing that hostname). This instance is used throughout the - DSP-API server. - - one returned by `StringFormatter.getInstanceForConstantOntologies`, - which is available before Akka has started, and is used only by the - hard-coded constant `knora-api` ontologies. +- one returned by `StringFormatter.getGeneralInstance` which is + available after Akka has started and has the API server's hostname + (and can therefore provide `SmartIri` instances capable of parsing + IRIs containing that hostname). This instance is used throughout the + DSP-API server. +- one returned by `StringFormatter.getInstanceForConstantOntologies`, + which is available before Akka has started, and is used only by the + hard-coded constant `knora-api` ontologies. This is the reason for the existence of the `SmartIri` trait, which is a top-level definition and has its own `equals` and `hashCode` methods. diff --git a/docs/05-internals/design/api-v2/query-design.md b/docs/05-internals/design/api-v2/query-design.md index b3baa2ee4d..843bed4d91 100644 --- a/docs/05-internals/design/api-v2/query-design.md +++ b/docs/05-internals/design/api-v2/query-design.md @@ -7,7 +7,10 @@ ## Inference -DSP-API does not require the triplestore to perform inference, as different triplestores implement inference quite differently, so that taking advantage of inference would require triplestore specific code, which is not well maintainable. Instead, the API simulates inference for each Gravsearch query, so that the expected results are returned. +DSP-API does not require the triplestore to perform inference, +as different triplestores implement inference quite differently, +so that taking advantage of inference would require triplestore specific code, which is not well maintainable. +Instead, the API simulates inference for each Gravsearch query, so that the expected results are returned. 
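+
+At its core, this simulation amounts to replacing a single class or property in a statement with the set
+of all of its known sub-entities. A minimal sketch of that closure computation (with an assumed helper
+name and simplified types, not the actual codebase API) might look like this:
+
+```scala
+// Sketch (assumed helper): collect a class together with all of its known subclasses,
+// so that a statement using the class can be expanded over all of them.
+// Assumes an acyclic subclass hierarchy.
+def classWithSubclasses(clazz: String, directSubclasses: Map[String, Set[String]]): Set[String] =
+  Set(clazz) ++ directSubclasses.getOrElse(clazz, Set.empty).flatMap(classWithSubclasses(_, directSubclasses))
+```
+
+The same idea applies to properties and their subproperties.
+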
Gravsearch queries currently need to do the following: @@ -38,13 +41,20 @@ This query: - Finds the Knora values attached to the resource, and returns each value along with the property that explicitly attaches it to the resource. -However, such a query is very inefficient. Instead, the API does inference on the query, so that the relevant information can be found in a timely manner. +However, such a query is very inefficient. +Instead, the API does inference on the query, so that the relevant information can be found in a timely manner. -For this, the query is analyzed to check which project ontologies are relevant to the query. If an ontology is not relevant to a query, then all class and property definitions of this ontology are disregarded for inference. +For this, the query is analyzed to check which project ontologies are relevant to the query. +If an ontology is not relevant to a query, +then all class and property definitions of this ontology are disregarded for inference. -Then, each statement that requires inference (i.e. that could be phrased with property path syntax, as described above) is cross-referenced with the relevant ontologies, to see which property/class definitions would fit the statement according to the rules of RDF inference. And each of those definitions is added to the query as a separate `UNION` statement. +Then, each statement that requires inference (i.e. that could be phrased with property path syntax, as described above) +is cross-referenced with the relevant ontologies, +to see which property/class definitions would fit the statement according to the rules of RDF inference. +And each of those definitions is added to the query as a separate `UNION` statement. -E.g.: Given the resource class `B` is a subclass of `A` and the property `hasY` is a subproperty of `hasX`, then the following query +E.g.: Given the resource class `B` is a subclass of `A` and the property `hasY` is a subproperty of `hasX`, +then the following query ```sparql SELECT { diff --git a/docs/05-internals/design/api-v2/sipi.md b/docs/05-internals/design/api-v2/sipi.md index 93a2aea5e2..0565340e7f 100644 --- a/docs/05-internals/design/api-v2/sipi.md +++ b/docs/05-internals/design/api-v2/sipi.md @@ -63,6 +63,7 @@ must be a JSON object containing: - `filename`: must be the same as the filename submitted in the URL ### clean_temp_dir.lua + The `clean_temp_dir.lua` script is available at Sipi's `clean_temp_dir` route. When called, it deletes old temporary files from `tmp` and (recursively) from any subdirectories. 
The maximum allowed age of temporary files can be set in Sipi's configuration file, diff --git a/docs/05-internals/design/api-v2/smart-iris.md b/docs/05-internals/design/api-v2/smart-iris.md index 65f8a91722..9cd9e22243 100644 --- a/docs/05-internals/design/api-v2/smart-iris.md +++ b/docs/05-internals/design/api-v2/smart-iris.md @@ -30,6 +30,7 @@ it to a `SmartIri` like this: ```scala val propertyIri: SmartIri = "http://0.0.0.0:3333/ontology/0001/anything/v2#hasInteger".toSmartIri ```` + If the IRI came from a request, use this method to throw a specific exception if the IRI is invalid: diff --git a/docs/05-internals/design/domain/class-and-property-hierarchies.md b/docs/05-internals/design/domain/class-and-property-hierarchies.md index e5a1d80803..b03c76ec4f 100644 --- a/docs/05-internals/design/domain/class-and-property-hierarchies.md +++ b/docs/05-internals/design/domain/class-and-property-hierarchies.md @@ -9,8 +9,7 @@ For the sake of comprehensibility, it was necessary to split the ontology into m even though this obliterates the evident connections between those diagrams. !!! Note "Legend" - - - dotted lines: the boxes are copies from another diagram. + dotted lines: the boxes are copies from another diagram. ### Resources @@ -529,7 +528,6 @@ flowchart LR ### Resource Triples Structure !!! Note "Legend" - - round boxes: resources - square boxes: properties - hexagonal boxes: resoures that are duplicated for graphical reasons diff --git a/docs/05-internals/design/domain/domain-entities-and-relations.md b/docs/05-internals/design/domain/domain-entities-and-relations.md index 92179f252e..a0ee2e44bd 100644 --- a/docs/05-internals/design/domain/domain-entities-and-relations.md +++ b/docs/05-internals/design/domain/domain-entities-and-relations.md @@ -7,7 +7,6 @@ as implicitly modelled by the ontologies, code, validations and documentation of The following document aims to give a higher level overview of said domain. !!! Note - - As a high level overview, this document does not aim for exhaustivity. - Naming is tried to be kept as simple as possible, while trying to consolidate different naming schemes diff --git a/docs/05-internals/design/principles/consistency-checking.md b/docs/05-internals/design/principles/consistency-checking.md index ecc708fe55..19fd5a12ba 100644 --- a/docs/05-internals/design/principles/consistency-checking.md +++ b/docs/05-internals/design/principles/consistency-checking.md @@ -16,56 +16,55 @@ as far as is practical, in a triplestore-independent way (see useful to enforce consistency constraints in the triplestore itself, for two reasons: -1. To prevent inconsistencies resulting from bugs in the DSP-API - server. -2. To prevent users from inserting inconsistent data directly into the - triplestore, bypassing Knora. +1. To prevent inconsistencies resulting from bugs in the DSP-API server. +2. To prevent users from inserting inconsistent data directly into the triplestore, bypassing Knora. The design of the `knora-base` ontology supports two ways of specifying constraints on data (see [knora-base: Consistency Checking](../../../02-dsp-ontologies/knora-base.md#consistency-checking) for details): -1. A property definition should specify the types that are allowed as - subjects and objects of the property, using - `knora-base:subjectClassConstraint` and (if it is an object - property) `knora-base:objectClassConstraint`. Every subproperty of - `knora-base:hasValue` or a `knora-base:hasLinkTo` (i.e. 
every - property of a resource that points to a `knora-base:Value` or to - another resource) is required have this constraint, because the - DSP-API server relies on it to know what type of object to expect - for the property. Use of `knora-base:subjectClassConstraint` is - recommended but not required. -2. A class definition should use OWL cardinalities (see [OWL 2 Quick Reference Guide](https://www.w3.org/TR/owl2-quick-reference/)) - to indicate the properties that instances of the class are allowed to - have, and to constrain the number of objects that each property can - have. Subclasses of `knora-base:Resource` are required to have a - cardinality for each subproperty of `knora-base:hasValue` or a - `knora-base:hasLinkTo` that resources of that class can have. +1. A property definition should specify the types that are allowed as + subjects and objects of the property, using + `knora-base:subjectClassConstraint` and (if it is an object + property) `knora-base:objectClassConstraint`. Every subproperty of + `knora-base:hasValue` or a `knora-base:hasLinkTo` (i.e. every + property of a resource that points to a `knora-base:Value` or to + another resource) is required have this constraint, because the + DSP-API server relies on it to know what type of object to expect + for the property. Use of `knora-base:subjectClassConstraint` is + recommended but not required. +2. A class definition should use OWL cardinalities + (see [OWL 2 Quick Reference Guide](https://www.w3.org/TR/owl2-quick-reference/)) + to indicate the properties that instances of the class are allowed to + have, and to constrain the number of objects that each property can + have. Subclasses of `knora-base:Resource` are required to have a + cardinality for each subproperty of `knora-base:hasValue` or a + `knora-base:hasLinkTo` that resources of that class can have. Specifically, consistency checking should prevent the following: - - An object property or datatype property has a subject of the wrong - class, or an object property has an object of the wrong class - (GraphDB's consistency checke cannot check the types of literals). - - An object property has an object that does not exist (i.e. the - object is an IRI that is not used as the subject of any statements - in the repository). This can be treated as if the object is of the - wrong type (i.e. it can cause a violation of - `knora-base:objectClassConstraint`, because there is no compatible - `rdf:type` statement for the object). - - A class has `owl:cardinality 1` or `owl:minCardinality 1` on an - object property or datatype property, and an instance of the class - does not have that property. - - A class has `owl:cardinality 1` or `owl:maxCardinality 1` on an - object property or datatype property, and an instance of the class - has more than one object for that property. - - An instance of `knora-base:Resource` has an object property pointing - to a `knora-base:Value` or to another `Resource`, and its class has - no cardinality for that property. - - An instance of `knora-base:Value` has a subproperty of - `knora-base:valueHas`, and its class has no cardinality for that - property. - - A datatype property has an empty string as an object. +- An object property or datatype property has a subject of the wrong + class, or an object property has an object of the wrong class + (GraphDB's consistency checke cannot check the types of literals). +- An object property has an object that does not exist (i.e. 
the + object is an IRI that is not used as the subject of any statements + in the repository). This can be treated as if the object is of the + wrong type (i.e. it can cause a violation of + `knora-base:objectClassConstraint`, because there is no compatible + `rdf:type` statement for the object). +- A class has `owl:cardinality 1` or `owl:minCardinality 1` on an + object property or datatype property, and an instance of the class + does not have that property. +- A class has `owl:cardinality 1` or `owl:maxCardinality 1` on an + object property or datatype property, and an instance of the class + has more than one object for that property. +- An instance of `knora-base:Resource` has an object property pointing + to a `knora-base:Value` or to another `Resource`, and its class has + no cardinality for that property. +- An instance of `knora-base:Value` has a subproperty of + `knora-base:valueHas`, and its class has no cardinality for that + property. +- A datatype property has an empty string as an object. Cardinalities in base classes are inherited by derived classes. Derived classes can override inherited cardinalities by making them more @@ -75,19 +74,19 @@ the original cardinality. Instances of `Resource` and `Value` can be marked as deleted, using the property `isDeleted`. This must be taken into account as follows: - - With `owl:cardinality 1` or `owl:maxCardinality 1`, if the object of - the property can be marked as deleted, the property must not have - more than one object that has not been marked as deleted. In other - words, it's OK if there is more than one object, as long only one of - them has `knora-base:isDeleted false`. - - With `owl:cardinality 1` or `owl:minCardinality 1`, the property - must have an object, but it's OK if the property's only object is - marked as deleted. We allow this because the subject and object may - have different owners, and it may not be feasible for them to - coordinate their work. The owner of the object should always be able - to mark it as deleted. (It could be useful to notify the owner of - the subject when this happens, but that is beyond the scope of - consistency checking.) +- With `owl:cardinality 1` or `owl:maxCardinality 1`, if the object of + the property can be marked as deleted, the property must not have + more than one object that has not been marked as deleted. In other + words, it's OK if there is more than one object, as long only one of + them has `knora-base:isDeleted false`. +- With `owl:cardinality 1` or `owl:minCardinality 1`, the property + must have an object, but it's OK if the property's only object is + marked as deleted. We allow this because the subject and object may + have different owners, and it may not be feasible for them to + coordinate their work. The owner of the object should always be able + to mark it as deleted. (It could be useful to notify the owner of + the subject when this happens, but that is beyond the scope of + consistency checking.) ## Design @@ -160,14 +159,14 @@ Consistency: The differences between inference rules and consistency rules are: - - A consistency rule begins with `Consistency` instead of `Id`. - - In a consistency rule, the consequences are optional. Instead of - representing statements to be inferred, they represent statements - that must exist if the premises are satisfied. In other words, if - the premises are satisfied and the consequences are not found, the - rule is violated. 
- - If a consistency rule doesn't specify any consequences, and the - premises are satisfied, the rule is violated. +- A consistency rule begins with `Consistency` instead of `Id`. +- In a consistency rule, the consequences are optional. Instead of + representing statements to be inferred, they represent statements + that must exist if the premises are satisfied. In other words, if + the premises are satisfied and the consequences are not found, the + rule is violated. +- If a consistency rule doesn't specify any consequences, and the + premises are satisfied, the rule is violated. Rules use variable names for subjects, predicates, and objects, and they can use actual property names. @@ -234,12 +233,12 @@ whether `i` is actually something that can be marked as deleted. However, this implementation would be much too slow. We therefore use two optimisations suggested by Ontotext: -1. Add custom inference rules to make tables (i.e. named graphs) of - pre-calculated information about the cardinalities on properties of - subjects, and use those tables to simplify the consistency rules. -2. Use the `[Cut]` constraint to avoid generating certain redundant - compiled rules (see [Entailment - rules](http://graphdb.ontotext.com/documentation/standard/reasoning.html#entailment-rules)). +1. Add custom inference rules to make tables (i.e. named graphs) of + pre-calculated information about the cardinalities on properties of + subjects, and use those tables to simplify the consistency rules. +2. Use the `[Cut]` constraint to avoid generating certain redundant + compiled rules (see [Entailment + rules](http://graphdb.ontotext.com/documentation/standard/reasoning.html#entailment-rules)). For example, to construct a table of subjects belonging to classes that have `owl:maxCardinality 1` on some property `p`, we use the following diff --git a/docs/05-internals/development/building-and-running.md b/docs/05-internals/development/building-and-running.md index eae03d7232..d4da87cb3c 100644 --- a/docs/05-internals/development/building-and-running.md +++ b/docs/05-internals/development/building-and-running.md @@ -19,51 +19,51 @@ With Docker installed and configured, 1. Run the following: - ``` - $ make init-db-test - ``` + ```bash + make init-db-test + ``` to create the knora-test repository and initialize it with loading some test data into the triplestore (Fuseki). 1. Start the entire knora-stack (fuseki (db), sipi, api, salsah1) with the following command: - ``` - $ make stack-up - ``` + ```bash + make stack-up + ``` **_Note_**: To delete the existing containers and for a clean start, before creating the knora-test repository explained in the first step above, run the following: -``` -$ make stack-down-delete-volumes +```bash +make stack-down-delete-volumes ``` This stops the knora-stack and deletes any created volumes (deletes the database!). To only shut down the Knora-Stack without deleting the containers: -``` -$ make stack-down +```bash +make stack-down ``` To restart the knora-api use the following command: -``` -$ make stack-restart-api +```bash +make stack-restart-api ``` If a change is made to knora-api code, only its image needs to be rebuilt. In that case, use -``` -$ make stack-up-fast +```bash +make stack-up-fast ``` which starts the knora-stack by skipping rebuilding most of the images (only api image is rebuilt). 
To work on Metadata, use -``` -$ make stack-up-with-metadata +```bash +make stack-up-with-metadata ``` which will put three example metadata sets to the projects `anything`, `images` and `dokubib`. @@ -95,8 +95,8 @@ triplestore. Note that, you can also print out the log information directly from the command line. For example, the same logs of the database container can be printed out using the following command: -``` -$ make stack-logs-db +```bash +make stack-logs-db ``` Similarly, the logs of the other containers can be printed out by running make with `stack-logs-api` @@ -104,14 +104,14 @@ or `stack-logs-sipi`. These commands print out and follow the logs, to only print the logs out without following, use `-no-follow` version of the commands for example: - ``` - $ make stack-logs-db-no-follow - ``` +```bash +make stack-logs-db-no-follow +``` Lastly, to print out the entire logs of the running knora-stack, use -``` -$ make stack-logs +```bash +make stack-logs ``` With the Docker plugin installed, you can attach a terminal to the docker container within VS Code. This will stream the @@ -126,16 +126,16 @@ attaching a shell to the container. To run all test targets, use the following in the command line: -``` -$ make test-all +```bash +make test-all ``` To run a single test from the command line, for example `SearchV2R2RSpec`, run the following: - ```bash - $ sbt " webapi / testOnly *SearchV2R2RSpec* " - ``` +```bash +sbt " webapi / testOnly *SearchV2R2RSpec* " +``` _**Note:** to run tests, the api container must be stopped first!_ @@ -143,40 +143,40 @@ _**Note:** to run tests, the api container must be stopped first!_ First, you need to install the requirements through: -``` -$ make docs-install-requirements +```bash +make docs-install-requirements ``` Then, to build docs into the local `site` folder, run the following command: -``` -$ make docs-build +```bash +make docs-build ``` At this point, you can serve the docs to view them locally using -``` -$ make docs-serve +```bash +make docs-serve ``` Lastly, to build and publish docs to Github Pages, use -``` -$ make docs-publish +```bash +make docs-publish ``` ## Build and Publish Docker Images To build and publish all Docker images locally -``` -$ make docker-build +```bash +make docker-build ``` To publish all Docker images to Dockerhub -``` -$ make docker-publish +```bash +make docker-publish ``` ## Continuous Integration diff --git a/docs/05-internals/development/docker-cheat-sheet.md b/docs/05-internals/development/docker-cheat-sheet.md index 61f863eb0b..a7f2890671 100644 --- a/docs/05-internals/development/docker-cheat-sheet.md +++ b/docs/05-internals/development/docker-cheat-sheet.md @@ -10,16 +10,16 @@ A complete cheat sheet can be found ## Lifecycle - - [docker create](https://docs.docker.com/engine/reference/commandline/create) - creates a container but does not start it. - - [docker run](https://docs.docker.com/engine/reference/commandline/run) - creates and starts a container in one operation. - - [docker rename](https://docs.docker.com/engine/reference/commandline/rename/) - allows the container to be renamed. - - [docker rm](https://docs.docker.com/engine/reference/commandline/rm) - deletes a container. - - [docker update](https://docs.docker.com/engine/reference/commandline/update/) - updates a container's resource limits. +- [docker create](https://docs.docker.com/engine/reference/commandline/create) + creates a container but does not start it. 
+- [docker run](https://docs.docker.com/engine/reference/commandline/run) + creates and starts a container in one operation. +- [docker rename](https://docs.docker.com/engine/reference/commandline/rename/) + allows the container to be renamed. +- [docker rm](https://docs.docker.com/engine/reference/commandline/rm) + deletes a container. +- [docker update](https://docs.docker.com/engine/reference/commandline/update/) + updates a container's resource limits. If you want a transient container, `docker run --rm` will remove the container after it stops. @@ -29,36 +29,36 @@ If you want to map a directory on the host to a docker container, ## Starting and Stopping - - [docker start](https://docs.docker.com/engine/reference/commandline/start) - starts a container so it is running. - - [docker stop](https://docs.docker.com/engine/reference/commandline/stop) stops a - running container. - - [docker restart](https://docs.docker.com/engine/reference/commandline/restart) - stops and starts a container. - - [docker pause](https://docs.docker.com/engine/reference/commandline/pause/) - pauses a running container, "freezing" it in place. - - [docker attach](https://docs.docker.com/engine/reference/commandline/attach) - will connect to a running container. +- [docker start](https://docs.docker.com/engine/reference/commandline/start) + starts a container so it is running. +- [docker stop](https://docs.docker.com/engine/reference/commandline/stop) stops a + running container. +- [docker restart](https://docs.docker.com/engine/reference/commandline/restart) + stops and starts a container. +- [docker pause](https://docs.docker.com/engine/reference/commandline/pause/) + pauses a running container, "freezing" it in place. +- [docker attach](https://docs.docker.com/engine/reference/commandline/attach) + will connect to a running container. ## Info - - [docker ps](https://docs.docker.com/engine/reference/commandline/ps) - shows running containers. - - [docker logs](https://docs.docker.com/engine/reference/commandline/logs) gets - logs from container. (You can use a custom log driver, but logs is - only available for json-file and journald in 1.10) - - [docker inspect](https://docs.docker.com/engine/reference/commandline/inspect) - looks at all the info on a container (including IP address). - - [docker events](https://docs.docker.com/engine/reference/commandline/events) - gets events from container. - - [docker port](https://docs.docker.com/engine/reference/commandline/port) shows - public facing port of container. - - [docker top](https://docs.docker.com/engine/reference/commandline/top) - shows running processes in container. - - [docker stats](https://docs.docker.com/engine/reference/commandline/stats) shows - containers' resource usage statistics. - - [docker diff](https://docs.docker.com/engine/reference/commandline/diff) shows - changed files in the container's FS. +- [docker ps](https://docs.docker.com/engine/reference/commandline/ps) + shows running containers. +- [docker logs](https://docs.docker.com/engine/reference/commandline/logs) gets + logs from container. (You can use a custom log driver, but logs is + only available for json-file and journald in 1.10) +- [docker inspect](https://docs.docker.com/engine/reference/commandline/inspect) + looks at all the info on a container (including IP address). +- [docker events](https://docs.docker.com/engine/reference/commandline/events) + gets events from container. 
+- [docker port](https://docs.docker.com/engine/reference/commandline/port) shows + public facing port of container. +- [docker top](https://docs.docker.com/engine/reference/commandline/top) + shows running processes in container. +- [docker stats](https://docs.docker.com/engine/reference/commandline/stats) shows + containers' resource usage statistics. +- [docker diff](https://docs.docker.com/engine/reference/commandline/diff) shows + changed files in the container's FS. `docker ps -a` shows running and stopped containers. @@ -66,15 +66,13 @@ If you want to map a directory on the host to a docker container, ## Executing Commands - - [docker exec](https://docs.docker.com/engine/reference/commandline/exec) to - execute a command in container. +- [docker exec](https://docs.docker.com/engine/reference/commandline/exec) to + execute a command in container. To enter a running container, attach a new shell process to a running container called foo, use: `docker exec -it foo /bin/bash`. ## Images - - [docker images](https://docs.docker.com/engine/reference/commandline/images) - shows all images. - - [docker build](https://docs.docker.com/engine/reference/commandline/build) - creates image from Dockerfile. +- [docker images](https://docs.docker.com/engine/reference/commandline/images) shows all images. +- [docker build](https://docs.docker.com/engine/reference/commandline/build) creates image from Dockerfile. diff --git a/docs/05-internals/development/docker-compose.md b/docs/05-internals/development/docker-compose.md index a3fd875c44..fdc68410af 100644 --- a/docs/05-internals/development/docker-compose.md +++ b/docs/05-internals/development/docker-compose.md @@ -10,8 +10,8 @@ Webapi running each in its own Docker container. To run the whole stack: -``` -$ make stack-up +```bash +make stack-up ``` For additional information please see the [Docker Compose documentation](https://docs.docker.com/compose/) diff --git a/docs/05-internals/development/generating-client-test-data.md b/docs/05-internals/development/generating-client-test-data.md index 358bd0743b..5acbba2fe4 100644 --- a/docs/05-internals/development/generating-client-test-data.md +++ b/docs/05-internals/development/generating-client-test-data.md @@ -23,7 +23,7 @@ with the list in `webapi/scripts/expected-client-test-data.txt`. To generate client test data, type: -``` +```bash make client-test-data ``` @@ -31,6 +31,6 @@ When the tests have finished running, you will find the file `client-test-data.zip` in the current directory. Then, run this script to update the list of expected test data files: -``` +```bash webapi/scripts/update-expected-client-test-data.sh client-test-data.zip ``` diff --git a/docs/05-internals/development/overview.md b/docs/05-internals/development/overview.md index 44d0ae202f..00dbdb8586 100644 --- a/docs/05-internals/development/overview.md +++ b/docs/05-internals/development/overview.md @@ -17,7 +17,9 @@ installation of Knora. The different parts are: ## Knora Github Repository - $ git clone https://github.com/dasch-swiss/dsp-api +```bash +git clone https://github.com/dasch-swiss/dsp-api +``` ## Triplestore @@ -36,7 +38,7 @@ Kakadu distribution. To build the image, and push it to the docker hub, follow the following steps: -``` +```bash $ git clone https://github.com/dhlab-basel/docker-sipi (copy the Kakadu distribution ``v7_8-01382N.zip`` to the ``docker-sipi`` directory) $ docker build -t daschswiss/sipi @@ -55,23 +57,23 @@ organisation. 
To use the docker image stored locally or on the docker hub repository type: -``` -$ docker run --name sipi -d -p 1024:1024 daschswiss/sipi +```bash +docker run --name sipi -d -p 1024:1024 daschswiss/sipi ``` This will create and start a docker container with the `daschswiss/sipi` image in the background. The default behaviour is to start Sipi by calling the following command: -``` -$ /sipi/local/bin/sipi -config /sipi/config/sipi.test-config.lua +```bash +/sipi/local/bin/sipi -config /sipi/config/sipi.test-config.lua ``` To override this default behaviour, start the container by supplying another config file: -``` -$ docker run --name sipi \ +```bash +docker run --name sipi \ -d \ -p 1024:1024 \ daschswiss/sipi \ @@ -81,8 +83,8 @@ $ docker run --name sipi \ You can also mount a directory (the local directory in this example), and use a config file that is outside of the docker container: -``` -$ docker run --name sipi \ +```bash +docker run --name sipi \ -d \ -p 1024:1024 \ -v $PWD:/localdir \ diff --git a/docs/05-internals/development/testing.md b/docs/05-internals/development/testing.md index 52839dc11b..1ae6faeb3d 100644 --- a/docs/05-internals/development/testing.md +++ b/docs/05-internals/development/testing.md @@ -25,7 +25,8 @@ sbt test ## How to Write and Run Integration Tests -[Mostly you should consider writing unit tests](https://www.youtube.com/watch?v=VDfX44fZoMc). These can be executed fast and help developers more in their daily work. +[Mostly you should consider writing unit tests](https://www.youtube.com/watch?v=VDfX44fZoMc). +These can be executed fast and help developers more in their daily work. You might need to create an integration test because: diff --git a/docs/05-internals/development/vscode-config.md b/docs/05-internals/development/vscode-config.md index 2152ea0b20..8f5eb3aa8c 100644 --- a/docs/05-internals/development/vscode-config.md +++ b/docs/05-internals/development/vscode-config.md @@ -2,7 +2,9 @@ To have full functionality, the [Scala Metals](https://scalameta.org/metals/) plugin should be installed. -Additionally, a number of plugins can be installed for convenience, but are not required. Those include but are by no means limited to: +Additionally, a number of plugins can be installed for convenience, but are not required. +Those include but are by no means limited to: + - Docker - to attach to running docker containers - Stardog RDF grammar - TTL syntax highlighting - Lua diff --git a/docs/06-sipi/setup-sipi-for-dsp-api.md b/docs/06-sipi/setup-sipi-for-dsp-api.md index ea83563d6a..0f62e0b6fc 100644 --- a/docs/06-sipi/setup-sipi-for-dsp-api.md +++ b/docs/06-sipi/setup-sipi-for-dsp-api.md @@ -14,10 +14,10 @@ building from source), or the published [docker image](https://hub.docker.com/r/ can be used. 
To start Sipi, run the following command from inside the `sipi/` folder: -``` -$ export DOCKERHOST=LOCAL_IP_ADDRESS -$ docker image rm --force daschswiss/sipi:main // deletes cached image and needs only to be used when newer image is available on dockerhub -$ docker run --rm -it --add-host webapihost:$DOCKERHOST -v $PWD/config:/sipi/config -v $PWD/scripts:/sipi/scripts -v /tmp:/tmp -v $HOME:$HOME -p 1024:1024 daschswiss/sipi:main --config=/sipi/config/sipi.docker-config.lua +```bash +export DOCKERHOST=LOCAL_IP_ADDRESS +docker image rm --force daschswiss/sipi:main // deletes cached image and needs only to be used when newer image is available on dockerhub +docker run --rm -it --add-host webapihost:$DOCKERHOST -v $PWD/config:/sipi/config -v $PWD/scripts:/sipi/scripts -v /tmp:/tmp -v $HOME:$HOME -p 1024:1024 daschswiss/sipi:main --config=/sipi/config/sipi.docker-config.lua ``` where `LOCAL_IP_ADDRESS` is the IP of the host running `DSP-API`. @@ -53,10 +53,10 @@ If you just want to test Sipi with DSP-API without serving the actual files (e.g. when executing browser tests), you can simply start Sipi like this: -``` -$ export DOCKERHOST=LOCAL_IP_ADDRESS -$ docker image rm --force daschswiss/sipi:main // deletes cached image and needs only to be used when newer image is available on dockerhub -$ docker run --rm -it --add-host webapihost:$DOCKERHOST -v $PWD/config:/sipi/config -v $PWD/scripts:/sipi/scripts -v /tmp:/tmp -v $HOME:$HOME -p 1024:1024 daschswiss/sipi:main --config=/sipi/config/sipi.docker-test-config.lua +```bash +export DOCKERHOST=LOCAL_IP_ADDRESS +docker image rm --force daschswiss/sipi:main // deletes cached image and needs only to be used when newer image is available on dockerhub +docker run --rm -it --add-host webapihost:$DOCKERHOST -v $PWD/config:/sipi/config -v $PWD/scripts:/sipi/scripts -v /tmp:/tmp -v $HOME:$HOME -p 1024:1024 daschswiss/sipi:main --config=/sipi/config/sipi.docker-test-config.lua ``` Then always the same test file will be served which is delivered with Sipi. In test mode, Sipi will diff --git a/docs/Readme.md b/docs/Readme.md index 9b568b13af..9eebf7fec9 100644 --- a/docs/Readme.md +++ b/docs/Readme.md @@ -19,8 +19,8 @@ make docs-serve # serve it locally You will need [Graphviz](http://www.graphviz.org/). On macOS: - ```shell - brew install graphviz - ``` - - On Linux, use your distribution's package manager. +```shell +brew install graphviz +``` + +On Linux, use your distribution's package manager. diff --git a/docs/architecture/README.md b/docs/architecture/README.md index d0a7fb913f..27d5936322 100644 --- a/docs/architecture/README.md +++ b/docs/architecture/README.md @@ -3,7 +3,7 @@ ## Installation ```bash -$ brew install adr-tools +brew install adr-tools ``` ## Usage @@ -11,5 +11,5 @@ $ brew install adr-tools Run the following command from the root directory to start the C4 model browser: ```bash -$ make structurizer +make structurizer ``` diff --git a/docs/architecture/docs/http-request-flow-with-events.md b/docs/architecture/docs/http-request-flow-with-events.md index e8c697ae1b..037c744243 100644 --- a/docs/architecture/docs/http-request-flow-with-events.md +++ b/docs/architecture/docs/http-request-flow-with-events.md @@ -1,6 +1,7 @@ ## Example for an HTTP Request Flow with Events ### Create a User + ```mermaid sequenceDiagram autonumber