diff --git a/docs/.gitignore b/docs/.gitignore
index 73dceabf77..f0ad786fbc 100644
--- a/docs/.gitignore
+++ b/docs/.gitignore
@@ -1,3 +1,4 @@
+_build
Pipfile.lock
*.aux
*.log
diff --git a/docs/conf.py b/docs/conf.py
index 36e4f2f107..ab279ba1f9 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -48,20 +48,16 @@
# The master toctree document.
master_doc = "index"
+# This is overridden by readthedocs with the version tag anyway
+version = "devel"
+# To avoid repetition in the title, we set this to an empty string.
+release = ""
+
# General information about the project.
-project = "PostgREST"
+project = "PostgREST " + version
author = "Joe Nelson, Steve Chavez"
copyright = "2017, " + author
-# The version info for the project you're documenting, acts as replacement for
-# |version| and |release|, also used in various other places throughout the
-# built documents.
-#
-# The short X.Y version.
-version = "12.1"
-# The full version, including alpha/beta/rc tags.
-release = "12.1-dev"
-
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
@@ -117,7 +113,7 @@
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
-# html_theme_options = {}
+html_theme_options = {"display_version": False}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
diff --git a/docs/explanations/install.rst b/docs/explanations/install.rst
index b2935a9375..9fcb8b8bb4 100644
--- a/docs/explanations/install.rst
+++ b/docs/explanations/install.rst
@@ -163,14 +163,15 @@ If you want to have a visual overview of your API in your browser you can add sw
.. code-block:: yaml
- swagger:
- image: swaggerapi/swagger-ui
- ports:
- - "8080:8080"
- expose:
- - "8080"
- environment:
- API_URL: http://localhost:3000/
+ # in services:
+ swagger:
+ image: swaggerapi/swagger-ui
+ ports:
+ - "8080:8080"
+ expose:
+ - "8080"
+ environment:
+ API_URL: http://localhost:3000/
With this you can see the swagger-ui in your browser on port 8080.
@@ -181,10 +182,6 @@ Building from Source
When a pre-built binary does not exist for your system you can build the project from source.
-.. note::
-
- We discourage building and using PostgREST on **Alpine Linux** because of a reported GHC memory leak on that platform.
-
You can build PostgREST from source with `Stack `_. It will install any necessary Haskell dependencies on your system.
* `Install Stack `_ for your platform
diff --git a/docs/how-tos/providing-images-for-img.rst b/docs/how-tos/providing-images-for-img.rst
index 03176376d2..95d1b07f3d 100644
--- a/docs/how-tos/providing-images-for-img.rst
+++ b/docs/how-tos/providing-images-for-img.rst
@@ -81,9 +81,11 @@ First, in addition to the minimal example, we need to store the media types and
.. code-block:: postgres
alter table files
- add column type text,
+ add column type text generated always as (byteamagic_mime(substr(blob, 0, 4100))) stored,
add column name text;
+This uses the :code:`byteamagic_mime()` function from the `pg_byteamagic extension `_ to automatically generate the type in the :code:`files` table. Looking only at the beginning of a file is generally enough to guess its type, which is also more efficient than scanning the whole content.
+
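+As a quick sanity check, you can verify that the column is filled in automatically after an upload. This is only a sketch, assuming the :code:`files` table itself is exposed in your API schema:
+
+.. code-block:: bash
+
+   # the generated "type" column should already contain the detected
+   # media type of each uploaded file, e.g. image/png
+   curl "http://localhost:3000/files?select=name,type"
+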
Next, we modify the function to set the content type and filename.
We use this opportunity to configure some basic, client-side caching.
For production, you probably want to configure additional caches, e.g. on the :ref:`reverse proxy `.
diff --git a/docs/how-tos/sql-user-management-using-postgres-users-and-passwords.rst b/docs/how-tos/sql-user-management-using-postgres-users-and-passwords.rst
index e8e3d2c0de..b245bb6f7e 100644
--- a/docs/how-tos/sql-user-management-using-postgres-users-and-passwords.rst
+++ b/docs/how-tos/sql-user-management-using-postgres-users-and-passwords.rst
@@ -54,7 +54,7 @@ Concerning the `pgjwt extension `_, please cf.
In order to be able to work with postgres' SCRAM-SHA-256 password hashes, we also need the PBKDF2 key derivation function. Luckily there is `a PL/pgSQL implementation on stackoverflow `_:
-.. code-block:: plpgsql
+.. code-block:: postgres
CREATE FUNCTION basic_auth.pbkdf2(salt bytea, pw text, count integer, desired_length integer, algorithm text) RETURNS bytea
LANGUAGE plpgsql IMMUTABLE
@@ -120,7 +120,7 @@ In order to be able to work with postgres' SCRAM-SHA-256 password hashes, we als
Analogous to how :ref:`sql_user_management` creates the function :code:`basic_auth.user_role`, we create a helper function to check the user's password, here with a different name and signature (since we want the username, not an email address).
But contrary to :ref:`sql_user_management`, this function does not use a dedicated :code:`users` table with passwords, but instead utilizes the built-in table `pg_catalog.pg_authid `_:
-.. code-block:: plpgsql
+.. code-block:: postgres
CREATE FUNCTION basic_auth.check_user_pass(username text, password text) RETURNS name
LANGUAGE sql
@@ -160,7 +160,7 @@ Logins
As described in :ref:`client_auth`, we'll create a JWT token inside our login function. Note that you'll need to adjust the secret key which is hard-coded in this example to a secure (at least thirty-two character) secret of your choosing.
-.. code-block:: plpgsql
+.. code-block:: postgres
CREATE TYPE basic_auth.jwt_token AS (
token text
diff --git a/docs/how-tos/sql-user-management.rst b/docs/how-tos/sql-user-management.rst
index 5e599ea86e..c6981e62b3 100644
--- a/docs/how-tos/sql-user-management.rst
+++ b/docs/how-tos/sql-user-management.rst
@@ -28,7 +28,7 @@ First we'll need a table to keep track of our users:
We would like the role to be a foreign key to actual database roles, however PostgreSQL does not support these constraints against the :code:`pg_roles` table. We'll use a trigger to manually enforce it.
-.. code-block:: plpgsql
+.. code-block:: postgres
create or replace function
basic_auth.check_role_exists() returns trigger as $$
@@ -50,7 +50,7 @@ We would like the role to be a foreign key to actual database roles, however Pos
Next we'll use the pgcrypto extension and a trigger to keep passwords safe in the :code:`users` table.
-.. code-block:: plpgsql
+.. code-block:: postgres
create extension if not exists pgcrypto;
@@ -72,7 +72,7 @@ Next we'll use the pgcrypto extension and a trigger to keep passwords safe in th
With the table in place we can make a helper to check a password against the encrypted column. It returns the database role for a user if the email and password are correct.
-.. code-block:: plpgsql
+.. code-block:: postgres
create or replace function
basic_auth.user_role(email text, pass text) returns name
diff --git a/docs/how-tos/working-with-postgresql-data-types.rst b/docs/how-tos/working-with-postgresql-data-types.rst
index e0abd6947d..ce1e177217 100644
--- a/docs/how-tos/working-with-postgresql-data-types.rst
+++ b/docs/how-tos/working-with-postgresql-data-types.rst
@@ -5,96 +5,12 @@ Working with PostgreSQL data types
:author: `Laurence Isla `_
-PostgREST makes use of PostgreSQL string representations to work with data types. Thanks to this, you can use special values, such as ``now`` for timestamps, ``yes`` for booleans or time values including the time zones. This page describes how you can take advantage of these string representations to perform operations on different PostgreSQL data types.
+PostgREST makes use of PostgreSQL string representations to work with data types. Thanks to this, you can use special values, such as ``now`` for timestamps, ``yes`` for booleans, or time values that include time zones. This page describes how you can take advantage of these string representations, and some alternatives, to perform operations on different PostgreSQL data types.
.. contents::
:local:
:depth: 1
-Timestamps
-----------
-
-You can use the **time zone** to filter or send data if needed.
-
-.. code-block:: postgres
-
- create table reports (
- id int primary key
- , due_date timestamptz
- );
-
-Suppose you are located in Sydney and want create a report with the date in the local time zone. Your request should look like this:
-
-.. code-block:: bash
-
- curl "http://localhost:3000/reports" \
- -X POST -H "Content-Type: application/json" \
- -d '[{ "id": 1, "due_date": "2022-02-24 11:10:15 Australia/Sydney" },{ "id": 2, "due_date": "2022-02-27 22:00:00 Australia/Sydney" }]'
-
-Someone located in Cairo can retrieve the data using their local time, too:
-
-.. code-block:: bash
-
- curl "http://localhost:3000/reports?due_date=eq.2022-02-24+02:10:15+Africa/Cairo"
-
-.. code-block:: json
-
- [
- {
- "id": 1,
- "due_date": "2022-02-23T19:10:15-05:00"
- }
- ]
-
-The response has the date in the time zone configured by the server: ``UTC -05:00`` (see :ref:`prefer_timezone`).
-
-You can use other comparative filters and also all the `PostgreSQL special date/time input values `_ as illustrated in this example:
-
-.. code-block:: bash
-
- curl "http://localhost:3000/reports?or=(and(due_date.gte.today,due_date.lte.tomorrow),and(due_date.gt.-infinity,due_date.lte.epoch))"
-
-.. code-block:: json
-
- [
- {
- "id": 2,
- "due_date": "2022-02-27T06:00:00-05:00"
- }
- ]
-
-JSON
-----
-
-To work with a ``json`` type column, you can handle the value as a JSON object.
-
-.. code-block:: postgres
-
- create table products (
- id int primary key,
- name text unique,
- extra_info json
- );
-
-You can insert a new product using a JSON object for the ``extra_info`` column:
-
-.. code-block:: bash
-
- curl "http://localhost:3000/products" \
- -X POST -H "Content-Type: application/json" \
- -d @- << EOF
- {
- "id": 1,
- "name": "Canned fish",
- "extra_info": {
- "expiry_date": "2025-12-31",
- "exportable": true
- }
- }
- EOF
-
-To query and filter the data see :ref:`json_columns` for a complete reference.
-
Arrays
------
@@ -179,6 +95,57 @@ Then, for example, to query the auditoriums that are located in the first cinema
}
]
+Bytea
+-----
+
+To send raw binary to PostgREST you need a function with a single unnamed parameter of `bytea type `_.
+
+.. code-block:: postgres
+
+ create table files (
+ id int primary key generated always as identity,
+ file bytea
+ );
+
+ create function upload_binary(bytea) returns void as $$
+ insert into files (file) values ($1);
+ $$ language sql;
+
+Let's download the PostgREST logo for our test.
+
+.. code-block:: bash
+
+ curl "https://postgrest.org/en/latest/_images/logo.png" -o postgrest-logo.png
+
+Now, to send the file ``postgrest-logo.png`` we need to set the ``Content-Type: application/octet-stream`` header in the request:
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/rpc/upload_binary" \
+ -X POST -H "Content-Type: application/octet-stream" \
+ --data-binary "@postgrest-logo.png"
+
+To get the image from the database, use :ref:`custom_media` like so:
+
+.. code-block:: postgres
+
+ create domain "image/png" as bytea;
+
+   create or replace function get_image(id int) returns "image/png" as $$
+ select file from files where id = $1;
+ $$ language sql;
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/get_image?id=1" \
+ -H "Accept: image/png"
+
+See :ref:`providing_img` for a step-by-step example on how to handle images in HTML.
+
+.. warning::
+
+ Be careful when saving binaries in the database, having a separate storage service for these is preferable in most cases. See `Storing Binary files in the Database `_.
+
Composite Types
---------------
@@ -231,159 +198,6 @@ Or you could insert the same data in JSON format.
You can also query the data using arrow operators. See :ref:`composite_array_columns`.
-Ranges
-------
-
-PostgREST allows you to handle `ranges `_.
-
-.. code-block:: postgres
-
- create table events (
- id int primary key,
- name text unique,
- duration tsrange
- );
-
-To insert a new event, specify the ``duration`` value as a string representation of the ``tsrange`` type:
-
-.. code-block:: bash
-
- curl "http://localhost:3000/events" \
- -X POST -H "Content-Type: application/json" \
- -d @- << EOF
- {
- "id": 1,
- "name": "New Year's Party",
- "duration": "['2022-12-31 11:00','2023-01-01 06:00']"
- }
- EOF
-
-You can use range :ref:`operators ` to filter the data. But, in this case, requesting a filter like ``events?duration=cs.2023-01-01`` will return an error, because PostgreSQL needs an explicit cast from string to timestamp. A workaround is to use a range starting and ending in the same date:
-
-.. code-block:: bash
-
- curl "http://localhost:3000/events?duration=cs.\[2023-01-01,2023-01-01\]"
-
-.. code-block:: json
-
- [
- {
- "id": 1,
- "name": "New Year's Party",
- "duration": "[\"2022-12-31 11:00:00\",\"2023-01-01 06:00:00\"]"
- }
- ]
-
-.. _casting_range_to_json:
-
-Casting a Range to a JSON Object
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-As you may have noticed, the ``tsrange`` value is returned as a string literal. To return it as a JSON value, first you need to create a function that will do the conversion from a ``tsrange`` type:
-
-.. code-block:: postgres
-
- create or replace function tsrange_to_json(tsrange) returns json as $$
- select json_build_object(
- 'lower', lower($1)
- , 'upper', upper($1)
- , 'lower_inc', lower_inc($1)
- , 'upper_inc', upper_inc($1)
- );
- $$ language sql;
-
-Then, create the cast using this function:
-
-.. code-block:: postgres
-
- create cast (tsrange as json) with function tsrange_to_json(tsrange) as assignment;
-
-Finally, do the request :ref:`casting the range column `:
-
-.. code-block:: bash
-
- curl "http://localhost:3000/events?select=id,name,duration::json"
-
-.. code-block:: json
-
- [
- {
- "id": 1,
- "name": "New Year's Party",
- "duration": {
- "lower": "2022-12-31T11:00:00",
- "upper": "2023-01-01T06:00:00",
- "lower_inc": true,
- "upper_inc": true
- }
- }
- ]
-
-.. note::
-
- If you don't want to modify casts for built-in types, an option would be to `create a custom type `_
- for your own ``tsrange`` and add its own cast.
-
- .. code-block:: postgres
-
- create type mytsrange as range (subtype = timestamp, subtype_diff = tsrange_subdiff);
-
- -- define column types and casting function analogously to the above example
- -- ...
-
- create cast (mytsrange as json) with function mytsrange_to_json(mytsrange) as assignment;
-
-Bytea
------
-
-To send raw binary to PostgREST you need a function with a single unnamed parameter of `bytea type `_.
-
-.. code-block:: postgres
-
- create table files (
- id int primary key generated always as identity,
- file bytea
- );
-
- create function upload_binary(bytea) returns void as $$
- insert into files (file) values ($1);
- $$ language sql;
-
-Let's download the PostgREST logo for our test.
-
-.. code-block:: bash
-
- curl "https://postgrest.org/en/latest/_images/logo.png" -o postgrest-logo.png
-
-Now, to send the file ``postgrest-logo.png`` we need to set the ``Content-Type: application/octet-stream`` header in the request:
-
-.. code-block:: bash
-
- curl "http://localhost:3000/rpc/upload_binary" \
- -X POST -H "Content-Type: application/octet-stream" \
- --data-binary "@postgrest-logo.png"
-
-To get the image from the database, use :ref:`custom_media` like so:
-
-.. code-block:: postgres
-
- create domain "image/png" as bytea;
-
- create or replace get_image(id int) returns "image/png" as $$
- select file from files where id = $1;
- $$ language sql;
-
-.. code-block:: bash
-
- curl "http://localhost:3000/get_image?id=1" \
- -H "Accept: image/png"
-
-See :ref:`providing_img` for a step-by-step example on how to handle images in HTML.
-
-.. warning::
-
- Be careful when saving binaries in the database, having a separate storage service for these is preferable in most cases. See `Storing Binary files in the Database `_.
-
hstore
------
@@ -424,6 +238,38 @@ You can also query and filter the value of a ``hstore`` column using the arrow o
[{ "native": "مصر" }]
+JSON
+----
+
+To work with a ``json`` type column, you can handle the value as a JSON object.
+
+.. code-block:: postgres
+
+ create table products (
+ id int primary key,
+ name text unique,
+ extra_info json
+ );
+
+You can insert a new product using a JSON object for the ``extra_info`` column:
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/products" \
+ -X POST -H "Content-Type: application/json" \
+ -d @- << EOF
+ {
+ "id": 1,
+ "name": "Canned fish",
+ "extra_info": {
+ "expiry_date": "2025-12-31",
+ "exportable": true
+ }
+ }
+ EOF
+
+To query and filter the data see :ref:`json_columns` for a complete reference.
+
.. _ww_postgis:
PostGIS
@@ -561,3 +407,157 @@ Now this query will return the same results:
}
]
}
+
+Ranges
+------
+
+PostgREST allows you to handle `ranges `_.
+
+.. code-block:: postgres
+
+ create table events (
+ id int primary key,
+ name text unique,
+ duration tsrange
+ );
+
+To insert a new event, specify the ``duration`` value as a string representation of the ``tsrange`` type:
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/events" \
+ -X POST -H "Content-Type: application/json" \
+ -d @- << EOF
+ {
+ "id": 1,
+ "name": "New Year's Party",
+ "duration": "['2022-12-31 11:00','2023-01-01 06:00']"
+ }
+ EOF
+
+You can use range :ref:`operators ` to filter the data. But, in this case, requesting a filter like ``events?duration=cs.2023-01-01`` will return an error, because PostgreSQL needs an explicit cast from string to timestamp. A workaround is to use a range starting and ending on the same date:
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/events?duration=cs.\[2023-01-01,2023-01-01\]"
+
+.. code-block:: json
+
+ [
+ {
+ "id": 1,
+ "name": "New Year's Party",
+ "duration": "[\"2022-12-31 11:00:00\",\"2023-01-01 06:00:00\"]"
+ }
+ ]
+
+.. _casting_range_to_json:
+
+Casting a Range to a JSON Object
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As you may have noticed, the ``tsrange`` value is returned as a string literal. To return it as a JSON value, first you need to create a function that will do the conversion from a ``tsrange`` type:
+
+.. code-block:: postgres
+
+ create or replace function tsrange_to_json(tsrange) returns json as $$
+ select json_build_object(
+ 'lower', lower($1)
+ , 'upper', upper($1)
+ , 'lower_inc', lower_inc($1)
+ , 'upper_inc', upper_inc($1)
+ );
+ $$ language sql;
+
+Then, create the cast using this function:
+
+.. code-block:: postgres
+
+ create cast (tsrange as json) with function tsrange_to_json(tsrange) as assignment;
+
+Finally, do the request :ref:`casting the range column `:
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/events?select=id,name,duration::json"
+
+.. code-block:: json
+
+ [
+ {
+ "id": 1,
+ "name": "New Year's Party",
+ "duration": {
+ "lower": "2022-12-31T11:00:00",
+ "upper": "2023-01-01T06:00:00",
+ "lower_inc": true,
+ "upper_inc": true
+ }
+ }
+ ]
+
+.. note::
+
+ If you don't want to modify casts for built-in types, an option would be to `create a custom type `_
+ for your own ``tsrange`` and add its own cast.
+
+ .. code-block:: postgres
+
+ create type mytsrange as range (subtype = timestamp, subtype_diff = tsrange_subdiff);
+
+ -- define column types and casting function analogously to the above example
+ -- ...
+
+ create cast (mytsrange as json) with function mytsrange_to_json(mytsrange) as assignment;
+
+Timestamps
+----------
+
+You can use the **time zone** to filter or send data if needed.
+
+.. code-block:: postgres
+
+ create table reports (
+ id int primary key
+ , due_date timestamptz
+ );
+
+Suppose you are located in Sydney and want to create a report with the date in the local time zone. Your request should look like this:
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/reports" \
+ -X POST -H "Content-Type: application/json" \
+ -d '[{ "id": 1, "due_date": "2022-02-24 11:10:15 Australia/Sydney" },{ "id": 2, "due_date": "2022-02-27 22:00:00 Australia/Sydney" }]'
+
+Someone located in Cairo can retrieve the data using their local time, too:
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/reports?due_date=eq.2022-02-24+02:10:15+Africa/Cairo"
+
+.. code-block:: json
+
+ [
+ {
+ "id": 1,
+ "due_date": "2022-02-23T19:10:15-05:00"
+ }
+ ]
+
+The response has the date in the time zone configured by the server: ``UTC -05:00`` (see :ref:`prefer_timezone`).
+
+You can use other comparative filters and also all the `PostgreSQL special date/time input values `_ as illustrated in this example:
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/reports?or=(and(due_date.gte.today,due_date.lte.tomorrow),and(due_date.gt.-infinity,due_date.lte.epoch))"
+
+.. code-block:: json
+
+ [
+ {
+ "id": 2,
+ "due_date": "2022-02-27T06:00:00-05:00"
+ }
+ ]
diff --git a/docs/integrations/jwt_gen.rst b/docs/integrations/jwt_gen.rst
index f2b80517ae..2bc2d8d223 100644
--- a/docs/integrations/jwt_gen.rst
+++ b/docs/integrations/jwt_gen.rst
@@ -9,23 +9,3 @@ JWT from Auth0
An external service like `Auth0 `_ can do the hard work transforming OAuth from Github, Twitter, Google etc into a JWT suitable for PostgREST. Auth0 can also handle email signup and password reset flows.
To use Auth0, create `an application `_ for your app and `an API `_ for your PostgREST server. Auth0 supports both HS256 and RS256 scheme for the issued tokens for APIs. For simplicity, you may first try HS256 scheme while creating your API on Auth0. Your application should use your PostgREST API's `API identifier `_ by setting it with the `audience parameter `_ during the authorization request. This will ensure that Auth0 will issue an access token for your PostgREST API. For PostgREST to verify the access token, you will need to set ``jwt-secret`` on PostgREST config file with your API's signing secret.
-
-.. note::
-
- Our code requires a database role in the JWT. To add it you need to save the database role in Auth0 `app metadata `_. Then, you will need to write `a rule `_ that will extract the role from the user's app_metadata and set it as a `custom claim `_ in the access token. Note that, you may use Auth0's `core authorization feature `_ for more complex use cases. Metadata solution is mentioned here for simplicity.
-
- .. code:: javascript
-
- function (user, context, callback) {
-
- // Follow the documentations at
- // https://postgrest.org/en/latest/configuration.html#db-role-claim-key
- // to set a custom role claim on PostgREST
- // and use it as custom claim attribute in this rule
- const myRoleClaim = 'https://myapp.com/role';
-
- user.app_metadata = user.app_metadata || {};
- context.accessToken[myRoleClaim] = user.app_metadata.role;
- callback(null, user, context);
- }
-
diff --git a/docs/postgrest.dict b/docs/postgrest.dict
index 496a83ae6c..ddfe7f2e43 100644
--- a/docs/postgrest.dict
+++ b/docs/postgrest.dict
@@ -36,7 +36,6 @@ filename
FreeBSD
fts
GeoJSON
-GHC
Github
Google
grantor
diff --git a/docs/references/api/computed_fields.rst b/docs/references/api/computed_fields.rst
index c7b7b65123..4b5afe3a4c 100644
--- a/docs/references/api/computed_fields.rst
+++ b/docs/references/api/computed_fields.rst
@@ -66,7 +66,7 @@ Ordering on Computed Fields
.. important::
- Computed columns must be created in the :ref:`exposed schema ` or in a schema in the :ref:`extra search path ` to be used in this way. When placing the computed column in the :ref:`exposed schema ` you can use an **unnamed** parameter, as in the example above, to prevent it from being exposed as an :ref:`RPC ` under ``/rpc``.
+ Computed fields must be created in the :ref:`exposed schema ` or in a schema in the :ref:`extra search path ` to be used in this way. When placing the computed field in the :ref:`exposed schema ` you can use an **unnamed** parameter, as in the example above, to prevent it from being exposed as an :ref:`RPC ` under ``/rpc``.
.. note::
diff --git a/docs/references/api/openapi.rst b/docs/references/api/openapi.rst
index 32f0f03484..d13ade1b2c 100644
--- a/docs/references/api/openapi.rst
+++ b/docs/references/api/openapi.rst
@@ -11,7 +11,7 @@ PostgREST automatically serves a full `OpenAPI `_ des
For extra customization, the OpenAPI output contains a "description" field for every `SQL comment `_ on any database object. For instance,
-.. code-block:: sql
+.. code-block:: postgres
COMMENT ON SCHEMA mammals IS
'A warm-blooded vertebrate animal of a class that is distinguished by the secretion of milk by females for the nourishment of the young';
@@ -26,7 +26,7 @@ These unsavory comments will appear in the generated JSON as the fields, ``info.
Also, if you wish to generate a ``summary`` field, you can do so with a multi-line comment: the ``summary`` will be the first line and the ``description`` the lines that follow it:
-.. code-block:: plpgsql
+.. code-block:: postgres
COMMENT ON TABLE entities IS
$$Entities summary
@@ -37,7 +37,7 @@ Also if you wish to generate a ``summary`` field you can do it by having a multi
Similarly, you can override the API title by commenting the schema.
-.. code-block:: plpgsql
+.. code-block:: postgres
COMMENT ON SCHEMA api IS
$$FooBar API
diff --git a/docs/references/api/preferences.rst b/docs/references/api/preferences.rst
index d1d6e425cd..29a5f7af3c 100644
--- a/docs/references/api/preferences.rst
+++ b/docs/references/api/preferences.rst
@@ -236,7 +236,7 @@ Single JSON object as Function Parameter
:code:`Prefer: params=single-object` allows sending the JSON request body as the single argument of a :ref:`function `.
-.. code-block:: plpgsql
+.. code-block:: postgres
CREATE FUNCTION mult_them(param json) RETURNS int AS $$
SELECT (param->>'x')::int * (param->>'y')::int
diff --git a/docs/references/api/resource_embedding.rst b/docs/references/api/resource_embedding.rst
index 83feb30928..8e0e79ddf3 100644
--- a/docs/references/api/resource_embedding.rst
+++ b/docs/references/api/resource_embedding.rst
@@ -178,7 +178,7 @@ The join table determines many-to-many relationships. It must contain foreign ke
The join table is also detected if the composite key has additional columns.
-.. code-block:: postgresql
+.. code-block:: postgres
create table roles(
id int generated always as identity,
@@ -214,7 +214,7 @@ One-to-one relationships are detected in two ways.
- When the foreign key is a primary key as specified in the :ref:`sample film database `.
- When the foreign key has a unique constraint.
- .. code-block:: postgresql
+ .. code-block:: postgres
create table technical_specs(
film_id int references films(id) unique,
@@ -246,7 +246,7 @@ You can manually define relationships by using functions. This is useful for dat
Assuming there's a foreign table ``premieres`` that we want to relate to ``films``.
-.. code-block:: postgresql
+.. code-block:: postgres
create foreign table premieres (
id integer,
@@ -478,7 +478,7 @@ Recursive One-To-One
To get either side of the Recursive One-To-One relationship, create the functions:
-.. code-block:: postgresql
+.. code-block:: postgres
create or replace function predecessor(presidents) returns setof presidents rows 1 as $$
select * from presidents where id = $1.predecessor_id
@@ -530,7 +530,7 @@ Recursive One-To-Many
To get the One-To-Many embedding, that is, the supervisors with their supervisees, create a function like this one:
-.. code-block:: postgresql
+.. code-block:: postgres
create or replace function supervisees(employees) returns setof employees as $$
select * from employees where supervisor_id = $1.id
@@ -562,7 +562,7 @@ Recursive Many-To-One
Let's take the same ``employees`` table from :ref:`recursive_o2m_embed`.
To get the Many-To-One relationship, that is, the employees with their respective supervisor, you need to create a function like this one:
-.. code-block:: postgresql
+.. code-block:: postgres
create or replace function supervisor(employees) returns setof employees rows 1 as $$
select * from employees where id = $1.supervisor_id
@@ -614,7 +614,7 @@ Recursive Many-To-Many
To get all the subscribers of a user as well as the ones they're following, define these functions:
-.. code-block:: postgresql
+.. code-block:: postgres
create or replace function subscribers(users) returns setof users as $$
select u.*
@@ -756,7 +756,7 @@ If you have a :ref:`Stored Procedure ` that returns a table type, you c
Here's a sample function (notice the ``RETURNS SETOF films``).
-.. code-block:: plpgsql
+.. code-block:: postgres
CREATE FUNCTION getallfilms() RETURNS SETOF films AS $$
SELECT * FROM films;
diff --git a/docs/references/api/schemas.rst b/docs/references/api/schemas.rst
index 79ddbcb32c..c954bbe31b 100644
--- a/docs/references/api/schemas.rst
+++ b/docs/references/api/schemas.rst
@@ -88,7 +88,7 @@ To add schemas dynamically, you can use :ref:`in_db_config` plus :ref:`config re
- If the schemas' names have a pattern, like a ``tenant_`` prefix, do:
-.. code-block:: postgresql
+.. code-block:: postgres
create or replace function postgrest.pre_config()
returns void as $$
@@ -100,7 +100,7 @@ To add schemas dynamically, you can use :ref:`in_db_config` plus :ref:`config re
- If there's no name pattern but they're created with a particular role (``CREATE SCHEMA mine AUTHORIZATION joe``), do:
-.. code-block:: postgresql
+.. code-block:: postgres
create or replace function postgrest.pre_config()
returns void as $$
@@ -112,7 +112,7 @@ To add schemas dynamically, you can use :ref:`in_db_config` plus :ref:`config re
- Otherwise, you might need to create a table that stores the allowed schemas.
-.. code-block:: postgresql
+.. code-block:: postgres
create table postgrest.config (schemas text);
@@ -125,7 +125,7 @@ To add schemas dynamically, you can use :ref:`in_db_config` plus :ref:`config re
Then each time you add an schema, do:
-.. code-block:: postgresql
+.. code-block:: postgres
NOTIFY pgrst, 'reload config';
NOTIFY pgrst, 'reload schema';
diff --git a/docs/references/api/stored_procedures.rst b/docs/references/api/stored_procedures.rst
index 11ffeba529..ea95affbac 100644
--- a/docs/references/api/stored_procedures.rst
+++ b/docs/references/api/stored_procedures.rst
@@ -23,7 +23,7 @@ To supply arguments in an API call, include a JSON object in the request payload
For instance, assume we have created this function in the database.
-.. code-block:: plpgsql
+.. code-block:: postgres
CREATE FUNCTION add_them(a integer, b integer)
RETURNS integer AS $$
@@ -73,7 +73,7 @@ Functions with a single unnamed JSON parameter
If you want the JSON request body to be sent as a single argument, you can create a function with a single unnamed ``json`` or ``jsonb`` parameter.
For this the ``Content-Type: application/json`` header must be included in the request.
-.. code-block:: plpgsql
+.. code-block:: postgres
CREATE FUNCTION mult_them(json) RETURNS int AS $$
SELECT ($1->>'x')::int * ($1->>'y')::int
@@ -108,7 +108,7 @@ To send raw XML, the parameter type must be ``xml`` and the header ``Content-Typ
To send raw binary, the parameter type must be ``bytea`` and the header ``Content-Type: application/octet-stream`` must be included in the request.
-.. code-block:: plpgsql
+.. code-block:: postgres
CREATE TABLE files(blob bytea);
@@ -252,7 +252,7 @@ Let's get its :ref:`explain_plan` when calling it with filters applied:
curl "http://localhost:3000/rpc/getallprojects?id=eq.1" \
-H "Accept: application/vnd.pgrst.plan"
-.. code-block:: psql
+.. code-block:: postgres
Aggregate (cost=8.18..8.20 rows=1 width=112)
-> Index Scan using projects_pkey on projects (cost=0.15..8.17 rows=1 width=40)
diff --git a/docs/references/api/tables_views.rst b/docs/references/api/tables_views.rst
index cef16eee85..a075ff8929 100644
--- a/docs/references/api/tables_views.rst
+++ b/docs/references/api/tables_views.rst
@@ -88,7 +88,7 @@ any :code:`ANY` comparison matches any value in the list
For more complicated filters you will have to create a new view in the database, or use a stored procedure. For instance, here's a view to show "today's stories" including possibly older pinned stories:
-.. code-block:: postgresql
+.. code-block:: postgres
CREATE VIEW fresh_stories AS
SELECT *
@@ -294,6 +294,21 @@ Note that ``->>`` is used to compare ``blood_type`` as ``text``. To compare with
{ "id": 12, "age": 30 },
{ "id": 15, "age": 35 }
]
+
+Ordering is also supported:
+
+.. code-block:: bash
+
+ curl "http://localhost:3000/people?select=id,json_data->age&order=json_data->>age.desc"
+
+.. code-block:: json
+
+ [
+ { "id": 15, "age": 35 },
+ { "id": 12, "age": 30 },
+ { "id": 11, "age": 25 }
+ ]
+
.. _composite_array_columns:
Composite / Array Columns
@@ -397,7 +412,7 @@ To create a row in a database table post a JSON object whose keys are the names
HTTP/1.1 201 Created
-No response body will be returned by default but you can use :ref:`prefer_return` to get the affected resource.
+No response body will be returned by default but you can use :ref:`prefer_return` to get the affected resource and :ref:`resource_embedding` to add related resources.
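+
+For example, you can get the newly created row together with a related resource in a single request. The following is only a sketch, assuming a hypothetical ``films`` table with an embeddable ``directors`` relationship:
+
+.. code-block:: bash
+
+   # return the inserted row plus its embedded director
+   curl "http://localhost:3000/films?select=title,directors(last_name)" \
+   -X POST -H "Content-Type: application/json" \
+   -H "Prefer: return=representation" \
+   -d '{ "title": "Paris, Texas", "director_id": 3 }'
+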
x-www-form-urlencoded
---------------------
@@ -546,7 +561,7 @@ To update a row or rows in a table, use the PATCH verb. Use :ref:`h_filter` to s
-X PATCH -H "Content-Type: application/json" \
-d '{ "category": "child" }'
-Updates also support :ref:`prefer_return` plus :ref:`v_filter`.
+Updates also support :ref:`prefer_return`, :ref:`resource_embedding` and :ref:`v_filter`.
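+
+For instance, to get the updated rows back together with a related resource, you can combine them as in the sketch below, which assumes a hypothetical ``people`` table, filtered by ``age`` and related to a ``pets`` table:
+
+.. code-block:: bash
+
+   # return the updated rows together with their embedded pets
+   curl "http://localhost:3000/people?age=lt.13&select=name,category,pets(name)" \
+   -X PATCH -H "Content-Type: application/json" \
+   -H "Prefer: return=representation" \
+   -d '{ "category": "child" }'
+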
.. warning::
@@ -625,7 +640,7 @@ To delete rows in a table, use the DELETE verb plus :ref:`h_filter`. For instanc
curl "http://localhost:3000/user?active=is.false" -X DELETE
-Deletions also support :ref:`prefer_return` plus :ref:`v_filter`.
+Deletions also support :ref:`prefer_return`, :ref:`resource_embedding` and :ref:`v_filter`.
.. code-block:: bash
diff --git a/docs/references/auth.rst b/docs/references/auth.rst
index 6f4ede0214..cff41806e6 100644
--- a/docs/references/auth.rst
+++ b/docs/references/auth.rst
@@ -168,9 +168,7 @@ There are at least three types of common critiques against using JWT: 1) against
The critique against the `JWT standard `_ is voiced in detail `elsewhere on the web `_. The most relevant part for PostgREST is the so-called :code:`alg=none` issue. Some servers implementing JWT allow clients to choose the algorithm used to sign the JWT. In this case, an attacker could set the algorithm to :code:`none`, remove the need for any signature at all and gain unauthorized access. The current implementation of PostgREST, however, does not allow clients to set the signature algorithm in the HTTP request, making this attack irrelevant. The critique against the standard is that it requires the implementation of the :code:`alg=none` at all.
-Critiques against JWT libraries are only relevant to PostgREST via the library it uses. As mentioned above, not allowing clients to choose the signature algorithm in HTTP requests removes the greatest risk. Another more subtle attack is possible where servers use asymmetric algorithms like RSA for signatures. Once again this is not relevant to PostgREST since it is not supported. Curious readers can find more information in `this article `_. Recommendations about high quality libraries for usage in API clients can be found on `jwt.io `_.
-
-The last type of critique focuses on the misuse of JWT for maintaining web sessions. The basic recommendation is to `stop using JWT for sessions `_ because most, if not all, solutions to the problems that arise when you do, `do not work `_. The linked articles discuss the problems in depth but the essence of the problem is that JWT is not designed to be secure and stateful units for client-side storage and therefore not suited to session management.
+Another type of critique focuses on the misuse of JWT for maintaining web sessions. The basic recommendation is to `stop using JWT for sessions `_ because most, if not all, solutions to the problems that arise when you do, `do not work `_. The linked articles discuss the problems in depth but the essence of the problem is that JWT is not designed to be secure and stateful units for client-side storage and therefore not suited to session management.
PostgREST uses JWT mainly for authentication and authorization purposes and encourages users to do the same. For web sessions, using cookies over HTTPS is good enough and well catered for by standard web frameworks.
diff --git a/docs/references/configuration.rst b/docs/references/configuration.rst
index 775bb2c06f..f7f97416e4 100644
--- a/docs/references/configuration.rst
+++ b/docs/references/configuration.rst
@@ -80,7 +80,7 @@ You can also configure the server with database settings by using a :ref:`pre-co
PGRST_DB_PRE_CONFIG = "postgrest.pre_config"
-.. code-block:: postgresql
+.. code-block:: postgres
-- create a dedicated schema, hidden from the API
create schema postgrest;
@@ -569,6 +569,10 @@ jwt-aud
Specifies the `JWT audience claim `_. If this claim is present in the client provided JWT then you must set this to the same value as in the JWT, otherwise verifying the JWT will fail.
+ .. warning::
+
+      Using this setting will only reject tokens with a different audience claim. Tokens **without** an audience claim will still be accepted.
+
.. _jwt-role-claim-key:
jwt-role-claim-key
diff --git a/docs/references/errors.rst b/docs/references/errors.rst
index 8419448d03..f5c5619855 100644
--- a/docs/references/errors.rst
+++ b/docs/references/errors.rst
@@ -339,7 +339,7 @@ RAISE errors with HTTP Status Codes
Custom status codes can be done by raising SQL exceptions inside :ref:`functions `. For instance, here's a saucy function that always responds with an error:
-.. code-block:: postgresql
+.. code-block:: postgres
CREATE OR REPLACE FUNCTION just_fail() RETURNS void
LANGUAGE plpgsql
@@ -366,7 +366,7 @@ One way to customize the HTTP status code is by raising particular exceptions ac
For even greater control of the HTTP status code, raise an exception of the ``PTxyz`` type. For instance to respond with HTTP 402, raise ``PT402``:
-.. code-block:: sql
+.. code-block:: postgres
RAISE sqlstate 'PT402' using
message = 'Payment Required',
@@ -394,7 +394,7 @@ Add HTTP Headers with RAISE
For full control over headers and status you can raise a ``PGRST`` SQLSTATE error. You can achieve this by adding the ``code``, ``message``, ``detail`` and ``hint`` in the PostgreSQL error message field as a JSON object. Here, the ``details`` and ``hint`` are optional. Similarly, the ``status`` and ``headers`` must be added to the SQL error detail field as a JSON object. For instance:
-.. code-block:: sql
+.. code-block:: postgres
RAISE sqlstate 'PGRST' USING
message = '{"code":"123","message":"Payment Required","details":"Quota exceeded","hint":"Upgrade your plan"}',
@@ -418,7 +418,7 @@ Returns:
For non standard HTTP status, you can optionally add ``status_text`` to describe the status code. For status code ``419`` the detail field may look like this:
-.. code-block:: sql
+.. code-block:: postgres
detail = '{"status":419,"status_text":"Page Expired","headers":{"X-Powered-By":"Nerd Rage"}}';
diff --git a/docs/references/observability.rst b/docs/references/observability.rst
index 6f5765e220..a89adf23a9 100644
--- a/docs/references/observability.rst
+++ b/docs/references/observability.rst
@@ -96,7 +96,7 @@ When debugging a problem it's important to verify the running PostgREST version.
- Query ``application_name`` on `pg_stat_activity `_.
-.. code-block:: psql
+.. code-block:: postgres
select distinct application_name
from pg_stat_activity
@@ -177,7 +177,7 @@ This is enabled by :ref:`db-plan-enabled` (false by default).
curl "http://localhost:3000/users?select=name&order=id" \
-H "Accept: application/vnd.pgrst.plan"
-.. code-block:: psql
+.. code-block:: postgres
Aggregate (cost=73.65..73.68 rows=1 width=112)
-> Index Scan using users_pkey on users (cost=0.15..60.90 rows=850 width=36)
@@ -237,7 +237,7 @@ However, if you choose to use it in production you can add a :ref:`db-pre-reques
For example, to only allow requests from an IP address to get the execution plans:
-.. code-block:: postgresql
+.. code-block:: postgres
-- Assuming a proxy(Nginx, Cloudflare, etc) passes an "X-Forwarded-For" header(https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For)
create or replace function filter_plan_requests()
diff --git a/docs/references/schema_cache.rst b/docs/references/schema_cache.rst
index 8be8e977dd..3497cf1aa2 100644
--- a/docs/references/schema_cache.rst
+++ b/docs/references/schema_cache.rst
@@ -35,6 +35,9 @@ One operational problem that comes with a cache is that it can go stale. This ca
You can solve this by reloading the cache manually or automatically.
+.. note::
+ If you are using :ref:`in_db_config`, a schema reload will always :ref:`reload the configuration` as well.
+
.. _schema_reloading:
Schema Cache Reloading
@@ -65,7 +68,7 @@ Reloading with NOTIFY
PostgREST also allows you to reload its schema cache through PostgreSQL `NOTIFY `_.
-.. code-block:: postgresql
+.. code-block:: postgres
NOTIFY pgrst, 'reload schema'
@@ -80,7 +83,7 @@ Automatic Schema Cache Reloading
You can do automatic schema cache reloading in a pure SQL way and forget about stale schema cache errors. For this use an `event trigger `_ and ``NOTIFY``.
-.. code-block:: postgresql
+.. code-block:: postgres
-- Create an event trigger function
CREATE OR REPLACE FUNCTION pgrst_watch() RETURNS event_trigger
@@ -100,7 +103,7 @@ Now, whenever the ``pgrst_watch`` trigger fires, PostgREST will auto-reload the
To disable auto reloading, drop the trigger.
-.. code-block:: postgresql
+.. code-block:: postgres
DROP EVENT TRIGGER pgrst_watch
@@ -110,7 +113,7 @@ Finer-Grained Event Trigger
You can refine the previous event trigger to only react to the events relevant to the schema cache. This also prevents unnecessary
reloading when creating temporary tables inside functions.
-.. code-block:: postgresql
+.. code-block:: postgres
-- watch CREATE and ALTER
CREATE OR REPLACE FUNCTION pgrst_ddl_watch() RETURNS event_trigger AS $$
diff --git a/docs/references/transactions.rst b/docs/references/transactions.rst
index c0eb9bab04..29c074caab 100644
--- a/docs/references/transactions.rst
+++ b/docs/references/transactions.rst
@@ -5,7 +5,7 @@ Transactions
After :ref:`user_impersonation`, every request to an :doc:`API resource ` runs inside a transaction. The sequence of the transaction is as follows:
-.. code-block:: postgresql
+.. code-block:: postgres
START TRANSACTION; --
--
@@ -21,7 +21,7 @@ The access mode determines whether the transaction can modify the database or no
Modifying the database inside READ ONLY transactions is not possible. PostgREST uses this fact to enforce HTTP semantics in GET and HEAD requests. Consider the following:
-.. code-block:: postgresql
+.. code-block:: postgres
CREATE SEQUENCE callcounter_count START 1;
@@ -92,7 +92,7 @@ Isolation Level
Every transaction uses the PostgreSQL default isolation level: READ COMMITTED. Unless you modify `default_transaction_isolation `_ for an impersonated role or function.
-.. code-block:: postgresql
+.. code-block:: postgres
ALTER ROLE webuser SET default_transaction_isolation TO 'repeatable read';
@@ -100,7 +100,7 @@ Every ``webuser`` gets its queries executed with ``default_transaction_isolation
Or to change the isolation level per function call.
-.. code-block:: postgresql
+.. code-block:: postgres
CREATE OR REPLACE FUNCTION myfunc()
RETURNS text as $$
@@ -118,7 +118,7 @@ PostgREST uses settings tied to the transaction lifetime. These can be used to g
You can get these with ``current_setting``
-.. code-block:: postgresql
+.. code-block:: postgres
-- request settings use the ``request.`` prefix.
SELECT
@@ -126,7 +126,7 @@ You can get these with ``current_setting``
And you can set them with ``set_config``
-.. code-block:: postgresql
+.. code-block:: postgres
-- response settings use the ``response.`` prefix.
SELECT
@@ -139,7 +139,7 @@ Request Headers, Cookies and JWT claims
PostgREST stores the headers, cookies and JWT claims as JSON. To get them:
-.. code-block:: postgresql
+.. code-block:: postgres
-- To get all the headers sent in the request
SELECT current_setting('request.headers', true)::json;
@@ -162,7 +162,7 @@ PostgREST stores the headers, cookies and headers as JSON. To get them:
+ This is considered expected behavior by PostgreSQL. For more details, see `this discussion `_.
+ To avoid this inconsistency, you can create a wrapper function like:
- .. code-block:: postgresql
+ .. code-block:: postgres
CREATE FUNCTION my_current_setting(text) RETURNS text
LANGUAGE SQL AS $$
@@ -176,7 +176,7 @@ Request Path and Method
The path and method are stored as ``text``.
-.. code-block:: postgresql
+.. code-block:: postgres
SELECT current_setting('request.path', true);
@@ -187,7 +187,7 @@ Request Role and Search Path
Because of :ref:`user_impersonation`, PostgREST sets the standard ``role``. You can get this in different ways:
-.. code-block:: postgresql
+.. code-block:: postgres
SELECT current_role;
@@ -204,7 +204,7 @@ Response Headers
You can set ``response.headers`` to add headers to the HTTP response. For instance, this statement would add caching headers to the response:
-.. code-block:: sql
+.. code-block:: postgres
-- tell client to cache response for two days
@@ -265,7 +265,7 @@ This allows finer-grained control over actions made by a role.
For example, consider `statement_timeout `__. It allows you to abort any statement that takes more than a specified time. It is disabled by default.
-.. code-block:: postgresql
+.. code-block:: postgres
ALTER ROLE authenticator SET statement_timeout TO '10s';
ALTER ROLE anonymous SET statement_timeout TO '1s';
@@ -280,7 +280,7 @@ For more details see `Understanding Postgres Parameter Context TO ;
@@ -290,7 +290,7 @@ Function Settings
In addition to :ref:`impersonated_settings`, PostgREST will also apply function settings as transaction-scoped settings. This allows functions settings to override
the impersonated and connection role settings.
-.. code-block:: postgresql
+.. code-block:: postgres
CREATE OR REPLACE FUNCTION myfunc()
RETURNS void as $$
@@ -340,7 +340,7 @@ Setting headers via pre-request
As an example, let's add some cache headers for all requests that come from an Internet Explorer(6 or 7) browser.
-.. code-block:: postgresql
+.. code-block:: postgres
create or replace function custom_headers()
returns void as $$
diff --git a/docs/tutorials/tut0.rst b/docs/tutorials/tut0.rst
index 3d9a5eae7c..7ef3d3cf61 100644
--- a/docs/tutorials/tut0.rst
+++ b/docs/tutorials/tut0.rst
@@ -29,11 +29,22 @@ If Docker is not installed, you can get it `here `, but this is all we need.
If you are not using Docker, make sure that your port number is correct and replace `postgres` with the name of the database where you added the todos table.
+.. note::
+
+ In case you had to adjust the port in Step 2, remember to adjust the port here, too!
+
Now run the server:
.. code-block:: bash
@@ -196,13 +211,17 @@ Now run the server:
# Running postgrest binary
./postgrest tutorial.conf
-You should see
+You should see something similar to:
.. code-block:: text
- Listening on port 3000
+ Starting PostgREST 12.0.2...
Attempting to connect to the database...
Connection successful
+ Listening on port 3000
+ Config reloaded
+ Listening for notifications on the pgrst channel
+ Schema cache loaded
It's now ready to serve web requests. There are many nice graphical API exploration tools you can use, but for this tutorial we'll use :code:`curl` because it's likely to be installed on your system already. Open a new terminal (leaving the one open that PostgREST is running inside). Try doing an HTTP request for the todos.
@@ -242,9 +261,9 @@ Response is 401 Unauthorized:
.. code-block:: json
{
- "hint": null,
- "details": null,
"code": "42501",
+ "details": null,
+ "hint": null,
"message": "permission denied for table todos"
}
diff --git a/docs/tutorials/tut1.rst b/docs/tutorials/tut1.rst
index 2bde08c5bf..59953ebe94 100644
--- a/docs/tutorials/tut1.rst
+++ b/docs/tutorials/tut1.rst
@@ -22,12 +22,11 @@ The previous tutorial created a :code:`web_anon` role in the database with which
grant usage on schema api to todo_user;
grant all on api.todos to todo_user;
- grant usage, select on sequence api.todos_id_seq to todo_user;
Step 2. Make a Secret
---------------------
-Clients authenticate with the API using JSON Web Tokens. These are JSON objects which are cryptographically signed using a secret known to only us and the server. Because clients do not know this secret, they cannot tamper with the contents of their tokens. PostgREST will detect counterfeit tokens and will reject them.
+Clients authenticate with the API using JSON Web Tokens. These are JSON objects which are cryptographically signed using a secret only known to the server. Because clients do not know this secret, they cannot tamper with the contents of their tokens. PostgREST will detect counterfeit tokens and will reject them.
Let's create a secret and provide it to PostgREST. Think of a nice long one, or use a tool to generate it. **Your secret must be at least 32 characters long.**
@@ -67,7 +66,7 @@ Ordinarily your own code in the database or in another server will create and si
.. note::
- While the token may look well obscured, it's easy to reverse engineer the payload. The token is merely signed, not encrypted, so don't put things inside that you don't want a determined client to see.
+ While the token may look well obscured, it's easy to reverse engineer the payload. The token is merely signed, not encrypted, so don't put things inside that you don't want a determined client to see. While it is possible to read the payload of the token, it is not possible to read the secret with which it was signed.
Step 4. Make a Request
----------------------
@@ -140,7 +139,7 @@ It's better policy to include an expiration timestamp for tokens using the :code
Epoch time is defined as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), January 1st 1970, minus the number of leap seconds that have taken place since then.
-To observe expiration in action, we'll add an :code:`exp` claim of five minutes in the future to our previous token. First find the epoch value of five minutes from now. In psql run this:
+To observe expiration in action, we'll add an :code:`exp` claim of five minutes in the future to our previous token. First find the epoch value of five minutes from now. In :code:`psql` run this:
.. code-block:: postgres
@@ -155,7 +154,7 @@ Go back to jwt.io and change the payload to
"exp": 123456789
}
-**NOTE**: Don't forget to change the dummy epoch value :code:`123456789` in the snippet above to the epoch value returned by the psql command.
+**NOTE**: Don't forget to change the dummy epoch value :code:`123456789` in the snippet above to the epoch value returned by the :code:`psql` command.
Copy the updated token as before, and save it as a new environment variable.
@@ -175,9 +174,9 @@ After expiration, the API returns HTTP 401 Unauthorized:
.. code-block:: json
{
- "hint": null,
- "details": null,
"code": "PGRST301",
+ "details": null,
+ "hint": null,
"message": "JWT expired"
}
@@ -207,7 +206,7 @@ PostgREST allows us to specify a stored procedure to run during attempted authen
First make a new schema and add the function:
-.. code-block:: plpgsql
+.. code-block:: postgres
create schema auth;
grant usage on schema auth to web_anon, todo_user;
@@ -255,8 +254,8 @@ The server responds with 403 Forbidden:
.. code-block:: json
{
- "hint": "Nope, we are on to you",
- "details": null,
"code": "42501",
+ "details": null,
+ "hint": "Nope, we are on to you",
"message": "insufficient_privilege"
}