Support for multipart/form-data? #922
Or is there a way to post raw binary to PostgREST so I can insert bytea in PostgreSQL? This is what I actually want to do here: open a file, read its bytes, make a POST request with those bytes in the body (no JSON here, because JSON forces a base64 encoding), and store them in a column of type bytea.
Right now posting raw binary is not possible in PostgREST, but I think you could encode to base64 in the client, post the payload, and decode it back to bytea on the database side.
OK, that works. Thanks.
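As a concrete illustration of that workaround, here is a minimal sketch; the files table, its columns, and the upload_base64 RPC are hypothetical names, not something from this thread. The client base64-encodes the file, posts it as ordinary JSON, and the function decodes it into a bytea column.

-- Hypothetical table holding the uploads.
CREATE TABLE files (
  id       serial PRIMARY KEY,
  filename text,
  content  bytea
);

-- Callable as POST /rpc/upload_base64 with a JSON body
-- such as {"fname": "a.png", "data": "<base64 of the file>"}.
CREATE FUNCTION upload_base64(fname text, data text) RETURNS void AS $$
  INSERT INTO files (filename, content)
  VALUES (fname, decode(data, 'base64'));
$$ LANGUAGE sql;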
I'm interested in implementing this. My proposal would be to use PostgreSQL COPY BINARY; this would make the insertion of all the data fast. For this we would need to: …
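For context, a sketch of the kind of statement PostgREST could issue under this proposal, reusing the hypothetical files table from above; the request body itself would have to be streamed to the server in COPY's binary wire format.

-- PostgREST would run something along these lines and stream the
-- binary-format data for the listed columns on STDIN.
COPY files (filename, content) FROM STDIN WITH (FORMAT binary);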
For anyone interested in this feature, consider supporting PostgREST development.
This is outside of my understanding, but I was wondering if https://github.com/nikita-volkov/postgresql-binary might be a library for the pg binary formats?
@alf-mindshift True. That would only leave adding support for …
An alternative would be a trigger that decodes base64 when a custom request header is set:

CREATE OR REPLACE FUNCTION b64decode() RETURNS trigger AS $$
BEGIN
IF current_setting('request.header.postgrest-encoding', TRUE) = 'base64' THEN
new.val = decode(convert_from(new.val, 'SQL_ASCII'), 'base64');
END IF;
RETURN new;
END
$$ LANGUAGE plpgsql;
CREATE TRIGGER decode
BEFORE INSERT OR UPDATE OF val
ON my_table
FOR EACH ROW
EXECUTE FUNCTION b64decode();
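Assuming my_table.val is a bytea column, as the trigger above implies, the decoding can be exercised directly in SQL by simulating the header GUC that PostgREST would otherwise set from the request:

-- Simulate the request header GUC, then insert base64 text cast to bytea;
-- the BEFORE trigger decodes it back to the raw bytes.
SELECT set_config('request.header.postgrest-encoding', 'base64', false);

INSERT INTO my_table (val)
VALUES (convert_to(encode('hello'::bytea, 'base64'), 'SQL_ASCII'));

-- Should return 'hello', i.e. the decoded original bytes.
SELECT encode(val, 'escape') FROM my_table;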
The data representations feature (#2523) will make it easy to receive base64 and save it to bytea.
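Roughly, and under the assumption that this works like the domain-plus-cast pattern of data representations (names here are illustrative; see the PostgREST docs for the exact requirements), a base64 representation for a bytea column might look like this sketch:

-- Hypothetical domain used as the column type instead of plain bytea.
CREATE DOMAIN base64_bytea AS bytea;

-- Decode the incoming JSON string from base64 into bytea on writes.
CREATE FUNCTION json_to_base64_bytea(json) RETURNS base64_bytea AS $$
  SELECT decode($1 #>> '{}', 'base64');
$$ LANGUAGE sql IMMUTABLE;

-- PostgreSQL itself ignores casts involving domains, but PostgREST can look
-- this cast up in the catalog and apply it to incoming payloads.
CREATE CAST (json AS base64_bytea) WITH FUNCTION json_to_base64_bytea(json) AS IMPLICIT;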
[1]: Don't just take my experience, here's someone who ran the numbers: incompressible data is, as expected, 1.333x the size in base64, 1.0001x the size in gzipped binary, and 1.009x the size in gzipped base64. An overhead of 0.9% is in my opinion too small to worry about unless the files are truly huge, in which case you're probably going to want to send them directly to your S3/GCS storage bucket anyhow.
One reason not to do this with COPY (as proposed above): pipeline mode doesn't support it (we'll get pipeline mode with haskellari/postgresql-libpq#42). Also, I believe if we solve #2711, we'll be able to do multipart in SQL.
It's already possible to use a single unnamed bytea argument to receive the raw request body. IMHO, the only downside to this is that it's currently not easily possible to provide other arguments/input to the RPC. So, e.g., when uploading a file you often want to send a filename with it, or some metadata about where to reference this file. Right now this can basically only be done by sending custom headers and then parsing them inside the function.

However, what if we were to support mixing arguments for RPCs in the body and via the query string? That could be really neat:

CREATE FUNCTION upload(BYTEA, filename TEXT) RETURNS void ...;

Right now this function would not be callable, I think; we expect a "single unnamed argument" function to match the whole body against that argument. But if we could call it like the following, that would be great:

POST /rpc/upload?filename=my_image.png HTTP/1.1
Content-Type: application/octet-stream

<raw file content as the body>
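If that calling convention were supported, the function body could be as simple as the following sketch; the files table is the hypothetical one from earlier in the thread, $1 would be bound to the raw request body, and filename to the query-string value.

-- $1: raw request body (bytea); filename: from the query string under the proposal.
CREATE FUNCTION upload(bytea, filename text) RETURNS void AS $$
  INSERT INTO files (filename, content) VALUES ($2, $1);
$$ LANGUAGE sql;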
Ah nice, it seems that would work. Though I guess the downsides are: …
Hi,
are there any plans to support multipart/form-data uploads? I need to upload raw binary. Since JSON forces base64 encoding, I am not only suffering from 33% more space consumption but also from extra CPU usage for encoding/decoding.
It would be helpful to be able to upload raw binaries with e.g. multipart/form-data.