Simple alternative if you are not tied to the JS graphql ecosystem. #50

Closed
enjoylife opened this issue Dec 12, 2020 · 3 comments

enjoylife commented Dec 12, 2020

TL;DR: If you are not using a server-side GraphQL implementation with limitations on how scalars and other types should be "resolved" (such limitations arise in Apollo's JS server; see #11 and here), and you're not tied to graphql-upload, then I would not recommend using this spec verbatim. Instead, just use UUIDs to map between the schema and each body part.

# 1. Define a scalar which will be the type and identifier we use when reading off the multipart data.
scalar MultiPartUUID

# 2. Define an input type using said scalar.
input FileUpload {
  uuid: MultiPartUUID
  # Other data which your app requires for upload,
  # e.g. `fileName` if you want to trust a user's provided file extension, etc.
}
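
On the server the scalar itself needs no special resolution behaviour. As a minimal sketch (assuming a graphql-js based schema purely for illustration; any implementation's custom scalar support works the same way), it is just a string passthrough:

const { GraphQLScalarType, Kind } = require('graphql');

// Hypothetical MultiPartUUID scalar: it only carries the string through;
// the actual file lookup happens later in the resolvers.
const MultiPartUUID = new GraphQLScalarType({
  name: 'MultiPartUUID',
  description: 'Multipart form field name identifying an uploaded file.',
  serialize: (value) => String(value),
  parseValue: (value) => String(value),
  parseLiteral: (ast) => (ast.kind === Kind.STRING ? ast.value : null),
});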

The environment I'm assuming: on the client side, we collect files from an <input type="file" /> element, which provides File objects. We ultimately want to use these File objects as variables within our request. Instead of doing the funky null replacement this spec defines, which is semantically questionable, just use the scalar to hold a uuid and use it when constructing requests. High-level steps:

  1. Set a uuid for each file
  2. Use the uuids within the variables to satisfy the schema, e.g. { "variables": { "files": [{ "uuid": "xxx" }] } }
  3. Fire off the request, appending the file(s) using each file's uuid as its multipart field name.

Following such an approach creates a curl request along the lines of:

curl localhost:3001/graphql \
  -F operations='{ "query": "mutation ($files: [FileUpload!]!) { filesUpload(files: $files) { id } }", "variables": { "files": [{ "uuid": "xxx" }, { "uuid": "yyy" }] } }' \
  -F xxx=@a.txt \
  -F yyy=@b.txt

Then, server side, you simply read off the first part of the body, which holds the query. Pretty much every GraphQL implementation gives you access to it when parsing the AST, and you can execute the schema as desired once that first part has been read. From there you could either block and consume all the remaining parts, setting them aside in memory or in temp files for the resolvers to look up by each file variable's uuid, or you could read the parts on demand, streaming in the data, with each resolver waiting in parallel for the others to consume the body up to the part it needs.
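
As a minimal sketch of the first (buffer everything up front) strategy, assuming some multipart parser that yields parts with a field name and a buffer (the parser itself is left unspecified here):

// Rough sketch only: `parts` is assumed to be an async iterable of
// { fieldName, buffer } objects produced by whatever multipart parser is used.
async function bufferFileParts(parts) {
  const filesByUuid = new Map();
  for await (const { fieldName, buffer } of parts) {
    filesByUuid.set(fieldName, buffer);
  }
  // Expose the lookup via the GraphQL context; each resolver then does
  // filesByUuid.get(input.uuid) for its FileUpload input.
  return filesByUuid;
}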

For additional clarity, here's an outline for a JS client and Go server setup.

1. Set uuids on the files to upload

import { v4 as uuidV4 } from 'uuid';

// The form component collects `uploadedFiles` from the <input type="file" /> element.
function onChange(files) {
  return files.map((f) => {
    // Make sure each file has its uuid ready.
    if (!f.uuid) {
      f.uuid = uuidV4();
    }
    return f;
  });
}
// => uploadedFiles

2. Map the variables to satisfy the schema, e.g. { "variables": { "files": [{ "uuid": "xxx" }] } }

const variables = {
  files: uploadedFiles.map(({uuid}) => {
    return { uuid, /* fileName, etc. */}
  }),
}

3. Fire off the request, appending the file(s) using each file's uuid as the multipart field name

const data = new FormData();
data.append('operations', JSON.stringify(graphQLParams));
// Look ma, no `map` field included.

for (const file of uploadedFiles) {
  data.append(file.uuid, file, file.name);
}

const xhr = new XMLHttpRequest();
xhr.open('POST', '/graphql'); // endpoint path assumed
xhr.send(data);

Server side, a rough example:

var MultiPartKey = "ctx-key"

func handler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	reader, _ := r.MultipartReader()

	// Step 1: get the operations part,
	// e.g. Content-Disposition: form-data; name="operations"
	operationsPart, _ := reader.NextPart()

	// Insert the multipart reader into the request context for use in resolvers.
	r = r.WithContext(context.WithValue(r.Context(), MultiPartKey, reader))

	// Execute your schema using operationsPart...
	_ = operationsPart
}

In the resolvers....

func utilToWaitForPart(ctx context.Context, uuid string) *multipart.Part {
	// If run in parallel, this could wait until other resolvers have read the body up to the part needed.
	reader := ctx.Value(MultiPartKey).(*multipart.Reader)

	// Would need to smartly wait until the part is ready.
	part, _ := reader.NextPart()
	for part.FormName() != uuid {
		// *sync.Cond.Wait()... this is only a rough sketch; a real
		// implementation needs more sync primitives to coordinate readers.
		part, _ = reader.NextPart()
	}
	return part
}

func Resolver(ctx context.Context, input FileUploadInput) (Response, error) {
	for _, file := range input.Files {
		// Use the uuid to get the correct part's data.
		part := utilToWaitForPart(ctx, file.uuid)
		bytes, _ := ioutil.ReadAll(part)
		// Leverage the file bytes...
		_ = bytes
	}
	return Response{}, nil
}

If you made it this far, I just want to remind you that if you are using Apollo or any of the heavy JS GraphQL abstractions, this project's spec is good and correctly works within the limitations of Node and the rest of the Node-based GraphQL libraries. But if you're outside of that world, the simple UUID mapping I have outlined above is a viable alternative.

enjoylife changed the title from "Simply alternative if you are not tied to the JS graphql ecosystem." to "Simple alternative if you are not tied to the JS graphql ecosystem." on Dec 12, 2020
@jaydenseric (Owner) commented

I left this open until I had the time and mental energy to compose an adequate response. I had planned to go above and beyond by digging up all the technical reasons we abandoned similar ideas to replace files with unique IDs in the early research and development days, but as reasoned below I don't think such effort is required to close this issue. If you're curious about the history, there is a fair bit to read in old PRs and issues in this repo and the graphql-upload repo.

The first thing that comes to mind (although it's a pretty exotic one) is that the current spec allows files to be used anywhere in the GraphQL operations objects, not just in variables. Would that be supported by your proposal?

With your proposal, which doesn't include a map of where files are used in the operations, it's not clear to me how one file variable used as an argument in multiple mutations in the same request could be implemented on the server. Is that something you have considered?

Performance-wise, the map allows the server to cheaply see up front how many files there are and where they are expected to be used in the operation, without having to parse the GraphQL query looking for certain upload scalars in an AST, etc. For example, the map makes implementing the maxFiles setting in graphql-upload trivial:

https://github.com/jaydenseric/graphql-upload/blob/2ee7685bd990260ee0981378496a8a5b90347fff/public/processRequest.js#L202-L207
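
For illustration, here is a rough sketch (not graphql-upload's actual code) of why the map makes this cheap: the map JSON arrives before any file part is read, so the file count is known up front.

// `map` is the parsed JSON of the `map` multipart field, e.g.
// { "1": ["variables.files.0"], "2": ["variables.files.1"] }.
function assertMaxFiles(map, maxFiles) {
  const fileCount = Object.keys(map).length;
  if (fileCount > maxFiles) {
    throw new Error(`${maxFiles} max file uploads exceeded.`);
  }
}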

> Simple alternative if you are not tied to the JS graphql ecosystem.

The point of this spec is to create a standard for interoperability between all GraphQL clients and servers, regardless of the languages or ecosystems. A spec that's incompatible with or overly burdensome for a JS browser implementation or a Node.js graphql-based server implementation (the most used and relevant GraphQL environments) immediately fails at this goal.

I welcome constructive criticism around this spec that is actionable, but it seems you’re proposing an alternative with conflicting goals. Please exercise caution as multiple different specs for multipart GraphQL requests would be an interoperability disaster for the GraphQL community, beyond the JS GraphQL ecosystem.

At this point, this spec and its numerous implementations have matured, so not aligning with the greater community will have significant productivity costs for your team. You won't be able to use the nice Altair playground GUI to test file upload variables, you won't be able to easily swap out your GraphQL client or server implementation without rewriting the other end, and so on.

> I just want to remind you that if you are using Apollo or any of the heavy JS GraphQL abstractions, this project's spec is good and correctly works within the limitations of Node and the rest of the Node-based GraphQL libraries. But if you're outside of that world, the simple UUID mapping I have outlined above is a viable alternative.

This spec and the graphql-upload JS server-side implementation are not tied in any way to Apollo, or a "heavy js graphql abstraction" - they are purposefully lightweight, generic, and compatible with most GraphQL clients and servers. I’ve dedicated my life to reducing complexity in the JS ecosystem and obsessively avoid heavy abstractions, creating new lightweight tools where none exist (e.g. graphql-api-koa has only a 128kB install size, instead of apollo-server-koa, which has a 14.4 MB install size).

Here is a breakdown of the graphql-upload dependencies:

[Screenshot: graphql-upload dependency breakdown]

  • busboy is responsible for parsing the multipart request stream, and is > 90% of the graphql-upload install/bundle size; hopefully one day someone will publish a more modern, lightweight parser we can use instead. It would not be eliminated if we adopted your spec proposal, since either way a multipart request needs to be parsed.
  • http-errors is a shared dependency with Koa and Express, so if you have either installed already this dependency does not increase your node_modules size. It is necessary to produce errors that result in appropriate HTTP response codes in the event of errors when processing the multipart request. It would not be eliminated if we adopted your spec proposal.
  • fs-capacitor is used to buffer file uploads to the filesystem and coordinate simultaneous reading and writing. This is necessary because a single variable representing a file upload can be used as input to multiple mutations in one request, and the multiple resolvers need to be able to process the file upload stream separately at the same time without interfering with each other. Your spec proposal would not allow this dependency to be removed.
  • isobject is a tiny dependency used for basic runtime type checking of things, such as checking if the GraphQL operation in JSON is an object, and not something unexpected. Your spec proposal would not allow this dependency to be removed.
  • object-path is the only dependency your spec proposal would allow us to remove. It has just a 47.9kB install size and a 1.3kB bundle size. But this would not be a net saving, as the alternative UUID approach would need new dependencies, e.g. uuid, which has a 114kB install size and a 3.3kB bundle size. You might be able to shop around for a lighter UUID dependency, but we're splitting hairs. Node.js v15.6 introduced a new crypto.randomUUID API, but it will be years before published software can make use of it. Note that creating an object path in the browser is generally the result of zero-dependency recursion/looping that can be done at the same time as you scan for files (e.g. see the extract-files source, and the sketch after this list), but a UUID solution for the browser is for sure a dependency to add to the client bundle.
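
For illustration, here is a rough sketch of that zero-dependency recursion (this is not extract-files' actual API; it just shows the shape of the work a client does to replace each file with null while recording its object path for the map field):

// Walk the variables, replacing each File with null and recording its
// object path, e.g. "variables.files.0" -> File.
function extractFiles(value, path = '') {
  const files = new Map();
  if (typeof File !== 'undefined' && value instanceof File) {
    files.set(path, value);
    return { clone: null, files };
  }
  if (value && typeof value === 'object') {
    const clone = Array.isArray(value) ? [] : {};
    for (const [key, child] of Object.entries(value)) {
      const result = extractFiles(child, path ? `${path}.${key}` : key);
      clone[key] = result.clone;
      for (const [childPath, file] of result.files) files.set(childPath, file);
    }
    return { clone, files };
  }
  return { clone: value, files };
}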

Overall, it appears your proposed spec would not really decrease implementation complexity, and in some cases it might actually increase it.

Probably UUID is overkill; perhaps your proposal could be simplified to IDs that are only unique amongst the sibling multipart request field names.
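
For example, a client-side sketch (browser FormData API; `operations` and `files` are assumed to exist) where the multipart field names are simply array indices, unique only within the one request:

const form = new FormData();
form.append('operations', JSON.stringify(operations));

files.forEach((file, index) => {
  const fieldName = String(index + 1); // "1", "2", ...; no UUID dependency needed
  // The same fieldName would also appear inside the operations/variables
  // wherever this file is referenced.
  form.append(fieldName, file, file.name);
});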

> instead of doing the funky null replacement this spec defines, which is semantically questionable

Disagree that it's semantically questionable; please see #4 (comment) .

Further discussion here is welcome; closing because at this point I don't plan to action the suggestions raised in this issue in a major breaking spec change.

@dylanowen commented May 14, 2021

I'd like to reopen this discussion on @enjoylife's suggested simplification of the spec. I believe their suggestion can greatly simplify this specification and ease the implementation for typesafe frameworks while still achieving the original goals for JS GraphQL libraries.

Just for clarity, this is a rough draft of how I imagine the new version of the spec could look, building off of what @enjoylife has defined:

  1. Requests are multipart/form-data
  2. There is exactly 1 non-file form field. This contains the GraphQL request. The GraphQL server has a Scalar backed by a String that references the name of the related file form field.
    1. It doesn't need to be named operations anymore but it might be better to reserve this name for further extension
    2. It doesn't need to be before the file form fields but it might be best to enforce this for performance
    3. We might not need to specify the name of our scalar but Upload seems good.
  3. Every other form field should be a file with a unique name which will be referenced by the Upload scalar.

ex:

curl http://localhost:4000/graphql \
  -F gql='{ "query": "mutation { upload(files: [\"file_id\"]) }", "variables": null }' \
  -F file_id=@a.txt

I've implemented a prototype of this in my own Scala server and I have a very rough implementation of this for Apollo:

Prototype Apollo Attachments Gist

Regarding the comments in #11 (describing how the map field is necessary for performance): this is an implementation detail of the server and something we can solve for Apollo. If our Apollo plugin finds an Upload(file_id), we grab a promise from our shared Uploads object, which we resolve once we parse that file, or reject after we finish parsing the entire request. This lets us execute our GraphQL request as soon as we find it among the form fields.
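
Here is a rough sketch (not the gist's actual code) of such a shared Uploads registry: resolvers ask for a file by its form field name and get a promise that settles once that part has been parsed.

class Uploads {
  constructor() {
    this.entries = new Map(); // field name -> { promise, resolve, reject }
  }
  entry(fieldName) {
    if (!this.entries.has(fieldName)) {
      let resolve, reject;
      const promise = new Promise((res, rej) => {
        resolve = res;
        reject = rej;
      });
      this.entries.set(fieldName, { promise, resolve, reject });
    }
    return this.entries.get(fieldName);
  }
  // Called by the Upload scalar / resolvers.
  get(fieldName) {
    return this.entry(fieldName).promise;
  }
  // Called by the multipart parser as each file part arrives.
  resolve(fieldName, file) {
    this.entry(fieldName).resolve(file);
  }
  // Called once the whole request body has been parsed; rejecting an
  // already resolved entry is a no-op, so only missing files reject.
  finish() {
    for (const { reject } of this.entries.values()) {
      reject(new Error('File not found in the multipart request.'));
    }
  }
}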

This is a trace from running my gist:

curl http://localhost:4000/graphql \
  -F gql='{ "query": "mutation { upload(files: [\"file1\", \"file2\", \"file1\"]) }", "variables": null }' \
  -F file1=@Untitled \
  -F file2=@API\ Wizard.mp4

[Screenshot: request trace showing GraphQL execution starting while file parts are still being parsed]

You can see that we've achieved the same async waterfall where our GraphQL request execution starts immediately.


> The first thing that comes to mind (although it's a pretty exotic one) is that the current spec allows files to be used anywhere in the GraphQL operations objects, not just in variables.

Yes, each file is always referenced by its uid, so your server can choose to arrange its JSON however it desires without any issues.

An added benefit of this proposal over the current spec is the ability to define file references outside of variables. Right now you're required to always have a "variables" section to reference via your map form-field. It's not possible to send something like:

curl http://localhost:4000/graphql \
  -F gql='{ "query": "mutation { upload(files: [\"file_id\", \"file_id\"]) }", "variables": null }' \
  -F file_id=@a.txt

> With your proposal, which doesn't include a map of where files are used in the operations, it's not clear to me how one file variable used as an argument in multiple mutations in the same request could be implemented on the server. Is that something you have considered?

This doesn't really change between the current spec and this proposal. You're always looking up the file in your context based on its uid, and there's no reason you can't repeatedly look up the same file by that uid.

ex:

curl http://localhost:4000/graphql \
  -F gql='{ "query": "mutation { a: upload(files: [\"file_id\"]) b: upload(files: [\"file_id\"]) }", "variables": null }' \
  -F file_id=@a.txt

> Performance-wise, the map allows the server to cheaply see up front how many files there are and where they are expected to be used in the operation, without having to parse the GraphQL query looking for certain upload scalars in an AST, etc. For example, the map makes implementing the maxFiles setting in graphql-upload trivial.

Although this is true, under the proposed change we'll always be parsing GraphQL requests in our GraphQL servers anyway; it's a matter of leveraging the server libraries to facilitate this. This is something that could maybe be handled by an Apollo validationRule, or definitely by an Apollo plugin. We're writing a spec for GraphQL; we should be using the tools our GraphQL servers provide to us.
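
For example, a hypothetical graphql-js style validation rule (usable via Apollo Server's validationRules option) could enforce a file limit by counting Upload-typed variable definitions before any file part is consumed:

const { GraphQLError } = require('graphql');

// Hypothetical sketch: reject operations declaring more Upload variables
// than allowed. Literal in-query upload references would need an
// equivalent check on argument values.
const maxUploadsRule = (maxFiles) => (context) => {
  let uploads = 0;
  return {
    VariableDefinition(node) {
      // Unwrap NonNull/List wrappers to reach the named type.
      let type = node.type;
      while (type.kind !== 'NamedType') type = type.type;
      if (type.name.value === 'Upload' && ++uploads > maxFiles) {
        context.reportError(
          new GraphQLError(`${maxFiles} max file uploads exceeded.`)
        );
      }
    },
  };
};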

Even if we're writing an implementation of this spec for a framework that gives zero options to validate our GraphQL request, the current JS spec implementation has already defined code that would catch maxFiles as files stream through via busboy: https://github.com/jaydenseric/graphql-upload/blob/2ee7685bd990260ee0981378496a8a5b90347fff/public/processRequest.js#L67


> The point of this spec is to create a standard for interoperability between all GraphQL clients and servers, regardless of the languages or ecosystems.

Exactly, and yet this spec appears to be designed to run as JS server middleware. There is a good amount of indirection, implementation-specific solutions, and dependencies on the implementing language/framework. This all creates more work for server implementers.

I did an audit of the various server implementations, and all of the ones I looked at depend on one of the following:

  • Their language being dynamic
  • Their language having a top Object type and casting (ignoring type safety)
  • Or they basically implement the proposed spec change internally

There doesn't seem to be a good way to add this specification to a typesafe language/framework without it devolving into the proposed spec change.

For Example async-graphql in Rust

https://github.com/async-graphql/async-graphql/blob/f62843cbd34ef9bf28f70f8df07d4f61f8038e0a/src/request.rs#L115

*variable = Value::String(format!("#__graphql_file__:{}", self.uploads.len() - 1));

we can see that internally, after the map is parsed, we replace the null inside the variables definition with a uid to reference the specific file.

https://github.com/async-graphql/async-graphql/blob/f62843cbd34ef9bf28f70f8df07d4f61f8038e0a/src/types/upload.rs#L99

/// Get the upload value.
pub fn value(&self, ctx: &Context<'_>) -> std::io::Result<UploadValue> {
   ctx.query_env.uploads[self.0].try_clone()
}

When we get the value out of the Scalar, we pull the actual file stream out of the context via that same uid.

For Example caliban in Scala

https://github.com/ghostdogpr/caliban/blob/660beeae538768d817a73cb4535e0e3bd1a8cb82/adapters/play/src/main/scala/caliban/uploads/Upload.scala#L89

// If we are out of values then we are at the end of the path, so we need to replace this current node
// with a string node containing the file name
StringValue(name)

We're setting our variable value to the filename in order to pull it out of the context later.

For Example sangria in Scala

(This isn't actually in the library but this gist describes how to implement it).

https://gist.github.com/dashared/474dc77beb67e00ed9da82ec653a6b05#file-graphqlaction-scala-L54

(GraphQLRequest(gql = gqlData, upload = Upload(mfd.file(mappedFile._1)), request = request))

we store our uploaded file separately from our GraphQL request.

https://gist.github.com/dashared/474dc77beb67e00ed9da82ec653a6b05#file-controller-scala-L68

userContext = SangriaContext(upload, maybeUser),

we pass our file through via the context.

https://gist.github.com/dashared/474dc77beb67e00ed9da82ec653a6b05#file-exampleschema-scala-L15

val maybeEggFile = sangriaContext.ctx.maybeUpload.file.map(file => Files.readAllBytes(file.ref))

Inside our resolver we look up the file in the context to actually use it.

All of these examples have implemented @enjoylife's proposal under the covers in order to preserve some form of type safety.

Note:

We can use these libraries as a guide for how to support both the current version of the spec and the proposed change in the same server, with plenty of code sharing.

For Reference graphql-upload in JavaScript

This JavaScript implementation depends on the language being dynamic so that we can overwrite our variables with an Upload instance.

https://github.com/jaydenseric/graphql-upload/blob/60f428bafd85b93bc36524d1893aa39501c50da1/public/processRequest.js#L232

operationsPath.set(path, map.get(fieldName));

We assign the Upload instance to the specified location in our GraphQL JSON request.

https://github.com/jaydenseric/graphql-upload/blob/60f428bafd85b93bc36524d1893aa39501c50da1/public/GraphQLUpload.js#L81

if (value instanceof Upload) return value.promise;

When parsing our Scalar value, we check and cast to make sure we found an Upload instance.

For Reference graphql-java-servlet in Java

This Java implementation depends on Java's top-level Object type so that we can check and cast our variables on the fly.

https://github.com/graphql-java-kickstart/graphql-java-servlet/blob/eb4dfdb5c0198adc1b4d4466c3b4ea4a77def5d1/graphql-java-servlet/src/main/java/graphql/kickstart/servlet/GraphQLMultipartInvocationInputParser.java#L138

objectPaths.forEach(objectPath -> VariableMapper.mapVariable(objectPath, variables, part));

We set each http.Part in our variable map.

https://github.com/graphql-java-kickstart/graphql-java-servlet/blob/eb4dfdb5c0198adc1b4d4466c3b4ea4a77def5d1/graphql-java-servlet/src/main/java/graphql/kickstart/servlet/apollo/ApolloScalars.java#L28

if (input instanceof Part) {
  return (Part) input;

When parsing our Scalar, we check and cast to make sure we found an http.Part instance.


> This spec and the graphql-upload JS server-side implementation are not tied in any way to Apollo, or a "heavy js graphql abstraction"

I can't speak for @enjoylife, but I don't believe the proposed changes to this spec imply that the code for graphql-upload is heavy. graphql-upload is quite elegant in its implementation; in fact, for my Apollo prototype I borrowed heavily from graphql-upload. The heavy parts are that:

  • We have multiple implementation-specific details baked into the specification
  • Using null as a placeholder is really another server implementation detail; it doesn't make sense from the client's perspective
  • There is a lot of indirection in the variables in order to support implementing GraphQL upload libraries inside of JS middleware

In Summary

"The point of this spec is to create a standard for interoperability between all GraphQL clients and servers, regardless of the languages or ecosystems" and the current iteration of this specification constrains non-dynamic languages in order to be written inside of a JS Server Middleware. Evolving this specification will better fit the growing GraphQL ecosystem and make this specification future proof so that everybody can benefit from the work you've done here.

@dylanowen commented

I've opened #55 to continue the discussion.
