feat: allow customtypes ref in arguments/main #497
base: main
Conversation
🦋 Changeset detected. Latest commit: f1b8e0e. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Spent an hour exploring this today and it works in the way I was hoping. I think I just negated my own testing ask, and we've reviewed all of this in smaller chunks.
This is a change in API, but I'm not sure what an API review would add here since we have followed the existing pattern consistently.
The SchemaProcessor logic is a bit hard to parse (as discussed elsewhere), but it works and the whole thing needs a refactor more than we need to do the refactor right now.
LGTM.
it('accepts CustomType in arguments', () => {
  const operation: CustomOperation<any, "arguments" | "for", "queryCustomOperation"> = a.query().arguments({
    customArg: a.customType({
      field1: a.string(),
      field2: a.integer(),
    }),
  });
});
Would it be worth adding some type comparison testing around Schema['fcnCall']['functionHandler']?
It appears to work as I would expect, but this test covers the definition without exercising the resulting types.
In my example app, I leveraged the types for the handler definition like:
export const handler: Schema['fcnCall']['functionHandler'] = async (x) => {
  return {
    todoCount: (await client.models.Todo.list()).data.length,
    x: JSON.stringify(x.arguments),
  };
};

// Picking into the type to inspect the arg types
type T = Parameters<Schema['fcnCall']['functionHandler']>[0]['arguments']['arg1'];
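One way to turn the handler typing above into the requested type-comparison test is a compile-time assertion on the derived argument types. In this sketch, `FunctionHandler` is a hypothetical stand-in for `Schema['fcnCall']['functionHandler']` (the real type is derived from the schema definition), and `Equal`/`Expect` are the usual conditional-type helpers:

```typescript
// Type-level assertion helpers (common community pattern, not library built-ins).
type Equal<X, Y> =
  (<T>() => T extends X ? 1 : 2) extends (<T>() => T extends Y ? 1 : 2) ? true : false;
type Expect<T extends true> = T;

// Hypothetical stand-in for Schema['fcnCall']['functionHandler'].
type FunctionHandler = (event: {
  arguments: { arg1: string };
}) => Promise<{ todoCount: number; x: string }>;

// Pick into the handler type to inspect the argument types; this line
// fails to compile if the derived argument type drifts from `string`.
type Arg1 = Parameters<FunctionHandler>[0]['arguments']['arg1'];
type _argTest = Expect<Equal<Arg1, string>>;

// A conforming handler; also fails to compile if the handler type drifts.
const handler: FunctionHandler = async (event) => ({
  todoCount: 0,
  x: JSON.stringify(event.arguments),
});
```

The type-level lines compile to nothing at runtime; the value of this pattern is that a type regression turns into a build failure rather than a broken deployed handler.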
packages/integration-tests/__tests__/defined-behavior/2-expected-use/custom-operations.ts
Overall changes look good!
Seeing 2 things that are off from my perspective:
- If we have a nested CustomType in the args like
myMutation: a
  .mutation()
  .arguments({
    loc: a.customType({
      lat: a.float(),
      long: a.float(),
      meta: a.customType({
        timestamp: a.timestamp(),
      }),
    }),
  })
  .returns(
    a.customType({
      lat: a.float(),
      long: a.float(),
    }),
  ),
We get the following GQL output:

type MyMutationReturnType {
  lat: Float
  long: Float
}

type Mutation {
  myMutation(loc: MyMutationLocInput): MyMutationReturnType
}

input MyMutationLocInput {
  lat: Float
  long: Float
  meta: MyMutationLocInputMetaInput
}

MyMutationLocInputMetaInput is referenced in the top-level input, but does not get generated in the output itself.
- We're introducing quite a bit of unnecessary additional complexity to SchemaProcessor.ts. I'm seeing quite a bit of prop-drilling and new conditionals and iteration that should not be needed. Given the number of changes to this file and their interdependence, it's difficult to properly communicate this via specific comments, but maybe we can pair on it?
Ultimately, this part is non-blocking. If the functionality works, we can defer cleaning up the code to a broader refactor task, as Aaron suggested above.
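To make the first point concrete: the missing `MyMutationLocInputMetaInput` suggests the input generation only renders the top-level custom type. A hedged sketch of a recursive approach that emits nested inputs before the parent (all names here, `FieldDef` and `generateArgInputs`, are illustrative, not the library's internals):

```typescript
// Illustrative only: not the actual SchemaProcessor implementation.
type FieldDef =
  | { kind: 'scalar'; gqlType: string }
  | { kind: 'customType'; fields: Record<string, FieldDef> };

const capitalize = (s: string): string => s.charAt(0).toUpperCase() + s.slice(1);

// Emits `input ${base}Input { ... }` for `fields`, recursing first so that
// nested custom types produce their own input definitions too.
function generateArgInputs(
  base: string,
  fields: Record<string, FieldDef>,
  out: string[] = [],
): string[] {
  const lines: string[] = [];
  for (const [fieldName, def] of Object.entries(fields)) {
    if (def.kind === 'customType') {
      const nestedBase = `${base}Input${capitalize(fieldName)}`;
      generateArgInputs(nestedBase, def.fields, out); // nested inputs first
      lines.push(`  ${fieldName}: ${nestedBase}Input`);
    } else {
      lines.push(`  ${fieldName}: ${def.gqlType}`);
    }
  }
  out.push(`input ${base}Input {\n${lines.join('\n')}\n}`);
  return out;
}
```

Under this naming scheme, the `myMutation` example would yield `MyMutationLocInputMetaInput` followed by `MyMutationLocInput`, so the referenced nested input is always generated.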
One comment. I got pulled away while reviewing this today. It's at the top of my list for tomorrow morning.
This change is a bit large and hard to parse. Can we break some of the refactor changes out and get them merged ahead of the focused deliverable?
Thinking of:
- White space changes
- graphql render order changes
- passing auth down (maybe this can or can't be broken out)
Basically anything that isn't clearly pitched in the description.
There are enough moving parts here that anything we can do to focus/narrow the PR will help me get confidence we're doing the right thing. I would like to refactor the SchemaProcessor into something easier to work around, but that should follow this change, not block it.
const customOperationTypes = generateCustomOperationTypes(customOperations);

const schemaComponents = [
  ...Array.from(enumTypes),
Looking over the snapshot tests, there's a bit of noise as the enum types move before the other definitions. Semantically, I think it makes sense that they would be defined first, but it doesn't appear to have made a difference in the graphql definitions.
- Why did this change include the reordering?
- As a pure refactor, could we make this change separately to remove some of the less related diff from this change?
The reordering was an unintended side effect of implementing support for custom types and refs in arguments. To handle these new argument types, separate processing for different schema components was introduced. This led to enums being collected and added independently, resulting in their reordering relative to other definitions.
I've made adjustments to minimize this reordering. Many snapshots now match the original order, but some differences persist, particularly with types having specific directives. These changes don't affect the GraphQL definitions functionally.
I don't mind the reordering. I think it makes sense that things get accumulated and a logical order is applied when rendering the graphql document, even though that's inconsistent with the order from before. It's a no-effect refactor that makes a good isolated change, something easy to commit ahead of the effective change.
@@ -1401,6 +1399,28 @@ describe('custom operations + custom type auth inheritance', () => {
  );
});

test('inline custom type inherits auth rules from referencing op', () => {
The change description doesn't talk about inherited auth rules. I'm puzzled by how the test changes in this file relate back to the broader change.
Took a closer look here expecting to find arguments snapshot tests for cases similar to the client type tests in the .t.test file. It looks like we have some related tests in the e2e defined-behavior testing. Why are they there vs here?
My thinking is that we would have unit tests covering many cases and basic behavior testing in the e2e, but I'm open to this being misaligned with guidance from others. Interested to understand.
I had overlooked the connection between the test changes and the broader change in the description. The current tests, including those for inherited auth rules, cover the functionality for custom types and refs in arguments. The e2e defined-behavior tests complement the unit tests here by covering end-to-end scenarios.
  throw new Error(
    `Invalid field definition for ${fieldName}. DB-generated fields are only supported with PostgreSQL data sources.`,
  );
}

if (isGenerated && !canGenerateFieldType(fieldType)) {
-  throw new Error(`Incompatible field type. Field type ${fieldType} in field ${fieldName} cannot be configured as a DB-generated field.`);
+  throw new Error(
+    `Incompatible field type. Field type ${fieldType} in field ${fieldName} cannot be configured as a DB-generated field.`,
+  );
There's quite a bit of reformatting noise in this change. I'm not opposed to cleaning/standardizing things, but in a change this size it might be better to pull reformatting / no-op refactors out to their own change to keep the change more concise / easy to review.
You're right. I'll change this.
const schemaComponents = [
  ...gqlModels,
  ...Array.from(enumTypes),
I like that schemaComponents replaces gqlModel; it could also be gqlComponents.
Because the method is so long, I think I liked having the components added to the list as they were completed, so that I know what to keep paying attention to in this too-long method. Could we factor back to pushing onto the list?
I've implemented the suggested changes. The schemaPreprocessor now uses a single gqlComponents array, and components are added to this array as they are processed throughout the method.
  });
}
} else if (typeDef.type === 'enum') {
  const enumDefinition = `enum ${typeName} {\n ${typeDef.values.join('\n ')}\n}`;
Commenting out this line doesn't break tests. Either we're missing a test or this line isn't doing anything valuable.
implicitTypes.forEach(([typeName, typeDef]) => {
  if (isEnumType(typeDef)) {
    const enumTypeDefinition = `enum ${typeName} {\n ${typeDef.values.join('\n ')}\n}`;
I wonder how much of this logic can live elsewhere if we are accruing components one by one in the pattern we were using with gqlModels.
I've simplified the implicit type processing logic by removing redundant variables.
As for moving this logic elsewhere, the current placement allows for processing types at the earliest point where we have sufficient context, particularly information about return types from transformCustomOperations. The full schema context is necessary to accurately determine whether a type should be an input type or a regular type.
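The input-vs-regular-type point can be illustrated with a small sketch: the same implicit custom type renders with a different GraphQL keyword depending on where it is referenced, which is only known once the whole schema has been walked. Names here (`renderImplicitType`, `Usage`) are assumptions for illustration, not the processor's actual API:

```typescript
// Illustrative only: the usage position decides the GraphQL keyword.
type Usage = 'argument' | 'return';

function renderImplicitType(
  name: string,
  fields: Record<string, string>,
  usage: Usage,
): string {
  // Types referenced from arguments must be GraphQL `input` types;
  // types referenced from return positions are ordinary object `type`s.
  const keyword = usage === 'argument' ? 'input' : 'type';
  const body = Object.entries(fields)
    .map(([field, gqlType]) => `  ${field}: ${gqlType}`)
    .join('\n');
  return `${keyword} ${name} {\n${body}\n}`;
}
```

This is why the classification can't happen while fields are first encountered: the `usage` value is only available after return types and arguments have both been collected.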
      field2: a.integer(),
    }),
  });
});
I might expect Equal<A,B> to show up here. Could you use type equality instead of setting the type?
type A = CustomOperation<
any,
'arguments' | 'for',
'queryCustomOperation'
>;
const example = a.query().arguments({
customArg: a.customType({
field1: a.string(),
field2: a.integer(),
}),
});
type test = Expect<Equal<A, typeof example>>;
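Expect and Equal in the snippet above aren't TypeScript built-ins; the repo may already ship its own, but a common sketch of these type-level assertion helpers looks like this:

```typescript
// Common conditional-type trick: the two generic function types are mutually
// assignable only when X and Y behave identically under every instantiation.
type Equal<X, Y> =
  (<T>() => T extends X ? 1 : 2) extends (<T>() => T extends Y ? 1 : 2) ? true : false;
type Expect<T extends true> = T;

// Compile-time checks: each annotation errors if Equal yields the wrong branch.
const sameShape: Equal<{ a: string }, { a: string }> = true;
const differentShape: Equal<{ a: string }, { a: number }> = false;
```

Because `Expect<T extends true>` constrains its parameter, `type test = Expect<Equal<A, typeof example>>` fails the build whenever the two types diverge, which is exactly the regression signal the definition-only test was missing.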
Implemented checks as suggested.
@@ -1316,6 +1386,64 @@ const mergeCustomTypeAuthRules = (
  }
};

function generateInputTypes(
If we control the "InputTypes" naming that I think is introduced here, could we change it to "ArgumentTypes"? Is there an important difference between arguments and inputs that I'm not seeing? If not, I think it would be better to stick to one name, and I think "arguments" is the customer-facing name.
const scalarDefinition = `scalar ${typeName}`;
generatedTypes.add(scalarDefinition);
I could see removing the extra local variable and performing this on one line for cases like this.
Description of changes: This PR allows the use of CustomType and RefType in arguments for custom operations in the @aws-amplify/data-schema package. The changes span three main areas, building upon the work done in PRs #402 and #412:
Key Implementations:
Testing:
These changes enhance the flexibility of schema definitions by allowing more complex argument structures in custom operations.
This PR replaces #387 and includes the same changes, but with a branch name that matches the required pattern for the preid build process.
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.