One of the supported flows for storing API Gateway access logs is sending them to an S3 bucket via a Firehose delivery stream (https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_apigateway.FirehoseLogDestination.html). The name of the delivery stream always needs to start with amazon-apigateway- (https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-logging-to-kinesis.html), which means the object keys in S3 will also contain this string.
Currently, the forwarder doesn't support API Gateway as an S3EventSource, which means the host and service for these logs will be set to s3.
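As a rough illustration of the change, the sketch below shows the kind of key-prefix check involved. The function name detect_s3_source and the fallback behavior are simplified assumptions for this description, not the forwarder's actual source-detection code.

```python
# Illustrative sketch only: the function name and structure are simplified
# assumptions, not the forwarder's real source-detection code.

def detect_s3_source(key: str) -> str:
    """Infer a log source from an S3 object key."""
    key = key.lower()
    # Firehose delivery streams for API Gateway access logs must be named
    # amazon-apigateway-*, so that prefix also shows up in the S3 object keys.
    if "amazon-apigateway-" in key:
        return "apigateway"
    # Fall back to the generic source when nothing more specific matches.
    return "s3"


print(detect_s3_source("amazon-apigateway-access-logs/2024/01/01/part-0000.gz"))  # apigateway
print(detect_s3_source("other-prefix/app.log"))                                   # s3
```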
What does this PR do?
Adds API GW as a valid event source for logs stored in S3.
Motivation
Have API GW logs properly parsed when they are stored in S3 (through Kinesis Firehose).
Testing Guidelines
Built my branch locally and pushed the changes to an existing Forwarder Lambda (using the steps described in the readme). Sending the same S3 event for an API GW log now correctly sets the source to apigateway.
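For reference, a stripped-down version of the kind of S3 event notification used for this test could look like the following. The bucket and key names are made up, and only the fields relevant to source detection are shown.

```python
# Hypothetical test payload: a minimal S3 event notification whose object key
# carries the amazon-apigateway- prefix written by the Firehose delivery stream.
sample_s3_event = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "s3": {
                "bucket": {"name": "my-access-logs-bucket"},
                "object": {
                    "key": "amazon-apigateway-access-logs/2024/01/01/part-0000.gz"
                },
            },
        }
    ]
}
```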
Additional Notes
Types of changes
Check all that apply