I have inherited an application running on AWS after the original developer left. I don't understand how part of it is set up.
The application is a Lambda function (written in Python, though I don't think that's significant here) that accepts events from an SQS queue and writes to a Kinesis Data Firehose delivery stream, which delivers the data to an Amazon OpenSearch Service domain (the AWS fork of Elasticsearch). That part I understand.
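For context, the Firehose-writing part of the handler looks roughly like this (the stream name and structure here are illustrative, not the actual code):

```python
import boto3

firehose = boto3.client("firehose")

# Illustrative name; the real one comes from the app's configuration.
STREAM_NAME = "events-to-opensearch"

def handler(event, context):
    # Each SQS message body is forwarded to Firehose as one record.
    records = [
        {"Data": record["body"].encode("utf-8")}
        for record in event["Records"]
    ]
    # PutRecordBatch accepts up to 500 records per call; an SQS trigger
    # delivers at most 10 messages per batch by default, so one call suffices.
    firehose.put_record_batch(DeliveryStreamName=STREAM_NAME, Records=records)
```

There is nothing in here (or anywhere else in the Lambda) that touches S3.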
The application also somehow writes all handled events to an S3 bucket. If the delivery to OpenSearch succeeds, the object key is Data/%Y/%m/%d/%H/guid, where %Y, %m, %d, and %H are the year, month, day, and hour of the request and guid is a globally unique ID string. If the delivery to OpenSearch fails, the event is written to the same bucket under the key Data/elasticsearch-fail/%Y/%m/%d/%H/guid. None of this is in the Python code for the Lambda.
I can't figure out where this write to S3 is configured. It isn't part of the Lambda, and I don't see anything in the setup of either the OpenSearch domain or the Firehose delivery stream that would do it. But I'm certain I'm missing something.
This is done by the Firehose delivery stream itself: when the destination is OpenSearch, Firehose also backs up records to S3, either all records or only the failed ones, depending on the configured backup mode. To see the configuration for saving the stream data to S3, open the Kinesis Data Firehose service in the console, select the delivery stream, and look at its configuration for the backup settings, specifically "S3 backup bucket prefix" and "S3 backup bucket error output prefix".
These fields define the prefixes of the object key names for the data that is written out. The rest of each key name is generated by Firehose and can't be set by the user.
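You can also confirm the same settings programmatically. A minimal boto3 sketch (the stream name is a placeholder, and the destination appears under one of two keys depending on whether the stream was created against the older Elasticsearch API or the newer OpenSearch one):

```python
import boto3

firehose = boto3.client("firehose")

# Replace with the actual delivery stream name.
resp = firehose.describe_delivery_stream(DeliveryStreamName="events-to-opensearch")
dest = resp["DeliveryStreamDescription"]["Destinations"][0]

# The OpenSearch destination is reported under one of these keys.
es = dest.get("AmazonopensearchserviceDestinationDescription") or dest.get(
    "ElasticsearchDestinationDescription"
)

s3 = es["S3DestinationDescription"]
print("Backup mode:        ", es["S3BackupMode"])  # AllDocuments or FailedDocumentsOnly
print("Bucket:             ", s3["BucketARN"])
print("Prefix:             ", s3.get("Prefix"))
print("Error output prefix:", s3.get("ErrorOutputPrefix"))
```

Since you see every handled event in the bucket, not just failures, your stream is presumably configured with the AllDocuments backup mode rather than FailedDocumentsOnly.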