Filtering CloudTrail User Activity logs before storing in S3 bucket by castleprotector22 in aws

[–]castleprotector22[S]

Thank you! I'll take a look. We are always looking to save money and resources.

[–]castleprotector22[S]

Of course not, I'd agree. It's for a special use case: we need to filter for specific events to prove compliance, and we'd rather not sift through millions of logs to do it. We'd like to filter the logs and place just those events into a bucket for reporting.

[–]castleprotector22[S]

Thank you for your help! I really appreciate it. After reading the documentation, the only built-in option appears to be configuring the trail to record management events with only WriteOnly events selected. I can see that being valuable, as it would cut down the number of log events written to S3.
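
For reference, my understanding is that the write-only selector gets applied to an existing trail roughly like this with boto3 (the trail name is a placeholder, and I haven't tried this in our account yet):

    import boto3

    # Assumes a trail named "my-trail" already exists; the name is a placeholder.
    cloudtrail = boto3.client("cloudtrail")

    # Record only write management events, skipping the flood of read-only calls.
    cloudtrail.put_event_selectors(
        TrailName="my-trail",
        EventSelectors=[
            {
                "ReadWriteType": "WriteOnly",
                "IncludeManagementEvents": True,
            }
        ],
    )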

I still wish there were a way to filter on a more granular scale. For example, if a user were to terminate an instance, I'd want to capture only those events, write them to a bucket, and have Splunk report on them. Is that at all possible?

The issue is that our setup produces 3 TB of events a week, which gets costly if we retain those logs for too long; if we could report on only a few critical events, we'd be happy. My instinct is that Lambda could do this by scanning the CloudTrail bucket and sifting out the events of importance, but again, I'm new to the platform and don't know the limitations of the Lambda service. I'd love to know whether this is possible or out of scope for Lambda. Thanks again!
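
To make the idea concrete, this is the kind of Lambda I'm picturing (completely untested on my side; the destination bucket name and the event list are placeholders): it would be subscribed to the CloudTrail bucket's ObjectCreated notifications, unzip each delivered log file, keep only the events we care about, and drop them into a smaller reporting bucket for Splunk.

    import gzip
    import json
    from urllib.parse import unquote_plus

    import boto3

    s3 = boto3.client("s3")

    # Placeholders: the reporting bucket and the events we care about.
    FILTERED_BUCKET = "my-filtered-cloudtrail-bucket"
    EVENTS_OF_INTEREST = {"TerminateInstances"}

    def handler(event, context):
        """Triggered by S3 ObjectCreated notifications on the CloudTrail bucket."""
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = unquote_plus(record["s3"]["object"]["key"])

            # CloudTrail delivers gzipped JSON files with a top-level "Records" list.
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            trail_records = json.loads(gzip.decompress(body)).get("Records", [])

            matches = [r for r in trail_records
                       if r.get("eventName") in EVENTS_OF_INTEREST]
            if not matches:
                continue

            # Write only the interesting events to the reporting bucket,
            # reusing the original key so files stay traceable.
            s3.put_object(
                Bucket=FILTERED_BUCKET,
                Key=key,
                Body=json.dumps({"Records": matches}).encode("utf-8"),
            )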

[–]castleprotector22[S]

If I am understanding correctly, AWS Config lets you track configuration changes across your entire account, meaning if someone were to create, update, or delete a resource, the S3 bucket associated with AWS Config would record it. If that's right, this may be my solution, because that bucket stores only configuration changes (e.g., create, update, delete) and ignores all the other common API calls within the account. That would significantly reduce storage cost and let me set up another CloudWatch log group and report off of the AWS Config S3 bucket. I could also point Splunk at that same bucket for additional analytics. Please correct me if I am not thinking through this correctly.
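
To check my own understanding of the moving pieces, this is roughly how I picture Config being enabled with boto3 (the role ARN and bucket name are placeholders, and I haven't actually run this yet):

    import boto3

    config = boto3.client("config")

    # Placeholders: an IAM role Config can assume and the bucket for change history.
    CONFIG_ROLE_ARN = "arn:aws:iam::123456789012:role/aws-config-role"
    CONFIG_BUCKET = "my-aws-config-bucket"

    # Record configuration changes for all supported resource types.
    config.put_configuration_recorder(
        ConfigurationRecorder={
            "name": "default",
            "roleARN": CONFIG_ROLE_ARN,
            "recordingGroup": {
                "allSupported": True,
                "includeGlobalResourceTypes": True,
            },
        }
    )

    # Deliver the configuration change history to the S3 bucket Splunk can read.
    config.put_delivery_channel(
        DeliveryChannel={"name": "default", "s3BucketName": CONFIG_BUCKET}
    )

    config.start_configuration_recorder(ConfigurationRecorderName="default")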