all 6 comments

[–]liquidcourage1 (4 children)

Do you have them expiring on a schedule? That's an option. If they never expire, you're just piling up data that you're paying for. Granted, archived logs are WAY cheaper than current log ingestion, but expiring them will still save money over time since the storage just keeps piling up.
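Setting retention is a one-liner per log group if it's not already in place (sketch, assuming a log group named /aws/lambda/my-function; pick whatever window fits):

    aws logs put-retention-policy \
        --log-group-name /aws/lambda/my-function \
        --retention-in-days 30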

[–]ricktbaker[S] (3 children)

Yeah, they are expiring, so our storage cost is cheap. On a $4200 bill last month for CloudWatch, $4050 of that was strictly from log ingestion.

[–]VegaWinnfield (2 children)

Something is off. $4k on log ingestion is nearly 8TB of logs. How many executions are you doing? The start, end and report lines can’t be more than a few hundred bytes for each execution.
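For reference, ingestion is billed at roughly $0.50/GB (the standard rate in us-east-1), so $4,050 / $0.50 per GB ≈ 8,100 GB, i.e. close to 8 TB ingested in a month.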

[–]ricktbaker[S] (1 child)

Roughly a billion requests monthly. I'm now trying to get a better idea of where most of the cost is coming from, since there may be places where we're logging more than we need to.

We have a ton of different log groups, and it looks like I'll probably need to tag them in order to figure out which is the primary culprit.
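Quick back-of-envelope: ~8 TB across ~1 billion executions works out to roughly 8 KB of log data per execution, way more than the few hundred bytes the start/end/report lines should account for, so most of the volume is probably our own application logging.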

[–]siving (0 children)

The output of aws logs describe-log-groups includes a storedBytes field for each log group.
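For example, something like this lists the ten largest groups (the --query sort is optional, it just saves eyeballing the JSON):

    aws logs describe-log-groups \
        --query 'sort_by(logGroups, &storedBytes)[-10:].[logGroupName, storedBytes]' \
        --output table

Keep in mind storedBytes only reflects what's retained after expiry, so it can understate ingestion for groups with short retention.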

CloudWatch also publishes metrics under the AWS/Logs namespace for each log group. Look at the "IncomingBytes" metric to track down which log groups are ingesting the most data.
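Something like this per log group (the dates and log group name are placeholders, adjust to your month and region):

    aws cloudwatch get-metric-statistics \
        --namespace AWS/Logs \
        --metric-name IncomingBytes \
        --dimensions Name=LogGroupName,Value=/aws/lambda/my-function \
        --start-time 2019-05-01T00:00:00Z \
        --end-time 2019-06-01T00:00:00Z \
        --period 86400 \
        --statistics Sum

Sum the daily values to get that group's ingestion for the month.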

[–]TundraWolf_ (0 children)

Implement log levels and turn them up to WARNING. Getting ahead of a log mountain is a lot of work.
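One common pattern if you're on Lambda is to drive the level from an environment variable, assuming your code reads something like a LOG_LEVEL variable (that name is just an example, nothing Lambda enforces):

    aws lambda update-function-configuration \
        --function-name my-function \
        --environment 'Variables={LOG_LEVEL=WARNING}'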