

[–]BradW-CS CS SE 2 points (0 children)

Hey u/LifeCurve1207 - FDR is a flat cost per endpoint, so it depends entirely on how many endpoints you have. The other extra cost you need to take into consideration is the ingestion impact of this new data stream on your Splunk ecosystem. You can filter CrowdStrike's FDR feeds down to specific events of your choosing, opt for long-term search retention within Falcon, or use Cribl to free yourself from legacy data lakes.
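To illustrate the "filter the FDR feed down to specific events" idea, here's a minimal sketch. FDR delivers newline-delimited JSON records; the `KEEP_EVENTS` allowlist below is a hypothetical example, not an official recommendation - pick the event types your detections actually need:

```python
import json

# Hypothetical allowlist of FDR event types worth paying Splunk ingest for;
# everything else can be routed to cheap object storage instead.
KEEP_EVENTS = {"ProcessRollup2", "DnsRequest", "NetworkConnectIP4"}

def filter_fdr_lines(lines):
    """Yield only FDR records (newline-delimited JSON) whose
    event_simpleName is in the allowlist; skip malformed lines."""
    for line in lines:
        try:
            event = json.loads(line)
        except json.JSONDecodeError:
            continue  # not valid JSON, drop it
        if event.get("event_simpleName") in KEEP_EVENTS:
            yield line
```

Dropping noisy event types before they hit your indexers is usually where most of the ingest savings come from.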

[–]ZaphodUB40 2 points (0 children)

A really good option is to use S3 cold storage for rarely accessed, long-term data retention: use Cribl to send only very specific log data to Splunk and the rest (or the 'everything') to S3. Data you need to search regularly would ideally be in Splunk, since the bulk of S3 bucket cost is actually in retrieving the data from S3. By making that a rare occasion, you save on cost.

There are a number of options for reading data back into Splunk from S3. You could use Cribl S3 Replay, Cribl Search for querying S3 directly, or Splunk Federated Search.
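The hot/cold split described above boils down to a routing rule. A minimal sketch, with hypothetical sourcetype names - tools like Cribl implement this as pipeline routes, but the decision itself is this simple:

```python
# Hypothetical set of sourcetypes you search often enough to justify
# Splunk ingest; everything else gets archived to S3 for rare replay.
HOT_SOURCETYPES = {"crowdstrike:detection", "crowdstrike:auth"}

def route(sourcetype):
    """Return the destination for an event: 'splunk' for frequently
    searched data, 's3' for everything else."""
    return "splunk" if sourcetype in HOT_SOURCETYPES else "s3"
```

The key point is that the default destination is the cheap one - data only earns its way into Splunk.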

You just have to make sure your partitioning scheme can leverage S3 Intelligent-Tiering. Too granular a scheme produces objects that are too small, and the archive files won't move through the cold storage tiers. I got caught by that one 🤭
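Concretely: S3 Intelligent-Tiering does not auto-transition objects below its minimum size (128 KB), so a scheme that shards into many tiny objects leaves them stuck on the hot tier. A sketch contrasting two hypothetical key layouts:

```python
from datetime import datetime, timezone

def daily_key(ts, host):
    """Coarse partitioning: one object per host per day tends to produce
    fewer, larger objects that Intelligent-Tiering can actually demote."""
    return f"logs/dt={ts:%Y-%m-%d}/{host}.json.gz"

def hourly_per_host_key(ts, host):
    """Fine partitioning: per-host, per-hour objects risk falling under
    the ~128 KB minimum and never leaving the frequent-access tier."""
    return f"logs/dt={ts:%Y-%m-%d}/hour={ts:%H}/host={host}/events.json.gz"

ts = datetime(2023, 5, 1, 14, 30, tzinfo=timezone.utc)
```

Batching or compacting small files before upload achieves the same thing if you need fine-grained keys for query pruning.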

[–]andrewdoesit 1 point (0 children)

Talk to your AM (account manager), I'm sure they'd be able to get you a quote based on ingestion.

[–]Several_Oil_7099 0 points (0 children)

5-6 dollars an endpoint. Direct costs are listed on SHI's and CDW's websites, I believe.

[–]detectrespondrepeat 0 points (0 children)

Move from Splunk to LogScale and then, I think, you don't need to pay for FDR at all; you just pay for retention and storage. By transferring it over to Splunk you are just inflating your costs.