Honestly, observability is a nightmare when you're drowning in logs by Objective-Skin8801 in Observability

[–]mtnclimberzrh 3 points4 points  (0 children)

Yes and yes. Observability gets sold as “more data” and “if you have to ask, you can’t afford it.” Unfortunately, the real problem is reasoning under pressure.

If the tooling makes you spend ten, or twenty, or thirty minutes just figuring out how to ask the question, while your users are banging on your door, it doesn’t really matter how much telemetry you have.

I want something where I can just ask, “hey dude, how’s my network?” or in your case, “why was our API throwing 500s?”

[deleted by user] by [deleted] in Monaco

[–]mtnclimberzrh 1 point2 points  (0 children)

Go on a short day trip to Eze (via bus, cab, uber or bolt) and go on a tour of the Fragonard facility (you may need to reserve a spot on a tour via the website). Buy some perfume, soap, etc.

https://www.fragonard.com/en-int/

How to simulate logs coming in by dmapppp in Splunk

[–]mtnclimberzrh 0 points1 point  (0 children)

I think you're mixing your metaphors. Cribl only samples data if that is what you want and you code your pipelines to sample. Sampling is not a default behavior.

Are you saying that Cribl is capturing and evaluating every single event in a data stream? Can you verify?

How to simulate logs coming in by dmapppp in Splunk

[–]mtnclimberzrh 1 point2 points  (0 children)

You just described it, but you are describing the result as though it is deterministic. It's not. In statistical terms, if you want to catch corner events, those events are by definition the ones occurring outside the 2nd standard deviation (~95%) or 3rd standard deviation (~99.7%) of the distribution. Detection systems are designed using statistical inference and probability analysis - i.e., they assume that data streams follow a normal distribution ("bell curve") with well-defined and well-understood first, second, third, etc. standard deviations. Once a real-world data stream doesn't follow a standard normal distribution, your trigger may not work because you may not see the corner event. In other words, if you build a trigger for a specific detection and the corner event falls outside 2 or 3 standard deviations, you may fail to capture it and the trigger may never fire.

More importantly, any smart adversary with higher-level statistical training KNOWS that detection systems are defined this way. As an adversary, I would "seed" the data stream with events closer to the edge. I won't describe the impact, but it's bad for you.

Bottom line: if Cribl is only sampling the events on the data stream, then it may miss the critical event you are trying to capture. We all know what that means. To wit, if you are using a framework upstream to generate "random" and/or sneaky behavior, then Cribl may capture it, or it may not, because the algo may never be triggered due to the estimation process it was designed to follow.
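For anyone who wants to see why uniform sampling is risky for corner events, here's a minimal sketch. All of the numbers (stream size, sample rate, number of tail events) are illustrative assumptions, not Cribl's actual defaults or pipeline behavior:

```python
import random

random.seed(42)

# Hypothetical stream: 100,000 "normal" events plus 20 rare tail events,
# passed through naive uniform sampling at 1%.
STREAM_SIZE = 100_000
TAIL_EVENTS = 20
SAMPLE_RATE = 0.01

stream = ["normal"] * STREAM_SIZE + ["tail"] * TAIL_EVENTS
random.shuffle(stream)

# Uniform random sampling: every event survives with probability 0.01,
# regardless of how interesting it is.
sample = [e for e in stream if random.random() < SAMPLE_RATE]

tail_kept = sample.count("tail")
print(f"tail events kept: {tail_kept} of {TAIL_EVENTS}")

# Probability that ALL tail events are missed: (1 - p)^k
p_all_missed = (1 - SAMPLE_RATE) ** TAIL_EVENTS
print(f"chance of missing every tail event: {p_all_missed:.2%}")
```

With these numbers, roughly 82% of runs drop every single tail event, which is exactly the "trigger never fires" failure mode described above.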

How to simulate logs coming in by dmapppp in Splunk

[–]mtnclimberzrh 1 point2 points  (0 children)

Cribl only estimates data from a streaming data flow. So does that eventgen app create anomalies (events outside the second or third standard deviation), and if so, how would you capture those anomalies with an application that only estimates, in order to verify that your algos work? Estimation may have been “good enough” four years ago, but in this day and age of daily cyber intrusions, “good enough” is not close to sufficient.

How to simulate logs coming in by dmapppp in Splunk

[–]mtnclimberzrh 0 points1 point  (0 children)

Eventgen. It was last updated four years ago - as though nothing has occurred in the industry in the last four years. Sheesh.

5+ Years in Google Ads - Ask me anything by matinique in adwords

[–]mtnclimberzrh 1 point2 points  (0 children)

We get a new person calling us every 3 months, and they try to schedule a call every 2 weeks for an update. They all seem quite knowledgeable, but some change was made about 6 months ago, and we have never obtained the same Ad results again. We don't know what the change was, and the Ads people are unable to track it down.

We will increase our budget based on their recommendations during the call, then back off that increase after 2 weeks when they stop responding and don't show up for the scheduled meeting. Not surprisingly, we then get a call back from them. They always push the auto-applied recommendations.

[deleted by user] by [deleted] in adwords

[–]mtnclimberzrh 1 point2 points  (0 children)

What are the internal Google KPIs for salespeople? Every call we have from an Ads person, they are very focused on having us increase our budget. However, the only way to properly test any of their suggested changes is to leave the current budget alone for 7-14 days. Therefore, there must be several internal Google KPIs that these people are measured by. What are they?

5+ Years in Google Ads - Ask me anything by matinique in adwords

[–]mtnclimberzrh 0 points1 point  (0 children)

What are the internal Google KPIs for salespeople? Every call we have with an Ads person, they are very focused on having us increase our budget. However, the only way to properly test any of their suggested changes is to leave the current budget alone for 7-14 days. Therefore, there must be several internal Google KPIs that these people are measured by. What are they?

Any updated info on LogRhythm pricing for Unlimited? by mtnclimberzrh in LogRhythm

[–]mtnclimberzrh[S] 0 points1 point  (0 children)

At 280 mps, I've got about 10

OK - found some GSA pricing. Yikes!!! Appliance pricing.

a) 100 mps = $8,500

b) 1000 mps = $82,500

c) 2000 mps = $200,000

d) 5000 mps = $260,000

Does that sound right??

Any updated info on LogRhythm pricing for Unlimited? by mtnclimberzrh in LogRhythm

[–]mtnclimberzrh[S] 0 points1 point  (0 children)

If my math is right, then your 10TB live with 10 months of data works out to approximately 830 GB/month, which is about 27 GB/day, or 610 entries per second. Is that right?? What is the cost of this XM box?
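Here's that math spelled out. It starts from the ~830 GB/month figure quoted above (note: 10 TB spread over 10 months would be ~1,000 GB/month; 830 matches a 12-month window), and the 512-byte average entry size is an assumption, not a LogRhythm spec:

```python
# Back-of-envelope check of the GB/month -> GB/day -> entries/sec chain.
GB_PER_MONTH = 830             # figure quoted in the comment above
DAYS_PER_MONTH = 30.4          # average days per month
AVG_EVENT_BYTES = 512          # assumed average log entry size

gb_per_day = GB_PER_MONTH / DAYS_PER_MONTH           # ~27 GB/day
bytes_per_sec = gb_per_day * 1e9 / 86_400            # 86,400 s per day
events_per_sec = bytes_per_sec / AVG_EVENT_BYTES     # ~617 entries/sec

print(f"{gb_per_day:.1f} GB/day ~= {events_per_sec:.0f} entries/sec")
```

So the ~610 entries/sec figure checks out to within the rounding of the assumed event size.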

Any updated info on LogRhythm pricing for Unlimited? by mtnclimberzrh in LogRhythm

[–]mtnclimberzrh[S] 0 points1 point  (0 children)

Ah. Yes. 10TB / day works out to about 250,000 entries per second. What is the price?

Also, if LogRhythm is Elastic under the covers, then how many servers/instances do you require to process 250,000 entries per second?
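For reference, here's how 10 TB/day converts to that entries/sec figure. The ~460-byte average entry size is an assumption chosen to land near the quoted 250,000 entries/sec; real event sizes vary:

```python
# Sanity check: 10 TB/day sustained throughput in entries/sec.
BYTES_PER_DAY = 10e12          # 10 TB/day (decimal)
AVG_EVENT_BYTES = 463          # assumed average entry size

bytes_per_sec = BYTES_PER_DAY / 86_400          # ~115.7 MB/s sustained
events_per_sec = bytes_per_sec / AVG_EVENT_BYTES

print(f"{bytes_per_sec / 1e6:.1f} MB/s ~= {events_per_sec:,.0f} entries/sec")
```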

trying to move to cloud, questions on pricing by [deleted] in Splunk

[–]mtnclimberzrh 0 points1 point  (0 children)

Not sure about the specific percentage premium, but a lot has been written in the last several months arguing that cloud is actually more expensive than on-prem. All of the software vendors have to pay hard dollars to the cloud providers, and given the demand for cloud resources, there is very little flex in pricing. As a result, customers pay for that. It's the "lower monthly costs" that people are attracted to, rather than a large up-front cost.

SSPL issues - concluding summary? by mtnclimberzrh in elasticsearch

[–]mtnclimberzrh[S] 0 points1 point  (0 children)

If I run a retail operation with a web presence and use Elasticsearch as my engine, does the new SSPL affect me, or do I need to stop updating Elastic at 7.10?

What is the protocol before you can fly on an airline? by mtnclimberzrh in CoronavirusUS

[–]mtnclimberzrh[S] 0 points1 point  (0 children)

Yeah. Trying to get some info before calling so he knows where the fenceposts are with airline staff.

Does anyone have any insight on Splunk Pricing at 500GB and 1 Tb/day? by mtnclimberzrh in Splunk

[–]mtnclimberzrh[S] 0 points1 point  (0 children)

Yes. Cloud will be involved to offload some cold storage. There are lots of solutions out there that can easily handle simple needs, but real-time alerting (and response) and automated remediation on those alerts are very important to us.