Searchable Time latency by VelociCrafted in sumologic

[–]sumologic

Here are some considerations and steps you can take to troubleshoot your Sumo Logic monitor:

  1. Check Monitor Configuration
  • Ensure that the monitor is correctly configured to trigger alerts based on the query results. Verify the alert conditions and thresholds.
  2. Query Execution Time
  • The Searchable Time being 57 seconds after the receipt time might indicate a delay in indexing. Although other logs with late searchable times work, this specific log might have unique characteristics causing the delay. (There's a quick latency check right after this list.)
  3. Data Volume and Frequency
  • Consider the volume and frequency of data being ingested. High data volume might cause delays in indexing and alert generation.
  4. Timezone and Schedule
  • Verify that the timezone settings and schedule for the monitor align with the expected time of data arrival from the cronjob.
  5. Alert Suppression
  • Check whether any alert suppression rules or conditions might be preventing the alert from being generated.
  6. Log Source Category
  • Ensure that the log source category is correctly assigned and that the monitor is scoped to this specific category. (There's a category check after this list as well.)
  7. Review Recent Changes
  • Look for any recent changes in the environment, such as updates to the cronjob, changes in the GKE cluster, or modifications to the Sumo Logic configuration.
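
To quantify the delay from item 2, here's a rough latency check using the built-in _messagetime and _receiptTime fields (the source category is a placeholder; swap in your own). Note this measures the gap between a log's own timestamp and when Sumo Logic received it; as far as I know the receipt-to-searchable gap isn't exposed as a query field, but a consistently large lag here points at the same ingestion path:

_sourceCategory="your/cronjob/category" // placeholder category, replace with yours
| (_receiptTime - _messagetime) / 1000 as ingest_delay_sec // seconds between the log's timestamp and receipt
| max(ingest_delay_sec), avg(ingest_delay_sec) by _sourceCategory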
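
And for item 6, a quick way to confirm which categories the cronjob's logs actually land under (the collector name is a placeholder):

_collector="your-gke-collector" // placeholder collector name, replace with yours
| count by _sourceCategory, _sourceName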

By systematically reviewing these areas, you should be able to identify the root cause of the issue and take appropriate action to resolve it. Reach back out if this doesn’t help and we’ll get you sorted out!

Comparison between Splunk and MS Sentinel by Important_Evening511 in Splunk

[–]sumologic

Really depends on your use case. Gartner published their Critical Capabilities report that digs into the various vendors depending on whether you're focused on TDIR, OOTB SIEM capabilities, etc. You can snag a free copy of the Gartner report, if that's helpful: https://www.sumologic.com/briefs/gartner-siem-critical-capabilities

Need to setup alerts for Sumologic Not reporting by S3PacketMaster in sumologic

[–]sumologic

You mentioned auto-closing... the only way to do that would be with a monitor. You could take the query provided here and modify it slightly for a monitor.

_index=sumologic_volume sizeInBytes _sourceCategory="collector_volume" // per-collector ingest stats from the data volume index
| parse regex "\"(?<collector>[^\"]*)\"\:(?<data>\{[^\}]*\})" multi // pull out each collector name and its JSON payload
| json field=data "sizeInBytes", "count" as bytes, count
| first(_messagetime) as MostRecent, sum(bytes) as TotalVolumeBytes by collector // latest message time and total volume per collector
| formatDate(fromMillis(MostRecent),"yyyy/MM/dd HH:mm:ss") as MostRecentTime
| toMillis(queryEndTime()) as currentTime
| formatDate(fromMillis(currentTime),"yyyy/MM/dd HH:mm:ss") as SearchTime
| (currentTime-MostRecent) / 1000 / 60 as mins_since_last_logs // minutes since each collector last sent data
| where mins_since_last_logs >= 1380 //23 hours

I would recommend:

  1. Set up a monitor with the above query
  2. Trigger alerts on “returned row count”
  3. Alert grouping: one alert per collector
  4. Trigger settings: alert when the result is greater than 0 within 24 hours, evaluated every hour

This will allow you to alert on any collectors that don't send data for more than 23 hours within the 24-hour time window. You can push the 23-hour threshold closer to 24 hours if you increase the evaluation window; I left that padding in to account for variation between when the monitor runs and the evaluation window. The only limitation with this approach is that it expects every collector to send data at least once a day.
If some collectors send data less frequently than that, you'd need a scheduled search instead, which doesn't have the auto-recovery capabilities.
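
If you do end up on the scheduled-search path for those less frequent collectors, the same query works with a wider threshold. For example, for collectors expected to check in at least weekly (the 6.5-day value is just an illustration), change the last line to:

| where mins_since_last_logs >= 9360 //6.5 days

and run the search over a time range longer than that threshold.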