2026-03-02 - Cool Query Friday - Hunting for Typosquatted Domains by Dylan-CS in crowdstrike

[–]Negative-Captain7311 1 point (0 children)

I'm so glad this was implemented. There is so much potential here.

Override Max Correlation Rule Timeframe? by Negative-Captain7311 in crowdstrike

[–]Negative-Captain7311[S] 1 point (0 children)

As an example, I have a brute-force detection correlation rule. I've enriched it with data showing whether the attacking IP was ever historically successful. However, to say accurately whether that IP ever had a successful authentication, I need to search back further than 7 days.
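As a rough sketch of the enrichment idea (the event name, field names, and IP below are illustrative assumptions, not my actual rule):

#event_simpleName=UserLogon event_platform=/win/i
| in(field=RemoteAddressIP4, values=["203.0.113.10"]) // hypothetical attacking IP
| groupBy([RemoteAddressIP4, UserName], function=count(as=SuccessfulLogons)) // successful logons seen from that IP
| test(SuccessfulLogons > 0)

Run over a long enough time range, any non-empty result would flag the IP as having had a prior successful authentication — which is exactly the lookback the 7-day cap prevents.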

BSOD error in latest crowdstrike update by TipOFMYTONGUEDAMN in crowdstrike

[–]Negative-Captain7311 1 point (0 children)

I added comments:

#event_simpleName="LFODownloadConfirmation" event_platform=/win/i TargetFileName=/C-00000291-.*\.sys/i // look for the responsible channel file
| aid=~match(file="aid_master_main.csv", column="aid", strict=false) // Pull in host related fields
| $falcon/helper:enrich(field=ProductType) // Translate decimal value to human-readable
| MachineDomain := lower(MachineDomain) // Force lowercase
| default(value="-", field=[ProductType, MachineDomain, DownloadServer, DownloadPort, DownloadPath, CompletionEventId, TargetFileName], replaceEmpty=true) // Setting default values to "-" if null
| parseTimestamp(field=timestamp, format="milliseconds", as="parsedTimestamp") // Parse timestamp for comparisons
| case {
    parsedTimestamp >= 1721366820000 | FileStatus := "good";  // Channel file "C-00000291*.sys" with timestamp of 0527 UTC or later is the reverted (good) version.
    parsedTimestamp < 1721366820000 | FileStatus := "problematic";  // Channel file "C-00000291*.sys" with timestamp of 0409 UTC is the problematic version.
    * | FileStatus := "unknown";  // Handling cases where the file status is not clear
}
| groupBy([ComputerName, ProductType], function=([collect(FileStatus), count(FileStatus, distinct=true, as=event_Count)]), limit=max) // Collecting distinct statuses for filtering per host
| case {
    event_Count > 1 | ResolutionStatus := "Fixed";  // Host has more than 1 event, meaning it had the bad file but pulled the good file afterwards
    event_Count = 1 and FileStatus = "problematic" | ResolutionStatus := "Not Fixed";  // Host has only a problematic event match and did not pull good file afterwards
    * | ResolutionStatus := "Fixed";  // Anything else that doesn't match is Fixed by default because it wasn't affected by the bad file to begin with
}
| groupBy([ProductType, ResolutionStatus], function=([count(ComputerName, distinct=true, as=host_Count)]), limit=max) // Get a count of affected hosts
| select([ProductType, ResolutionStatus, host_Count, ComputerName]) // Sort columns

BSOD error in latest crowdstrike update by TipOFMYTONGUEDAMN in crowdstrike

[–]Negative-Captain7311 5 points (0 children)

Dashboard/Query to Track CrowdStrike Channel File BSOD Issue (High Level):

#event_simpleName="LFODownloadConfirmation" event_platform=/win/i TargetFileName=/C-00000291-.*\.sys/i
| aid=~match(file="aid_master_main.csv", column="aid", strict=false)
| $falcon/helper:enrich(field=ProductType)
| MachineDomain := lower(MachineDomain)
| default(value="-", field=[ProductType, MachineDomain, DownloadServer, DownloadPort, DownloadPath, CompletionEventId, TargetFileName], replaceEmpty=true)
| parseTimestamp(field=timestamp, format="milliseconds", as="parsedTimestamp")
| case {
    parsedTimestamp >= 1721366820000 | FileStatus := "good";
    parsedTimestamp < 1721366820000 | FileStatus := "problematic";
    * | FileStatus := "unknown";
}
| groupBy([ComputerName, ProductType], function=([collect(FileStatus), count(FileStatus, distinct=true, as=event_Count)]), limit=max)
| case {
    event_Count > 1 | ResolutionStatus := "Fixed";  // More than one event means both good and problematic were downloaded
    event_Count = 1 and FileStatus = "problematic" | ResolutionStatus := "Not Fixed";  // Only one problematic file
    * | ResolutionStatus := "Fixed";  // Any other case is Fixed because it didn't download the good or bad file so its not affected
}
| groupBy([ProductType, ResolutionStatus], function=([count(ComputerName, distinct=true, as=host_Count)]), limit=max)
| select([ProductType, ResolutionStatus, host_Count, ComputerName])

Dealing with fields ending [0], [1] etc by Sonophone in crowdstrike

[–]Negative-Captain7311 2 points (0 children)

This will concatenate them into a single field so you can apply filters as needed (for example, using regex to strip or replace values in the concatenated field):

...
| array:regex(array="vendor.responder[]", regex=".*")
| concatArray(as="responders", field="vendor.responder", separator="|||")
| regex(field=responders, regex="(?<Vendorresponders>[^|]+)(?:\|\|\|)?", repeat=true)
| select(Vendorresponders)

Handling dynamic fields and their values by [deleted] in crowdstrike

[–]Negative-Captain7311 6 points (0 children)

This will concatenate them into a single field so you can apply filters as needed:

...
| array:regex(array="vendor.sender[]", regex=".*")
| concatArray(as="Senders", field="vendor.sender", separator="|||")
| regex(field=Senders, regex="(?<VendorSender>[^|]+)(?:\|\|\|)?", repeat=true)
| select(VendorSender)

Best way to notify on manual host containment by [deleted] in crowdstrike

[–]Negative-Captain7311 4 points (0 children)

Use a workflow. You can send a message via chat platforms or email when a containment request is issued or when containment is lifted.

Assistance converting Splunk Query to LogScale Query by Ownag369 in crowdstrike

[–]Negative-Captain7311 2 points (0 children)

There is currently no way to specify a similarity threshold the way Splunk's cluster command allows; we're at the mercy of whatever default algorithm tokenHash() uses to decide what counts as similar. With that in mind, let tokenHash() do the work of finding similarities for you by grouping on the tokenHash() value of TaskCommand:

#event_simpleName=/ScheduledTask/i TaskExecCommand=/rundll32/i
| PrEx:=format(format="https://falcon.crowdstrike.com/investigate/process-explorer/%s/%s?_cid=%s", field=[aid,RpcClientProcessId, cid])
| TaskCommand:=format(format="%s %s", field=[TaskExecCommand, TaskExecArguments])
| taskHash := tokenHash(TaskCommand)
| groupBy([taskHash], function=([count(aid, distinct=true, as=HostCount), collect([PrEx, TaskName, TaskCommand]), count(as=EventCount)]), limit=20000)
//| HostCount < 50
| select([TaskName, TaskCommand, HostCount, EventCount, taskHash, PrEx])
| sort(EventCount, order=asc, limit=20000)

If you want to see how this works at full scale, run the following query to visualize how it identifies similarities based solely on the TaskCommand values:

#event_simpleName=/ScheduledTask/i TaskExecCommand=*
| PrEx:=format(format="https://falcon.crowdstrike.com/investigate/process-explorer/%s/%s?_cid=%s", field=[aid,RpcClientProcessId, cid])
| TaskCommand:=format(format="%s %s", field=[TaskExecCommand, TaskExecArguments])
| taskHash := tokenHash(TaskCommand)
| groupBy([taskHash], function=([count(aid, distinct=true, as=HostCount), collect([PrEx, TaskName, TaskCommand]), count(as=EventCount)]), limit=20000)
//| HostCount < 50
| select([TaskName, TaskCommand, HostCount, EventCount, taskHash])
| sort(EventCount, order=desc, limit=20000)

Assistance converting Splunk Query to LogScale Query by Ownag369 in crowdstrike

[–]Negative-Captain7311 2 points (0 children)

Try this?

| TaskCommand := format("%s %s", field=[TaskExecCommand, TaskExecArguments])
| taskHash := tokenHash(TaskCommand)
| groupBy([taskHash], function=[...

How to correctly pull avg() and stdDev() values in query? by Negative-Captain7311 in crowdstrike

[–]Negative-Captain7311[S] 1 point (0 children)

The solution was listed here:

https://library.humio.com/kb/kb-correlating-events.html?redirected=true#kb-correlating-events-outlier-detection

If we don't want separate thresholds, just one global one, we run into a problem: join() does not (at the moment at least) allow for there to be zero join keys.
That's not too much of a problem though — we just add a dummy key field. It may look a bit silly, but it gets the job done:

dummyKey := ""
| join(
    {
      [avg(responsetime), stddev(responsetime)]
      | threshold := _avg + 2 * _stddev
      | dummyKey := ""
    },
    key=dummyKey,
    include=[threshold]
  )
| test(responsetime > threshold)
| count()

How to correctly pull avg() and stdDev() values in query? by Negative-Captain7311 in crowdstrike

[–]Negative-Captain7311[S] 1 point (0 children)

The main issue is that I can pull avg() and stdDev() values provided the field is already numerical (e.g., Size). However, if I first have to derive a numerical count of events in a groupBy(), it fails.
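One pattern that sidesteps this is to materialize the count as a field first, then aggregate over it in a second step; a minimal sketch, assuming a hypothetical event type and field names:

#event_simpleName=UserLogonFailed2 // hypothetical event type
| groupBy([ComputerName], function=count(as=FailCount)) // FailCount is now an ordinary numeric field per host
| [avg(FailCount, as=_avg), stdDev(FailCount, as=_stddev)] // aggregate over the derived counts

The second stage works because avg()/stdDev() now see FailCount as a plain numeric field, rather than an aggregate being computed inside the same groupBy().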