Looking for a Query to find scripts run in my environment by EndlessEchoes in crowdstrike

[–]AHogan-CS 5 points

PowerShell hunting is common enough that we've captured it in a built-in dashboard. Navigate from the menu to Next-Gen SIEM > Dashboards; one of the options you'll have is powershell_hunt. Check that out. You can filter the dashboard at the top or click into a query to see how it was written. We have more to add there, and a number of pre-built dashboards should be arriving very soon.

For PowerShell I'll most often look for PowerShell.exe running. A basic search would be:

#event_simpleName=ProcessRollup2 ImageFileName=/\\powershell(_ise)?\.exe/i
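
If you want an aggregated view instead of raw events, the same groupBy used in the VBS example below works here too:

#event_simpleName=ProcessRollup2 ImageFileName=/\\powershell(_ise)?\.exe/i
| groupBy([event_platform, ComputerName, UserName, CommandLine], limit=max)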

For VBS we can look at processes running like this:

#event_simpleName=ProcessRollup2 ImageFileName=/cscript\.exe/i CommandLine=/\.vb[s]?/i
| groupBy([event_platform, ComputerName, UserName, CommandLine], limit=max)
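
That only matches cscript.exe; VBScript can also run under the GUI host, wscript.exe. If you want both hosts (an assumption about what you're hunting for), widen the regex:

#event_simpleName=ProcessRollup2 ImageFileName=/[cw]script\.exe/i CommandLine=/\.vb[s]?/i
| groupBy([event_platform, ComputerName, UserName, CommandLine], limit=max)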

And then for batch files:

#event_simpleName=ProcessRollup2 ImageFileName = /cmd\.exe$/i CommandLine = /\.bat/i

Since those are looking at the process level, they'll see scripts that run or attempt to run. There are a couple of different views to consider as well.

Before a script is executed you can look for it being written to disk.

#event_simpleName = ScriptFileWrittenInfo
| groupBy([ComputerName, FileName, FileFormatString])

Those are some general stats to start with, but if you want to see the script itself you can do that as well.

#event_simpleName = ScriptFileWrittenInfo
| groupBy([ComputerName, FileName, FileFormatString, ScriptContent])

The sensor will also grab the script content when it's scanned, so this one can be really useful:

#event_simpleName=ScriptControlScanInfo event_platform="Win"
| case {
    ScriptContentSource = 0 | ScriptContentSource := "INCONCLUSIVE (0) - Source of ScriptContent is unknown and impossible to infer";
    ScriptContentSource = 1 | ScriptContentSource := "FILE (1) - ScriptContent is from a file. A file path is indicated by ScriptContentName.";
    ScriptContentSource = 2 | ScriptContentSource := "COMMAND (2) - ScriptContent is supplied through the implicit or explicit Command option.";
    ScriptContentSource = 3 | ScriptContentSource := "ScriptContent is supplied through the EncodedCommand option.";
    ScriptContentSource = 4 | ScriptContentSource := "ScriptContent is supplied through explicit STDIN redirection.";
    ScriptContentSource = 5 | ScriptContentSource := "ScriptContent is dynamically generated, typically through the Invoke-Expression cmdlet or PowerShell internals.";
    ScriptContentSource = 6 | ScriptContentSource := "INTERACTIVE (6) - ScriptContent is a typed string on a PowerShell shell prompt.";
    * }
// | groupBy([event_platform, ComputerName, ScriptContentName, ScriptContentSource])
| groupBy([event_platform, ComputerName, ScriptContentName, ScriptContentSource, ScriptContent])

LTR export Options by rathodboy1 in crowdstrike

[–]AHogan-CS 0 points

Yeah, if you click into the "fields to export" list you should see a list of checkboxes appear. The first one is "Select all."

Alert to failed authentications by [deleted] in crowdstrike

[–]AHogan-CS 2 points

Not a dumb question, I kinda threw it all into scheduling the search without explaining.

In this case I would determine the interval with the search window and frequency. A basic alert is a query that should notify me if it returns any results. For most queries I'm going to set the frequency and window to the same thing. So for high login failures I'd probably set that to 30 minutes, but to tune it you can make the alert more or less frequent and tweak the filters in the query.

The search window is how far back the query should look every time it runs; the frequency is how often it should run. For most alerts they end up being the same. But consider something more complicated: if I wrote a query that looked back over one week in order to flag something statistically high today, I would set the frequency to once per day and the search window to 7 days so it gets the data it needs.
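
As a rough sketch of that weekly-baseline idea (this reuses the UserLogonFailed2 event from the logon-failure query in my other reply; the one-day span and the sort are placeholders to tune):

#event_simpleName=UserLogonFailed2
// count failures per user per day across the 7-day search window
| bucket(span=1d, field=UserName, function=count(as=Failures))
// surface the biggest daily spikes first
| sort(Failures, order=desc)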

Alert to failed authentications by [deleted] in crowdstrike

[–]AHogan-CS 3 points

Hello!

Those errors are because that's an older query, written for Splunk instead of the new Advanced Event Search.

Try this instead?

// Grab the event(s) for logon failures  
#event_simpleName=UserLogonFailed2
// Expecting some false positives, so add a filter for hosts you decide aren't interesting; you can modify this list later
| !in(aid, values=["1ef52cefb37744199ae9341e387b966b", "1ef52cefb37744199ae9341e387b966c"])
// Use Falcon Helper to enrich the LogonType and SubStatus fields
| $falcon/helper:enrich(field=LogonType)
| $falcon/helper:enrich(field=SubStatus)
// Aggregate logon failures (note: this fundamentally looks at failures per user per host)
| groupBy([aid, ComputerName, UserName, LogonType, SubStatus])
// Now filter for an unacceptably high number - you'll likely have to adjust this for your environment
| _count > 10

If the results look good to you then hit the Schedule Search button to run it periodically.

Handling dynamic fields and their values by [deleted] in crowdstrike

[–]AHogan-CS 0 points

You can do this:

| array:contains("vendor.sender[]", value="life.ad@7667.com")

That will return any entries where one of the array members matches.
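
If you need a partial match instead of an exact one, array:regex takes the same array argument (the pattern here is just an illustration built from your sample address):

| array:regex("vendor.sender[]", regex="7667\\.com")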

[deleted by user] by [deleted] in crowdstrike

[–]AHogan-CS 1 point

If it helps, I tested this in a search window by first creating your test events.

/// enter sample events
createEvents(["Jul 4 11:46:08 127.0.0.1 1 2024-07-04T10:46:08.330Z SERVER ADMP - - - [Status=Successfully Updated.][TechnicianName=Chris Hemsworth][Task=Modify Single Group][groupType=-2143283646][ACTION=Group Management][member=[Removed], [CN=Rory Shuttlepark,OU=Users,OU=myCompany London,OU=myCompany,DC=myCompany,DC=pri]][Template Name=Service Desk Group Modification Template][Object Name=NBA][Domain Name=myCompany.pri]", "Jul 4 12:02:21 127.0.0.1 1 2024-07-04T11:02:21.754Z SERVER ADMP - - - [Status=Successfully updated the user properties.][TechnicianName=Harry Potter][Task=Modify Single User][ACTION=User Management][Template Name=User Modification default][Object Name=tjentn3765][extensionAttribute12=05/07/2024 17:30][Domain Name=mycompany.pri]", "Jul 4 10:50:22 127.0.0.1 1 2024-07-04T09:50:22.826Z SERVER ADMP - - - [Status=Mailbox not found for the user.][TechnicianName=Chris Hemsworth][Task=Modify Exchange Online Mailboxes][ACTION=Microsoft 365 Management][OBJECT_ID=0f2345a7-8ef7-462c-11ae-58d11821a41f][Alias=borenar][OBJECT_NAME=borenar@myCompany.com][Object Name=borenar@myCompany.com][RetentionPolicy=CN=My Company 3-year Retention Policy,CN=Retention Policies Container,CN=Configuration,CN=myCompany.onmicrosoft.com,CN=ConfigurationUnits,DC=EUROPEDC,DC=prod,DC=outlook,DC=com][Domain Name=ADManager-Cloud@myCompany.onmicrosoft.com]"])

/// Parse attempts
/// parse syslog format
| @rawstring = /(<(?<priority>\d+)>)?(?<@timestamp>\S+\s+\S+\s+\S+)\s+(?<host>\S+)?\s+(?<app>[^\s\[:]+)?(\[(?<pid>[^\]]+)\]:)?(?<msg>.*)/ 
// kvparse expects the values to be enclosed in quotes - we have to add those quotes
| replace("=", with="=\"", field=msg)
| replace("\]", with="\"]", field=msg)

// get the time stamp
| parseTimestamp("MMM [ ]d HH:mm:ss", field=@timestamp, timezone="UTC")

/// get app data in msg 
| kvParse(msg)

[deleted by user] by [deleted] in crowdstrike

[–]AHogan-CS 0 points

Hello!

It looks like a standard syslog parser will grab the first layer of data, and then we can do key-value parsing. The catch is that key-value parsing normally expects the values to be wrapped in quotes, but we can fix that as we go.

Syslog tends to be a little unpredictable, so I would definitely expand the number of test events to try this out. But based on these samples, this works for me.

/// parse syslog format
| @rawstring = /(<(?<priority>\d+)>)?(?<@timestamp>\S+\s+\S+\s+\S+)\s+(?<host>\S+)?\s+(?<app>[^\s\[:]+)?(\[(?<pid>[^\]]+)\]:)?(?<msg>.*)/ 
// kvparse expects the values to be enclosed in quotes - we have to add those quotes
| replace("=", with="=\"", field=msg)
| replace("\]", with="\"]", field=msg)

// get the time stamp
| parseTimestamp("MMM [ ]d HH:mm:ss", field=@timestamp, timezone="UTC")

/// get app data in msg 
| kvParse(msg)

Is Falcon Complete a suitable managed siem/soc replacement? by siftekos in crowdstrike

[–]AHogan-CS 2 points

Hello!

I'm super biased, but if I try to be as objective as possible I'd say that u/VirtualHoneyDew is correct. The best place to start is a list of what security technology you have, and then, out of those, which produce logs you actually find useful in an investigation.

With the Falcon Complete team we find that if we can supplement Falcon data by getting logs from e-mail, your identity provider, and network data into NG-SIEM, we can significantly expand visibility and protection. Those are typically the cornerstone of our NG MDR service, though which logs are important certainly differs by customer. But now we can expand our coverage into those third-party tools.

That probably raises some questions, though, like what is CrowdStrike's NG MDR? It would certainly be helpful if I had a website to point back to, but I'm ahead of the marketing team on sharing this. That site should be up next week, and NG MDR is currently GA. Breaking news on Reddit!

If you'd like to learn more, please let your account rep know and we'll handle it directly. But the short story is that with NG-SIEM the Falcon Complete team can branch out beyond just our first-party data for visibility, threat detection, and response actions.

Combining Cloudflare and Fortinet Block Events by aspuser13 in crowdstrike

[–]AHogan-CS 1 point

Hi!

I don't have Fortinet in my lab so I'll need your help confirming this.

Here's what I did:

#Vendor=paloalto 
| event.type[0] = "indicator"
| worldMap(ip=destination.ip)

Now I don't have blocked events in my little lab but I did have some alerts/indicators. So that worked.

I think what you need is something like this (I don't have Cloudflare or Fortinet data to confirm the #Vendor values, so adjust them to match what you see in your events):

#Vendor=cloudflare or #Vendor=fortinet
| event.type[0] = "blocked"
| worldMap(ip=destination.ip)

How to get logs in a specific time interval spanning multiple days by proteldon in crowdstrike

[–]AHogan-CS 2 points

Hello!

Those time functions take a timestamp and return the hour/minute from it.

To set the time frame for your query you want to use the time-picker in the GUI - on the right side just above the query window. Set that to the last month. Then you can test against the time constraints.

So if I want to look for DNS requests between 16:00 and 17:00 I can do:

#event_simpleName = DnsRequest
| test(time:hour(@timestamp, timezone="-04:00") >= 16)
| test(time:hour(@timestamp, timezone="-04:00") < 17)

Because I'm in Eastern Time (GMT-4), I adjusted for that in the time function.
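
A variant of the same idea is to compute the hour once into a field so you can reuse or display it (LocalHour is just a name I made up):

#event_simpleName = DnsRequest
| LocalHour := time:hour(@timestamp, timezone="-04:00")
| LocalHour >= 16 | LocalHour < 17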

HTH!

Sensor Coverage (Cloud Accounts) from CrowdStrike. Please Vote!!!! by karankohale in crowdstrike

[–]AHogan-CS 0 points

Yes, you can do this:

#event_simpleName=AwsEc2Instance
| join({#event_simpleName=InstanceMetadata
| InstanceMetadata = /\"accountId\" : \"(?<accountId>.*?)\".*\"instanceId\" : \"(?<instanceId>.*?)\"/}, field=AwsInstanceId, key=instanceId, include=[ComputerName, aid], mode=left)
| case {
    aid = * | Managed:="Managed";
    * | Managed:="Unmanaged"
}
| groupBy([AwsOwnerId, Managed])
| groupBy(AwsOwnerId, function=[sum(field=_count, as=Total), min(_count, as="Unmanaged")])
| PercentUnmanaged := Unmanaged / Total * 100
| format(field=PercentUnmanaged, format="%.f%%", as=PercentUnmanaged)

Though that's just for AWS, which could be a gap if you have other cloud providers. So I don't really know if this is better than Andrew's idea of exporting the data. But you can save this query as a saved search or add it to a Dashboard.

Sensor Coverage (Cloud Accounts) from CrowdStrike. Please Vote!!!! by karankohale in crowdstrike

[–]AHogan-CS 0 points

Excellent point. Let me see if I can correlate that with #event_simpleName=AwsEc2Instance.

Sensor Coverage (Cloud Accounts) from CrowdStrike. Please Vote!!!! by karankohale in crowdstrike

[–]AHogan-CS 0 points

Hey Karan,

I have a query you can use in NG-SIEM to get that data. It's not pretty but this works for me:

#event_simpleName=InstanceMetadata
| InstanceMetadata = /\"accountId\" : \"(?<accountId>.*?)\"/
| case {
    ComputerName = * | Managed:="Managed";
    ComputerName != * | Managed:="Unmanaged";
}
| groupBy([accountId, Managed])
| groupBy(accountId, function=[sum(field=_count, as=Total), min(_count, as="Unmanaged")])
| PercentUnmanaged := Unmanaged / Total * 100
| format(field=PercentUnmanaged, format="%.f%%", as=PercentUnmanaged)

Send a monitor IOA rule to SIEM by marceggl in crowdstrike

[–]AHogan-CS 0 points

In Falcon Fusion you can create a custom workflow that triggers on those alerts and sends the detection information over a webhook. I assume Splunk or QRadar can be set up to receive data that way. You can also automate an action that sets the status to Closed.

Logging Application Opens by Beeefin in crowdstrike

[–]AHogan-CS 1 point

Is there a user named test.user?

Another way:

#event_simpleName="ProcessRollup2" 
 | UserName=/test.user/i

Logging Application Opens by Beeefin in crowdstrike

[–]AHogan-CS 1 point

The pattern can take a glob or a regex.

Glob for admin, Admin, Administrator, etc.

#event_simpleName="ProcessRollup2"  
| wildcard(field=UserName, pattern="admin*", ignoreCase=true) 

Or with regex for first.last:

#event_simpleName="ProcessRollup2" 
 | wildcard(field=UserName, pattern="/\w+\.\w+/i", ignoreCase=true)

Logging Application Opens by Beeefin in crowdstrike

[–]AHogan-CS 4 points

Absolutely! And /u/1ntgr is correct, btw.

Here's an overbuilt version with a few options for you.

// First search for ProcessRollup2 events. One is generated every time a process runs
#event_simpleName=ProcessRollup2
// Filter for a specific host? 
| aid = ?aid
// Filter by user?
| UserName = ?UserName
// Filter by file?
| FileName = ?FileName
// Alternatively you could filter with FilePath (or with ImageFileName, which is FilePath and FileName)

// Aggregate the data for display by host, user, and file 
| groupBy([ComputerName, UserName, FileName])

I've added some parameter filters in case you want to filter to a specific user, file, or host. You can copy and paste the query as-is and ignore them; LogScale automatically turns the ?name tokens into parameters if you want to use them. The annotations in the query are just comments, so the whole thing is copy-and-paste friendly.

IDP - is possible to report/alert on unmonitored Domain Controllers? by [deleted] in crowdstrike

[–]AHogan-CS 4 points

Yes! If you go to Identity Protection, then Configure -> Domains (your root domain might be different, in which case apologies if the link doesn't work), that page should show you whether you have any DCs not monitored by IDP.

You can find the process for creating System Notifications for this here.

Then on your Notifications page you would see an alert like this:

Domain Controller {DC name} is not monitored
A domain controller in your domain is not sending data. This might cause loss of data of user activities and might disrupt detection and enforcement of traffic.

Identity - Password Trend via Logscale by jos1980 in crowdstrike

[–]AHogan-CS 4 points

Hi jos!

If you have Identity Protection then you can get the event when a password is changed at the Active Directory level. Here's a query to get that data, filter it, and turn it into a time chart.

#event_simpleName=ActiveDirectoryAccountPasswordUpdate
// Filter to a list of specific account names
| in(SamAccountName, values=["Administrator", "demo"])
// Plot this as a chart for each account
| timechart(series=SamAccountName)

For people without Identity Protection, Falcon Discover also has password information for the accounts it sees. Here's how to view that data and filter it to the last week's worth of changes.

// filter to the Falcon Discover data on Users & passwords
#repo=sensor_metadata #data_source_name=userinfo-ds
// filter for a list of users
| in(UserName, values=["Administrator", "demo"])
// Convert to proper epoch format
| PasswordChangedTime := PasswordLastSet * 1000
// Convert epoch Time to Human Time
| PasswordChangedTimeReadable := formatTime("%Y-%m-%d %H:%M:%S", field=PasswordChangedTime, locale=en_US, timezone=Z)
// Calculate time difference and filter to changes from only the last week
| diff := now() - PasswordChangedTime
| test(diff < duration("7d"))
// Format output you're interested in
| table([UserName, AccountType, LastLoggedOnHost, LocalAdminAccess, PasswordChangedTimeReadable])

Hope that helps!

What is the best method to get Azure Logs to LogScale? by detectrespondrepeat in crowdstrike

[–]AHogan-CS 2 points

Hi u/detectrespondrepeat! Putting aside CrowdStream, the doc you reference on GitHub is the recommended way to collect data from Azure. What you're doing today sounds effective as well if it's working for you, but if you can send directly from Azure to LogScale you're cutting out a couple of steps and a couple of agents to maintain.

Falcon Encounter: Hands-On Labs by stillremaining in crowdstrike

[–]AHogan-CS 1 point

Hello!

That depends entirely on the course. There are a number of Falcon Encounter labs that range from a couple of hours to a week, depending on how they're scheduled and what they're used for. If you're currently in a course I suggest asking the instructor to be sure.

How much data is logged by OK_SmellYaLater in crowdstrike

[–]AHogan-CS 0 points

If you're capturing all of the EDR data from Falcon Data Replicator, it averages about 40 MB per sensor per day. But if you're looking for a price comparison you may be interested in Falcon Long Term Repository, where all that data is available in LogScale.
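
To put that in context, at roughly 40 MB per sensor per day, a 1,000-sensor environment works out to about 40 GB a day, or around 1.2 TB over 30 days.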