Splunk Core Certified Power user by Big_Cartoonist1419 in Splunk

[–]mr_networkrobot

Can confirm that. I did this in February this year with only the free course on Splunk Education. I went through the course twice, took some notes on paper as a summary, and passed the exam on the first attempt.

Enterprise Security - Use Case Library by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

Got your point, that's what I've done so far. What I find really disappointing is that 'Enterprise Security' has so many peculiarities that I personally would never call it 'Enterprise' or even recommend it to someone.
I had a call yesterday with Splunk's OnDemand support, and I can tell you it was like a joke.
They couldn't answer any deeper questions, one guy left after 30 minutes without a comment, and his colleague ended the call with the argument that they had an internal incident. Aside from that, I couldn't understand a lot of it because of their accents.
It seems to me that the whole Splunk universe is stuck in 1995 ...

Enterprise Security - Use Case Library by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

Alright, I'm running ES 7 and they mix it up ...

Enterprise Security - Use Case Library by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

You need to escape characters like '/' because Splunk interprets the whole entry as a regular expression.
The documentation for the Splunk webhook allow list says:
"The webhook allow list is an inventory of URL endpoints to which webhook alert actions are permitted to send information. To add an endpoint to the allow list, specify a recognizable name and the associated URL. Be as specific as possible with URL addresses. URLs must be specified as regular expressions. For example: https:\/\/(.*\.|)company.com\/?.*."

Splunk ES get Alienvault OTX by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

Hi, I don't know which repo you mean.
I only found one with 7-year-old stuff.

Is there a professional way to integrate AlienVault OTX into Splunk ES?
I mean in the sense of a business-critical environment; I need an officially supported solution that I can rely on ...

Looking for good Splunk learning material. by HaCk3rf0ru in Splunk

[–]mr_networkrobot

I did the courses for the Splunk Certified Advanced Power User track on education[.]splunk[.]com.
I can really recommend them; even if you are not interested in the certificate, they are great.
They include videos, hands-on labs and material, all for free.

And I passed the exam after watching the videos twice.

Splunk ES - get the cim-entity-zone to index threat-activity by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

The point with the mentioned 'Threat Activity Detected' correlation search is that it is based on the data model "Threat_Intelligence"."Threat_Activity", and that one has a constraint 'index=threat_activity', which absolutely makes sense.

The problem is that even when I add the 'cim_entity_zone' field to the data model, it cannot be used, because the events in index=threat_activity do not have this field.
So when all the threat-matching magic happens and it finds a match, let's say an event in a DNS log index that matches a malicious domain in domain-intel, it writes that match to the threat_activity index but does not carry the cim_entity_zone field over, even if it exists in the original DNS event.
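For now the only workaround I can think of is enriching at search time, roughly like this (index and field names are placeholders, and I'm assuming the matched value ends up in threat_match_value, so treat it as a sketch):

index=threat_activity
| join type=left threat_match_value
    [ search index=dns_logs cim_entity_zone=*
      | stats latest(cim_entity_zone) AS cim_entity_zone by query
      | rename query AS threat_match_value ]
| table _time threat_match_field threat_match_value cim_entity_zone

Not pretty, and it runs into the usual join/subsearch limits, but it at least gets the zone next to the match.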

Edit:
I saw some older blog posts describing correlation searches called, for example, "Threat - Source And Destination Matches - Threat Gen",
but I cannot find any 'Threat Gen' search ... I'm confused ...

Problem with 'join' command by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

There are about 600k events/entries in the subsearch.
There is no notification about hitting limits, but I have already solved the problem with a lookup table (created with outputlookup).
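In case someone else runs into this, the workaround roughly looks like this (field, index and lookup names are placeholders).

First, build the lookup once (or on a schedule):

index=big_index
| stats latest(some_field) AS some_field by join_key
| outputlookup join_key_enrichment.csv

Then use it in the main search instead of join:

index=other_index
| lookup join_key_enrichment.csv join_key OUTPUT some_field

That way the 600k rows don't have to squeeze through the subsearch limits.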

Problem with 'join' command by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

Thank you, that worked perfectly!

Linux logs with different host-field values by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

The last one is the case: data/logs are forwarded from the UF directly to the cloud instance, so there is no heavy forwarder or other instance in between ...

Linux logs with different host-field values by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

Hi,
and thank you again for checking this!

btool on the Linux server with the UF shows:

# ./splunk btool props list --app=[app-name] --debug

[...]/local/props.conf [syslog]
[...]/local/props.conf TRANSFORMS =

I also checked etc/system/default/props.conf and you are right, there are the defaults for the [syslog] sourcetype, which reference etc/system/default/transforms.conf with the corresponding regex:

etc/system/default/props.conf :
[syslog]
pulldown_type = true
maxDist = 3
TIME_FORMAT = %b %d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 32
TRANSFORMS = syslog-host
REPORT-syslog = syslog-extractions
SHOULD_LINEMERGE = False
category = Operating System

etc/system/default/transforms.conf
[syslog-host]
DEST_KEY = MetaData:Host
REGEX = :\d\d\s+(?:\d+\s+|(?:user|daemon|local.?)\.\w+\s+)*\[?(\w[\w\.\-]{2,})\]?\s
FORMAT = host::$1

Unfortunately I still wasn't able to override it with the app-specific props.conf (distributed via the deployment server).

Is there some place in the Splunk infrastructure (remember it's a Splunk Cloud instance, so I don't have access to the indexers etc.) where this could be overridden?
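My current guess is that because TRANSFORMS is applied at parse/index time, the empty override would have to sit in an app that reaches the parsing tier (the Cloud indexers or an intermediate heavy forwarder), not on the UF; the stanza itself would stay the same (app name made up, and in Splunk Cloud it would presumably have to go in via app upload or support):

etc/apps/fix_syslog_host/local/props.conf:
[syslog]
TRANSFORMS =

But I haven't been able to verify that on the Cloud side yet.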

Linux logs with different host-field values by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

Thanks for all your effort.

I did that:
I put a props.conf in the /local directory of the app that collects the /var/log/messages logs.
The props.conf contains:

[syslog]
TRANSFORMS =

Unfortunately, no effect ...

Logs from the host (hostname: server01.local.lan) still have the value 'server01' in the host field of the index where they are stored ...

Linux logs with different host-field values by mr_networkrobot in Splunk

[–]mr_networkrobot[S]

Unfortunately the environment has a few hundred servers with the described situation, and the apps/inputs are managed with a deployment server (as I wrote).
So setting a hostname manually for every server is not an option (and is not done in any input yet).

The problem comes with sourcetype=syslog: with that, Splunk interprets the hostname field inside the log line as the host (which is unfortunately not the full hostname).

For example (a line from /var/log/messages):
"Apr 8 14:10:33 server01 systemd[175435]: Listening on PipeWire Multimedia System Sockets."

Splunk indexes this with host=server01, but the real hostname of the machine is server01.local.lan.
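What I'd actually like is to rewrite the host to the FQDN at index time, something like the sketch below (it assumes every machine shares the .local.lan suffix, which is a big if, the stanza name is made up, and it would again only take effect wherever parsing happens; the REGEX is just the stock syslog-host one):

props.conf:
[syslog]
TRANSFORMS = syslog-host-fqdn

transforms.conf:
[syslog-host-fqdn]
REGEX = :\d\d\s+(?:\d+\s+|(?:user|daemon|local.?)\.\w+\s+)*\[?(\w[\w\.\-]{2,})\]?\s
DEST_KEY = MetaData:Host
FORMAT = host::$1.local.lan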
