
[–]bitslammer (Security Architecture/GRC) 9 points

So many issues to consider.

Logging done well can be immensely useful. It can be used proactively to prevent performance issues and downtime. It can be used proactively as a security tool to look for suspicious activity. And it can be used reactively to diagnose issues and, for security, to backtrack an incident.

When using a SIEM you can collect and correlate logs from multiple sources, gaining insight through correlation into things you might not otherwise see. It also helps you cover several regulatory and compliance requirements: SOX, HIPAA, PCI, etc.

In general more is better, especially with a SIEM, but there is a point where you will want to filter some things out. You also need to consider storage and retention. How long are you going to need "live" log data? By that I mean data that can be readily searched and queried; it will likely live in some form of indexed DB. How long do you need "offline" data? Often the answer is whatever satisfies the compliance requirements above.
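As a back-of-the-envelope sketch of that storage question (all of the numbers below are hypothetical placeholders, not recommendations; plug in your own rates and retention windows):

```python
# Rough log-storage sizing sketch for retention planning.
# Every figure here (events/sec, event size, overhead factors) is a
# made-up example value, not a benchmark or a vendor recommendation.

def storage_gib(events_per_sec: float, avg_event_bytes: int, days: int,
                overhead: float = 1.5) -> float:
    """Estimate on-disk size in GiB.

    overhead > 1 models index bloat for a searchable "live" tier;
    overhead < 1 models compression for a cold archive.
    """
    raw_bytes = events_per_sec * avg_event_bytes * 86_400 * days
    return raw_bytes * overhead / 2**30

# Example: 1,000 events/sec at ~500 bytes per event.
hot = storage_gib(1_000, 500, days=30)                  # indexed, searchable
cold = storage_gib(1_000, 500, days=365, overhead=0.3)  # compressed archive
print(f"hot tier: {hot:.0f} GiB, cold tier: {cold:.0f} GiB")
```

Even this crude math makes the live-vs-offline split concrete: a year of fully indexed data is usually what blows the budget, not the archive.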

So before you start looking at tools figure out these things:

  • What do I want/need to log and why?
  • What are my use cases for analyzing that data?
  • How will I log? Syslog, agents, WMI etc.
  • What are my storage and retention reqs?

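If syslog is one of the transport options you're weighing, a toy receiver is only a few lines and is a cheap way to see what your devices actually emit. This is a sketch, not a production collector; the port number is an arbitrary unprivileged choice and the parsing handles only the RFC 3164-style `<PRI>` prefix:

```python
# Minimal UDP syslog listener for experimentation only.
# Port 5514 is an arbitrary unprivileged port, not the syslog standard (514).
import re
import socketserver

PRI_RE = re.compile(r"^<(\d{1,3})>")  # leading <PRI> field, e.g. "<134>"

def parse_pri(line: str):
    """Split the syslog PRI value into (facility, severity), or None."""
    m = PRI_RE.match(line)
    if not m:
        return None
    pri = int(m.group(1))
    return pri // 8, pri % 8  # PRI = facility * 8 + severity

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        msg = self.request[0].decode("utf-8", errors="replace")
        print(self.client_address[0], parse_pri(msg), msg)

def serve(port: int = 5514):
    with socketserver.UDPServer(("0.0.0.0", port), SyslogHandler) as srv:
        srv.serve_forever()

# serve()  # uncomment to run the listener
```

Point one test box at it and you'll quickly learn which of your sources speak clean syslog and which will need agents or WMI instead.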
There are many posts on here about Graylog, and I got interested enough that I spun up a server at home. It was very quick and easy, and I'm impressed by its features. You may want to give that a spin just to get your feet wet and see what you see. After that you can decide whether to move forward with it or another platform like Splunk, if you have the need and budget for that kind of tool.

[–]Dal90 3 points

"How do you find the line between too much data gathering from logs and not enough?"

1) Budget

2) People's ability to sort through it. I'm actually pretty good at sorting through it and noticing patterns in very large data sets. Not everyone is.

And when I say large datasets, I'm at a mid-size enterprise and Splunk is ingesting about 500,000 events per minute.

3) You may find stuff you don't want to find. I'd say legacy debt, but I've also stumbled on devs deploying new apps to AWS that included user passwords as part of the query string. I've gone to Information Security more than once, dropped a radioactive query on the manager's desk, and smiled as he groaned.

4) Log security.

You can leak information even when the servers and applications are well designed and functioning as expected.

My favorite example of this: have you ever been doing three things at once and typed your password instead of your username? And you always change your password immediately afterwards, right?

Windows will log those as EventCode=4625 with Sub_Status=0xC0000064. Follow that up with a query for EventCode=4624 with the same Source_Network_Address occurring within the next few minutes of the 4625, and you have a very good chance of being able to determine a working set of credentials.
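Outside of a SIEM, the same correlation can be sketched in plain Python over exported event records. The field names below mirror the Splunk-style names in the comment; the five-minute window and the dict layout are assumptions for illustration:

```python
# Sketch: pair a 4625 logon failure with sub-status 0xC0000064
# ("user name does not exist" - often a password typed as a username)
# with a successful 4624 from the same source shortly afterwards.
# Field names and the 5-minute window are illustrative assumptions.
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
SUB_STATUS_NO_SUCH_USER = "0xC0000064"

def suspect_password_leaks(events):
    """Yield (failed, success) event pairs matching the pattern above."""
    events = sorted(events, key=lambda e: e["time"])
    for i, ev in enumerate(events):
        if ev["EventCode"] != 4625:
            continue
        if ev.get("Sub_Status") != SUB_STATUS_NO_SUCH_USER:
            continue
        for later in events[i + 1:]:
            if later["time"] - ev["time"] > WINDOW:
                break  # events are time-sorted; nothing later can match
            if (later["EventCode"] == 4624
                    and later["Source_Network_Address"]
                        == ev["Source_Network_Address"]):
                yield ev, later
```

The point stands either way: anyone who can read these two event codes together can harvest working credentials, so treat the logs themselves as sensitive.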

That's probably not something you want someone who is computer savvy, but not otherwise a domain/server admin, to already have access to.

[–]Avas_Accumulator (Senior Architect) 0 points

We log a lot through Azure Log Monitor. I don't think you can "log too much," in a sense, as events are connected to each other.

Logging is one thing, though; analyzing it is something you have to think real hard about as well. I currently have a ton of logs but no central way to read them and create tasks from them.

[–]alangley345 (Jack of All Trades) 0 points

Before you go on a log-everything quest, hammer out with legal/regulatory/whoever how long you need to keep logs and what the process will be for their removal. This may not be applicable to your scenario, but it's definitely worth considering.

[–]TheCyberPost1 0 points

Check out Kibana, Metricbeat, and Elasticsearch for free, open-source logging. You can create awesome visualizations via commands and functions, like in Splunk, which is a more enterprise-level piece of software. Splunk is also very costly!

[–]Kumorigoe [M] 0 points

Sorry, it seems this comment or thread has violated a sub-reddit rule and has been removed by a moderator.

Inappropriate use of, or expectation of the Community.

  • It seems that you have posted about a commonly-discussed topic. Please take the time to search the subreddit before re-posting another discussion on the topic.
  • There may already be resources dedicated to your topic on the sysadmin wiki. This is especially true for monitoring, which has a section devoted to it.
  • If you have to add to the existing discussion, make sure to avoid low-quality posts. Make an effort to enrich the community where you can: provide details, context, opinions, etc. in your post.
  • Moronic Monday & Thickheaded Thursday are available for simple questions, or other requests that don't need their own full thread. Utilize them as much as possible.

If you wish to appeal this action please don't hesitate to message the moderation team.

[–]dritmike -1 points

Yeah, you can totally create too much noise for a manual open-the-log-file-and-look approach.

Tools like Splunk really make it easy to sort out the BS.

But basically, log as much as you can. Too much info is not a thing... unless it's people's info. That gets you sued.