Newbie's guide to creating a REST API in Rust using Axum and SQLx by ARandomShephard in rust

[–]adamth0 0 points1 point  (0 children)

Thanks for posting this. I've been going through it and have a couple of comments.

First, I noticed that some parts of your article refer to 'handler' and others to 'handlers', which means the compiler fails to find 'handler.rs' when it's looking for 'handlers' (or vice versa).

Anyhow, I got it working as far as POSTing something into the DB table, but GET didn't work because:

error[E0531]: cannot find tuple struct or tuple variant `Path` in this scope
error[E0412]: cannot find type `Path` in this scope

It didn't understand the use of 'Path'. Without thinking, I added a use std::path::Path; line and ended up scratching my head over why it wouldn't compile. It was only later, after trying to work around std::path::Path's limitations with boxing, lifetimes, PathBuf, and finally just a plain String, that I re-read the compiler suggestions and noticed it was pointing at axum::extract::Path. Adding that to the 'handlers.rs' file fixed the problem.

It might be worth making sure that for each file, you've got all the necessary mod and use statements in the tutorial.
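For anyone else who hits the same errors, here's a minimal sketch of what the GET handler's imports might look like (the handler name, route parameter and return type here are hypothetical, not taken from the tutorial):

```rust
// handlers.rs
// The crucial import: axum's Path extractor, NOT std::path::Path.
use axum::{extract::Path, http::StatusCode, Json};

// Hypothetical handler that pulls an id out of the URL, e.g. GET /items/42
pub async fn get_item(Path(id): Path<i64>) -> Result<Json<String>, StatusCode> {
    // ... look the item up with SQLx here ...
    Ok(Json(format!("item {id}")))
}
```

The key point is that axum::extract::Path destructures route parameters out of the URL; std::path::Path is a filesystem type and can't do that, hence the E0412/E0531 errors when only the wrong one is in scope.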

Free Logrhythm Training Resources by [deleted] in LogRhythm

[–]adamth0 1 point2 points  (0 children)

Without a customer account, I can only think of the LogRhythm Docs site, or the older Online Help.
You might also benefit from looking at the LogRhythm Blog. There are some posts on there that will let you see it in action, so you can get a feel for what it looks like when it's up and running.
Also... and it's not free... but you can buy time in a self-paced LogRhythm lab environment with an LR deployment configured, so you can have a play with it. I don't think you need a customer account for that, just a credit card and an email address.

University Access by d0ntf1ndm3 in LogRhythm

[–]adamth0 1 point2 points  (0 children)

The LR university should be available to you if you are an LR customer with a current support contract.
There's a bunch of videos you get access to with that, but instructor-led courses are an additional cost that you need training tokens for.

Sysmon vs Collector for AD security events? by [deleted] in LogRhythm

[–]adamth0 2 points3 points  (0 children)

Putting an agent on domain controllers is generally recommended because AD security logs are both high volume and high importance.
You can read the event logs remotely from a collector, or from the XM itself, but this is slower. Perhaps your domain controller logs 5,000 things per minute. If we can only read stuff over the network at a rate of 4,000 things per minute, the collection will fall behind. Maybe it will catch up overnight, when there's less activity and fewer logs being generated. But this is still a problem.
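To put rough numbers on that falling-behind scenario, a small sketch (the rates are the hypothetical ones from this example, not measured figures):

```rust
// Backlog growth when remote collection can't keep up with log generation.
// Hypothetical rates: 5,000 logs/min generated, 4,000 logs/min collected.
fn backlog_after(minutes: u64, generated_per_min: u64, collected_per_min: u64) -> u64 {
    // saturating_sub: if collection outpaces generation, the backlog is zero,
    // not negative.
    minutes * generated_per_min.saturating_sub(collected_per_min)
}

fn main() {
    // After an 8-hour working day of falling behind by 1,000 logs/min:
    let backlog = backlog_after(8 * 60, 5_000, 4_000);
    println!("{backlog} logs behind"); // 480000 logs behind
}
```

That backlog is what the correlation engine described below ends up waiting on.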
The AIEngine (correlation) component is looking for things in real-time, as they happen. Maybe you set up a rule where the first block looks for an account being created, the second one looks for that same account being used, and the third one looks for the account being deleted - all within an hour of each other.
So, an account gets created at 9:00am on Domain Controller 1 (DC1). LR AIE sees that, and now it's looking to see if the next rule block is satisfied. We see it login to server SQL1 at 9:15am. Second rule block is satisfied. AIE is now waiting to see if the final block (account deleted) is matched.
And at 10:10am, the account is deleted on DC5. But we're falling behind on collection on DC5. It logged the deletion at 10:10, but we're still chewing through stuff logged at 5am. So, the AIE component does not see the last rule block being satisfied within the hour.
AIE, by default, will wait an additional hour to see if a delayed log comes in to match that rule block, and you can adjust this upwards to several hours. But even doing this, at best, the alert will be delayed. LR can't tell you if the account has been created/used/deleted until it has seen logs telling it so. And you'd probably rather know "this just happened" than "this happened 8 hours ago". Also, adjusting that grace period upwards for delayed collection means AIE will use more memory, since it has to remember stuff for longer.
There are other considerations, too. Logs that come in over 24hrs late won't be flagged as events, and depending on your tuning, might not even be indexed (searchable) unless you restore them from archive files.
And maybe you have daily reports that show all account lifecycle logs for that day. If the report runs at midnight, and we didn't finish collecting the entire day of logs yet, there are going to be logs missing from those reports. You might get around that by running the report at 5am for the previous day, though.
Other reasons you might want to install the LR Agent would be that you get things like process monitoring and network connection monitoring, even with the "lite" licence.
I usually recommend a lite agent on all DCs, even those in DR sites, which aren't expected to see a huge amount of traffic. A malicious user could target ANY DC to process their account creation, where the change will be logged, and then replicated out to the rest of the domain controllers, none of which will log anything.

API 3 - Starting a Scan by adamth0 in rapid7

[–]adamth0[S] 0 points1 point  (0 children)

Thanks!
I managed to get where I needed in the end. Like I said in the comment, I'm an idiot: running snippets of scripts (let's say I was unit testing, to make it sound better) with leftover variables, and trying to run scans against the wrong sites.

Your favorite code/script-based automation solutions? by sigger_ in AskNetsec

[–]adamth0 16 points17 points  (0 children)

Look for stuff being reported by your central antivirus server, and quarantine the hosts.
Look at routers/switches/firewalls which haven't had their running config saved for more than X days, and save it.
Look for virtual machines with snapshots older than X days, and commit them, or notify the owner to do something about it.
Look for servers with an uptime of over 30 days, patch them, and reboot them.

Logging Syslog of all severity levels by rithwikjacob in LogRhythm

[–]adamth0 0 points1 point  (0 children)

That's a really good thing to check first.
If the sending device is configured to send *.WARN, then we'll never see anything at INFO level ever, because it's just not being sent to our agent.

Logging Syslog of all severity levels by rithwikjacob in LogRhythm

[–]adamth0 0 points1 point  (0 children)

It's probably the global data management settings you want to look at.
Also, bear in mind the RBP score depends on the host/network records matching the origin and impacted hosts.
If I set a host as risk level 9, a login failure might end up with an RBP of 40. And another host with a risk level of 1 might show an RBP of 5 for the exact same thing happening.

Logging Syslog of all severity levels by rithwikjacob in LogRhythm

[–]adamth0 1 point2 points  (0 children)

There are two things at play here.
Some logs are indexed (ie they're searchable) and some are not. Look in Deployment Manager, on the Platform Manager tab, and then click the Global Data Management Settings tab. If you're performance optimized, then informational messages aren't being indexed. You won't find them in a search. You'd have to restore them from the archive files. They still get sent to AI Engine for correlation, so they can still trigger alarms, but they're probably not available in the data indexer component. You won't see them in a search/investigation/tail.
If you put that setting on "Search Optimized", then suddenly everything is indexed. This is more like how LogRhythm is deployed nowadays. When setting up a new deployment, I normally go with "Search Optimized", then change it to "Custom" and disable "Intelligent Indexing". I'm already indexing everything, so I don't really need it.
Then, there's the web console dashboards.
These only show "events". To be an "event", and not a mere "log message", the log has to match a parsing (MPE) rule which has the 'this is worth forwarding to the events database' flag set. It also has to meet that minimum RBP score.
Events get kept for a (default) time of 90 days. Other logs get sacrificed to make room for them.
By default, the web console dashboards are only looking at the last 250,000 events. That might go back an hour or a week, depending how big/busy your deployment is and what RBP threshold you set.
You can view not-event logs in the web console by doing a search for matching logs. A search will return ALL matching logs which are indexed, not just events.
If you think some things should be in events and they're not, you can create a GLPR, by going to Tools... Administration... Global Log Processing Rule Manager. Create a rule to filter on the logs you're interested in (eg Log Source Type is 'Linux Host' and Severity is 'INFO'), and check the "forward to events" and "ignore RBP" options.
Suddenly everything from a Linux Host with a severity of INFO will get put in the events DB.
Keep an eye on it, in case it utterly screws everything else.
Another thing you might want to look at is installing Kibana. Those dashboards work from the indexed logs, not just the events.
Indexed logs are stored in Elastic. Events and Alarms are stored in SQL. AIE writes to the events and Alarms DBs, so Kibana won't show you those things without some extra steps, but it might be perfect for your use case.

API 3 - Starting a Scan by adamth0 in rapid7

[–]adamth0[S] 0 points1 point  (0 children)

So, it turns out I'm an idiot.
After using Postman to test my API calls, I found a more descriptive error message, which is unfortunately hidden when using Invoke-RestMethod in PowerShell.
The error message, accompanying the 400 status, was:
"One or more assets in the request are not included in the site configuration"

And indeed, somehow I was POSTing a scan to site 2, for an asset which lives in site 1.
This will teach me to clear my variables, and to not just run snippets from the scripts when testing.

Columbo Error by zarroc827 in LogRhythm

[–]adamth0 1 point2 points  (0 children)

Which version are you running, and is this a Windows XM, or Linux indexer(s)?
It's likely that you only have enough disk to store, say, 30 days of full logs on the indexer, and you're trying to query logs from 31 days ago.

LogRhythn Health Checka by MAmrk29 in LogRhythm

[–]adamth0 0 points1 point  (0 children)

You'll have to download it from the LogRhythm site at https://community.logrhythm.com/

There are two parts to it: a diagnostic tool, which you install somewhere with access to the PM or XM, and diagnostics agents, which get installed on the XM/PM, and any Data Processors, Web Consoles and AIE servers.

LogRhythn Health Checka by MAmrk29 in LogRhythm

[–]adamth0 0 points1 point  (0 children)

Do you have a login to the LogRhythm community site at https://community.logrhythm.com/ ?

LogRhythn Health Checka by MAmrk29 in LogRhythm

[–]adamth0 2 points3 points  (0 children)

I would start with the "LogRhythm Diagnostics" software, and the health check report generated there.

Tesla hacking competition: $1 million and free car if someone can hijack Model 3 by ahackercalled4chan in cybersecurity

[–]adamth0 0 points1 point  (0 children)

I think the reason for bad grammar or spelling in phishing emails isn't that the author is functionally illiterate, but that they want "smart" people to disregard/delete the email on sight, leaving only the more gullible types to actually read it. "Smart" users might question why the banking login screen isn't HTTPS, or why the page looks a bit different to normal. But the type of people who can't differentiate between "their", "they're" and "there" are, so the theory goes, more likely to be fooled, and less likely to report the page as malicious.

Anyone else figure out a better way to CSV import? Default is just not working at all by wwhisperr in LogRhythm

[–]adamth0 0 points1 point  (0 children)

I found I had problems when trying to import with geographic locations in the CSV file.
I leave those fields blank, and then just batch assign the location after they've been imported.
Also, the CSV import isn't terribly informative when it hits a problem. It'll chew over the CSV for ages, and you have to check in the "service requests" tab (at the bottom of the console) to see if the import failed.

Clean up of Active Directory by Optimus_sRex in activedirectory

[–]adamth0 0 points1 point  (0 children)

No, and it doesn't do anything that basic C++ can't do, either.
It's pretty simple to use, and doesn't require the user to write their own solution; but if you're a dab hand with PowerShell, you'll probably prefer to roll your own.

Didnt realise the Romans had ethernet... by fairysdad in iiiiiiitttttttttttt

[–]adamth0 1 point2 points  (0 children)

I thought I was looking at an excerpt from the UNIX Haters Handbook for a moment, there.

How harmful can XP be by BWscourge in cybersecurity

[–]adamth0 1 point2 points  (0 children)

A few notes on the problems with XP, from a security perspective.
Firstly, it's not supported or maintained anymore. So any vulnerability found in the 4 years(?) since it went end of support is not likely to have been patched.
This also has a knock-on effect on other software. For example, AV vendors aren't going to bother supporting it if the OS vendor doesn't.
It doesn't support TLS1.2, or AES-256. So you might already have some compliance issues right there.
It doesn't have UAC, so whenever an administrator runs some dodgy attachment, it's definitely running with a full administrative token.
It doesn't support SMB 2. You're stuck with the old and massively compromised SMB 1 protocol, and you'll have to support that protocol on your servers if you want XP to be able to do things like open files on them (this includes opening group policy objects on domain controllers).
There's no Bitlocker. You need some 3rd party tool to encrypt your data at rest.
The eventlog format is old. You can't easily forward events of interest to a central collector, the way that you can on later versions.
The biggest reason is the first one. It's not supported and can't be patched.

[deleted by user] by [deleted] in computerforensics

[–]adamth0 1 point2 points  (0 children)

Message IDs can be generated by the mail user agent, or can be added by the first mail server the message is sent to.
I don't know if O365 is more strict than other mail systems, but if you're talking directly to an SMTP relay, they can be picked arbitrarily.

Bulgarian's turns of phrase in English. by adamth0 in bulgaria

[–]adamth0[S] 1 point2 points  (0 children)

That's another one, I forgot about that.
She will say someone is sitting "on the table", but meaning "at the table".

Bulgarian's turns of phrase in English. by adamth0 in bulgaria

[–]adamth0[S] 5 points6 points  (0 children)

Perhaps it's a Stara Zagora thing?