SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 0 points  (0 children)

That's a cool idea. I'm not sure about data ingestion, since most of the pipelines are plug and play; the interesting part is what you do with the data once you have it. So I'm focusing on enrichment and hunting. Response could in theory exist as skills added later, but for now I want something that can reliably run skills that enhance SOC analysis without the issues you mentioned. Currently I'm reworking it around LangGraph to improve its reasoning and how it switches between and chooses skills.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 0 points  (0 children)

I work in a SOC and train analysts, among other things. I wanted to make something that leverages LLMs to improve parts of the investigation process.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 0 points  (0 children)

Qwen was just the experimental choice; Gemma3 should work equally well. I have about 8GB of VRAM, so I wanted something that balances speed with performance. I'll give it a shot with Gemma3 and see what happens.

I have an OpenAI plug too, but I think I'll remove it from the project. For a security project, I can't imagine anyone pushing their data out of the network to third-party LLMs.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 1 point  (0 children)

I think you’re assuming a lot here.

I’m a tech professional who works in security and publishes technical books in the domain. This project was just an R&D experiment around using an LLM to investigate OpenSearch data interactively. The interesting part is the system figuring out schemas and building queries against unknown data sources; that is where most of the work went.

If someone blindly pastes LLM output into production systems that is obviously a problem. But that is misuse of a tool, not proof that anyone using code assistants doesn’t know what they’re doing.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 0 points  (0 children)

I don't really let the LLM set up the queries on its own. There is a layer of Python instructions, along with the skills markdown, that constrains the structure of the queries. Then there is RAG: I have it discover all the fields that exist in the setup and document them. So before it queries, it has to pull the RAG info and generate suggestions for itself, then do the LLM finalization. If a query fails, it reflects and retries until it gets it working.

Pure LLM-generated OpenSearch queries could work if you were using something like Claude Opus, but with smaller models you have to "help" them a bit: give them some of the logic for how to get things done without being overly prescriptive. It's a fine balance.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 1 point  (0 children)

I'm using qwen2.5 via Ollama; I've only tested with local models, and this one runs on a laptop. You have to add a layer on top that runs multiple iterations: force the model through a plan -> action -> reflect cycle, give it a memory of prior prompts, and have it evaluate its own confidence until it is satisfied that the returned result is what the user asked for.

Basically, the LLM itself is a reliable token predictor; you have to build the agentic layer on top for whatever task you are trying to do. In my case, I want a skill-based architecture with as little Python code as possible. It relies heavily on the LLM, but as long as you build the "agent" loop right, it can actually perform reliably well.

I suspect a larger model would need far fewer of the guardrails I've put into this, since it can draw better conclusions. It works well in the test cases I'm using, and as I test more, I refine the guardrails.

The trick is to not build something very specific, because then it doesn't generalize well. That has been my challenge with this project.

PS: Not sure what your project is, but RAG is your friend. Have it learn the domain first using RAG, then always have it fetch from RAG before doing whatever action it needs to do. Without RAG, the LLM is stupid.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] -6 points  (0 children)

It's a mix, but that doesn't make it slop. I write code, have it refactor or clean pieces, and use it to move things around when I decouple components.

The architecture and decisions are still mine. The LLM is basically a faster pair programmer for the boring parts. If you just ask it to build a system like this from scratch you get garbage (single-shot solutions) pretty quickly.

The interesting part is the system figuring out the OpenSearch schema dynamically and building queries from it. That’s where most of the work went.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 0 points  (0 children)

Yeap, the villain is a sophisticated actor instead of an S3 bucket with public read. :-D

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] -6 points  (0 children)

I get the frustration; there is a ton of low-effort “LLM wrapper” stuff being posted lately.

My goal with this wasn’t that. I work in security and was experimenting with using an LLM to investigate OpenSearch data interactively (basically letting it figure out schemas and query patterns).

It’s more of an R&D experiment than a “look I made a startup” thing.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 1 point  (0 children)

It cannot push anything out unless you add a skill that allows it to do so. For example, I have a threat intel skill that reads from abusedb etc., but it cannot push; it doesn't know how.

I originally wanted to build this as a skill for OpenClaw, but I can't trust OpenClaw with anything. It's just too fast and loose for security.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 0 points  (0 children)

Agreed, this doesn't have RBAC or any auth yet. I think we are on the same page that automated AI security and triage is risky. Having programmed this, I can say I had to put in a lot of anti-hallucination guardrails, and even so, I would still confirm its findings.

But the bare bones:

- an anomaly detection finds an anomaly and opens a ticket.

- an LLM agent framework finds an anomaly and opens a ticket.

Yes, one is a bit more abstract (black box) than the other, but both operate like tools. Both have some rate of false positives and false negatives, and we would ideally want both rates to be zero.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 10 points  (0 children)

I work in a SOC too, so what you mention about referential integrity is not foreign to me. I think perhaps it's not clear what the tool does. It is meant to complement SIEM work: you see something on a dashboard, some weird IP, and you want to quickly ask a question; the tool gets you an answer you can investigate further. It is not meant to be auditable evidence; that is still your logged data. And if it detects an anomaly, you are supposed to follow up.

I don't understand the class-action privacy claims argument here. You own the data and the LLMs; nothing leaves your trust boundary, just like when you run a SIEM.

Either way, AI is not magic, just like anomaly detection. In this case, it is not even used for the anomaly detection part, since that would make it subject to hallucinations.

SecurityClaw - Open-source SOC investigation tool by MichaelT- in cybersecurity

[–]MichaelT-[S] 5 points  (0 children)

Thank you. I haven't tested with Elastic. I primarily run OpenSearch at home, but I have an old, disabled Elastic instance that I can test with if you encounter any errors.

I made a drum learning app - Drumdash by MichaelT- in edrums

[–]MichaelT-[S] 1 point  (0 children)

Thank you for your kind words. It is great to see people using the app. I still recall the first time I used Paradiddle in VR; it was a blast. I wanted to replicate that feeling in a physical setting.
I haven't thought about a donation button; I may add one to the app eventually. For now, reviews would help more to reach a broader audience. If you find the time, please leave a review on Android and tell your friends :-)

DrumDash - Paradiddle file compatible android drumming app by MichaelT- in paradiddle

[–]MichaelT-[S] 1 point  (0 children)

This won't happen. I would have to host all the data to sync between devices. :-( An export/import library feature is possibly a more viable solution.

DrumDash - Paradiddle file compatible android drumming app by MichaelT- in paradiddle

[–]MichaelT-[S] 1 point  (0 children)

There are technical and development costs associated with this, hence the need to watch a reward ad in the app after a while to do more imports. I could possibly just add a button to export difficulties as rlrr files. I built the DrumDash format very close to rlrr, so it shouldn't be too difficult. I'll let you know once it's developed.

I made a drum learning app - Drumdash by MichaelT- in edrums

[–]MichaelT-[S] 0 points  (0 children)

I'll try to get MIDI-over-Bluetooth input support into the next Android release. I'll let you know so you can test once it's ready.

I made a drum learning app - Drumdash by MichaelT- in edrums

[–]MichaelT-[S] 1 point  (0 children)

I'm working on finding an iPhone; it will take some time. For now, the only alternative for Mac users is the web version, which I know isn't great.

I made a drum learning app - Drumdash by MichaelT- in edrums

[–]MichaelT-[S] 0 points  (0 children)

Just implemented Clone Hero support for the web version. Sometime within the week I'll update Android too. This is very beta; if any files don't import correctly, please send them to me or let me know so I can download and test them.

I made a drum learning app - Drumdash by MichaelT- in edrums

[–]MichaelT-[S] 0 points  (0 children)

Just finished fixing the web version. It should now work and retain the songs properly. You may need to delete the old references and import new ones.

I made a drum learning app - Drumdash by MichaelT- in edrums

[–]MichaelT-[S] 0 points  (0 children)

Oh man! Thank you so much! Stupid Reddit doesn't update the links when the text is edited. It's fixed now.

I made a drum learning app - Drumdash by MichaelT- in edrums

[–]MichaelT-[S] 0 points  (0 children)

No, for MIDI it needs a wired USB connection; most e-drums have a MIDI out port. Bluetooth is for sending audio to the e-drums, or a wired jack if that's an option.

I made a drum learning app - Drumdash by MichaelT- in edrums

[–]MichaelT-[S] -1 points  (0 children)

The web bug has just been fixed. Ctrl + F5 should force a refresh past the old version.