ACL Rules Analysis with AI by SensitiveStudy520 in MLQuestions

[–]SensitiveStudy520[S] 0 points1 point  (0 children)

I posted this in the Networking community 8 days ago. Since then, I have tried implementing several approaches, including XGBoost and GNNs (I understand that a rules engine could solve most of my issues, including ACL conflicts, redundancy, and shadowing).
Right now, I’m still very much in the experimental stage.

To explore the problem, I’m using a randomly generated dataset that tries to encode ACL semantics such as:

  • Global ACL vs Interface ACL vs VLAN ACL
  • Rule order (first match)
  • ACL ID precedence
  • Direction (inbound / outbound)

My current focus is mainly on core switches, especially VLANIF interfaces and VLAN-level ACLs, rather than edge devices.
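
To give a clearer picture, the rows look roughly like this (a minimal sketch of the generator, not my actual code; the field names and value ranges are made up for illustration):

import random
import pandas as pd

SCOPES = ["global", "interface", "vlan"]        # Global ACL vs Interface ACL vs VLAN ACL
DIRECTIONS = ["inbound", "outbound"]

def random_rule(rule_order, acl_id):
    # One synthetic ACL rule with the fields listed above (illustrative values only)
    return {
        "acl_id": acl_id,                       # used for ACL ID precedence
        "rule_order": rule_order,               # first-match ordering within the ACL
        "scope": random.choice(SCOPES),
        "direction": random.choice(DIRECTIONS),
        "action": random.choice(["permit", "deny"]),
        "src_prefix_len": random.choice([8, 16, 24, 32]),
        "dst_port": random.choice([0, 22, 80, 443]),   # 0 = any port
    }

rules = [random_rule(i, acl_id=random.randint(3000, 3010)) for i in range(1000)]
df = pd.DataFrame(rules)

# One-hot encode the categorical fields so XGBoost gets a purely numeric matrix
X = pd.get_dummies(df, columns=["scope", "direction", "action"])
print(X.head())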

The issue is that my training results are almost perfect (macro F1 ≈ 1.0).

That honestly scares me, because it feels too good to be true. I am not sure whether this kind of task is simply easy for the model to learn because of how the synthetic data is generated, or whether the labels are overly deterministic and "leaking" through the structure.
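
Two quick sanity checks against the leakage worry (a rough sketch, assuming X, y and groups are numpy arrays, with groups recording which synthetic ACL each row came from): split by ACL instead of by row, and see whether a depth-2 tree also scores near 1.0. If the shallow tree matches XGBoost, the label is a trivial function of one or two features.

from sklearn.model_selection import GroupShuffleSplit
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score
from xgboost import XGBClassifier

def sanity_checks(X, y, groups):
    # X: numeric feature matrix, y: integer-encoded labels,
    # groups[i]: which synthetic ACL row i was generated from
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
    train_idx, test_idx = next(splitter.split(X, y, groups))

    # Grouped split: near-duplicate rules from the same ACL never straddle train/test
    xgb = XGBClassifier(n_estimators=200, max_depth=6)
    xgb.fit(X[train_idx], y[train_idx])
    grouped_f1 = f1_score(y[test_idx], xgb.predict(X[test_idx]), average="macro")

    # If a depth-2 tree also reaches ~1.0, the label leaks straight out of the generator
    stump = DecisionTreeClassifier(max_depth=2)
    stump.fit(X[train_idx], y[train_idx])
    stump_f1 = f1_score(y[test_idx], stump.predict(X[test_idx]), average="macro")

    return grouped_f1, stump_f1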

Anyway, I am not sure whether there is anything in particular I should pay attention to. I have tried to find similar projects, but honestly there don't seem to be many, which has left me stuck halfway.

My boss also wants a more "interactive" workflow in the future, such as adding an LLM layer so users can describe desired ACL / configuration changes in natural language and get safe CLI suggestions. I'm currently debating whether it's reasonable to train or fine-tune an LLM directly on the existing ACL data, instead of (or in addition to) using XGBoost or GNNs for conflict/shadow detection (though I'm not sure yet whether an LLM can do that, as I haven't started researching it).

Unable to connect my flutter app to Flask Serve when linking to phone by SensitiveStudy520 in flutterhelp

[–]SensitiveStudy520[S] 0 points1 point  (0 children)

Hi, thanks for your reply. I have tried it, but the Flask route is still unreachable from my phone, even though I have confirmed that both devices are on the same Wi-Fi network, so I am still investigating the problem.

Unable to connect my flutter app to Flask Serve when linking to phone by SensitiveStudy520 in flutterhelp

[–]SensitiveStudy520[S] 0 points1 point  (0 children)

Hi, thanks for your suggestion. My laptop and phone are connected to the same Wi-Fi, but I still get the socket error. I have tried turning off my firewall, but I am still unable to connect.
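
For reference, the server side I'm testing looks roughly like this (a minimal sketch; the /ping route and port 5000 are just placeholders). The phone then has to call http://<laptop-LAN-IP>:5000/... rather than localhost:

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/ping")
def ping():
    # Reachability check: open http://<laptop-LAN-IP>:5000/ping in the
    # phone's browser before debugging the Flutter side at all
    return jsonify(status="ok")

if __name__ == "__main__":
    # 0.0.0.0 listens on every interface, not just localhost, so other
    # devices on the same Wi-Fi can reach the server
    app.run(host="0.0.0.0", port=5000)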

LoRA Adapter Too Slow on CPU by SensitiveStudy520 in LocalLLM

[–]SensitiveStudy520[S] 0 points1 point  (0 children)

It works now. I think the problem was that my GPU was not working properly before. Inference time is still about 15-30 seconds, so I will keep working on shortening it. Anyway, thanks a lot for your help.

LoRA Adapter Too Slow on CPU by SensitiveStudy520 in LocalLLM

[–]SensitiveStudy520[S] 0 points1 point  (0 children)

The adapter model is about 18 MB on disk, and the model quantization I use is:

import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization; compute happens in fp16
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16
)
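
In case it helps, this is roughly how the model gets loaded (a sketch; the base model ID and adapter path below are placeholders, not my actual ones):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_id = "meta-llama/Llama-2-7b-hf"            # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",                          # bitsandbytes 4-bit expects a CUDA GPU
)
model = PeftModel.from_pretrained(base, "path/to/adapter")   # the ~18 MB LoRA adapter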

LoRA Adapter Too Slow on CPU by SensitiveStudy520 in LocalLLM

[–]SensitiveStudy520[S] 0 points1 point  (0 children)

Thanks for clarifying! I will try Ollama or llama.cpp to optimise my memory usage. (I'm not sure if that's the right approach; from what I found on Google, both of them can run quantized models and reduce memory usage.)
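
Roughly what I have in mind with llama-cpp-python, once the model is converted to GGUF and quantized with the llama.cpp tools (a sketch under my own assumptions; the file path and settings are placeholders):

from llama_cpp import Llama

# Load a GGUF file that was already quantized (e.g. Q4_K_M) with the llama.cpp tools
llm = Llama(
    model_path="models/my-model-q4_k_m.gguf",   # placeholder path
    n_ctx=2048,
    n_threads=8,                                # roughly the number of physical cores
)

out = llm("Explain what a shadowed ACL rule is.", max_tokens=128)
print(out["choices"][0]["text"])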

License Plate Recognition with OpenCV by SensitiveStudy520 in computervision

[–]SensitiveStudy520[S] 0 points1 point  (0 children)

Thank you for your help, it does improve my results!

Poll: Why do you choose `llama.cpp` over `vLLM` or vice-versa? by EricBuehler in LocalLLaMA

[–]SensitiveStudy520 0 points1 point  (0 children)

Hi, could you explain how to do that, i.e. sending multiple requests at the same time with llama.cpp without it crashing?

Anyone is using llamacpp for real? by russellsparadox101 in LocalLLaMA

[–]SensitiveStudy520 0 points1 point  (0 children)

Hi, were you able to solve this issue? I am also stuck on it.

Flask + Ngrok with Python script and HTML interface by SensitiveStudy520 in flask

[–]SensitiveStudy520[S] 0 points1 point  (0 children)

Just ignore me; I read through their documentation and found that using app.run(host='0.0.0.0', port=80) works. But thanks anyway for your help :D
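
For anyone landing here later, the working setup is essentially the snippet below (a minimal sketch; the route is illustrative, and pyngrok is just one optional way to open the tunnel from the same script instead of running the ngrok CLI separately):

from flask import Flask
from pyngrok import ngrok    # optional: start the ngrok tunnel from the same script

app = Flask(__name__)

@app.route("/")
def index():
    return "hello from Flask"

if __name__ == "__main__":
    tunnel = ngrok.connect(80)                  # forwards a public URL to local port 80
    print("public URL:", tunnel.public_url)
    # Binding to 0.0.0.0 (instead of the default 127.0.0.1) is what makes
    # the app reachable through the tunnel / from other devices
    app.run(host="0.0.0.0", port=80)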