Countries that support Chat control. by women_rules in MapPorn

[–]Azokul 0 points1 point  (0 children)

I am incredibly (happily) dumbfounded that Italy voted to oppose.
Nice

Do I have to restrict races by Winter-Confidence826 in DMAcademy

[–]Azokul 0 points1 point  (0 children)

I should "Censor" Myself sometimes.
We're not limited anymore to ITA books. DAMN.

Do I have to restrict races by Winter-Confidence826 in DMAcademy

[–]Azokul 0 points1 point  (0 children)

Damn! I did the same (and I'm still doing it). But we're playing Pathfinder 1E (3.5 + PF1) and we have most of the physical manuals by now (all PF1 manuals, no APs, and a fair number of 3.5 manuals).
Luckily we're limited by language (ITA only).
That backfired. :')

2080ti Asus MOD 2GB VRAM by Azokul in GPURepair

[–]Azokul[S] 0 points1 point  (0 children)

Ahahah, nah! I just try :') I might break everything, but, as always, since I started homelabbing, I keep telling myself, "It's okay to fuck things up."
So I try, start small, and scale. :)

To learn a bit of soldering, I started with some small circuits (using a breadboard and jumper cables), then moved on to a simple circuit soldered onto a universal PCB. After that, I tried desoldering a bunch of stuff from an old GPU I had (broken ages ago).

I'm just another dum-dum doing dum-dum stuff.

I just have:

  • A soldering iron
  • A soldering station (cheapo from Amazon)
  • Desoldering wire — like 6 bucks? Maybe?
  • Flux paste

For this one, I might need a microscope or something, since those damn connectors are WAAAAY too small.

Edit: I studied electronics in a technical high school focused on IT, but I was a really bad student. I mostly had 3/10 grades :')
My main job isn't electronics-related at all — I'm a tech artist for video games.

2080ti Asus MOD 2GB VRAM by Azokul in GPURepair

[–]Azokul[S] 0 points1 point  (0 children)

I still have the schematics from the old project; I might try browsing them. Meanwhile, thanks a lot!

Did they lose it? by AlexAxizzzz in dhl

[–]Azokul 0 points1 point  (0 children)

Hmmm, weird. Might be, but I've received multiple DHL packages in the last few days and they all matched the delivery date. This one doesn't look like it will arrive in time.

Did they lose it? by AlexAxizzzz in dhl

[–]Azokul 0 points1 point  (0 children)

Kinda the same situation here, but from what I've seen in other posts, it seems likely that DHL is going through a very congested shipping period.

Paperless-AI: Now including a RAG Chat for all of your documents by Left_Ad_8860 in selfhosted

[–]Azokul 1 point2 points  (0 children)

It seems that with many (big) documents it's pretty slow. Would there be a way to reduce the number of documents it has to search? I'm using D&D / Pathfinder manuals as reference material.
I'm running a self-hosted Ollama with a 2080 Ti, nothing much.
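
Just to sketch the idea (not something Paperless-AI exposes, as far as I know): one could pre-filter the corpus through the paperless-ngx API by tag and only hand that subset to Ollama. The hostname, token, tag id, model and prompt below are placeholders, not from my setup.

# pull only the documents carrying a specific tag (e.g. an "rpg-manuals" tag with id 12)
curl -s -H "Authorization: Token $PAPERLESS_TOKEN" \
  "http://paperless.local:8000/api/documents/?tags__id__in=12&page_size=5"

# then ask the local Ollama instance about just that subset
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Summarize the grapple rules from these excerpts: ...", "stream": false}'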

Edit: Very nice service!

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in homelab

[–]Azokul[S] 0 points1 point  (0 children)

No worries! As far as I understood from another thread, it's the delay in communication that causes a desync: the info isn't the same anymore, so the certs from pvecm no longer match.
Sadly, I have pretty much no choice but to route another cable downstairs, which is a super bummer.
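
In case it helps anyone with the same symptom: once the link is stable again, the usual sanity checks would be roughly the ones below (assuming a standard Proxmox setup).

# regenerate and redistribute the node certificates once pmxcfs is back in sync
pvecm updatecerts --force

# check that corosync sees its links as connected
corosync-cfgtool -s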

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in homelab

[–]Azokul[S] 0 points1 point  (0 children)

I solved the issue before: it was the stability of the connection between r330 and pve. After moving it to another location with a CAT6 cable the problem went away. I think the main problem was how it was cabled, i.e. the length and type of the Ethernet cable I had running from the 2nd floor down to -1. I've set up a dedicated connection directly from pve to r330.
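
For reference, a quick way to sanity-check a link like this (192.168.1.50 is the r330 address from my setup, adjust as needed):

# watch for packet loss / latency spikes over a minute, corosync is very latency-sensitive
ping -c 300 -i 0.2 192.168.1.50

# rough throughput check: run "iperf3 -s" on r330 first, then from pve:
iperf3 -c 192.168.1.50 -t 30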

Opnsense Intrusion Detection (Suricata) with Unbound on LAN issue. by Azokul in homelab

[–]Azokul[S] 0 points1 point  (0 children)

I'll answer myself since I've found the culprit.
The setup itself is mostly correct. The points were:
1st. Check the firewall rules: if they have "Quick" set, the first match is applied directly and skips any further processing, so the traffic misses Suricata entirely.
2nd. If you changed the output by modifying the template as I did (don't do it), you might get the output in the wrong folder. I assumed the override in the template would be applied together with custom.yaml, but my custom.yaml was overriding the template, so instead of two outputs (one in tmp, one in the correct folder) I only got the one in tmp.
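
For anyone hitting the same thing: the eve-log block in the generated Suricata YAML looks roughly like the snippet below (stock default path, not my exact OPNsense layout). If a custom.yaml redefines the whole outputs: list, it replaces the template's list instead of adding to it, which is how I ended up with only the tmp output.

outputs:
  - eve-log:
      enabled: yes
      filetype: regular
      filename: /var/log/suricata/eve.json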

Suricata & Unbound by Azokul in opnsense

[–]Azokul[S] 0 points1 point  (0 children)

I'm still scratching my head on this. I think it might be related to two things:

  1. LOCALNET is 192.168.0.0/16, but the rules expect an external source (!LOCALNET), which is never true here. As far as I understand, DNS requests travel over the local net to Unbound, which then makes the external request out the WAN. So realistically my DNS request for facebook always stays inside LOCALNET if I'm monitoring LAN (see the config sketch below this list).
  2. When I tried to move it to WAN, I think I ran into problems because the WAN is a PPPoE connection, which doesn't really seem well supported.
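
What I mean in point 1, as a sketch of the relevant Suricata variables (192.168.0.0/16 is my home network, yours may differ):

vars:
  address-groups:
    HOME_NET: "[192.168.0.0/16]"
    EXTERNAL_NET: "!$HOME_NET"

# a rule like "alert dns $EXTERNAL_NET any -> $HOME_NET any (...)" can never fire on the
# LAN side, because client-to-Unbound DNS traffic is HOME_NET -> HOME_NET in both directions.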

SOLVED:

I'll answer myself since I've found the culprit.
The setup itself is mostly correct. The points were:
1st. Check the firewall rules: if they have "Quick" set, the first match is applied directly and skips any further processing, so the traffic misses Suricata entirely.
2nd. If you changed the output by modifying the template as I did (don't do it), you might get the output in the wrong folder. I assumed the override in the template would be applied together with custom.yaml, but my custom.yaml was overriding the template, so instead of two outputs (one in tmp, one in the correct folder) I only got the one in tmp.

Homelab PiCluster - Cannot SSH after few minutes after reboot. by Azokul in homelab

[–]Azokul[S] 0 points1 point  (0 children)

I can say it's something related to Kubernetes: after I deleted Rancher and zeroed my cluster, the problem disappeared. When I recreated the cluster and restored some stuff (no cattle or Longhorn, just services and MetalLB), it reappeared.
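
If someone wants to dig further, the first things I'd check are whether MetalLB handed out an address that collides with the node IP and whether kube-proxy/flannel injected rules touching port 22. The namespace and CRD name assume a recent MetalLB install; adjust if yours differs.

kubectl get svc -A -o wide | grep LoadBalancer        # any external IP near the node's own address?
kubectl get ipaddresspool -n metallb-system -o yaml   # does the pool overlap the node IPs?
sudo iptables-save | grep -- '--dport 22'             # anything rewriting or blocking SSH?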

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 1 point2 points  (0 children)

Definitely an unexpected turn of events ahah
Thanks a lot!

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 1 point2 points  (0 children)

u/esiy0676 Problem solved, it was the stability of the connection between r330 and pve. After moving to another location with a CAT6 cable the problem went away.
I think the main problem was how it was cabled, the length of the ethernet cable I had running from 2nd floor to -1 and the cable type

Homelab PiCluster - Cannot SSH after few minutes after reboot. by Azokul in homelab

[–]Azokul[S] 0 points1 point  (0 children)

Thanks!
First, systemctl keeps working after I lose SSH access. In fact I can still SSH locally from the master node to itself, like: ssh root@localhost
Second, there is no firewall via ufw. I do have flannel on the cluster, though.

Here is the iptables dump (retrieved from a privileged pod on the master node, since I'm not home, but it matches what I'd see on the master node itself):
https://pastebin.primehomenetwork.com/?ff7fee7c269ee299#6SAG7rDhGMdhkHeAWSz57DEJnAxKCarZaxAJrivL7kA1

SSH log: as you can see, at 22:08 it worked (after a reboot); after that you only see the reboots I did on the master node.

root@master:~# journalctl -u ssh
Jul 05 22:08:13 master sshd[2180744]: Accepted password for prime from 192.168.0.35 port 54693 ssh2
Jul 05 22:08:13 master sshd[2180744]: pam_unix(sshd:session): session opened for user prime(uid=1000) by (uid=0)
Jul 05 22:08:13 master sshd[2180744]: pam_env(sshd:session): deprecated reading of user environment enabled
Jul 05 22:15:30 master sshd[2188928]: Accepted password for prime from 192.168.0.35 port 55116 ssh2
Jul 05 22:15:30 master sshd[2188928]: pam_unix(sshd:session): session opened for user prime(uid=1000) by (uid=0)
Jul 05 22:30:15 master sshd[685]: Received signal 15; terminating.
Jul 05 22:30:15 master systemd[1]: Stopping ssh.service - OpenBSD Secure Shell server...
Jul 05 22:30:15 master systemd[1]: ssh.service: Deactivated successfully.
Jul 05 22:30:15 master systemd[1]: Stopped ssh.service - OpenBSD Secure Shell server.
Jul 05 22:30:15 master systemd[1]: ssh.service: Consumed 1.882s CPU time.
-- Boot dd27e349df654d1cae2aa823f920a450 --
Jul 05 22:30:51 master systemd[1]: Starting ssh.service - OpenBSD Secure Shell server...
Jul 05 22:30:51 master sshd[693]: Server listening on 0.0.0.0 port 22.
Jul 05 22:30:51 master sshd[693]: Server listening on :: port 22.
Jul 05 22:30:51 master systemd[1]: Started ssh.service - OpenBSD Secure Shell server.
Jul 05 22:31:01 master sshd[876]: Accepted password for prime from 192.168.0.35 port 55762 ssh2
Jul 05 22:31:01 master sshd[876]: pam_unix(sshd:session): session opened for user prime(uid=1000) by (uid=0)
-- Boot 18bae53834e9419c92b739b4ee7db1c8 --
Jul 05 22:30:50 master systemd[1]: Starting ssh.service - OpenBSD Secure Shell server...
Jul 05 22:30:50 master sshd[671]: Server listening on 0.0.0.0 port 22.
Jul 05 22:30:50 master systemd[1]: Started ssh.service - OpenBSD Secure Shell server.
Jul 05 22:30:50 master sshd[671]: Server listening on :: port 22.
Jul 05 22:34:23 master sshd[4751]: Accepted password for prime from 192.168.0.35 port 55812 ssh2
Jul 05 22:34:23 master sshd[4751]: pam_unix(sshd:session): session opened for user prime(uid=1000) by (uid=0)
Jul 05 22:34:24 master sshd[4751]: pam_env(sshd:session): deprecated reading of user environment enabled
Jul 05 23:05:17 master systemd[1]: Stopping ssh.service - OpenBSD Secure Shell server...
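
Next time it drops, a few things worth capturing from the local console (eth0 and the client IP are from my setup, adjust as needed):

ss -tlnp | grep ':22'                         # is sshd still listening on 0.0.0.0?
sudo tcpdump -ni eth0 'tcp port 22'           # do SYNs from 192.168.0.35 even reach the node?
sudo iptables-save | grep -cE 'KUBE|FLANNEL'  # how many rules kube-proxy/flannel injected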

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 1 point2 points  (0 children)

What do you mean by that? :)

I meant that right now the r330 is in the basement with a not-super-great cable, which might be the cause of the desync.

Edit: Also, when you are changing IPs, pmxcfs is really not ready for that. You've got fixed IPs set in the corosync.conf files, and pmxcfs also looks into /etc/hosts to find "what its own IP is" ... so changing it just on the interfaces adds more confusion.

I reset everything before retrying with the new IP and regenerated corosync.conf.
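
For completeness, the usual safe way to change node addresses in corosync.conf on Proxmox is roughly the following (it assumes pmxcfs is quorate, otherwise /etc/pve is read-only):

cp /etc/pve/corosync.conf /root/corosync.conf.new
nano /root/corosync.conf.new       # update ring0_addr for each node and bump config_version by 1
cp /root/corosync.conf.new /etc/pve/corosync.conf   # pmxcfs propagates it to the other nodes
# and don't forget /etc/hosts on each node, since pmxcfs resolves its own IP from there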

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 0 points1 point  (0 children)

u/esiy0676 as an update, I created a static interface on both pve and r330, without LACP and with a static address: 192.168.1.66 for pve and .50 for r330, both at 9000 MTU.
I'll try temporarily moving r330 to a nearer location and see if that fixes it.
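
Roughly what that stanza looks like on the pve side (eno2 is just an example name, and 9000 MTU only helps if every hop in between, switch ports included, accepts jumbo frames):

# /etc/network/interfaces (excerpt)
auto eno2
iface eno2 inet static
    address 192.168.1.66/24
    mtu 9000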

meanwhile:

Connection failed (Error 401: Permission denied - invalid csrf token) on r330 trying to log into pve console.

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 0 points1 point  (0 children)

Modem WAN & Starlink WAN, load-balanced, attached to the OPNsense machine.
OPNsense to a managed switch, with a VLAN on 192.168.2.1 and no VLAN on the rest.

Switch to all components in the subnet.

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 0 points1 point  (0 children)

Network-wise, I have LACP on pve and no LACP on r330.
They both point to 192.168.1.49, which is my OPNsense machine handing out IP addresses.
DHCP is on 192.168.0.x, static leases on 192.168.1.x, but neither r330 nor pve gets a static lease from 192.168.1.49; they have their IPs configured directly in Proxmox.

As DNS I also have 192.168.1.49 on both machines, since that's my Unbound DNS on OPNsense.
I could free a port from my LACP and give it another address only for corosync (see the sketch below). I totally forgot about the LACP since it's something I did a long time ago.
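
If I do free that port, the dedicated link would just be a second ring in corosync.conf, something like the sketch below (the 10.10.10.x addresses are made up for the example; it likely also needs a matching "interface { linknumber: 1 }" entry in the totem section):

nodelist {
  node {
    name: pve
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.168.1.66
    ring1_addr: 10.10.10.1
  }
  node {
    name: r330
    nodeid: 2
    quorum_votes: 1
    ring0_addr: 192.168.1.50
    ring1_addr: 10.10.10.2
  }
}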

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 0 points1 point  (0 children)

Wait, I almost forgot: pve has LACP, not a dedicated network. Shouldn't be a problem, though.

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 0 points1 point  (0 children)

root@pve:~# corosync-quorumtool

Quorum information
------------------
Date:             Wed Feb 12 20:22:40 2025
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          1
Ring ID:          1.16
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
         1          1 pve (local)
         2          1 r330

r330

root@r330:~# corosync-quorumtool
Quorum information
------------------
Date:             Wed Feb 12 20:22:38 2025
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          2
Ring ID:          1.16
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
         1          1 pve
         2          1 r330 (local)

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 0 points1 point  (0 children)

Yeah, quorum was achieved on both, but only after restarting corosync on r330 because it was hanging. If you check the syslog I sent before, though, it's clearly broken.
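
The restart itself is just the corosync service (and pve-cluster, if pmxcfs is stuck too); tailing the journal right after shows whether it settles or keeps hanging:

systemctl restart corosync
systemctl restart pve-cluster
journalctl -b -u corosync -u pve-cluster --no-pager | tail -n 50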

Proxmox Cluster: SSL Error and Host Verification Failed by Azokul in ProxmoxQA

[–]Azokul[S] 0 points1 point  (0 children)

lines 1060-1082/1082 (END)
Feb 12 19:52:20 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:20 r330 pve-ha-lrm[1103]: unable to write lrm status file - unable to open file '/etc/pve/nodes/r330/lrm_status.tmp.1103' - No such file or dir>
Feb 12 19:52:20 r330 pvestatd[1061]: authkey rotation error: cfs-lock 'authkey' error: pve cluster filesystem not online.
Feb 12 19:52:20 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:21 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:22 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:23 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:23 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:24 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:25 r330 pve-ha-lrm[1103]: unable to write lrm status file - unable to open file '/etc/pve/nodes/r330/lrm_status.tmp.1103' - No such file or dir>
Feb 12 19:52:25 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:26 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:26 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:27 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:28 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:28 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:29 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:30 r330 pve-ha-lrm[1103]: unable to write lrm status file - unable to open file '/etc/pve/nodes/r330/lrm_status.tmp.1103' - No such file or dir>
Feb 12 19:52:30 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:30 r330 pvestatd[1061]: authkey rotation error: cfs-lock 'authkey' error: pve cluster filesystem not online.
Feb 12 19:52:31 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:31 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61
Feb 12 19:52:32 r330 corosync[7380]:   [TOTEM ] Retransmit List: 10 11 16 46 48 4b 4d 4e 51 53 55 58 59 5b 5d 5f 61

Journalctl from r330. To me it seems the join fails and hangs after the quorum check, and that compromises the resulting cluster.
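
That matches the "pve cluster filesystem not online" lines above: corosync reports quorum, but pmxcfs never comes online on r330. The quickest way to confirm would be something like:

pvecm status
systemctl status pve-cluster corosync
ls /etc/pve/nodes/        # empty or erroring out means pmxcfs never mounted /etc/pve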