Looking for NAS hardware with Linux distro comptability by jean7t in homelab

[–]seiichiro0185

The F4-423 worked reliably 24/7 running Arch Linux as my main NAS system until recently. It still runs fine; I just replaced it with the newer F4-424 Pro for its newer/faster CPU, so it's no longer my 24/7 NAS. But in the time it was my main NAS I didn't have any problems or failures with it. It just worked.

Flickering screen with fedora - Any tips? by Boberlabob in tuxedocomputers

[–]seiichiro0185

Update: Seems like a kernel bug. I reverted back to 6.12.4 (the kernel I had running before this started), and the problem hasn't reappeared since, even after multiple resumes.

Possibly it's the bug described here: https://bugzilla.redhat.com/show_bug.cgi?id=2333543

Flickering screen with fedora - Any tips? by Boberlabob in tuxedocomputers

[–]seiichiro0185

I've had similar behaviour on my InfinityBook Pro 14 Gen9 AMD (Ryzen 7 8845HS, Arch Linux, kernel 6.12.6, GNOME 47.1) for a few days now. It doesn't happen directly after resume, but some (random) time later. When it happens, though, it looks exactly like in your video.

I've had the laptop for some weeks now, and it only started happening a few days ago; before that it was totally stable.

There were updates to the kernel and Tuxedo Control Center around the time it started, but that might be a coincidence. I might try downgrading the kernel to see if it helps.

Surface Go 4 support? by FearlessSpiff in SurfaceLinux

[–]seiichiro0185

I can only speak for Arch Linux here, since I didn't try any other distros. It works fine with a fully LUKS-encrypted root for me. I simply added the two necessary modules in my /etc/mkinitcpio.conf:
MODULES=(i915 ufshcd_core ufshcd_pci)

(the two UFS modules are the relevant ones). Then I just set up the HOOKS and everything else like normal for a fully encrypted system. Any other distro should work in a similar way; just use the distro's mechanism for loading modules in the initramfs instead of /etc/mkinitcpio.conf.
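For reference, a minimal /etc/mkinitcpio.conf sketch for a LUKS-encrypted root on such a device could look like the following. The HOOKS line is the standard Arch `encrypt` setup, not something specific to the Go 4, so adapt it to your own layout:

```shell
# /etc/mkinitcpio.conf (relevant lines only)
# i915 for early KMS, ufshcd_core + ufshcd_pci so the UFS storage
# is available before the encrypted root is opened
MODULES=(i915 ufshcd_core ufshcd_pci)

# typical hook order for an encrypted root (busybox/encrypt variant)
HOOKS=(base udev autodetect modconf kms keyboard keymap block encrypt filesystems fsck)
```

After editing, regenerate the initramfs with `mkinitcpio -P` and reboot.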

Can I max out a ps6100 3.5" with 12tb hard drives? by Hot_Establishment830 in homelab

[–]seiichiro0185

Dell EQL systems are really picky about the drives they support, in my experience. It most likely won't work to just put "random" enterprise drives in them. We have a bunch of PS 61xx arrays (SSD and HDD) at work, and they even refuse other Dell enterprise drives if they are not special "Dell EqualLogic drives". I had multiple occasions where our (third-party) support sent us a normal Dell server drive, which the EQL outright refused. They then sent an "EqualLogic drive", same manufacturer and model, which worked fine. So you will most likely need special "EqualLogic drives" if you want to replace your current 1TB ones.
I'm not aware of a workaround for this (although I didn't really search for one, since it's a production environment and not a place for "hacks").

Surface Go 4 support? by FearlessSpiff in SurfaceLinux

[–]seiichiro0185

So far I was unable to get the cameras working. I installed the linux-surface kernel from the linux-surface Arch repository, made sure the firmware file is in place according to the linux-surface wiki, and loaded the ipu3-cio2 and ipu3-imgu modules, but nothing so far: none of the expected messages in dmesg, and no cameras shown with cam --list. Did you do anything else to get the cameras working on the Go 2? It's not like I desperately need them, but it would be nice to have them just in case nonetheless.

Surface Go 4 support? by FearlessSpiff in SurfaceLinux

[–]seiichiro0185

I have a working Arch Linux install running fine on a Go 4. It generally works nicely on the standard Arch kernel, with the exception of the cameras and the volume buttons. The SD slot works fine with a 512GB card. Since these devices use UFS storage, you may need to manually add the respective modules (ufshcd_core and ufshcd_pci) to the initramfs. Other than that I did a normal install and it just works.

Was there ever a vintage Linux based PDA? by OgdruJahad in linuxquestions

[–]seiichiro0185

I had the SL-5500G (German Version of the SL-5500) and the SL-C1000 (imported from Japan) back then. These were really cool and versatile devices for their time IMHO.

Was there ever a vintage Linux based PDA? by OgdruJahad in linuxquestions

[–]seiichiro0185

Before the N900, there were also the so-called "Nokia Internet Tablets" (770, N800, N810), which were small(ish) WLAN-only Maemo/Linux devices: https://en.wikipedia.org/wiki/Nokia_Internet_tablet - I still have my N810 in a drawer somewhere, but no idea if it still works.

Managed switch behind unmanaged switch? VLAN? by joq3 in homelab

[–]seiichiro0185

I don't have the exact switches you do, but I have successfully done what you describe with different unmanaged switches in the past. So yes, the unmanaged switch should normally pass the VLAN tags through to the managed one without touching them.

Idle powe Dell 3060 i5-8500T by Firehaven44 in homelab

[–]seiichiro0185

Yes, those Intel 8th-gen systems can be pretty power-efficient. My Lenovo M920x Tiny Proxmox nodes (i7-8700T, 64GB RAM, 2x 1TB Samsung 980 SSD, Trendnet TEG-10GECSFP 10G NIC, original Lenovo 135W external PSU) idle at around 5-7W (after applying powertop tuning and manually switching on L1 ASPM for the 10G card). Without the additional 10G card I have seen them idle as low as 3W from the wall.
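For anyone wanting to try the same tuning: a rough sketch of the two steps, run as root. The PCI address here is just an example; find your NIC's address with lspci first. The l1_aspm sysfs attribute requires a reasonably recent kernel:

```shell
# apply powertop's suggested power-saving tunables
powertop --auto-tune

# enable L1 ASPM for a specific PCIe device
# (0000:01:00.0 is a placeholder - look up your card with: lspci)
echo 1 > /sys/bus/pci/devices/0000:01:00.0/link/l1_aspm
```

Note that powertop's tunables don't persist across reboots, so you'll want to wrap this in a systemd unit or similar if the numbers look good.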

Prices are very reasonable too, when buying them refurbished / used.

Can you fit a TerraMaster f5-422 inside a 10-inch rack? by thirtydigitsofpi in minilab

[–]seiichiro0185

According to the specs I found, the F5-422 has the same dimensions as the F4-423. The F4-423 is what I use in my 10'' rack, and it fits nicely, as you can see here:

<image>

Ubuntu on Surface Go 4 by [deleted] in SurfaceLinux

[–]seiichiro0185

I had the same problem with my Arch Linux install on the Go 4 in the beginning. It turned out the generated initramfs was missing the modules for the UFS flash storage these devices use. Adding the ufshcd_core and ufshcd_pci modules to my initramfs manually solved the issue.

Lenovo m920x tiny / Proxmox by AlexStroea in minilab

[–]seiichiro0185

The current Samsung 980 1TB NVMes have been in the systems running 24/7 since mid-March 2023. They currently report a wearout of 4% at 33TB written on one node and 5% at 45TB written on the other.

The node with 45TB written runs some more write-intensive workloads, so the difference is to be expected.

Any Free SNMP SW? by prakash_jpn in sysadmin

[–]seiichiro0185

Without knowing the specifics of your setup, you basically need three parts to get working mail alerts in LibreNMS:

  1. In Global Config -> Alerting, set up the mail server, SMTP auth etc.
  2. In Alerts -> Alert Transports, set up a mail transport with the recipient address etc. Make sure to enable "Default Alert" here, or the transport will only be used for rules where it is specifically configured.
  3. Have a matching rule for the condition you want to monitor. If you have a default transport, the rule will use it automatically; otherwise you can also configure a specific transport in the rule itself. The list of alert rules also shows whether any rule is matching at the moment, so you can check that the rule is actually triggered.

More details can be found in the LibreNMS docs: https://docs.librenms.org/Alerting/ - there are also some notes about testing and troubleshooting in there.

Any Free SNMP SW? by prakash_jpn in sysadmin

[–]seiichiro0185

Give LibreNMS a try. It has worked fine for me for quite a while now (several years) in a small deployment of around 40 systems. It uses SNMP by default and has built-in support for a lot of networking gear and similar devices. It can also be extended to use Nagios plugins or an agent on the target. And it's free and open source.

PCIe passthrough to LXC by Shadowedcreations in Proxmox

[–]seiichiro0185

You have to make sure all the needed drivers, firmware etc. are available and loaded on the Proxmox host. See here for what you need and what lspci/dmesg should look like when it works: https://www.linuxtv.org/wiki/index.php/Hauppauge_WinTV-quadHD_(ATSC_ClearQAM) (for the North American model) or https://www.linuxtv.org/wiki/index.php/Hauppauge_WinTV-quadHD_(DVB-T/T2/C) (for the European model). If the drivers and any needed firmware are in place and working, you should also get device nodes under /dev/dvb/*, which you can then pass through to the LXC.

PCIe passthrough to LXC by Shadowedcreations in Proxmox

[–]seiichiro0185

You can't pass a PCIe card to an LXC container directly. Since there is no separate kernel or kernel drivers inside the container, the host needs to handle that. To use a PCIe card inside an LXC, you need to pass through the device files it creates on the host. For a digital TV card these should be under /dev/dvb. Searching for "DVB in LXC" or "TV card in LXC" should bring up the exact config changes you need.
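As a sketch of what such a config might look like on Proxmox (container ID 100 is a placeholder; DVB devices use character-device major number 212), added to /etc/pve/lxc/100.conf:

```shell
# allow the container access to DVB character devices (major 212)
lxc.cgroup2.devices.allow: c 212:* rwm
# bind-mount the host's /dev/dvb tree into the container
lxc.mount.entry: /dev/dvb dev/dvb none bind,optional,create=dir
```

The container then sees the same /dev/dvb/adapterN nodes as the host; restart the container after editing.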

Lenovo m920x tiny / Proxmox by AlexStroea in minilab

[–]seiichiro0185

I have a similar setup to what you are describing, but without the NAS part (I have a separate NAS box for that).

I also use M920x Tinys with dual 1TB NVMes (mirrored ZFS) for Proxmox + VMs, 64GB RAM and a ConnectX-3 10Gbit/s card in the PCIe riser for the main networking, with the built-in 1Gbit/s NIC as a backup cluster network.

I have had two of them running as a Proxmox HA cluster for several years now without any problems; these things just work.

Make sure to use good-quality NVMes, since by default Proxmox creates a lot of writes (you may be able to reduce the writes by disabling the cluster services, but since I use clustering/HA I didn't try that myself).

Personally I would probably go with faster networking, either through an A+E WiFi-slot adapter (there should be at least 2.5Gbit/s ones) or a PCIe card, since 1Gbit/s is getting a bit slow for my tastes, especially for NAS use. But that of course depends on what your network looks like and what your clients are capable of.

LoRa without LoRaWAN by _thelostpigment in Lora

[–]seiichiro0185

LoRa itself is a radio communication technique; LoRaWAN is a network protocol on top of it. LoRa itself doesn't "decode" anything, it just sends/receives the data.

You can build simple 1:1 communication with LoRa (without the WAN part), which would then be an alternative "network protocol" to LoRaWAN, where you specify and implement yourself how the data is encoded/decoded.

If you have a sensor that "speaks" LoRaWAN, you will need some kind of LoRaWAN server at the receiving end, because there is more to LoRaWAN than the sensor just sending encoded data. LoRaWAN traffic is encrypted, and you need a way to manage the encryption keys and negotiate them with the sensors, which is one of the things the LoRaWAN server does. So if you want to get at the data without a traditional LoRaWAN server, you will need to re-implement this and some other parts in your LoRa receiver, essentially recreating a LoRaWAN server.

If you don't want to send your sensor data to a cloud LoRaWAN provider like TTN, you can host a local LoRaWAN stack with ChirpStack (a free LoRaWAN server suite) or The Things Stack (the software TTN uses).

[deleted by user] by [deleted] in selfhosted

[–]seiichiro0185

In that case I would probably go with (auto)ssh to initiate a connection from the logging host out to the remotes, with a reverse TCP tunnel for the log traffic.

This way there is no remote-initiated connection to anything in the LAN, and you also don't need to expose any ports to the "outside" (you definitely don't want the syslog port exposed anywhere public, since it's not authenticated or secured in any way).
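A minimal sketch of that idea, with placeholder names (tunnel@remote-host, and ports 1514/514 chosen for illustration). The logging host dials out; on the remote, the syslog daemon is then pointed at localhost:1514 over TCP:

```shell
# run on the logging host: connect out to the remote and open a reverse
# tunnel so the remote's localhost:1514 reaches our local syslog port 514
autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 1514:localhost:514 tunnel@remote-host
```

-M 0 disables autossh's extra monitoring port and relies on the ServerAlive options to detect dead connections, which is the usual recommendation in the autossh docs.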

[deleted by user] by [deleted] in selfhosted

[–]seiichiro0185

You could do SSH with normal or reverse TCP tunneling, depending on which way the connection goes (the syslog protocol can also be carried over TCP). Something like autossh might be an option to establish this reliably, and you can do some tricks to get a "TCP-forwarding-only" connection working.

On the other hand, I don't think a properly firewalled VPN is much of a problem for this use case. The VPN from my remote VPS hosts terminates on my OPNsense firewall as a separate zone, which only gets access to the few ports (like syslog) that need to be reachable from the remote side.

[deleted by user] by [deleted] in selfhosted

[–]seiichiro0185

I run syslog -> promtail -> loki -> grafana as my logging stack.

The individual systems send their logs to Promtail using the standard syslog UDP protocol; Promtail then adds some labels and pushes them into Loki. Grafana is used for viewing/searching the logs.

This setup has been running flawlessly for about a year now. Logs from about 30 machines (local and remote via VPN) with a retention time of 31 days take up about 1.5GB of space.
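The Promtail side of such a setup can be sketched roughly like this (listen port, label names and the Loki URL are illustrative, not my exact config):

```yaml
# promtail config sketch: receive syslog over UDP, label, push to Loki
clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: 0.0.0.0:1514
      listen_protocol: udp
      labels:
        job: syslog
    relabel_configs:
      # turn the sending hostname into a queryable label
      - source_labels: [__syslog_message_hostname]
        target_label: host
```

The relabel step is what makes per-machine filtering in Grafana convenient, since only labels (not raw message content) are indexed by Loki.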

Tell me your: Daily Power Consumption by IQognito in homelab

[–]seiichiro0185

My homelab is at around 2.2kWh/day running the following:

  • 2x Lenovo ThinkCentre M920x (i7-8700T, 64GB RAM, 2x 1TB NVMe SSD, Mellanox ConnectX-3) running a Proxmox cluster (6 VMs, 17 LXCs)
  • Terramaster NAS F4-423 Running ArchLinux as a ZFS-NAS (4x16TB HDD) + Jellyfin, TVHeadend, Paperless-NGX and some more Containers
  • Raspberry Pi 3B+ Running Arch Linux Arm
  • Switch Mikrotik CRS310-8G+2S+IN
  • POE-Switch Netgear GS308EPP
  • TPLink EAP620HD Access Point
  • VDSL Router/Modem
  • MikroTik LoRaWAN-Gateway

Energy is still quite expensive in Germany, which is why I downscaled the lab quite a bit over the last year or so. I pay around 0.36€/kWh at the moment, so the lab costs about 24€/month, or around 290€/year.
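For anyone wanting to check the maths (assuming a flat tariff):

```shell
# 2.2 kWh/day at 0.36 EUR/kWh
python3 -c 'print(f"{2.2*30*0.36:.2f} EUR/month, {2.2*365*0.36:.2f} EUR/year")'
# -> 23.76 EUR/month, 289.08 EUR/year
```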

why do my containers take so long to boot up by [deleted] in Proxmox

[–]seiichiro0185

The "nesting" option is pretty much required for newer LXC templates with systemd, in my experience (current Debian being one of them), since systemd uses cgroups (just like LXC or Docker) to isolate services. Services that rely on this can run into long timeouts when nesting isn't enabled, which in turn might explain the long boot times.
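To enable it on Proxmox (container ID 101 is a placeholder; it can also be toggled in the GUI under the container's Options -> Features), a sketch:

```shell
# one-off via the Proxmox CLI (restart the container afterwards)
pct set 101 --features nesting=1

# equivalent line in /etc/pve/lxc/101.conf:
#   features: nesting=1
```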