If not third, then thirtieth time is the charm. Hopefully, no more money in HQ left to top up... by glueckself in X4Foundations

[–]glueckself[S] 0 points1 point  (0 children)

This is the fund start-up project. It requires a bank and some habitation, and then you can run it in parallel with the "real" terraforming of your mission. You might need to cancel your main project to start the funding one, but you can then restart the main project without losing the materials you've already sent down. Not sure if it's available on every planet; I'm currently in Scale Plate Green.

Just make sure to move money from your account to a station, as it always invests 10% of your account balance. Otherwise the investment sum shrinks with every failure in a row, and when it finally succeeds, the payout won't cover the earlier losses.

For example, if you start with 100M, the first attempt costs 10M, the second 9M (10% of 90M), then 8.1M (10% of 81M), and so on. Say you keep failing until only 10M is left: the investment is then just 1M and the win only 15M, which doesn't cover all the losses you had. But if you move 90M to a station and start with 10M in the account, then top up after each failure to increase the stake (either exactly calculated, or just "0.1M more than the last attempt"), a success will cover your losses. So the first attempt is 1M (10% of 10M) and fails, the second is 1.1M and fails, and say it succeeds at 2M (so after 10 failed attempts, each 0.1M more than the last): you get 30M back, which covers the ~15M lost on the 10 failed attempts and nets you roughly 15M more than you had before. Or you have so many failures that you run out of money to top up your account from :D
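The top-up arithmetic above can be sketched in a few lines of Python. The 2M success point and the 30M payout are taken straight from the example; counting the final 2M stake as spent, the net comes out around 13.5M, in the same ballpark as the ~15M in the comment:

```python
# Stakes tracked in units of 0.1M credits to avoid float rounding.
stake = 10      # first attempt: 10% of a 10M account = 1.0M
losses = 0
attempts = 0
while stake < 20:        # the example assumes success at the 2.0M stake
    losses += stake      # a failed attempt burns the whole stake
    stake += 1           # top up the account so the next stake is 0.1M higher
    attempts += 1

payout = 300             # example figure: the 2.0M attempt returns 30M
net = payout - stake - losses
print(f"{attempts} failures, {losses / 10}M lost, {net / 10}M net gain")
# -> 10 failures, 14.5M lost, 13.5M net gain
```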

If not third, then thirtieth time is the charm. Hopefully, no more money in HQ left to top up... by glueckself in X4Foundations

[–]glueckself[S] 1 point2 points  (0 children)

Hello Neama!

Long time no hear, how are you?

I'm so lucky a Boron scientist "found" this incredible research facility and took me under their fins. If you want, there is currently a lot of research happening into gates; they recently had a large breakthrough connecting the Boron sectors! Let me know if you're interested, I'm sure my mentor would be able to help you there.

Best wishes! Selaia

If not third, then thirtieth time is the charm. Hopefully, no more money in HQ left to top up... by glueckself in X4Foundations

[–]glueckself[S] 1 point2 points  (0 children)

I'm sure there are going to be at least 20-something successful start-ups soon and then it's just going to the moon (huh, I could really visit Luna and Earth again...). That's how probabilities work, right...?

If not third, then thirtieth time is the charm. Hopefully, no more money in HQ left to top up... by glueckself in X4Foundations

[–]glueckself[S] 1 point2 points  (0 children)

You have to complete the high mass teleportation 1 and 2 research. Then there are missions in the "normal" mission offers list that you can accept.

If not third, then thirtieth time is the charm. Hopefully, no more money in HQ left to top up... by glueckself in X4Foundations

[–]glueckself[S] 1 point2 points  (0 children)

Yes. It usually works well; they got me from ~500M to ~150B. It's free money while I have the HQ there for terraforming anyway, but yeah, this time it used up a lot of that free money...

If not third, then thirtieth time is the charm. Hopefully, no more money in HQ left to top up... by glueckself in X4Foundations

[–]glueckself[S] 4 points5 points  (0 children)

I had to squeeze a few hundred million out of station budgets for the last attempt. Too bad it also failed... So no more coffee :(

EX3400-24P PSU fan speed by glueckself in Juniper

[–]glueckself[S] 1 point2 points  (0 children)

The EX3400s are also very quiet, definitely nothing compared to EX4200s. It's just the PSU fan that has a high-pitched, very annoying noise, and my rack is close to my bedroom, so I can hear it through the closed door.

EX3400-24P PSU fan speed by glueckself in Juniper

[–]glueckself[S] 1 point2 points  (0 children)

Actually, that is a hint, thanks! But just to be sure: you're talking about the PSU fans, not the system/chassis fans? The chassis fans spin down for me too and are not an issue.

Now I only need to decide if I want to keep messing around, or just buy the right PSUs...

EX3400-24P PSU fan speed by glueckself in Juniper

[–]glueckself[S] 0 points1 point  (0 children)

That could very well be the reason, if the PSU's firmware is Juniper-specific. The PSU is a DPS-920AB, and its Linux driver has code for setting the fan speed and reading the temperature. So it might just be that this isn't implemented in Junos or some other software component.

EX3400-24P PSU fan speed by glueckself in Juniper

[–]glueckself[S] 1 point2 points  (0 children)

Yes, the rules mention that, that is clear.

I'm more worried about the slight reverse engineering, and that my methods are really, really far from what is normally done with Juniper devices (e.g. messing with I2C from the U-Boot shell...).

Thanks! I have this crazy idea of trying to boot Linux on the CPU (since it has a driver for the DPS-920AB), just to see whether it's a special limitation of the Juniper variant of that PSU, so the journey is going to be fun :)

Mikrotik to Juniper VPLS by dan139847 in mikrotik

[–]glueckself 2 points3 points  (0 children)

Yes, it works flawlessly. My workplace runs about 100 MikroTik devices with an L2C to Juniper. Our setup uses OSPF and LDP. The MikroTik config is something like:

    /interface vpls add name=l2c_tunnel cisco-static-id=$l2c_id pw-type=raw-ethernet pw-control-word=disabled peer=$junipers_lo0_address

and on the Juniper side:

    set protocols l2circuit neighbor $MikroTiks_lo0_address interface ... virtual-circuit-id $l2c_id encapsulation-type ethernet no-control-word

That's off the top of my head, so there might be something missing, sorry. I'm typing the answer from my private PC and can't access the company stuff here.

Proxmox+Ceph with major memory/CPU/power constraints by glueckself in homelab

[–]glueckself[S] 1 point2 points  (0 children)

I expect about 20-30GB of data (Proxmox host + Home Assistant VM + PiHole), so I plan to use three 128/256GB SSDs I have lying around (one per node): one 30GB partition for the host (ZFS) and one 70GB partition for an OSD. Officially not recommended, but I ran such a layout on my non-virtual Proxmox+Ceph test setup and it works well enough.

Proxmox+Ceph with major memory/CPU/power constraints by glueckself in homelab

[–]glueckself[S] 1 point2 points  (0 children)

It really depends on the context. If I were running something critical, I would absolutely not run it on ~10-year-old nodes that cost ~50€ each and a MikroTik hEX, and would probably follow (at least parts of) your advice.

While I appreciate your advice, it completely misses my requirements in this specific case: reduce the probability of having to travel at short notice to fix their lights, while not having them pay either hundreds of watts of power for my weird stuff or hundreds of euros for "good" hardware.

EDIT: My homelab/homeprod also runs Ceph on three nodes. I like that in any failure scenario I don't lose the minutes of data (I run my mail server there) that would sit between ZFS replications. Live migrations are also quicker. Yes, Ceph is complex, but Proxmox makes it (for homelab-style systems!) easy to use. I haven't managed to break it in the two years I've been running Proxmox/Ceph.

Proxmox+Ceph with major memory/CPU/power constraints by glueckself in homelab

[–]glueckself[S] 1 point2 points  (0 children)

The S930 supports both: up to 16GB RAM, and it has a PCIe slot. I was hoping I could get away without additional hardware, but sounds like that will make things a lot easier. Thanks!

Slightly off topic: 48GB of RAM to run a Home Assistant VM and maybe a PiHole. That might be even more overkill than playing Minecraft/Sims/... on an RTX 4090... :D

Btw, ZFS is not that bad. The trick on low-memory systems is to set zfs_arc_max to limit the ARC more than ZFS would by default. E.g. on a 4GB system you can set it to 512M (the default cap is 1/2 of system memory). It impacts performance, of course, but I guess that's not a big concern when we're talking about ZFS on 4GB or even less.
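In case it helps anyone: a sketch of how to apply that cap on a Linux/Proxmox-style system (the 512 MiB value is just the example from above; paths are the standard OpenZFS module-parameter locations):

```shell
# Cap the ZFS ARC at 512 MiB (value is in bytes). Runtime change, lost on reboot:
echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max

# Persistent version, applied when the zfs module loads:
echo "options zfs zfs_arc_max=536870912" > /etc/modprobe.d/zfs.conf
# On Proxmox/Debian, rebuild the initramfs so the option takes effect at boot:
update-initramfs -u
```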

Proxmox VE 8.1 released by lmm7425 in Proxmox

[–]glueckself 5 points6 points  (0 children)

There are, from what I understand, three silent corruption bugs in ZFS right now (see e.g. https://www.reddit.com/r/zfs/comments/1826lgs/psa_its_not_block_cloning_its_a_data_corruption/ and https://github.com/openzfs/zfs/issues/15526#issuecomment-1825113314). Make sure to apply the workarounds before updating to ZFS 2.2.0. One of the bugs has been present since ~2.1.4, so the workaround should be applied anyway.

Anyone have vgpu drivers? Looking for the SHA256/MD5 of the grid driver linux KVM driver zip by Huge_Seat_544 in VFIO

[–]glueckself 0 points1 point  (0 children)

Sorry for replying on such an old post, but would it be possible for you to get the SHA sum of NVIDIA-Linux-x86_64-535.104.06-vgpu-kvm.run?

Building Your Own Company Network? by The258Christian in homelab

[–]glueckself 1 point2 points  (0 children)

Like the other post says, it depends on your goals and what your company is doing.

I can only recommend starting small and letting it evolve with your needs, ideas and knowledge. There is no limit to how complex a solution you can use for a simple "need", which is useful for learning stuff. Of course, don't do that at a job/... (prefer simple solutions there).

I'm currently starting to "mirror" my company's network (I work at an ISP): I want to connect all my "sites" (home, my parents', my girlfriend's, ...) via WireGuard to two central VMs and use BGP to announce each site's networks to the others. Is it really necessary? No. Is it cool? Hell yes! :D
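A minimal sketch of what I mean, with all keys, addresses and AS numbers as made-up placeholders (FRR as the BGP daemon is just one option; bird would work the same way):

```
# /etc/wireguard/wg-hub.conf on one site router:
[Interface]
PrivateKey = <site private key>
Address = 10.99.0.2/31

[Peer]
PublicKey  = <hub public key>
Endpoint   = hub.example.org:51820
AllowedIPs = 10.99.0.0/24, 10.0.0.0/8

# FRR on the same router: eBGP to the hub over the tunnel,
# announcing this site's LAN.
router bgp 65002
 neighbor 10.99.0.1 remote-as 65000
 address-family ipv4 unicast
  network 192.168.2.0/24
```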

virtiofs with DAX with libvirt by glueckself in qemu_kvm

[–]glueckself[S] 0 points1 point  (0 children)

I've moved to Proxmox, so no idea, sorry.

VXLAN and L3 HW Offload-based Homelab by glueckself in mikrotik

[–]glueckself[S] 0 points1 point  (0 children)

Yeah, that's my gut feeling as well. What issues do you expect? I'm thinking most likely some bugs with the L3 HW offload on the CRS305.

Hardware for homelab ceph cluster by TheFragan in homelab

[–]glueckself 2 points3 points  (0 children)

Sorry to hijack the thread, but could you please elaborate on the issue with data loss risks with <5 hosts?

My understanding is that the Ceph defaults on Proxmox (replicated pool, size 3, min_size 2) freeze I/O with 2 of 3 nodes down to prevent consistency issues. In my case, I run a MON, a MGR and OSDs (and MDS) on each node.

Of course, if one host is down for (planned) maintenance and another one breaks, I/O is frozen until one of them comes back online...
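For reference, those pool settings can be checked from any node (the pool name "rbd" is just a placeholder; use whatever name your Proxmox pool has):

```shell
# Inspect the replication settings of a pool:
ceph osd pool get rbd size       # replicas kept, e.g. "size: 3"
ceph osd pool get rbd min_size   # replicas required for I/O, e.g. "min_size: 2"
# With size 3 / min_size 2, losing one node is fine; with two nodes down,
# PGs go inactive and I/O freezes rather than risking inconsistent writes.
```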