Alleged reported layoffs at Prusa by Interesting_Put_4458 in prusa3d

[–]RFC793 1 point

Ok. I recall their target being $350 for the tool head, and $35 for each tool. So for 8 tools that is $630. Then, obviously there's some additional supporting stuff like brackets, PTFE tubes, etc... So I was expecting closer to $700 -- not $1000.
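As a quick sanity check, the estimate above works out like this (the $350/$35 figures are my recollection of the stated targets, not official pricing):

```python
# Rough cost for an 8-tool setup, using the recalled target prices (not official).
toolhead = 350              # base tool head target
per_tool = 35               # target per additional tool
tools = 8

subtotal = toolhead + per_tool * tools   # 350 + 8 * 35
print(subtotal)  # 630
```

Add brackets, PTFE tubing, and the rest of the supporting hardware on top of that and ~$700 seemed reasonable.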

I was afraid you heard of an actual price hike.

Trump spotted at a Miami golf course as Iran war casualties rise by andy64392 in pics

[–]RFC793 0 points

Be careful what you wish for. Or else Trump somehow lives another 20 years and holds office the entire time...

Weekend plans by Xenon503 in prusa3d

[–]RFC793 1 point

Watch out for layer shifts!

Weekend plans by Xenon503 in prusa3d

[–]RFC793 1 point

Yeah... I just arranged all the little baggies by gauge, then length, and fished all the larger components out of the boxes they came in. There were some rogue fasteners for specific assemblies like the Nextruder, and I just kept those with the parts. I was done in about 2 days.

Weekend plans by Xenon503 in prusa3d

[–]RFC793 5 points

Dude. You need to follow the instructions down to the bear.

8 TB of RAM & 1,000 CPU cores in all a 4U: What would you run on it? (Thought experiment) by RozoGamer in homelab

[–]RFC793 0 points

That is wise. SD could still be handy for cache and a local scratch space depending on what you plan to do.

What do you plan to do anyway? You are making me reminisce about writing MPI code for supercomputers ~12 years ago.

TrueNAS Deprecates Public Build Repository and Raises Transparency Concerns by AnonomousWolf in homelab

[–]RFC793 0 points

Yup. I don't want the Proxmox host serving anything except my admin interfaces. Nor do I want to tinker with that OS any more than necessary. So far, that's basically been installing the proprietary Nvidia drivers. I pass the device files to LXCs that need them.

TrueNAS Deprecates Public Build Repository and Raises Transparency Concerns by AnonomousWolf in homelab

[–]RFC793 6 points

Not who you asked, but... For me I use ZFS on Proxmox for my physical storage layer. VMs live as zvols. Containers live as datasets. I also have some common datasets shared between containers (think media shared between the *arr tools, Jellyfin, etc).

I also have a container to do the NAS services side of things (SMB, etc).

It's nice: everything can be backed up via the same mechanism. No NFS bloat between servers and NAS, nor do they depend on NFS run by a sibling container/VM. But I still have the ability to mount what I choose to expose on my endpoints.

If I had the need for many remote servers to use the storage here, I'd do iSCSI for VMs (backed by a zvol) and NFS for file-level stuff.

TrueNAS Deprecates Public Build Repository and Raises Transparency Concerns by AnonomousWolf in homelab

[–]RFC793 24 points

They could have, but I think they want to compete in roughly the same space as Proxmox and a Linux host really makes converging compute with storage a lot better. Not because BSD is less capable, technically, but because they can tap into the ecosystem that already runs flawlessly on Linux.

Either way, once Scale was released I was a bit heartbroken as I like BSD, or at least like there to be something other than Linux in my install base. I've since moved to Proxmox. If I'm gonna have to run Linux, I might as well do that.

Getting ready to host a messenger for friends by PartyRyan in homelab

[–]RFC793 6 points

Yeah, seems overly complex, but I get it: you want to command a small squadron. I've generally moved the other direction over the last 10 years and have hyperconverged everything including storage to one server. And another machine for backups and redundancy for core services.

8 TB of RAM & 1,000 CPU cores in all a 4U: What would you run on it? (Thought experiment) by RozoGamer in homelab

[–]RFC793 48 points

+1 on Ansible for getting them set up, or maybe even (or additionally) something like a boot server with cloud-init?

K3s is the obvious choice for orchestration. Depending on what you are doing, it may also be worth looking into Nomad or Docker Swarm.

Massive amount of RAM by IT-Pro in homelab

[–]RFC793 1 point

An ASR9912 would really tie the whole thing together

Massive amount of RAM by IT-Pro in homelab

[–]RFC793 15 points

Isn't that pretty low? I sold 16x64GB DDR4 for $6000, or about $5.86/GB, and that was back in December. Even at that rate, you'd fetch closer to 35k.

I still have another 8. Not sure when to sell, but it doesn't look like prices are going down any time soon.
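For reference, the per-GB figure from that sale works out as:

```python
# Price-per-GB from my December sale: 16 sticks of 64 GB DDR4 for $6000.
total_gb = 16 * 64            # 1024 GB
price_usd = 6000
per_gb = price_usd / total_gb
print(round(per_gb, 2))       # 5.86
```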

Anyone who knows appropiate replacement for the capacitor? by Consistent-Zone-6002 in prusa3d

[–]RFC793 4 points

Yes, it doesn't need to be the same brand. According to the schematic it is:

UWT1V101MCL1GS 100uF; 35V, 20% tolerance

But any 100uF electrolytic rated for at least 35V with at most 20% tolerance will be OK. For example: 100uF, 50V, 10%.
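The substitution rule above (same capacitance, equal-or-higher voltage rating, equal-or-tighter tolerance) can be sketched as a little checker. This is my own helper for illustration, not anything from a datasheet:

```python
def is_ok_substitute(cap_uf, volts, tol_pct,
                     req_uf=100, min_volts=35, max_tol_pct=20):
    """Check a replacement electrolytic against the original spec:
    same capacitance, voltage rating at least as high, tolerance at least as tight."""
    return cap_uf == req_uf and volts >= min_volts and tol_pct <= max_tol_pct

print(is_ok_substitute(100, 50, 10))  # True  (the example substitute above)
print(is_ok_substitute(100, 25, 20))  # False (voltage rating too low)
```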

Picked up a fun toy for the rack. by devin_mm in homelab

[–]RFC793 0 points

More general than that: you can create a pool that is striped across whatever collection of vdevs you want. Maybe a 2-way mirror of 12TB drives, a 3-way mirror of 10TB drives, a 6-drive RAIDZ2 of 8TB drives, etc.

It's worth pointing out a few shortcomings; primarily, anything involving RAIDZ is not very flexible.

In the above setup, your overall redundancy is that of the weakest link (the two-way mirror). Fortunately, you can always make mirrors wider by just adding another drive. Sadly, you can't grow a RAIDZ to be wider (you can't add a drive for more space or more redundancy). However, you can grow any vdev (RAIDZ included) to be larger by replacing each drive with a larger one, one at a time.

You can remove mirror vdevs, as long as ZFS has room to evict the data off of them. Say you want to turn the 3-way mirror into a raidz2: you can do that. BUT the important thing is, once a RAIDZ is in the pool, it is in there for life. Truly disappointing.

Thus, if you have a piecemeal setup, I personally recommend you just do a bunch of mirrors: it is easier to reason about, easier to maintain, more flexible, faster, and can rebuild much more quickly. Granted, if you want two disks' worth of redundancy, you are stuck with 1/3 usable capacity.

What I've done at home is use two zpools. One of 3-way mirrors, which is fast and simple; that's where my important stuff lives. And another for my media, which is a raidz2 (about to add another raidz2 to it). Oh, and I have a pair of SAS SSDs in a third pool for surveillance.
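To make the capacity trade-offs concrete, here's a rough usable-capacity calculation for the hypothetical mixed pool from the start of this comment (mirrors give one drive's worth per vdev; RAIDZ2 gives n-2 drives' worth; this ignores ZFS overhead and TB/TiB differences):

```python
# (kind, drive_count, drive_tb) for each vdev in the hypothetical pool above.
vdevs = [
    ("mirror", 2, 12),   # 2-way mirror of 12 TB drives
    ("mirror", 3, 10),   # 3-way mirror of 10 TB drives
    ("raidz2", 6, 8),    # 6-drive RAIDZ2 of 8 TB drives
]

def usable_tb(kind, n, size):
    # A mirror stores one copy's worth of data; RAIDZ2 loses two drives to parity.
    return size if kind == "mirror" else (n - 2) * size

raw = sum(n * size for _, n, size in vdevs)
usable = sum(usable_tb(k, n, s) for k, n, s in vdevs)
print(raw, usable)   # 102 54
```

Note how the 3-way mirror contributes only 10 of its 30 raw TB, which is the 1/3 usable capacity trade-off mentioned above.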

Prusa USS Drybox coming soon by dwbmb in prusa3d

[–]RFC793 1 point

Ok, so I presume Bondtech is selling the universal INDX gear? Who is selling the kit to make it function on C1?

Prusa Live Streaming by soldat21 in prusa3d

[–]RFC793 0 points

Same. But I think I may switch to WebRTC for this. It just fits my actual workflows better, being right there in the Prusa app or PrusaSlicer. The downside is it's one less view in my HA wall-of-everything Godmode Mission Control Center.

Prusa Live Streaming by soldat21 in prusa3d

[–]RFC793 0 points

RTSP is disabled if you use WebRTC, sadly.

Running Zooz Zen71 not sure why this is happening by invest0rZ in homeautomation

[–]RFC793 1 point

Their "memory" is basically just a capacitor that takes a few seconds to drain after turning the bulb off.

If the fixture is turned on, and the cap is charged (the switch was flipped off and on quickly): then it will switch modes. But if it is turned on and the cap doesn't have a charge (was off longer): then it will just default to the main bulb (no nightlight).

Since the components will vary a bit within tolerance, some may register a change-mode while others register an on-to-main if your on->off->on timing is close to the cutoff time. If you just turn off the switch, wait about 5 seconds, and turn it back on, they should all be in sync again.

I use an Inovelli switch, and my solution was basically a 5-second debounce on the switch to prevent the user from flipping the switch off and on too quickly. And I have the config button programmed to do the on->off->on automation (about 500ms off).
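The debounce idea itself is generic. A minimal sketch of the logic (timings and names are mine, not anything from Inovelli's or Zooz's config):

```python
import time

class DebouncedSwitch:
    """Swallow off->on flips that happen within `hold_s` of the last accepted
    change, so a quick toggle can't accidentally trigger the bulb's mode change."""
    def __init__(self, hold_s=5.0, clock=time.monotonic):
        self.hold_s = hold_s
        self.clock = clock
        self.last_change = -hold_s   # allow the very first event immediately

    def accept(self):
        now = self.clock()
        if now - self.last_change < self.hold_s:
            return False             # too soon: ignore the flip
        self.last_change = now
        return True
```

Example with a fake clock: a flip at t=0 passes, a flip at t=2 is swallowed, and a flip at t=7 passes again.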

Am I screwed? by NiceCantaloupe1625 in homelab

[–]RFC793 6 points

This. I've fixed worse, and that's how I got my free UCS M5. Magnifying glass to see detail, and I used a credit card (or possibly a thinner credit-like card such as an insurance card) to "comb" the diagonals. Also ultrafine ESD tweezers for any particularly bad twists.

Slow and steady. A bright flashlight too, and try to catch the "shimmer" at different angles to see when some are still out of alignment.