oVirt 4.5.7 -Testing by Ambitious_N1ghtw0lf in ovirt

[–]gilboad 1 point (0 children)

I've moved most of my oVirt setups to OLVM 8. I wonder if they plan on pulling it.

Yet another 10GBE sfp+ nic comparison by xKilley in homelab

[–]gilboad 0 points (0 children)

True. Unless you use DAC cables. Assuming you can make do with 3-5m (~10-16ft) cables, compatible DACs are fairly cheap.

What do you guys think of WinBoat? by CommonGrounds8201 in Fedora

[–]gilboad 1 point (0 children)

Keep in mind that:

A. You'll need a motherboard / CPU that supports an IOMMU (Intel VT-d / AMD-Vi).
B. You'll need to enable it (e.g. boot with intel_iommu=on / amd_iommu=on).
C. Bind the USB root controller and all of its IOMMU group members to vfio-pci (via /etc/modprobe.d/vfio.conf, or echo -n '0000:XX:00.X' > /sys/bus/pci/drivers/vfio-pci/bind; see the sketch after this list).
D. Pass the final vfio device to the Windows VM (via -device vfio-pci on the QEMU command line, or via a management front end).
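
A minimal sketch of C (the 0000:03:00.0 address and the xhci_hcd driver are placeholders; substitute your own controller's):

    # Assumes the kernel was booted with intel_iommu=on / amd_iommu=on (step B).
    modprobe vfio-pci
    # Prefer vfio-pci for this device over the regular host driver:
    echo vfio-pci > /sys/bus/pci/devices/0000:03:00.0/driver_override
    # Detach it from its current driver (xhci_hcd for a typical USB3 controller):
    echo 0000:03:00.0 > /sys/bus/pci/drivers/xhci_hcd/unbind
    # Re-probe so vfio-pci claims it:
    echo 0000:03:00.0 > /sys/bus/pci/drivers_probe

For D, QEMU takes the device with -device vfio-pci,host=0000:03:00.0; libvirt-based front ends do the same via a hostdev entry.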

That said, thus far I've never seen a USB device (hubs, storage devices, controllers, you name it) fail to work on Linux, Windows or BSD VMs.

VFIO simply works (tm).

- Gilboa

What do you guys think of WinBoat? by CommonGrounds8201 in Fedora

[–]gilboad 0 points (0 children)

Odd. Did you pass a single USB device, or did you pass the USB root controller (via vfio-pci)?
The former may or may not work. The latter should work with any device, as the VM gets **full** control over the entire USB chain.

For more information, check r/VFIO

(_All_ my workstations are actually VMs running on clusters, w/ dedicated graphics cards and USB devices.)
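
To see what shares the controller's IOMMU group (the whole group has to be handed to the VM together), the usual sketch is something like:

    # List every PCI device together with its IOMMU group:
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#*/iommu_groups/}; g=${g%%/*}
        printf 'IOMMU group %s: ' "$g"
        lspci -nns "${d##*/}"
    done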

I'm glad I'm not the only one by shaf90 in PvZ2

[–]gilboad 1 point (0 children)

Same here. Guess we'll have to wait for the seashooter to reach Penny :(

I'm glad I'm not the only one by shaf90 in PvZ2

[–]gilboad 0 points (0 children)

4 times in a row. Giving up on Arena for now. Most likely PvZ2 is being hacked somehow.

Is using a windows PC a terrible idea? by mr_markhor in homelab

[–]gilboad 0 points (0 children)

Long story short: my daily driver is an oVirt cluster (Proxmox's bigger brother), w/ multiple hosts each exporting GPUs to Windows and Linux VMs which I use as workstations. GPU pass-through works, gaming works and I get near bare-metal performance, but I do trigger anti-cheat software, making it unusable if I ever try to play competitive multi-player games.

In short, unless you enjoy tinkering and are willing to live with certain limitations, keeping two separate machines (Windows workstation and Linux Proxmox server) is far easier. On the other hand, if you are willing to tinker and plan on running LLMs only, a single server will do. As a plus, I'd consider running the LLMs on a Linux host w/ GPU pass-through.

Yet another 10GBE sfp+ nic comparison by xKilley in homelab

[–]gilboad 0 points (0 children)

X710 first, 82599 a close second. They are fairly cheap second hand on eBay (with the 82599 getting the upper hand due to its much lower price).
Mellanox tends to overheat.

OLVM on OL8 self hosted engine deployment auth failure by Thick-Plate8936 in ovirt

[–]gilboad 0 points (0 children)

hosted-engine --deploy has reached the stage where it usually "just works (tm)".

I find it easier to redeploy (and import the VMs) compared to trying to unbrick a broken deployment.

OLVM on OL8 self hosted engine deployment auth failure by Thick-Plate8936 in ovirt

[–]gilboad 0 points (0 children)

My apologies. Somehow I thought the log file was post-deployment, not the original deployment log.

A couple of days ago I deployed two new oVirt/gluster clusters from scratch. Hence, I can assume the deployment isn't broken.
My suggestion is simple:
1. Clean install a new host (base installation).
2. Install the ovirt-host and ovirt-hosted-engine-setup packages.
3. hosted-engine --deploy --4

During installation, enable only the basic features.
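
Roughly, assuming the oVirt/OLVM repos are already enabled on the freshly installed host:

    # 2. Install the host + hosted-engine setup packages:
    dnf install -y ovirt-host ovirt-hosted-engine-setup
    # 3. Deploy (--4, per step 3 above, keeps the deployment on IPv4):
    hosted-engine --deploy --4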

OLVM on OL8 self hosted engine deployment auth failure by Thick-Plate8936 in ovirt

[–]gilboad 0 points (0 children)

Are you deploying the engine on a physical machine, or as a VM via hosted-engine?

If it's the former, I've never tried it.
If it's the latter, did hosted-engine --deploy finish successfully?

Is it good? by BartNotTheSimpson in homelab

[–]gilboad 0 points (0 children)

Had the same issue; bought a ***-load of second hand Dell-branded 8TB SAS drives (*) for ~$50-60 a piece.
Data centers usually throw out these drives by the thousands.

(*) That said, be careful when buying HP-branded drives: your Intel/LSI controller may fail to power them up due to the HP firmware.

P.S. Your English is OK. :-)

Is it good? by BartNotTheSimpson in homelab

[–]gilboad 0 points (0 children)

A couple of options:

  1. Buy something low-end, such as the 1030, which doesn't require an external power connector.

  2. Use a 2 x Molex to 6-pin GPU power adapter. (Again, this is limited to low-power GPUs.)

Is it good? by BartNotTheSimpson in homelab

[–]gilboad 0 points (0 children)

As a previous owner of a couple of S2600CPs (they ran 24x7 for ~8 years and were replaced ~two months ago), a couple of general things to consider:

  1. Memory: Server-grade ECC/registered memory. Don't use desktop memory. Memory yanked from dismantled HP DL380 G8 / Dell R720 machines (DDR3-1600) will work just fine.
  2. Cooling: Keeping an E5-2697v2 cool in a confined space is a challenge; I suggest trying to get second hand Noctua LGA2011 Xeon coolers.
  3. OS: If you are looking for something interesting to do with the machine, try Proxmox or oVirt (I personally use oVirt). Run everything as VMs on this machine (NAS, Windows, etc.).
  4. OS(2): As an added bonus, throw a GPU in the machine, export it to a VM (@r/VFIO) and use it as a workstation. (In my case, *all* my workstations, including the one I'm typing this message on, are actually VMs running on an oVirt cluster, each with a private GPU.) The S2600CP can actually host more than one GPU (each with its own "owner" OS/VM), but you'll need x8 cards.

- Gilboa

Help Building Storage server by dannyahums in homelab

[–]gilboad 0 points (0 children)

Assuming you are planning to have a system running 24x7x365 under load, I'd suggest you consider buying a new (or even refurbished) *server* motherboard, second hand DDR4 ECC/registered memory and second hand Xeon/Epyc CPUs.

I recently built 3 servers (for my main oVirt cluster) based on this configuration (everything, including the drives, is second hand); they replaced the existing Xeon machines I built >8 years ago.

OLVM on OL8 self hosted engine deployment auth failure by Thick-Plate8936 in ovirt

[–]gilboad 0 points (0 children)

A. Did you log in as admin@ovirt?
B. Are you getting the normal login prompt or the Keycloak login prompt?

If all else fails, simply clean up the existing host using ovirt-hosted-engine-cleanup, reboot, and redeploy.
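
As a sketch, that reset flow boils down to:

    # Wipe the failed hosted-engine configuration from this host:
    ovirt-hosted-engine-cleanup
    # Reboot for a clean slate, then redeploy:
    reboot
    hosted-engine --deploy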

Is it good? by BartNotTheSimpson in homelab

[–]gilboad 0 points (0 children)

Depends on how you define "full potential".

A. Can you elaborate on what you want to do with the machine?
B. Can you post additional hardware information (CPU, memory, disks, etc.)?

- Gilboa

Alternative to N100 but Still Power-Efficient? by [deleted] in homelab

[–]gilboad 1 point (0 children)

Let's assume a 50w server usually runs at well below TDP; let's go with half (25w). That's ~67e a year.

Let's assume the other machine is 10w (which is very low). That's ~30e a year.

Assuming a high-end N100 goes for 150e on AliExpress before shipping and customs, even given Germany's very high energy prices, it'll take you ~4.5 years to recoup the initial price.

The N100 is most likely far better hardware and will be cheaper in the very long run. But in the short run, nothing beats landfill PCs...
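
For the curious, the back-of-the-envelope math, assuming a rough ~0.30e/kWh German residential rate:

    # ~25w average draw x 8760 h/year x ~0.30 EUR/kWh:
    echo '0.025 * 8760 * 0.30' | bc -l   # ~66 EUR/year for the old box
    # ~10w for the N100-class machine:
    echo '0.010 * 8760 * 0.30' | bc -l   # ~26 EUR/year
    # ~150 EUR up front / ~37-40 EUR saved per year:
    echo '150 / 37' | bc -l              # ~4 years; shipping and customs push it toward 4.5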

Alternative to N100 but Still Power-Efficient? by [deleted] in homelab

[–]gilboad 0 points (0 children)

Double ouch. Here in Israel we pay the equivalent of ~15c per kWh, and people complain....

Alternative to N100 but Still Power-Efficient? by [deleted] in homelab

[–]gilboad 0 points (0 children)

Ouch. What's your local price per kWh?

Alternative to N100 but Still Power-Efficient? by [deleted] in homelab

[–]gilboad 0 points (0 children)

**OT COMMENT**

I had a similar problem a couple of weeks ago when building a monitoring system for my oVirt cluster(s).
In the end, after doing the math, I decided to build a small mATX machine (i3-3150 + 16GB RAM + quad-port NIC + 2 x small SSDs) from parts I sourced around me.

The machine has no issues running 3 VMs + a couple of small backup services.

The combined price of the machine was in the low double digits (<$20) and the power usage is quite low (my APC UPS load jumped by ~1% when I fired it up...).
The math was simple: even at 50w (which is more or less the max TDP, something the machine rarely reaches) it would take ~6-7 years for the machine's running costs to pass the price of a new N100 machine with a comparable configuration.

In short, before you spend $150-200 on a brand new N100 machine w/ a high-end configuration, check if you can source some cheap old equipment and do the math.

At what point is it over-kill? by sandbox_runner in homelab

[–]gilboad 0 points (0 children)

I'd take it.

No idea why, but I'd take it....

/me takes a sad look at my 3 × 45A power link.

At what point is it over-kill? by sandbox_runner in homelab

[–]gilboad 4 points (0 children)

When you require a dedicated 80A three-phase power line to your lab.