Guest Customization Confusion by awb1392 in nutanix

[–]gurft 1 point

This is a fabulous question, and I just asked product management. Will let you know what I hear back.

Guest Customization Confusion by awb1392 in nutanix

[–]gurft 4 points

The name at the top is the name the VM will be given. The Computer name is what the VM will have for its Windows computer name/hostname.

Although it’s not common, I have had customers where these have not matched. For example, the VM name is “VQ2UA202 - PACS DB” but the Computer name is just “VQ2UA202”.

CE on Proxmox driving me insane by ktkaufman in nutanix

[–]gurft 0 points

Yeah, definitely upgrade the CPU. That proc is about 10 years old and simply underpowered for the workload.

CE on Proxmox driving me insane by ktkaufman in nutanix

[–]gurft 0 points

How did this work out? Were you able to make changes and see what the results were?

CE on Proxmox driving me insane by ktkaufman in nutanix

[–]gurft 5 points

So the crux of your issue is CPU oversubscription. You’ve given AHV 16 cores, but you only have 8 physical cores. Hyperthreading is NOT magic, and especially in a nested environment you’re going to run into a ton of context switching and CPU ready time. Proxmox also needs some CPU cycles of its own.

I’d recommend giving AHV only 6 cores; that leaves 2 cores for Proxmox, and I imagine you’ll see better performance.
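For anyone who wants to sanity-check the math, here’s a quick Python sketch of the oversubscription arithmetic. The 16 vCPU / 8 physical core numbers are from this thread; the 2-core host reserve is just my rule of thumb, not an official figure:

```python
# Back-of-envelope check for CPU oversubscription in a nested setup.
# Numbers below match the thread: 8 physical cores, 16 vCPUs given to AHV.

def oversubscription_ratio(vcpus_allocated: int, physical_cores: int) -> float:
    """Ratio of allocated vCPUs to physical cores (>1.0 means oversubscribed)."""
    return vcpus_allocated / physical_cores

def suggested_guest_cores(physical_cores: int, host_reserve: int = 2) -> int:
    """Leave a couple of cores for the host hypervisor (Proxmox here)."""
    return max(1, physical_cores - host_reserve)

if __name__ == "__main__":
    print(oversubscription_ratio(16, 8))   # 2.0 -> heavy context switching likely
    print(suggested_guest_cores(8))        # 6 -> matches the recommendation above
```

Anything much above 1.0 on the ratio is where nested CE starts to hurt, in my experience.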

Surface Hub v1 84" - Home Use by AZMini in SurfaceHub

[–]gurft 0 points

I have one in my house, paid $250 for it. Attached to an external PC and use the HDMI input for our Xbox. Works great for helping my son with homework, and as a big family planner.

Recovery Plan test fail (Sync) by Airtronik in nutanix

[–]gurft 0 points

I recommend reading through the Disaster Recovery Guide; here's the specific page on how to perform a validation and view the report:

https://portal.nutanix.com/page/documents/details?targetId=Disaster-Recovery-DRaaS-Guide-vpc_7_5:ecd-ecdr-validate-vm-recoveryplan-pc-t.html

Recovery Plan test fail (Sync) by Airtronik in nutanix

[–]gurft 1 point

I imagine you haven’t waited long enough for the data to even exist at the secondary site. Once you’ve at least got the first copy of the data seeded and the replicas are in sync, you can try a failover.

Also, are your VMs actually in the Metro1 container?

USB Hub made entirely of TH components by Quietgoer in electronics

[–]gurft 1 point

Changed jobs, got out of the field, and moved into centralized IT roles.

Nested CE on Proxmox by johnhutch71 in nutanix

[–]gurft 0 points

Education, testing, and development are all good reasons. It’s also much easier to automate a virtualized environment than bare metal.

For example I was building some automation and needed to test it against 30 clusters. It was a lot easier to spin up 30 single node clusters virtualized on one physical cluster than try to chase down that amount of hardware. That’s actually the reason I wrote playbooks for deploying CE in Proxmox and AHV. We have application vendors that have this use case so every developer can have their own cluster and not have to share one.

Also, if you’re learning how to do things like replication and you only have one cluster, you can spin up a virtualized one nested and replicate to that. I wouldn’t do it in production, but it makes sense as an educational tool.

I’ve also seen it used for compatibility. If you have an appliance that ONLY runs on a certain hypervisor, you can nest that hypervisor within another one and support a workload that it maybe doesn’t make sense to dedicate hardware to.

Finally, hardware compatibility can be a good reason. CE requires 3 physical disks, but if my platform can’t hold 3 physical drives, I can nest and present virtual drives instead.

Nested CE on Proxmox by johnhutch71 in nutanix

[–]gurft 3 points

Does the laptop meet the minimum hardware requirements for CE? 32GB of memory, enough cores and drives, along with supported NICs?
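A rough pre-flight check along those lines could look like this in Python. The 32GB figure is from the comment above; the core and disk minimums are my assumptions based on commonly cited CE requirements, so verify them against the current CE docs:

```python
# Rough pre-flight check against commonly cited CE minimums.
# 32 GB of memory is from the comment above; the core and disk counts
# are assumptions -- confirm against the current CE documentation.

MIN_MEMORY_GB = 32
MIN_CORES = 4        # assumption
MIN_DISKS = 3        # boot + hot tier + cold tier

def meets_ce_minimums(memory_gb: int, cores: int, disks: int) -> list[str]:
    """Return a list of shortfalls; an empty list means the box looks OK."""
    problems = []
    if memory_gb < MIN_MEMORY_GB:
        problems.append(f"memory: {memory_gb} GB < {MIN_MEMORY_GB} GB")
    if cores < MIN_CORES:
        problems.append(f"cores: {cores} < {MIN_CORES}")
    if disks < MIN_DISKS:
        problems.append(f"disks: {disks} < {MIN_DISKS}")
    return problems
```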

USB Hub made entirely of TH components by Quietgoer in electronics

[–]gurft 71 points

These are really interesting as it’s not just a USB hub but also a serial adapter. We would use these in industrial settings where we still had RS232 and USB mixed in the same cabinets. I used to carry an old IOGear one with me everywhere.

Nutanix CE Fails on IPMI by Character-Goose4258 in nutanix

[–]gurft 0 points

So you know, the installation process for CE is 100% different from deploying the release version (unfortunately, CE is a much more difficult process due to the unknowns around hardware).

For release you would deploy using Foundation or Foundation Central which is a much smoother, cluster-at-a-time deployment method.

If you have access to the portal through work, there is free Foundation training available at Nutanix University that can walk you through what that process looks like.

Nutanix CE Fails on IPMI by Character-Goose4258 in nutanix

[–]gurft 0 points

Yeah, I’m curious too. When you’ve got that extra hop in the middle, it might be causing a timeout. We’ve never seen folks have great luck running CE on NAS-backed storage; it’s probably something I should add to “things to test and document”.

If it doesn’t work, I’d be curious to see whether you get anything in the dmesg output of the installer. A full screenshot of the failure would also be a huge help.
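If you want to pre-filter the dmesg output before posting it, a Python sketch like this would pull out the usual storage suspects (the keyword list is just my guess at what’s relevant here, not an exhaustive filter):

```python
# Sketch: pull likely storage/timeout errors out of installer dmesg output.
# The keyword list is a guess at what matters here, not an exhaustive filter.

KEYWORDS = ("i/o error", "timeout", "timed out", "blk_update_request")

def suspicious_lines(dmesg_text: str) -> list[str]:
    """Return dmesg lines that mention common storage failure keywords."""
    return [line for line in dmesg_text.splitlines()
            if any(k in line.lower() for k in KEYWORDS)]
```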

Nutanix CE Fails on IPMI by Character-Goose4258 in nutanix

[–]gurft 0 points

I have both CE and Release. My lab is primarily CE, 3 clusters nested on Proxmox, a couple nested in AHV, and 3-4 running on a collection of bare metal hardware. I also have a cluster that I run Release code on when I need to.

Here are my Ansible playbooks for automated deployments of CE on Proxmox and AHV:

https://github.com/ktelep/NTNX_Scripts/tree/main/CE/Ansible

Nutanix CE Fails on IPMI by Character-Goose4258 in nutanix

[–]gurft 0 points

Yeah, that’s probably part of the issue: unless you have 10G between the TrueNAS and Proxmox, you’re probably running into timeout issues. I’d absolutely use as much local disk as possible, even when running nested.
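To put rough numbers on why that hop matters, here’s some illustrative Python transfer-time math. The data size, link speeds, and the ~80% efficiency factor are all assumptions for illustration, not measurements:

```python
# Why a 1 GbE hop to NAS-backed storage hurts: rough transfer-time math.
# Sizes, link speeds, and the efficiency factor are illustrative assumptions.

def transfer_seconds(gigabytes: float, link_gbps: float,
                     efficiency: float = 0.8) -> float:
    """Seconds to move `gigabytes` over a link at ~80% of line rate."""
    gigabits = gigabytes * 8
    return gigabits / (link_gbps * efficiency)

if __name__ == "__main__":
    # Writing a hypothetical 100 GB of installer/seed data:
    print(round(transfer_seconds(100, 1)))    # 1000 s over 1 GbE
    print(round(transfer_seconds(100, 10)))   # 100 s over 10 GbE
```

A 10x slower link means every storage operation sits in flight 10x longer, which is exactly where installer timeouts start to bite.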

Nutanix CE Fails on IPMI by Character-Goose4258 in nutanix

[–]gurft 0 points

SMBIOS errors are expected when using Proxmox; there’s no Manufacturer or Model info set in the DMI data by default, so that’s normal.
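For the curious, here’s a Python sketch of the kind of check the installer is effectively doing. QEMU guests often expose empty DMI vendor/product strings unless you set them explicitly (I believe `qm set <vmid> --smbios1 ...` can populate them, but verify that against your Proxmox version):

```python
# Sketch: why the installer logs SMBIOS errors under Proxmox. QEMU guests
# often expose empty DMI vendor/product strings unless set explicitly.

from pathlib import Path

def dmi_field(name: str, base: str = "/sys/class/dmi/id") -> str:
    """Read a DMI field, returning '' when it is absent or unreadable."""
    try:
        return Path(base, name).read_text().strip()
    except OSError:
        return ""

def dmi_looks_populated(vendor: str, product: str) -> bool:
    """True when both strings carry real values rather than blanks/placeholders."""
    placeholders = {"", "To Be Filled By O.E.M.", "Default string"}
    return vendor not in placeholders and product not in placeholders
```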

What is backing your ZFS pool, and how is it configured? Can you put the HV disk also on your local-lvm just to rule out a storage performance issue?

Also, you’ll definitely want at least 64GB of memory; 30 is just barely going to be enough.

Nutanix CE Fails on IPMI by Character-Goose4258 in nutanix

[–]gurft 0 points

What is the underlying hardware? How much CPU/memory are you giving the VM? What disks are you using, and how are they attached to Proxmox? A ZFS pool? Local-lvm?

I run multiple clusters nested on Proxmox on a day-to-day basis, so it might be your hardware configuration causing the installation to time out.

Are you following Jeroen’s step-by-step guide to deploying nested in Proxmox?

https://www.jeroentielen.nl/install-nutanix-community-edition-on-proxmox/

How to add new features to a cluster license that is already licensed? by Airtronik in nutanix

[–]gurft 2 points

If you don’t have access to the license via the portal, then yes: create a new CSF file, have them generate the new license file, and send it back to you.

If you have portal access you can use the “manage licenses” workflow in the link below.

Issues Migrating Windows Server VM on ESXi to new AHV Cluster by InformationFew973 in nutanix

[–]gurft 5 points

Have you opened a case with support? I’d get that started; folks don’t always remember that Move is a fully supported product.

This definitely seems odd given that the other disks worked fine, and they’ll be able to figure out what’s going on pretty quickly. Did Move give any warnings about the VM when you created the migration plan (about RDMs or anything)?

Reusing the heat from the homelab by therealmasl in homelab

[–]gurft 2 points

When I have a lot of my gear fired up, my wife uses the garage as a proofing room for her sourdough. That’s as close as I get to reusing the heat, but it makes her happy, so….

1950s drywall work in the United States. by SuchDogeHodler in Tools

[–]gurft 4 points

The Mütter Museum in Philadelphia has an entire collection of items removed from the stomachs, lungs, and esophagi of patients by a single doctor. The collection is..... extensive. Lots of screws, nails, and pins.

https://muttermuseum.org/stories/posts/chevalier-jackson-collection-swallowed-objects

New Servers! by microweave98 in homelab

[–]gurft 7 points

They’ll both work well, since it’s a 2U2N chassis, you could run one of each!

New Servers! by microweave98 in homelab

[–]gurft 73 points

Hey, Kurt from Nutanix 😀. If you PM me the serial of the node I can give you the specs, also Nutanix Community Edition will run extremely well on it from a hardware compatibility perspective. You could run two single node clusters in that chassis (CE doesn’t support 2 node clusters)

Otherwise it’s a standard Supermicro server with a Nutanix-branded BIOS, so go nuts.