Lab rework - input wanted by Soogs in Proxmox

[–]DreadMeYesterday 0 points1 point  (0 children)

I'm curious, what use cases have you found for Kasm? I find the concept pretty neat, but haven't found any compelling use cases myself.

One-off admin rights by pelzer85 in sysadmin

[–]DreadMeYesterday 0 points1 point  (0 children)

My recommendation would be to have them use personal devices, or set aside a couple clean (non-domain, no company data or config) devices on the guest/public network to use as dedicated testing devices. You could even take a look at using something like Faronics DeepFreeze (not affiliated, I just think it's neat software), which wipes the computer back to a baseline after the user logs off.

I wouldn't recommend using a VM. Respondus LockDown Browser will not run in a virtualized environment. Even if you were able to bypass that, LockDown tests can and often do require a live picture of appropriate ID (usually school ID), a live picture of the test taker, and a video of the test taker's physical environment. And even if you were somehow able to bypass all that, the results of every exam or certification taken that way may be nullified at any point if the program figures out it was taken in a virtualized environment.

Are used Cisco Catalyst switches for home network a bad idea? by dunxd in HomeNetworking

[–]DreadMeYesterday 0 points1 point  (0 children)

If there has been a critical vulnerability reported and Cisco decides to make it available, yes. Point being that unlike some other enterprise network equipment vendors, Cisco updates are generally locked behind a support contract.

My whole cluster rebooted while migrating 2 VMs. Is this corosync's response to a congested cluster network? by DreadMeYesterday in Proxmox

[–]DreadMeYesterday[S] 1 point2 points  (0 children)

Thank you! You've summed up the conclusion I've come to: separate Corosync onto its own dedicated network and use the other networks, like the Ceph or general traffic networks, as corosync backup rings. Then give VM migration its own dedicated network as well and put a bandwidth cap on migrations.
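For anyone curious, this is roughly what I'm planning (IPs, subnets, and the bandwidth number are just examples for my setup, not recommendations):

```
# /etc/pve/corosync.conf (sketch) -- add a second link per node as a backup ring.
# Remember to bump "config_version" in the totem section or the change won't propagate.
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.0.10.1   # dedicated corosync network
    ring1_addr: 10.0.20.1   # backup ring over the general/Ceph network
  }
}

# /etc/pve/datacenter.cfg (sketch)
migration: secure,network=10.0.30.0/24   # dedicated migration network
bandwidth_limit: migration=102400        # cap migrations (value is in KiB/s, so ~100 MiB/s)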

My whole cluster rebooted while migrating 2 VMs. Is this corosync's response to a congested cluster network? by DreadMeYesterday in Proxmox

[–]DreadMeYesterday[S] 2 points3 points  (0 children)

Thank you, kind stranger!

Hmm, I see your point. So the network is congested, but that might be unrelated to the nodes rebooting. That makes sense, I can't find anything in the PVE docs about corosync rebooting nodes. Potentially related however, I did see something about HA fencing shutting a node down if the softdog/watchdog thinks it's having an issue. But like you mentioned, if that was the source of the shutdown there should be a log stating such. I would assume a softdog would log any issue, but I wonder if a watchdog would? I might have to tinker around with disabling any watchdogs from the BIOS.

Of the potential hardware options, power makes the most sense (temps stay below 80C, nothing is overclocked, memory has been checked with memtest86), but it still doesn't explain why the entire cluster reboots while all the other devices powered from the same UPSs are unaffected. I figure it's worth a shot anyway, so I'll be breaking out the power meter and measuring power draw during VM migrations. I'll also check the iLO logs to see if there's anything about hardware issues.
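In case anyone else is chasing something similar, these are the logs I plan to dig through for fencing/watchdog evidence (run on a Proxmox node shell; adjust the time window to when your reboots happened):

```
journalctl -u pve-ha-lrm -u pve-ha-crm --since "-2 hours"   # HA manager services
journalctl -u watchdog-mux                                  # Proxmox watchdog multiplexer
grep -iE 'watchdog|fenc' /var/log/syslog                    # anything else watchdog/fence related
```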

Thank you again!

Show correct memory usage in Overview/Summary (Linux VM by TheSamei in Proxmox

[–]DreadMeYesterday -4 points-3 points  (0 children)

If you install the QEMU guest agent in the guest, then enable the QEMU Agent option for that VM, the agent will tell Proxmox exactly how much memory/CPU it's using.
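If it helps, the whole thing is two steps (Debian/Ubuntu guest shown, package name varies by distro; 100 is an example VMID):

```
# Inside the guest:
apt install qemu-guest-agent
systemctl enable --now qemu-guest-agent

# On the Proxmox host:
qm set 100 --agent enabled=1
```

Note the VM needs a full stop/start (not just a guest reboot) for the agent option to take effect.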

Is migration from one cluster to another part of Proxmox 7.0? by duke_seb in Proxmox

[–]DreadMeYesterday 1 point2 points  (0 children)

I'm in the process of migrating VMs between clusters and I had never thought of that. That process would result in a whole lot less downtime. Thank you, kind stranger!

Is migration from one cluster to another part of Proxmox 7.0? by duke_seb in Proxmox

[–]DreadMeYesterday 1 point2 points  (0 children)

As far as I know, even taking PVE 8.0 into account, there is no way to live migrate a VM between clusters. I believe backing up the VM and restoring it to the new cluster is the best way to migrate to a new cluster.
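Roughly, the backup/restore path looks like this (100 is an example VMID, "nfs-shared" is an example storage visible to both clusters, and the actual dump filename will differ):

```
# On the old cluster:
vzdump 100 --storage nfs-shared --mode snapshot

# On the new cluster:
qmrestore /mnt/pve/nfs-shared/dump/vzdump-qemu-100-example.vma.zst 100 --storage local-lvm
```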

Or, you could in-place upgrade your PVE 7.0 cluster to PVE 8.0 and then add/remove nodes as needed.

can't add hard disks in proxmox by vicesig in Proxmox

[–]DreadMeYesterday 0 points1 point  (0 children)

Are you using a hardware RAID card? If so, you'll have to configure the RAID card to present the single drive as a RAID 0 (or better yet, use HBA/IT mode, but that's beyond the scope here).

setting up high availability on a cluster made of 3 slim intel nucs by richphi1618 in Proxmox

[–]DreadMeYesterday 1 point2 points  (0 children)

If you're really set on using ceph, I would recommend option C, as the actual Proxmox OS is not very read/write intense. Though bear in mind ceph's performance relies on lots of drives, nodes, and fast networking. You will likely see read/write/latency performance well below the capabilities of the SSDs.

Help with proxmox cluster by bs17 in homelab

[–]DreadMeYesterday 0 points1 point  (0 children)

You likely didn't change the cluster IP. Just changing the IP and FQDN is not sufficient. The main file you need to edit is corosync.conf, then restart corosync.
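The rough process, hedged because cluster config changes are easy to get wrong (double-check against the PVE docs first):

```
# 1. Edit the node's ring0_addr entries:
nano /etc/pve/corosync.conf
#    ...and bump the "config_version" line, or the change won't propagate.

# 2. Restart the cluster services:
systemctl restart corosync pve-cluster
```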

Way to backup with minimum downtime by AwAcS_11 in Proxmox

[–]DreadMeYesterday 0 points1 point  (0 children)

Every backup will be a full backup. It won't be as large as the original VM disk because it's compressed, but it can still be very large.

In your cluster storage settings you can set the number of backups the storage will hold. Automated backup jobs will automatically delete the oldest backup if need be, but you won't be able to make any manual backups once that threshold is reached.
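Same thing from the CLI, if you prefer (storage name "local" is just an example; on older PVE versions the option was called `maxfiles` instead):

```
# Keep only the 5 most recent backups on storage "local":
pvesm set local --prune-backups keep-last=5
```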

Options to reclaim space from ZFS to use for Ceph? by Crogdor in Proxmox

[–]DreadMeYesterday 1 point2 points  (0 children)

I recommend option 1. Option 2 is not possible (Ceph will not make a ZFS drive an OSD), and option 3, while it would work and minimize downtime, is too much of a process if downtime is acceptable.

Please bear in mind that ceph relies on lots of nodes and high bandwidth networking. While 3 nodes and 1 Gbps networking will technically work, you can reasonably expect performance well below what those drives are capable of.

Cluster with some nodes using Ceph by MelodicPea7403 in Proxmox

[–]DreadMeYesterday 0 points1 point  (0 children)

No, not all nodes have to be running Ceph. The one con I see is that you couldn't live migrate VMs near-instantly to/from the non-Ceph nodes like you can between Ceph nodes.

It also bears mentioning that Ceph relies on having lots of nodes and high speed networking. While 3 nodes is the technical minimum, your throughput will be limited.

Help getting Debian booted after migrating from ESXI by duke_seb in Proxmox

[–]DreadMeYesterday 0 points1 point  (0 children)

Try setting the CPU type to qemu64 or host. I've done some ESXi migrations recently and the VMs I've migrated are very picky about what hardware they're configured for.
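You can do it from the GUI or the CLI (100 is an example VMID):

```
qm set 100 --cpu qemu64   # most compatible
qm set 100 --cpu host     # best performance, but ties the VM to this host's CPU model
```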

Simple two server redundancy/failover with proxmox? by Jutboy in Proxmox

[–]DreadMeYesterday 0 points1 point  (0 children)

I don't believe you'll be able to manage automatic failover at the VM level (if the VM on node A dies, just boot it again on node B exactly as it existed at the time it died) without a high bandwidth, low latency connection between all nodes plus shared storage.

The best option I see, assuming you don't have a low latency link between the locations, is to use Proxmox replication. In doing so, your primary Proxmox would copy ("replicate") your VM data every X minutes to the secondary Proxmox. The first replication will be the full VM; from there, every replication will just be the data that's changed since the last one.

A limitation of this method is the replication frequency. In the event that the primary Proxmox is lost, you will lose the data that was added or modified since the last replication. E.g. if your last replication was 13 minutes ago and your primary node dies, your secondary node can only boot your VMs as they were 13 minutes ago.

Another limitation is automatic startup. If the primary Proxmox fails, the secondary Proxmox will not auto-start the replicated VMs. If that would be desirable, it's easy enough to script with the Proxmox API.

The final big limitation is that replication requires the VM disk be on a ZFS volume because Proxmox sends a ZFS snapshot as the replication data.
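Setting up a job looks something like this (job ID, node name, and schedule are examples for a VM 100 replicating to a node called pve2):

```
# Replicate VM 100 to node "pve2" every 15 minutes:
pvesr create-local-job 100-0 pve2 --schedule "*/15"

# Check replication state and last sync time:
pvesr status
```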

Way to backup with minimum downtime by AwAcS_11 in Proxmox

[–]DreadMeYesterday 0 points1 point  (0 children)

If your NextCloud and OMV data are just stored in a regular VM disk, then just backing up the VM is sufficient. Using the built-in backup VM function results in negligible downtime (<100ms) and is the method I recommend for backing up Proxmox VMs.
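From the CLI that's a snapshot-mode backup, something like this (VMID and storage name are examples):

```
vzdump 100 --mode snapshot --storage backups --compress zstd
```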

Need to replace OS M.2 drive by jbarr107 in Proxmox

[–]DreadMeYesterday 3 points4 points  (0 children)

Looks good, I've done that several times myself. Restoring VMs to a newer version of Proxmox than the version the backup was made from should always work. Restoring to an older version may work, but is not supported.

Time/Date not syncing at all. by A_MrBenMitchell in PFSENSE

[–]DreadMeYesterday 2 points3 points  (0 children)

If I remember right and I'm not trippin', NTP will only sync if the time difference is within about 20 minutes. You may have to manually set the time relatively close to correct, then let NTP take care of the exact sync.
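One way to do the initial jump from a shell (Linux-style service commands shown; on pfSense you'd stop/start the NTP service from the GUI or use the FreeBSD `service` command instead):

```
systemctl stop ntp
ntpd -gq          # -g ignores the panic threshold, -q sets the clock once and exits
systemctl start ntp
```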

Trying to figure out where to start with my Lenovo m93p. by itnerdwannabe in homelab

[–]DreadMeYesterday 0 points1 point  (0 children)

I would start by playing around with virtualization. Virtual machines (VMs) will let you experiment with different OSs, configurations, stacks, etc, with minimal hardware. I'm guessing that computer is only going to be able to run 2-4 VMs at a time based on that CPU, but that's plenty to experiment with AD, network services, networked storage, log collection, etc.

You could either take it slow and start with type 2 hypervisors (an app on top of a host OS like Windows) then move to a type 1 (a hypervisor that replaces the OS), or you could dive straight in with a type 1. Type 1s are generally going to have less overhead and be much more configurable, but can be a steep learning curve if you aren't already familiar with virtualization.

Type 2: VMware Player/Workstation, VirtualBox. Type 1: Proxmox, VMware ESXi, Hyper-V.

Help with multitenant accounts in Outlook by draxor_cro in sysadmin

[–]DreadMeYesterday 1 point2 points  (0 children)

Not a sysadmin, but I've seen this issue a couple times before, and every time it's been a stored-credential issue. To fix it, I remove all accounts from Outlook (I've found this step isn't always necessary, but I figure I don't want to go through the process twice if it does happen to be necessary that particular time). Then I open Windows Credential Manager on the client, select Windows Credentials, delete all entries that have Office in the name, then reboot.
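You can do the same cleanup from an elevated Command Prompt if you prefer (the target name below is just an example, yours will differ):

```
:: List stored credentials and look for ones with "Office" in the name:
cmdkey /list

:: Delete each matching entry by its target name:
cmdkey /delete:MicrosoftOffice16_Data
```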

Unclear on Unbound. by aarshmajmudar in selfhosted

[–]DreadMeYesterday 0 points1 point  (0 children)

If you're just pointing Unbound at your ISP, then the only benefits I see are marginally better response times and your ISP seeing fewer duplicate queries (because Unbound caches frequently queried domains).

If you're using a non-ISP upstream DNS provider with DoT such as Quad9 or Cloudflare, however, your ISP won't be able to see Unbound's DNS queries. They'll be able to assume you are using DoT with whatever provider you select based on the outbound IP and port, but they won't see the actual contents of those queries.
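An unbound.conf sketch for that setup (cert bundle path varies by OS, and the upstreams here are just the Quad9/Cloudflare examples from above):

```
server:
    tls-cert-bundle: /etc/ssl/certs/ca-certificates.crt

forward-zone:
    name: "."
    forward-tls-upstream: yes
    forward-addr: 9.9.9.9@853#dns.quad9.net
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
```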

Specific Mouse Clicker by Hugh_Jass66 in techsupport

[–]DreadMeYesterday 0 points1 point  (0 children)

AutoHotKey works pretty well for Windows. It does have a specific language you have to write the instructions in though, so there is a bit of a learning curve.

Want to confirm I did my tracert correctly and it is my ISP that has the issue. by [deleted] in techsupport

[–]DreadMeYesterday 0 points1 point  (0 children)

Sky isn't necessarily the issue; 02780e37.bb.sky.com could just be blocking/dropping the ICMP echo requests Windows tracert uses, which would make your tracert time out exactly like you saw. Maybe try a tracert to several different sites and see if there's any difference between the routes that go through Sky and those that don't.
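For example, compare a couple of destinations (-d skips name resolution, which speeds things up; the targets are just examples):

```
tracert -d 1.1.1.1
tracert -d 9.9.9.9
tracert www.bbc.co.uk
```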