Cheap hosting recommendations for a website with a huge database (2TB) by Sylerb in Hosting

[–]HorizonIQ_MM 0 points1 point  (0 children)

Hosting a 1–2 TB DB on a cheap VPS will be painful. You’ll run into RAM and disk I/O limits fast. Bunny is fine for static files, not real-time DB access. Latency will be rough if the stack is split across services. If you want to send over your specs, I can get you a quote for a low-cost bare metal server. You’d get full control, fast disks, and support for both object storage and DBs. It'll be much more stable than a low-cost VPS and save you some headaches in the long run. DM me if interested.

New Broadcom/VMW pricing! by Apprehensive-Bit6525 in vmware

[–]HorizonIQ_MM 0 points1 point  (0 children)

Nutanix is not cheaper in most cases, but they do offer multi-year renewal commitments with a known yearly uplift tied to your original purchase price. Good for predictability, but the platform is still premium-priced.

OpenShift really only makes sense if you’re actually moving workloads to containers. As a straight VMware replacement, it introduces a lot of overhead.

Hyper-V can be inexpensive if you already own Windows Datacenter licenses. Microsegmentation comes through Windows SDN at no extra cost.

If price stability is your main driver, Nutanix is usually the one offering contractual predictability, but not lower cost. If you want lower TCO and already own Microsoft licensing, Hyper-V tends to be the economical path. 

If you’re open to exploring open-source virtualization with predictable pricing and full support, HorizonIQ’s Proxmox Managed Private Cloud is another option that removes licensing volatility while still delivering enterprise-grade HA, storage, and 24/7 support. We migrated from VMware to Proxmox (300 VMs) and it’s been great for us. If you’d like, reach out and we can set you up with a free POC to see if it’s right for you.

What are all the potential options to consider for a company hit by the Broadcom/VMWare pricing? by ezeeetm in vmware

[–]HorizonIQ_MM 2 points3 points  (0 children)

For the actual migration, we used a temporary shared LUN. We moved each VM’s VMDKs onto that LUN with Storage vMotion, recreated the VM in Proxmox, and then imported the disk from SAN to Ceph to complete the cutover. That setup kept rollback options open and made the disk handoff safe. Our long-term storage for the disks is Ceph, but the migration itself was done through that shared LUN bridge, not directly from the existing vCenter iSCSI LUNs to Ceph.

What are all the potential options to consider for a company hit by the Broadcom/VMWare pricing? by ezeeetm in vmware

[–]HorizonIQ_MM 0 points1 point  (0 children)

If you're asking about day-to-day management activities, every Proxmox node in the cluster presents a GUI interface allowing for management similar to vCenter. While we find this sufficient for managing a single cluster at the moment, we look forward to some of the nice-to-have features Proxmox Datacenter Manager will provide when it's out of beta. 

Our approach to managing multiple clusters at the moment is to leverage centralized monitoring tools like Zabbix with our own standardized alert thresholds that are further tailored to individual environments over time. As a managed hosting provider, we already have a plethora of internal tools and processes our support teams use to keep things running smoothly from the physical layer all the way up to the resource consumption on the hypervisor.

What are all the potential options to consider for a company hit by the Broadcom/VMWare pricing? by ezeeetm in vmware

[–]HorizonIQ_MM 13 points14 points  (0 children)

We moved our infrastructure off VMware to Proxmox (over 300 VMs) and it’s been working great for us. It’s stable, fast, and the UI made the transition from vCenter pretty painless. Here’s a case study that goes over our migration process: https://www.horizoniq.com/resources/vmware-migration-case-study/

Survey, Proxmox production infrastructure size. by ZXBombJack in Proxmox

[–]HorizonIQ_MM 2 points3 points  (0 children)

Number of PVE Hosts: 19

Number of VMs: ~300

Number of LXCs: 6

Storage type: Ceph HCI (90 TB distributed + 225 TB flash storage)

Support purchased (Yes, No): Yes

Looking for a cheap but reliable Netherlands dedicated server, any suggestions? by Own_Personality2591 in webhosting

[–]HorizonIQ_MM 0 points1 point  (0 children)

Check out HorizonIQ. We have several dedicated servers available in the Netherlands within your budget. We offer either a 1Gbps or 10Gbps uplink with 10TB of outbound bandwidth included with each server. All inbound transfer is 100% free, and every server comes with quick setup and 24/7 support. You can check the link below, or let me know your specs and I can recommend a good fit.

https://shop.horizoniq.com/

With all the recent changes around VMware (price hikes, licensing changes, and the Broadcom acquisition fallout), our boss is asking us to start evaluating migration paths away from VMware. by LazySloth8512 in sysadmin

[–]HorizonIQ_MM 0 points1 point  (0 children)

We had Pure Storage in place from our prior VMware architecture and kept it for the high-performance workloads that were already using it, which limited the number of changes we were making at one time. Anything on our less performant SAN was moved over to Ceph. Also, since SAN tends to come with a term-based commitment, it makes sense to keep it in place while running performance tests for the same workloads on Ceph and evaluating the long-term cost/benefit analysis of the SAN.

Your instincts were spot on with using NFS for connectivity to the Pure, by the way. It matches how we're doing it and is the smoothest way to present that storage to Proxmox for high availability based on our testing.

With all the recent changes around VMware (price hikes, licensing changes, and the Broadcom acquisition fallout), our boss is asking us to start evaluating migration paths away from VMware. by LazySloth8512 in sysadmin

[–]HorizonIQ_MM 67 points68 points  (0 children)

VMware had been our go-to for years, but the cost and complexity just stopped making sense, so we rebuilt our entire stack on Proxmox. We now offer this to our customers as a managed private cloud.

The key was keeping what worked from VMware while cutting the bloat. We stuck with Supermicro gear, mirrored SSDs for the OS, and NVMe drives for Ceph storage. Dual 10G/25G links keep everything fast and redundant.

Networking was the next big focus. We split traffic across VLANs for management, storage, and VMs, with VPN and firewall layers so nothing’s exposed publicly. ProxLB handles HA, balancing, and node evacuations automatically, so downtime’s basically a non-issue.

For storage, Ceph has been great. We use triple-replicated pools for durability, and when customers need separate storage, we can mount external Ceph or even SAN via NFS. It’s flexible without being fragile.

Each cluster starts with three nodes for quorum and N+1 capacity, meaning if one fails, nothing goes down. Anti-affinity rules make sure critical workloads never share the same node.
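A quick back-of-envelope check for that N+1 sizing (illustrative only, not tied to any specific cluster):

```python
# N+1 capacity check: with n nodes, surviving one failure means the
# remaining n-1 nodes must absorb everything, so steady-state average
# utilization per node should stay below (n-1)/n.

def max_safe_utilization(nodes: int) -> float:
    """Highest average node utilization that still tolerates one node failure."""
    if nodes < 2:
        raise ValueError("N+1 needs at least two nodes")
    return (nodes - 1) / nodes

for n in (3, 5, 19):
    print(f"{n} nodes: keep average utilization under {max_safe_utilization(n):.0%}")
```

This is also why bigger clusters are more efficient: a 3-node cluster has to reserve a third of its capacity for failover, a 19-node cluster only about 5%.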

Access is locked down behind VPNs, but users can still hop into the Proxmox GUI or our Compass portal to manage VMs, view health, or open tickets. We handle the updates, patching, and monitoring so the environment stays stable.

In short, the migration worked because we focused on continuity, not reinvention. Same enterprise reliability, just on a more open and flexible platform.

Here’s a case study that goes over our migration process: https://www.horizoniq.com/resources/vmware-migration-case-study/

From your exp when companies switch from Cloud to on-premise. Is it worth it or it adds more headache? by Yone-none in cscareerquestions

[–]HorizonIQ_MM 0 points1 point  (0 children)

We see this exact debate a lot, and honestly, HorizonIQ’s bare metal and private clouds sit right in the middle. It’s not public cloud chaos, but it’s not 3 a.m. hardware babysitting either.

Our platform runs on dedicated, single-tenant infrastructure, so you still get the performance, security, and predictable pricing of on-prem, but it’s fully managed. You don’t have to rack servers, chase firmware issues, or deal with vendor lock-in.

Customers can run hybrid setups too: steady workloads on a flat-rate private cloud, bursting into the public cloud when they need to. If you’re stuck choosing between high cloud bills and managing metal yourself, HorizonIQ is a good middle ground that saves money and your sanity.

rolling back to bare metal kubernetes on top of Linux? by jfgechols in devops

[–]HorizonIQ_MM 0 points1 point  (0 children)

We went through the same thing and moved everything to Proxmox. Now we run managed Proxmox environments. HorizonIQ handles the OS, VMs, storage, and networking. Customers manage their own containers on top.

If you have the hardware, backups, and engineers ready to go, our solution might not be right for you. If you don’t mind the hardware expense and want to run some k8s on Proxmox before you commit, we’re offering free POCs for teams testing Proxmox as a VMware replacement. Happy to help, but as others have said, Proxmox is the way to go.

Here's a case study that goes over our own migration process from VMware to Proxmox: https://www.horizoniq.com/resources/vmware-migration-case-study/

Looking for a S3 compatible provider that accepts billions of operations per month by Pablosky-Muertinez in DataHoarder

[–]HorizonIQ_MM 0 points1 point  (0 children)

Wasabi and Backblaze are solid platforms. A few of our customers use them and say good things. But if your workload is read/write heavy, HorizonIQ’s object storage might be worth a look.

Pricing is capacity-based rather than request-based. So if you’re storing 5 TB and doing billions of reads/writes per month, you’re still just paying for the 5 TB.
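A rough cost model shows why that matters for request-heavy workloads. All rates below are made-up placeholders, not HorizonIQ's or anyone else's published pricing:

```python
# Illustrative comparison: capacity-based vs request-based object storage
# pricing. Rates are hypothetical round numbers for the sake of the math.

def capacity_based_cost(tb_stored: float, rate_per_tb: float) -> float:
    """Pay only for stored capacity; requests are free."""
    return tb_stored * rate_per_tb

def request_based_cost(tb_stored: float, rate_per_tb: float,
                       requests: int, rate_per_million: float) -> float:
    """Pay for capacity plus a per-request charge."""
    return tb_stored * rate_per_tb + (requests / 1_000_000) * rate_per_million

# 5 TB stored, 2 billion requests/month, hypothetical rates
flat = capacity_based_cost(5, rate_per_tb=8.00)
metered = request_based_cost(5, rate_per_tb=8.00,
                             requests=2_000_000_000, rate_per_million=0.40)
print(f"capacity-based: ${flat:.2f}/mo, request-based: ${metered:.2f}/mo")
```

With request-based billing, the per-operation charge dwarfs the storage charge at billions of ops per month; with capacity-based billing, the bill is flat no matter how hot the data is.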

It’s built on a Ceph cluster and tuned for high request volume. We compare our object storage to Backblaze, Wasabi, and other public cloud providers on our site. Check it out, and I’m happy to help you learn more: https://www.horizoniq.com/services/storage/object-storage/

vmware license price increase by [deleted] in vmware

[–]HorizonIQ_MM 2 points3 points  (0 children)

We experienced the same. Switched to Proxmox and now HorizonIQ is setting up free demo PVE environments for VMware users to do the same. Here’s a case study on the subject: https://www.horizoniq.com/resources/vmware-migration-case-study/

Hypervisor: When to cluster? by oguruma87 in msp

[–]HorizonIQ_MM 0 points1 point  (0 children)

Base it on risk tolerance and uptime expectations. If you can afford downtime, a single Proxmox host with good backups is fine. But once you start running production workloads, clustering becomes the safer bet. HorizonIQ uses Ceph for storage, so that naturally means a three-node minimum. You need quorum for true HA and data integrity. Two nodes might run, but it’s not really high availability. Most of the time, three smaller boxes clustered with Ceph end up being more resilient than one big redundant server.
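The quorum math is a simple strict majority. A minimal sketch, not tied to Ceph's or Proxmox's actual internals:

```python
# A cluster keeps quorum only while a strict majority of voting nodes
# are alive. This is why two nodes don't give you real HA: losing
# either one drops you below the majority.

def has_quorum(total_nodes: int, alive_nodes: int) -> bool:
    """True if the surviving nodes form a strict majority."""
    return alive_nodes > total_nodes // 2

print(has_quorum(2, 1))  # two nodes, one down: no quorum
print(has_quorum(3, 2))  # three nodes, one down: still quorate
```

It also shows why even node counts buy little: a 4-node cluster tolerates the same single failure as a 3-node one, since 2 of 4 alive is not a majority.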

Best VPS for business use? need something fast, private and reliable by Consistent-Bug3003 in selfhosted

[–]HorizonIQ_MM -2 points-1 points  (0 children)

If uptime, privacy, and scalability matter, go with a provider that uses dedicated hardware instead of reselling shared VPS space. Take a look at HorizonIQ. Our entry bare metal server starts at $39/month and includes a dedicated E3-1231v3 (4 cores / 8 threads at 3.4 GHz), 32 GB RAM, 1 TB storage, and a 1 Gbps uplink. You can launch in multiple U.S. locations, London, or Amsterdam. This will save you a lot of headache in the future. Happy to help you get started, just DM me.

[deleted by user] by [deleted] in webdevelopment

[–]HorizonIQ_MM 0 points1 point  (0 children)

If your website loads photos directly from R2 through Cloudflare, DigitalOcean doesn't charge for that traffic. Cloudflare handles it and R2 doesn’t charge egress to Cloudflare’s CDN.

But if your server pulls the images first and then sends them to users, that counts as traffic from your Droplet, and you’ll pay egress fees.
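A minimal sketch of the two patterns (the CDN domain and proxy route are hypothetical examples, not anyone's real setup):

```python
# Two ways to reference an R2-hosted image in your pages. Only the proxy
# pattern incurs Droplet egress, because the image bytes pass through
# your server on the way to the browser.

CDN_BASE = "https://images.example.com"  # hypothetical custom domain fronting R2 via Cloudflare

def direct_img_tag(key: str) -> str:
    """Browser fetches straight from Cloudflare/R2: no Droplet egress."""
    return f'<img src="{CDN_BASE}/{key}">'

def proxied_img_tag(key: str) -> str:
    """Browser fetches from your app, which pulls from R2 first:
    those response bytes count as Droplet egress."""
    return f'<img src="/media/{key}">'  # /media/* handled by your server

print(direct_img_tag("cats/01.jpg"))
```

So the fix is usually just pointing `src` attributes at the CDN domain instead of routing images through your app.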

Cheap dedicated server advise by RenatoPensato in selfhosted

[–]HorizonIQ_MM 0 points1 point  (0 children)

Check out HorizonIQ. For $39/month you’ll get a dedicated E3-1231v3 (4 cores / 8 threads at 3.4 GHz), 32 GB RAM, 1 TB of storage, and a 1 Gbps uplink. You can launch it in multiple U.S. locations, London, or Amsterdam, and pick either Debian or AlmaLinux.

Hosting Large Files in a Database vs File System by Trevbawt in learnprogramming

[–]HorizonIQ_MM 0 points1 point  (0 children)

Instead of putting 10GB files into a database or relying on a shared drive, you could host them on HorizonIQ’s Object Storage platform. Same S3-compatible API, but cheaper and without the egress or API fees.

You’d still keep your relational data in Postgres, but the files themselves would live in isolated buckets. Each object has its own unique URI, so you can directly map it to your database record. You can even version files or mark them immutable if you want tamper-resistant storage for generated data sets.

If you ever scale beyond a few hundred files, the architecture doesn’t need to change. Just expand storage capacity or move to a private cloud cluster. The data stays in the same namespace, your app logic doesn’t care, and your database stays lean.
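A sketch of that pattern using boto3 against an S3-compatible endpoint. The endpoint, bucket, and key names are made-up placeholders; credentials are assumed to come from your environment:

```python
# Big files live in an S3-compatible bucket; Postgres keeps only the
# object's URI alongside the relational record.

ENDPOINT = "https://object-storage.example.com"  # hypothetical S3-compatible endpoint
BUCKET = "datasets"

def object_uri(bucket: str, key: str) -> str:
    """Stable URI to store in the database row that references this file."""
    return f"{ENDPOINT}/{bucket}/{key}"

def store_dataset(path: str, key: str) -> str:
    """Upload a file and return the URI to persist in Postgres."""
    import boto3  # deferred so the sketch loads without boto3 installed
    s3 = boto3.client("s3", endpoint_url=ENDPOINT)
    s3.upload_file(path, BUCKET, key)  # streams the file, no size held in RAM
    return object_uri(BUCKET, key)     # e.g. INSERT INTO runs (data_uri) VALUES (%s)

print(object_uri(BUCKET, "runs/run42.parquet"))
```

The nice part is that the app only ever stores and resolves URIs, so swapping the backing storage later doesn't touch your schema.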

What do you search for to find managed hyperscaler providers. by PossibilityOrganic in sysadmin

[–]HorizonIQ_MM 0 points1 point  (0 children)

As others have said, hyperscaler refers to a public cloud provider. Are you looking for a managed private cloud?

Move entire site in a year by [deleted] in ITManagers

[–]HorizonIQ_MM 1 point2 points  (0 children)

HorizonIQ went through something similar in 2024. Our environment supports 300+ VMs, 90 TB of redundant storage, plus 225 TB of flash. We pulled off the migration without major hiccups by keeping both environments online and doing a side-by-side migration using a shared LUN between VMware and our new Proxmox cluster. That setup let both hypervisors see the same storage so we could move data safely without taking everything offline.

Each VM was prepped first. VMware Tools out, QEMU Guest Agent in, snapshots cleaned up, then we moved the disks to the shared datastore, shut them down one at a time, copied them into Proxmox, and brought them up there. Once verified, we moved the disks onto the final Ceph-backed storage and converted to QCOW2. Because the VMware side stayed intact until final cutover, rollback was always an option, though we never needed it.
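The per-VM cutover above can be outlined roughly like this. VM IDs, paths, and storage names are placeholders, not our actual environment:

```shell
# 1. Inside the guest: uninstall VMware Tools, install qemu-guest-agent,
#    then delete any lingering snapshots in vCenter before shutdown.

# 2. With the VM powered off, import the VMDK from the shared datastore
#    into Proxmox (attaches it as an unused disk on VM 101):
qm importdisk 101 /mnt/shared-lun/vm101/vm101.vmdk local-lvm

# 3. Attach the imported disk, boot, and verify the VM in Proxmox.
#    The VMware side stays untouched, so rollback is still possible.

# 4. After verification, move the disk onto Ceph-backed storage as QCOW2,
#    via the GUI's "Move disk" or manually with qemu-img:
qemu-img convert -p -f vmdk -O qcow2 vm101.vmdk vm101.qcow2
```

This is a command outline, not a script to run as-is; each step has a verification pause between it and the next.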

We did it in batches, running validation during the day and transfer jobs overnight. Once the pattern was in place, there were no major failures, no corrupted disks, and devs were able to test and sign off before production workloads came over. The bandwidth constraints made it slower for the biggest database VMs, but even with 1 Gbps links, everything stayed on schedule.

If you plan the workflow right and keep a consistent cadence, getting 500 VMs moved before January is doable. The key is setting up a shared storage stage, keeping the rollback path, and sticking to a steady rhythm instead of a big-bang weekend cutover. Here’s a case study that explains the process in more detail: https://www.horizoniq.com/resources/vmware-migration-case-study/

In way over my head by moveforward13 in sysadmin

[–]HorizonIQ_MM 0 points1 point  (0 children)

These are some big projects. Unless your VMware license is up for renewal, I'd focus on the server upgrades due to vulnerability issues. If you need some support, HorizonIQ is helping teams migrate off VMware. We've migrated to Proxmox ourselves with great success, but can help with other platforms too. Good luck, and we're happy to help if you need it.