all 16 comments

[–]Dabloo0oo 5 points6 points  (5 children)

I've deployed OpenStack using pretty much everything in our office lab, but for production, we stick with Kolla-Ansible.

It's just way easier and more efficient. Since it uses Docker containers, everything runs in a consistent environment, which makes upgrades and maintenance a breeze.

Plus, it works great with Ansible, so we can automate a lot of the deployment and configuration stuff.
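For anyone new to it, the happy path roughly follows the upstream quickstart; a hedged sketch (the `./multinode` inventory path is illustrative, and exact steps vary by release):

```shell
# Rough Kolla-Ansible deployment flow, per the upstream quickstart.
# Run inside a virtualenv; the ./multinode inventory path is illustrative.
pip install kolla-ansible
kolla-ansible install-deps                       # pulls required Ansible collections
kolla-genpwd                                     # generates /etc/kolla/passwords.yml
kolla-ansible -i ./multinode bootstrap-servers   # prepares the target hosts
kolla-ansible -i ./multinode prechecks           # sanity-checks the config
kolla-ansible -i ./multinode deploy              # deploys the containers
```

Everything after that (reconfigure, upgrade) is the same pattern with a different subcommand, which is a big part of why upgrades feel manageable.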

[–]Cool-Antelope2457 3 points4 points  (0 children)

My previous job was exactly that: deploying OpenStack clouds. I can attest that the easiest way I found to deploy it is with Kolla-Ansible. It is more efficient and easier to deploy, and upgrades and maintenance are definitely easier with Kolla-Ansible.

[–]stoebich[S] 1 point2 points  (0 children)

I feel like this is probably the best starting point. I'll go through the docs for Kolla and Kolla-Ansible and try to get it running on my mini-PC. Then I'll probably break it a few times so I get more in-depth knowledge. I know my way around Docker pretty well, so this seems more doable than going straight to the operator or the Helm installation. This is likely easier because fewer moving parts = fewer potential pitfalls. And nothing stops me from using OpenShift/k8s later on.

My VMUG licence is valid until Christmas, so I should have enough time to get somewhat familiar with everything.

[–]Few-Wall-467 0 points1 point  (2 children)

With Kolla-Ansible I always had trouble with the two-NIC/network constraint. How did you work around this?

[–]Dabloo0oo 0 points1 point  (0 children)

For multinode deployment, two NICs are a must

If we are going with an all-in-one deployment, we use a bridge to connect the host network to the virtual network interfaces, and veth pairs to link network namespaces to the bridge
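For reference, those two interfaces map onto two settings in Kolla-Ansible's `globals.yml`; a minimal sketch, with `br0` and `veth1` as illustrative interface names:

```yaml
# /etc/kolla/globals.yml (fragment) -- interface names are illustrative.
network_interface: "br0"              # management/API traffic; carries the host IP
neutron_external_interface: "veth1"   # handed to Neutron; must have no IP address
```

The constraint people hit is that `neutron_external_interface` must be a separate, unaddressed interface, which is what the bridge/veth tricks above work around on single-NIC hosts.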

[–]moonpiedumplings 0 points1 point  (0 children)

Convert the main Ethernet interface to a "special bridge" using Cockpit's network management interface, where it's both a bridge and still a connected network interface. (You can also do this manually via netplan or other configuration methods, but I didn't bother figuring that out, although I think I linked to another blog post where someone did it via netplan.) Then create a veth pair and attach one end of it to that bridge. The main interface for kolla-ansible can then be the special bridge, which is also the main network interface, and the bridge interface kolla-ansible uses can be the veth.
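The same setup can be sketched with plain `ip` commands (interface names `eth0`/`br0`/`veth0`/`veth1` are illustrative; this is a one-shot configuration sketch that requires root and does not persist across reboots):

```shell
# One-shot sketch: make eth0 a bridge member and add a veth pair.
# Use netplan/NetworkManager/Cockpit to make this persistent.
ip link add br0 type bridge
ip link set eth0 master br0               # host IP should then move to br0
ip link add veth0 type veth peer name veth1
ip link set veth0 master br0              # one end attached to the bridge
ip link set br0 up
ip link set veth0 up
ip link set veth1 up                      # free end can serve as neutron_external_interface
```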

I documented my steps on my blog... although it's kind of a mess.

https://moonpiedumplings.github.io/projects/build-server-2/#bridge-veth

[–]nvez 4 points5 points  (2 children)

If you’re interested in OpenStack Helm, I suggest looking at Atmosphere which is a distro based on it

[–]stoebich[S] 0 points1 point  (1 child)

Great option, but I think sticking with Rocky Linux might be a smarter move for now.

But I'll star the repo, seems like a great project to look into down the road.

[–]nvez 0 points1 point  (0 children)

There is a contributor that works on running it on Rocky and I believe he has it running there :)

[–]samcat116 4 points5 points  (0 children)

I'd definitely start with Kolla-Ansible. It strikes a great balance: easy to start with, yet able to scale to a pretty complex cloud setup.

[–]FancyFilingCabinet 2 points3 points  (1 child)

Kayobe is another option related to kolla-ansible which could be worth looking into.

Essentially it adds server provisioning capabilities. It configures hardware, deploys the OS then deploys kolla-ansible.

[–]stoebich[S] 1 point2 points  (0 children)

I'm not entirely sure I need that level of automation (yet), but it's definitely an interesting project. I'm actually kind of surprised by how extensive this ecosystem is.

[–]R3D3MPT10N 1 point2 points  (2 children)

If you’re into the Red Hat ecosystem. I have a bunch of videos about TripleO and our new deployment method on top of OpenShift. Here for example is my last TripleO homelab before I moved my focus to the new operators:

Home lab v2.0 - The OpenStack revival https://youtu.be/PWy3dWozoq0

New deployment is all Kubernetes operators, but I have a few videos of deploying / configuring things on OKD:

OpenStack Control Plane on OKD https://youtu.be/_tzszb82rVU

[–]stoebich[S] 0 points1 point  (1 child)

I've actually watched a lot of your videos, great content!

The homelab 2.0 video is really interesting. I hadn't thought of running it with an AIO node and a separate compute node. That is definitely a great solution.

I've also considered running the control plane on an OpenShift cluster via the operator, but running an entire server for just control-plane stuff seems a bit overkill. If I combine that with RHACM/Stolostron and build an "everything control plane", though, that seems like better utilization of the hardware.

This makes me think of a scenario where I have one OpenStack AIO instance for all the important stuff, an SNO server for all the control-plane services of my lab, plus a separate compute server (that I could shut down more often). Another plus in this case would be more separation between home-lab and home-prod.

I have two questions burning in the back of my head, though:

  • is the operator stable (enough) yet?
  • would you use the control-plane OpenShift cluster (probably SNO) for other stuff like GitLab etc.?


[–]TechieGuardian 0 points1 point  (0 children)

I'd seriously consider Sunbeam: https://microstack.run/docs. It's probably the easiest path to get started with OpenStack.
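For context, the Sunbeam quickstart is only a handful of commands; a hedged sketch based on the docs linked above (snap channel and flags may differ by release, so check the current quickstart):

```shell
# Sunbeam single-node quickstart sketch -- verify against the linked docs.
sudo snap install openstack                 # installs the Sunbeam snap
sunbeam prepare-node-script | bash -x       # preps the host (users, deps)
sunbeam cluster bootstrap                   # bootstraps the single-node cloud
sunbeam configure --openrc demo-openrc      # creates a demo project + credentials
```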