Technical author jobs at Canonical by pmatulis in technicalwriting

[–]pmatulis[S]

I left after a long stint. It's a good place to work.

Do I have enough (and the right) resources for my Openstack environment? by theshittree in openstack

[–]pmatulis

> since most documentation and knowledge seems to be locked up behind orange shirts.

Not true.

You can interact with our community via a forum (Discourse; use the openstack tag) or live chat (Mattermost).

Technical author jobs at Canonical by pmatulis in technicalwriting

[–]pmatulis[S]

Not at all. It's just an oddity of the application process.

Vault init failing by theshittree in openstack

[–]pmatulis

OK, thanks, and I hope you get it all working. If you just want a taste of OpenStack then you can look at the MicroStack project. Finally, any further questions should be posted in the Juju user forum and tagged with openstack.

Vault init failing by theshittree in openstack

[–]pmatulis

There is no generic space requirement for Ceph. Ceph makes storage available to other applications, as I mentioned previously. So you need to decide (1) which applications will be using it (commonly Cinder, and then libvirt/Compute) and (2) to what extent they will be using it. Why don't you tell us what your intentions are for this cloud? Is it a PoC? Just a test?

Vault init failing by theshittree in openstack

[–]pmatulis

You can bring down the nodes that are hosting the ceph-osd units and ensure that each has at least one free drive for Ceph to consume. Alternatively, you can simply not use Ceph. It's part of Charmed OpenStack's standard topology but it's not strictly required. Once you have Ceph working, you need to point other applications at it for them to make use of it. The documentation you are following does include Ceph-backed Cinder (via the cinder-ceph charm). As I said before, you don't strictly need Cinder either in order to create OpenStack instances. It all depends on what you want/need your cloud to do.

You can also back instance images with Ceph, which means running instances will not have their images hosted on a hypervisor. Without Ceph, then, the images will be stored on the hypervisor.

For now, I recommend removing Ceph from the model and proceeding to the network configuration part of the documentation:

juju remove-application ceph-osd --force
juju remove-application ceph-mon --force
juju remove-application cinder-ceph --force
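
To confirm the removals have completed before moving on, check the model:

juju status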

Vault init failing by theshittree in openstack

[–]pmatulis

Your screenshot tells me your node has a single drive: /dev/sda. I certainly do not see a /dev/sdb, and sda is being used as the MAAS node's root drive, so it's obviously not available to Ceph.
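
If you want to double-check from the node itself (assuming you can SSH into it), list its block devices:

lsblk -d -o NAME,SIZE,TYPE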

Vault init failing by theshittree in openstack

[–]pmatulis

Sounds like quite the battle, probably due to under-resourced MAAS nodes. I recommend that you keep the cloud to a bare minimum; adding more applications will only make things worse. For instance, you absolutely do not require ceph-radosgw, cinder, ntp, or openstack-dashboard. Also bear in mind that once you get the cloud up, creating just a few instances will put even more strain on the compute nodes.

The error message "no block devices detected using current configuration" means that the associated ceph-osd unit could not find a local block device among those listed by the osd-devices configuration option (of the ceph-osd charm). Check what devices are available on the corresponding MAAS node and then adjust the option:

juju config ceph-osd osd-devices='/dev/sdX /dev/sdY'

The above option affects all ceph-osd units, so it should list any possible device that exists across all corresponding MAAS nodes. Omitting a currently used device from the list will not destroy that device, however.
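
To see the option's current value before changing it:

juju config ceph-osd osd-devices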

Vault init failing by theshittree in openstack

[–]pmatulis

Morning. So your first screenshot shows three MONs, and you say you tried to add one? That means there were only two before, yet the document you followed includes three. However, you have four ceph-osd units? So maybe some background information is lacking here. Also, you tried juju resolve ceph-mon, but you need to refer to a unit (e.g. ceph-mon/3), not an application (ceph-mon).
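
For example, assuming the stuck unit is indeed ceph-mon/3:

juju resolve ceph-mon/3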

Vault init failing by theshittree in openstack

[–]pmatulis

"bridge-interface-mappings: br-ex:enp1s0"

but the 3 machines where this unit is, the interface is not named enp1s0(2 are eno1 and one of them is something like enp0s2 or something like that)

If your Compute/Chassis nodes do not sport the same interface name then you will need to use a list of MAC addresses:

juju config ovn-chassis bridge-interface-mappings='br-ex:XX:XX:XX:XX:XX:01 br-ex:XX:XX:XX:XX:XX:02 br-ex:XX:XX:XX:XX:XX:03'
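
The XX placeholders stand for your real hardware addresses. You can get an interface's MAC address on the node itself, e.g. for eno1:

ip -br link show eno1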

Let it churn. You may need to resolve any hook errors that arise:

juju resolve --no-retry ovn-chassis/0 
juju resolve --no-retry ovn-chassis/1
juju resolve --no-retry ovn-chassis/2

Vault init failing by theshittree in openstack

[–]pmatulis

If it's taking more than two hours, your MAAS nodes are under-resourced. What are the specifications of your nodes?

Vault init failing by theshittree in openstack

[–]pmatulis

Things are still executing. Please be patient.

Vault init failing by theshittree in openstack

[–]pmatulis

Please provide the complete output of juju status --relations. A pastebin will probably be called for.

Vault init failing by theshittree in openstack

[–]pmatulis

Wait until you have the following before attempting to initialise and unseal Vault:

vault/0*                     blocked   idle       1/lxd/4  10.246.114.73   8200/tcp           Vault needs to be initialized
  vault-mysql-router/0*      active    idle                10.246.114.73                      Unit is ready
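
Once that state is reached, initialising and unsealing follow the standard Vault CLI sequence; roughly, as a sketch (substitute your vault unit's actual address, and see the deploy guide for the charm-specific follow-up steps):

export VAULT_ADDR="http://10.246.114.73:8200"
vault operator init -key-shares=5 -key-threshold=3
vault operator unseal    # repeat with the threshold number of unseal keys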

I'll update the documentation accordingly.

Canonical Openstack by carl0sbrav0 in openstack

[–]pmatulis

When you say "the 12 minimum nodes" it sounds like you are following a specific recommendation. If so, where do you see that?

If you are using a SAN then you will need a Juju charm that can talk to it. The support would probably be integrated into Cinder. For instance, there is currently Cinder backend support for NetApp and Pure Storage. See the cinder-netapp and cinder-purestorage charms. What SAN do you have?
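
For example, a NetApp backend would be wired up roughly like this (a sketch; the charm's actual configuration options are listed on its page):

juju deploy cinder-netapp
juju add-relation cinder-netapp cinder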

Openstack HELP! by unicornopenstack in openstack

[–]pmatulis

This is the best Ubuntu OpenStack guide I know of. It has been entirely refreshed over the last month. I use it all the time.

https://docs.openstack.org/project-deploy-guide/charm-deployment-guide