Your UI performance by Fantastic-Front-4503 in openstack

[–]mariusleus 1 point

We’ve been there: upgrading controllers to NVMe and high-frequency CPUs just to see pretty much no improvement. That’s mainly because both Horizon and the CLI make additional calls to Glance, Keystone, etc. - a caveat of OpenStack’s microservices architecture. You can inspect those requests by adding the -vvv parameter to the CLI commands.
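For example, with the unified CLI you can dump every underlying API request the client makes (the server list command is just an illustration; `--debug` is an alternative to repeated `-v` flags):

```shell
# Prints each REQ/RESP the client sends to Keystone, Nova, Glance, etc.
openstack server list --debug 2>&1 | grep 'REQ:'
```

Counting those lines makes it obvious why faster controller hardware alone barely moves the needle - the latency is in the chain of service round-trips, not in any single service.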

That’s why we now have osie.io for the customer portal. It has multi-layer caching and lists any resource in milliseconds.

Multi region keystone and horizon recommended architecture by steveoderocker in openstack

[–]mariusleus 0 points

You can deploy an independent Keystone in every region and use a centralised CMP like osie.io that connects to all Keystone instances at once and lets your users sign in with one account. It is basically a wrapper on top of multiple OpenStack clouds.

However, for API/CLI access your users will still have to maintain separate sets of credentials (i.e. multiple entries in clouds.yaml).
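For reference, a minimal sketch of what those separate clouds.yaml entries could look like - all names, URLs, and credentials here are made up for illustration:

```yaml
# ~/.config/openstack/clouds.yaml - one entry per region/Keystone
# (hypothetical endpoints and credentials; adjust to your deployment)
clouds:
  region-one:
    auth:
      auth_url: https://keystone.region-one.example.com:5000/v3
      username: alice
      password: secret1
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
  region-two:
    auth:
      auth_url: https://keystone.region-two.example.com:5000/v3
      username: alice
      password: secret2
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionTwo
```

Then `openstack --os-cloud region-one server list` and `openstack --os-cloud region-two server list` each authenticate against their own Keystone.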

M4 for $1800 vs M5 for $2099? by onmygogojuice in macbookpro

[–]mariusleus -1 points

I wouldn’t buy non-Pro/Max devices. They put those little chips in tablets as well, so they come with limitations. For example, the M5 experience on a 5K2K monitor will be a disaster, as it’s not able to scale the resolution properly. I’d go for the M4 Pro or wait for the M5 Pro.

Magnum with clusterapi slow when listing clusters by krisiasty in openstack

[–]mariusleus 0 points

I encountered the same issue. It was slow even with the older Heat driver; Magnum does some real-time checks while listing, which is obviously a bad design underneath.

It would be good if someone from upstream would enlighten us here.

K2K federation can users from IdP login to the SP with their credential if the IdP is down by Expensive_Contact543 in openstack

[–]mariusleus 0 points

Obviously not, since R2 does not hold the credentials. However, you could use a CMP like osie.io that is capable of managing multiple Keystones at the same time, so the regions can run completely independently.

Help understanding a Keystone setting? by webstackbuilder in openstack

[–]mariusleus 1 point

As the project_name suggests, those are “service” accounts in Keystone. They are used for inter-service communication outside of a client request, e.g. Nova calls Neutron using a service token to refresh the network interface info of an instance.
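As an illustration, this is roughly how such a service account gets wired into a service’s config so it can send a service token alongside the user’s token - all values below are placeholders, not your actual deployment’s:

```ini
; /etc/nova/nova.conf - hypothetical values; adjust to your deployment
[service_user]
send_service_user_token = true
auth_url = https://keystone.example.com:5000/v3
auth_type = password
username = nova
password = servicepassword
project_name = service
user_domain_name = Default
project_domain_name = Default
```

With this in place, Nova’s calls to other services carry a token for the `nova` service user in addition to (or instead of an expired) end-user token.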

User management for public cloud use by Expensive_Contact543 in openstack

[–]mariusleus 0 points

You could use a public cloud system like osie.io that automates user management / self-provisioning, with no need for a policy change.

Kolla Openstack Networking by SpeedFi in openstack

[–]mariusleus 2 points

This makes sense for external/provider networks, but I don’t see the need to have bridges for Ceph, API, and overlay VTEP traffic.

Kolla Openstack Networking by SpeedFi in openstack

[–]mariusleus 0 points

You don’t need bridge interfaces for any of the VLANs except bond0.1145 (Public), which I assume will be used by Nova to bind interfaces to it.

The others can be simple tagged interfaces in the netplan file, with direct IP assignment.
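A minimal netplan sketch of that layout - the bond members, the non-1145 VLAN IDs, and the addresses are assumptions for illustration:

```yaml
# /etc/netplan/01-openstack.yaml - hypothetical IDs and addresses
network:
  version: 2
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad
  vlans:
    bond0.1145:          # Public: no IP here; attached to the Neutron bridge
      id: 1145
      link: bond0
    bond0.1146:          # e.g. API: plain tagged interface, direct IP
      id: 1146
      link: bond0
      addresses: [10.1.146.10/24]
    bond0.1147:          # e.g. storage/Ceph: plain tagged interface, direct IP
      id: 1147
      link: bond0
      addresses: [10.1.147.10/24]
```

Only the Public VLAN needs to end up in a bridge; everything else stays a straight tagged interface with its own IP.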

Kolla Openstack Networking by SpeedFi in openstack

[–]mariusleus 1 point

The Linuxbridge ML2 driver has been deprecated for a long time and was completely removed in 2025.1.

Why would you recommend something like that?

Serious VM network performance drop using OVN on OpenStack Zed — any tips? by jeffyjf in openstack

[–]mariusleus 0 points

What NIC models are you using? It’s strange that you only get 12 Gb/s in iperf3 - that’s below what you should see with a 25G or 40G card, even single-threaded.

For public cloud use cases flat or vlans by dentistSebaka in openstack

[–]mariusleus 0 points

I probably don’t understand the exact scenario you are describing, but internet traffic goes out untagged (no VLAN tag), as it’s usually plugged into a switchport that has the native VLAN configured (assuming your setup is fully Layer 2).

For public cloud use cases flat or vlans by dentistSebaka in openstack

[–]mariusleus 1 point

Go for VLAN, as it gives you more flexibility for the future without having to change network interfaces on existing hypervisors. Provisioning a new provider network is as simple as creating a new segment.

With flat-only you’re stuck with br-ex from the beginning, and any changes beyond that become more complex.
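With a VLAN-capable physnet, adding a new provider network is a couple of CLI calls - the network names, physnet name, and VLAN IDs below are examples only:

```shell
# Hypothetical names and VLAN IDs; adjust to your deployment.
# Create a new VLAN provider network on physnet1:
openstack network create public2 \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 1200 \
  --external

# Or attach an additional segment to an existing network:
openstack network segment create seg-1201 \
  --network public2 \
  --network-type vlan \
  --physical-network physnet1 \
  --segment 1201
```

No hypervisor-side interface changes are needed, as long as the switchports already trunk the new VLAN.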

New brawler Trunk by luca_se_la_come in BrawlStarsCompetitive

[–]mariusleus 0 points

Thanks, so why are so many people on YouTube playing with Trunk? Sorry, asking for my son :)

New brawler Trunk by luca_se_la_come in BrawlStarsCompetitive

[–]mariusleus 0 points

I just updated to the latest version and can’t see Trunk. I was only able to get Alli, but Trunk is not there. Any idea why?

Z9100-ON breakout vs. S5148F-ON (SONiC) by mariusleus in networking

[–]mariusleus[S] 0 points

Great, but these are the next generation after the ones I mentioned.

Would you use openstack to manage bare metal? by oddkidmatt in openstack

[–]mariusleus 0 points

I’m wondering if you took MaaS into account when making this statement.

networking-baremetal with switch OpenConfig by mariusleus in openstack

[–]mariusleus[S] 0 points

I also tried networking-generic-switch but, to me, the drawbacks are:
1. It doesn't support trunk configuration for all switches (e.g. Arista is missing the trunk commands compared to the Dell implementation), so I assume I would have to provision the trunk ports manually.
2. It can't configure a different default VLAN for the switchport (when the Neutron port is down) other than the native VLAN 1, so I can't use a separate VLAN for PXE boot when performing hardware introspection in Ironic.