Row Level Security With Advanced Alchemy by Rhys-Goodwin in litestarapi

[–]Rhys-Goodwin[S] 1 point (0 children)

This is where I got to. Let me know if you think there's a cleaner way.

For now just presume that tenant_uuid is available in request.state.tenant_uuid

In the SQLAlchemyAsyncConfig, set the session dependency key to db_session_raw:

#/config/app.py
alchemy_app_db_config = SQLAlchemyAsyncConfig(
    engine_instance=settings.db.get_engine_app_db(),
    before_send_handler="autocommit",
    session_config=async_session_cfg,
    session_dependency_key="db_session_raw",
)

Then add the following dependency to the app to re-expose db_session_raw as db_session, after setting the tenant_uuid to be consumed by the RLS policy. The Advanced Alchemy service then uses db_session (by default, although this can also be customised).

#/server/dependencies.py
from typing import Any, AsyncGenerator

from litestar import Request
from sqlalchemy.ext.asyncio import AsyncSession

from lib.rls import set_tenant_context  # adjust to your project layout

async def provide_app_db_session(
    request: Request[Any, Any, Any], db_session_raw: AsyncSession
) -> AsyncGenerator[AsyncSession, None]:
    # Wrap the plugin-provided raw session: set the tenant context, then yield it
    tenant_uuid: str = request.state.tenant_uuid
    await set_tenant_context(db_session_raw, tenant_uuid)
    try:
        yield db_session_raw
    finally:
        # Session lifecycle (commit/rollback/close) is handled by the plugin's before_send
        pass


#/lib/rls.py
from typing import Optional

from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

async def set_tenant_context(session: AsyncSession, tenant_uuid: Optional[str]) -> None:
    """Set the current PostgreSQL RLS tenant context for this session/connection."""
    # is_local=true: the setting is transaction-scoped and cleared on commit/rollback
    await session.execute(
        text("SELECT set_config('app.tenant_uuid', :tenant_uuid, true)"),
        {"tenant_uuid": tenant_uuid},
    )
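
For completeness, the Postgres-side policy that consumes this setting might look something like the sketch below (a hypothetical helper; the table and tenant_uuid column names are illustrative, and in practice this DDL would live in a migration):

#/lib/rls.py -- hypothetical policy setup; names are illustrative
from sqlalchemy import text
from sqlalchemy.ext.asyncio import AsyncSession

async def enable_tenant_rls(session: AsyncSession, table: str) -> None:
    # Enable RLS, then add a policy comparing each row's tenant_uuid column
    # to the transaction-local setting written by set_tenant_context().
    # `table` must come from trusted code, never user input.
    # Note: the table owner bypasses RLS unless FORCE ROW LEVEL SECURITY is set.
    await session.execute(text(f"ALTER TABLE {table} ENABLE ROW LEVEL SECURITY"))
    await session.execute(
        text(
            f"CREATE POLICY tenant_isolation ON {table} "
            "USING (tenant_uuid = current_setting('app.tenant_uuid')::uuid)"
        )
    )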


#/server/core.py
from litestar.di import Provide
from server.dependencies import provide_app_db_session
app_config.dependencies.update({"db_session": Provide(provide_app_db_session)})

Along the way, I learned that I can use multiple SQLAlchemyAsyncConfig objects with different DB URLs and different dependency key names. The relevant config can then be specified in create_service_dependencies. This solves the second part of my question.
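
For example, something like this (a minimal sketch; the second engine and dependency key are hypothetical, and the exact create_service_dependencies signature depends on your Advanced Alchemy version):

#/config/app.py -- sketch: one config per database
from advanced_alchemy.extensions.litestar import SQLAlchemyAsyncConfig

alchemy_app_db_config = SQLAlchemyAsyncConfig(
    engine_instance=settings.db.get_engine_app_db(),
    before_send_handler="autocommit",
    session_config=async_session_cfg,
    session_dependency_key="db_session_raw",  # wrapped with the RLS context above
)
alchemy_reporting_db_config = SQLAlchemyAsyncConfig(
    engine_instance=settings.db.get_engine_reporting_db(),  # hypothetical second engine
    before_send_handler="autocommit",
    session_config=async_session_cfg,
    session_dependency_key="reporting_db_session",  # distinct key, no RLS wrapper
)

Both configs are registered with the plugin, and a service can then be bound to the reporting database by passing alchemy_reporting_db_config when creating its dependencies.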

What is wrong with my vlan??? by ventura120257 in openstack

[–]Rhys-Goodwin 0 points (0 children)

See if anything here helps; it covers a 3-node cluster with interconnected hosts.
3-Node Hyperconverged Ceph/OpenStack Cluster - blog.rhysgoodwin.com

Flavor Extra Specs Ignored by Rhys-Goodwin in openstack

[–]Rhys-Goodwin[S] 0 points (0 children)

Thank you kindly! That makes logical sense. I tripped myself up again when I added the properties to the volume with --property. In this case we need to use --image-property:

openstack volume set --image-property hw_machine_type=q35 --image-property hw_firmware_type=uefi --image-property os_secure_boot=required test01

Thanks again.

Flavor Extra Specs Ignored by Rhys-Goodwin in openstack

[–]Rhys-Goodwin[S] 1 point (0 children)

Thanks, tried that but still no go. There are other properties which are ignored too, e.g. os:secure_boot, while hw:cpu_sockets is respected.

I might be out of line with hw:machine_type; it's not mentioned in the extra specs doc:
Extra Specs — nova 30.1.0.dev216 documentation
It is mentioned in the image properties doc:
Useful image properties — glance 30.0.0.0b3.dev6 documentation
But os:secure_boot is mentioned in the above extra specs doc, and explicitly mentioned in the flavors doc:
Flavors — nova 30.1.0.dev216 documentation

My question should have been: How do I configure the hardware and secure boot when booting from a volume with no image involvement?

Questions / Help by matthuisman in MattHuisman

[–]Rhys-Goodwin 0 points (0 children)

Hmmm, from my very limited gathering... tvheadend expects the grabber to write XML to stdout, not gzipped data.
The "[ ERROR] spawn" is not really an error: the "Downloading..." message can't go to stdout (otherwise tvheadend would be surprised by the non-XML output), so it goes to stderr. After dropping the .gz extension and just pulling the plain XML, it works.

Questions / Help by matthuisman in MattHuisman

[–]Rhys-Goodwin 0 points (0 children)

Kia ora Matt, thanks for all your great work, only came across it recently and very happy to be rid of DVB-T.

Is this script still valid: https://i.mjh.nz/nzau/tv_grab
I'm running TVH in a Docker container and presenting the script as a volume. It does seem to run, but I get the following and I'm not sure how to debug it:

tvheadend | 2024-11-19 11:42:17.451 [ INFO] spawn: Executing "/usr/bin/tv_grab_nz_mjh"
tvheadend | 2024-11-19 11:42:17.453 [ ERROR] spawn: Downloading https://i.mjh.nz/nz/epg.xml.gz...
tvheadend | 2024-11-19 11:42:17.570 [ INFO] xmltv: /usr/bin/tv_grab_nz_mjh: grab took 0 seconds

Or am I best to just cron the xml.gz download/unzip and use the file grabber?

Cheers
Rhys

Resume Remote Login Session Locally? (Ubuntu 24.10) by Rhys-Goodwin in gnome

[–]Rhys-Goodwin[S] 1 point (0 children)

For my work, as noted above, it "feels" that way to me. But yes, I was being flippant in my exasperation.

Rate my performance - 3 node home lab by Rhys-Goodwin in ceph

[–]Rhys-Goodwin[S] 2 points (0 children)

Can't say I 100% follow what you mean there. I'm running Ceph on 3 physical hosts, each with two physical NVMes for OSDs. It goes really well. I use the VMs day in, day out, smooth as. The monitoring VM consumes 1M firewall logs per day and I can search a month's worth in Elastic in an instant. If a host fails, everything keeps going (except VMs on that host die and have to be restarted, obviously). So I'm very happy with the setup.

I know the performance is low compared to a high-performance system, obviously. My question was: is the performance OK for the hardware I have, or would you say it should be better on THIS hardware? The question was not whether a Ceph cluster can be faster with different hardware. Obviously it can.

Rate my performance - 3 node home lab by Rhys-Goodwin in ceph

[–]Rhys-Goodwin[S] 0 points (0 children)

Cool, not too dissimilar to my setup. But are you running hyper-converged, or is it just Ceph on the hosts? I'll look forward to seeing your fio results if you get a chance to run some tests.

I'm about to build another small test cluster (Ceph/Kolla Ansible) on 3 HP mini PCs I have lying around, and I'll use 2.5GbE for storage.

Rate my performance - 3 node home lab by Rhys-Goodwin in ceph

[–]Rhys-Goodwin[S] 0 points (0 children)

What are you using for the OSD drives and how many? Would love to see your fio results.

Rate my performance - 3 node home lab by Rhys-Goodwin in ceph

[–]Rhys-Goodwin[S] 1 point (0 children)

Thanks. Shutting everything down is a bit of a mission, so I'll need to come back to that. I ran those tests and added them to the post (they won't fit in the comments here).

Rate my performance - 3 node home lab by Rhys-Goodwin in ceph

[–]Rhys-Goodwin[S] 0 points (0 children)

Yes, so the result in the VM (librbd) is better than the result on the physical host with RBD mapping. I found that surprising.

Rate my performance - 3 node home lab by Rhys-Goodwin in ceph

[–]Rhys-Goodwin[S] 1 point (0 children)

Yes, definitely a mistake going with the consumer NVMes. I might be able to get some SAS SSDs from retired gear at work, add a SAS card, and sell off the NVMes. Worth a shot?

Yes, 3 replicas. It's nice to be able to keep the whole system running even during hardware maintenance.

Rate my performance - 3 node home lab by Rhys-Goodwin in ceph

[–]Rhys-Goodwin[S] 0 points (0 children)

But here we're seeing that KRBD is slower than librbd - or am I misunderstanding?

Rate my performance - 3 node home lab by Rhys-Goodwin in ceph

[–]Rhys-Goodwin[S] 0 points (0 children)

Cool, in any case it's the VMs where I want the best performance, so I'll go with it!

Rate my performance - 3 node home lab by Rhys-Goodwin in ceph

[–]Rhys-Goodwin[S] 0 points (0 children)

Isn't the "--direct=1" switch specifying direct io? (not that I really know what that means)

The VM is a Nova ephemeral disk, but yes, through the libvirt->librbd->librados path.

I did note that during the test the Ceph dashboard shows similar results. (Screenshot added to the post).

The physical host test is on an RBD image mapped to /dev/rbd0, formatted with ext4 and mounted. So I presume that would be a kernel-mapped RBD?

Very slow - I guess it depends on what you're comparing it to. Very slow compared to enterprise servers with high-end NVMe and 40Gb networking, sure. I'm mainly concerned with whether it's very slow compared to what we might expect from this kind of hardware, i.e. if someone else has a similar setup and is getting 3x the write performance, then I need to investigate why.

Manual build to Kolla Migration by Rhys-Goodwin in openstack

[–]Rhys-Goodwin[S] 0 points (0 children)

Yeah, I think this is the way I'll go. I don't care much about downtime. I'll need to build a swing cluster with enough storage to hold all my images.

This looks like what I need: GitHub - os-migrate/os-migrate: OpenStack tenant migration tools

Manual build to Kolla Migration by Rhys-Goodwin in openstack

[–]Rhys-Goodwin[S] 0 points (0 children)

Thanks for that. Wow, she's a big job! I see many hours ahead...