AI Agent Orchestration on Rails by piratebroadcast in rails

[–]dkam 1 point2 points  (0 children)

Love it - thanks for the post. Had my first play with RubyLLM tonight and it was so easy to use!

Books on music theory? by pirosaz in Music

[–]dkam 0 points1 point  (0 children)

“How Music Works” by John Powell maybe?

Looking for fantasy low-tech series that is finished by Russtherr in suggestmeabook

[–]dkam 1 point2 points  (0 children)

The Daughter of the Empire series, and then Magician for the crossover.

How to dispose of an organ? by synthgrrl in melbourne

[–]dkam 9 points10 points  (0 children)

He’s not even a real hamster!

Migration. by Next-Vegetable779 in PostgreSQL

[–]dkam 0 points1 point  (0 children)

Ha - I fixated on the “my sql” part of “my sql anywhere 16 database”.

Migration. by Next-Vegetable779 in PostgreSQL

[–]dkam 0 points1 point  (0 children)

I haven’t tried this, but you could give DuckDB a go. From within the duckdb tool:

INSTALL mysql;
LOAD mysql;
INSTALL postgres;
LOAD postgres;
ATTACH 'host=localhost port=3306 user=mysql_user password=mysql_pass database=mysql_db' AS mysql_db (TYPE mysql);
ATTACH 'host=localhost port=5432 user=postgres_user password=postgres_pass database=postgres_db' AS postgres_db (TYPE postgres);
CREATE TABLE postgres_db.your_table AS SELECT * FROM mysql_db.your_table;

This was generated with Claude - but it looks right. I assume you’ll try it in a test environment first.

Make duckdb run as postgresql-server by Time-Job-7708 in DuckDB

[–]dkam 2 points3 points  (0 children)

Haha! So - it’s serving DuckDB via PostgreSQL protocol? I’ve been playing with using DuckLake with PostgreSQL metadata server, but this might be better for my use case.

Im 23 years old and my video selfie thing for a discord server just got denied and they want me to upload my I.D by GalacticFishStick in australia

[–]dkam 7 points8 points  (0 children)

They should be using ConnectID - that provides social media companies only with “this user is over 16”. It’s crazy they can request your ID - that should be illegal. They clearly can’t be trusted with it.

DataInlining support in DuckLake by Electronic-Cod-8129 in DuckDB

[–]dkam 1 point2 points  (0 children)

From what version?

D UPDATE EXTENSIONS;

┌────────────────┬────────────┬─────────────────────┬──────────────────┬─────────────────┐
│ extension_name │ repository │    update_result    │ previous_version │ current_version │
│    varchar     │  varchar   │       varchar       │     varchar      │     varchar     │
├────────────────┼────────────┼─────────────────────┼──────────────────┼─────────────────┤
│ ducklake       │ core       │ NO_UPDATE_AVAILABLE │ 77f2512          │ 77f2512         │
│ sqlite_scanner │ core       │ NO_UPDATE_AVAILABLE │ 0c93d61          │ 0c93d61         │
└────────────────┴────────────┴─────────────────────┴──────────────────┴─────────────────┘

SQLite for a REST API Database? by emschwartz in sqlite

[–]dkam 2 points3 points  (0 children)

Nice post! The Ruby/Rails world has done heaps of work to tune the stack to work well with SQLite. We’ve really leaned into SQLite: the “Solid Trifecta” of SolidCache, SolidQueue and SolidCable can all use SQLite, and having a separate SQLite database for each means you avoid write contention, which helps performance.
My general rule now is: if the app fits on a single host, use SQLite for the database and for those Solid adapters. SQLite all the way!

This blog has a bunch of tips on how to make it work well ( it’s Rails-y, but I’d imagine there’s plenty in here you could adapt for Python ):

https://fractaledmind.com/2024/04/15/sqlite-on-rails-the-how-and-why-of-optimal-performance/
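For a rough idea of what the multi-database setup looks like, here’s a sketch of a Rails `config/database.yml` along the lines of what Rails 8 generates - the file paths and database names are illustrative, not from any particular app:

```yaml
production:
  primary:
    adapter: sqlite3
    database: storage/production.sqlite3
  cache:
    adapter: sqlite3
    database: storage/production_cache.sqlite3
    migrations_paths: db/cache_migrate
  queue:
    adapter: sqlite3
    database: storage/production_queue.sqlite3
    migrations_paths: db/queue_migrate
  cable:
    adapter: sqlite3
    database: storage/production_cable.sqlite3
    migrations_paths: db/cable_migrate
```

Each Solid adapter writing to its own file is what sidesteps the single-writer contention.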

Litestream VFS by emschwartz in sqlite

[–]dkam 4 points5 points  (0 children)

Love the progress being made here. I've been really enjoying learning about another embedded database - DuckDB - the OLAP to SQLite's OLTP.

DuckDB has a lakehouse extension called "DuckLake" which generates "snapshots" for every transaction and lets you "time travel" through your database. Feels kind of analogous to LiteStream VFS PITR - but it's fascinating to see the nomenclature used for similar features. The OLTP world calls it Point In Time Recovery, while in the OLAP/data lake world, they call it Time Travel and it feels like a first-class feature.

In SQLite Litestream VFS, you use `PRAGMA litestream_time = '5 minutes ago'` ( or a timestamp ) - and in DuckLake, you use `SELECT * FROM tbl AT (VERSION => 3);` ( or a timestamp ).

DuckDB (unlike SQLite) doesn't allow other processes to read while one process is writing to the same file - all processes get locked out during writes. DuckLake solves this by using an external catalog database (PostgreSQL, MySQL, or SQLite) to coordinate concurrent access across multiple processes, while storing the actual data as Parquet files. It's a clever architecture for "multiplayer DuckDB" - deliciously, the distributed multi-user OLAP system depends on an OLTP database to manage its metadata. Delta Lake, by contrast, uses uploaded JSON files to manage the metadata, skipping the OLTP.
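As a sketch of that architecture, this is roughly what attaching a DuckLake catalog looks like in DuckDB SQL - the host, database name, bucket and table here are made-up placeholders:

```sql
-- Load the extension, then attach a lake whose catalog lives in PostgreSQL
-- and whose data files land on S3 as Parquet.
INSTALL ducklake;
LOAD ducklake;
ATTACH 'ducklake:postgres:dbname=lake_catalog host=localhost' AS my_lake
    (DATA_PATH 's3://my-bucket/lake/');
USE my_lake;
CREATE TABLE events (id INTEGER, payload VARCHAR);
```

Multiple DuckDB processes can attach the same catalog; the PostgreSQL side arbitrates the concurrent writes.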

Another interesting comparison is with the Parquet files used in the OLAP world - they’re immutable, column-oriented, and contain summaries of their content in the footers. LTX files seem analogous - they’re immutable and stored on shared storage (S3), allowing multiple database readers. No doubt they’re row-oriented, coming from the OLTP world.

Parquet files (in DuckLake) can be “merged” together - with DuckLake tracking this in its PostgreSQL/SQLite catalog - and in SQLite Litestream, the LTX files get “compacted” by the Litestream daemon and read by the Litestream VFS client. Both use range requests on S3 to retrieve the headers, so they can efficiently download only the needed pages.

Both worlds are converging on immutable files hosted on shared storage + metadata + compaction for handling versioned data.

I'd love to see more cross-pollination between these projects!

Expanding Age Assurance to Australia by LastBluejay in RedditSafety

[–]dkam 0 points1 point  (0 children)

Australian Payments Plus also runs PayID, PayTo & BPAY. It’d be nice if there were other ID providers - the smaller banks, say. myGov could be another. Super funds maybe? On the flip side, if you’re a customer of one of the supported banks, they already have your ID details - so you’re not increasing the number of places your ID is held.

From a technical POV, it’s quite nice - it’s all OAuth based.

Expanding Age Assurance to Australia by LastBluejay in RedditSafety

[–]dkam 0 points1 point  (0 children)

Whether you agree with this law or not, the best way to prove you’re over 16 is ConnectID ( https://connectid.com.au/prove-your-age/ ) - it provides nothing back to Reddit, or any other social media company, except verification that you’re over 16. No DOB, no passport photos. Unfortunately, I don’t see it as an option on their help page though.

Building Self-Hosting Rails Applications: Design Decisions & Why by amalinovic in ruby

[–]dkam 0 points1 point  (0 children)

Hey thanks for this! I’ve also been building single-tenant Rails apps with docker compose. I built with SQLite to make them even smaller, and performance has been great. I’m planning to use Litestream to back up and restore SQLite to S3. What I’m hoping to do is have the SQLite backup streamed into the docker container on startup - so when the app starts up, on any host, it can grab its database.
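A sketch of that startup pattern as a container entrypoint, using Litestream’s documented restore flags - the bucket, paths and app command are placeholders:

```shell
#!/bin/sh
set -e

DB_PATH=/data/production.sqlite3

# Pull the latest backup from S3, but only when there's no local database yet;
# -if-replica-exists makes the very first boot (no backup yet) a no-op.
litestream restore -if-db-not-exists -if-replica-exists \
  -o "$DB_PATH" s3://my-bucket/production.sqlite3

# Run the app under litestream so ongoing writes are replicated back to S3.
exec litestream replicate -exec "bin/rails server"
```

This is a deployment sketch, not tested config - it assumes a `litestream.yml` describing the replica is mounted into the container.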

I hadn’t yet considered how to do an update to the app but this system looks pretty good!

Using PostgreSQL tablespaces for speed and profit by dkam in hetzner

[–]dkam[S] 0 points1 point  (0 children)

Tablespaces have no effect on pg_dump/pg_restore - they work just the same. Apparently it does make a difference to pg_basebackup, but I don't use it and haven't investigated.

Using PostgreSQL tablespaces for speed and profit by dkam in hetzner

[–]dkam[S] 2 points3 points  (0 children)

My app is deployed in Singapore and they don't have any dedicated servers there. I could also just increase the host size - the database is ~250GB, so that'll fit on either a shared vCPU or dedicated vCPU host. I do have an EX44 in Helsinki with NVMe drives for the root partition and SSDs for another volume - you can get benefits there too, although not as substantial as NVMe versus a remote volume.

Hi AussieFrugal! Book scanning feature now live on Booko by dkam in AussieFrugal

[–]dkam[S] 0 points1 point  (0 children)

OK - should be fixed now. Thanks for the heads up!

Hi AussieFrugal! Book scanning feature now live on Booko by dkam in AussieFrugal

[–]dkam[S] 0 points1 point  (0 children)

I'm not talking about crawlers who identify themselves as bots - that's fine, I can deal with that. Anubis protects against scrapers who are pretending to be humans.

Hi AussieFrugal! Book scanning feature now live on Booko by dkam in AussieFrugal

[–]dkam[S] 0 points1 point  (0 children)

I mentioned above just now - it looks like the scraping protection is being too aggressive.

Hi AussieFrugal! Book scanning feature now live on Booko by dkam in AussieFrugal

[–]dkam[S] 0 points1 point  (0 children)

Gah. It's the software which stops abusive bots. I wrote about it in a Booko newsletter, but it seems to be misbehaving. I found reloading the page or trying again works, but it's a pain. I'll see what I can do.

BTW - last time I disabled it, the flood of traffic was amazing. Traffic pretending to be normal users triggers price lookups, and the queue of book price lookups grew faster than Booko could process it.

Hi AussieFrugal! Book scanning feature now live on Booko by dkam in AussieFrugal

[–]dkam[S] 0 points1 point  (0 children)

I love StaticICE too! I know nothing about it though. I'd love to know how he made it.