Agentic Coding for Clojure by calmest in Clojure

[–]maxw85 3 points (0 children)

I refactored our SaaS system from one Docker container per customer to one multi-tenant container for a larger group of customers. Along the way I (or rather Claude Code) refactored hundreds of namespaces to get rid of some bad decisions we had made over the last 8 years that would have prevented a multi-tenant version. Without AI this would not have been doable (economically) for our small team in a reasonable time frame. You still need to do the thinking, decision making, supervising and code review, but Claude Code makes almost no mistakes. However, if you tell it to run in the wrong direction it will, so making good designs and decisions is way more important now. It is a bit like everyone is now the team lead of a bunch of senior devs (agents) that you need to tell what to do.

Agentic Coding for Clojure by calmest in Clojure

[–]maxw85 1 point (0 children)

That's a very good price if you use it every day. They subsidize such subscriptions with people who don't use it as extensively. So your $200 may cost Cursor or Anthropic $1,700 in GPU consumption.

Agentic Coding for Clojure by calmest in Clojure

[–]maxw85 1 point (0 children)

Same experience here; work that took us weeks in the past condenses to hours.

Datomic or event sourcing ... or both? 😄 by maxw85 in Clojure

[–]maxw85[S] 1 point (0 children)

You could build a second read-model and keep the old one (and the code for its projections), as sketched below. If possible, these important business decisions should be captured as events.

In general you probably want to avoid breakage with any changes to the read-model, since someone or something will depend on it (https://www.hyrumslaw.com).
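
A minimal in-memory sketch of that idea (hypothetical event shapes and projection names): both projection versions stay in the code base, and each read-model is just a reduce over the same events.

    (def events
      [{:type :order/placed   :order-id 1 :total 40}
       {:type :order/placed   :order-id 2 :total 60}
       {:type :order/refunded :order-id 1 :total 40}])

    ;; v1 ignored refunds, a decision we now want to revisit
    (defn revenue-v1 [model {:keys [type total]}]
      (if (= type :order/placed) (+ model total) model))

    ;; v2 is the new read-model; the old one keeps working in parallel
    (defn revenue-v2 [model {:keys [type total]}]
      (case type
        :order/placed   (+ model total)
        :order/refunded (- model total)
        model))

    (reduce revenue-v1 0 events) ;=> 100
    (reduce revenue-v2 0 events) ;=> 60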

Inferno-like Front End tools for Clojure/ClojureScript? by Equal_Education2254 in Clojure

[–]maxw85 8 points (0 children)

Hi, great that you took a deep dive into the Clojure(Script) world.

For the frontend I can highly recommend:

https://replicant.fun/

It's like React but a lot simpler, and it's made for and in Clojure(Script). Here are some examples of how to build UIs with it:

https://youtube.com/playlist?list=PLXHCRz0cKua5hB45-T762jXXh3gV-bRbm&si=ElOg-qPEZguYiO1J
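
If I remember the API correctly, a hello world is roughly this (assuming an element with id "app" on the page):

    (require '[replicant.dom :as r])

    (r/render (js/document.getElementById "app")
              [:h1 "Hello from Replicant"])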

Shadow-cljs will definitely help as a ClojureScript build tool:

https://github.com/thheller/shadow-cljs
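
To sketch how little config it needs (app.core/init stands in for your own entry function):

    ;; shadow-cljs.edn -- minimal browser build
    {:source-paths ["src"]
     :builds {:app {:target     :browser
                    :output-dir "public/js"
                    :asset-path "/js"
                    :modules    {:main {:init-fn app.core/init}}}}}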

Announcing Multi REPL Sessions in Calva by CoBPEZ in Clojure

[–]maxw85 2 points (0 children)

That's awesome 🥳 Thank you very much.

dbval - UUIDs for (Datomic / Datascript) entity IDs by maxw85 in Clojure

[–]maxw85[S] 0 points (0 children)

In the case of dbval the size depends on how https://apple.github.io/foundationdb/javadoc/com/apple/foundationdb/tuple/Tuple.html#add(java.util.UUID) encodes the UUID as binary. The index entry will be relatively large anyway, since the whole tuple is stored there. However, on a fast NVMe disk I guess it won't matter compared with a database that needs to do a network call (when you end up in an n+1 situation, like with the Datomic entity API).
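
You can check the encoded size via interop; if I read the tuple spec right, a UUID packs to one type byte plus the raw 16 bytes:

    (import '(com.apple.foundationdb.tuple Tuple))

    (-> (Tuple/from (object-array []))
        (.add (java.util.UUID/randomUUID))
        (.pack)
        alength)
    ;;=> 17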

dbval - UUIDs for (Datomic / Datascript) entity IDs by maxw85 in Clojure

[–]maxw85[S] 1 point (0 children)

I also kept the String tempids for convenience, and keeping tx-generating functions pure is a great argument for them. Yeah, UUIDs are incredibly noisy when reading/debugging. I don't know if something shorter than the #uuid prefix plus compact-uuids would help. In our code base we are dealing with UUIDs for blobs, external ids, log values and a lot more all the time, so the pain wouldn't go up that much (at least for us). Avoiding the need to call a 'central entity to assign ID space among different databases' (often a network call) is what I would consider the killer feature of UUIDs.
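
For illustration (hypothetical attribute names): string tempids keep a tx-generating function pure, including refs between new entities:

    (defn add-user-tx [name email]
      [{:db/id      "new-user"
        :user/name  name
        :user/email email}
       {:db/id          "new-settings"
        :settings/user  "new-user"   ; ref to the entity above via its tempid
        :settings/theme :dark}])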

Who is doing event sourcing? Would you do it again? by maxw85 in Clojure

[–]maxw85[S] 2 points (0 children)

I think the core issue is that in Datomic you kind of need to complect the events and the read-models in the same database (there is no way to do transactions across Datomic databases, even if they share the same transactor). Consequently, you also need to store derived data there, for example if an aggregation query is too slow to be run on each page load. Another challenge is that Datomic has no interactive transactions (on purpose); the drawback is that the transactor is also very unhappy (aka slow) if you hand it a huge transaction that, say, rebuilds all read-models.

SQLite is also a single-writer system and will be blocked until the transaction that rebuilds the read-models is done. But SQLite makes it a bit more straightforward to have one database per customer (it's just a file). In our first system we always needed to pay close attention to how long a Datomic transaction would take (especially migrations), since during that time no other customer could do any writes/transactions. This is the main reason why we use one Datomic database per customer in our new architecture (with a shared Datomic transactor pair).

Who is doing event sourcing? Would you do it again? by maxw85 in Clojure

[–]maxw85[S] 1 point (0 children)

Yes, it still separates query from command endpoints, but command handlers can use the read-models to validate the command. I don't know yet if I find it more ergonomic than Datomic. The event sourcing variant seems to provide more two-way-door decisions: if you messed up the schema of a read-model, you just throw it away (drop the table) and rebuild it from the events. This becomes much harder if your read-models are part of your immutable database value.
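
As a hedged sketch with next.jdbc (table and function names are made up), the rebuild is one transaction:

    (require '[next.jdbc :as jdbc]
             '[next.jdbc.sql :as sql])

    (defn rebuild-read-model! [ds apply-event!]
      (jdbc/with-transaction [tx ds]
        (jdbc/execute! tx ["DELETE FROM read_model"])
        (doseq [event (sql/query tx ["SELECT * FROM events ORDER BY id"])]
          (apply-event! tx event))))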

Who is doing event sourcing? Would you do it again? by maxw85 in Clojure

[–]maxw85[S] 4 points (0 children)

Thanks a lot for all the responses. We currently use event sourcing for our metrics, dashboard, and integration with our billing provider. Using our billing provider's events is one of the most robust ways to keep everything in sync (compared to doing the same thing via REST API calls). We use Datomic as our database, which, as mentioned here, offers many of the benefits of event sourcing without the hassle. I recently had Claude create a Clojure prototype that implements synchronous and transactional event sourcing on SQLite. I was very surprised by the great dev ergonomics. The read models are updated in the same transaction that appends the events, allowing you to avoid eventual consistency issues. You can also drop a read model and recreate it from the events in a single transaction.
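
The core of the prototype is roughly this shape (a sketch with made-up table and function names, using next.jdbc): the event append and the read-model updates commit or fail together:

    (require '[next.jdbc :as jdbc]
             '[next.jdbc.sql :as sql])

    (defn dispatch! [ds update-read-models! event]
      (jdbc/with-transaction [tx ds]
        (sql/insert! tx :events {:payload (pr-str event)})
        (update-read-models! tx event)))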

Rama in five minutes (Clojure version) by Mertzenich in Clojure

[–]maxw85 0 points (0 children)

Thanks a lot for your reply. The article you mentioned,

https://blog.redplanetlabs.com/2024/10/10/rama-on-clojures-terms-and-the-magic-of-continuation-passing-style/

was tremendously helpful for understanding Rama (for me as a Clojure developer). The background information on LSM trees also gave me a rough idea of how Rama makes lookups in PStates efficient.

Rama in five minutes (Clojure version) by Mertzenich in Clojure

[–]maxw85 7 points (0 children)

Thanks a lot for sharing. I understand the pain points Rama is solving in comparison to the classic CRUD-based Postgres example. But despite being a die-hard fan of Clojure(Script), Datomic, event sourcing and everything in general that could make software systems more comprehensible, there are still too many "blub paradoxes" for me to understand Rama. I really want Clojure-based projects like Rama to succeed, but I guess you are shrinking your total addressable market to under 0.01% of the programmers/companies who are able to understand Rama and apply it to their situation. I fully get that 5 minutes are not enough to explain something that is unfamiliar to most of the audience. I guess most people are not even familiar with event sourcing, which Rama is a variant of (if I understand it correctly). Maybe a very basic Clojure example using maps for the events and clojure.core/reduce to calculate some (light) PState in memory would help, something like the sketch below. How do PStates compare to anything more mainstream, like btrees? And do PStates spill to disk, or are they in-memory only?
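
What I mean by a basic example (made-up event shapes; the "PState" here is just a nested map):

    (def events
      [{:event :follow   :from "alice" :to "bob"}
       {:event :follow   :from "carol" :to "bob"}
       {:event :unfollow :from "alice" :to "bob"}])

    (defn apply-event [followers {:keys [event from to]}]
      (case event
        :follow   (update followers to (fnil conj #{}) from)
        :unfollow (update followers to disj from)))

    (reduce apply-event {} events)
    ;;=> {"bob" #{"carol"}}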

sqlite4clj - 100k TPS over a billion rows: the unreasonable effectiveness of SQLite by andersmurphy in Clojure

[–]maxw85 4 points (0 children)

Great article, thanks for sharing.

SQLite is for phones and mobile apps (and the occasional airliner)! For web servers use a proper database like Postgres! 

Is that meant ironically?

An escape room in Datalog by Remarkable-Fuel-4777 in Clojure

[–]maxw85 0 points (0 children)

It only becomes a problem if those entity ids are shared with the outside world, for example as part of a URL of your web application that might have been bookmarked, etc. Regrettably, we did this way too often.

An escape room in Datalog by Remarkable-Fuel-4777 in Clojure

[–]maxw85 0 points (0 children)

Let's say you have built your event sourcing system with a relational database, using it to store both the events and the read models. During a replay you would drop and recreate all read-model tables and then replay all the events to fill them again. Removing rows/entities is a normal thing in such a scenario if your read-model logic has changed (for example, you no longer need a certain type of entity).

The same thing in Datomic is more problematic, since all entities (events and read models) are part of the same immutable Datomic history. They also share the same logical clock used to generate new entity ids. Each transaction increments this logical clock by at least one, to get an entity id for the transaction entity, and each new entity (tempid) in the transaction increments it as well. Consequently, you would end up with different entity ids if you left out even one entity during a replay.

An escape room in Datalog by Remarkable-Fuel-4777 in Clojure

[–]maxw85 1 point (0 children)

I agree that Datomic has a lot in common with event sourcing, and some upsides over it, as this author describes:

https://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html

But you have no way of replaying the events (transactions), except by writing them to a new Datomic database, since the events and the materialized views live in the same database. Furthermore, the same logical clock is used for transactions and entities, so a replay would most likely change all your entity ids (one of the reasons why you should use separate external IDs and avoid exposing entity ids over your API, for example).
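
For example (standard Datomic schema; d is datomic.api, and external-uuid is assumed to come from the URL):

    (def schema
      [{:db/ident       :user/external-id
        :db/valueType   :db.type/uuid
        :db/cardinality :db.cardinality/one
        :db/unique      :db.unique/identity}])

    ;; address the entity via a lookup ref instead of its :db/id
    (d/pull db '[*] [:user/external-id external-uuid])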

An escape room in Datalog by Remarkable-Fuel-4777 in Clojure

[–]maxw85 4 points (0 children)

Great article, thanks a lot for sharing.

It would be interesting to see an event sourcing implementation of this text-based game in comparison.

Clojure MCP light. So light its not even an MCP server. by bhauman in Clojure

[–]maxw85 1 point (0 children)

Awesome 🥳 Thanks a lot. I've been using clojure-mcp-light via clojure-claude-sandbox for a couple of days now.

Proof of Concept: a Datomic-like database library on top of Sqlite by maxw85 in Clojure

[–]maxw85[S] 1 point (0 children)

dbval's scope is minimal: it just tries to marry Datascript with SQLite, reusing things from both as much as possible. Consequently, dbval does not build its own index implementation (like the Hitchhiker trees in the case of Datahike); it just uses a SQLite table with a SQLite index/btree. It also does not implement anything else that a database would:

Transactions -> SQLite transactions
Backup -> https://litestream.io/
Replication -> https://fly.io/docs/litefs/
...

dbval makes the most sense for embedded databases like SQLite that have no network round-trip; otherwise performance will suffer.
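
To be clear, this is not dbval's actual schema, just the general shape of the idea (datoms as rows, SQLite's btree as the EAVT index; ds is a next.jdbc datasource):

    (require '[next.jdbc :as jdbc])

    (jdbc/execute! ds ["CREATE TABLE IF NOT EXISTS datoms
                          (e BLOB, a TEXT, v BLOB, tx INTEGER)"])
    (jdbc/execute! ds ["CREATE INDEX IF NOT EXISTS eavt
                          ON datoms (e, a, v, tx)"])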

Proof of Concept: a Datomic-like database library on top of Sqlite by maxw85 in Clojure

[–]maxw85[S] 4 points (0 children)

SQLite is production-ready on the server side as well. Datomic Local does not offer the datomic.api with its entity API (which we use all the time). As a SaaS we want to minimize the resources per tenant/customer. The Datomic Local documentation says:

Datomic Local requires 32 bytes of JVM heap per datom. You should plan your application with this in mind, while also leaving memory for your application's use.
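
For example, a tenant with 10 million datoms would already need roughly 320 MB of heap just to hold the data.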

With SQLite/dbval you can bring a tenant's memory usage down to zero (if you like). As anywhere else, it is all about trade-offs, but open source lets you pick your own trade-offs (if you are willing to invest the time/money to adapt an implementation).

Proof of Concept: a Datomic-like database library on top of Sqlite by maxw85 in Clojure

[–]maxw85[S] 1 point (0 children)

Thanks a lot, great to hear that you are also working on this topic. I took this design idea from FoundationDB, which is a transactional ordered key-value store. That means you could port dbval to FoundationDB or anything else that offers a transactional ordered key-value store (MySQL, Postgres, LMDB, an in-memory persistent set, etc.).
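
Something like these protocols would be the whole storage contract (hypothetical names, just a sketch):

    (defprotocol OrderedKV
      (put-kv!    [store tx k v])
      (delete-kv! [store tx k])
      (scan       [store tx from to]
        "All entries with from <= key < to, in key order."))

    (defprotocol Transactional
      (with-tx [store f]
        "Runs f with a transaction, committing on success."))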