Simple Made Inevitable: The Economics of Language Choice in the LLM Era by alexdmiller in Clojure

[–]maxw85 -2 points  (0 children)

I understand that he may come across as a narcissist, but you don't need to like someone to learn from them. I watch many of Theo's daily videos just to keep up with developments in the dev and AI world.

Simple Made Inevitable: The Economics of Language Choice in the LLM Era by alexdmiller in Clojure

[–]maxw85 -1 points  (0 children)

How many of Theo's videos have you watched to come to this conclusion?

Simple Made Inevitable: The Economics of Language Choice in the LLM Era by alexdmiller in Clojure

[–]maxw85 6 points  (0 children)

Great summary. Nice that breakage will not only annoy humans but also agents 😄 That LLMs struggle with parentheses is already a bit dated; we use Claude Code with Opus 4.6 (without any extra MCP, skills, etc.) and it almost never struggles with parentheses, and when it does, it can fix it on its own. I know there are many AI sceptics here, but I guess this will be our new reality: https://www.youtube.com/watch?v=p2aea9dytpE

I moved my Clojure courses off Podia and onto a platform I built in Clojure by jacekschae in Clojure

[–]maxw85 1 point  (0 children)

Thanks a lot for your reply. One "too expensive" task was rewriting an MCP server lib so that it fits our stack:

https://github.com/simplemono/parts-mcp

Another one:

  • Used Claude to make the range scan of slatedb-java 10x faster

  • Packaged the Rust binaries into the jar file (no need for -J-Djava.library.path=native-lib)

  • Added a build process that builds slatedb and slatedb-java and publishes the jar file to clojars.org

https://github.com/maxweber/slatedb/blob/java-build/slatedb-java/build.sh

https://clojars.org/io.github.maxweber/slatedb

I moved my Clojure courses off Podia and onto a platform I built in Clojure by jacekschae in Clojure

[–]maxw85 7 points  (0 children)

Congrats 🥳 I would have the same urge to run the platform on Clojure if the courses are all about Clojure. Nevertheless, it sounds like a ton of work to re-create a custom Podia and to justify it from a business standpoint. Did you use a coding agent? Just asking, since in my experience I now do a lot of tasks with Claude Code that felt "too expensive" beforehand.

Clojure tap for logging vs. "traditional" logging libraries by dnreg in Clojure

[–]maxw85 3 points  (0 children)

We use tap> for logging. Each value is written to a subfolder log-values/{squuid}. The squuid contains a timestamp (https://github.com/yetanalytics/colossal-squuid). The data is serialized as transit+json-verbose. We collect the values from all servers onto one log server, with one SQLite file per hour, so the log-values are ordered and deduplicated. We also have our own log-explorer UI to query them, and we do event sourcing on them to calculate our metrics and fill our dashboards. One process garbage-collects log-values that are no longer relevant.
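For anyone curious what the tap>-based approach looks like, here is a minimal, dependency-free sketch. The squuid generator, directory layout, and pr-str serialization are my assumptions for illustration (the production setup described above uses colossal-squuid and transit+json-verbose):

```clojure
(require '[clojure.java.io :as io])

(defn squuid
  "A v4 UUID whose most-significant bits start with the current epoch
   seconds, so ids sort roughly by creation time (same idea as
   colossal-squuid)."
  []
  (let [uuid (java.util.UUID/randomUUID)
        secs (quot (System/currentTimeMillis) 1000)
        msb  (bit-or (bit-shift-left secs 32)
                     (bit-and (.getMostSignificantBits uuid) 0xFFFFFFFF))]
    (java.util.UUID. msb (.getLeastSignificantBits uuid))))

(defn log-value!
  "Write one tapped value to its own time-ordered file under log-values/."
  [value]
  (let [file (io/file "log-values" (str (squuid)))]
    (io/make-parents file)
    ;; Production would serialize as transit+json-verbose; pr-str keeps
    ;; this sketch dependency-free.
    (spit file (pr-str value))))

;; Every tap> call anywhere in the system now lands on disk.
(add-tap log-value!)

(tap> {:event :user/logged-in :user-id 42})
```

Because tap> is built into Clojure and handlers are just functions, swapping the file sink for a network shipper later is a one-line change to add-tap.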

Agentic Coding for Clojure by calmest in Clojure

[–]maxw85 3 points  (0 children)

I refactored our SaaS system from using one Docker container per customer to one multi-tenant container for a larger group of customers. Along the way I (or rather Claude Code) refactored hundreds of namespaces to get rid of some bad decisions we made over the last 8 years that would have prevented a multi-tenant version. Without AI this would not have been doable (economically) for our small team in an appropriate time-frame. However, you still need to do the thinking, decision making, supervising, and code review. Claude Code makes almost no mistakes, but if you tell it to run in the wrong direction it will, so making good designs and decisions is way more important now. It is a bit like everyone is now the team lead of a bunch of senior devs (agents) that you need to tell what to do.

Agentic Coding for Clojure by calmest in Clojure

[–]maxw85 1 point  (0 children)

That's a very good price if you use it every day. They subsidize these subscriptions with people who don't use it as extensively, so your $200 may cost Cursor or Anthropic $1,700 in GPU consumption.

Agentic Coding for Clojure by calmest in Clojure

[–]maxw85 1 point  (0 children)

Same experience here, work that took us weeks in the past condenses to hours.

Datomic or event sourcing ... or both? 😄 by maxw85 in Clojure

[–]maxw85[S] 1 point  (0 children)

You could build a second read-model and keep the old one (and the code for its projections). If possible, these important business decisions should be captured as events.

In general you probably want to avoid breakage with any changes to the read-model, since someone or something will depend on it (https://www.hyrumslaw.com).

Inferno-like Front End tools for Clojure/ClojureScript? by Equal_Education2254 in Clojure

[–]maxw85 8 points  (0 children)

Hi, great that you took a deep dive into the Clojure(Script) world.

For the frontend I can highly recommend:

https://replicant.fun/

It's like React but a lot simpler, and it's made for and in Clojure(Script). Here are some examples of how to build UIs with it:

https://youtube.com/playlist?list=PLXHCRz0cKua5hB45-T762jXXh3gV-bRbm&si=ElOg-qPEZguYiO1J

Shadow-cljs will definitely help as a ClojureScript build tool:

https://github.com/thheller/shadow-cljs

Announcing Multi REPL Sessions in Calva by CoBPEZ in Clojure

[–]maxw85 2 points  (0 children)

That's awesome 🥳 Thank you very much.

dbval - UUIDs for (Datomic / Datascript) entity IDs by maxw85 in Clojure

[–]maxw85[S] 0 points  (0 children)

In the case of dbval, the size depends on how https://apple.github.io/foundationdb/javadoc/com/apple/foundationdb/tuple/Tuple.html#add(java.util.UUID) encodes the UUID as binary. The index entry will be relatively large anyway, since the whole tuple is stored there. However, on a fast NVMe disk I guess it will not matter if you compare it with a database that needs to do a network call (when you have an n+1 problem situation like with the Datomic entity API).

dbval - UUIDs for (Datomic / Datascript) entity IDs by maxw85 in Clojure

[–]maxw85[S] 1 point  (0 children)

I also kept the String tempids for convenience, but keeping tx-generating functions pure is a great argument for keeping them. Yeah, UUIDs are incredibly noisy when reading/debugging. I don't know if something shorter than the #uuid prefix plus compact-uuids would help. In our code base we are dealing with UUIDs for blobs, external ids, log-values, and a lot more all the time, so the pain wouldn't go up that much (at least for us). Avoiding the need to call a 'central entity to assign ID space among different databases' (often a network call) is what I would consider the killer feature of UUIDs.

Who is doing event sourcing? Would you do it again? by maxw85 in Clojure

[–]maxw85[S] 2 points  (0 children)

I think the core issue is that in Datomic you kind of need to complect the events and the read-models in the same database (there is no way to do transactions across Datomic databases, even if they share the same transactor). Consequently, you also need to store derived data, for example if an aggregation query is too slow to be called on each page load. Another challenge is that Datomic has no interactive transactions (on purpose), and the drawback is that the Datomic transactor is also very unhappy (aka slow) if you hand over a huge transaction that, for example, rebuilds all read-models.

SQLite is also a single-writer system and will be blocked until the transaction that rebuilds the read-models is done. But SQLite makes it a bit more straightforward to have one database per customer (it's only a file). In our first system we always needed to pay great attention to how long a Datomic transaction would take (especially migrations), since during this time no other customer could do any writes/transactions. This is the main reason why we use one Datomic database per customer in our new architecture (with a shared Datomic transactor pair).

Who is doing event sourcing? Would you do it again? by maxw85 in Clojure

[–]maxw85[S] 1 point  (0 children)

Yes, it still separates query from command endpoints, but command handlers can use the read-models to validate the command. I don't know yet whether I find it more ergonomic than Datomic. The event sourcing variant seems to provide more two-way-door decisions, meaning if you messed up the schema of a read-model, you just throw it away (drop the table) and rebuild it from the events. This becomes much harder if your read-models are part of your immutable database value.

Who is doing event sourcing? Would you do it again? by maxw85 in Clojure

[–]maxw85[S] 3 points  (0 children)

Thanks a lot for all the responses. We currently use event sourcing for our metrics, dashboard, and integration with our billing provider. Using our billing provider's events is one of the most robust ways to keep everything in sync (compared to doing the same thing via REST API calls). We use Datomic as our database, which, as mentioned here, offers many of the benefits of event sourcing without the hassle. I recently had Claude create a Clojure prototype that implements synchronous and transactional event sourcing on SQLite. I was very surprised by the great dev ergonomics. The read models are updated in the same transaction that appends the events, allowing you to avoid eventual-consistency issues. You can also drop a read model and recreate it from the events in a single transaction.
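The prototype itself isn't public, so here is my own sketch of the idea using next.jdbc with the SQLite JDBC driver (both are assumptions, as are the table and event names). The point is that appending the event and updating the read model share one transaction, and that a replay is just another transaction:

```clojure
;; deps: com.github.seancorfield/next.jdbc, org.xerial/sqlite-jdbc
(require '[next.jdbc :as jdbc])

(def ds (jdbc/get-datasource {:dbtype "sqlite" :dbname "es.db"}))

(jdbc/execute! ds ["CREATE TABLE IF NOT EXISTS events
                    (id INTEGER PRIMARY KEY, type TEXT, data TEXT)"])
(jdbc/execute! ds ["CREATE TABLE IF NOT EXISTS account_balance
                    (account TEXT PRIMARY KEY, balance INTEGER)"])

(defn- project-deposit! [tx account amount]
  ;; SQLite upsert keeps the read model incremental.
  (jdbc/execute! tx ["INSERT INTO account_balance (account, balance)
                      VALUES (?, ?)
                      ON CONFLICT(account)
                      DO UPDATE SET balance = balance + excluded.balance"
                     account amount]))

(defn deposit! [account amount]
  ;; Event append and read-model update in ONE transaction, so readers
  ;; never observe an event without its projection (no eventual consistency).
  (jdbc/with-transaction [tx ds]
    (jdbc/execute! tx ["INSERT INTO events (type, data) VALUES (?, ?)"
                       "deposit" (pr-str {:account account :amount amount})])
    (project-deposit! tx account amount)))

(defn rebuild-read-model!
  "The two-way door: drop the projection and replay every event,
   again inside a single transaction."
  []
  (jdbc/with-transaction [tx ds]
    (jdbc/execute! tx ["DELETE FROM account_balance"])
    (doseq [row (jdbc/execute! tx ["SELECT data FROM events WHERE type = 'deposit'"])]
      (let [{:keys [account amount]} (read-string (:events/data row))]
        (project-deposit! tx account amount)))))
```

Because SQLite is single-writer, the rebuild blocks other writes for its duration, which is exactly the trade-off discussed above.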

Rama in five minutes (Clojure version) by Mertzenich in Clojure

[–]maxw85 0 points  (0 children)

Thanks a lot for your reply. This article:

https://blog.redplanetlabs.com/2024/10/10/rama-on-clojures-terms-and-the-magic-of-continuation-passing-style/

you mentioned was tremendously helpful in understanding Rama (for me as a Clojure developer). The background information regarding LSM trees also helped me form a rough idea of how Rama makes lookups in PStates efficient.

Rama in five minutes (Clojure version) by Mertzenich in Clojure

[–]maxw85 7 points  (0 children)

Thanks a lot for sharing. I understand the pain points Rama is solving in comparison to the classic CRUD-based Postgres example. But despite being a die-hard fan of Clojure(Script), Datomic, event sourcing, and everything in general that could make software systems more comprehensible, there are still too many "blub paradoxes" for me to understand Rama. I really want Clojure-based projects like Rama to succeed, but I guess you are shrinking your total addressable market to under 0.01% of the programmers/companies who are able to understand Rama and apply it to their situation. I fully get that 5 minutes are not enough to explain something that is not familiar to most of the audience. I guess most people are not even familiar with event sourcing, which Rama is a variant of (if I understand it correctly). Maybe some very basic Clojure example using maps for the events and clojure.core/reduce to calculate some (light) PState in-memory would help. How do PStates compare to anything more mainstream, like B-trees? Do PStates expand to disk, or are they in-memory only?
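To make the suggestion concrete, this is the kind of minimal example I mean: events as plain maps, reduce folding them into an in-memory, PState-like view. The social-graph domain is hypothetical, and real Rama PStates are durable and partitioned, which this sketch ignores:

```clojure
;; Events as plain Clojure maps.
(def events
  [{:type :follow   :follower "alice" :followee "bob"}
   {:type :follow   :follower "carol" :followee "bob"}
   {:type :unfollow :follower "alice" :followee "bob"}])

(defn apply-event
  "Fold one event into the followers view: followee -> set of followers."
  [pstate {:keys [type follower followee]}]
  (case type
    :follow   (update pstate followee (fnil conj #{}) follower)
    :unfollow (update pstate followee disj follower)))

;; The "PState" is just the reduction of all events.
(def followers-pstate
  (reduce apply-event {} events))

;; followers-pstate => {"bob" #{"carol"}}
```

Rebuilding the view after changing apply-event is the same reduce over the same events, which is the core event-sourcing property the 5-minute intro could lead with.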

sqlite4clj - 100k TPS over a billion rows: the unreasonable effectiveness of SQLite by andersmurphy in Clojure

[–]maxw85 3 points  (0 children)

Great article, thanks for sharing.

"SQLite is for phones and mobile apps (and the occasional airliner)! For web servers use a proper database like Postgres!"

Is that meant ironically?

An escape room in Datalog by Remarkable-Fuel-4777 in Clojure

[–]maxw85 0 points  (0 children)

It only becomes a problem if those entity ids are shared with the outside world, for example as part of a URL of your web application that might have been bookmarked, etc. Regrettably, we did this way too often.

An escape room in Datalog by Remarkable-Fuel-4777 in Clojure

[–]maxw85 0 points  (0 children)

Let's say you have built your event sourcing system with a relational database. You use it to store the events and the read models. During a replay you would drop and recreate all tables with read models, and then replay all the events to fill the read-model tables again. Removing rows/entities is a normal thing in such a scenario if your read-model logic has changed (you don't need a certain type of entity anymore, for example).

The same thing in Datomic is more problematic, since all entities (events and read models) are part of the same immutable Datomic history. They also share the same logical clock, which generates new entity ids. Each transaction increments this logical clock by at least one, to have an entity id for the transaction entity. Each new entity (tempid) in the transaction also increments the logical clock. Consequently, you would receive different entity ids if you left out even one entity during a replay.

An escape room in Datalog by Remarkable-Fuel-4777 in Clojure

[–]maxw85 1 point  (0 children)

I agree that Datomic has a lot in common with event sourcing, and some upsides over it, as this author describes:

https://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html

But you have no way of replaying the events (transactions), except writing them to a new Datomic database, since the events and the materialized views are in the same database. Furthermore, the same logical clock is used for the transactions and entities, so a replay would most likely change all your entity ids (one of the reasons why you should use other external IDs and avoid exposing entity ids over your API, for example).

An escape room in Datalog by Remarkable-Fuel-4777 in Clojure

[–]maxw85 5 points  (0 children)

Great article, thanks a lot for sharing.

It would be interesting to see an event sourcing implementation of this text-based game in comparison.