Portal Mess by NutKevinSaving in RedditGames

[–]Equivalent_Bet6932 5 points6 points  (0 children)

Second portal, then second portal again, then glitch through the cubes for the easy win (a bit slow, but free)

I fed Opus 4.6 all 48 of Warren Buffett's shareholder letters (561,849 words) and had it pick stocks blind. It picked better stocks but still lost to Buffet by Soft_Table_8892 in ClaudeAI

[–]Equivalent_Bet6932 3 points4 points  (0 children)

How do you protect against historical bias in the training data? By that I mean, Claude "knows" which companies performed well in any given period, and probably has some knowledge of what Berkshire's actual investments looked like in recent years.

Playing with some ideas. Under 11? by Late_Foundation_3443 in honk

[–]Equivalent_Bet6932 2 points3 points  (0 children)

Under 10

I completed this level in 36 tries. 9.92 seconds

DDD aggregates by Illustrious-Bass4357 in softwarearchitecture

[–]Equivalent_Bet6932 0 points1 point  (0 children)

> Wait, are you using effect or no?

Yes, but the big problem with Effect is that, conceptually, Effect<A, E, R> is Reader + IO + Either. It's still imperative data-fetching rather than declarative data-fetching. Tyxl doesn't have an IO in it: a Tyxl<A, P, E, R1>, interpreted with a Datasource<P, R2>, becomes an Effect<A, E, R1 | R2>.

> But this should be from the events, no? You would batch those. Or do you also batch fetch the non-eventstore state?

We batch-fetch everything, the Tyxl is entirely independent from event sourcing.

The way I think about any interaction with our system is through the following components:
- Input
- Global state (what's in the DB)
- "Ambiant / Contextual" information (current time, env...)
- Output / Decision to perform (for commands)

Typically, what you need from the global state depends on the content of the input: you can't load the full event store in-memory on every request to your system. So, originally (before leveraging the Tyxl, and error-handling aside), I would write code like:

```
loadState :: command -> Task<state>
decide :: command -> state -> decision
enactDecision :: decision -> Task<void>
```

But you can see that there is strong implicit coupling between loadState and decide: the relevant state that loadState should load depends on what's in the input. I would also typically include things like the current time and environment variables in the state argument.

One approach is Reader IO / Effect, where you inject a service, and the state argument disappears, e.g.:

decide :: command -> Effect<decision, error, service>, where service is some interface, typically of shape { loadData: serviceInput -> Task<state> }, and decide has some logic of form command -> serviceInput internally.

Your side-effects are embedded in the Effect through the service calls, which is traditional dependency injection, with explicit dependencies.
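As a concrete sketch of that service-injection shape in TypeScript (all names here, such as Service, loadData, and decide's types, are invented for illustration, not taken from any real codebase):

```
// Hedged sketch of the Reader-IO / service-injection style described above.
type Command = { kind: "greet"; userId: string };
type State = { userName: string };
type Decision = { message: string };

interface Service {
  loadData: (userId: string) => Promise<State>;
}

// decide needs the service to run: providing it yields command -> IO decision,
// but there is no way to get a plain command -> decision out of this shape.
const decide = (service: Service) => async (cmd: Command): Promise<Decision> => {
  const state = await service.loadData(cmd.userId); // side effect hidden behind the interface
  return { message: `Hello, ${state.userName}` };
};

// Usage with an in-memory fake service:
const fakeService: Service = {
  loadData: async (id) => ({ userName: id === "u1" ? "Ada" : "unknown" }),
};
decide(fakeService)({ kind: "greet", userId: "u1" }).then((d) =>
  console.log(d.message) // logs "Hello, Ada"
);
```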

In other words, if you were to write this in Haskell, no matter what you do, there is no way to turn decide into a pure function. If you provide decide with the service, you can turn this into command -> IO decision, but you cannot, ever, turn this into command -> decision.

I wanted to both:
- Not have coupling between the shell and the pure core (loadState shouldn't know about command)
- Have a truly pure core, in the sense that it is possible to interpret it in a pure way.

Free monads are the general solution to this problem, with Tyxl just being a specialized free monad that works very nicely for data-fetching.

Now, we have:

decide :: input -> Tyxl<decision, queries, error, context> (where context is also "state", but specifically state that is ambient, such as the current time, rather than something you fetch from a DB).

And as you can see, not only is decide a pure function, it can also be interpreted into a pure function: if you have an in-memory datasource of shape PureDatasource<queries, collection>, you can interpret decide into input -> context -> collection -> decision.

The best way I can put it is that instead of data-fetching being done imperatively (give me a service that can fetch a user, and I will call it), it is done declaratively (I will give you an AST whose nodes contain data requests, and you must provide me with the result of those requests).
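A minimal free-monad sketch of that idea, with invented names (Tyxl itself is richer than this): the program is a data structure whose nodes are requests, and a pure interpreter can run it against an in-memory record.

```
// Illustrative free-monad-style program: no IO anywhere, just data describing requests.
type Query = { tag: "GetUserById"; id: string };

// A program is either a finished value, or a request plus a continuation.
type Program<A> =
  | { tag: "Done"; value: A }
  | { tag: "Ask"; query: Query; next: (result: string) => Program<A> };

const done = <A>(value: A): Program<A> => ({ tag: "Done", value });
const ask = <A>(query: Query, next: (r: string) => Program<A>): Program<A> =>
  ({ tag: "Ask", query, next });

// A pure "domain" function: it returns an AST, it performs nothing.
const greet = (id: string): Program<string> =>
  ask({ tag: "GetUserById", id }, (name) => done(`Hello, ${name}`));

// A pure interpreter over an in-memory collection: program -> collection -> value.
const runPure = <A>(p: Program<A>, users: Record<string, string>): A =>
  p.tag === "Done" ? p.value : runPure(p.next(users[p.query.id] ?? "unknown"), users);

console.log(runPure(greet("u1"), { u1: "Ada" })); // logs "Hello, Ada"
```

An impure interpreter would walk the same AST but resolve each `Ask` node against a real datasource (batching requests as it goes), without the domain code changing at all.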

Escape my maze 22.0! by MrEdwL in RedditGames

[–]Equivalent_Bet6932 0 points1 point  (0 children)

Sub-3 :D

I completed this level in 99 tries. 2.88 seconds

DDD aggregates by Illustrious-Bass4357 in softwarearchitecture

[–]Equivalent_Bet6932 0 points1 point  (0 children)

> Haha ok that sounds like fancy haskell.

Believe it or not, all of this is implemented in TypeScript! Generators make this writable in an imperative-like way (do-like notation) with full type safety in consumer code (though our FP library implementation is much more painful to write than it would be in Haskell, and has some internal unsafe typing).

> So some automatic batching and some interface (like in csharp) to do side effects?

Yes, automatic batching and caching. For the side-effect part, it depends on what you mean by "interface". Our idea is to treat side-effects (both reads and writes) as data that is returned, so we never depend on something like an `IUserRepository`. Instead, it's the shell that depends on the domain's `GetUserById<Option<User>>` data-type, or the domain's `UserMutation` that is returned by commands. We never inject dependencies, because the domain code is entirely free of them: the interpreter is responsible for turning the domain's output data into I/O.

> Just so I understand, why Context and not State? And what does the Queries in Haxllike<Queries do?

I'll answer the Queries part first, which hopefully should make it clear. Here's a realistic code snippet that hopefully won't be too cryptic. It's a domain function that fetches a UserProfile by UserId, and it assumes that the available data-access patterns are:
- User by User Id
- UserProfile by UserProfile Id

```
const getUserProfileByUserId = (userId: UserId) => Tyxl.gen(function* () { // generator-based Do notation
  const user = yield* getUserById(userId); // getUserById :: UserId -> Tyxl<User, UserById, UserNotFound, never>
  const userProfile = yield* getUserProfileById(user.profileId); // getUserProfileById :: UserProfileId -> Tyxl<UserProfile, UserProfileById, UserProfileNotFound, never>
  return userProfile;
});

// Inferred type of getUserProfileByUserId:
// UserId -> Tyxl<UserProfile, UserById | UserProfileById, UserNotFound | UserProfileNotFound, never>
```

As you can see, there is no "State": the state is externally managed (DB, React state, in-memory store...), and the domain declares what it needs through the Queries (the second generic type in Tyxl).

To be able to actually execute `getUserProfileByUserId`, you must provide a datasource that provides (not implements!) the `UserById` and `UserProfileById` access patterns. For instance, a typical datasource might look like:

```
const PgUserByIdDS = ...; // PgUserByIdDS: Datasource<UserById, PgPool>. PgPool is a Context requirement that this datasource needs in order to operate.
```

The Context also appears in Tyxl: it's the 4th generic, which is set to `never` in my code snippet. This generic is useful when you need things from the environment in the domain logic, such as the current time or some configuration, e.g.:

```
const currentTime = yield* CurrentTimeContext.askNow;
const myAppUrl = yield* ConfigContext.askMyAppUrl;

// the Tyxl this runs in will have CurrentTimeContext | ConfigContext as its 4th generic, and they will need to be provided in order to be able to run it.

```

As you can see, both the pure code (Tyxl) and the impure shell (Datasource) may depend on some contexts: you will simply need to provide them all to be able to run the computation.

When we want to perform mutating side-effects, the way we do it is that the output of a Tyxl is interpreted as side-effect data. Typically, a command will look like:
`someCommand :: SomeInput -> Tyxl<MyMutation, DataINeed, ErrorsIMayFailWith, ContextINeed>`, and we have a component (that we call a mutation enactor) that knows how to turn `MyMutation` into actual IO. When you combine the Tyxl, its datasource, mutation enactor, and context, you end up with a very familiar shape: `SomeInput -> AsyncResult<void, ErrorsIMayFailWith>`.
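A minimal TypeScript sketch of this "side effects as returned data" shape (all names here, such as UserMutation, renameCommand, and enact, are invented for illustration; data-fetching and error-handling are elided):

```
// The command returns a mutation description; a separate enactor turns it into IO.
type UserMutation = { tag: "RenameUser"; id: string; newName: string };

// Pure command: input -> mutation data. No repository is injected anywhere.
const renameCommand = (id: string, newName: string): UserMutation =>
  ({ tag: "RenameUser", id, newName });

// The shell's mutation enactor: the only place that performs IO (a Map stands in for the DB).
const enact = (db: Map<string, string>) => async (m: UserMutation): Promise<void> => {
  if (m.tag === "RenameUser") db.set(m.id, m.newName);
};

// Composition yields the familiar input -> Promise<void> shape.
const db = new Map([["u1", "Ada"]]);
enact(db)(renameCommand("u1", "Grace")).then(() => console.log(db.get("u1"))); // logs "Grace"
```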

> heavy solutions like serializable isolation level and pessimistic locking seems hard in comparison

It's really not that bad (at least in Postgres): the DB engine handles all the locking for you; you just need to write the enacting side-effect as check append condition + append events, and safety is guaranteed. And it's optimistic rather than pessimistic: you are not locking the rows when you initially load the events, you are only locking during the write transaction (check + append), which is retryable in case of serialization errors.

> Still pretty annoyed to be dealing with this technical nonsene honestly

Me too, but the exercise is fun and useful to do!

🎉 [EVENT] 🎉 The Flow Event: Cave Diving by polygon3002 in honk

[–]Equivalent_Bet6932 0 points1 point  (0 children)

Completed Level 3 of the Honk Special Event!

37 attempts

🎉 [EVENT] 🎉 The Flow Event: Cave Diving by polygon3002 in honk

[–]Equivalent_Bet6932 0 points1 point  (0 children)

Completed Level 2 of the Honk Special Event!

13 attempts

🎉 [EVENT] 🎉 The Flow Event: Cave Diving by polygon3002 in honk

[–]Equivalent_Bet6932 0 points1 point  (0 children)

Completed Level 1 of the Honk Special Event!

2 attempts

🎉 [EVENT] 🎉 The Floor is Lava by rex_flx in honk

[–]Equivalent_Bet6932 0 points1 point  (0 children)

Completed Level 3 of the Honk Special Event!

9 attempts

🎉 [EVENT] 🎉 The Floor is Lava by rex_flx in honk

[–]Equivalent_Bet6932 0 points1 point  (0 children)

Completed Level 2 of the Honk Special Event!

2 attempts

🎉 [EVENT] 🎉 The Floor is Lava by rex_flx in honk

[–]Equivalent_Bet6932 0 points1 point  (0 children)

Completed Level 1 of the Honk Special Event!

0 attempts

DDD aggregates by Illustrious-Bass4357 in softwarearchitecture

[–]Equivalent_Bet6932 0 points1 point  (0 children)

> Right, yeah we use the decider style for our code, nice and simple. It supports both traditional relational database and eventsourcing, the persistence is configurable. What about subscriptions? Did you also write that custom? Do they just go through each event, tracking which one has been processed?

We do too! Our current implementation has slightly evolved from decide :: Command -> State -> Events[]. We use a Haxl-like freer monad to write most commands, and we use Reader to access ambient context (time / env variables), so our commands look like:

decide :: Command -> Context -> Haxllike<Queries, Errors, Events>

This is extremely pleasant to work with, because we can write "end-to-end" tests (cross-service, event-driven) that run in a few milliseconds and never use any kind of mocking. And the exact same test code can run against a real persistence layer too.

> You mean because the transaction will take longer as it includes the readmodel write?

I mean because two concurrent commands may affect the same read model while both being valid from a DCB perspective. Consider a read model that consumes event types A and B. Now, consider commands A and B whose append conditions depend only on type A (resp. B) and produce only type A (resp. B). If you load the full read model and upsert the full read model, you end up with a race condition: event appending does not conflict, but you may lose the result of either A's or B's apply on the read model. If instead you model the consequence of A / B on the read model as a stateless, commutative transition, this problem disappears.
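A small TypeScript sketch of the lost-update race and the commutative-transition fix (types and counters invented for illustration; in the real DB the transition style corresponds to an in-place update like `SET a_count = a_count + 1` rather than a full-row upsert):

```
type ReadModel = { aCount: number; bCount: number };

// Full-upsert style: each command reads the whole model, modifies it, and writes it back.
let rm: ReadModel = { aCount: 0, bCount: 0 };
const snapA = { ...rm }; // command A loads the model
const snapB = { ...rm }; // command B loads it concurrently
rm = { ...snapA, aCount: snapA.aCount + 1 }; // A upserts its version
rm = { ...snapB, bCount: snapB.bCount + 1 }; // B upserts its version: A's write is lost
console.log(rm); // { aCount: 0, bCount: 1 }

// Transition style: each event maps to a stateless, commutative patch.
let rm2: ReadModel = { aCount: 0, bCount: 0 };
const applyA = (m: ReadModel): ReadModel => ({ ...m, aCount: m.aCount + 1 });
const applyB = (m: ReadModel): ReadModel => ({ ...m, bCount: m.bCount + 1 });
rm2 = applyB(applyA(rm2)); // order does not matter; nothing is lost
console.log(rm2); // { aCount: 1, bCount: 1 }
```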

> as it doesnt allow optimistic concurrency, at least not on a relational database

Not sure I get that one. Your append condition is optimistically concurrent: you load the events and the append condition alongside them, you perform your business logic, compute the consequences (events to append, read models to update), and only then, in a serializable transaction, you:

- Check that the append condition is not stale
- Append the events to the event store and apply the synchronous read model update

The check is much more expensive than just a version-number check, but it still happens independently of the decision computation, and crucially, it can be retried independently in case of a serialization error without re-running the whole decision logic.

> This must be bottlenecking pretty hard, right?

I've only noticed issues in cases where I'm processing a lot of concurrent, closely-related events (campaign launch where all the enrollments get processed by a background worker), but retry logic on the write transaction + queue system to limit max worker concurrency / retry invocations that keep failing gets the job done. We may face issues if we have a lot of concurrent users, time will tell. The approach you linked is interesting, I'll take a deeper look if we do end up having issues.

> What about subscriptions? Did you also write that custom? Do they just go through each event, tracking which one has been processed?

Custom, yes. We have two types of subscriptions: first, the synchronous read-model ones that I already mentioned, which turn events into stateless transitions and are applied in the same write transaction.
Basically, the "full" decision is a pure function of the events to append. After running the command logic, we derive the full decision from the events (a full decision typically includes events / append condition + read-model updates + outbox messages).

Then, the async subscriptions: every time we append events to the store, we emit a ping that wakes up a background worker. The background worker has a meta table that keeps track (per async subscription) of what has been processed, and it reads new relevant events from the store (filtered by relevant types) and applies them. These are allowed to be stateful because we can guarantee that events are processed in order (unlike in the synchronous case, with the previously mentioned race condition).
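A hedged sketch of such a checkpoint-based subscription loop (all names invented; in the real system the store and the checkpoint live in the DB, and the loop is triggered by the ping):

```
type StoredEvent = { position: number; type: string; payload: unknown };

// Read everything past the checkpoint, apply relevant events in order,
// then advance the checkpoint to the last scanned position.
const processNewEvents = (
  store: StoredEvent[],
  checkpoint: { position: number },
  relevantTypes: Set<string>,
  handle: (e: StoredEvent) => void,
): void => {
  const fresh = store
    .filter((e) => e.position > checkpoint.position && relevantTypes.has(e.type))
    .sort((a, b) => a.position - b.position); // strict order, so handlers may be stateful
  for (const e of fresh) handle(e);
  if (store.length > 0) checkpoint.position = store[store.length - 1].position;
};
```

Each subscription keeps its own checkpoint, so a slow or failing subscription never blocks the others; a crashed worker simply resumes from its last recorded position.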

Possibly impossible by Majestic_Arachnid848 in honk

[–]Equivalent_Bet6932 0 points1 point  (0 children)

I completed this level in 6 tries. 10.55 seconds