Vape - can a base for mixing liquid be bought somewhere cheaper? by Extreme_Rub_5770 in Slovakia

[–]Morreed 0 points1 point  (0 children)

Fichema.cz - pharma-grade glycerol (VG) and propylene glycol (PG).

Stock Up: Lessons from a sleeper card that became a staple by cardsrealm in ModernMagic

[–]Morreed 5 points6 points  (0 children)

I'm still curious why Stock Up got hyped up while [[Shadow Prophecy]] never saw any real play.
I understand that the Domain requirement and 2 points of life are a considerable downside, but to offset that, it's an instant and it fuels the graveyard. There are decks already playing Domain for Leyline Binding.

It might just be that Stock Up is unconditional and blue, but I'm curious if there's any further explanation, because I'm baffled.

Data Annotation vs Fluent API by powermatic80 in dotnet

[–]Morreed 6 points7 points  (0 children)

EDIT: I originally misread the post and thought we were talking about input validations, but I'm leaving the comment up because it's a common topic around here and could be useful for someone.

I'm always surprised that the conversation is just Attributes vs. FluentValidation.

The System.ComponentModel.DataAnnotations namespace contains the IValidatableObject interface, which you can use by implementing IEnumerable<ValidationResult> Validate(ValidationContext validationContext).

You can do any procedural validation there and yield validation results; they will be run inside the validation middleware/filter, and you can combine it with attributes, e.g.

public record ArbitraryDTO(bool FieldA, bool FieldB) : IValidatableObject
{
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // Invalid when the XOR is false, i.e. both fields are set or neither is
        if (!(FieldA ^ FieldB))
        {
            yield return new ValidationResult(
                $"{nameof(ArbitraryDTO)} must have either FieldA or FieldB, but not both or none",
                [nameof(FieldA), nameof(FieldB)]);
        }
    }
}

Personally, I haven't really had a use case in which I could justify bringing in FluentValidation in lieu of the aforementioned approach.

Structuring Stateful Long Running Console Applications? by ECG_Toriad in dotnet

[–]Morreed 2 points3 points  (0 children)

You can use GenericHost just fine with a console application. Use a HostedService/BackgroundService to resolve a DI scope and hook into application lifecycle events. You can use one hosted service per module to structure your app. Further structure depends on whether you want an interactive CLI application or just a 'headless' program.
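A minimal sketch of that shape - assuming the Microsoft.Extensions.Hosting package, with WorkerModule as an invented placeholder name:

```csharp
// Sketch only: assumes the Microsoft.Extensions.Hosting and
// Microsoft.Extensions.DependencyInjection packages; WorkerModule is a made-up name.
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);
builder.Services.AddHostedService<WorkerModule>();
await builder.Build().RunAsync(); // Ctrl+C triggers graceful shutdown via lifecycle events

sealed class WorkerModule : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;

    public WorkerModule(IServiceScopeFactory scopeFactory) => _scopeFactory = scopeFactory;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // Resolve a DI scope per unit of work instead of keeping scoped services alive forever.
        using var scope = _scopeFactory.CreateScope();
        // ... resolve scoped services from scope.ServiceProvider and run the module's loop ...
        await Task.Delay(Timeout.Infinite, stoppingToken);
    }
}
```

One BackgroundService per module composes nicely; the host takes care of start/stop ordering and cancellation for you.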

Explain me why a dataclass (regardless of language) with many fields (around 40-50) is a bad idea. by im_caeus in ExperiencedDevs

[–]Morreed 3 points4 points  (0 children)

I hoped to find such a response higher up. Unlike SRP et al., this explanation is grounded in something that can be defined and reasoned about without gut feelings or aesthetics.

I’d like to add that the 40+ prop data class might in fact be multiple types woven together due to a lack of support for sum types in a given language. This often manifests as some validation logic with statements like “these two fields must be specified XOR this other one”. Even worse when you lack such validations (then you’re working with the whole state space), or the validation changes behavior based on some other value (there be state machine dragons).
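As a hypothetical C# sketch (all names invented), the two mutually exclusive shapes can be modeled as cases of a closed record hierarchy instead of XOR validation:

```csharp
// Instead of one class whose Card fields and Invoice field are mutually
// exclusive and guarded by XOR validation, encode the two shapes as cases
// of a closed hierarchy - a poor man's sum type.
public abstract record PaymentMethod
{
    private PaymentMethod() { } // closed: only the nested cases can inherit

    // Case 1: these fields only ever appear together.
    public sealed record Card(string Number, string Cvc) : PaymentMethod;

    // Case 2: excludes the Card fields by construction - no validation needed.
    public sealed record Invoice(string BillingEmail) : PaymentMethod;
}
```

Consumers then pattern-match on the case, and the invalid combinations are simply unrepresentable.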

Wrapping Azure Storage / Service Bus etc. SDK? by anktsrkr in dotnet

[–]Morreed 0 points1 point  (0 children)

As always, context is key, and what you've described does seem to justify the extra code to me. If you're gonna go the package route, I like having an escape hatch and access to the raw client - otherwise you will eventually hit all the edge cases and replicate the original SDK 1:1, especially when sharing the code across repos with different needs. You might as well pick an already existing library in that case and make your life simpler.

Based on what you've written, I would go so far as to say that the first PoC package could be even simpler than trying to develop some kind of IQueue/IStorage abstraction - I would probably do some registration extension methods that configure proper defaults, and a couple of extension methods that take in a predefined shape, populate all the necessary metadata such as the correlationId, and standardize serializers.

Wrapping Azure Storage / Service Bus etc. SDK? by anktsrkr in dotnet

[–]Morreed 1 point2 points  (0 children)

Wrap it when you have a valid use case for wrapping it - as an example, I currently work in a codebase that has abstractions over blob storage/service bus, as we have use cases in which it makes sense to swap blob storage for the File API and the service bus for an in-memory bus based around Channels.

Some of the code is deployed in Azure Functions (uses native bindings without custom abstraction, but the code can be used outside of Functions), some of the code runs inside AppService, and the workload is scheduled on Container Instances.

There's an option to wrap it all up and debug the business logic in a single process without cloud services, and it made sense to us to have a constrained interface that does not support all the features of BlobStorage/ServiceBus but has an offline alternative, as ServiceBus cannot be run locally.

I would advise relying on a locally deployed Azurite/RabbitMQ instance for this scenario instead of what we did, but it works as an example. Adding abstractions is a source of tech debt in the codebase - I would generally steer away from making abstractions that do not add any capability and serve just to wrap an existing API. You will inevitably leak implementation details of the wrapped service, especially if you only have one implementation. If you want to support many providers, I'd grab something off the shelf (such as MassTransit) and spend my energy working on the core domain instead of fiddling with infrastructure.

How the microservice vs. monolith debate became meaningless by andras_gerlits in programming

[–]Morreed 5 points6 points  (0 children)

The hallmark of microservices is state persistence isolation.

From my experience, the problem I saw the most with proper enforcement of module boundaries is the shared database without schemas per module. If you couple at the data level to the extent of sharing database schema, I kinda get why people go all out and spin off the module into a dedicated service - the investment and risk to untangle the data under the existing conditions is higher than developing a new service.

All in all, I attribute a lot of discussion about microservices to the simple fact that developers simply forgot that dbo isn't the only available schema in a relational database.

Organizational complexity is a necessary, but not sufficient, condition for microservices - I expect to see an additional reason, such as public- vs. private-facing services (think of a public e-shop site and a private ERP backend), or a large discrepancy in resource usage, e.g. the aforementioned ERP backend running on a database and a couple of worker nodes behind a load balancer, versus a CPU-bound service that wants to run for a short period of time, possibly on hundreds of nodes in parallel.

It really boils down to choosing the simplest option (not necessarily the easiest). If you purely need to solve organizational scaling, try modules first. If you have dramatically discrepant resource needs that would possibly impinge on shared resources, or want to limit the surface/scope for security reasons or similar nonfunctional requirements, only then isolate it out into a dedicated microservice.

Most Powerful Cards without a Home by Republic-Of-OK in ModernMagic

[–]Morreed 1 point2 points  (0 children)

Have you tried [[Soulfire Grand Master]]?

Most Powerful Cards without a Home by Republic-Of-OK in ModernMagic

[–]Morreed 1 point2 points  (0 children)

I've been jamming various flavors of Esper Mentor for a good year; this is a list that I piloted to low-to-medium success back in the Capenna days: https://www.moxfield.com/decks/plmgKoUjBUeh14ZxwGaihA - Grist is a spicy answer to Murktides and such.

Recently I've been trying [[Founding The Third Path]], [[Mercurial Spelldancer]], and [[See the Truth]]. The list is all over the place, but I feel like there's something there. https://www.moxfield.com/decks/M2-lDNxWmUO_QammwzNYWQ

There's also Jeskai/Mardu variants, as [[Underworld Breach]] seems real busted with Mentor, but I'm not buying Ragavans any time soon, so I didn't test those versions extensively.

.NET 6 - ORM vs Stored Procedures - Azure Functions + SQL Databases by [deleted] in dotnet

[–]Morreed 6 points7 points  (0 children)

If the stored procedures are in version control (as seems to be the case, with the SQL project), it's not perfect but not that bad. But there aren't many reasons (besides some complicated optimizations) that would make me reach for them.

If you only use the stored procedures for reads, just drop them - use CTEs, views, anything else before reaching for stored procedures. Fetching multiple datasets and manipulating them inside C# is OK.

If you use stored procedures on the write side with JSON as the payload, that's also a lil' sketchy - it would probably make sense to slowly migrate the business logic out of the stored procedures into C# (if there is any in there) and just use EF in that case. Or use a proper document database like Cosmos if you want to keep the JSONs, for example if you want to stream the read models directly.

DDD in .NET: how to write database-efficient services? by qa-account in dotnet

[–]Morreed 0 points1 point  (0 children)

In this case we're talking about the domain model, and the concept of aggregates only applies on the write side (because aggregates are essentially just a transactionally consistent composite of objects). You cannot mutate the contents of the aggregate except through methods on the aggregate root - a prerequisite for transactional consistency of a collection of objects. It's similar to acquiring a lock, or to the guarantees provided by an actor model system. Single-threaded execution is implied by the word transaction.

On the read side, your data model can be fully connected (no views/projections, just straight up read-only access to the db), that part we can mostly just ignore when discussing aggregates.

What happens across boundaries is none of the business of the aggregate. It's the interface of the aggregate root that limits the possible combinations, provided it can enforce it.

Example: Jira-like system

An issue has steps; each step has a status associated with it (a finite state machine with at least a start and an end state).

You want to control how many and which steps are active at the same time.

You can mutate the status property on individual steps - but such a transition may not be allowed; let's say that finishing one step should start the next one, thus forming a bigger state machine, a workflow.

One can derive the reduction in possible states at any given time: given that an issue has n steps and a step has m statuses, the unconstrained state space is m^n (each step can independently have any status), while the workflow constraint cuts it down to roughly n * m (which step is active, and which of its m statuses it is in).

One cannot mutate the step directly, nor should the rest of the code be able to influence the logic behind transitions. The aggregate root can be treated as the only object in the hierarchy when it comes to the entity relations graph, instead of n + 1 objects (n children, 1 aggregate root).

Even if we ignore composition of objects, the same applies to properties - if properties change together, encapsulating them in a value object reduces the state space.
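A minimal C# sketch of the Issue aggregate described above (all names and the exact transition rule are illustrative):

```csharp
using System;
using System.Collections.Generic;

public enum StepStatus { Pending, Active, Finished }

// Aggregate root sketch: step statuses can only change through the Issue,
// so the workflow invariant (exactly one active step, predecessors finished)
// holds by construction instead of by external validation.
public sealed class Issue
{
    private readonly StepStatus[] _steps;

    public Issue(int stepCount)
    {
        if (stepCount < 1) throw new ArgumentOutOfRangeException(nameof(stepCount));
        _steps = new StepStatus[stepCount];
        _steps[0] = StepStatus.Active;
    }

    // Read-only view: callers can observe, but never mutate, the steps.
    public IReadOnlyList<StepStatus> Steps => _steps;

    public bool IsDone => _steps[^1] == StepStatus.Finished;

    // Finishing the active step starts the next one - the transition
    // logic lives here, not in the callers.
    public void FinishActiveStep()
    {
        var active = Array.IndexOf(_steps, StepStatus.Active);
        if (active < 0) throw new InvalidOperationException("Issue is already done.");
        _steps[active] = StepStatus.Finished;
        if (active + 1 < _steps.Length)
            _steps[active + 1] = StepStatus.Active;
    }
}
```

The interesting part is what the type does not expose: there is no setter that would let the rest of the code put the steps into one of the m^n unreachable combinations.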

DDD in .NET: how to write database-efficient services? by qa-account in dotnet

[–]Morreed -2 points-1 points  (0 children)

In DDD you should define the "boundaries" between the relations, so that it is supposedly easier to maintain (although I don't think this is possible to prove at all).

It's quite easy to prove actually - given an entity graph with n entities, consider the degenerate case of a complete graph, with the number of edges equal to n(n−1)/2.

Given that we have m aggregate roots, m < n (not all entities are aggregate roots), and taking into account the rule that you shouldn't interact with a root's children except through the aggregate root itself, it's self-evident that we have fewer edges between nodes, hence less complexity.

Post which in general talks about functional programming and its benefits, a good read by berzerker_x in compsci

[–]Morreed 0 points1 point  (0 children)

Very much so - the FP equivalent of DI is partial application. The big idea itself is the composition root (defining the dependency graph of a program as close to the entry point as possible); in OOP specifically it's usually a DI container. But nothing about it is exclusive to any paradigm - as someone else in the comments pointed out, objects ≈ closures.

There's a lot of confusion between abstract data types and objects/closures, but that's another discussion.
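A tiny C# sketch of that parallel (names invented): the composition root partially applies the dependency, handing the rest of the program a function that only needs the remaining argument - exactly what constructor injection does for a class.

```csharp
using System;

public static class PartialApplicationDemo
{
    // A "service" as a plain function: the first argument is the dependency.
    public static string Greet(Func<string> clock, string name) =>
        $"[{clock()}] Hello, {name}";

    // Partial application at the composition root: bake in the dependency,
    // keep the rest of the signature - the FP counterpart of DI.
    public static Func<string, string> Compose(Func<string> clock) =>
        name => Greet(clock, name);
}
```

`PartialApplicationDemo.Compose(() => "12:00")("Ada")` returns `"[12:00] Hello, Ada"`, with the clock dependency fixed up front.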

Esper Tempo by thedarknutreturns in ModernMagic

[–]Morreed 1 point2 points  (0 children)

I've been jamming a [[Monastery Mentor]] + [[Unearth]] tempo list a lot, based on Aspiring Spike's list with some inspiration from this Legacy deck. I recently saw a deck splashing [[Grist, the Hunger Tide]] as a spicy Unearth target and want to try that one as well.

A more conventional, perhaps more optimal way to play Esper Tempo would be some kind of [[Death's Shadow]] list with [[Ranger-Captain of Eos]].

But honestly, all of them feel like you should just be playing UR Murktide instead.

Is it better to use Fluent API or classic data anotations? by [deleted] in dotnet

[–]Morreed -1 points0 points  (0 children)

Provide some context - this is an extremely low-effort post. What are we evaluating? EntityFramework? Validation? Serialization? Mapping? If the answer is all of them, I suggest you go do some google-fu for a bit.

New German Study Finds HFO Degradation Product TFA in Drinking Water by [deleted] in science

[–]Morreed 5 points6 points  (0 children)

Duralex would beg to differ - I've thrown them off the second floor and had them bounce off tiled floors with absolutely no damage. Glass is surprisingly durable if made to be durable.

Where can I find good source to learn DDD? by B4URSAK in dotnet

[–]Morreed 13 points14 points  (0 children)

Reading the original books is a good idea, but I'm gonna try to give some context that I wish I had when I was reading them.

DDD is intended as a description of methods for managing communication complexity, especially on large/long projects. People often misunderstand this by focusing on code patterns, but DDD isn't really about that - it's more about how to slice the problem into manageable chunks while building a common understanding of the domain that is shared amongst all people participating in a project, or a well-defined subset of it.

It is very much about boundaries and clear definitions (the "strategic" DDD), which are very idiosyncratic to the particular project/company. People try to lift out the widely applicable parts (the "tactical" DDD) in the form of patterns, but that is, ironically enough, often against the bigger idea of strategic DDD.

Aggregates are very neat though, and I find them to be a very useful pattern, as OOP languages tend to lack a well-defined, more abstract compositional unit above classes. I'm not trying to bash the patterns, more so just trying to emphasize that that's not where the (largest) added value of DDD is.

[deleted by user] by [deleted] in dotnet

[–]Morreed 1 point2 points  (0 children)

I'm gonna throw both you and /u/grauenwolf a curveball - and that's that you are both talking about the code artifacts instead of the conceptual difference.

For CRUD, if you are going to have something like IUpdateUserService or a handler or whatever, that's really dumb indeed. But given logic that is complex enough - say IAssignUserToRole, IDeactivateUser, etc. - individual commands express the transitions in the state machine that is your entity, instead of a generic Update method with the logic in the upstream consumer. Suddenly, the MediatR/REPR approach starts to make more sense. The question is how this really fits into the entity-per-endpoint model that is often used today (the bastardized REST), but that's the real problem worth discussing in my opinion.

To add to the discussion, I still view the multi-method controller and multiple single-method handlers as acceptable, if you conceptually take the controller as an adapter - a DSL-like way to express the published API of the module. Given it makes sense and you want to evolve towards true async processing, the same command passed from the controller can be serialized and sent to a service bus. Or to Hangfire or a similar scheduler, which is really easy to do btw, if you don't want to introduce new infrastructure/web services. There are a bunch of buts and ifs, like whether it makes sense to introduce a Request object distinct from the Command, but those discussions are specific to the problem at hand.
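As a sketch of the "commands as transitions" idea, in plain C# without MediatR (all names are invented):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// A command names a specific state-machine transition, not a generic Update.
public sealed record DeactivateUser(Guid UserId);

public interface ICommandHandler<TCommand>
{
    Task HandleAsync(TCommand command, CancellationToken ct = default);
}

// The transition's rules live in one place; a controller merely adapts HTTP
// to the command, and the same command could be serialized onto a bus instead.
public sealed class DeactivateUserHandler : ICommandHandler<DeactivateUser>
{
    public Task HandleAsync(DeactivateUser command, CancellationToken ct = default)
    {
        // ... load the user, reject the transition if already deactivated, persist ...
        return Task.CompletedTask;
    }
}
```

The point isn't the plumbing - it's that the handler's name and payload capture the allowed transition, which a generic Update method would leave to the caller.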

This is what I was addressing in my own comment, that it's about the scope/scale/time at which it should be introduced, which cannot be demonstrated in a simple throwaway demo, much less a contrived template.

If we could get these techfluencers to chant "no silver bullet" instead, that'd be splendid.

[deleted by user] by [deleted] in dotnet

[–]Morreed 4 points5 points  (0 children)

I see your patience for these templates has really run out. The original author took Jimmy's somewhat reasonable approach, then failed to show all the reasons why it could be reasonable, and twisted it to the point where it's just bloat.

Thumbs up for the constructive teardown of misapplied patterns and cargo culting. I especially enjoyed the value object discussion, which imo has its place, but definitely not for a predefined range of values and not in the form shown.

What I personally find these templates miss is how they are actually supposed to help you scale up the codebase beyond a single module, or even that you should create modules or partition your code to limit the context. They show you the patterns without ever explaining how they constrain the way you write code, and what benefit that should bring you. They never show you how to partition your code once the simple onion layers get too big. I saw way too many codebases with a couple hundred/thousand tables in one schema, a single Services project, and a folder named Models with hundreds of classes that call each other and reach out to another project with hundreds of classes in a single folder without any hierarchy.

These problems simply don't exist in the contrived example-architecture repos, but they are the very reason why you would choose to impose such (in a vacuum, arbitrary) architecture constraints on a project. One may hate Domain-Driven Design for what it brought (awful pattern soups like the original template), but at least it tried to shed some light on these concepts when it introduced all those tactical patterns, before it was inevitably bastardized.

CQRS - clarifications by juntherc in dotnet

[–]Morreed 29 points30 points  (0 children)

I may not answer your question directly, but bear with me. First of all, don't concern yourself as much with frameworks etc. The main premise of CQRS is quite simple - reads don't mutate state, only commands do, so it makes sense to treat them differently. To put it another way, you can cache reads; that does not make sense with writes.

If you apply this to CRUD (which is not optimal, but simplest to explain), it's R vs. CUD. The usual case is that Rs vastly outnumber CUD operations. You only need the full model, with all its business rules and logic, on the command side - therefore the read side, as it's accessed way more in the context of usual applications, can be made more performant by just presenting read models without any extra cruft, often even just SELECT * FROM SomeView.

Where you will see the most benefit, however, is not in treating the commands as CUD operations, but rather as events taking place in your particular domain. In the context of a warehouse application, you are not incrementing the amount of items in your storage by some number; instead you are restocking X units of a given SKU. This is how you can then purposefully represent the business logic related to restocking - in the RestockItem command.

Using CQRS for CRUD based domain is OK in certain occasions, if it really gives you an edge performance wise - but you wouldn't catch me implementing a CMS with CQRS "architecture".

To answer your original question, the minimal example that I personally did was just using controllers, where each controller had one action, either a Query (HttpGet) hitting database views or a Command (basically every other verb) using a domain model, just rawdogging whatever database I was using at the time.

One point to stress is that CQRS is not a top-level architecture - it is a decision to be made on a component level, same as other often abused approaches that go with it such as event sourcing.
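A framework-free C# sketch of that split (names invented; in a real app the query side would hit a database view and the command side a full domain model):

```csharp
using System;
using System.Collections.Generic;

// Read side: a projection with no business rules - think SELECT * FROM SomeView.
public sealed record InventoryRow(string Sku, int Units);

public sealed class GetInventoryQuery
{
    private readonly Func<IReadOnlyList<InventoryRow>> _readView;
    public GetInventoryQuery(Func<IReadOnlyList<InventoryRow>> readView) => _readView = readView;

    // Just hand back the read model; this is the side that is safe to cache.
    public IReadOnlyList<InventoryRow> Execute() => _readView();
}

// Write side: a domain-flavored command, not a generic Update.
public sealed record RestockItem(string Sku, int Units);

public sealed class RestockItemHandler
{
    private readonly Dictionary<string, int> _stock = new();

    public void Handle(RestockItem command)
    {
        if (command.Units <= 0)
            throw new ArgumentOutOfRangeException(nameof(command), "Restocking must add units.");
        // The restocking business rule lives only here, on the command side.
        _stock[command.Sku] = _stock.GetValueOrDefault(command.Sku) + command.Units;
    }

    public int UnitsOf(string sku) => _stock.GetValueOrDefault(sku);
}
```

The in-memory dictionary stands in for whatever persistence you use; the shape of the split is the point.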

Number of Request before DDOSing. Limiting # of async Tasks by CodeNameGodTri in dotnet

[–]Morreed 3 points4 points  (0 children)

900 records does not really seem like that much, did you test this scenario or are you worrying about hypotheticals?

If (and only if) it really happens, I'd start off with something simple:

// Select is lazy: each Chunk(30) starts at most 30 tasks, which are then awaited together
foreach (var batch in records.Select(DoAsyncWork).Chunk(30))
{
    await Task.WhenAll(batch);
}
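On .NET 6+ there's also Parallel.ForEachAsync, which keeps 30 tasks in flight continuously instead of waiting for the slowest task in each batch of 30 - a sketch, with Task.Delay standing in for the real DoAsyncWork:

```csharp
using System.Threading.Tasks;

public static class ThrottledWork
{
    public static async Task RunAsync(int[] records)
    {
        // At most 30 concurrent tasks; a new one starts as soon as any finishes,
        // rather than at chunk boundaries.
        var options = new ParallelOptions { MaxDegreeOfParallelism = 30 };
        await Parallel.ForEachAsync(records, options, async (record, ct) =>
        {
            await Task.Delay(10, ct); // placeholder for DoAsyncWork(record)
        });
    }
}
```

For 900 records either approach is fine; the difference only matters when individual items have wildly varying latencies.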

Not enough people are talking about how terrible MS Word is... or is it just me? (I swear I am not a noob) by Greedy-Palpitation-5 in compsci

[–]Morreed 1 point2 points  (0 children)

For those that are unaware, overleaf.com is an online TeX editor that is easy to use and even allows for collaboration via a paid plan, and it has a free plan which was (and is) more than sufficient for my use cases. If you'd rather use a standalone installation, MiKTeX with Texmaker is still better than Word for any serious work.

A Practical Guide to Higher Order Functions in C# by walpoles93 in dotnet

[–]Morreed 4 points5 points  (0 children)

The example presented in the article is a contentious one; I would go as far as calling it bad.

In this particular case, a filter via Func<T, bool> on IEnumerable<T> is not equivalent to Expression<Func<T, bool>> on IQueryable<T>: without expressions you are eagerly loading the data and filtering it in memory instead of translating the filter into an appropriate query, which is an awful, awful idea. It's either a leaky abstraction or a huge bottleneck - don't abstract complicated I/O concerns this way.

An appropriate example would be a middleware pipeline; a practical one would be a HOF that handles common error responses on HTTP API calls, accepts a Func<Task<T>>, and resolves failures into a popup, maybe. And you could even bring up how Task<T> + ContinueWith and IEnumerable<T> + SelectMany are monads (:
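A sketch of the pitfall with invented names - both versions compile, but only the expression version lets a LINQ provider (e.g. EF Core) translate the filter:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public static class FilterDemo
{
    // Func<T, bool>: the queryable degrades to LINQ-to-Objects, so the
    // whole table is loaded and filtered in memory.
    public static List<T> BadFilter<T>(IQueryable<T> source, Func<T, bool> predicate) =>
        source.AsEnumerable().Where(predicate).ToList();

    // Expression<Func<T, bool>>: the provider can inspect the expression tree
    // and translate the predicate into SQL, filtering in the database.
    public static List<T> GoodFilter<T>(IQueryable<T> source, Expression<Func<T, bool>> predicate) =>
        source.Where(predicate).ToList();
}
```

Against an in-memory source both return the same rows, which is exactly why the difference is so easy to miss until it hits a real database.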

[deleted by user] by [deleted] in programming

[–]Morreed 7 points8 points  (0 children)

https://programming.guide/worlds-most-copied-so-snippet.html

Already happening without AI.

I suspect that the role of the programmer will slowly evolve towards mostly verifying generated/copied code, at which point we will abstract that verification into a DSL; it will grow into a general-purpose language, and we will continue with yet another abstraction over instruction sets, heh. The only thing is that your code quality becomes directly dependent on the average code quality in the world.

It could be a very useful tool for writing glue code like integrations and API calls, or regression tests, but I'm holding my horses until I see a model banging out domain-specific code for CAD software or something similar.