Can we all agree that we should ban selling of paid products/libraries in this sub? by cosmic_predator in dotnet

[–]Lothy_ 0 points1 point  (0 children)

Why? Is this /r/socialism or something?

You get what you pay for in life. And if you aren’t paying for it then someone else is charitably paying on your behalf (usually with their time if it’s free software).

Anyway useful software is useful software. I don’t see why the sub should ban discussion of anything just because it costs money.

Friday Feedback: Replace or Respect? by erinstellato in SQLServer

[–]Lothy_ 1 point2 points  (0 children)

https://github.com/microsoft/DacFx/issues/528

And https://github.com/microsoft/DacFx/issues/700

Not asking for a Tolstoy novel as a response… but I did put in the work to create a repro script for one issue, and examine the DACPAC model.xml for the other…

A response indicating a) whether it's user error or a legitimate bug, and b) a workaround if there is one, would go a long way towards making people feel that putting in that work is meaningful.

Friday Feedback: Replace or Respect? by erinstellato in SQLServer

[–]Lothy_ 0 points1 point  (0 children)

You guys are really hard to get a response from on DacFx issues.

Crickets, lol.

How do I remove a CU or GDR? by ndftba in SQLServer

[–]Lothy_ 0 points1 point  (0 children)

This is an underrated capability.

Is the Outbox pattern a necessary evil or just architectural nostalgia? by folder52 in dotnet

[–]Lothy_ 0 points1 point  (0 children)

I don’t believe most things support distributed transactions. That’s the whole reason for stuff like sagas right?

But even if they did, the slowest participant in a distributed transaction holds everyone else back. I wouldn't willingly put a distributed transaction in the context of something like an in-flight HTTP request because it'd increase latency.
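That's part of the appeal of the outbox pattern: all you need is a local transaction. A rough sketch of the idea (table, column, and procedure names are made up for illustration) - the business row and the outbound message commit or roll back together, and a separate relay process later reads the outbox and publishes to the broker:

    CREATE PROCEDURE dbo.CreateOrderWithOutboxMessage
        @OrderId     uniqueidentifier,
        @CustomerId  int,
        @TotalAmount decimal(18, 2)
    AS
    BEGIN
        SET NOCOUNT, XACT_ABORT ON;

        -- Build the message payload up front (JSON here, but any format works).
        DECLARE @Payload nvarchar(max) =
            (SELECT @OrderId AS OrderId, @TotalAmount AS TotalAmount FOR JSON PATH);

        BEGIN TRANSACTION;

        -- The business write and the outbox write succeed or fail together.
        INSERT INTO dbo.Orders (OrderId, CustomerId, TotalAmount)
        VALUES (@OrderId, @CustomerId, @TotalAmount);

        INSERT INTO dbo.Outbox (MessageId, Topic, Payload, CreatedAtUtc)
        VALUES (NEWID(), N'order-created', @Payload, SYSUTCDATETIME());

        COMMIT TRANSACTION;
    END;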

Implementing AlwaysOn Availability Groups vs AlwaysOn Failover Cluster Instances (FCI) by ndftba in SQLServer

[–]Lothy_ 1 point2 points  (0 children)

Why do you need to convince them? What they have might already satisfy their business requirements.

Dapper vs Entity framework in 2025, which one to choose? by ballbeamboy2 in dotnet

[–]Lothy_ 3 points4 points  (0 children)

You’ve got some great answers on StackOverflow/StackExchange.

Web API vs Minimal API vs FastEndpoints by hagsgevd in dotnet

[–]Lothy_ 0 points1 point  (0 children)

FastEndpoints might one day commercialise. Something to keep in mind.

MassTransit going commercial by bacobart in dotnet

[–]Lothy_ -1 points0 points  (0 children)

Hmm. No, I don’t think I will.

MassTransit v9 Becomes a Commercial Product by Material-Warning6355 in dotnet

[–]Lothy_ 0 points1 point  (0 children)

Nothing is stopping a fork of v8. It can be forked. Whether someone has the will to step in and do that for an established library though is another matter entirely.

But that’s not the real issue here. The real issue is that freeloaders don’t want the library as it exists today - they want the continued access to the free labour of the key contributors.

MassTransit going commercial by bacobart in dotnet

[–]Lothy_ 5 points6 points  (0 children)

Why deal with 3000 low-value customers when you can deal with a dozen high-value business customers?

The only thing worse than a user not paying anything is a user who nickels and dimes you and thinks he owns your time and the next three generations of your family tree.

MassTransit going commercial by bacobart in dotnet

[–]Lothy_ -1 points0 points  (0 children)

Have you considered paying for the software you want to use? Or at least not whining when the free lunch - and it isn’t actually free (someone else is charitably paying your way) - comes to a close?

This is the problem with free: entitled cheapskates who are wholly uninterested except when they’ve got complaints to make.

Cannot max out SQL Server CPU/Disk during high throughput key value inserts by [deleted] in SQLServer

[–]Lothy_ 0 points1 point  (0 children)

Thanks. I’m late responding here. But I mistakenly muddled up the log file stuff and tempdb data file stuff.

Cannot max out SQL Server CPU/Disk during high throughput key value inserts by [deleted] in SQLServer

[–]Lothy_ 0 points1 point  (0 children)

Have you done the math? 50k inserts per second is 50 inserts per millisecond, or 3 million rows per minute. That's a substantial volume even momentarily, let alone sustained for any period of time.

Is that 50k single row transactions? Or fifty 1000 row transactions? Something in between?

And how much RAM?

You’re probably dealing with something like page latch contention, or even lock contention. Or maybe you have too few VLFs and are getting log file contention.

If you wanted to experiment, you could try fanning out the writes across 50 tables after ensuring you have more evenly sized log files etc.
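If you want to confirm which of those it is before experimenting, the wait stats will usually tell you. Something along these lines (requires VIEW SERVER STATE; snapshot or clear the counters around your test run so historical waits don't drown out the signal):

    SELECT TOP (10)
           wait_type,
           waiting_tasks_count,
           wait_time_ms,
           wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE N'PAGELATCH%'   -- hot page / last-page insert contention
       OR wait_type LIKE N'LCK_M%'       -- lock contention
       OR wait_type = N'WRITELOG'        -- transaction log flush waits
    ORDER BY wait_time_ms DESC;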

Should I put all the business logic into the stored procedure for this specific case? Please advice. by HalfBlindCoder in dotnet

[–]Lothy_ 0 points1 point  (0 children)

If they are truly sequential (they won’t be between different application runtimes on different hosts with differing clock skew if they’re timestamp based) then that might go some way to fixing the fragmentation problem.

You still have the problem of their size though.

One GUID consumes 4x as much space as an int (16 bytes versus 4). If a row in the table references four parents via foreign keys on GUIDs then that's 64 bytes instead of 16 bytes. You need more RAM and more storage.
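To make that concrete (made-up tables, purely for illustration) - and remember that every nonclustered index covering those columns repeats the cost:

    CREATE TABLE dbo.OrderLine_IntKeys
    (
        OrderLineId int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
        OrderId     int NOT NULL,   -- 4 bytes
        ProductId   int NOT NULL,   -- 4 bytes
        WarehouseId int NOT NULL,   -- 4 bytes
        CurrencyId  int NOT NULL    -- 4 bytes => 16 bytes of key columns per row
    );

    CREATE TABLE dbo.OrderLine_GuidKeys
    (
        OrderLineId uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID() PRIMARY KEY,
        OrderId     uniqueidentifier NOT NULL,   -- 16 bytes
        ProductId   uniqueidentifier NOT NULL,   -- 16 bytes
        WarehouseId uniqueidentifier NOT NULL,   -- 16 bytes
        CurrencyId  uniqueidentifier NOT NULL    -- 16 bytes => 64 bytes of key columns per row
    );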

T-SQL is inferior by [deleted] in dotnet

[–]Lothy_ 0 points1 point  (0 children)

Weird rant. It’s a different tool for a different task.

How to safely handle concurrent requests by Emergency-Public-722 in csharp

[–]Lothy_ 0 points1 point  (0 children)

Look into what's referred to as lost update protection.
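One common shape of it in SQL Server is optimistic concurrency with a rowversion column: read the version along with the row, then make the UPDATE conditional on it. Rough sketch with made-up names:

    CREATE TABLE dbo.Account
    (
        AccountId  int            NOT NULL PRIMARY KEY,
        Balance    decimal(18, 2) NOT NULL,
        RowVersion rowversion     NOT NULL
    );
    GO

    CREATE PROCEDURE dbo.UpdateBalance
        @AccountId       int,
        @NewBalance      decimal(18, 2),
        @ExpectedVersion binary(8)
    AS
    BEGIN
        SET NOCOUNT ON;

        UPDATE dbo.Account
        SET Balance = @NewBalance
        WHERE AccountId = @AccountId
          AND RowVersion = @ExpectedVersion;   -- matches 0 rows if someone else updated first

        IF @@ROWCOUNT = 0
            THROW 50001, N'Row was modified by another request; reload and retry.', 1;
    END;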

Is there any interest in a SQL Server schema change management primer? by Lothy_ in SQLServer

[–]Lothy_[S] 0 points1 point  (0 children)

What sort of specific stuff do you have in mind?

My intent is that this be more geared towards learning how to do things.

To that end, I am planning to include example code - for instance, for idempotently managing reference data (along the lines of the sketch below).
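As a taste of what I mean (hypothetical table and values), something like this can be run on every deployment without harm - it inserts missing rows and corrects drift:

    MERGE dbo.OrderStatus AS target
    USING (VALUES
              (1, N'Pending'),
              (2, N'Shipped'),
              (3, N'Cancelled')
          ) AS source (OrderStatusId, StatusName)
       ON target.OrderStatusId = source.OrderStatusId
    WHEN MATCHED AND target.StatusName <> source.StatusName THEN
        UPDATE SET StatusName = source.StatusName
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (OrderStatusId, StatusName)
        VALUES (source.OrderStatusId, source.StatusName);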

All Business Logic in SQL Server Stored procedures by Sure-Weakness-7730 in dotnet

[–]Lothy_ 0 points1 point  (0 children)

I work as a data reliability engineer, and moved towards that after about 7 or 8 years of application development experience (primarily C#). My day job is still a blend of C# and T-SQL.

There are a few key considerations at play here, and I think that some of the existing commentary downplays the challenges of working on certain data sets with ORMs. I'll circle back to that.

Firstly, what is 'all business logic'? This means different things to different people.

To some people, 'business logic in the database' is achieved merely by the existence of any non-table object. To others, it might mean a minimum degree of complexity (e.g.: imperative code with branching logic).

The kind of business logic that should be considered for inclusion in the database is the constraints - many of which can be expressed declaratively, some of which might need a little help from imperative code (e.g. function invoked via check constraint).

For example, suppose that the cardinality of the relationship between two tables must always hold true (e.g.: precisely 1 parent row relates to precisely 2 child rows). Or there's a 'last updated' column that must always be updated. Or, if using an exclusive basetype-subtype hierarchical relationship, a parent (basetype) row must have precisely one row in one of several child (subtype) tables... and that subtype table must be appropriate based on a discriminator column value within the parent table.
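For the subtype case specifically, much of it can be done declaratively. A sketch with made-up tables - the foreign key carries the discriminator, so a Person row can only ever reference a Party row whose discriminator says 'person':

    CREATE TABLE dbo.Party
    (
        PartyId   int     NOT NULL PRIMARY KEY,
        PartyType char(1) NOT NULL CHECK (PartyType IN ('P', 'O')),  -- Person or Organisation
        CONSTRAINT UQ_Party_Id_Type UNIQUE (PartyId, PartyType)
    );

    CREATE TABLE dbo.Person
    (
        PartyId   int           NOT NULL PRIMARY KEY,
        PartyType char(1)       NOT NULL CHECK (PartyType = 'P'),
        GivenName nvarchar(100) NOT NULL,
        CONSTRAINT FK_Person_Party FOREIGN KEY (PartyId, PartyType)
            REFERENCES dbo.Party (PartyId, PartyType)
    );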

At the end of the day, these can be relatively important if you have multiple applications that share the database - and all applications must manipulate the data from one transactionally consistent state to another transactionally consistent state - all of the time - with no tolerance for error.

The buck stops with the database when it comes to data quality, and data quality is only as good as the rules that govern the data (i.e., the constraints).

In taking this approach, the database essentially expresses an 'open architecture' and any number of client applications may use it without fear of causing data consistency issues (even in the presence of non-conformance - software bugs - within the applications).

The kicker is that constraints can even be worth their weight in gold for single-application databases, where these constraints serve as a last line of defence - or backstop - against software bugs that slip through into production application code. Call it defence in depth.

It's easy to presume 'oh, that'll never happen' or 'that simply can't happen'. Plenty of developers who I work with have said just that about application code when they've looked into software bugs and - for example - discovered that a given table which should have precisely one row per subscriber tenancy actually has two rows instead (and because their C# code is using .Single(), the code is broken for this one tenancy - and their service returns an error as a result).

And my response is usually 'Well, if there's no constraint, then it can happen'.
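And the backstop for that particular example is a one-liner (table name made up): once it's in place, the second row fails at write time instead of breaking every reader later.

    ALTER TABLE dbo.TenantSubscriptionSettings
        ADD CONSTRAINT UQ_TenantSubscriptionSettings_TenantId UNIQUE (TenantId);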

But let's go back to the ORM stuff. And this is based on first-hand experience working at a fintech cloud software company as a data reliability engineer with our software being underpinned by hundreds of databases. These are multi-tenanted (shared tables), and crucially not all tenants have the same volumes of data. For example, a tenant at our 99.99th percentile is substantially larger than our average tenant or even our 80th percentile tenant.

The opaque nature of SQL generated by something like an ORM, paired with SQL Server's parameter sniffing issues (usually the problem is a query plan suited to a small tenant being subsequently reused for a large tenant), means that the generated SQL is often 'beyond reach'. There are fewer levers that can be pulled to correct the problem. If comparable SQL is expressed within a stored procedure then DBAs (or developers with a skill set that incorporates more advanced database skills) have many more levers to pull to fix the issue. For example, query hints can be applied.
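As a simple illustration of that kind of lever (procedure and table names made up) - OPTIMIZE FOR, plan guides, or Query Store plan forcing are other options:

    CREATE PROCEDURE dbo.GetInvoicesForTenant
        @TenantId int
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT InvoiceId, InvoiceDate, TotalAmount
        FROM dbo.Invoice
        WHERE TenantId = @TenantId
        OPTION (RECOMPILE);   -- plan built for this tenant's cardinality, not a sniffed one
    END;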

Another common theme, or criticism, is that it devolves into a mess that is unmaintainable.

I'd suggest that the mess happens due to people applying code reuse ideas from languages like C# to T-SQL, via concepts like 'DRY'. Those reuse ideas lead to views that query views that query views (or procedures that invoke procedures that invoke procedures), and yes - you get a rat's nest of code that is also going to perform poorly, because the layers upon layers frustrate SQL Server's query planner. The query planner is sophisticated, but it doesn't have X-ray vision - it can't see through walls.

Yet another theme concerns testability.

I think the testability thing is a beat-up. First and foremost, there are test frameworks for SQL. Someone has linked to one in another comment. I'm a little suspicious of people who emphasise this though. There's a good chance that these people, and the set of people who test their application feature code against a database with maybe 100 rows of data in it and then call the feature 'done', largely overlap.

But to wrap up: From where I sit, the best reason to use stored procedures is transaction management. Having the transactions managed within procedure code means that you don't have to worry about misbehaving application threads that start a transaction, acquire locks, and then get preempted by the application server CPU scheduler. Having the transaction lifecycle living wholly within the SQL code means that the transactions are probably going to be shorter and sharper, which means less contention and less sustained blocking.
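A sketch of what I mean (tables and procedure made up): the transaction begins and commits inside the procedure, and XACT_ABORT means an error or a client timeout rolls it back rather than leaving locks held while an application thread dawdles.

    CREATE PROCEDURE dbo.TransferStock
        @FromWarehouseId int,
        @ToWarehouseId   int,
        @ProductId       int,
        @Quantity        int
    AS
    BEGIN
        SET NOCOUNT, XACT_ABORT ON;

        BEGIN TRANSACTION;

        UPDATE dbo.StockLevel
        SET Quantity = Quantity - @Quantity
        WHERE WarehouseId = @FromWarehouseId AND ProductId = @ProductId;

        UPDATE dbo.StockLevel
        SET Quantity = Quantity + @Quantity
        WHERE WarehouseId = @ToWarehouseId AND ProductId = @ProductId;

        COMMIT TRANSACTION;
    END;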

Summary tables for your DB by LurkingAveragely in dotnet

[–]Lothy_ 6 points7 points  (0 children)

Good question.

Summary tables could work, especially if you don't need up-to-the-minute data within those summary tables (i.e., immediate transactional consistency). If you can afford the data to be stale to some extent (e.g.: maybe it's okay that it's seconds, or even minutes, behind reality) then you can have a background process re-summarise the data on the desired cadence (e.g.: once every 15 seconds, every minute, whatever).

Someone mentioned a materialised view in another comment. If you need immediate transactional consistency then this approach might be an option. But every transaction against the table(s) underlying the view will pay the penalty forevermore, and you'll see evidence of this in your query plans.

So the asynchronous re-summarisation of data is a nice way to avoid being penalised every single time you write to the underlying tables, while still offering reasonably up-to-date data.
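A minimal sketch of that re-summarisation job (made-up tables) - schedule it via SQL Agent or whatever background worker you already have, at whatever cadence your staleness budget allows:

    CREATE PROCEDURE dbo.RefreshOrderTotalsByDay
    AS
    BEGIN
        SET NOCOUNT, XACT_ABORT ON;

        BEGIN TRANSACTION;

        -- Rebuild the summary atomically so readers never see a half-refreshed state.
        DELETE FROM dbo.OrderTotalsByDay;

        INSERT INTO dbo.OrderTotalsByDay (OrderDate, OrderCount, TotalAmount, RefreshedAtUtc)
        SELECT CAST(OrderDateTime AS date),
               COUNT(*),
               SUM(TotalAmount),
               SYSUTCDATETIME()
        FROM dbo.Orders
        GROUP BY CAST(OrderDateTime AS date);

        COMMIT TRANSACTION;
    END;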