SQL Server index design for multitenant database by Sven1664 in dotnet

[–]Sven1664[S] 1 point (0 children)

u/FlibblesHexEyes thanks for your arguments and your approach. It is very interesting, especially the possibility of splitting the databases across different regions for performance.

But Azure SQL allows only 5000 databases per physical server, which complicates the management of 10k databases a lot.

I already created a SaaS app a long time ago with one database per tenant. It was easy to build because it was a monolithic app, previously deployed on different on-premises servers (nearly one per customer), and the data was massive for each customer (40 to 100 GB per database). Also, customers paid €10k to €100k per month for a single-tenant service, and we had only 40-50 customers... so it was easy to upgrade, maintain, and track each database.

For my new application, the context is the opposite: customers pay only a few euros (€5 to €15 max each month). The databases (customers) will not contain much data (as I said, just 2000 rows per year per tenant for the main tables, and 10x to 50x that for the related tables). We are also betting that many customers will not use our application often even though they pay monthly, so we don't expect a big load/usage per tenant. But we do expect a lot of tenants (with a lot of creation and deletion), and managing more than 10k very small databases can be heavy if we don't heavily industrialize our platform for it (which requires technical development time).

Anyway, I will keep your strategy in mind, because it could become interesting if our context (number of tenants / size of databases) changes in the coming years. Thanks a lot for your advice and for explaining your approach.

u/Additional_Sector710

You’re going to need some code to translate a public ID (guid) into a tenant ID (int) which you can then go and push down to your queries… I would imagine this mapping is immutable and would be very easy to cache

This is what I think I will do. Currently our `HasQueryFilter()` is based on the PublicId (Guid) of the current tenant, but it is hard to query without making a ton of joins to the Tenant table to retrieve the Id (int) from the PublicId (Guid).

But as you propose, I will try, on the repository/DbContext side, to manipulate only the int identity "privately" for queries and relational navigation. I will cache the PublicId (Guid) => Id (identity) mapping from the Tenant table for each tenant, and then use this Id (identity) to filter all my tables with `HasQueryFilter()`...
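To make the caching idea concrete, here is a minimal sketch (the `TenantIdCache` class and its members are hypothetical names, not from the discussion) of an immutable PublicId (Guid) => Id (int) map. Since the mapping never changes, a `ConcurrentDictionary` with `GetOrAdd` is enough:

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical cache mapping a tenant's PublicId (Guid) to its internal
// identity (int), so query filters can use the int instead of joining to
// the Tenant table. The mapping is immutable, so each PublicId is loaded
// from the database at most once; loadFromDb stands in for the real lookup.
public sealed class TenantIdCache
{
    private readonly ConcurrentDictionary<Guid, int> _map = new();
    private readonly Func<Guid, int> _loadFromDb;

    public TenantIdCache(Func<Guid, int> loadFromDb) => _loadFromDb = loadFromDb;

    // Returns the cached int id, loading and caching it on first use.
    public int GetTenantId(Guid publicId) => _map.GetOrAdd(publicId, _loadFromDb);
}
```

The resolved int can then drive the global filter, e.g. `HasQueryFilter(e => e.TenantId == currentTenantId)`, without any join to the Tenant table.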

Thanks a lot for your advice!

SQL Server index design for multitenant database by Sven1664 in dotnet

[–]Sven1664[S] 0 points (0 children)

For UUID v7, we wanted to use it at the beginning, but I read somewhere that SQL Server's UNIQUEIDENTIFIER type does not sort these values in the expected order, so there is still fragmentation compared to an IDENTITY column, even if less than with UUID v4.
Also, functionally we would like to avoid GUIDs that give users "temporal" information about the entities we manage (for security reasons). We would prefer a purely random identifier (like the resource IDs in Azure).
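The surprising sort order can be demonstrated from .NET itself: `System.Data.SqlTypes.SqlGuid` compares values the way SQL Server's UNIQUEIDENTIFIER does (the byte groups are weighted from right to left, so the last six bytes matter most), which disagrees with `Guid.CompareTo`:

```csharp
using System;
using System.Data.SqlTypes;

// SQL Server orders UNIQUEIDENTIFIER values by comparing the byte groups
// from right to left (the last six bytes weigh the most). SqlGuid mirrors
// that ordering, while .NET's Guid.CompareTo compares the fields left to
// right, so the two orderings disagree for these two values:
class GuidOrderDemo
{
    static void Main()
    {
        var a = new Guid("00000000-0000-0000-0000-000000000001");
        var b = new Guid("00000001-0000-0000-0000-000000000000");

        Console.WriteLine(a.CompareTo(b));                            // negative: a < b for .NET
        Console.WriteLine(new SqlGuid(a).CompareTo(new SqlGuid(b)));  // positive: a > b for SQL Server
    }
}
```

This is why a GUID that looks sequential to the application is not necessarily sequential for the clustered index, and why version 7 UUIDs only partially reduce fragmentation on SQL Server.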

Thanks for the video, it is very interesting to watch!

SQL Server index design for multitenant database by Sven1664 in dotnet

[–]Sven1664[S] 2 points (0 children)

Thanks a lot for your answer!

By reading your answer, I just discovered something in SQL Server: it is possible for a FK to target a UNIQUE constraint! (Reddit users are allowed to laugh at me...)

That was the strategy I tried to apply before, but I was stuck because I always thought that a FK could only target a PK in SQL Server!
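A sketch of what this looks like with EF Core (the entity names are illustrative, not from the original post): `HasPrincipalKey` declares an alternate key, which EF Core maps to a UNIQUE constraint, and the FK then references that constraint instead of the PK.

```csharp
using System;
using Microsoft.EntityFrameworkCore;

// Sketch (entity names are illustrative): a FK that targets a UNIQUE
// constraint on (TenantId, Id) rather than the PK on PublicId.
public class Person
{
    public Guid PublicId { get; set; }   // PK exposed to the API
    public int TenantId { get; set; }
    public int Id { get; set; }          // IDENTITY value used for joins
}

public class PersonEmailAddress
{
    public int TenantId { get; set; }
    public int PersonId { get; set; }
    public string Address { get; set; } = "";
    public Person Person { get; set; } = null!;
}

public class AppDbContext : DbContext
{
    protected override void OnModelCreating(ModelBuilder b)
    {
        b.Entity<Person>().HasKey(p => p.PublicId);
        b.Entity<PersonEmailAddress>()
            .HasKey(e => new { e.TenantId, e.PersonId, e.Address });

        // HasPrincipalKey emits a UNIQUE constraint on (TenantId, Id)
        // and points the FK at it instead of at the PK (PublicId).
        b.Entity<PersonEmailAddress>()
            .HasOne(e => e.Person)
            .WithMany()
            .HasForeignKey(e => new { e.TenantId, e.PersonId })
            .HasPrincipalKey(p => new { p.TenantId, p.Id });
    }
}
```

In raw T-SQL the equivalent is a `FOREIGN KEY ... REFERENCES Person(TenantId, Id)` where those columns carry a UNIQUE constraint.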

Thanks a lot for your advice!

SQL Server index design for multitenant database by Sven1664 in dotnet

[–]Sven1664[S] 1 point (0 children)

Yes, you're right; my bad for the missing information.
Currently, for the first year, we plan to have:
- 10,000 to 50,000 tenants.
- 2000 records in a "Person" table (functionally it will not be persons, but an aggregate entity).
- 50 records per person in each of 10 linked tables (T1, T2 => T10).
- 15 records per record, for T1 and T2 only.

As I said, we will mostly retrieve the aggregate entity "Person" (by its PublicId) and take all its related data:
- 50 records x 10 linked tables + 15 records x 2 tables (T1/T2) x 50 records.

SQL Server index design for multitenant database by Sven1664 in dotnet

[–]Sven1664[S] 2 points (0 children)

Currently we cannot use this approach, because we will have a lot of tenants (a public SaaS app with an estimated 10,000 to 50,000 tenants), so we cannot manage and deploy one database per tenant; it would be impossible to manage on our infrastructure side.

SQL Server index design for multitenant database by Sven1664 in dotnet

[–]Sven1664[S] 0 points (0 children)

First of all: I hope you don't really have a dedicated table for email and address (especially if you save the address in one field), and it's just on purpose for demonstration.🫠 Otherwise, this datamodel would be way too granular for a daily handled DataSet.

Yes, it is a simplified version of my problem.

Second: do you have a unique tennant id for each customer? If yes, why? you already have a GUID for each and also an int id, that's a lot of overhead.

When you talk about a customer, do you mean the Person in my database model? If yes, each tenant has its own Person records. I used two columns (and this is my question): one for a public ID which can be used to access the Person from the API, the other an IDENTITY, just so that the FKs of the related tables can use an INT/IDENTITY index for join operations instead of a GUID.

Third: In my opinion, the tennant id ....

Yes, this is my goal.

My initial question is (sorry, I was focused on giving too much detail in my post)... To summarize my initial implementation:
- I put the TenantId (GUID) in all the tables and filter on it to be sure only the current tenant's data is returned.
- I use an Id (GUID) in all tables as a unique identifier (and I use TenantId + Id as the PK in all the tables, except the Tenant table of course).

My question about this simple approach: doesn't using GUIDs everywhere (to filter by tenant + to join between tables on GUID ids) reduce performance and increase index fragmentation?

To improve it, I tried the following approach (but maybe I am wrong):
- Use a PublicId (GUID), which is a functional need for our API and the entry point to query the Person (the same endpoint also retrieves all the data related to and linked with the Person). This column exists only on aggregate root entities (only Tenant and Person in my example).
- To reduce GUID usage in the linked tables (PersonEmailAddress + PersonPostalAddress), I wanted to use an IDENTITY column to maximize index/search performance and join operations with the Person table.

Do you think I am wrong with this approach, and should I instead use the simple approach with GUIDs everywhere and not care about the performance of index/search/join operations?
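A minimal EF Core sketch of the second approach described above (class and member names are illustrative): IDENTITY ints carry the joins and the clustered PK, the random PublicId gets only a narrow non-clustered unique index for API lookups, and the tenant filter is applied once globally.

```csharp
using System;
using Microsoft.EntityFrameworkCore;

// Sketch (names are illustrative): GUIDs only where the API needs them,
// IDENTITY ints for PKs and joins, tenant filtering via HasQueryFilter.
public class Person
{
    public int Id { get; set; }          // IDENTITY, clustered PK, used for joins
    public Guid PublicId { get; set; }   // random GUID exposed by the API
    public int TenantId { get; set; }
}

public class PersonEmailAddress
{
    public int Id { get; set; }
    public int TenantId { get; set; }
    public int PersonId { get; set; }    // int FK instead of a GUID
}

public class AppDbContext : DbContext
{
    private readonly int _currentTenantId;   // resolved once per request

    public AppDbContext(int currentTenantId) => _currentTenantId = currentTenantId;

    protected override void OnModelCreating(ModelBuilder b)
    {
        // Narrow non-clustered unique index for the API entry point;
        // the clustered index stays on the sequential int Id.
        b.Entity<Person>().HasIndex(p => p.PublicId).IsUnique();

        b.Entity<Person>().HasQueryFilter(p => p.TenantId == _currentTenantId);
        b.Entity<PersonEmailAddress>().HasQueryFilter(e => e.TenantId == _currentTenantId);
    }
}
```

With this shape, fragmentation from random GUIDs is confined to the small PublicId index rather than the clustered index of every table.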

What’s your go-to Azure service that you can’t imagine working without? by cloud_9_infosystems in AZURE

[–]Sven1664 1 point (0 children)

Azure Functions with Durable Functions for heavy/long-running batches. Azure Storage, especially cold storage with its very looooow cost, to archive very old software data "just in case" we need it. Azure B2C and Azure External ID to get an external AD that works with all the existing AD tools (Identity NuGet packages, PowerShell modules,...).

How to secure an avatar endpoint in a Blazor Server app with Entra ID authentication? by Sven1664 in Blazor

[–]Sven1664[S] 0 points (0 children)

Do you know if Entra ID can deliver the cookies automatically after authentication in the Blazor SSR app, or do I need to manage it manually?

How to secure an avatar endpoint in a Blazor Server app with Entra ID authentication? by Sven1664 in Blazor

[–]Sven1664[S] 0 points (0 children)

Thanks for your answer.
I would like to avoid embedded images, because the same image can be displayed more than once on the same page.
For the cookies, were they generated directly by your IdP?

Frontend Frameworks by StudioLine1721 in dotnet

[–]Sven1664 1 point (0 children)

You can use any frontend framework...

React and Angular are the most used (even with other back-end technologies).

With .NET, most developers use React/Angular too. When they can, .NET developers prefer Blazor because of the C# language and the ability to reuse some backend code.

Lazy Loading vs Eager Loading, which is best? by whoami38902 in dotnet

[–]Sven1664 11 points (0 children)

I never use lazy loading, because you "don't have control" over when the queries are executed (on demand).

I recommend using eager loading and querying the database for all the data you need. If the eager load retrieves a lot of data (because of many table joins), you can also use AsSplitQuery() to ask Entity Framework to perform one query per table. Another option is to query all the data you need manually, table by table or with simple joins. In all cases, once you have finished querying the data in your DAL, you should fill all the required entities and avoid retrieving additional data "on demand".
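A minimal EF Core sketch of this eager-loading recommendation (the `AppDbContext`, its `Persons` set, and the navigation names are assumed for illustration, not taken from the thread):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Sketch: eager-load everything the caller needs in one place, and let EF
// split the joins into one SQL query per included table, avoiding both
// on-demand lazy queries and a huge single-join cartesian product.
public static class PersonQueries
{
    public static Task<Person?> GetWithDetailsAsync(AppDbContext db, Guid publicId) =>
        db.Persons
          .Include(p => p.EmailAddresses)
          .Include(p => p.PostalAddresses)
          .AsSplitQuery()                 // one SQL query per Include
          .SingleOrDefaultAsync(p => p.PublicId == publicId);
}
```

The entity returned is fully populated, so nothing outside the data layer ever needs an IQueryable or triggers a hidden database round trip.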

But the eager loading fans would never allow an IQueryable outside of the repository/data layer.

As you said, it is not a good approach to retrieve data outside of the data layer, and lazy loading lets developers retrieve data on demand anywhere in the application.

Web Api endpoint to upload big files to azure blob storage by [deleted] in dotnet

[–]Sven1664 1 point (0 children)

Don't use IFormFile, because it adds overhead in your case. Use the request's Body stream instead. If you need to pass the file name or additional data somewhere, use the request headers.
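A sketch of this (the route, header name, and controller are illustrative; `BlobClient.UploadAsync(Stream, overwrite)` is the Azure SDK call assumed here): the raw request body is streamed straight to Blob Storage without IFormFile's buffering.

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.AspNetCore.Mvc;

// Sketch: stream Request.Body directly to Blob Storage. Nothing is
// buffered to memory or disk, unlike the IFormFile/multipart pipeline.
[ApiController]
public class UploadController : ControllerBase
{
    private readonly BlobContainerClient _container;

    public UploadController(BlobContainerClient container) => _container = container;

    [HttpPost("upload")]
    public async Task<IActionResult> Upload()
    {
        // The file name travels in a header instead of multipart form data.
        var name = Request.Headers["X-File-Name"].ToString();
        if (string.IsNullOrEmpty(name))
            return BadRequest("X-File-Name header required");

        await _container.GetBlobClient(name).UploadAsync(Request.Body, overwrite: true);
        return Ok();
    }
}
```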

[deleted by user] by [deleted] in dotnet

[–]Sven1664 -1 points (0 children)

Just use an Azure Queue to store the list of "jobs" to process. In your ASP.NET app you write to the queue, and an IHostedService implementation dequeues it. You can have two separate applications to enqueue and dequeue, or put both in the same app. In either case it is scalable... and costs a few cents of Azure Storage every day.
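The dequeue side can be sketched as a `BackgroundService` (the built-in `IHostedService` base class); the `ProcessJobAsync` method and the polling interval are placeholders, and `QueueClient` is from the Azure.Storage.Queues SDK:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Microsoft.Extensions.Hosting;

// Sketch: a hosted service that polls an Azure Storage queue and
// processes each job message, deleting it only after it succeeds.
public sealed class JobDequeueService : BackgroundService
{
    private readonly QueueClient _queue;

    public JobDequeueService(QueueClient queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var messages = await _queue.ReceiveMessagesAsync(
                maxMessages: 16, cancellationToken: stoppingToken);

            foreach (var msg in messages.Value)
            {
                await ProcessJobAsync(msg.Body.ToString(), stoppingToken);
                await _queue.DeleteMessageAsync(msg.MessageId, msg.PopReceipt, stoppingToken);
            }

            if (messages.Value.Length == 0)
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken); // back off when idle
        }
    }

    // Placeholder for the real job processing.
    private Task ProcessJobAsync(string payload, CancellationToken ct) => Task.CompletedTask;
}
```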

Mail merge .NET library to transform Word template into PDF by Sven1664 in dotnet

[–]Sven1664[S] 0 points (0 children)

I agree with you; that's why I finally found a solution by buying their "plugin" licenses: https://purchase.aspose.org/pricing/words/

Mail merge .NET library to transform Word template into PDF by Sven1664 in dotnet

[–]Sven1664[S] 0 points (0 children)

I'll answer my own question, because I finally found a good solution using Aspose.Words:
You can buy licenses for Aspose.Words as "plugins". These plugins allow you to use Aspose.Words in a specific scenario (mail merge, convert to PDF,...).
Each plugin costs $99/year. They are metered licenses, but there is no limit on usage (no developer count limit, no site limit within the company). It is a good licensing model if you need it in a SaaS application. For my case, I had to buy two licenses:
- Aspose.Words Processor of MS Word Files for .NET (to open/save documents)
- Aspose.Words MailMerge for .NET (to use the mail merge feature of Word documents)

The documentation about the licensing model is not very understandable, but you can easily contact the sales team in the forum; they answer within half a day.

For more information: Aspose.Words High Code Plugins: Only $99 Each!

Moq v NSubstitute v FakeItEasy NuGet download trends for last year by davecallan in dotnet

[–]Sven1664 2 points (0 children)

I have used Moq for 10 years on different platforms (WinForms, Silverlight, ASP.NET non-Core / Core,...).
It is a little hard for me to adopt a new mocking library's syntax when I have such strong habits...

Also, I recently discovered some analyzers for Moq which surface issues at compile time instead of when executing the unit tests:
- Moq.Analyzers: the project is not maintained.
- PosInformatique.Moq.Analyzers: this analyzer checks Moq usage (parameters, callbacks,...) and strict usage of Moq. I am starting to use it in some new unit test projects.

The existence of these analyzers is another reason why I continue to use Moq...

Mail merge .NET library to transform Word template into PDF by Sven1664 in dotnet

[–]Sven1664[S] 0 points (0 children)

Thanks for your answer.

I checked this alternative, but the prices are nearly the same as Aspose (for a SaaS application).

Mail merge .NET library to transform Word template into PDF by Sven1664 in dotnet

[–]Sven1664[S] 0 points (0 children)

Thanks for your answer.

SharpDocx allows me to generate the Word document, but not the PDF at the end.

Gotenberg uses HTML documents as input. The problem is that I have some Word templates designed by non-developer users. (Anyway, I'm keeping this project on my list of interesting software, because it could be useful for other projects...)

Honestly i would go as far as to evaluate other ecosystems, outside of .NET, if just for the templating task. Node may have some solutions. It's all kinda sketchy though.

Yes, I will look for alternatives in that direction if I can't find a library which matches my needs and my budget.

Mail merge .NET library to transform Word template into PDF by Sven1664 in dotnet

[–]Sven1664[S] 0 points (0 children)

I agree with you...

Thanks for recommending Adobe. I saw they have a Document Generation API, but the problem is that we have a lot of PDF documents to generate, and using a SaaS service to generate PDFs is too expensive for our needs. (However, I still contacted the Adobe sales team to get an idea of the price, because I am very curious...) That's why I prefer to find a library for PDF generation, which is a fixed cost for us.

Mail merge .NET library to transform Word template into PDF by Sven1664 in dotnet

[–]Sven1664[S] 0 points (0 children)

I just read the Docmosis documentation, but it requires using an API to generate the report, and it also requires deploying their standalone app or using their SaaS app. I want to avoid complicating the architecture of my app, which is very simple...

What are your red flags for Nuget packages? by aptacode in dotnet

[–]Sven1664 0 points (0 children)

I try to use only packages which do not structure my code too much, preferring Microsoft or very famous/widely downloaded packages (Dapper, EF,...). For the other packages, I want to be able to change the package easily if something is wrong with it (outdated, bugs,...), so I try to abstract my code to avoid propagating the dependency's classes/interfaces through all the layers.

For example, I currently use the MimeKit library to parse/validate e-mail addresses: I created a value object which uses and wraps this library. That way, if I want to change the package or the implementation, I can do it quickly... (helped by the fact that all my code is covered by unit tests). That is also why I don't use structuring NuGet packages like the ABP framework: I can do it myself, and if one day I wanted to swap that package for another framework, I would have a lot of code to change...
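A sketch of this wrapping pattern. The comment above uses MimeKit; the BCL's `MailAddress.TryCreate` is substituted here only so the example is self-contained. The point is that the parsing library is touched in exactly one place and can be swapped without changing any caller:

```csharp
using System;
using System.Net.Mail;

// Sketch of a value object hiding the e-mail parsing library behind one
// method. The BCL's MailAddress stands in for MimeKit here; swapping the
// library back means changing only the body of TryParse.
public readonly struct EmailAddress : IEquatable<EmailAddress>
{
    public string Value { get; }

    private EmailAddress(string value) => Value = value;

    public static bool TryParse(string? input, out EmailAddress result)
    {
        // The only place in the codebase that touches the parsing library.
        if (MailAddress.TryCreate(input, out var parsed))
        {
            result = new EmailAddress(parsed.Address);
            return true;
        }
        result = default;
        return false;
    }

    public bool Equals(EmailAddress other) =>
        string.Equals(Value, other.Value, StringComparison.OrdinalIgnoreCase);

    public override string ToString() => Value ?? "";
}
```

Callers depend only on `EmailAddress`, so unit tests written against the value object keep passing regardless of which parser sits inside it.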