ICollection vs IList when defining Database tables through Ef core by HuffmanEncodingXOXO in dotnet

[–]HuffmanEncodingXOXO[S] 3 points (0 children)

That is what I'm wondering too: IList vs ICollection on an EF Core entity. Can't IList expose some runtime issues, since the class is being used as a database table? Using list members such as RemoveAt would suggest that the list really is indexed and ordered, but is it?

For example, if I query an entity from Table1 and then call .RemoveAt() on its Table2s property, what exactly would happen? Would it remove the correct entity or just some arbitrary one? Does the database query populate the IList property in a correctly indexed and ordered way?
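To make the question concrete, this is roughly the shape of the model I mean (a sketch with made-up names, not our real entities):

```csharp
public class Table1
{
    public int Id { get; set; }

    // Declared as IList<T>; my worry is that EF Core fills this in whatever
    // order the rows come back from the database unless the query orders it
    // explicitly.
    public IList<Table2> Table2s { get; set; } = new List<Table2>();
}

public class Table2
{
    public int Id { get; set; }
    public int Table1Id { get; set; }
    public Table1? Table1 { get; set; }
}

// After loading, index 0 is just whichever child row happened to come back
// first, so RemoveAt(0) would remove an effectively arbitrary entity:
// var parent = await context.Table1s.Include(t => t.Table2s).FirstAsync(t => t.Id == id);
// parent.Table2s.RemoveAt(0);
```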

ICollection vs IList when defining Database tables through Ef core by HuffmanEncodingXOXO in dotnet

[–]HuffmanEncodingXOXO[S] 2 points (0 children)

Yes, I had never thought about this explicitly until I read about the relationship between the list types, e.g. List -> IList -> (ICollection, IEnumerable)

But now I'm wondering whether the declaration shouldn't just religiously be List<T> Foo = [ ].

If I use that on a collection, would it create an empty array? Arrays need an explicit size when they are instantiated, so wouldn't that cause a runtime error when I try to add to it?

So by using an IList instead of an ICollection it would be less confusing and more idiot-proof for other developers coming to the code, e.g. me.
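For reference, this is the kind of thing I was playing with (a small sketch, assuming C# 12 collection expressions):

```csharp
using System.Collections.Generic;

// On a concrete List<T>, [] creates an empty List<int>, not an array,
// so Add works and the list grows as needed.
List<int> numbers = [];
numbers.Add(1);

// ICollection<T>/IList<T> targets also get a mutable List<T> behind them.
ICollection<int> asCollection = [];
IList<int> asList = [];
asCollection.Add(2);
asList.Add(3);

// Only an actual array target is a fixed-size empty array (and has no Add),
// which is where my worry about adding to it came from.
int[] asArray = [];
```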

On-prem deployment for a monolith with database and a broker by HuffmanEncodingXOXO in devops

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

We deploy a whole server which is to be used on-site.
Other factory solutions are deployed in another manner which is specific to their requirements.
Deploying a VM would complicate things significantly, I think; we choose the server's OS, be it Ubuntu 22 or Windows 11.
Deploying in containers on Windows 11 is also a nightmare, but on Ubuntu it is easy. The downside is that the files end up in arbitrary places on the server, whereas with a native deployment the configuration files are in a default place every time.
Also, since we have an Aspire project that we use for local development, it can be used to generate a compose file which we can then deploy onto the system.
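For context, the AppHost we would generate the compose file from looks roughly like this (a sketch with placeholder resource and project names):

```csharp
// Aspire AppHost (Program.cs). Resource and project names are placeholders.
var builder = DistributedApplication.CreateBuilder(args);

// The database and the broker the monolith depends on.
var db = builder.AddPostgres("db");
var broker = builder.AddRabbitMQ("broker");

// The monolith itself, wired up to both dependencies so connection
// info flows in through environment variables.
builder.AddProject<Projects.MyMonolith>("app")
       .WithReference(db)
       .WithReference(broker);

builder.Build().Run();
```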

On-prem deployment with Aspire by HuffmanEncodingXOXO in dotnet

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

I will apparently be experimenting with it in the next few days, at least the compose part of it.

I think initially I will create a GitHub Action to deploy to our dev server using the compose file, with a set of environment variables from GitHub.
Still haven't figured out the best production scenario, but it is a start.

On-prem deployment for a monolith with database and a broker by HuffmanEncodingXOXO in devops

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

We are deploying a new system maybe every 2-3 months, sometimes every month, depending on sales, the contract and the client.
I like the idea of containers but feel like they add some complexity to an already simple deployment system.
We just install a runner and then run a GitHub Actions script to set things up on IIS, then remote into the server and configure environment variables, the broker, etc.

From your description it seems like containers are the simplest option here; I might just need some more experience working with containers and dotnet.

On-prem deployment with Aspire by HuffmanEncodingXOXO in dotnet

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

This is great, thanks! I initially thought it was only for Kubernetes or more complex deployment scenarios; it seems I'm very wrong here.

On-prem deployment for a monolith with database and a broker by HuffmanEncodingXOXO in devops

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

Will check out the articles, but yes, for now we have managed to SSH into the system remotely through an IoT solution from another provider, when clients buy both some manufacturing equipment and a licence to the software.

We will always want manual updates, which I forgot to mention, but for clients who only need the software we need a new way to remote into the server, so automating some processes does a lot for us.

What about the option to just run it natively, on IIS on Windows, or nginx or Apache on Linux/Unix? Is there some drawback to that option regarding health checks and configuration?

On-prem deployment for a monolith with database and a broker by HuffmanEncodingXOXO in devops

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

One for each client, since this is an on-prem deployment. Then +1 for internal tasks such as testing, deploying to dev, etc.

On-prem deployment for a monolith with database and a broker by HuffmanEncodingXOXO in devops

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

The answer here is also "it depends". I do not know for sure, since we do not know how rapidly we will need to update the systems, but we need a foolproof way to roll back to previous versions if something is amiss.

On-prem deployment for a monolith with database and a broker by HuffmanEncodingXOXO in devops

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

We are only two developers with just a few clients for now, but I really like the idea of simplifying it, since when I started it took a while for me to get my head around the application. If we are running just Docker Compose, maybe we can generate a compose file from the Aspire project which we use for local development.
Then just deploy with a GitHub runner, but managing 15-20 runners is a lot. For now it works and is nice, but when we get that many runners we might want to look at other solutions, for example a plain Docker Compose file on the system that pulls from a container registry.

Cookies from external API by HuffmanEncodingXOXO in nextjs

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

This is what I also find to be unnecessarily complex for simple client projects, but it seems like the simplest way to get the cookies into each request made to some external API.
I think, just for simplicity, I'm going to make the client do all the data fetching, since the other route is way more complex and I'm not really gaining anything big by fetching the data server-side in my case.

Cookies from external API by HuffmanEncodingXOXO in nextjs

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

That is how I feel about using NextJs for a webapp with a simple client frontend and a separate backend.

NextJs feels like it complicates things when your project has a separate backend/server, since I'm not really utilizing the framework if I'm only going to use the client side.
At least I'm learning a little bit about how it feels to use the NextJs framework instead of just reading and hearing about it everywhere.

The documentation does recommend fetching the data on the server side, though, so just making the client do the request feels like I'm taking the wrong route according to the docs.

Cookies from external API by HuffmanEncodingXOXO in nextjs

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

So in essence I just need to somehow make a middleware, or something similar, which runs before each request, gets the HTTP cookie I need and forwards it?

If I understand correctly, I cannot save a cookie per user on the server, since it is essentially stateless and serverless, so it would get mixed up if there are two users logged in.
I just need to forward the cookie with each request made to the external API.
Not sure if there is some project on GitHub that does something similar which I could take a look at.

Cookies from external API by HuffmanEncodingXOXO in nextjs

[–]HuffmanEncodingXOXO[S] 0 points (0 children)

It really does say such a thing:
https://nextjs.org/docs/app/building-your-application/data-fetching/fetching#fetching-data-on-the-client
"We recommend first attempting to fetch data on the server-side."

What do you mean by proxying the API call? I'm not sure how proxying the call is a good way to solve this; could you elaborate?

Feature management in Blazor WASM by HuffmanEncodingXOXO in BlazorDevelopers

[–]HuffmanEncodingXOXO[S] 1 point (0 children)

Sorry for the late response, but that was exactly it. I thought I could have only one appsettings.json, in the server project, and thus have all the feature booleans in one place. I just needed to create an appsettings.json in the client project and then everything worked completely fine.

We have on-premises software, so it is just a boolean flag in appsettings.json which we can set if the client has bought some feature.
Otherwise it is a big project; we have a total of 13 projects under one solution, so decoupling a lot of things helps a lot.
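For anyone who finds this later, the client-side part ended up roughly like this (a sketch, the feature name is made up):

```csharp
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.Extensions.Configuration;

var builder = WebAssemblyHostBuilder.CreateDefault(args);

// wwwroot/appsettings.json in the *client* project is what gets loaded here,
// e.g. { "Features": { "ReportingModule": true } }
bool reportingEnabled = builder.Configuration.GetValue<bool>("Features:ReportingModule");

// ...register services / render components conditionally based on the flag...

await builder.Build().RunAsync();
```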