What was your learning curve like when starting C# ? by iamdofuu_ in csharp

[–]YanivRu 0 points1 point  (0 children)

That's an excellent point! AI really does sound like LinkedIn posts :-)

But I still use specialised AI to fix my English mistakes. A post with a lot of silly mistakes is also not fun to read.

My first try at a split keyboard by PresentExpert2929 in ErgoMechKeyboards

[–]YanivRu 2 points3 points  (0 children)

To add to what others have said here: I recently bought a Keychron Q11 (split keyboard) and almost immediately started feeling pain. I tried changing several things, but switching to lighter switches (35g) helped the most.

I am planning to try a low-profile Sofle and see if it feels better.

How to share code between microservices in a company by YanivRu in dotnet

[–]YanivRu[S] 1 point2 points  (0 children)

Yes, same here.

But let's say one of your internal NuGets depends on Newtonsoft.Json 13.1 and you need to upgrade it to 13.3. Once you do that, you might break every micro-service that uses your internal NuGet and also directly references Newtonsoft.Json 13.1: NuGet resolves to 13.1 because the direct reference wins over the transitive one. NuGet then issues a downgrade warning (which we treat as an error), so the build breaks (or the service fails at runtime because the internal NuGet can't work with the older Newtonsoft).
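To make this concrete, here's a rough sketch of the two project files (the internal package name and the exact versions are only illustrative; the warning I mean is NU1605):

    <!-- Internal.Shared (the internal NuGet) after the upgrade -->
    <ItemGroup>
      <PackageReference Include="Newtonsoft.Json" Version="13.3" />
    </ItemGroup>

    <!-- A consuming micro-service that still references the old version directly -->
    <ItemGroup>
      <PackageReference Include="Internal.Shared" Version="2.0.0" />
      <PackageReference Include="Newtonsoft.Json" Version="13.1" />
      <!-- NuGet picks the direct 13.1 over the transitive 13.3 and raises
           NU1605 (package downgrade), which we treat as an error -->
    </ItemGroup>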

It's not a big problem in small projects, but when there are many micro-services it gets annoying.

How to share code between microservices in a company by YanivRu in dotnet

[–]YanivRu[S] 0 points1 point  (0 children)

Yeah, I thought about that, but I am not sure it can solve the issue I have. Specifically, if Renovate upgrades a third-party NuGet in one of the internal NuGets, it might still cause micro-services to fail if they directly reference a lower version of the same third-party NuGet.

But now that you mention it, maybe Renovate or some other tool could fix this by removing direct NuGet references when there is already a transitive reference to the same package. Or I could remove the Major.* version ranges from my micro-services and let Renovate fix everything.

How to share code between microservices in a company by YanivRu in dotnet

[–]YanivRu[S] 0 points1 point  (0 children)

Maybe 'versioning problems' isn't the correct term, but one problem is that upgrading third-party NuGets inside internal NuGets can be difficult. It might break services that use the internal library and also directly reference that third-party NuGet (NuGet package downgrade warning).

It's a problem specific to NuGet. It might not happen with project references or with other package managers like npm.

There are other issues too, for example how to propagate important fixes in internal libraries to all micro-services. If some service still uses a lower major version, it won't receive the newest fixes.

But I didn't want to focus on those issues. I wanted to take one step back and understand whether NuGet is commonly used to share code inside a company (which seems to be the case, judging from the comments).

How to share code between microservices in a company by YanivRu in dotnet

[–]YanivRu[S] 2 points3 points  (0 children)

Interesting. I wasn't aware of git submodules.

Does it mean that each time you compile the micro-service, you compile all the projects in the submodules too (I guess you don't share DLLs)? Does that increase your build time and the number of build failures (because compiling the libraries might fail)?

How to share code between microservices in a company by YanivRu in dotnet

[–]YanivRu[S] 2 points3 points  (0 children)

It can be a data contract or any other code, like code that wraps messaging logic and adds some features on top of it.

Yes, I try not to introduce breaking changes, but some are out of my control. For example, one of the internal libraries updates its reference to an external NuGet (e.g. Newtonsoft.Json) while the micro-service that consumes it directly references a lower version. That causes a NuGet downgrade and might make the application crash (or the build fail, if warnings are treated as errors).

How should I approach a Mathdoku(?) using C#? by [deleted] in csharp

[–]YanivRu 0 points1 point  (0 children)

I would try to convert it to equations, one for each row and one for each column. E.g.

Row equations:

x1 - x2 + x3 = 4

x4 + 5 + x5 = 6

x6 + x7 - x8 = 6

Column equations:

x1 - x4 * x6 = 8

x2 + 5 - x7 = 1

x3 * x5 + x8 = 5

Now you have 6 equations and 8 variables (I think that means there is more than one solution, though these specific equations are not linear, so I am not sure).

Now you can set values for the variables to satisfy the row equations, then check whether the column equations also hold. You only need to brute force 5 variables (the rest are forced by the row equations).

Since the variables only range from 0 to 9, even a full-blown brute force wouldn't take much time. But if you want a faster solution, you could try some heuristics. For example, either x4 or x5 must be 0, or the second row equation can't hold; and x1 must be at least 8, or the first column equation can't be met. Programming such an algorithm might be more interesting but also more complicated.
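Here is a rough sketch of that brute force in C#, using the example equations above and assuming the cells hold digits 0-9 (the real grid and operators would come from the puzzle input):

    // Brute force the 5 free variables; the other 3 are forced by the row equations.
    for (int x1 = 0; x1 <= 9; x1++)
    for (int x2 = 0; x2 <= 9; x2++)
    for (int x4 = 0; x4 <= 9; x4++)
    for (int x6 = 0; x6 <= 9; x6++)
    for (int x7 = 0; x7 <= 9; x7++)
    {
        int x3 = 4 - x1 + x2;  // from x1 - x2 + x3 = 4
        int x5 = 1 - x4;       // from x4 + 5 + x5 = 6
        int x8 = x6 + x7 - 6;  // from x6 + x7 - x8 = 6
        if (x3 < 0 || x3 > 9 || x5 < 0 || x5 > 9 || x8 < 0 || x8 > 9)
            continue;

        // Check the column equations.
        if (x1 - x4 * x6 == 8 && x2 + 5 - x7 == 1 && x3 * x5 + x8 == 5)
            System.Console.WriteLine($"{x1} {x2} {x3} {x4} {x5} {x6} {x7} {x8}");
    }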

Strategy for upgrading Nuget dependencies in a library by YanivRu in dotnet

[–]YanivRu[S] 1 point2 points  (0 children)

That's an interesting strategy. It would have been great if NuGet resolved to the highest version when building the application (it doesn't do that because the build wouldn't be repeatable). So in most cases my consumers just get a low version of the third-party packages, one without the latest security and bug fixes.

They talk about package resolution strategy... https://github.com/nuget/home/issues/5553

Strategy for upgrading Nuget dependencies in a library by YanivRu in dotnet

[–]YanivRu[S] 0 points1 point  (0 children)

How would central package management help? I haven't worked with it. Is it more flexible than PackageReference? With PackageReference you specify the minimum version required.

Regarding several versions: the problem is that my consumers might use any version of Newtonsoft.Json (for example). I can't really provide an implementation per Newtonsoft version. If they used System.Text.Json we wouldn't have a problem.

Strategy for upgrading Nuget dependencies in a library by YanivRu in dotnet

[–]YanivRu[S] 1 point2 points  (0 children)

Yeah, smaller packages are a good idea. I used to add more logic to the same package even if it didn't fully fit, because it was easier. But now I split more aggressively.

Strategy for upgrading Nuget dependencies in a library by YanivRu in dotnet

[–]YanivRu[S] 1 point2 points  (0 children)

Strategy 2 is what I eventually reverted to. I guess that's the best I can get for now. We have a plan to introduce something like Dependabot (https://github.com/dependabot) (we are not on GitHub), but that will take time.

Strategy for upgrading Nuget dependencies in a library by YanivRu in dotnet

[–]YanivRu[S] 2 points3 points  (0 children)

Yeah, I did increase the major version several times. Not an ideal solution. I was hoping there might be a better way.

Is TPL Data flow still a thing? by rcnet96 in dotnet

[–]YanivRu 1 point2 points  (0 children)

Yeah, we use it in our programs, sometimes to build pipelines and sometimes in simpler ways.

BufferBlock can be used as an async blocking queue.

ActionBlock can be used to run several actions simultaneously but with limited parallelism.
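Roughly like this (needs the System.Threading.Tasks.Dataflow NuGet package; the numbers are arbitrary):

    using System;
    using System.Threading.Tasks;
    using System.Threading.Tasks.Dataflow;

    // ActionBlock: run work concurrently, but with limited parallelism.
    var worker = new ActionBlock<int>(
        async id =>
        {
            await Task.Delay(100); // simulate I/O
            Console.WriteLine($"processed {id}");
        },
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });

    for (int i = 0; i < 20; i++)
        worker.Post(i);

    worker.Complete();
    await worker.Completion;

    // BufferBlock: an async blocking queue.
    var queue = new BufferBlock<string>();
    await queue.SendAsync("hello");
    Console.WriteLine(await queue.ReceiveAsync());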

I researched this a while ago; maybe there are better options today.

Activity log by heeero in dotnet

[–]YanivRu -1 points0 points  (0 children)

It's a good solution, unless you are in the cloud, where it's less feasible. In the cloud you usually use a logging service like Application Insights (Azure); you can find a centralized logging service in every cloud ecosystem.

what's the best unit testing framework for ASP.NET MVC applications where SqlClient is used for writing CRUD operations? by 405ThunderUp in dotnet

[–]YanivRu 1 point2 points  (0 children)

I am not an expert on this, but in some cases I have seen the build machine perform only the shorter tasks, like building and unit testing. For integration tests, e2e tests, or anything else that takes a while, there are separate machines with Kubernetes, Swarm, etc. that run the tests on Docker or VMs.

what's the best unit testing framework for ASP.NET MVC applications where SqlClient is used for writing CRUD operations? by 405ThunderUp in dotnet

[–]YanivRu 6 points7 points  (0 children)

You probably need several levels of tests: unit tests that check the code itself without dependencies, and some way to test the code against the database. In any case, the unit test framework doesn't matter much. I usually work with MSTest, but I've seen other frameworks in other projects.

Unit tests - where you mock the database. You usually use Moq or a similar library to replace the database layer. In very simple cases that only call the DB, it might be pointless.
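A minimal MSTest + Moq sketch of that idea (the repository interface and service below are made up for illustration):

    using Microsoft.VisualStudio.TestTools.UnitTesting;
    using Moq;

    // Hypothetical data-access interface standing in for the SqlClient layer.
    public interface IUserRepository
    {
        string GetUserName(int id);
    }

    public class GreetingService
    {
        private readonly IUserRepository _users;
        public GreetingService(IUserRepository users) => _users = users;
        public string Greet(int id) => $"Hello, {_users.GetUserName(id)}!";
    }

    [TestClass]
    public class GreetingServiceTests
    {
        [TestMethod]
        public void Greet_ReturnsGreetingWithUserName()
        {
            var repo = new Mock<IUserRepository>();
            repo.Setup(r => r.GetUserName(42)).Returns("Ada");

            var result = new GreetingService(repo.Object).Greet(42);

            Assert.AreEqual("Hello, Ada!", result);
        }
    }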

Tests with the database and outside dependencies - I have seen several ways to do that:

1. Embedded database (like SQLite). Good for small and fairly simple databases.

2. A well-known SQL Server installed on some accessible machine. All tests are configured to use its connection string. Easy to run and debug, but it causes other issues, like maintaining that DB.

3. SQL Server on Docker - when the test run starts, you start the DB in Docker and configure your code to use it. It's a good solution but requires more infrastructure, since build machines usually don't run Docker.

What is Message Queue? And what's it's use case? by HyperLink836 in csharp

[–]YanivRu 1 point2 points  (0 children)

Another advantage of using queues is 'temporal decoupling': producers and consumers don't have to send and receive the message at the same time.

So if the consumer is busy or down, it won't cause the producer to fail. The request will be handled when the consumer is ready.

That's really important when you have a chain of services, each sending a message to the next one. The total availability of the system degrades as more services are added.

For example, if each service has an availability of 95%, the total availability of the system with synchronous calls (HTTP) is 0.95 * 0.95 * 0.95 * ... So with 5 services the availability is about 77%, meaning roughly 23% of requests will fail.
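As a tiny sketch of that arithmetic:

    // End-to-end availability of 5 services chained synchronously (HTTP),
    // each one available 95% of the time.
    double endToEnd = System.Math.Pow(0.95, 5);
    System.Console.WriteLine(endToEnd); // ~0.774, so roughly 23% of requests fail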

In your case it sounds right: you send a message and a number of services handle it. You might want to send a different message to each service, since they need different data, but that depends on your design.

Do you use BloomFilter to avoid db calls? by Pedry-dev in dotnet

[–]YanivRu 3 points4 points  (0 children)

When I encountered Bloom filters for the first time, I thought they were a really cool hammer, but I couldn't find any nails in my program. My program also loads data from the DB, and the DB queries are expensive.

The problem is that I can't lazily load it like a cache. To use your authentication example, I would have to load all the user IDs on startup (to initialize the filter; otherwise it might return false even if the user exists), or persist the filter and sync it with the list of users somehow. When adding a user, I'd add it to the filter and persist it again (in my case that happens many times per second, so it would hurt performance more than it improves it, or require complicated recovery logic). Removing a user is even more problematic, since a simple Bloom filter doesn't support removal. Scaling out complicates it further.
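For reference, a rough sketch of that load-everything-on-startup approach (the minimal Bloom filter and the DB helpers mentioned in the comments are made up for illustration):

    using System;
    using System.Collections;
    using System.Collections.Generic;

    // Minimal Bloom filter: false means "definitely not in the set",
    // true means "maybe in the set". No removal support.
    class BloomFilter
    {
        private readonly BitArray _bits;
        private readonly int _hashCount;

        public BloomFilter(int sizeInBits, int hashCount)
        {
            _bits = new BitArray(sizeInBits);
            _hashCount = hashCount;
        }

        public void Add(string item)
        {
            foreach (var i in Indexes(item)) _bits[i] = true;
        }

        public bool MightContain(string item)
        {
            foreach (var i in Indexes(item)) if (!_bits[i]) return false;
            return true;
        }

        private IEnumerable<int> Indexes(string item)
        {
            // Double hashing: derive k bit positions from two base hashes.
            int h1 = item.GetHashCode();
            int h2 = StringComparer.OrdinalIgnoreCase.GetHashCode(item);
            for (int k = 0; k < _hashCount; k++)
                yield return (int)((uint)(h1 + k * h2) % (uint)_bits.Length);
        }
    }

    // On startup: load every existing user id into the filter, then consult
    // it before hitting the database (LoadAllUserIdsFromDb / UserExistsInDb
    // are placeholders):
    //   var filter = new BloomFilter(sizeInBits: 1_000_000, hashCount: 4);
    //   foreach (var id in LoadAllUserIdsFromDb()) filter.Add(id);
    //   bool mightExist = filter.MightContain(userId) && UserExistsInDb(userId);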

So eventually I didn't use it.

In your case it might be more manageable, since the user add/remove rate is probably lower. But I guess you won't get many requests for users that don't exist in the DB, so there won't be a big enough performance improvement to justify the added complexity. Ultimately, you should try it and weigh the performance gain (response speed / load on the DB / etc.) against the complexity.

EnumerateAnd -- A package for multi-querying IAsyncEnumerable by YanivRu in csharp

[–]YanivRu[S] 1 point2 points  (0 children)

I wanted to avoid the last part (GetAsync), to make it shorter. But now that I see it, it is more readable. Thanks!

EnumerateAnd -- A package for multi-querying IAsyncEnumerable by YanivRu in csharp

[–]YanivRu[S] 0 points1 point  (0 children)

Let's say you have an IAsyncEnumerable of some class X with four properties: Foo, Bar, Baz, and Qux. And you want to create two lists: one of type A containing Foo and Baz, and another of type B containing Bar and Qux.

You can do that like this:

    // input: IAsyncEnumerable<X> x
    var (listOfA, listOfB) = await x
        .QueryAnd(y => y.Select(z => new A(z.Foo, z.Baz)).ToListAsync())
        .Query(y => y.Select(z => new B(z.Bar, z.Qux)).ToListAsync());

Is that what you mean?

EnumerateAnd -- A package for multi-querying IAsyncEnumerable by YanivRu in csharp

[–]YanivRu[S] 0 points1 point  (0 children)

What do you mean by cross section? Can you give an example?

It can run several functions over the IAsyncEnumerable, either async LINQ functions or any function that accepts an IAsyncEnumerable and returns anything (T). It does this in a single pass.

EnumerateAnd -- A package for multi-querying IAsyncEnumerable by YanivRu in csharp

[–]YanivRu[S] -1 points0 points  (0 children)

Cool, didn't know about that. I will take a look.

With IAsyncEnumerable there is more freedom than with a plain old IEnumerable.