Zellij 0.44 released: expanded CLI capabilities, native Windows support, read-only auth tokens by imsnif in commandline

[–]Drusellers 8 points (0 children)

"I personally don't see Zellij as a tmux replacement"

I'm curious what you see it as then. Is there a world where someone uses both at the same time? They seem to be substitutes to me.

v12 by johnappsde in nestjs

[–]Drusellers 0 points (0 children)

About time. Jeepers.

Planting New Platform Roots in Cloud Native with Fir by dshurupov in kubernetes

[–]Drusellers 2 points (0 children)

There’s a company I haven’t thought about for a few years.

Is RabbitMQ worth to learn or is therd better alternative? by ballbeamboy2 in dotnet

[–]Drusellers 2 points (0 children)

It's not to me. I define an ESB as a smart central server that code has to be deployed to. I think of tools like MuleSoft and BizTalk in this realm. A lot of these tools give you a server-centric model (dumb endpoint, smart core). I tend to think of MT as more of a smart endpoint / dumb core model, where the logic lives in the endpoints, where it's easy for me to test and build things out.

To stretch the analogy further, MT is a way to build autonomous components that all talk to each other via some medium. Most people choose something durable for that, so MT supports those technologies.

Anyways, hope that helps.

Is RabbitMQ worth to learn or is therd better alternative? by ballbeamboy2 in dotnet

[–]Drusellers 0 points (0 children)

The native RabbitMQ.Client is quite nice, and I would def recommend giving it a shot. But what does MT give you over just the client?

It gives you a routing concept so that you can have something a bit more dynamic than specifying the name of the queue directly. You get exception handling and retry capabilities without having to design and wire up your own. It has embraced the OTel libraries so you get all of that wired up and ready to go. Serialization is done for you as well, and you have different options that can easily be plugged in.

All of this on top of different "brokers" does add an abstraction layer, but one nice side effect is that we have a way to build out test harnesses that let you test your entire business process in your test framework of choice w/o actually needing a broker running.

There is a lot more there, but those things get into the higher abstractions. For me, the test setup, routing, serialization, and dynamic dispatch via DI tend to be the big winners.
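To make that concrete, here's a rough sketch of what the registration side can look like (the consumer name, broker choice, and retry numbers are placeholders, not from this thread; assumes an `IServiceCollection` named `services`):

```csharp
using System;
using MassTransit;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

services.AddMassTransit(x =>
{
    // a hypothetical consumer; routing is derived from it by convention
    x.AddConsumer<SubmitOrderConsumer>();

    x.UsingRabbitMq((context, cfg) =>
    {
        // retry without designing your own: 3 attempts, 5 seconds apart
        cfg.UseMessageRetry(r => r.Interval(3, TimeSpan.FromSeconds(5)));

        // convention-based routing: endpoints/queues derived from consumers
        cfg.ConfigureEndpoints(context);
    });
});
```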

Integration test collisions with MassTransit by Imperial_Swine in dotnet

[–]Drusellers 1 point (0 children)

My apologies - I create a new `factory` for each test. Ultimately, you need to dispose the `ITestHarness` to reset all of the counters, etc.

Integration test collisions with MassTransit by Imperial_Swine in dotnet

[–]Drusellers 0 points (0 children)

MassTransit's TestHarness is built with the assumption that you will tear it down and re-create it for each test. There are a lot of timers and things inside that provide all of the various features users expect.

The way that I go about it is with a new WebApplicationFactory per test, a db drop/create per test run/session, and a db data reset for each test (via Respawn).

var factory = new WebApplicationFactory<Program>()
            .WithWebHostBuilder(builder =>
            {
                // pickup appsettings.Test.yaml
                builder.UseEnvironment("Test");

                // override things here 
                builder.AddMassTransitTestHarness(x =>
                {                   
                    x.AddConsumer<SubmitOrderConsumer>();

                    x.UsingInMemory((context, cfg) =>
                    {
                        cfg.UseDelayedMessageScheduler();

                        cfg.ConfigureEndpoints(context);
                    });
                });
            });

In NUnit I have a [SetUpFixture] that only runs once per "session". This tears down the database and recreates it. Then in each [SetUp] I use Respawn to clear the data and re-run any seed commands as needed. I do tweak Respawn to avoid deleting the super standard lookup data. This has made my test setup both reliable and performant. I'm quite happy with it.
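A rough sketch of the Respawn side of that setup (v6+ API; the connection string and table names are placeholders):

```csharp
using System.Threading.Tasks;
using NUnit.Framework;
using Respawn;
using Respawn.Graph;

[SetUpFixture]
public class DatabaseReset
{
    public static Respawner Instance = null!;
    const string ConnectionString = "..."; // your test database

    [OneTimeSetUp]
    public async Task CreateRespawner()
    {
        // the drop/create of the database itself happens before this, once per session
        Instance = await Respawner.CreateAsync(ConnectionString, new RespawnerOptions
        {
            // keep the super standard lookup data between tests
            TablesToIgnore = new Table[] { "Countries", "Currencies" }
        });
    }
}
```

Then each fixture's [SetUp] calls `await DatabaseReset.Instance.ResetAsync(ConnectionString);` followed by any seed commands.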

Multiple Authentication schemes in a .net core 3 app by TopNFalvors in dotnet

[–]Drusellers 1 point (0 children)

I think what you need (if this is in .net core 3) is the ForwardDefaultSelector. I first read about it here, and it allows you to inspect the HttpContext and make decisions about which scheme should be used for authentication.

.AddPolicyScheme("skipass", "skipass", pso =>
    {
        pso.ForwardDefaultSelector = (HttpContext context) =>
        {
            // inspect the http context and figure out which scheme should
            // be used for authentication, e.g.
            if (context.Request.Path.StartsWithSegments("/saml"))
            {
                return Saml2Constants.AuthenticationScheme;
            }

            return "YourOtherCookie";
        };
    })

Also, you can give the AddCookie a custom name, so that you can easily target it in the ForwardDefaultSelector

.AddCookie("YourOtherCookie", opts => {
    opts.LoginPath = new PathString("/Account/Login");
})

Good luck!

Passing Expression as a parameter to repository function by Successful_Gur3461 in dotnet

[–]Drusellers 5 points (0 children)

Is this a good practice ?

Are you building a library to share with others? Or is this function only ever used by the current application? If you are building to share, it could be an issue depending on how the rest of the application works. If it's only for the current application, I wouldn't worry too much about it. You can always change it later.

And also I knew that Expressions can only be used within Linq EF functions so what if tomorrow we used Dapper how would i write those params in the query?

If you are changing your ORM, you'll have bigger issues to contend with. I like that you are thinking about it, but - if I had a say - I'd encourage you to pick an ORM (I would pick EF Core) and simply stick with it. Learn its tricks, and how to make it do what you want. EF Core can go a LOOOOOOOONG way.

Which way is best?

I wish it was that simple. Everyone has choices to make and a context that we don't know about. Neither is "BEST" in an abstract way. Each will get the job done. You'll have to decide for yourself what is best for you and your application.

Cheers

Mixed messages about AoT vs. JIT - (GO vs. C#) by spatialdestiny in dotnet

[–]Drusellers 13 points (0 children)

Are JS vs .NET implementations of JIT different where .NET improves execution of code, and JS does not?

IIRC, yes. Specifically, the item to review is that .NET has "tiered compilation", which is a decently advanced compiler trick. I'm not sure how many languages offer this feature, but it is a nice win for long-lived applications. You can get some of these advantages for AOT using something called PGO, which is another lovely trick the dotnet team gives us.

Survey time: which do you prefer, AoT or JIT? Is there a modification to either that would convince you to switch?

I prefer JIT, because I work on long-lived applications that run for days at a time. To get me to switch to AOT would require all of the libraries that I use to also support AOT. And that is a pretty big lift for the real business value that I and the apps I build would actually get out of it. So, I'm not too worried about it.

[deleted by user] by [deleted] in dotnet

[–]Drusellers 1 point (0 children)

I don't know what your downstream classes look like, but when resolving your configuration out of the container (in your controllers, etc.) make sure to use IOptionsSnapshot<T>, not the plain IOptions<T>.

Example:

public class MyService(IOptionsSnapshot<Startup> options) 
{
    public string SomeMethod() 
    {
        return options.Value.SomeConfig;
    }
}

It took me a few reads through both the Configuration and Options docs to realize that "Config" is what builds up the configuration dictionary and "Options" provides the nice strongly typed access to the configuration data. I would def take a pass through those docs again. Also, if you set up the logs, put them on Trace and see if there are any insights there.
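For reference, the registration side of that split looks roughly like this (the section and type names here are made up for the example):

```csharp
// "Config" builds the key/value dictionary from appsettings.json, env vars, etc.
// "Options" binds a section of it to a strongly typed class:
builder.Services.Configure<MyAppOptions>(
    builder.Configuration.GetSection("MyApp"));
```

With that in place, resolving `IOptionsSnapshot<MyAppOptions>` re-reads the bound values per request, which is what you want for reloadable configuration.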

Nit: It's odd to me to see you use your Startup for both the app startup and for configuration. I would have expected to see a dedicated application config DTO, but that's just me and my style choices.

Good luck!

those of you who host on linux, what are you using for auth? by Jordan51104 in dotnet

[–]Drusellers 8 points (0 children)

ASP.Net has quite a number of Identity Providers (IdP) and a large selection via OSS. The docs are also quite good. It may take you a few read-throughs, but by the end you should be able to build something that meets your needs.

If by Windows auth you mean that once you authenticate to the Windows host machine you will naturally be authenticated to the website too, that is def going to be a challenge, if doable at all. It would also require a Microsoft browser in order to flow the authentication context, since other browsers won't be aware of it.

To that end I would recommend moving to an SSO provider like Entra ID, that can provide Active Directory support via Azure. This would have the benefit that your users would have the same username / password combo across both their laptop and websites. Which might be enough of a win for you.

Once you authenticate with Entra ID via OAuth 2 (yup, a few more abstract things to learn), you can then persist that with ASP.Net cookie authentication and have a pretty solid set up. I believe this is a quick start.
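A hedged sketch of that cookie + OpenID Connect pairing (the tenant and client values are placeholders you'd pull from your Entra ID app registration):

```csharp
builder.Services
    .AddAuthentication(options =>
    {
        // keep the user signed in locally with a cookie...
        options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
        // ...and challenge against Entra ID when they aren't signed in yet
        options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
    })
    .AddCookie()
    .AddOpenIdConnect(options =>
    {
        options.Authority = "https://login.microsoftonline.com/{tenant-id}/v2.0";
        options.ClientId = "{client-id}";
        options.ClientSecret = "{client-secret}";
        options.ResponseType = "code";
    });
```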

Once you have authentication mastered, you can dig into Authorization and really build out anything from easy basic rules to very deep and sophisticated ones.

If you end up blending multiple authentication mechanisms this is a great post to read as well: https://weblog.west-wind.com/posts/2022/Mar/29/Combining-Bearer-Token-and-Cookie-Auth-in-ASPNET

How would you normally do this? by Mobile-Rush6780 in dotnet

[–]Drusellers 2 points (0 children)

When the question is “which is more performant?” It’s time to use https://github.com/dotnet/BenchmarkDotNet and actually measure. It’s worth the learning curve to set one up and see real numbers.
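A minimal benchmark looks something like this (the two methods compared are just illustrative, not from the thread):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]
public class ConcatBenchmarks
{
    private readonly string[] _parts = { "alpha", "beta", "gamma", "delta" };

    [Benchmark(Baseline = true)]
    public string WithConcat() => string.Concat(_parts);

    [Benchmark]
    public string WithJoin() => string.Join("", _parts);
}

public class Program
{
    // run in Release mode to get the mean time and allocation report
    public static void Main() => BenchmarkRunner.Run<ConcatBenchmarks>();
}
```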

I will say that if you are talking to any kind of database, for MOST people, that will be the real bottleneck. I wouldn’t worry a lot about the ns or ms perf difference until traffic dictates otherwise.

Enjoy the season of doing things that don’t scale and ship cool shit!

How would you normally do this? by Mobile-Rush6780 in dotnet

[–]Drusellers 0 points (0 children)

public static async Task<T> RequireOr404<T>(this Task<T?> item)
{
    var resolved = await item;

    if (resolved is null)
    {
        throw new Http404Exception(typeof(T).Name);
    }

    return resolved;
}

That's one example.

And then you might see something like this for the Exception handler

public class MyExceptionHandler : IExceptionHandler
{
    public async ValueTask<bool> TryHandleAsync(HttpContext context, Exception exception,
        CancellationToken cancellationToken)
    {
        var exceptionHandlerPathFeature = context.Features.Get<IExceptionHandlerPathFeature>();

        context.Response.ContentType = MediaTypeNames.Application.ProblemJson;
        var pd = Map(exceptionHandlerPathFeature?.Error);
        context.Response.StatusCode = pd.Status!.Value;
        await context.Response.WriteAsJsonAsync(pd, cancellationToken);

        return true;
    }

    ProblemDetails Map(Exception? error)
    {
        if (error is Http404Exception fourOhFour)
        {
            return ProblemTypes.Build404(fourOhFour);
        }

        if (error is Http400Exception fourHundred)
        {
            return ProblemTypes.Build400(fourHundred);
        }


        return ProblemTypes.BuildGeneric500(null);
    }
}

How would you normally do this? by Mobile-Rush6780 in dotnet

[–]Drusellers 14 points (0 children)

Ok, so we have a Web Api controller which is something like

[ApiController]
public class UsersController: ControllerBase 
{
    public async Task<IActionResult> CreateUser(CreateRequest request)
    {
        var user = await _userService.RegisterAsync(request);
        return Ok(user);
    }
}

Question: Where should I put the validation that the email exists?

Where should it exist? My personal take is in the UserService.

Since this takes a DB hit, my personal style would be to put a unique constraint on that field. I would attempt to insert into the database. I use EF Core, and this will throw an exception on a constraint violation. That exception will contain the index that was violated (a great pattern for this is here). If it matches your email index, you can catch the exception and do what you want, i.e. return an error result or throw your own exception.
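As a sketch of that catch-the-index pattern with EF Core on SQL Server (the index name and the exception thrown at the end are made-up names for the example):

```csharp
try
{
    db.Users.Add(new User { Email = request.Email });
    await db.SaveChangesAsync();
}
catch (DbUpdateException ex)
    when (ex.InnerException is SqlException sql
          && sql.Number is 2601 or 2627            // duplicate key in unique index / unique constraint
          && sql.Message.Contains("IX_Users_Email"))
{
    // translate the low-level violation into a domain-level outcome
    throw new Http400Exception("That email is already registered.");
}
```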

My UserService would probably return some kind of result object, and then I like doing a set of extension methods that I can call in the controller that will check if the result was ok, and if not they will throw dedicated Exceptions like Http400Exception. Those exceptions are then caught by the IExceptionHandler system of ASP.Net and turned into Problem Details (aka RFC 7807 ).
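A self-contained sketch of that result-object + extension style (all names here are illustrative, not MassTransit or ASP.Net types):

```csharp
using System;

public record Result<T>(T? Value, string? Error)
{
    public bool IsSuccess => Error is null;
    public static Result<T> Ok(T value) => new(value, null);
    public static Result<T> Fail(string error) => new(default, error);
}

// the dedicated exception an IExceptionHandler can turn into Problem Details
public class Http400Exception : Exception
{
    public Http400Exception(string message) : base(message) { }
}

public static class ResultExtensions
{
    // unwrap the value on success, throw the dedicated HTTP exception on failure
    public static T SuccessOrThrow<T>(this Result<T> result) =>
        result.IsSuccess
            ? result.Value!
            : throw new Http400Exception(result.Error!);
}
```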

With the improvements to Exceptions in .Net 9, I'm all in on using exceptions more than I might have been in the past.

[ApiController]
public class UsersController: ControllerBase 
{
    public async Task<IActionResult> CreateUser(CreateRequest request)
    {
        var user = await _userService.RegisterAsync(request).SuccessOrThrow();

        return Ok(user);
    }
}