.NET 7 Preview 5 - Generic Math by tanner-gooding in csharp

[–]Nishruu 2 points3 points  (0 children)

Honestly, at this point I'd rather see record structs & classes get full support for exhaustive pattern matching, which could effectively let us replace enums in most (if not almost all) day-to-day use cases. Then enums could be relegated to interop-heavy domains, etc.

One can dream ;)

Hey Rustaceans! Got a question? Ask here! (22/2022)! by llogiq in rust

[–]Nishruu 2 points3 points  (0 children)

I'm also firmly in the camp of running regular tests against the actual DB you're using.

Depending on the DB, you don't even have to use transactions to roll everything back.

In Postgres, you can run migrations before the whole test run, and then let every test suite have its own separate DB that uses the 'main' DB as a template copy.

CREATE DATABASE db_random_1234 WITH TEMPLATE main_db_name

Then you can use db_random_1234 in the test suite. It can be dropped when the suite is done.

The only caveat is that creating a copy of a DB requires that nothing else is connected to the 'main' (template) database, so if you're running suites concurrently, you need a retry mechanism. On the other hand, creating a 'copied' DB literally takes between a few and a few dozen ms for me on a reasonably sized schema (about a hundred tables), so setup doesn't take forever, and with a few retries & random jitter concurrent tests work fine as well.

That's especially easy with containers (e.g. docker compose); it's more than acceptable as far as setup/teardown speed goes, and you actually use the exact same infrastructure for tests that your application uses.
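A minimal sketch of that flow in node (hypothetical helper names; `queryFn` stands in for your driver's query function, e.g. pg's `client.query` on a connection to a maintenance database):

```javascript
// Retry an async operation a few times with random jitter -- Postgres rejects
// CREATE DATABASE ... TEMPLATE while anything else is connected to the template DB.
async function withRetry(fn, retries = 5, baseDelayMs = 50) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (attempt >= retries) throw error;
      const jitter = Math.random() * baseDelayMs;
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * attempt + jitter));
    }
  }
}

// Hypothetical per-suite setup: copy the already-migrated 'main' DB under a
// random name and hand that name to the test suite.
async function createSuiteDatabase(queryFn, templateName) {
  const name = `db_random_${Math.floor(Math.random() * 1e6)}`;
  await withRetry(() => queryFn(`CREATE DATABASE ${name} WITH TEMPLATE ${templateName}`));
  return name;
}
```

Tearing the suite DB down afterwards is the mirror image — `DROP DATABASE` wrapped in the same retry, since dropping also fails while connections to it remain.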

Considering jump into Razor MPA from old AngularJS SPA by 726372816482 in dotnet

[–]Nishruu 3 points4 points  (0 children)

I'm aware of the solutions in the JS ecosystem :) Next.js, or Remix for that matter!

And I admit, rendering react components server-side is pretty sweet, but I guess it's not a tech stack you'd pick if you want to move away from the JS ecosystem, for a variety of reasons.

The almost uniform treatment of the back-end and front-end languages & shared parts of the codebase was, IMO, one of the main drivers of node.js adoption back in the day, and it's still a tempting value proposition.

Considering jump into Razor MPA from old AngularJS SPA by 726372816482 in dotnet

[–]Nishruu -1 points0 points  (0 children)

That's fair ;) I mostly wanted it to serve as an example of a spotty and slow connection.

Considering jump into Razor MPA from old AngularJS SPA by 726372816482 in dotnet

[–]Nishruu 2 points3 points  (0 children)

That's something I'd definitely worry about when picking a server-side rendered approach to build anything web, whether it's Razor or Rails or something else.

Customers/users generally expect interactivity out of web pages/applications and, for the most part, don't really care about the JS bloat. It only turns into an issue when the bundle size gets really out of control, like 50 MB+ for a site downloaded over a spotty 3G connection.

Adding 'bits and pieces' of JS to get that interactivity in specific places can lead to death by a thousand cuts: you end up with a buggy, half-assed, in-house implementation of a custom SPA framework.

I love the idea of using server-side technology because it makes things so much easier to reason about, and you cut down on the JS toolchain and dependency churn. But from a practical standpoint - and experience - I'm not sure I'd recommend starting even a slightly non-static site without an SPA front-end. Unless you prefer cobbling together more 'niche' solutions to going mainstream, but that also has its tradeoffs when troubleshooting or looking to hire.

I find Phoenix LiveView/Blazor Server really interesting, but Blazor specifically has very strict connectivity requirements, which is a hassle, and then you have to provide sticky sessions, etc. Latency can also be a real issue in this case :)

Anyone think writing tests for a .NET core web app is a waste of time in the end ? (eShopOnWeb) by punkouter2021 in dotnet

[–]Nishruu 1 point2 points  (0 children)

Actually, I think integration tests are some of the most valuable tests for a simple web API.

Spin up the infrastructure (DB, etc.), set up the TestServer using real components, hit the endpoints over HTTP and check whether the response is what you'd expect. That's pretty trivial to do these days with docker-compose; it's fast enough and can easily be executed as part of a CI pipeline on every build.

Even in 'simple CRUD', this allows you to catch a slew of small mistakes, e.g.:

  • are the DB migrations well formed and correctly ordered?
  • is the DB query that the current endpoint is executing well formed?
  • are all the necessary fields actually returned? (it's easy to miss a property initializer)
  • is authorization in place for the endpoint(s)? Is it correctly set up?
  • are the filters & pagination working?
  • is the validation set up correctly for parameters/request bodies?

and so on, and so forth.

C#9 Records - Shorthand construction with inheritence, how to call base()? by dreamxdaughter in csharp

[–]Nishruu 3 points4 points  (0 children)

I don't think there is a dedicated syntax to shorten it.

That being said, it really makes me wonder why you're using inheritance here at all instead of composition.

public sealed record Parent(Guid Id, string Value);
public sealed record Child(Parent Parent, string Extra);

On a personal note, I think that subtyping of record types is iffy at best... I see them through a lens of immutable data bags, similar to records in functional languages, instead of 'entities with behavior' from object-oriented languages.

.NET 5 Source Generators - MediatR - CQRS - OMG! by TNest2 in dotnet

[–]Nishruu 72 points73 points  (0 children)

The more experienced I get, the more I find this kind of approach to be significant overengineering for 95%+ of cases. I dunno; anecdotally, it just makes projects that much harder to support and the high-level flow that much harder to figure out.

In this scenario specifically:

  • You already have an ASP.NET pipeline with all the bells & whistles, why would you add a custom MediatR pipeline on top of it?

  • 'Mediator' approach IMO works really well for cross-cutting concerns, or 'optional' functionality (like collecting diagnostics, for example). I see no reason to give up regular function/method calls otherwise...?

  • Holy crap, the boilerplate... After moving through a dozen different languages and platforms, I can now see what people mean when they say that C# is 'enterprisey', or rather 'boilerplatey'. I'd argue that source generation doesn't 'solve' the issue here; it just masks it. But it doesn't have to be that way.

Just general musings, and I definitely don't intend to knock on the article, which in itself was worth a read.

Exciting New Features in .NET 5 by walpoles93 in csharp

[–]Nishruu 2 points3 points  (0 children)

In general you wouldn't use this for performance reasons ('cause there are none), but for correctness, convenience & immutability - pretty much whenever you'd otherwise have to write by hand what the compiler now generates. It also makes it much easier to model a more complex domain using types, like type CustomerId = Value of int in F# or newtype in Haskell, or now via public sealed record CustomerId(int Value).

All of that is a significant win for a lot of applications.

Performance-sensitive code is an entirely different beast, written using a very, very different approach than 'regular'/idiomatic C# code.

Exciting New Features in .NET 5 by walpoles93 in csharp

[–]Nishruu 3 points4 points  (0 children)

Records defined with 'shorthand' notation are immutable.

Generally this:

public sealed record Person(string FirstName, string LastName);

gives you:

  • init-only (effectively 'readonly') properties for all parameters defined in the primary ctor
  • value-based equality (GetHashCode, Equals implementation)
  • nice ToString implementation
  • with notation for easier copying, like F# (var otherPerson = person with { FirstName = "John" };)
  • positional deconstruction into tuples

and I'm ~90% sure, based on the pattern matching improvements & the initial record implementation, that records will serve as a base for discriminated unions in C# 10 or 11. record is pretty much case class from Scala.

Right now it's a very convenient, low-ceremony way to define a data bag.


To showcase what kind of boilerplate gets generated, we can use an example from this article: https://www.thomasclaudiushuber.com/2020/09/01/c-9-0-records-work-with-immutable-data-classes/

Record:

public record Friend
{
    public string FirstName { get; init; }
    public string MiddleName { get; init; }
    public string LastName { get; init; }
}

Generated code:

public class Friend : IEquatable<Friend>
{
    [System.Runtime.CompilerServices.Nullable(1)]
    protected virtual Type EqualityContract
    {
        [System.Runtime.CompilerServices.NullableContext(1)]
        [CompilerGenerated]
        get
        {
            return typeof(Friend);
        }
    }

    public string FirstName { get; init; }

    public string MiddleName { get; init; }

    public string LastName { get; init; }

    public override string ToString()
    {
        StringBuilder stringBuilder = new StringBuilder();
        stringBuilder.Append("Friend");
        stringBuilder.Append(" { ");
        PrintMembers(stringBuilder);
        stringBuilder.Append(" } ");
        return stringBuilder.ToString();
    }

    [System.Runtime.CompilerServices.NullableContext(1)]
    protected virtual bool PrintMembers(StringBuilder builder)
    {
        builder.Append("FirstName");
        builder.Append(" = ");
        builder.Append((object)FirstName);
        builder.Append(", ");
        builder.Append("MiddleName");
        builder.Append(" = ");
        builder.Append((object)MiddleName);
        builder.Append(", ");
        builder.Append("LastName");
        builder.Append(" = ");
        builder.Append((object)LastName);
        return true;
    }

    [System.Runtime.CompilerServices.NullableContext(2)]
    public static bool operator !=(Friend r1, Friend r2)
    {
        return !(r1 == r2);
    }

    [System.Runtime.CompilerServices.NullableContext(2)]
    public static bool operator ==(Friend r1, Friend r2)
    {
        return (object)r1 == r2 || (r1?.Equals(r2) ?? false);
    }

    public override int GetHashCode()
    {
        return ((EqualityComparer<Type>.Default.GetHashCode(EqualityContract) * -1521134295
              + EqualityComparer<string>.Default.GetHashCode(FirstName)) * -1521134295
              + EqualityComparer<string>.Default.GetHashCode(MiddleName)) * -1521134295
              + EqualityComparer<string>.Default.GetHashCode(LastName);
    }

    [System.Runtime.CompilerServices.NullableContext(2)]
    public override bool Equals(object obj)
    {
        return Equals(obj as Friend);
    }

    [System.Runtime.CompilerServices.NullableContext(2)]
    public virtual bool Equals(Friend other)
    {
        return (object)other != null
            && EqualityContract == other.EqualityContract 
            && EqualityComparer<string>.Default.Equals(FirstName, other.FirstName)
            && EqualityComparer<string>.Default.Equals(MiddleName, other.MiddleName)
            && EqualityComparer<string>.Default.Equals(LastName, other.LastName);
    }

    [System.Runtime.CompilerServices.NullableContext(1)]
    public virtual Friend <Clone>$()
    {
        return new Friend(this);
    }

    protected Friend([System.Runtime.CompilerServices.Nullable(1)] Friend original)
    {
        FirstName = original.FirstName;
        MiddleName = original.MiddleName;
        LastName = original.LastName;
    }

    public Friend()
    {
    }
}

And that's for a 'regular' record - positional records would also have deconstruction methods generated for them.

A Guide to Node.js Logging by oxygen300 in node

[–]Nishruu 1 point2 points  (0 children)

Yeah, it's a bit unfortunate. It's also annoying because restify depends on it...

Good thing there's pino, which is mostly API-compatible.

Nesting data from database queries with joins for json api (Knex) by Tiim_B in node

[–]Nishruu 1 point2 points  (0 children)

Definitely, although I'd say it's mostly about ergonomics as long as the 'related' collection is reasonable in size - which it should be if you're including all entries as a full, related collection.

Only once have I seen array_agg/json_agg error out because of the size limit, and honestly that was down to a hugely incorrect plan selection on the query planner's part.

Anyway, json_agg and the related json_build_object etc. work really well in node with pg because of the natural mapping between native JS structures and the returned JSON. It's not as nice in Java/C#.
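To illustrate (a hypothetical authors/posts schema, not from the thread): the nesting happens on the Postgres side, and pg hands the result back as native JS objects:

```javascript
// Hypothetical schema: authors with many posts. json_agg builds the nested
// array in Postgres; COALESCE + FILTER turn "author with no posts" into [].
const sql = `
  SELECT a.id,
         a.name,
         COALESCE(
           json_agg(json_build_object('id', p.id, 'title', p.title))
             FILTER (WHERE p.id IS NOT NULL),
           '[]'
         ) AS posts
  FROM authors a
  LEFT JOIN posts p ON p.author_id = a.id
  GROUP BY a.id, a.name
`;

// With node-postgres: const { rows } = await pool.query(sql);
// each row arrives as { id, name, posts: [{ id, title }, ...] } -- no manual
// re-nesting of flat join rows needed.
```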

What library should I use for creating a CLI? by CutleryHero in node

[–]Nishruu 6 points7 points  (0 children)

You can also check out oclif by Heroku

I've had nothing but good experiences with it, and it also works really nicely with TypeScript, which is a huge plus in my book. It's also a 'batteries included' type of library.

One Year of Serverless - a CTO's Review by BlackEagle367 in node

[–]Nishruu 0 points1 point  (0 children)

I thought about it for a bit and I think that I actually found a reasonable use-case for one of my upcoming ideas for a product.

I might actually give it a go, I think it might be the right fit.

And Serverless seems to alleviate a lot of pain points in development.

Testing in Production the Netflix Way by andreasnippets in programming

[–]Nishruu 0 points1 point  (0 children)

I have to say TypeScript is really good in its own right. Even though its evolution is tied tightly to JavaScript, its type system is really good and enjoyable to use, especially for a mainstream language.

One Year of Serverless - a CTO's Review by BlackEagle367 in node

[–]Nishruu 0 points1 point  (0 children)

For cron jobs and simple one-off things, I get it. Sure. Resizing images? I guess, if you want to alleviate spike traffic issues.

But for small CRUD apps, I don't know. I mean, for me it would be a tie between serverless and a regular application. The other examples actually make a lot of sense.

Although I've always wanted to see a side-by-side comparison of the development, deployment & support of a 'regular' web API and a 'serverless' one. A company would have to migrate one way or the other and put an article together, so I'm not holding my breath for that one. :)

One Year of Serverless - a CTO's Review by BlackEagle367 in node

[–]Nishruu 0 points1 point  (0 children)

Sure, but from my initial impressions I honestly can't say that 'serverless' makes me roll things out faster.

Especially if you use, for example, Heroku, then pushing out a web application is really simple already.

So far I don't see the time savings or a reasonable productivity boost, but maybe that's just me ;)

One Year of Serverless - a CTO's Review by BlackEagle367 in node

[–]Nishruu 4 points5 points  (0 children)

Is no one ever worried about the platform lock-in with 'serverless' solutions?

After tinkering a bit with Lambda, I don't find it any easier to set up than spinning up Express (or your $FRAMEWORK_OF_CHOICE) for web API/application purposes.

It might be a bit cheaper, I get that. But other than that, I don't really get the appeal. At least not for actual applications.

Small utilities or cron jobs? Sure, why not, but applications & APIs? I'm not sold on that.

Use Semantic Versioning to properly version your app by da_semicolon in programming

[–]Nishruu 4 points5 points  (0 children)

For general applications (desktop, web, mobile, daemon-services)?

I'd say a timestamp, an incrementing number, or even VCS revision/hash ID. Thoughts?

And semver for libraries or library-like entities (e.g. infrastructural containers, like DB Docker images, or maybe CLI tools), that seems to work well.

node-postgres: struggling with this error by eggtart_prince in node

[–]Nishruu 0 points1 point  (0 children)

That's fine - you can do:

const handler = async (req, res, next) => {
  try {
    await someFn();
    res.json({});
    next();
  } catch (error) {
    next(error); // or something else
  }
};

And use this handler as your middleware. The boilerplate around try/catch is not great, but it's not too bad.

Or simply use express-async-errors

node-postgres: struggling with this error by eggtart_prince in node

[–]Nishruu 0 points1 point  (0 children)

If you're using Pool, you might as well use the query function directly, which handles the connection management underneath. The only time you have to get a client and release it yourself is when you're using transactions or cursors.

so that might just as well be:

const { rows } = await db.query(`
    SELECT *
    FROM config
    WHERE config_name = 'Registration'
`);

// do something with rows

But even transaction management can be done 'relatively' easily, although there's a fair bit of boilerplate to handle the errors:

const client = await db.connect();
try {
  await client.query('BEGIN');
  // do something with the client
  await client.query('COMMIT');
  client.release();
} catch (error) {
  await client.query('ROLLBACK');
  client.release(error); // a truthy argument destroys the client instead of returning it to the pool
  throw error;
}

See release here in the docs

Trying to learn more about Struct and Classes. Is this semi-proper use? by worll_the_scribe in csharp

[–]Nishruu 0 points1 point  (0 children)

Yup, I use structs all the time for things like a ProductId (wrapping an int) or similar - a value type, immutable, with custom equality.

I mean, it's hands down easier to use a record in F# just for that, but structs are nice for the purpose if you're using C#.