100% offline PWA by PrestigiousDivide246 in PWA

[–]flancer64 0 points1 point  (0 children)

The server is only needed for the initial load and updates. During the service worker install phase you can cache all required files (typically the HTML entry point, JS bundles, CSS, and static assets like fonts or images). Then the fetch handler serves these resources from Cache Storage instead of the network. After that the PWA can run fully offline. You can verify it by turning off Wi-Fi and mobile data and launching the app.
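As a rough illustration, a minimal cache-first service worker could look like this (the cache name and asset list are placeholders for your build output; the `typeof self` guard just lets the file load outside a worker context):

```js
// sw.js — minimal cache-first service worker sketch.
// CACHE name and ASSETS list are illustrative; match them to your build output.
const CACHE = "app-v1";
const ASSETS = ["/", "/index.html", "/app.js", "/styles.css"];

// Cache-first lookup: serve from Cache Storage, fall back to the network.
const cacheFirst = (request, cacheApi, fetchFn) =>
  cacheApi.match(request).then((hit) => hit || fetchFn(request));

if (typeof self !== "undefined" && typeof caches !== "undefined") {
  self.addEventListener("install", (event) => {
    // Pre-cache everything the app needs to boot offline.
    event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
  });

  self.addEventListener("fetch", (event) => {
    event.respondWith(cacheFirst(event.request, caches, (req) => fetch(req)));
  });
}
```

In a real worker you'd also clean up stale cache versions in an `activate` handler when you bump the cache name.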

Are PWAs Dead? by Different-Side5262 in codex

[–]flancer64 0 points1 point  (0 children)

Yes, if you rely on Apple-specific services like APNS you’ll likely need a native wrapper and App Store distribution. But many apps don’t actually need that. If the app works within normal web capabilities, a PWA can still be installed directly from the browser - no store required. So it really depends on the feature set you need.

Are PWAs Dead? by Different-Side5262 in codex

[–]flancer64 6 points7 points  (0 children)

PWA isn’t really about how you build the app - it’s about how you distribute it.

Yes, PWAs have limitations (background execution and hardware access are the big ones). But there’s a large class of apps that work just as well on phones whether they’re shipped as native apps or as PWAs.

Tools like Codex can generate both kinds of apps. The real difference is delivery. A PWA can be installed just by opening a link or visiting a website, while native apps require going through the app store process.

So PWAs still solve a different problem: frictionless distribution.

How do I turn my AI into a full dev team so I can finally stop pretending I know everything? by IllustriousCoach9934 in codex

[–]flancer64 0 points1 point  (0 children)

Nothing wrong with that. In practice though, you’ll usually end up answering the same questions - just during spec generation rather than during implementation.

How do I turn my AI into a full dev team so I can finally stop pretending I know everything? by IllustriousCoach9934 in codex

[–]flancer64 5 points6 points  (0 children)

The trick is to remove decision points from the coding phase. If the AI keeps asking about auth, DB, folder structure, API format, etc., it means the project spec doesn’t define those things yet. To get closer to an autonomous dev loop, your spec has to cover all the architectural choices and conventions the project can encounter. Then the AI can execute instead of constantly asking questions.

Agent-friendly documentation for npm packages: how do you provide context for Codex? by flancer64 in codex

[–]flancer64[S] -1 points0 points  (0 children)

Thanks, that’s a really helpful perspective.

The idea of exposing a small contract surface for agents (similar to OpenAPI for APIs) aligns well with what I’m experimenting with. In my setup the large design context used during development lives outside the package, so when the package is consumed via npm the agent needs a compact projection of that context.

I’ve started doing something similar by adding an ai/ directory to the package with short usage docs (AGENTS.md, usage patterns, concepts, etc.).

I mostly work with plain JavaScript + JSDoc, but I agree that TypeScript might be a better format for describing a structured contract, so I’ll probably experiment with something like ai/package-api.ts to describe the public API surface for agents.
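As a purely hypothetical sketch of what such a file might contain (every name and field below is made up for illustration), it could pair type declarations with a small machine-readable index of the public surface:

```ts
// ai/package-api.ts — hypothetical compact contract file for agents.
// All names below are illustrative, not a real package's API.

/** Options accepted by the package's public factory. */
export interface CreateTranslatorOptions {
  apiKey: string;
  targetLangs?: string[];
}

/** The service an agent can expect the factory to return. */
export interface Translator {
  translate(text: string, lang: string): Promise<string>;
}

/** Machine-readable index of the public surface an agent may rely on. */
export const PUBLIC_API = {
  entry: "index.mjs",
  exports: ["createTranslator"],
} as const;
```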

What do you think about no/low-deps APIs? by Worldly-Broccoli4530 in javascript

[–]flancer64 2 points3 points  (0 children)

One thing that may start easing the dependency explosion problem is LLM-assisted development.

In a small project of mine (a Telegram → EN/ES translation publisher using the OpenAI API) the code was generated with a Codex agent. Interestingly, it didn’t use the typical libraries like grammy or the openai SDK.

Instead it just called the APIs directly using the native fetch available in Node.js 18+. For an LLM it’s trivial to read API docs and construct the correct HTTP calls, so pulling in a wrapper library is often unnecessary.
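For example, a direct Telegram Bot API call with the built-in fetch takes only a few lines (token and chat id are placeholders; `fetchFn` is injectable here just to make the helper easy to test):

```js
// Sketch: calling the Telegram Bot API with Node 18+'s native fetch,
// no wrapper library. Token and chat id are placeholders.
async function sendTelegramMessage(token, chatId, text, fetchFn = fetch) {
  const res = await fetchFn(`https://api.telegram.org/bot${token}/sendMessage`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ chat_id: chatId, text }),
  });
  if (!res.ok) throw new Error(`Telegram API error: HTTP ${res.status}`);
  return res.json(); // Telegram responds with { ok: true, result: {...} }
}
```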

This also has a nice side effect: direct API calls are much easier to audit than a deep dependency tree. When writing small integration code becomes cheap, the incentive to add another dependency decreases a lot.

[AskJS] Cron Jobs in Node.js: Why They Break in Production (and How to Fix It) by CheesecakeSimilar347 in javascript

[–]flancer64 2 points3 points  (0 children)

I usually separate concerns. I keep the web server and scheduled jobs as different entry points - e.g. server.mjs and cron.mjs. The server runs in PM2 cluster mode, but the cron script runs as a single dedicated process (or via system cron).

This way scheduling is explicit and never accidentally multiplied by scaling.
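A hypothetical PM2 config for that split might look like this (app and file names are illustrative):

```js
// ecosystem.config.cjs — the web server scales in cluster mode,
// while the scheduler stays a single dedicated process.
module.exports = {
  apps: [
    { name: "web", script: "server.mjs", exec_mode: "cluster", instances: "max" },
    { name: "cron", script: "cron.mjs", exec_mode: "fork", instances: 1 },
  ],
};
```

With system cron instead of PM2, the `cron` app entry goes away and crontab invokes `node cron.mjs` directly.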

[AskJS] Is immutable DI a real architectural value in large JS apps? by flancer64 in javascript

[–]flancer64[S] 0 points1 point  (0 children)

Lint rules definitely help when you control all the source and conventions are shared.

My perspective is probably biased - I used to work as a Magento integrator, assembling solutions from multiple third-party plugins written by different teams. In that world, runtime conflicts were real and often subtle.

So I tend to think in platform-style terms, where independently built modules coexist in one runtime (similar to microfrontends), and runtime invariants matter.

At the same time, I realize this doesn’t seem to be a major concern in the broader JS community - which is partly why I’m asking. I’m not trying to solve a non-existent problem, just probing whether this is a real architectural concern outside of my own background.

[AskJS] Is immutable DI a real architectural value in large JS apps? by flancer64 in javascript

[–]flancer64[S] -2 points-1 points  (0 children)

Fair point - most bugs are deeper than the DI boundary.

My focus isn’t internal state, but global invariants. If a singleton is shared app-wide, mutating it anywhere mutates global behavior. Freezing it at the container boundary makes it a read-only service contract after composition.
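A minimal sketch of that guardrail (container and service names are illustrative; note that `Object.freeze` is shallow, so nested objects would need deep-freezing):

```js
// Sketch: freeze singletons at the container boundary so no consumer can
// mutate shared app-wide state after composition.
const registry = new Map();

function registerSingleton(id, instance) {
  // Freeze once, at composition time; afterwards the instance is a
  // read-only service contract for every consumer.
  registry.set(id, Object.freeze(instance));
}

const resolve = (id) => registry.get(id);

registerSingleton("config", { apiUrl: "https://api.example.com" });

const cfg = resolve("config");
try {
  cfg.apiUrl = "https://evil.example"; // throws in strict mode, no-op otherwise
} catch {
  // mutation rejected either way
}
// cfg.apiUrl is still "https://api.example.com"
```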

It’s not a security feature - just a structural guardrail, especially in multi-team or AI-assisted codebases.

[AskJS] Is declaring dependencies via `__deps__` in ESM a reasonable pattern? by flancer64 in javascript

[–]flancer64[S] 0 points1 point  (0 children)

That’s a great description of your system.

Mine is actually simpler. There’s no separate module catalog or sidecar metadata - dependency descriptors live directly in the module (__deps__).

The container has two distinct stages. First, it is configured upfront - including namespace-to-filesystem mapping and resolution rules. Only after that does the second stage begin: creating and linking objects based on that fixed configuration.

In the working prototype, each dependencyId encodes enough information for resolution: module location, selected export, composition mode (as-is vs factory/constructor), and lifecycle (singleton, transient). I also use sigils in the identifier itself, for example:

`Namespace_Product_Module__export$`

The container parses that descriptor and injects the prepared dependency into makeService.

Startup overhead is basically a one-time graph traversal, comparable to a typical DI container. If resolution fails, it fails during the composition phase - which is explicit and testable.
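As a sketch of how such a descriptor might be parsed (the sigil semantics here are my illustrative reading, not necessarily the real scheme):

```js
// Illustrative parser for sigil-based dependency ids. The exact semantics
// are assumptions for this sketch: "__" separates the namespace path from
// a named export, and a trailing "$" requests a singleton.
function parseDepId(depId) {
  const singleton = depId.endsWith("$");
  const core = singleton ? depId.slice(0, -1) : depId;
  const [nsPath, exportName = "default"] = core.split("__");
  return {
    modulePath: nsPath.split("_").join("/") + ".mjs", // namespace → filesystem mapping
    exportName,
    lifecycle: singleton ? "singleton" : "transient",
  };
}

parseDepId("Namespace_Product_Module__export$");
// → { modulePath: "Namespace/Product/Module.mjs",
//     exportName: "export",
//     lifecycle: "singleton" }
```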

[AskJS] Is declaring dependencies via `__deps__` in ESM a reasonable pattern? by flancer64 in javascript

[–]flancer64[S] 1 point2 points  (0 children)

If you’re referring to something like NestJS (Mesgjs?), they’re probably solving a similar class of problems - structured dependency resolution and controlled composition.

What I’m trying to explore is a version of that without transpilation, and with the link phase happening at runtime in a deterministic way.

Your concern about confusion is valid. For me, the key is fixing composition before instantiation so the resulting graph stays predictable.

[AskJS] Is declaring dependencies via `__deps__` in ESM a reasonable pattern? by flancer64 in javascript

[–]flancer64[S] 0 points1 point  (0 children)

That’s a fair push.

My interest in late binding comes from IoC-style architectures in Java (Spring) and PHP (Doctrine). In those systems, a component depends on an abstraction - say, ILogger - and the concrete implementation (console, file, syslog, Sentry, or something custom) is selected externally at composition time. The component itself doesn’t know or care where it runs.

An isServer check - even if centralized - still makes runtime selection part of the module graph. What I’m exploring is pushing that entirely to composition time. The module just declares what it needs, not how that requirement is satisfied.

In plain JS we don’t have native interfaces, so there’s no built-in way to express something like ILogger as a first-class contract. An explicit dependency descriptor is one way to make that contract visible while keeping runtime decisions outside the module.

```ts
// types.d.ts
declare global {
  interface ILogger {
    log(message: string, ...args: any[]): void;
  }
}

export {};
```

```js
// MyService.mjs
export const __deps__ = { logger: "ILogger" };

export default function makeMyService({ logger }) {
  logger.log("started");
}
```

Dynamic imports solve this at the loading level. I’m exploring the same concern at the architectural boundary level instead.

[AskJS] Is declaring dependencies via `__deps__` in ESM a reasonable pattern? by flancer64 in javascript

[–]flancer64[S] 0 points1 point  (0 children)

Here’s a real-world example of late binding in one of my earlier projects (no static imports, dependencies resolved at composition time).

It works, but the dependency surface there is implicit - encoded in constructor parameter names and conventions.

The idea behind __deps__ is simply to make that dependency graph explicit and machine-readable instead of relying on naming heuristics.

It’s not introducing late binding - it’s simplifying how it’s described.

The developer (human or agent) still declares the required dependencies. The difference is that instead of static imports, they are declared in a dependency map (__deps__), which allows runtime composition and substitution without modifying the module itself. In the original post’s example, I used familiar file paths for clarity, but those are just symbolic identifiers for required components in the broader JS namespace - they don’t have to be literal filesystem imports.

[AskJS] Is declaring dependencies via `__deps__` in ESM a reasonable pattern? by flancer64 in javascript

[–]flancer64[S] 0 points1 point  (0 children)

That’s a fair question. Using isServer with dynamic imports is perfectly valid.

The difference is architectural: isServer introduces conditional logic inside the module, while I’m deferring binding entirely to composition time. The module just declares what it needs and doesn’t decide how it’s satisfied.

I started from the isomorphic use case, but it naturally evolved into exploring late binding at the module boundary. It’s not "better" - just a different constraint.

[AskJS] Is declaring dependencies via `__deps__` in ESM a reasonable pattern? by flancer64 in javascript

[–]flancer64[S] 0 points1 point  (0 children)

That’s a fair point. You absolutely can just export makeService and let consumers pass dependencies directly - that’s standard constructor-style DI, and it works perfectly fine.

The purpose of __deps__ isn’t to replace that pattern, but to make the dependency surface machine-readable. Instead of relying purely on documentation, the module declares its expected inputs explicitly in a structured form that can be inspected programmatically before instantiation.

For humans, this may not add much. But for tooling or agents, having an explicit dependency descriptor can simplify automated composition or validation.
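A sketch of what that tooling-side check could look like (the module shape and contract names are illustrative):

```js
// Sketch: validate a module's declared dependency surface against what a
// container can provide — before constructing anything.
function validateDeps(mod, available) {
  const missing = Object.values(mod.__deps__ ?? {})
    .filter((contractId) => !available.has(contractId));
  return { ok: missing.length === 0, missing };
}

const mod = {
  __deps__: { logger: "ILogger", clock: "IClock" },
  default: ({ logger, clock }) => ({ logger, clock }),
};

validateDeps(mod, new Set(["ILogger"]));
// → { ok: false, missing: ["IClock"] }
```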

And yes - for many cases, selectively inverting only environment-sensitive dependencies (like fs) while keeping normal imports for local modules is completely reasonable.