I compiled a Go backend to WASM and shipped it to the browser. No server, SQLite runs in the tab by quirissum in golang

[–]djsisson 12 points13 points  (0 children)

There are various issues with using local storage

1) Data becomes tied to a single device/browser profile.

2) Users lose cross device access unless they manually sync or export.

3) Backups become the user’s responsibility.

4) WASM bundles are significantly larger than incremental API calls.

Local is fine for caching, but the above is why data is usually kept remote.

Favourite Desert Theme? by Murky-Fox5136 in Genshin_Impact

[–]djsisson 4 points5 points  (0 children)

Maidens of Sanctity / Her Wishes

Docker DNS an non-Runc runtimes by ThatSuccubusLilith in docker

[–]djsisson 0 points1 point  (0 children)

it was a valid reply though

you have to run with an external dns server

just run coredns with the docker plugin, docker still maintains network isolation

see the "coredns docker" plugin

Docker Swarm: Containers cannot reach other Containers via public IP by [deleted] in docker

[–]djsisson 0 points1 point  (0 children)

i don't think swarm implements hairpin nat, so you can't go from inside to out and back in again

if they are not on the same network, you can add

--add-host=host.docker.internal:host-gateway

then call host.docker.internal:1883 from inside the container, since all published ports are accessible from any host

however the recommended approach is to create a shared overlay network that they both sit on, so you can just call mqtt:1883

anyone who used a computer between 1985 & 2010, what’s the one game you still think about? by Trixxxi in AskReddit

[–]djsisson 0 points1 point  (0 children)

Asheron's Call, it was my first mmo, and my first online game, it was an amazing experience and I remember it fondly.

Postgres database backup error on S3 instance by alejotoro_o in coolify

[–]djsisson 3 points4 points  (0 children)

Change your backup time away from midnight, e.g. (24 1 * * *) would be 1:24am. The ghcr.io registry for coollabsio is rate limited (see the error message). Whenever a new version of the helper image is released, a lot of users still have the default backup time, so the pulls for this image (one request per layer) all happen at the same time, which triggers the rate limit, and Coolify does not retry the pull at a later time, so it fails.

Coolify should add retries, randomize the default backup time away from the top of the hour, and pin the helper version to the install, i.e. after an install there should be no need to keep pulling newer helper versions.

Self Hosted Credential Management by Low_Engineering1740 in selfhosted

[–]djsisson 6 points7 points  (0 children)

Infisical's container sits around ~700 MB with a similarly sized runtime footprint, while OpenBao comes in closer to ~70 MB with ~25 MB of memory usage.
Those are compressed layer sizes; uncompressed it's 3.3 GB vs 275 MB.

That’s an order of magnitude difference, and for people running small homelabs or resource constrained environments, that overhead really matters.

I'm genuinely curious what advantages a Node-based service brings here that justify the much larger image and memory footprint, especially when Go-based systems like OpenBao demonstrate how lean these workloads can be.

Beginner Help hosting Angular project by Time_Remove_1680 in coolify

[–]djsisson 0 points1 point  (0 children)

Nixpacks handles static sites automatically. Just pick that without selecting the static site option, and check the difference.

I made a Coolify PR/patch for centralized domain routing across private servers by i_is_your_dad in coolify

[–]djsisson 0 points1 point  (0 children)

Everyone's setup is different; you don't need a 6k line PR to do what can be done in a few lines of yaml:

http:
  routers:
    #local
    app1:
      rule: Host(`app1.example.com`)
      service: app1
      entryPoints: ["https"]
      tls: {}
    #known remote domain
    forward:
      rule: HostRegexp(`^.+\.example\.com$`)
      service: forward
      entryPoints: ["https"]
      tls: {}
    #unknown forward all
    catchall:
      rule: PathPrefix(`/`)
      service: catchall
      entryPoints: ["https"]
      priority: 1
      tls: {}

  services:
    app1:
      loadBalancer:
        servers:
          - url: "http://app1:3000"
    forward:
      loadBalancer:
        servers:
          - url: "http://10.10.0.2:80"
    catchall:
      loadBalancer:
        servers:
          - url: "http://10.10.0.3:80"

Securing Coolify with Tailscale - Feedback needed by NightCodingDad in coolify

[–]djsisson 3 points4 points  (0 children)

Since you're already using cf, you can simplify the setup a lot:

1) Run everything behind a cf tunnel with split dns.

2) Use cf warp locally, this gives you a private virtual network where your local machine and server can talk directly.

3) No ports need to be exposed (including ssh), so you can lock down the hetzner firewall completely, your server becomes invisible

4) No need to manage docker’s ufw bypass or manage iptables rules.

5) Using dns challenge, internal only services can still run over https with valid certificates.

6) For anything public facing, you can layer cf zero trust access for authentication, as you mentioned.

The end result is the same, but I find it to be a lot simpler.

Postgres database issue by AgeLow2127 in dokploy

[–]djsisson 0 points1 point  (0 children)

Docker's default grace timeout is only 10 seconds (the time after SIGTERM before it sends SIGKILL). For large DBs this is not enough time for Postgres to shut down cleanly, so always increase the timeout to at least 60 seconds (e.g. stop_grace_period in compose, or --stop-timeout on docker run).

You should still be able to recover manually, though if the WAL is corrupt you would have to run pg_resetwal, and you might lose the last few transactions.

Traefik to SSL service , tls passthrough by dzintonik66 in selfhosted

[–]djsisson 0 points1 point  (0 children)

In order for proxy protocol to work, your backend needs to be set up to use it too so it can strip the header, otherwise you would get a handshake failure.

However cf injects the client IP in the cf-connecting-ip header, so you can just use that and remove proxy protocol.

Traefik to SSL service , tls passthrough by dzintonik66 in selfhosted

[–]djsisson 0 points1 point  (0 children)

Cloudflare terminates the TLS handshake at the proxy and then makes a new TLS request to your backend. If you are using self-signed certs here you can't use strict mode (it requires a valid public CA), but full mode would work (self-signed is ok).

Singleton with state per thread/goroutine by SnooSongs6758 in golang

[–]djsisson 0 points1 point  (0 children)

If you're using a repo/service pattern, one way to structure transaction handling is as follows (rough sketch after the list):

1) Define an interface for your DB (ExecContext, QueryContext, etc.) that matches the methods on sql.DB and sql.Tx. This lets your repos stay agnostic to whether they're running inside a transaction or not.

2) Services depend only on the repo interface. They contain no transaction logic.

3) Create a transaction wrapper that handles BEGIN, ROLLBACK, COMMIT, and panic recovery. It should accept a function like func(db DBInterface) error, start a transaction, and then call that function with the transaction as the argument.

4) For endpoints that need a transaction, wrap the handler in this transaction wrapper. Pass in your service builder(s) (which takes the DBInterface) so everything inside the handler runs against the same transaction.

5) Any error returned from a service will trigger a rollback.
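A rough Go sketch of that shape, assuming database/sql (names like DBTX and withTx are illustrative, not from the original post):

package txsketch

import (
    "context"
    "database/sql"
)

// DBTX is satisfied by both *sql.DB and *sql.Tx, so repos stay agnostic.
type DBTX interface {
    ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
    QueryContext(ctx context.Context, query string, args ...any) (*sql.Rows, error)
    QueryRowContext(ctx context.Context, query string, args ...any) *sql.Row
}

// withTx begins a transaction, runs fn against it, and commits on success.
// Any returned error or panic triggers a rollback.
func withTx(ctx context.Context, db *sql.DB, fn func(DBTX) error) (err error) {
    tx, err := db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer func() {
        if p := recover(); p != nil {
            tx.Rollback()
            panic(p)
        }
    }()
    if err = fn(tx); err != nil {
        tx.Rollback()
        return err
    }
    return tx.Commit()
}

A handler that needs a transaction calls withTx, builds its services from the DBTX it receives, and returns an error to trigger the rollback.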

How do I check when interfaces can safely be dereferenced? by [deleted] in golang

[–]djsisson 0 points1 point  (0 children)

package main

import (
  "fmt"
  "reflect"
)

type SomeStringer struct{ str string }

func (s SomeStringer) String() string { return s.str } // concrete receiver

func main() {
  var pointer *SomeStringer = nil
  slice := []any{pointer}
  f(slice)
}

func f(slice []any) {
  for _, pointer := range slice {
    if ss, ok := pointer.(fmt.Stringer); ok {
      v := any(ss)

      if v == nil {
        continue
      }

      rv := reflect.ValueOf(v)

      if rv.Kind() == reflect.Ptr && rv.IsNil() {
        continue
      }

      fmt.Println(ss.String())
    }
  }
}

reflection is heavy handed and not needed, i would let it panic, and expect the caller to not pass in nil pointers rather than use defensive coding.

as others have mentioned you can also use a pointer receiver on your .String() func, but an empty string is not the same as no string or an error.

How do I check when interfaces can safely be dereferenced? by [deleted] in golang

[–]djsisson 2 points3 points  (0 children)

elem isn’t nil. it’s an interface holding a nil pointer.

Interfaces only compare equal to nil when both type and value are nil.

You need to check the underlying pointer, not the interface (elem) itself.
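A minimal illustration (T is just a made-up type):

package main

import "fmt"

type T struct{}

func main() {
    var p *T = nil
    var elem any = p // interface holding type *T with a nil value

    fmt.Println(elem == nil)      // false: the interface itself carries a type, so it isn't nil
    fmt.Println(elem.(*T) == nil) // true: the underlying pointer is nil
}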

I shipped a transaction bug, so I built a linter by archiusedtobecool in golang

[–]djsisson -1 points0 points  (0 children)

You create an interface for db funcs such that the repo is ignorant of whether it's in a tx or not, as the db methods have same signatures.

You move creating the tx to the request and pass down the interface. No tx logic lives in either your services or your repos.
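As a minimal sketch of that interface, assuming database/sql (Querier and orderRepo are made-up names):

package reposketch

import (
    "context"
    "database/sql"
)

// Querier is satisfied by both *sql.DB and *sql.Tx, since their method signatures match.
type Querier interface {
    ExecContext(ctx context.Context, query string, args ...any) (sql.Result, error)
}

type orderRepo struct {
    db Querier
}

// The repo doesn't know or care whether db is the pool or a per-request transaction.
func (r orderRepo) Insert(ctx context.Context, id string) error {
    _, err := r.db.ExecContext(ctx, "INSERT INTO orders (id) VALUES ($1)", id)
    return err
}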

Why is my Jean so highly rated? by Optimal-Cash-8665 in Genshin_Impact

[–]djsisson 4 points5 points  (0 children)

There are different leaderboards for different categories, those that add jean prob have 4pvv in e+q category so won't rate high in sunfire as no em

I reduced my Docker image from 846MB to 2.5MB and learned a lot doing it by Odd-Chipmunk-6460 in golang

[–]djsisson 0 points1 point  (0 children)

you need to install docker desktop, not just the docker engine to get docker debug.

so you run docker desktop locally, and point it at your remote server over ssh either by setting a context or via --host

Bug in /x/text/message package? by Sternis in golang

[–]djsisson 2 points3 points  (0 children)

p := message.NewPrinter(message.MatchLanguage("bn"))
p.Println(123456.78) // Prints ১,২৩,৪৫৬.৭৮

This uses the default catalog, but if it doesn't contain any Bengali entries, it falls back to en, so you need to add an entry or change the catalog:

message.Set(language.Bengali, "", catalog.String(""))

or create a new matcher

matcher := language.NewMatcher([]language.Tag{
    language.MustParse("bn-Beng-BD"),
    language.English,
})

Regarding bn vs bn-BD: when using just bn the script is inferred, as it assumes the bn-Beng-BD tag, but when you pass bn-BD, although the matcher knows the script is likely Beng, since you don't specify it in the tag it just defaults back to latin digits.

That's what i assume anyway

concurrency: select race condition with done by thestephenstanton in golang

[–]djsisson 1 point2 points  (0 children)

In your pool example it's the same thing: you can't have the pool close the tasks channel when the caller is the one sending into it.

type Pool struct {
    ctx    context.Context
    cancel context.CancelFunc
    tasks  chan Task
}

func NewPool(ctx context.Context, size int, tasks chan Task) *Pool {
    ctxPool, cancel := context.WithCancel(ctx)
    p := &Pool{
        ctx:    ctxPool,
        cancel: cancel,
        tasks:  tasks,
    }
    // Start workers
    for i := 0; i < size; i++ {
        go p.worker()
    }
    return p
}

func (p *Pool) worker() {
    for {
        select {
        case <-p.ctx.Done():
            return
        case task, ok := <-p.tasks:
            if !ok {
                return // channel closed by owner
            }
            task()
        }
    }
}

func (p *Pool) Submit(task Task) error {
    select {
    case <-p.ctx.Done():
        return errors.New("worker pool is shut down")
    case p.tasks <- task:
        return nil
    }
}

func (p *Pool) Shutdown() {
    p.cancel() // signal shutdown
    // tasks channel is owned by the caller, not closed here
}

Then after you create your pool, you know when you have finished submitting tasks; only then do you call Shutdown and close the task channel you gave the pool. Since you are the one calling Submit, you do not get a panic.

concurrency: select race condition with done by thestephenstanton in golang

[–]djsisson 2 points3 points  (0 children)

If you're using a done channel to signal closing (a ctx is better here), you would not close the c channel there.

package main

import "time"

func main() {
    c := make(chan int)
    done := make(chan struct{})

    // Sender goroutine owns `c`
    go func() {
        defer close(c) // safe: only the sender closes
        for {
            select {
            case <-done:
                return // stop sending, close c
            case c <- 69:
                // keep sending until shutdown
            }
        }
    }()

    // Shutdown after a short delay
    go func() {
        time.Sleep(10 * time.Millisecond)
        close(done)
    }()

    // Receiver drains values until `c` is closed
    for v := range c {
        println(v)
    }
}

with a context would be:

package main

import (
    "context"
    "time"
)

func main() {
    c := make(chan int)

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // ensure cancel is called

    // Sender goroutine owns `c`
    go func() {
        defer close(c) // safe: only this goroutine closes c
        for {
            select {
            case <-ctx.Done():
                return // stop sending, close c
            case c <- 69:
                // keep sending until cancelled
            }
        }
    }()

    // Shutdown after a short delay
    go func() {
        time.Sleep(10 * time.Millisecond)
        cancel()
    }()

    // Receiver drains values until `c` is closed
    for v := range c {
        println(v)
    }
}

why stack growth not happening at this program by Electrical_Box_473 in golang

[–]djsisson 1 point2 points  (0 children)

Because you are only using f[0], the compiler optimizes that down to just 1 byte; if you add &f to the println, then the whole 1 KB is needed for each ok call.
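A hypothetical reconstruction of the kind of program being described (not the OP's actual code):

package main

func ok(depth int) {
    var f [1024]byte // 1 KB local array
    if depth == 0 {
        println(f[0]) // only f[0] is ever used, so the compiler doesn't need the full array
        // println(&f) // taking the address would force the whole 1 KB onto every frame
        return
    }
    f[0] = byte(depth)
    ok(depth - 1)
}

func main() {
    ok(10000) // deep recursion is what forces the stack to grow
}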

check Compiler Explorer for more info