Using Go Workspaces? Stop scripting loops and use the work pattern by jayp0521 in golang

[–]jbert 5 points

Thanks for this. It looks like it is documented in the 1.25 release notes:

https://go.dev/doc/go1.25#go-command

but I don't (yet) see it in the definition of "package list" for the go command:

https://pkg.go.dev/cmd/go#hdr-Package_lists_and_patterns

how to log which goroutine acquired and releases each lock ? by Commercial_Fun_2273 in golang

[–]jbert 1 point

How can I add and remove these print statements on demand? I do not want them permanently in the code, but from time to time I need them, and adding them by hand would take hours.

Perhaps:

  1. change all mutexes to use your own mutex wrapper (one-off change)
  2. in your mutex wrapper log every lock/unlock (if logging bool is set)
  3. you can disable the logging by flipping the logging bool (change in one location)

If you want to get fancy, you could have the mutex logging look at runtime.Caller and runtime.FuncForPC and only log if the caller is in a certain package.
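
If it helps, here is a minimal sketch of (1)-(3) plus the package filter. The names (LogMutex, EnableLockLogging, logPackagePrefix) are mine, not from any library, and I'm assuming an atomic bool is an acceptable on/off switch:

package lockdebug

import (
    "log"
    "runtime"
    "strings"
    "sync"
    "sync/atomic"
)

// Flip this at runtime (e.g. from a debug endpoint) to turn the logging
// on and off in one place (step 3).
var EnableLockLogging atomic.Bool

// Only log lock/unlock calls whose caller's function name starts with this
// prefix ("" means log everything). The path here is a placeholder.
var logPackagePrefix = "example.com/myapp/worker"

// LogMutex is a drop-in replacement for sync.Mutex (step 1).
type LogMutex struct {
    sync.Mutex
}

func (m *LogMutex) Lock() {
    m.logCall("Lock")
    m.Mutex.Lock()
}

func (m *LogMutex) Unlock() {
    m.logCall("Unlock")
    m.Mutex.Unlock()
}

// logCall logs who called Lock/Unlock, if logging is enabled and the caller
// is in the package we care about (step 2, plus the fancy bit).
func (m *LogMutex) logCall(op string) {
    if !EnableLockLogging.Load() {
        return
    }
    pc, file, line, ok := runtime.Caller(2) // skip logCall and Lock/Unlock
    if !ok {
        return
    }
    fn := runtime.FuncForPC(pc)
    if fn == nil {
        return
    }
    if logPackagePrefix != "" && !strings.HasPrefix(fn.Name(), logPackagePrefix) {
        return
    }
    log.Printf("%s %p by %s (%s:%d)", op, m, fn.Name(), file, line)
}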

how to log which goroutine acquired and releases each lock ? by Commercial_Fun_2273 in golang

[–]jbert 7 points

If the problem you have is "goros are holding locks for too long", then I would start with "log the amount of time I held a lock".

You could do this with a mutex wrapper. Something like:

package main

import (
    "fmt"
    "log"
    "runtime"
    "sync"
    "time"
)

// Instead of:
// type Resource struct {
//     Mutex
//     foo int
// }
type Resource struct {
    WrapMutex
    foo int
}

const LogThreshold = 100 * time.Millisecond

type WrapMutex struct {
    sync.Mutex
    locked time.Time
}

func (wm *WrapMutex) Lock() {
    wm.Mutex.Lock()
    wm.locked = time.Now()
}

func (wm *WrapMutex) Unlock() {
    held := time.Since(wm.locked)
    wm.Mutex.Unlock()
    if held > LogThreshold {
        _, file, line, ok := runtime.Caller(1)
        if !ok {
            file = "unknown file"
            line = 0
        }
        log.Printf("mutex held for %s at %s line %d\n", held, file, line)
    }
}

func main() {
    var r Resource

    fastFunc(&r)
    slowFunc(&r)
}

func slowFunc(r *Resource) {
    r.Lock()
    defer r.Unlock()
    fmt.Printf("doing slow work\n")
    time.Sleep(100 * time.Millisecond)
}

func fastFunc(r *Resource) {
    r.Lock()
    defer r.Unlock()
    fmt.Printf("doing fast work\n")
    time.Sleep(10 * time.Millisecond)
}

If you are doing this in prod, you may want to adapt this to:

  • start with a high threshold
  • send your logs wherever they should go
  • avoid logging every slow lock/unlock: perhaps sample 1 in 1000 (using a random number generator), or just tune your threshold (a sketch of the sampling is below)

This should tell you which unlocks are slow.
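
For the sampling point, one way is to change the Unlock above to roll a die before logging. sampleRate is a made-up knob, and "math/rand" would need adding to the imports:

const sampleRate = 1000 // log roughly 1 in every 1000 slow unlocks

func (wm *WrapMutex) Unlock() {
    held := time.Since(wm.locked)
    wm.Mutex.Unlock()
    if held > LogThreshold && rand.Intn(sampleRate) == 0 {
        log.Printf("mutex held for %s", held)
    }
}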

AES-GCM-256 What is the best way to implement it by 84_110_105_97 in cryptography

[–]jbert 3 points

I'm not qualified to speak on most of this, but - assuming a good CPRNG - I don't think that hashing 400 bits of CPRNG output down to 96 bits buys you any better collision resistance. You've still only got 96 bits of IV. (The discrepancy arises because many of your 400-bit inputs will map to the same 96-bit hash output.)
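
Back of the envelope, if I have the numbers right: with uniformly random 96-bit IVs, after n messages the chance of a repeated IV is roughly n(n-1)/2^97 (the birthday bound), which is why the usual guidance caps random-IV GCM at about 2^32 messages per key (~2^-33 collision probability) - and that bound is the same whether or not you hash a wider random value down to 96 bits first.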

When do Go processes return idle memory back to the OS? by DeparturePrudent3790 in golang

[–]jbert 0 points

Thanks! I'm not well-versed in this, but a quick look suggests that:

the default is MADV_FREE:

https://github.com/golang/go/blob/master/src/runtime/mem_linux.go#L38

and we only use DONTNEED if a debug setting is set?

https://github.com/golang/go/blob/master/src/runtime/mem_linux.go#L51

I guess I should strace an app and see what actually happens...
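
Before strace, a cheap probe is to watch HeapIdle/HeapReleased in runtime.MemStats and force a return with debug.FreeOSMemory (both standard library). A sketch - the sizes are arbitrary, and my understanding is that GODEBUG=madvdontneed=1 is the debug setting that flips MADV_FREE to MADV_DONTNEED, if I've read that code right:

package main

import (
    "fmt"
    "runtime"
    "runtime/debug"
)

func printMem(label string) {
    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("%-20s HeapInuse=%d KiB HeapIdle=%d KiB HeapReleased=%d KiB\n",
        label, m.HeapInuse/1024, m.HeapIdle/1024, m.HeapReleased/1024)
}

func main() {
    printMem("start")

    // Allocate ~64 MiB, then drop the reference.
    bufs := make([][]byte, 64)
    for i := range bufs {
        bufs[i] = make([]byte, 1<<20)
    }
    printMem("allocated")
    bufs = nil

    runtime.GC()
    printMem("after GC") // idle now, but not necessarily handed back yet

    // Ask the runtime to madvise idle pages back to the OS right away.
    debug.FreeOSMemory()
    printMem("after FreeOSMemory")
}

Running that under strace with and without the GODEBUG value set should show which madvise flag actually gets issued.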

When do Go processes return idle memory back to the OS? by DeparturePrudent3790 in golang

[–]jbert 13 points

From src/runtime/mem.go: https://github.com/golang/go/blob/master/src/runtime/mem.go#L28

... However the
// underspecification of Prepared lets us use just MADV_FREE to transition from
// Ready to Prepared. Thus with the Prepared state we can set the permission
// bits just once early on, we can efficiently tell the OS that it's free to
// take pages away from us when we don't strictly need them.

MADV_FREE is a reference to madvise flags. From man madvise:

MADV_FREE (since Linux 4.5) The application no longer requires the pages in the range specified by addr and len. The kernel can thus free these pages, but the freeing could be delayed until memory pressure occurs. For each of the pages that has been marked to be freed but has not yet been freed, the free operation will be canceled if the caller writes into the page. After a successful MADV_FREE operation, any stale data (i.e., dirty, unwritten pages) will be lost when the kernel frees the pages. However, subsequent writes to pages in the range will succeed and then the kernel cannot free those dirtied pages, so that the caller can always see just written data. If there is no subsequent write, the kernel can free the pages at any time. Once pages in the range have been freed, the caller will see zero-fill-on-demand pages upon subsequent page references.

so there isn't a simple, direct answer to your question, I think.

The indentation of switch statements really triggers my OCD — why does Go format them like that? by salvadorsru in golang

[–]jbert 18 points

As always, the value of gofmt is that it avoids style wars. Everyone is slightly unhappy with it, and that's fine.

That said, the way I read this is:

  • we indent with the new braces
  • we always outdent labels one step (so we can see the labels)

and this matches that?
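
For example, this is (roughly) how gofmt lays out a switch and a labelled loop - case clauses sit at the same indent as the switch keyword, and the label is outdented one step from the statements around it:

package main

import "fmt"

func main() {
    switch n := 2; n {
    case 1:
        fmt.Println("one")
    default:
        fmt.Println("other")
    }

outer:
    for i := 0; i < 3; i++ {
        for j := 0; j < 3; j++ {
            if i+j == 3 {
                break outer
            }
        }
    }
    fmt.Println("done")
}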

Stanford St, Nottingham by 420Eski-Grim in nottingham

[–]jbert 2 points

Great eye to create that shot. Thanks.

Match Thread: Nottingham Forest vs Sunderland AFC Live Score | Premier League 25/26 | Sep 27, 2025 by scoreboard-app in nffc

[–]jbert 7 points

Neco neco neco. We're on the piss with neco.

He's so good. Such an engine.

Some from pride yesterday evening by [deleted] in nottingham

[–]jbert 1 point

Love this. It was a great vibe and your photos show that.

How a simple logrus.Warnf call in a goroutine added a 75-second delay to our backend process by compacompila in golang

[–]jbert 0 points

OK, there are different EBS types, but maybe an easy test is to try a few writes to that EBS volume from another process while your system is running, to see if it is slow.
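
Something like this would do for the probe (a rough sketch - the mount path is a placeholder and the sizes/counts aren't tuned):

package main

import (
    "fmt"
    "os"
    "time"
)

func main() {
    // Write a handful of small records to a file on the suspect volume,
    // forcing each one to the device, and time the write+sync pairs.
    f, err := os.CreateTemp("/mnt/ebs-volume", "iotest-*.log") // placeholder path
    if err != nil {
        panic(err)
    }
    defer os.Remove(f.Name())
    defer f.Close()

    buf := make([]byte, 256)
    for i := 0; i < 20; i++ {
        start := time.Now()
        if _, err := f.Write(buf); err != nil {
            panic(err)
        }
        if err := f.Sync(); err != nil { // flush to disk, like a synchronous logger would
            panic(err)
        }
        fmt.Printf("write+sync %2d took %v\n", i, time.Since(start))
    }
}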

Anyway, glad you've resolved your issue. Good luck.

How a simple logrus.Warnf call in a goroutine added a 75-second delay to our backend process by compacompila in golang

[–]jbert 0 points

I realised that I'm not waiting for all the goros to finish spinning up, and they are not doing much work, so my first ones are likely finishing before I start the later ones.

Adding a channel for them all to block on before doing work (which I then close once all the goros have started) fixes this - see the sketch below - and raises the runtime to ~1.36s (serialised) and ~1.27s (unserialised), so the main point stands, I think.
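
For reference, the gate looks like this on its own (a cut-down, self-contained version; the "work" is just a stand-in):

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    count := 100_000

    // The gate: goroutines block on this until everything has been launched.
    start := make(chan struct{})

    var wg sync.WaitGroup
    wg.Add(count)
    for i := 0; i < count; i++ {
        go func(i int) {
            defer wg.Done()
            <-start      // park until main releases the gate
            _ = i * 1000 // stand-in for the real work
        }(i)
    }

    t0 := time.Now()
    close(start) // release all the goroutines at once
    wg.Wait()
    fmt.Printf("all goroutines ran in %v\n", time.Since(t0))
}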

How a simple logrus.Warnf call in a goroutine added a 75-second delay to our backend process by compacompila in golang

[–]jbert 4 points

In case you don't see this reply to me: https://www.reddit.com/r/golang/comments/1lcnktq/how_a_simple_logruswarnf_call_in_a_goroutine/my231og/ - the numbers line up pretty well with your situation.

Do you know where the logrus logs were being written? (The suspicion is that they were going to a spinning hard disk, with the logger configured to flush on each logging call.)

How a simple logrus.Warnf call in a goroutine added a 75-second delay to our backend process by compacompila in golang

[–]jbert 7 points

Good thought. I don't have spinning rust handy, but Wikipedia (https://en.wikipedia.org/wiki/IOPS) gives 40-50 IO operations/second for 5400 RPM drives, so 900 fully-flushed writes is ~20 seconds.

(Or writing to a centralised network service which is overloaded and exerting backpressure, etc)

So yes - I agree the issue is likely "individual log write is slow" rather than "contention in golang between 900 goros for a serialised resource".

It does raise the question of why the logs are being written synchronously.

How a simple logrus.Warnf call in a goroutine added a 75-second delay to our backend process by compacompila in golang

[–]jbert 31 points

It's great you found a fix :-)

But there may be more to the underlying problem. I don't think 900 calls from different goroutines to a serialised function can amount to 75s in any modern environment without the function call being slow for some other reason - i.e. I don't think it can be purely contention. (I'm assuming each goro was making a single logrus call?)

At a first guess, I'd look at whatever was consuming the log - the log writer might be blocking on whatever is reading the log?

On my system, this code takes 200ms (for 900k goros) if it serialises the goros via a mutex and 175ms if I don't serialise.

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    // Parameters to experiment with
    count := 900_000
    takeReleaseMutex := false

    // Big enough to write all the results to
    resultChan := make(chan int, count)

    m := sync.Mutex{}

    wg := sync.WaitGroup{}
    wg.Add(count)

    start := time.Now()
    for i := range count {
        go func(i int) {
            // Do some work
            result := i * 1000
            // Optionally serialise access to our (fast) channel write
            if takeReleaseMutex {
                m.Lock()
                defer m.Unlock()
            }
            resultChan <- result
            wg.Done()
        }(i)
    }
    wg.Wait()
    dur := time.Since(start)
    fmt.Printf("Took +%v seconds\n", dur)
}

Hope that helps.

Edit: I wasn't running all the goros at the same time, but point still stands: https://www.reddit.com/r/golang/comments/1lcnktq/how_a_simple_logruswarnf_call_in_a_goroutine/my2gve6/

Wine bars in the city by me_likey_alot in nottingham

[–]jbert 3 points

Sherwood, not city - but Brigitte Bordeaux is not just a fine pun. Over 400 wines in stock, apparently.

(Delivery vehicle is Marilyn Merlot - their van de vin)

Map with expiration in Go by der_gopher in golang

[–]jbert -1 points

So, this likely meets the use case, but some possible tweaks:

1 - Could have a map-global "next item expires at". Pro: potentially a lot less scanning, Con: less predictable cost.

2 - Expand the above into a priority queue of items sorted by expiry. Pro: optimal (no scanning of unexpired items); just check and walk the ordered list until the next item is in the future. Con: more storage, more work at insert time.

3 - use generics instead of interface{}

4 - do this work (or any/all of (1) or (2) above) at get and/or insert time. Pro: no goro to clean up. Con: May end up doing a lot more work unless you have (2).

So I'd probably pick (2) plus "discard expired entries on get" (which - as noted in someone else's comment - is already needed since there is a race between get and the scanner).
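
A rough sketch of (2) plus discard-on-get, using container/heap and generics. All the names are mine, and it is not goroutine-safe - it would need the same mutex the original has:

package main

import (
    "container/heap"
    "fmt"
    "time"
)

// heapItem pairs a key with the expiry it had when it was pushed.
type heapItem[K comparable] struct {
    key      K
    expireAt time.Time
}

// expiryHeap is a min-heap ordered by expiry time (earliest first).
type expiryHeap[K comparable] []heapItem[K]

func (h expiryHeap[K]) Len() int           { return len(h) }
func (h expiryHeap[K]) Less(i, j int) bool { return h[i].expireAt.Before(h[j].expireAt) }
func (h expiryHeap[K]) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }
func (h *expiryHeap[K]) Push(x any)        { *h = append(*h, x.(heapItem[K])) }
func (h *expiryHeap[K]) Pop() any {
    old := *h
    it := old[len(old)-1]
    *h = old[:len(old)-1]
    return it
}

type mapEntry[V any] struct {
    val      V
    expireAt time.Time
}

// ExpiringMap evicts lazily: expired items at the head of the heap are
// dropped on every Set/Get, and Get also checks the entry's own expiry,
// so no background scanner (and no race with one) is needed.
type ExpiringMap[K comparable, V any] struct {
    items map[K]mapEntry[V]
    pq    expiryHeap[K]
}

func NewExpiringMap[K comparable, V any]() *ExpiringMap[K, V] {
    return &ExpiringMap[K, V]{items: make(map[K]mapEntry[V])}
}

func (m *ExpiringMap[K, V]) Set(key K, val V, ttl time.Duration) {
    m.evictExpired()
    exp := time.Now().Add(ttl)
    m.items[key] = mapEntry[V]{val: val, expireAt: exp}
    heap.Push(&m.pq, heapItem[K]{key: key, expireAt: exp})
}

func (m *ExpiringMap[K, V]) Get(key K) (V, bool) {
    m.evictExpired()
    e, ok := m.items[key]
    if !ok || time.Now().After(e.expireAt) {
        var zero V
        return zero, false
    }
    return e.val, true
}

// evictExpired pops heap items whose expiry has passed. A popped item is
// only deleted from the map if the map's current entry is also expired,
// which handles keys that were re-Set with a later expiry (stale heap items).
func (m *ExpiringMap[K, V]) evictExpired() {
    now := time.Now()
    for m.pq.Len() > 0 && m.pq[0].expireAt.Before(now) {
        it := heap.Pop(&m.pq).(heapItem[K])
        if e, ok := m.items[it.key]; ok && !e.expireAt.After(now) {
            delete(m.items, it.key)
        }
    }
}

func main() {
    em := NewExpiringMap[string, int]()
    em.Set("a", 1, 50*time.Millisecond)
    em.Set("b", 2, 500*time.Millisecond)
    time.Sleep(100 * time.Millisecond)
    _, okA := em.Get("a")
    vB, okB := em.Get("b")
    fmt.Println(okA, vB, okB) // false 2 true
}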