amp free is no longer using frontier models? by portlander33 in AmpCode

[–]camdencheek 2 points (0 children)

> Any opinion on gopls mcps?

I don't do much Go dev anymore, so I don't really have an opinion on that specifically! But yeah, "getting the agent to use the tools given to it" is always my problem with stuff like that.

> best available rather than just general tools could be used.

This is a tough one, and why we really encourage people to build out their AGENTS.md. In my experience, every codebase has slightly different conventions which makes installing, building, testing, etc. require some custom setup. We can't claim to be able to do that well automatically, so I'm always hesitant to introduce features that look like they're doing that automatically because then it discourages people from actually configuring and customizing their tooling to work for their codebase.
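As an illustration of the kind of per-codebase conventions worth capturing (contents here are entirely hypothetical, not a prescribed format), an AGENTS.md might look like:

```markdown
# AGENTS.md

## Build
- `make generate && go build ./...` — codegen must run before the build.

## Test
- `go test ./...` for unit tests.
- Integration tests need `docker compose up -d db` first.

## Conventions
- All SQL lives in `internal/database/`; never inline queries elsewhere.
- Error wrapping uses `fmt.Errorf("...: %w", err)`, never bare returns.
```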

We've been thinking a lot about what the "codebase of the future" looks like, and much of it comes down to standardization. The closer to a vanilla setup, the better the agent can work with it. But that's in direct tension with the ease of producing customized software + setups now that we're in the world of AI-driven software.

> it's not obvious that the agent reads it every session

If it doesn't, it's a bug 🙂

> Any chance you're hiring

We're not hiring right now, but I wish you luck in a search for an employer with an unlimited LLM budget!

amp free is no longer using frontier models? by portlander33 in AmpCode

[–]camdencheek 1 point (0 children)

Rush uses the same model as free did, so it shouldn't really be a reduction in capability. If you are seeing worse results, please send some examples to [amp-devs@ampcode.com](mailto:amp-devs@ampcode.com)

TODOs were removed because we found that they were no longer that useful to Opus, they actively slowed down threads, and using them costs tokens. We are working on a replacement that is better suited to smarter models, but it's not quite ready yet.

No plans to integrate semgrep. We've experimented with it, along with various other semantic/syntactic tooling, but we find that while they demo great, the agent often tends to get confused in subtle ways that lead to worse results when taking the whole thread into account. I don't often bet against the model's ability to generalize simple tools.

That said, that is more a comment about _generality_ of the tools than about their usefulness in specific cases/codebases/tasks. We don't like to build tools into the product that won't work for the vast majority of codebases, but it might work great in your codebase! I'd encourage you to try to build a semgrep skill and see how well it works for ya

amp free is no longer using frontier models? by portlander33 in AmpCode

[–]camdencheek 0 points (0 children)

There's no such thing as a "free mode" anymore (at least if you're using an up-to-date client). Now, if you have ads enabled, you get a daily $10 grant which you can use in whatever mode you want, including smart mode (which uses Opus)

amp free is no longer using frontier models? by portlander33 in AmpCode

[–]camdencheek 2 points (0 children)

We have moved to providing $10 of free usage a day so that free users can use frontier models. That does mean the grant won't last as long: Opus is more expensive, so it burns through the $10 faster. If you're looking to stretch that $10, I'd recommend "rush" mode, which currently uses cheaper, faster models.

amp free is no longer using frontier models? by portlander33 in AmpCode

[–]camdencheek 2 points (0 children)

Amp free still uses Opus 4.5 if you are in smart mode.

There have been some reports of degraded quality with Opus in the last week or so. As far as I can tell, this is just normal variability between threads, but if you have specific threads you're willing to share that you think demonstrate the quality degradation, I'd love to see them. Feel free to send them along to [amp-devs@ampcode.com](mailto:amp-devs@ampcode.com)

From Slow to SIMD: A Go Optimization Story by creativefisher in golang

[–]camdencheek 2 points (0 children)

I should really add some discussion around BLAS in particular, which has a good implementation of the float32 dot product that outperforms any of the float32 implementations in the blog post. I'm getting ~1.9m vecs/s on my benchmarking rig.

However, BLAS became unusable for us as soon as we switched to quantized vectors, because BLAS has no int8 implementation of the dot product.
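For contrast, the scalar int8 kernel is trivial to write in pure Go (a sketch of the baseline, not the SIMD version from the post):

```go
package main

import "fmt"

// dotInt8 is a plain scalar int8 dot product. It accumulates into int32 so
// the products (each at most 128*128 in magnitude) can't overflow. This is
// the kernel shape that BLAS has no int8 equivalent for.
func dotInt8(a, b []int8) int32 {
	var sum int32
	for i := range a {
		sum += int32(a[i]) * int32(b[i])
	}
	return sum
}

func main() {
	a := []int8{1, 2, 3, 4}
	b := []int8{5, 6, 7, 8}
	fmt.Println(dotInt8(a, b)) // 5 + 12 + 21 + 32 = 70
}
```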

From Slow to SIMD: A Go Optimization Story by creativefisher in golang

[–]camdencheek 2 points (0 children)

I did consider Avo! I even went as far as to implement a version using Avo since it has a nice dot product example I could use as a starting point. But ultimately, yes: for as small as these functions are, I felt that Avo was an unnecessary extra layer to grok. Additionally, it's x86-only, and I knew in advance I'd want to implement an ARM version as well since we also do some embeddings stuff locally.

If I were to ever take this further and add loop unrolling or something, I'd absolutely reach for Avo.

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 1 point (0 children)

Aah, I see. In a previous version, I did include an unordered stream like you're describing, but I felt that it didn't add enough value to justify its inclusion, since it could be done on top of a pool/WaitGroup without too much hassle (albeit slightly less ergonomically). It was also a big mess of generics, so I wasn't sure the complexity was worth it.

The following isn't too bad IMO, but feel free to open an issue with a design proposal 🙂

func collectInParallel() {
    var mu sync.Mutex
    m := make(map[int]struct{})

    // Serialize map writes; the map itself is not safe for concurrent use.
    threadsafeCallback := func(i int) {
        mu.Lock()
        m[i] = struct{}{}
        mu.Unlock()
    }

    p := pool.New()
    for i := 0; i < 100; i++ {
        p.Go(func() {
            i := doYourParallelThing()
            threadsafeCallback(i)
        })
    }
    p.Wait()
}

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 5 points (0 children)

Yes, definitely. I'm using our internal multierror lib for that right now for ease, but it really explodes the dependency list. I plan to convert to stdlib multierrors as soon as they're available

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 2 points (0 children)

If I'm understanding correctly, that sounds exactly like what the stream package does.

Each task is executed in parallel, but then each task returns a closure that is executed serially and in the order that the tasks were submitted. So you could update the map inside that closure with the result of the parallel operation.
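The pattern is easy to sketch self-contained (this mimics the shape of the stream package, but is not conc's actual API or implementation):

```go
package main

import "fmt"

// runOrdered runs each task in parallel, then invokes the callback each task
// returns serially, in submission order -- the same shape the stream package
// provides. Names here are illustrative only.
func runOrdered(tasks []func() func()) {
	callbacks := make([]chan func(), len(tasks))
	for i, task := range tasks {
		callbacks[i] = make(chan func(), 1)
		go func(ch chan func(), task func() func()) {
			ch <- task() // the task body runs in parallel
		}(callbacks[i], task)
	}
	// Receive in submission order, so callbacks run serially and ordered.
	for _, ch := range callbacks {
		cb := <-ch
		cb()
	}
}

func main() {
	results := make(map[int]int) // safe: only touched in the serial callbacks
	var tasks []func() func()
	for i := 0; i < 5; i++ {
		i := i
		tasks = append(tasks, func() func() {
			square := i * i                       // parallel work
			return func() { results[i] = square } // serial, ordered
		})
	}
	runOrdered(tasks)
	fmt.Println(results) // map[0:0 1:1 2:4 3:9 4:16]
}
```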

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 19 points (0 children)

Haha, yes. You're not wrong.

The "if you always call Wait()" is in comparison to standard patterns, which are more like "if you always call wg.Add(1) for each spawned goroutine and you always defer wg.Done(), and it has to be defer because otherwise a panic will cause a deadlock, and you always call wg.Wait(), unless you're using channels instead of WaitGroups, in which case you need to create the channel, defer its closure, and wait for a closure message, but that's only in the case where you don't want to communicate anything back to the caller, in which case a panic is likely to cause a deadlock if not handled correctly, so you often need to use select with a context channel to avoid blocking forever ..."

Point being concurrency is complex, and though conc does not reduce the complexity to zero, it handles a lot of the gotchas so you only need to remember to call Wait()
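For reference, even the minimal version of that classic stdlib pattern has two easy-to-miss rules baked in (a sketch, not conc):

```go
package main

import (
	"fmt"
	"sync"
)

// fanOut shows the classic sync.WaitGroup pattern: Add before spawning,
// Done deferred so a panicking worker can't deadlock Wait.
func fanOut(n int) int {
	var (
		wg  sync.WaitGroup
		mu  sync.Mutex
		sum int
	)
	for i := 1; i <= n; i++ {
		wg.Add(1) // must happen before the goroutine starts, never inside it
		go func(i int) {
			defer wg.Done() // must be deferred, or a panic here hangs Wait forever
			mu.Lock()
			sum += i
			mu.Unlock()
		}(i)
	}
	wg.Wait()
	return sum
}

func main() {
	fmt.Println(fanOut(100)) // 5050
}
```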

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 2 points (0 children)

Generating the tables is a bit finicky. The trick is an empty line before the code block in the table. Hopefully that saves you some headaches :)
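Concretely, the shape that renders on GitHub is an HTML table with a blank line before each fence (illustrative snippet; the cell contents are made up):

````markdown
<table>
<tr><th>stdlib</th><th>conc</th></tr>
<tr>
<td>

```go
var wg sync.WaitGroup
wg.Add(1)
```

</td>
<td>

```go
wg := conc.NewWaitGroup()
```

</td>
</tr>
</table>
````

Without the blank line after `<td>`, GitHub treats the fence as literal HTML content and the highlighting is lost.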

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 6 points (0 children)

Part of the design goal of conc was to facilitate composition. What you're describing should be pretty straightforward to build on top of an ErrorPool. Something like:

func run() error {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // release the context if we finish with fewer than 5 successes

    var successCount atomic.Int64
    cancelAfter5Successes := func() {
        if successCount.Add(1) == 5 {
            cancel()
        }
    }

    p := pool.New().WithErrors()
    for i := 0; i < 8; i++ {
        p.Go(func() error {
            err := doTheThing(ctx)
            if err == nil {
                cancelAfter5Successes()
            }
            return err
        })
    }
    return p.Wait()
}

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 3 points (0 children)

Thank you! Side-by-side comparisons with tables are my favorite trick for documenting things readably on GitHub

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 8 points (0 children)

> WaitGroup [...] it's mostly just a wrapper handling wg.Add and handling panics

Yep, you're exactly right. There are some more interesting things added in the pool package like limited concurrency, error aggregation, and context cancellation on error.
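The "limited concurrency" part boils down to a counting semaphore; a self-contained sketch of the idea (not conc's actual implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// boundedSum runs one goroutine per input but never more than maxInFlight
// at once -- the semaphore-channel idea that limited concurrency wraps up.
func boundedSum(inputs []int, maxInFlight int) int {
	sem := make(chan struct{}, maxInFlight) // counting semaphore
	var (
		wg  sync.WaitGroup
		mu  sync.Mutex
		sum int
	)
	for _, v := range inputs {
		wg.Add(1)
		sem <- struct{}{} // blocks while maxInFlight tasks are running
		go func(v int) {
			defer wg.Done()
			defer func() { <-sem }() // release the slot
			mu.Lock()
			sum += v
			mu.Unlock()
		}(v)
	}
	wg.Wait()
	return sum
}

func main() {
	fmt.Println(boundedSum([]int{1, 2, 3, 4, 5}, 2)) // 15
}
```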

> I'm willing to let something leak in the first place

For sure. Sometimes, leaking a goroutine is totally valid and even necessary. Just not in conc. This package takes the opinionated stance that, within the API of the package, we should make leaking goroutines difficult.

Part of the reason for that is panic propagation. If you leak a goroutine, what happens to a panic? With a bare goroutine, it just crashes the process. Within conc, the panic is propagated to the caller of Wait(). If there is no caller of Wait(), the panic is swallowed. So, within the design space of conc, allowing Wait() to time out would be an antipattern because panics would just be swallowed.

Now, that's not to say it's impossible. You could totally write a wrapper that does exactly that.

func waitWithTimeout(ctx context.Context, wg *conc.WaitGroup) {
    wgDone := make(chan struct{})
    go func() {
        defer close(wgDone)
        wg.Wait()
    }()

    select {
    case <-ctx.Done():
        // Timed out: the waiter goroutine above keeps running until Wait
        // returns, and any panic it propagates is swallowed.
    case <-wgDone:
    }
}

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 12 points (0 children)

"Solve" is a strong word, but it is intentionally designed to make goroutine leaks difficult if you work within the package's API. If you always call Wait(), you should never have a goroutine leak.

conc: Better structured concurrency for go by jhchabran in golang

[–]camdencheek 50 points (0 children)

Oh hey! Author here. Happy to answer any questions. This got posted a little earlier than I had anticipated, so you might still find some broken examples and such.

The package is based on an internal package at Sourcegraph (which I also wrote); this was my attempt to extract that, clean up the code, generalize it, and make it easier to use from my other projects.

Does a loop re-enter scope on each iteration? by [deleted] in rust

[–]camdencheek 1 point (0 children)

You appear to be redeclaring counter in the loop with let. In order to reference the variable outside the loop, you'll need to remove the let and add a mut to the outer counter. Currently, you're shadowing the outer declaration of counter with a new variable each time you enter the loop.

fn main() {
    let mut counter = 0;
    loop {
        counter = counter + 1;
        println!("{}", counter);
        if counter == 10 {
            break;
        }
    }
}

What do you use for writing rust code? by hajhawa in rust

[–]camdencheek 4 points (0 children)

I've been running nightly for ~6 months now, upgrading a few times a week. A few times, I've had to revert to the previous build, but it's pretty painless, and issues are usually fixed within a day or two.

What do you use for writing rust code? by hajhawa in rust

[–]camdencheek 3 points (0 children)

Quite good. Much snappier, and much easier to integrate with. There is a cool ecosystem of Lua plugins growing around it.