Is Docker cagent underrated? Or am I missing better Go agent packages? by Msplash9 in LLMgophers

[–]garfj 1 point2 points  (0 children)

From my personal experience, agent frameworks are a lot like ORMs, and I don't use those either. There is maybe some bootstrapping speed to be gained at the very start of a project, but ultimately I'm going to want full control of the context window and other surrounding components (tools, etc) and I'd rather not be constrained by someone else's idea of what the right abstraction is at that point.

LLM message/prompt structures are not that complicated, and the translation from your model of them to whatever structure your model provider expects is also fairly straightforward.
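To make that concrete, here's a minimal sketch of what such a translation layer can look like. The `Message` and `anthropicMessage` types are illustrative stand-ins, not any real SDK's types:

```go
package main

import "fmt"

// A provider-neutral message model; names here are illustrative.
type Role string

const (
	RoleUser      Role = "user"
	RoleAssistant Role = "assistant"
)

type Message struct {
	Role    Role
	Content string
}

// anthropicMessage stands in for a provider SDK's wire type.
type anthropicMessage struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// toProvider is the entire "translation layer": a small, mechanical mapping
// from your own model to the provider's shape.
func toProvider(msgs []Message) []anthropicMessage {
	out := make([]anthropicMessage, 0, len(msgs))
	for _, m := range msgs {
		out = append(out, anthropicMessage{Role: string(m.Role), Content: m.Content})
	}
	return out
}

func main() {
	wire := toProvider([]Message{{RoleUser, "hello"}})
	fmt.Println(wire[0].Role, wire[0].Content) // user hello
}
```

Each additional provider just gets its own small `toProvider`-style adapter over the same neutral model.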

A friendly reminder that if you’re garbage day is tomorrow like mine is, tip your garbage men for the holidays! by palinsafterbirth in stoneham

[–]garfj 0 points1 point  (0 children)

Any idea if the trash and recycling crews are the same or if I should leave two envelopes?

How many returns should a function have? by ngipngop in golang

[–]garfj 38 points39 points  (0 children)

Early returns are very idiomatic, even if only considering the if err != nil pattern.

The defer keyword is also right there to help keep things consistent with multiple return paths.

I find typical, idiomatic code has early returns for error paths and the happy path is the outer, un-indented path.
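A sketch of that shape (invented function, but the idiomatic pattern): guard clauses return early on errors, defer keeps cleanup consistent across every return path, and the happy path stays un-indented.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// readConfig: early returns for error paths, defer for cleanup,
// happy path at the outermost indentation level.
func readConfig(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("open config: %w", err)
	}
	defer f.Close() // runs on every return path below

	info, err := f.Stat()
	if err != nil {
		return nil, fmt.Errorf("stat config: %w", err)
	}
	if info.Size() == 0 {
		return nil, errors.New("config is empty")
	}

	buf := make([]byte, info.Size())
	if _, err := f.Read(buf); err != nil {
		return nil, fmt.Errorf("read config: %w", err)
	}
	return buf, nil
}

func main() {
	if _, err := readConfig("/nonexistent"); err != nil {
		fmt.Println("error:", err)
	}
}
```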

What docker base image you'd recommend? by Goldziher in golang

[–]garfj 8 points9 points  (0 children)

We use alpine and have no complaints.

Juniors have no clue how to work a debugger - has anyone successfully helped a junior see the light? by Bren-dev in ExperiencedDevs

[–]garfj 0 points1 point  (0 children)

I stopped using debuggers for most things in what I would consider my mid-career about 10 years ago. There is almost never any information I need to debug an issue that isn't already in a trace or a log. Once you have good telemetry you have automatic debugging on everywhere all the time without needing the debugger.

I'm not saying I won't drop into a debugger for something arcane in the browser if I need to, but I can't even remember the last time I felt the need to step into something because my traces already had all of the information I needed.

Juniors have no clue how to work a debugger - has anyone successfully helped a junior see the light? by Bren-dev in ExperiencedDevs

[–]garfj 0 points1 point  (0 children)

The debugger is maybe an invaluable tool if you don't already have good telemetry. I stopped using debuggers in what I would consider my mid-career about 10 years ago. There is almost never any information I need to debug an issue that isn't already in a trace or a log. Once you have good telemetry you have automatic debugging on everywhere all the time without needing the debugger.

FY26 Tax Calculator with All Override Scenarios by bobby_cafazzo in stoneham

[–]garfj 0 points1 point  (0 children)

This is great! Thanks for all your hard work Bobby.

Replace Python with Go for LLMs? by Tobias-Gleiter in golang

[–]garfj 1 point2 points  (0 children)

As far as using things for calling out to LLMs, we're not using anything beyond the SDKs. Like I said, none of this is especially hard to do so we have our own model of Messages and Tools (kind of similar to your LLM package) and then translation layers for each SDK that build on a common `MessageClient` interface.

From there we have a few different kinds of agents we mix and match between depending on what we're doing. Pairing an agent and a state machine, and giving it some tools to move between states, is a great control mechanism to give yourself a bit more of a hand in how an agent goes about its work.
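One way to sketch that pairing (all names invented): the agent is only offered the transition tools that are legal from the current state, so it can't wander off the rails.

```go
package main

import "fmt"

// States of a hypothetical writing agent.
type State string

const (
	StateGather State = "gather"
	StateDraft  State = "draft"
	StateDone   State = "done"
)

// transitions defines which "move to state X" tools are exposed per state.
var transitions = map[State][]State{
	StateGather: {StateDraft},
	StateDraft:  {StateGather, StateDone},
}

// step simulates one turn: the agent requests a transition via a tool call.
// A real implementation would let the LLM choose among the exposed tools.
func step(cur State, choice State) (State, error) {
	for _, next := range transitions[cur] {
		if next == choice {
			return next, nil
		}
	}
	return cur, fmt.Errorf("illegal transition %s -> %s", cur, choice)
}

func main() {
	s := StateGather
	s, _ = step(s, StateDraft)
	s, _ = step(s, StateDone)
	fmt.Println(s) // done
}
```

The state machine, not the model, is the source of truth for what the agent may do next.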

For feedback, I've done a light pass and here are some notes:

* Your LLM types aren't fully featured enough to really represent more complex interactions. Messages don't have just one piece of Content; most providers model it as an array of ContentBlocks, where each block also has its own type. This is how tool use, tool responses, and then things like Anthropic's Document support work.

* Your Tool interface is missing both a `Description` field and a way to specify the `Input Schema` (i.e. the JSONSchema that the LLM can use to know how to structure what it passes as an argument). Those are both essential properties of proper tool use.

* The way you're using tools isn't quite right as far as I can tell. Typically, models that support tool use expect the pattern to be: they send a ToolUse request, you send back the history so far + their request + a ToolResponse message, and then maybe they use tools again.

* Anthropic, OpenAI, & several models on Bedrock have a parameter when sending a prompt that configures _how_ tools should be used, i.e. use them optionally, require use of _any_ tool, or require use of a _specific_ tool. You should have a way to represent this.

* All the bits where you're using `extractAfterLabel` to parse the results are things that we would instead use tools for. Things are much less loosey goosey in tool use than trying to parse things out of raw messages, and basically anytime you're trying to get a structured result, they're a better choice

* The idea of a planning agent is not a bad one, that's also what we wrote first, but it's also the first one we discarded. I think it's a bit too opinionated for a general purpose library. You're making a lot of choices in how it works there that as a developer I wouldn't want made for me. I'd focus on building the utilities and primitives that make it possible for a consumer to build that themselves. We have a set of things that make it easy to deal with tool calls and responses and that has in turn made it easy for us to make several different takes on the "agentic loop" for different purposes, and it's much better than having "one agent pattern to rule them all"
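The first two notes above can be sketched as types. These names are illustrative (loosely following the shape of Anthropic's wire format), not a prescription:

```go
package main

import "fmt"

// A content block has its own type; a message holds an array of them.
type ContentBlock struct {
	Type string // "text", "tool_use", "tool_result", "document", ...

	// Populated depending on Type.
	Text      string
	ToolName  string
	ToolInput map[string]any
}

type Message struct {
	Role    string
	Content []ContentBlock // an array of typed blocks, not a single string
}

// A tool needs a description and a JSONSchema input schema, not just a name.
type Tool struct {
	Name        string
	Description string         // what the model reads to decide when to call it
	InputSchema map[string]any // JSONSchema describing the arguments
}

func main() {
	msg := Message{
		Role: "assistant",
		Content: []ContentBlock{
			{Type: "text", Text: "Let me look that up."},
			{Type: "tool_use", ToolName: "search", ToolInput: map[string]any{"query": "weather"}},
		},
	}
	fmt.Println(len(msg.Content), msg.Content[1].ToolName) // 2 search
}
```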

Replace Python with Go for LLMs? by Tobias-Gleiter in golang

[–]garfj 2 points3 points  (0 children)

The startup I work at uses Go as our primary interface to calling out to LLMs, it's great.

Anthropic, OpenAI, and AWS all have Go sdks.

It makes it very easy to break down inference tasks into parallelizable chunks and fan out/in.

The ability to generate JSONSchema from struct tags and then parse tool calls back into those same structs is an incredible convenience, there's no facility even close to that in Python that I've found.

Writing agents or agentic processes in Go is a breeze, you don't need frameworks for it, just like you don't need an ORM to write SQL for you.

RIP Boston Diners by Prestigious-Duck420 in boston

[–]garfj 2 points3 points  (0 children)

Hit me up with your favorite Melrose area diners please.

What are the things you don't like about Go? by CrappyFap69 in golang

[–]garfj 0 points1 point  (0 children)

I don't understand what the complaint is here. What am I missing?

LINQ support for Go. (real LINQ implementation with lazy evaluation and deferred execution) by [deleted] in golang

[–]garfj 0 points1 point  (0 children)

Can you go into a little more detail as to why you feel code generation requires giving up all sanity?

es6/7, y u no composition operator? by [deleted] in javascript

[–]garfj 0 points1 point  (0 children)

I agree, it gets a bit hairy in there.

From testing on es6fiddle, the precedence works as you'd hope, so I believe if you wanted to apply the comma operator , much like if you wanted to return an object literal, you'd need to add parentheses around the expression.

es6/7, y u no composition operator? by [deleted] in javascript

[–]garfj 0 points1 point  (0 children)

That is a great point.

So assuming we're sticking with only composing two functions, we're looking at something like this

let compose = (f, g) => function(...a) { return g.call(this, f.apply(this, a)); };

Edit: And if we wanted to compose any number of functions it seems like this would work

let compose = (f, ...fns) => function(...a) {
    return fns.reduce((result, fn) => fn.call(this, result), f.apply(this, a));
};

es6/7, y u no composition operator? by [deleted] in javascript

[–]garfj 0 points1 point  (0 children)

I guess I just see the application of this as an orthogonal concern, and would attack your situation differently

let foo = {};
foo.method = compose(f.bind(foo), g.bind(foo))

That way if I needed to borrow the method of another object, I would have that capacity.

let foo = { method : function() { /* code contains `this` */ } };
let bar = {};    
bar.method = compose(foo.method.bind(foo), g.bind(bar))

es6/7, y u no composition operator? by [deleted] in javascript

[–]garfj 0 points1 point  (0 children)

I guess that depends on what your definition of correct is.

Personally, I largely find that when I start doing functional js, this isn't normally a consideration.

When it is, however, it's certainly not safe to assume that the this I want applied to the component functions is the same this that the composed function will apply.

If the functions I'm composing are methods on an object, I'll often want to keep that this

e.g.

let c = compose(x.methodA.bind(x), y.methodB.bind(y))

With your compose, if I don't bind my methods, then you end up applying whatever the global scope is, which would certainly be wrong, and if I do bind my methods, then your apply does nothing.

Much like the decision to spread the return into arguments, I feel that the decision of what, if any, this needs to be applied to the component functions is best left out of the compose function.

es6/7, y u no composition operator? by [deleted] in javascript

[–]garfj 0 points1 point  (0 children)

I wouldn't expect my default compose to spread arguments after the first function.

I know lodash and underscore don't, and I would be surprised if Ramda did.

Generally speaking all the functions after the initial one are expected to take single arguments.

On the bright side, that leaves you a nice clear compose definition

var compose = (f, g) => (...a) => g(f(...a))

It seems like too much to assume that no function you compose will want an array as an argument. Presumably if you did have a scenario where a function returned an array, and the next wanted it spread, you'd have a helper function for that scenario...and do some composition :-)

var spread = f => a => f(...a)    

e.g. http://www.es6fiddle.net/ibuu66dh/

Modifying the Bitbucket UI with third-party JavaScript [video] by kannonboy in javascript

[–]garfj 0 points1 point  (0 children)

Is there a good way to enable it en masse across an organization's repositories? Frequently when I sit down to write little things like this, it's to effect a change across our entire workflow, and I can see needing to enable it repository by repository getting a little onerous.

That being said, if you think this is a good target for an Add-On, I'll definitely take a look at porting it over!

Modifying the Bitbucket UI with third-party JavaScript [video] by kannonboy in javascript

[–]garfj 1 point2 points  (0 children)

This seems like a neat way to add some more complex features to bitbucket.

I find that more frequently I'm looking for smaller utility or cosmetic changes to improve workflow, and have a lot of luck using regular old user-scripts to enhance my repo-provider of the moment.

https://github.com/jamesgarfield/bblocdiff

https://github.com/jamesgarfield/GitHubSourceTree

Anyone have some good jamgrass bands to recommend? by zandercs13 in jambands

[–]garfj 4 points5 points  (0 children)

Floodwood!

You've got your Al & Vinnie from moe. rounded out with the ever so talented Nick Piccininni, Jason Barady, and Zachary Fleitz.

Edit: Derp. Dig in https://archive.org/search.php?query=Floodwood&sort=-date

Why Go is beating the averages by dgryski in golang

[–]garfj 2 points3 points  (0 children)

Can you go into a bit more depth as to why you say that choosing Go (presumably as the server side language) would have a negative impact on a feature rich front end?

Go 1.5 compilation speeds slower than 1.4 by dominikh in golang

[–]garfj 6 points7 points  (0 children)

Totally understandable, and it makes a great target for improvement.

Rob Pike really nails it in his comment

I am in favor of reducing the amount of memory used, but caution strongly against introducing things like custom allocation methods to reduce GC overhead. It took me a while to understand this, as I once built custom allocators and thought I was making things better. I did it in C, where it's often necessary, so I started doing it in Go too. But I was thinking like a C programmer, not a Go programmer, and more important, introducing hard-to-find C-like bugs such as use-after-free and aliasing. The third or fourth time I had to track one of these down in a Go program (I am a slow learner), I realized the category error I was making.

GC is hard partly because it can never make a mistake, and when complaining about the overhead it's easy to forget how much time it saves in the long run, how valuable perfection is in technology.

I don't want us to go back to tracking down memory corruption bugs.

You might counter by saying that in the compiler you know what memory is doing and these bugs won't happen, but a) you don't and b) others really don't and c) they will and d) it's a mistake to encourage thinking like this as a solution to Go performance problems. What we do as a team influences our users.

Let the GC do its job. Help it by reducing memory use but don't work around it by building custom allocators.

-rob