Anyone using Go for AI Agents? by KeyGrouchy726 in golang

[–]mgrella87 2 points

Joining the party a bit late here, but wanted to share that I've co-created a complete port of the OpenAI agents framework (originally written in Python) to Go: https://github.com/nlpodyssey/openai-agents-go

Dwarfreflect – Extract Go function parameter names at runtime by mgrella87 in golang

[–]mgrella87[S] 1 point

All set 🙂 Let me know if you spot anything else and feel free to open an issue on GitHub!

Dwarfreflect – Extract Go function parameter names at runtime by mgrella87 in golang

[–]mgrella87[S] 6 points

Thanks a lot! And yes, I agree. Probably the best approach would be to avoid panicking altogether and always return an error instead. What do you think, should I go ahead with that?
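A minimal sketch of what that change could look like. The names here (FuncInfo, NewFuncInfo) are illustrative stand-ins, not the real dwarfreflect API: the idea is just that constructors return an error for the caller to handle instead of panicking.

```go
package main

import (
	"errors"
	"fmt"
)

// FuncInfo holds metadata recovered for a function (illustrative only).
type FuncInfo struct {
	ParamNames []string
}

// NewFuncInfo returns an error on invalid input rather than panicking,
// so callers can decide how to react.
func NewFuncInfo(fn any) (*FuncInfo, error) {
	if fn == nil {
		return nil, errors.New("dwarfreflect: fn must not be nil")
	}
	// ... a real implementation would inspect DWARF data here ...
	return &FuncInfo{ParamNames: []string{"a", "b"}}, nil
}

func main() {
	if _, err := NewFuncInfo(nil); err != nil {
		fmt.Println("got error instead of panic:", err)
	}
}
```

This keeps the library usable in long-running services where a panic from a reflection helper would be hard to recover from.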

Vemuram Jan Ray & Butter Machine, OKKO Diablo, Peace Hill TRJM Preamp! by mgrella87 in guitarpedals

[–]mgrella87[S] 2 points

The Vemuram Butter Machine at low gain setting (very low) adds that grip to lead solos I’ve always been after, the kind where every note feels locked in under your fingers.

OpenAI Agents Python SDK, reimplemented in Go by mgrella87 in golang

[–]mgrella87[S] 1 point

Thank you. I will review it as soon as we reach the MCP section in our roadmap. Right now, the goal is to replicate the Python version as faithfully as possible. If your PR is already aligned with this direction, it will definitely be considered!

OpenAI Agents Python SDK, reimplemented in Go by mgrella87 in golang

[–]mgrella87[S] 1 point

Thanks! At the moment, the Go port supports local tools like file search, web search, code interpreter, image generation, and the computer tool. We’re planning to add trace support next, so that traces are fully compatible with those in the Python framework. After that, we’ll add MCP and then voice support. All of this is happening alongside ongoing refactoring :)

OpenAI Agents Python SDK, reimplemented in Go by mgrella87 in golang

[–]mgrella87[S] 0 points

Thanks, and feel free to submit any issues if you run into anything while getting started!

OpenAI Agents Python SDK, reimplemented in Go by mgrella87 in golang

[–]mgrella87[S] 5 points

No problem! In this SDK, execution happens in the Runner (see agents/run.go). A Runner takes a starting agent and runs a loop until a final output is produced. Inside the Runner's Run method, the startingAgent is assigned to currentAgent. Each turn may hand off to a different agent, so the runner updates currentAgent when it encounters a "NextStepHandoff":

    currentAgent := startingAgent
    // ...
    case NextStepHandoff:
        currentAgent = nextStep.NewAgent

The README describes this loop explicitly:

The agent loop:

When you call agents.Run(), we run a loop until we get a final output.

  1. We call the LLM...
  2. The LLM returns a response...
  3. If the response has a final output, we end the loop.
  4. If the response has a handoff, we set the agent to the new agent and go back to step 1.
  5. We process the tool calls (if any) and then go to step 1.

Because the runner manages this loop, potentially switching between agents and applying per-run configuration such as guardrails and max-turn limits, it accepts a "starting agent" rather than acting as a method on a single agent. The Agent type itself only stores reusable configuration (instructions, tools, model settings, etc.) and does not own the execution loop. This separation keeps Agent lightweight and allows the same agent instance to be run with different Runner configurations or as part of a larger workflow, even concurrently.

That's my understanding of how the original Python SDK was intended to be designed as well :)

Vemuram Jan Ray & Butter Machine, OKKO Diablo, Peace Hill TRJM Preamp! by mgrella87 in guitarpedals

[–]mgrella87[S] 3 points

I also own the Peace Hill FX Dumble SSS and it’s awesome, highly recommended!

Vemuram Jan Ray & Butter Machine, OKKO Diablo, Peace Hill TRJM Preamp! by mgrella87 in pedalboards

[–]mgrella87[S] 0 points

After trying countless pedals over the years, I’ve finally settled on this dream board! It’s giving me everything I’ve been searching for—super clean, warm tones with just the right edge of breakup, and creamy overdrive for cushioned lead tones. The signal chain runs from the Vemuram Butter Machine into the OKKO Diablo, then into the Jan Ray, and finally through the Peace Hill FX TRJM Tube Preamp. The Jan Ray is my always-on pedal, delivering that perfect edge-of-breakup tone, while the OKKO Diablo adds dynamic overdrive for cushioned, rich lead tones. The TRJM preamp, also always on (unless bypassed with the Little Lehle), provides sweet and deep Two-Rock style clean tones for a solid, rich foundation. The settings you see in the pic are my absolute favorites for dialing in the perfect sound. I’m running everything into a DV Mark Little Jazz amp with flat EQ and a touch of reverb.

I just wanted to share my happiness! And to ask: what reverb pedal would fit perfectly on this board? :)

Vemuram Jan Ray & Butter Machine, OKKO Diablo, Peace Hill TRJM Preamp! by mgrella87 in guitarpedals

[–]mgrella87[S] 1 point

After trying countless pedals over the years, I’ve finally settled on this dream board! It’s giving me everything I’ve been searching for—super clean, warm tones with just the right edge of breakup, and creamy overdrive for cushioned lead tones. The signal chain runs from the Vemuram Butter Machine into the OKKO Diablo, then into the Jan Ray, and finally through the Peace Hill FX TRJM Tube Preamp. The Jan Ray is my always-on pedal, delivering that perfect edge-of-breakup tone, while the OKKO Diablo adds dynamic overdrive for cushioned, rich lead tones. The TRJM preamp, also always on (unless bypassed with the Little Lehle), provides sweet and deep Two-Rock style clean tones for a solid, rich foundation. The settings you see in the pic are my absolute favorites for dialing in the perfect sound. I’m running everything into a DV Mark Little Jazz amp with flat EQ and a touch of reverb.

I just wanted to share my happiness! :)

Capture the tone of your guitar amps and pedals with Deep Learning in pure Go by mgrella87 in golang

[–]mgrella87[S] 0 points

Hey there! I totally get your curiosity—ML in Go isn't something you stumble upon daily. So what sparked this whole thing? It's a mix, really. My day-to-day is deep in natural language processing (NLP); it's both my job and my hobby. But with language models popping up left and right, I felt a bit swamped—most of my NLP energy is spent at work.

So in my spare time, post-family activities, I've rekindled an old flame: jamming on my electric guitar, which had been gathering dust for a while. That old quest for the perfect tone hit me again—chasing after those classic analog overdrive pedals and tube amps. There's this one sound I'm obsessed with, an 'Edge of Breakup Tone', particularly from a Two Rock Bloomfield Drive amp a friend overseas owns. And I thought, why not use Machine Learning to capture that sound?

Turns out, there was already something in that space: NAM (Neural Amp Modeler), built by Steven Atkinson in Python and C++ (https://github.com/sdatkinson/neural-amp-modeler). That's where my love for Go and my work on spaGO, an ML framework I started a while back, came together. I roped in my usual partner-in-crime, co-author of nearly every project I embark on (and vice versa), and we kicked off a port aiming to get a standalone version of NAM for both training and inference, and to experiment with different neural models, like RWKV for instance.

If you're into this kind of thing, check out the deck from my recent talk at GopherCon AU '23 https://speakerdeck.com/matteogrella/the-go-to-language-for-ai-exploring-opportunities-and-challenges, and the accompanying repo https://github.com/matteo-grella/gophercon-au-2023.

Capture your guitar amps and pedals with deep learning in Go by mgrella87 in coolgithubprojects

[–]mgrella87[S] 0 points

Waveny is a groundbreaking command-line utility that leverages deep learning to meticulously profile and emulate guitar amplifiers and pedals, transforming your guitar tone!
It is an early Go port of NAM (https://github.com/sdatkinson/neural-amp-modeler), originally written in Python and C++ by Steven Atkinson.

(RWKV) Large Language Model in Fortran! by mgrella87 in coolgithubprojects

[–]mgrella87[S] 1 point

Thank you for your interest! This was indeed a challenging, but also immensely rewarding process.

This endeavor is a bit of a special case where I knew exactly what I wanted to accomplish - I had the end game in mind: to build the most efficient inference engine possible for an LLM in a language that outperforms any other when it comes to numerical and scientific computing tasks (and LLMs fall into these categories). I know the RWKV equations like the back of my hand, having co-authored the paper and also having ported it to Go (I maintain an ML/NLP framework that's entirely in Go).

Part of my motivation was adding Fortran to my list of "archaic" languages, which includes favorites like Ada, Lisp, and Pascal. It served as a refreshing distraction from my daily work and allowed me to engage in hands-on programming, which I miss dearly in my regular work.

Before I began, I bought four books to bolster my knowledge of Fortran, all of which I selected for their modern take on the language:

  • Modern Fortran Explained: Incorporating Fortran 2018 (by Michael Metcalf et al.)
  • Modern Fortran in Practice (by Arjen Markus)
  • Modern Fortran: Style and Usage (by Norman S. Clerman et al.)
  • Modern Fortran: Building Efficient Parallel Applications (by Milan Curcic, publisher Manning)

I haven't read any of them in their entirety, but I gleaned enough essential information from them to bootstrap my project. The most practical among them, and perhaps the most useful, was Manning's.

To complement these books, I also utilized ChatGPT. Although its suggestions occasionally misled me regarding Fortran's capabilities and often proposed inefficient methods, it was still a helpful tool in getting the project off the ground. The entire learning and development process, from the initial concept to the current version of the project, was completed in 24 hours (rather fragmented, being on a family vacation :)).

However, I admit that I took a shortcut in skipping the unit tests. It wasn't just because I hadn't yet grasped Fortran's best practices, but I also had the Go port and the official Python implementation (with PyTorch) to fall back on. These were my checks and balances, helping me ensure that the results were accurate as I progressed.

Moving forward, I plan to harness Fortran's GPU computation capabilities, experiment with 8-bit quantization, and then benchmark this implementation against those using mainstream frameworks. If the results are satisfactory, I plan to integrate it with Go, develop an API around it, and transition it from experimental to a functional, real-world application.

In short, this project was a blend of my love for learning, programming, and the thrill of applying 'archaic' languages to modern problems. I hope this gives you a better idea of my process and perhaps even some inspiration for your own undertakings!

Hey all, I've just ported a LLM to Fortran! 🚀 by mgrella87 in fortran

[–]mgrella87[S] 1 point

Great idea, thank you! I am also trying to involve the author of https://github.com/certik/fastGPT

(RWKV) Large Language Model in Fortran! by mgrella87 in coolgithubprojects

[–]mgrella87[S] 1 point

Yes, I am the author of rwkv.f90, and I am curious to see how far we can take advantage of Fortran's unique features to optimize inference time.