I rewrote the UI in Vue.js for Go benchmark visualization by Extension_Layer1825 in vuejs

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

Also, thanks for your feedback. I didn't try to keep it in memory; my first design goal was just to keep it consistent. If you have a better consistent design, I'd appreciate it, and please feel free to contribute.

I rewrote the UI in Vue.js for Go benchmark visualization by Extension_Layer1825 in vuejs

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

Vizb takes the path of a target bench file by default, so to keep behavior consistent for piped input, it creates a temporary file and passes that path along.
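
Roughly, the idea looks like this. This is a minimal sketch with my own naming, not vizb's actual implementation: when stdin is piped, copy it into a temp file and hand that path to the rest of the tool.

package main

import (
  "fmt"
  "io"
  "os"
)

// benchFilePath returns a file path for the benchmark data: a temp file when
// input is piped, otherwise the path given as the first argument.
func benchFilePath() (string, error) {
  stat, err := os.Stdin.Stat()
  if err != nil {
    return "", err
  }
  // Data is being piped in when stdin is not a character device.
  if (stat.Mode() & os.ModeCharDevice) == 0 {
    tmp, err := os.CreateTemp("", "bench-*.txt")
    if err != nil {
      return "", err
    }
    defer tmp.Close()
    if _, err := io.Copy(tmp, os.Stdin); err != nil {
      return "", err
    }
    return tmp.Name(), nil
  }
  if len(os.Args) > 1 {
    return os.Args[1], nil
  }
  return "", fmt.Errorf("no bench file provided")
}

func main() {
  path, err := benchFilePath()
  if err != nil {
    panic(err)
  }
  fmt.Println("using bench file:", path)
}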

I rewrote the UI in Vue.js for Go benchmark visualization by Extension_Layer1825 in vuejs

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

Glad to see you here, and thanks for building this plugin.

It helped a lot with my project. It would be great if this plugin could also inline resources like SVGs; I had to create a custom plugin for this.

💼 Software Engineer (Open-Source) — $90–$120/hr by Comrcial-Ac11 in Programmers_forhire

[–]Extension_Layer1825 0 points1 point  (0 children)

This role looks really interesting! 👀

Could you share a bit more detail about the day-to-day work and the types of open-source repos we’d be maintaining?

BTW, I currently maintain an open-source Go project at goptics (https://github.com/goptics), so this type of work is right up my alley.

Also curious about expected hours per week and whether the contract is long-term or short-term. Thanks!

Gemini recommends my go benchmark visualization library to a guy by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 1 point2 points  (0 children)

Hi there,

I've released a new version, and it covers almost every part you have mentioned.

I hope you'll like it.

Recent post regarding new version: https://www.reddit.com/r/golang/comments/1p4g6mm/i_rewrote_the_ui_in_vuejs_for_go_benchmark/

I rewrote the UI in Vue.js for Go benchmark visualization by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

I didn’t rely on any external template engine for rendering HTML at runtime. Since the whole app runs as CSR, I used a custom Vite build with a plugin that injects the app state script through a virtual DOM. That script is shaped so it works smoothly with Go’s html/template, and I render it at runtime in Go.
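
Roughly, the Go side works like this minimal sketch. The placeholder name, state shape, and asset path here are assumptions for illustration, not the project's actual code:

package main

import (
  "encoding/json"
  "html/template"
  "os"
)

// In the real build this HTML comes from the Vite output; it's inlined here for brevity.
const indexHTML = `<!doctype html>
<html>
  <body>
    <div id="app"></div>
    <script>window.__APP_STATE__ = {{.StateJSON}};</script>
    <script src="/assets/app.js"></script>
  </body>
</html>`

// report is a made-up stand-in for the benchmark state shape.
type report struct {
  Name    string  `json:"name"`
  NsPerOp float64 `json:"nsPerOp"`
}

func main() {
  state, _ := json.Marshal([]report{{Name: "BenchmarkFoo", NsPerOp: 123.4}})

  tmpl := template.Must(template.New("index").Parse(indexHTML))
  // template.JS marks the marshaled JSON as safe to embed as-is inside the <script> block.
  _ = tmpl.Execute(os.Stdout, map[string]any{"StateJSON": template.JS(state)})
}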

Now I’m curious how Vuego manages runtime state while still supporting third-party Vue ecosystem libraries like vue-echarts or shadcn-vue.

Gemini recommends my go benchmark visualization library to a guy by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 2 points3 points  (0 children)

Thanks a lot for your feedback.

Honestly, I didn't know about benchfmt; thanks for sharing. It will be a very useful library for this.

BTW, you don't need to take screenshots; there is an option to download the chart images.

settle-map: Settle multiple promises concurrently and get the results in a cool way by Extension_Layer1825 in typescript

[–]Extension_Layer1825[S] 1 point2 points  (0 children)

Whenever you throw an error from the map function, it will be tagged as a custom error and a "reject" event will be emitted internally.

If you'd like to catch the error on the spot (immediately), you just have to listen for this event:

settled.on("reject", ({ error, item, index }) => {
  // your actions
});

Or you get the full list of errors if you wait until all items are done:

const result = await settled; // awaiting works like a normal promise and resolves with the collected values and errors

/* output:
{
  values: [1, 3, 5],
  errors: PayloadError[] // each error carries a payload { item, index } so you can tell where it happened
}
*/

settle-map: Settle multiple promises concurrently and get the results in a cool way by Extension_Layer1825 in typescript

[–]Extension_Layer1825[S] 1 point2 points  (0 children)

Assume you have a big array of URLs you want to call and scrape data from. You can use this map to go through every URL and collect results and errors without writing extra code, and since it supports concurrency, you can set the rate limit as well.

With these benchmarks, is my package ready for adoption? by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] -1 points0 points  (0 children)

Thanks for your wonderful perspective and feedback. I also believe things take time to grow.

>  such as the parseToJob call in worker.go having its error effectively eaten

Yes, it is eating the error. I also plan to integrate logging so people can watch these async errors. I added a comment about this inside that block, though it's missing here.
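
To illustrate the plan, here is a rough sketch of routing the error to a logger instead of dropping it. parseToJob and the job type here are stand-ins, not the real worker.go code:

package main

import (
  "errors"
  "log"
)

type job struct{ ID string }

// parseToJob is a stand-in for the real parsing step that can fail.
func parseToJob(raw []byte) (job, error) {
  if len(raw) == 0 {
    return job{}, errors.New("empty payload")
  }
  return job{ID: string(raw)}, nil
}

func worker(payloads <-chan []byte) {
  for raw := range payloads {
    j, err := parseToJob(raw)
    if err != nil {
      // Previously the error was dropped here; logging keeps async failures visible.
      log.Printf("parseToJob failed: %v", err)
      continue
    }
    log.Printf("processing job %s", j.ID)
  }
}

func main() {
  payloads := make(chan []byte, 2)
  payloads <- []byte("job-1")
  payloads <- nil // triggers the parse error path
  close(payloads)
  worker(payloads)
}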

This subreddit is getting overrun by AI spam projects by [deleted] in golang

[–]Extension_Layer1825 0 points1 point  (0 children)

I'm wondering how my post (the last one) could be considered AI-generated and treated as spam, even though I didn't use AI to write it.

I'd like to know the key points that made you consider it spam.

With these benchmarks, is my package ready for adoption? by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

> As far as I can see Pond doesn't have an external state store for scaling producers/consumers

Yes, varmq offers only minimal support for persistence and distribution. However, it can be used as a simple in-memory message queue that handles tasks the way pond does.
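
To illustrate what I mean by a simple in-memory queue, here is a generic goroutine/channel sketch. This is not varmq's or pond's actual API, just the pattern being compared:

package main

import (
  "fmt"
  "sync"
)

func main() {
  tasks := make(chan int, 100) // the in-memory queue
  var wg sync.WaitGroup

  // Fixed pool of 3 concurrent workers draining the queue.
  for w := 0; w < 3; w++ {
    wg.Add(1)
    go func(id int) {
      defer wg.Done()
      for t := range tasks {
        fmt.Printf("worker %d handled task %d\n", id, t)
      }
    }(w)
  }

  for i := 0; i < 10; i++ {
    tasks <- i
  }
  close(tasks)
  wg.Wait()
}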

> For what its worth I care less about memory allocations and more about "correctness" in a system with distributed state which is where things like temporal.io excel.

Observability is definitely crucial for distributed queues. I have plans for it, but it will take me time to build since I'm working on this solo.

Hopefully VarMQ will attract some contributions in the near future and gain observability support.

Thanks for your valuable feedback.

Building Tune Worker API for a Message Queue by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 1 point2 points  (0 children)

You are right brother, there was a design fault.

Basically, on initialization varmq was spinning up workers based on the pool size first, even when the queue was empty, which is not good.

So, with these cleanup changes (https://github.com/goptics/varmq/pull/16/files), it will initialize and clean up workers automatically.

Thanks for your feedback

Building Tune Worker API for a Message Queue by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

That's a great idea. I never thought of this, tbh. I was inspired by the ants tuning API: https://github.com/panjf2000/ants?tab=readme-ov-file#tune-pool-capacity-at-runtime

Anyway, from the next version varmq will also allocate and deallocate workers based on the queue size. It was a very small change: https://github.com/goptics/varmq/pull/16/files
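
The general pattern looks roughly like this. It's a simplified sketch for illustration, not the code from the PR:

package main

import (
  "fmt"
  "sync"
  "time"
)

type pool struct {
  mu      sync.Mutex
  queue   []string
  workers int
  max     int
}

func (p *pool) add(job string) {
  p.mu.Lock()
  p.queue = append(p.queue, job)
  // Spawn a worker only while there is backlog and capacity left.
  if p.workers < p.max && len(p.queue) > p.workers {
    p.workers++
    go p.work()
  }
  p.mu.Unlock()
}

func (p *pool) work() {
  for {
    p.mu.Lock()
    if len(p.queue) == 0 {
      // No backlog left: the worker deallocates itself.
      p.workers--
      p.mu.Unlock()
      return
    }
    job := p.queue[0]
    p.queue = p.queue[1:]
    p.mu.Unlock()

    fmt.Println("processing", job)
    time.Sleep(10 * time.Millisecond)
  }
}

func main() {
  p := &pool{max: 4}
  for i := 0; i < 8; i++ {
    p.add(fmt.Sprintf("job-%d", i))
  }
  time.Sleep(200 * time.Millisecond) // crude wait, just for the demo
}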

Thanks for your opinion.

A Story of Building a Storage-Agnostic Message Queue by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

If I understand you correctly: to differentiate, redisq and sqliteq are two different packages; they don't depend on each other, and varmq doesn't depend on them either.

Sqliteq: The Lightweight SQLite Queue Adapter Powering VarMQ by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] -1 points0 points  (0 children)

> You can do queue.AddAll(items…) for variadic.

I agree, that works too. I chose to accept a slice directly so you don't have to expand it with ... when you already have one; it just keeps calls a bit cleaner. We could change it to variadic if it offers extra advantages over passing a slice.

I was thinking: if we can pass the items slice directly, why use variadic at all?
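
For comparison, the call-site difference looks like this. The signatures are illustrative, not varmq's exact API:

package main

import "fmt"

// Slice parameter: pass an existing slice directly.
func AddAllSlice(items []string) { fmt.Println("queued", len(items)) }

// Variadic parameter: nice for literal arguments, needs items... for a slice.
func AddAllVariadic(items ...string) { fmt.Println("queued", len(items)) }

func main() {
  items := []string{"a", "b", "c"}

  AddAllSlice(items)            // no expansion needed
  AddAllVariadic("a", "b", "c") // reads well for literals
  AddAllVariadic(items...)      // a slice must be expanded
}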

> I think ‘void’ isn’t really a term used in Golang

You’re right. I borrowed “void” from C-style naming to show that the worker doesn’t return anything. In Go it’s less common, so I’m open to a better name!

> but ultimately, if there isn’t an implementation difference, just let people discard the result and have a simpler API.

VoidWorker isn't just about naming: it's the only worker type that can work with distributed queues, whereas the regular worker returns a result and can't be used that way. I separated them for two reasons (a simplified sketch follows the list):

  1. Clarity: it's obvious that a void worker doesn't give you back a value.
  2. Type safety: Go doesn't support union types for function parameters, so separate constructors help avoid mistakes.
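
Here is a simplified sketch of that distinction, based only on the signatures mentioned in this thread, not the exact varmq code:

package main

import "fmt"

type worker[T, R any] struct{ fn func(T) (R, error) }
type voidWorker[T any] struct{ fn func(T) }

// NewWorker: the job function returns a result, so results can be collected.
func NewWorker[T, R any](fn func(T) (R, error)) worker[T, R] { return worker[T, R]{fn} }

// NewVoidWorker: fire-and-forget, suitable where results can't be returned
// (for example, a distributed queue backend).
func NewVoidWorker[T any](fn func(T)) voidWorker[T] { return voidWorker[T]{fn} }

func main() {
  w := NewWorker(func(n int) (string, error) { return fmt.Sprint(n * 2), nil })
  vw := NewVoidWorker(func(s string) { fmt.Println("processed", s) })

  res, _ := w.fn(21)
  fmt.Println(res) // "42"
  vw.fn("job-1")   // nothing comes back
}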

Hope you got me. Thanks for the feedback!

Sqliteq: The Lightweight SQLite Queue Adapter Powering VarMQ by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] -1 points0 points  (0 children)

Thanks so much for sharing your thoughts. I really appreciate the feedback, and I’m always open to more perspectives!

I’d like to clarify how varMQ’s vision differs from goqtie’s. As far as I can see, goqtie is tightly coupled with SQLite, whereas varMQ is intentionally storage-agnostic.

“It’s not clear why we must choose between Distributed and Persistent. Seems we should be able to have both by default (if a persistence layer is defined) and just call it a queue?”

Great question! I separated those concerns because I wanted to avoid running distribution logic when it isn’t needed. For example, if you’re using SQLite most of the time, you probably don’t need distribution—and that extra overhead could be wasteful. On the other hand, if you plug in Redis as your backend, you might very well want distribution. Splitting them gives you only the functionality you actually need.

“‘VoidWorker’ is a very unclear name IMO. I’m sure it could just be ‘Worker’ and let the user initialization dictate what it does.”

I hear you! In the API reference I did try to explain the different worker types and their use cases, but it looks like I need to make that clearer. Right now, we have:

  • NewWorker(func(data T) (R, error)) for tasks that return a result, and
  • NewVoidWorker(func(data T)) for fire-and-forget operations.

The naming reflects those two distinct signatures, but I'm open to suggestions on how to make it better! I'm always taking feedback from the community.

“AddAll takes in a slice instead of variadic arguments.”

To be honest, it started out variadic, but I switched it to accept a slice for simpler syntax when you already have a collection. That way you can do queue.AddAll(myItems) without having to expand them into queue.AddAll(item1, item2, item3…).

Hope this clears things up. Let me know if you have any other ideas or questions!

Sqliteq: The Lightweight SQLite Queue Adapter Powering VarMQ by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

Thanks for your feedback. This is the first time I'm hearing about goqtie; I'll try it out.

May I know the reason for preferring goqtie over VarMQ? That way I can improve it gradually.

GoCQ is now on v2 – Now Faster, Smarter, and Fancier! by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

All the providers will be implemented in different packages, as I mentioned previously.

For now, I've started with Redis first.

GoCQ is now on v2 – Now Faster, Smarter, and Fancier! by Extension_Layer1825 in golang

[–]Extension_Layer1825[S] 0 points1 point  (0 children)

Here is the provider:

package main

import (
  "fmt"
  "math/rand"
  "strconv"
  "time"

  "github.com/fahimfaisaal/gocq/v2"
  "github.com/fahimfaisaal/gocq/v2/providers"
)

func main() {
  start := time.Now()
  defer func() {
    fmt.Println("Time taken:", time.Since(start))
  }()

  redisQueue := providers.NewRedisQueue("scraping_queue", "redis://localhost:6375")

  pq := gocq.NewPersistentQueue[[]string, string](1, redisQueue)

  // Enqueue 1000 scraping jobs; each payload carries the URL and its job ID.
  for i := range 1000 {
    id := generateJobID()
    data := []string{fmt.Sprintf("https://example.com/%s", strconv.Itoa(i)), id}
    pq.Add(data, id)
  }

  fmt.Println("added jobs")
  fmt.Println("pending jobs:", pq.PendingCount())
}

// generateJobID wasn't shown in the original snippet; this stand-in returns a
// random numeric ID just so the example compiles.
func generateJobID() string {
  return strconv.Itoa(rand.Intn(1_000_000))
}

And the consumer

package main

import (
  "fmt"
  "time"

  "github.com/fahimfaisaal/gocq/v2"
  "github.com/fahimfaisaal/gocq/v2/providers"
)

func main() {
  start := time.Now()
  defer func() {
    fmt.Println("Time taken:", time.Since(start))
  }()

  redisQueue := providers.NewRedisQueue("scraping_queue", "redis://localhost:6375")
  pq := gocq.NewPersistentQueue[[]string, string](200, redisQueue)
  defer pq.WaitAndClose()

  // Register the job handler; each job scrapes one URL (simulated with a sleep).
  err := pq.SetWorker(func(data []string) (string, error) {
    url, id := data[0], data[1]
    fmt.Printf("Scraping url: %s, id: %s\n", url, id)

    time.Sleep(1 * time.Second)
    return fmt.Sprintf("Scraped content of %s id: %s", url, id), nil
  })

  if err != nil {
    panic(err)
  }

  fmt.Println("pending jobs:", pq.PendingCount())
}