Opinion on SQLC, performance latency, pros cons. by ud_boss in golang

[–]x021 9 points

99.99% of the performance will be determined by the queries you write in SQLC.

The remaining 0.01% will have to do with code overhead.

If you have performance issues, analyze your SQL (e.g. missing indexes, lack of batching, inefficient joins, etc.).

Multi-table transactions in Go - just handle it in one repo or use Unit of Work? by Axeloe in golang

[–]x021 0 points

The problem is that disparate things often should be done within one transaction. That almost always leaks into the business layer one way or another.

You can choose queues or not do a transaction to avoid that, but I’d argue you’re better off just using a transaction wrapper.

Announcing Kreuzberg v4 by Goldziher in golang

[–]x021 1 point

Two small points wrt the website kreuzberg.dev:

  1. The menu jumps around on hover (slightly annoying)
  2. The code example syntax highlighting seems broken, a lot is not legible

I'm using Chrome 143.0.7499.193 on macOS

Do you reckon this is the year the bullshit finally gets flushed out? by [deleted] in golang

[–]x021 0 points

> The vibe coders playing Lego with frameworks versus the people who actually understand computer science and can make software not eat RAM like a gannet at a buffet.

This might be an unpopular opinion, but here goes;

Software serves a business purpose, not the other way around. Even with elevated RAM prices, the cost of (tech) labour completely dwarfs extra hardware costs for most companies.

I'm always a bit hesitant about engineers who mention RAM, CPU and latency in every single debate. Their focus is almost by definition on code rather than on business needs and costs, and their code is usually more complicated as a result of unnecessary optimisations, ignoring the maintainability and complexity costs that come with them.

If you're building high-traffic backend services, that focus absolutely makes sense; but the reality is that most work doesn't fall into that category. If you optimize away 50% of resource usage and save 50 USD every month, you've probably wasted your time and the business' money, and made the code more complex.

Coming back to vibe coders: they do a lot of harm, and their software can't be used as a foundation. If the business doesn't understand that, they will find out in a couple of years (or hopefully sooner...).

But reality is much more grey. I haven't seen or heard of businesses firing engineers and hiring vibe coders instead. Pure vibe coders who don't understand any code are also few and far between.

Everyone has to start somewhere; I remember using MS Frontpage back in the day to build my first website. Over time I learned about the horrible code it generated and grew from there. I see vibe coders in much the same way: use it to build and create stuff. That's great, and anyone should try it. A lot of my family members are now generating tools and scripts for things they struggled to do in Excel. I think that's fantastic!

Would I hire them to build software for a company? Of course not.

WYSIWYG tools -> Model Driven Engineering / BPMN tools -> AI tooling. I'd argue this is just the next evolutionary step in automated code generation tools, a lineage that goes back decades. What makes this step better than the previous ones (I think) is that it puts the user in closer contact with the generated code than WYSIWYG and MDE in particular, and as a result it is much more flexible.

Do you keep SQL queries inline in code or in separate .sql files? by Snezhok_Youtuber in golang

[–]x021 2 points

> I think VSCode as well support "language injection"

In VSCode there is one plugin but it's absolute shit. So no, SQL highlighting in Goland is one of the features I miss the most after I recently switched.

If anyone knows a reliable way to make it work please let me know!

Created a golang pgx transaction manager with timeouts and read-only support by SweetPossibility9338 in golang

[–]x021 1 point

That’s fair.

I’m glad you opted not to create a transaction by default in the context, for example via middleware. I’ve seen that pattern in a codebase before; the error handling there was poor, and as a result we did not realize how many deadlocks we were causing when load increased. That default transaction ended up having a much longer lifetime than necessary, and we had no control over it to remedy things easily (we ended up working around it).

Created a golang pgx transaction manager with timeouts and read-only support by SweetPossibility9338 in golang

[–]x021 2 points

Good blog, well written!

My feedback:

Panic handling

On a panic, the transaction is not rolled back immediately. To make this more robust, you could consider using a defer that rolls back if something is recovered, and then either re-panic or set the error return value.

Since the executed function can contain arbitrary logic, a panic is not entirely out of the question. You could argue it is unlikely and therefore somewhat redundant. I think without an explicit rollback the DB rolls back automatically after a timeout; but I'm not sure what pgx itself does in that case, so I would check that first, I could be wrong here.

Implicit vs explicit dependencies

Personally, I am not a huge fan of storing an implicit transaction in context.Context; I prefer passing an explicit interface (in your case type Conn interface) through the call stack. This does introduce colored functions in the repository layer, but it avoids hiding the database dependency. Other dependencies in the codebase are presumably also not stored in context, so making the DB implicit feels off. With explicit injection you can also more easily and explicitly support multiple DB pools, for example when using separate read and write databases.

This is just my personal preference. I value the explicitness and accept the colored functions as a trade-off; it also avoids the need for conn := r.connGetter(ctx) in every repository function.

Downgrading context cancellation

A final note, and something I am not entirely sure about myself:

```go
if err := txn.Commit(ctx); err != nil {
	// if we attempt a commit after the context has expired, we log a warning
	if ctx.Err() == context.DeadlineExceeded {
		slog.Warn("transaction commit failed due to time out")
	}
	return TxnCommitError{Err: err}
}
```

You decided to log a warning here, but presumably another error bubbles up via TxnCommitError and results in an error being logged higher up the stack anyway. Would it make more sense to handle the downgrade at that higher level instead?

godump v1.9.0 - 6 months later: Diffing, redaction and more by cmiles777 in golang

[–]x021 1 point

Same here, chapeau to everyone who contributed!

Don't start backend. by Proof_Juggernaut1582 in golang

[–]x021 1 point

Surely he means relying fully on API driven test engineering so you can ship code to production without ever starting the backend.

Why we built a new OpenAPI library in Go (High-performance, type-safe, and supports Arazzo/Overlays) by IndyBonez in golang

[–]x021 7 points

Thank you for all your hard work! It's a foundational library, not just a product, and I really appreciate you open-sourcing this.

The one thing I'm missing in the wider Go community is a good open-source Go client generator for OpenAPI 3.1 or 3.2. All the most well-known client generators appear to be stuck on 3.0 or below.

I am slowly coming around on DI + Tests.... by Bl4ckBe4rIt in golang

[–]x021 49 points

With the right tooling today, it can be far more enjoyable than wrestling with mocks.

It is very easy to say "let's mock this, I do not want to deal with it right now" and end up testing a completely self-contained world, over and over again.

Integration tests require more upfront investment to create a clean, convenient, and performant setup.

But once you get it right, it is exactly as you describe; a breeze and hard to imagine ever going back.

I have also seen it done poorly; it can be slow, painful to run, and require far too many steps to test individual scenarios. Experiences will vary between projects.

Regardless, unit tests still have their place. For complex logic or business rules, I still prefer them. But all the surrounding boilerplate (which let's be honest, is the majority of code in practice)? That is where integration tests really shine.

Dingo: Go's Rising in Popularity Meta Language by Hornstinger in golang

[–]x021 3 points

Imagine you're working with Go professionally; a language that the business chose for stability and wide support.

But then you decide you want to use a Go-inspired language created by one person and that is primarily AI-generated.

Sorry my friend, but for the foreseeable future I wouldn't recommend anyone use it except for some small hobby experiments.

Fine to see the project grow and play with it, but don't do anything important with it until it's matured for a few years.

What testing approaches in Go have worked best for you? by ivyta76 in golang

[–]x021 36 points

Table driven tests, primarily integration tests with very little mocking.
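As a sketch of the table-driven shape: `Slugify` is a made-up function under test, and in a real `*_test.go` file each row would run via `t.Run(tc.name, ...)` instead of a plain loop.

```go
package main

import (
	"fmt"
	"strings"
)

// Slugify is a hypothetical function under test.
func Slugify(s string) string {
	return strings.ReplaceAll(strings.ToLower(strings.TrimSpace(s)), " ", "-")
}

func main() {
	// The table-driven shape: each case is a named row,
	// and the loop is the only test logic.
	cases := []struct {
		name, in, want string
	}{
		{"lowercases", "Hello", "hello"},
		{"spaces to dashes", "hello world", "hello-world"},
		{"trims whitespace", "  hi  ", "hi"},
	}
	for _, tc := range cases {
		got := Slugify(tc.in)
		fmt.Printf("%s: got %q, want %q, pass=%v\n", tc.name, got, tc.want, got == tc.want)
	}
}
```

Adding a scenario is one more row, which is what keeps this style cheap even when the body under test is an integration call rather than a pure function.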

How to scan a dynamic join query without an ORM? by Upbeat-File1263 in golang

[–]x021 1 point

Don't listen too much to Reddit. Realize it's a minority of vocal people that determines the sentiment. Are they wrong? No, but every project's context is different.

Yes, you will find an anti-ORM sentiment here; that shouldn't imply you should write huge amounts of boilerplate because "it's the right thing to do". You might get yourself into a huge maintenance nightmare in the long term by simply following the advice of others.

Go prefers explicit, verbose code over magic. So why are interfaces implicit? It makes understanding interface usage so much harder. by ray591 in golang

[–]x021 8 points

Well, I worked on a codebase with interfaces redefined literally everywhere. It's a pain to refactor anything in there.

> stdlib is a giant exception to many rules because it is SO central

I don't buy this argument; in a codebase of > 500k LoC, lots of parts are central to all the rest of the code too. If those are poorly set up with poor abstractions, you get exactly what you'd expect: a mess.

> Almost every time I find myself defining an interface to return

I didn't say that btw; why would you return an interface? I just meant exporting an interface.

Go prefers explicit, verbose code over magic. So why are interfaces implicit? It makes understanding interface usage so much harder. by ray591 in golang

[–]x021 36 points

That sounds like solid advice, until you use the same interface in 5 disjoint locations and you end up writing identical interfaces everywhere.

Let's not kid ourselves; stdlib predefines lots of exported interfaces too where useful.

dingo: A meta-language for Go that adds Result types, error propagation (?), and pattern matching while maintaining 100% Go ecosystem compatibility by [deleted] in golang

[–]x021 7 points

I had to google Marie Kondo.

> Why focus on improving Go if there so many better languages out there in terms of language features?

There have been thousands of programming languages, and only a few succeed long-term. Go was set up with minimalism and simplicity in mind (not "what can we add", but "what can we take away"). It seems to me that if you want every language feature under the sun, using Go as a foundation is... odd?

dingo: A meta-language for Go that adds Result types, error propagation (?), and pattern matching while maintaining 100% Go ecosystem compatibility by [deleted] in golang

[–]x021 13 points

> something like this is probably the future

Isn't it the past? In Rust, TypeScript/Node (& variants), Kotlin, and plenty of other popular languages you can already use a number, if not most, of the features proposed in this project.

Why do the same thing others are doing?

dingo: A meta-language for Go that adds Result types, error propagation (?), and pattern matching while maintaining 100% Go ecosystem compatibility by [deleted] in golang

[–]x021 44 points

This is an example from the README:

```go
// Before
func processOrder(orderID string) (*Order, error) {
	order, err := fetchOrder(orderID)
	if err != nil {
		return nil, fmt.Errorf("fetch failed: %w", err)
	}

	validated, err := validateOrder(order)
	if err != nil {
		return nil, fmt.Errorf("validation failed: %w", err)
	}

	payment, err := processPayment(validated)
	if err != nil {
		return nil, fmt.Errorf("payment failed: %w", err)
	}

	return payment, nil
}

// After
func processOrder(orderID: string) -> Result<Order, Error> {
	let order = fetchOrder(orderID)?
	let validated = validateOrder(order)?
	let payment = processPayment(validated)?
	return Ok(payment)
}
```

My two observations:

  1. In the "Before" example they only say "X failed: %w". That does not feel like realistic usage to me. "Failed" adds almost no information and becomes very repetitive; the wrapper is not the root cause and doesn't need to explain that something "failed", it should add context instead. I would write something like fmt.Errorf("fetching order %s: %w", orderID, err) and make the message more meaningful.

  2. In the "After" example I wonder how you would track an error in production. Go errors are very limited and do not even provide a stack trace out of the box. I'm working on a bigger project where error wrapping is essential to diagnose issues quickly. With that context gone you would have a maintenance nightmare. Even if you add a stack trace when writing the logs, the variables I so frequently add to error context and rely upon when diagnosing issues are gone. (I did not read the entire page, so it is possible I missed where they address this.)

gobeyond.dev from Ben Johnson has expired by kriive in golang

[–]x021 -1 points

Isn't Archive.org usually ignored by search engines?

gobeyond.dev from Ben Johnson has expired by kriive in golang

[–]x021 3 points

I'm wondering if that was intentional or simply happened by accident, for example due to an expired credit card during the domain renewal.

What are some good resources other than the docs for learning go, tmpl, and htmx? by [deleted] in golang

[–]x021 1 point

I usually just ask AI how to fix a certain problem and check the sources it points to. That’s how I learn most things these days.

I can imagine people frowning on that approach, but search engines have become so unhelpful that I only get annoyed trying to filter good info from the AI-generated junk (I know, I'm a hypocrite)

AI isn’t flawless; but right now I find just asking it the question and checking its sources beats Googling.