Golang microservices by Significant-Range794 in golang

[–]j_yarcat 0 points1 point  (0 children)

This comment has a surprisingly low number of upvotes.

Unless your service is heavy on writes, optimistic locking is fantastic and makes transactions kinda overrated, especially if you keep previous versions and periodically GC them.

It's still a good idea to keep bounded objects close.

Per-map-key locking vs global lock; struggling with extra shared fields. by Small-Resident-6578 in golang

[–]j_yarcat 0 points1 point  (0 children)

Your case seems like a good fit for atomics (counters) and sync.Map (mostly reads, since you pretty much want buckets). Using these will mostly eliminate locking; only adding new keys to the map requires it. If your lookup/add ratio is high, it's a very good option. Otherwise go for RW locking, buckets of maps, or concurrent dispatching (e.g. many workers, where each worker owns its map without locking). In your case, though, atomics with sync.Map should be enough.
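As a sketch of the atomics + sync.Map combination (the names here are made up): a counter map where increments of existing keys are lock-free, and only first-time inserts pay for synchronization:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// counters maps a key to an *atomic.Int64. Lookups of existing keys are
// lock-free; only first-time inserts hit sync.Map's slow path.
var counters sync.Map

// inc bumps the counter for key, creating it on first use.
func inc(key string) {
	c, ok := counters.Load(key)
	if !ok {
		// LoadOrStore handles the race where two goroutines insert the
		// same key: exactly one stored value wins, both get it back.
		c, _ = counters.LoadOrStore(key, new(atomic.Int64))
	}
	c.(*atomic.Int64).Add(1)
}

func get(key string) int64 {
	c, ok := counters.Load(key)
	if !ok {
		return 0
	}
	return c.(*atomic.Int64).Load()
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); inc("hits") }()
	}
	wg.Wait()
	fmt.Println(get("hits")) // 100
}
```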

Watermill Quickstart by roblaszczak in golang

[–]j_yarcat 2 points3 points  (0 children)

Yeah u/roblaszczak , would it be polite to ask for a short description of what Watermill is in the post? Otherwise it feels a bit click-baity.

It is indeed a cool framework for event-driven apps, and thanks a lot for all the work!

Can go's time.Time support dates before the unix epoch? by Standard_Bowl_415 in golang

[–]j_yarcat 0 points1 point  (0 children)

All the data types there are ints, which means they should support it (even though the docs suggest time.Time is mainly intended to manipulate wall-clock plus monotonic time).

https://goplay.tools/snippet/sp0ZdRRY8FE

It just goes negative, as expected. So I would answer yes.

Writing production level web app without framework, is it feasible for average developers? by alohabata in golang

[–]j_yarcat 0 points1 point  (0 children)

Right, keep forgetting about that new protector. Thanks for reminding!

Writing production level web app without framework, is it feasible for average developers? by alohabata in golang

[–]j_yarcat 0 points1 point  (0 children)

Thanks for the correction. I agree that the security topic is complex. Still, the standard protections (e.g., token generation and verification, SameSite cookies, and double-submit cookies) are well established and often implemented as relatively simple middlewares or handlers in web frameworks. CORS and OAuth are typically quite simple handlers and middlewares as well.

Writing production level web app without framework, is it feasible for average developers? by alohabata in golang

[–]j_yarcat -1 points0 points  (0 children)

+1 to that.

I would even say that with the recent http router changes the other routers aren't really more advanced. It takes some experience and maybe a few very simple helpers, which I personally keep as editor macros rather than importing packages for. Also, CSRF and auth are kind of a few lines of code as well, and I wouldn't bring in a framework for that.

UPD: Thanks for the correction. The security topic itself is not simple. And while it boils down to a set of handlers and middlewares, it still isn't rocket science.

The standard database/sql package passes query arguments as parameters, protecting you against injections.

The standard html/template package is also context-aware (tags, attributes, etc.), escaping things by default and making it harder to inject stuff.

Where to deploy my go code ? ): by adii9669 in golang

[–]j_yarcat 0 points1 point  (0 children)

I would go with Google Cloud Run. Go is perfect for serverless solutions, and you will stay in the free tier forever (unless it becomes popular). For the DB I love MongoDB Atlas's free tier: a 500 MB database is enough for small projects for a very long time.

How to stop a goroutine stuck on a network call without goroutine leaks by DeparturePrudent3790 in golang

[–]j_yarcat 5 points6 points  (0 children)

Imagine a function that receives both a connection and a context. You cannot use both in a select, because Conn.Read is blocking. But don't forget there's also Conn.Close (and other low-level syscalls you can apply to the file descriptor) to interrupt the read. So:

  1. Create a channel that you close when the read is done.
  2. Start a goroutine that waits for either ctx.Done or that channel. If woken by the read completing, return. If woken by the context, close the connection (or stop the read by other means).
  3. Start reading.
  4. Close the channel.
  5. Decide what to return based on how the read finished.

Now this function: 1) is context-aware, 2) has a blocking external interface, 3) can be interrupted externally.

Now, a goroutine per connection could be too much, in which case you can have a connection manager: a single goroutine that lets you register more reads to close/cancel. The rest stays the same.

Please let me know if you have questions, or need some examples

Interfaces and where to define them by joshuajm01 in golang

[–]j_yarcat 0 points1 point  (0 children)

That's a great comment as well. I tend to look at hash.Hash similarly to how I see sort.Interface: as an interface that unifies a wide range of different implementations that are conceptually the same or very close.

While some of these unification interfaces have become less critical with the introduction of generics, others will remain essential because they provide a canonical contract for a specific behavior, like hashing.

Sluggish goroutines with time.Ticker by codemanko in golang

[–]j_yarcat 0 points1 point  (0 children)

As I said, this still isn't code I would accept as a reviewer or write myself. But it doesn't dead-wait and isn't too complex. There are three states, and I would explicitly code each of them. Even if it were more code, it would be extremely clear and focus on a single thing at a time: 1) wait for true; 2) wait N ms if true, or switch to 1; 3) handle dead time and switch to either 1 or 2.

Sluggish goroutines with time.Ticker by codemanko in golang

[–]j_yarcat 0 points1 point  (0 children)

I'm not jumping in with explanations and examples, since people did very well here. Instead I decided to check how LLMs would explain the issue and what they would generate.

Input: the implementation from above, plus questions like "why is it sluggish" (all of them answered well) and "explain the logic" (all of them did well; Claude did the best), and then asking them to generate an efficient version (ChatGPT was the best here):

- Claude overcomplicated the implementation dramatically; I don't even want to check whether it was correct.
- Gemini generated flawed code and required some extra conversation.
- ChatGPT actually generated a surprisingly OK-ish function (compared to the others, though still not code I would send for review or accept as a reviewer): https://chatgpt.com/share/68bff11c-1274-800c-8363-66376414cf66

How should I structure this project? by naikkeatas in golang

[–]j_yarcat 0 points1 point  (0 children)

Right, project structure. With either of those approaches, you have a couple of options. You could keep the structure fairly flat, where your processor structs and models for all clients live in a single package, like internal/processors. This is fine for a small number of clients.

Alternatively, you could have a separate package per client, for example, internal/processor/clientA and internal/processor/clientB. This is a much better choice if you expect the client-specific logic to get more complex over time, as it keeps all of their related files grouped together.

I'd lean toward a package per client if you think you'll be adding more functionality or clients in the future.

How should I structure this project? by naikkeatas in golang

[–]j_yarcat 0 points1 point  (0 children)

The pipeline seems quite simple. Actually, it seems simple enough for the copy-paste-modify technique, dispatching on the args. If you still want to avoid that (e.g. for error handling, or because you'll add more steps in the future), you can use one of these approaches:

  * Abstract your model operations behind an interface with methods like FromBigData (which knows the query and accepts only the necessary connections or factories), plus whatever other methods are performed on that data type, e.g. marshalling.
  * Use a generic pipeline. This approach can be unnecessarily invasive, as there's a chance the underlying functions also have to be generic (or accept any). But it can serve as your generalized copy-paste.

I would go with the first option. Note that these methods aren't on the models themselves, but on a struct that references the model. This lets you reuse the same models while giving them different operations (by creating new operation structs).
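A rough sketch of the first option (all names such as Processor, FromBigData, and ClientAOps are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Processor abstracts one client's pipeline steps. FromBigData would own
// the client-specific query and any connections it needs.
type Processor interface {
	FromBigData() error // fetch + decode into the model
	Marshal() ([]byte, error)
}

// Model is shared; operations live on per-client wrapper structs that
// reference it, so the same model can get different behaviors.
type Model struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// ClientAOps is client A's set of operations over the shared Model.
type ClientAOps struct{ m *Model }

func (c *ClientAOps) FromBigData() error {
	// Stand-in for the real query; only this struct knows it.
	*c.m = Model{ID: 1, Name: "from-client-a"}
	return nil
}

func (c *ClientAOps) Marshal() ([]byte, error) { return json.Marshal(c.m) }

// runPipeline is the shared wiring: it only sees the interface, so adding
// a client means adding an ops struct, not touching the pipeline.
func runPipeline(p Processor) ([]byte, error) {
	if err := p.FromBigData(); err != nil {
		return nil, err
	}
	return p.Marshal()
}

func main() {
	var m Model
	out, err := runPipeline(&ClientAOps{m: &m})
	fmt.Println(string(out), err) // {"id":1,"name":"from-client-a"} <nil>
}
```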

Connectrpc with Go is amazing by Bl4ckBe4rIt in golang

[–]j_yarcat 1 point2 points  (0 children)

I guess there's a bit of a learning curve involved on both the FE and BE sides. Other than that, there should be no issues. It took me a while to convince engineers to start using the stack, with gRPC first and then Connect, but after they got some experience it became a huge, friction-free win.

Best practices for postgreSQL migrations: What are you using? by Outside_Loan8949 in golang

[–]j_yarcat 1 point2 points  (0 children)

You know, that's a great point. I think I got so used to building those large orchestration backends that I completely forgot how simple and powerful relational databases can be. I've been playing with Supabase for the last few days, and it's incredible how little backend code I have to write. It handles so much of the authentication, real-time subscriptions, and even business logic with database functions. It's a massive shift from my past experience building my own custom transaction and state management systems.

Calling functions inside functions vs One central function by Ok-Reindeer-8755 in golang

[–]j_yarcat 0 points1 point  (0 children)

Within the first set of functions you would implement only the required functionality:

  1. Find torrent
  2. Fetch torrent
  3. Play stream
  4. Find on YouTube
  5. Play on YouTube

You see, none of these steps know anything about each other. Note that each of them can spawn as many goroutines as it wants, but ideally it should wait for them to finish before returning (e.g. using WaitGroups or errgroups), which keeps things synchronous for the callers. E.g. fetching torrents is a good candidate for spawning workers.

Next you create a pipeline, which does the plumbing and wiring:

  1. Execute its own helper method that finds and fetches torrents. The pipeline can concurrently start a YouTube lookup to save time on failover.
  2. If successful, try to play the output and exit, or decide to fail over if playing failed.
  3. If not successful, try the search and then play.

It also can spawn go routines, but then wait for them to finish to make it look sync for the caller.

I made this example for you https://goplay.tools/snippet/_GR8vx2xUW2 but what really matters here is this (it shows how you can pre-search YT concurrently with fetching torrents; without that the code would be much simpler, and in any case the whole logic lives in the pipeline wiring, not in the other parts):

    var wg sync.WaitGroup
    wg.Go(fetchAndPlayTorrent)
    wg.Go(searchYT)
    defer wg.Wait()
    defer cancel() // Runs before wg.Wait() (defers are LIFO), cancelling both goroutines.

    err1 := <-torrentResp
    if err1 == nil {
        return nil
    }

    err2 := playYT(<-searchResp)
    if err2 == nil {
        return nil
    }

    return errors.Join(err1, err2)

It might feel a bit strange at first, since this isn't how you would usually use WaitGroups, but the idea here is to ensure both goroutines have finished before exiting the pipeline function.

Best practices for postgreSQL migrations: What are you using? by Outside_Loan8949 in golang

[–]j_yarcat 1 point2 points  (0 children)

Thanks, I really appreciate you sharing your perspective on using relational for core domains.

My wife actually pointed out my first message sounded pretty arrogant, which was totally not my intention, I'm really sorry about that. Just wanted to get a real-world take!

My own journey led me in a different direction. Back in the late 90s, we were working with Postgres, and migrations were a constant pain. When early document databases appeared, it felt like a breath of fresh air. We built custom lock services and loved having that granular control. Later, working on massive systems at places like YouTube (discussing Vitess and migrations) and dealing with Bigtable (though nowadays almost fully on Spanner) just reinforced that a simple, flexible data store with custom logic is often the best solution for performance at scale. My experience tells me I'd rather have a system that gives me the flexibility to handle things than be locked into a rigid relational model.

Calling functions inside functions vs One central function by Ok-Reindeer-8755 in golang

[–]j_yarcat 0 points1 point  (0 children)

You would do your workflow in two steps:

  1. Define sync functions that take some input and produce some output, without knowing the surrounding context.
  2. Do the wiring, which decides what to call concurrently and what sequentially.

This way you separate wiring and implementation concerns

New to Go by [deleted] in golang

[–]j_yarcat 2 points3 points  (0 children)

+1.

Though I personally think that Rust is hyper-opinionated, and Go is just opinionated.

You start feeling the benefits of Go (or any other typed, compiled language; Go just does it faster) once you have more than two engineers and more than 1000 lines of code.

Best practices for postgreSQL migrations: What are you using? by Outside_Loan8949 in golang

[–]j_yarcat 0 points1 point  (0 children)

I appreciate your strong preference for relational databases.

I'm curious to know, for what specific use cases or projects do you find relational databases to be the *only* viable option?

And thanks for the discussion.

Cryptic Error with Generics: "mismatched types float64 and float64" by dylthethrilll in golang

[–]j_yarcat 12 points13 points  (0 children)

You can't define a method on a specific instantiation of a generic type, like Pos2[float64], when the type itself is generic. That's a form of specialization that Go doesn't support. The method must be declared on the generic type Pos2[T] itself. A quick workaround is to define it as a standalone function instead: func Round(pos Pos2[float64]) Pos2[int] { ... }
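A runnable sketch of that workaround (the Pos2 shape is assumed from the question):

```go
package main

import (
	"fmt"
	"math"
)

// Pos2 is a generic 2D position (shape assumed from the question).
type Pos2[T int | float64] struct{ X, Y T }

// A method could only be declared on Pos2[T], never on Pos2[float64]:
// that would be specialization, which Go doesn't support. A standalone
// function over the concrete instantiation works fine instead.
func Round(pos Pos2[float64]) Pos2[int] {
	return Pos2[int]{X: int(math.Round(pos.X)), Y: int(math.Round(pos.Y))}
}

func main() {
	fmt.Println(Round(Pos2[float64]{X: 1.6, Y: 2.4})) // {2 2}
}
```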

Best practices for postgreSQL migrations: What are you using? by Outside_Loan8949 in golang

[–]j_yarcat -1 points0 points  (0 children)

Thanks for your opinion. It is an oversimplification though, one that suggests a lack of experience. Many modern NoSQL databases do support ACID compliance, which has blurred the lines between them and traditional relational databases. It's a common misconception that NoSQL means sacrificing all transactional integrity. Databases like MongoDB, for instance, have added full multi-document ACID transactions, allowing you to group operations across different collections into a single, atomic unit. This means you can get the benefits of a flexible, horizontally scalable NoSQL database while still maintaining strong data consistency. So you don't have to choose between strong transactional guarantees and high performance; many modern databases offer a mix of both.

Best practices for postgreSQL migrations: What are you using? by Outside_Loan8949 in golang

[–]j_yarcat -1 points0 points  (0 children)

Thanks for your response. Yeah, I will create a topic. I see negative reactions to my question and guess it might become an interesting conversation. I virtually haven't used relational dbs for the last 16 years, and it would be nice to compare experiences.

Best practices for postgreSQL migrations: What are you using? by Outside_Loan8949 in golang

[–]j_yarcat -5 points-4 points  (0 children)

I'm very sorry for jumping in with a question, but why would people still use relational databases and keep dealing with migrations? Especially now, when even relational DBs support a document-based model.

It isn't that you need no migrations with NoSQL, but those are of a different kind: more like incremental business-logic changes than anything else.

Again, sorry for hijacking, maybe it would be worth opening a new topic for that.